Advances in global border control technologies offer innovative ways to address issues related to migration, asylum seekers and the introduction of illegal goods into countries.
But while governments and national security agencies can benefit, advanced surveillance technology creates risks of personal data misuse and human rights violations.
Technology at the border
One of the first actions of US President Joe Biden was to introduce a bill that prioritizes “smart border controls,” as part of a commitment to “restore humanity and American values to our immigration system.”
These controls will supplement existing resources at the border with Mexico. They will include technology and infrastructure developed to improve the screening of incoming asylum seekers and prevent the arrival of narcotics.
According to Biden, “cameras, sensors, large-scale x-ray machines and stationary towers” will all be used. This likely involves the use of infrared cameras, motion sensors, facial recognition, biometric data, aerial drones, and radar.
Under the Trump administration, Immigration and Customs Enforcement (ICE) partnered with controversial data analytics firm Palantir to link police and citizen information to other databases, with the aim of arresting undocumented people.
Similarly, from 2016 to 2019, Hungary, Latvia and Greece piloted an automated lie detection test funded by the European Union’s research and innovation funding program, Horizon 2020.
The iBorderCtrl test analyzed the micro-facial gestures of travelers crossing international borders at three undisclosed airports, with the aim of determining whether travelers were lying about the purpose of their trip.
Avatars questioned travelers about themselves and their journey while webcams analyzed facial and eye movements.
The European Border and Coast Guard Agency, Frontex, has also been investing in border control technology for several years. Since last year, Frontex has been using unmanned drones to detect asylum seekers trying to enter various European states.
Australia has been slower to implement enhanced surveillance at its maritime borders. In 2018, the federal government announced it would spend A$7 billion on six long-range unmanned drones to monitor Australian waters. These are not expected to be operational until at least 2023.
However, automated border control systems have been in use since 2007. SmartGates at many international airports use facial recognition to verify the identity of travelers against data stored in biometric passports.
Last year, the Department of Human Services implemented an enterprise biometric identification service. The system was reportedly deployed to meet an expected increase in demand for visa and citizenship applications.
It combines authentication technology with biometrics to match the faces and fingerprints of people seeking to travel to Australia.
Misuse of data
Governments can promise, as the Biden administration does, that the technology will only serve “legitimate agency purposes.” But the misuse of data by governments is well documented.
Between 2014 and 2017 in the United States, ICE used facial recognition to mine state driver’s license databases to detect “illegal immigrants”.
Refugees from various countries, including Kenya and Ethiopia, have had their biometrics collected for years.
In 2017, Bangladesh’s Minister of Industry Amir Hossain Amu said the government was collecting biometric data from the country’s Rohingya to “keep track” of them and send them “home”.
Misuse of data can also occur when the underlying “science” is questionable. For example, the emotion recognition algorithms used in unproven lie detection tests are highly problematic.
The way people communicate varies greatly across cultures and situations. A person’s ability to answer a question at a border can be affected by trauma, their personality, the way the question is phrased, or the perceived intentions of the interviewer.
Technologies like iBorderCtrl undermine the rights of migrants, asylum seekers and all international travelers. They could be used to deny entry or detain travelers based on their race or ethnicity.
Racial profiling at borders is not uncommon. It came to light when New South Wales MP Mehreen Faruqi experienced it at a US airport in 2016.
The Pakistani-born Greens member told the Guardian she was detained at an airport for over an hour after immigration staff took her fingerprints, asked her where she was “originally” from, and asked how she had obtained an Australian passport.
Facial recognition technology has already been shown to be biased against people of color. Deploying it at airports and maritime borders – where human rights have historically been undermined on the basis of race – could be disastrous.
The good news is that many people are now speaking out about the impact of border control technologies on migrants, refugees and other travelers.
In February, the European Court of Justice heard a case brought by German digital rights activist and politician Patrick Breyer.
Breyer is requesting the publication of materials on the ethics review, legal eligibility, marketing and test results of iBorderCtrl. He fears the EU is trying to hide a “highly controversial scientific project” funded by taxpayers’ money.
In Australia, Digital Rights Watch is the leading organization examining surveillance practices.
Of particular concern is the Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018. It gives the Australian Border Force extensive powers to search devices carried by people traveling overseas.
Last year, the government was advised to amend the legislation so that agencies cannot authorize the detention of travelers whose devices are being searched by the Border Force.
However, without an Australian Bill of Rights, which would prevent laws that infringe on privacy rights, the potential for data misuse will persist.
This article by Niamh Kinchin, Senior Lecturer, Faculty of Law, University of Wollongong, is republished from The Conversation under a Creative Commons license. Read the original article.
Published March 3, 2021 – 06:47 UTC