Border control systems are gearing up to use facial analysis tech

Advances in global border control technologies offer innovative ways to address issues related to migration, asylum seekers and the smuggling of illegal goods into countries.

But while governments and national security can benefit, advanced surveillance technology creates risks for the misuse of personal data and violation of human rights.

Technology at the border

One of the first actions of US President Joe Biden was to introduce a bill that prioritizes “smart border controls” as part of a commitment to “restore humanity and American values to our immigration system”.

These controls will supplement existing resources at the border with Mexico. They will include technology and infrastructure developed to improve the screening of incoming asylum seekers and prevent the arrival of narcotics.

According to Biden, “cameras, sensors, large-scale x-ray machines and stationary towers” will all be used. This likely involves the use of infrared cameras, motion sensors, facial recognition, biometric data, aerial drones, and radar.

Under the Trump administration, Immigration and Customs Enforcement (ICE) partnered with controversial data analytics firm Palantir to link police and citizen information with other databases, with the aim of arresting undocumented people.

Similarly, from 2016 to 2019, Hungary, Latvia and Greece piloted an automated lie detection test funded by the European Union’s research and innovation funding program, Horizon 2020.

The iBorderCtrl test analyzed the facial micro-gestures of travelers crossing international borders at three undisclosed airports, with the aim of determining whether they were lying about the purpose of their trip.

Avatars questioned travelers about themselves and their journey while webcams analyzed facial and eye movements.
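
iBorderCtrl’s actual classifier has never been published, so the following is a toy sketch only, in plain Python/NumPy, of what a micro-gesture pipeline of this kind broadly looks like: track facial landmarks frame by frame and flag unusually large, short-lived displacements. The synthetic landmark data, the function names and the threshold are all hypothetical stand-ins, not the project’s real method.

```python
import numpy as np

# Toy illustration only -- NOT iBorderCtrl's published algorithm.
# A real system would get per-frame facial landmarks from a face
# tracker (e.g. a 68-point model); here we synthesize a random walk.

rng = np.random.default_rng(0)
N_FRAMES, N_LANDMARKS = 120, 68  # roughly 4 seconds of 30 fps video
landmarks = rng.normal(0, 0.01, size=(N_FRAMES, N_LANDMARKS, 2)).cumsum(axis=0)

def micro_gesture_frames(points, threshold=0.013):
    """Flag frames whose mean landmark displacement from the previous
    frame exceeds an (entirely arbitrary) threshold -- a crude stand-in
    for 'micro-gesture' detection."""
    step = np.diff(points, axis=0)                     # (N-1, 68, 2)
    displacement = np.linalg.norm(step, axis=2).mean(axis=1)
    return np.flatnonzero(displacement > threshold) + 1

flagged = micro_gesture_frames(landmarks)
print(f"{len(flagged)} of {N_FRAMES} frames flagged as candidate micro-gestures")
```

Even this toy version exposes the core weakness critics point to: the line between an “honest” and a “deceptive” face reduces to an arbitrary numeric threshold, a point this article returns to below.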

The European border and coast guard agency Frontex has also been investing in border control technology for several years. Since last year, Frontex has been using unmanned drones to detect asylum seekers trying to enter various European states.

Australia has been slower to implement enhanced surveillance at its maritime borders. In 2018, the federal government announced it would spend A$7 billion on six long-range unmanned drones to monitor Australian waters. These are not expected to be operational until at least 2023.

However, automated border control systems have been in use in Australia since 2007. SmartGates at many international airports use facial recognition to verify travelers’ identities against the data stored in their biometric passports.
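
SmartGate’s internal matching pipeline is not public, but 1:1 face verification of this kind is typically built on face embeddings: the live camera image and the passport-chip photo are each mapped to a vector, and the traveler passes if the vectors are close enough. A minimal sketch of that comparison step follows; embed() is a placeholder for a real embedding network, and the threshold is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def embed(image) -> np.ndarray:
    """Placeholder for a face-embedding network that maps an image to a
    fixed-length vector (a trained model, not random numbers, in practice)."""
    return rng.normal(size=512)

def verify(live_image, passport_image, threshold=0.6):
    """1:1 verification: accept if the cosine similarity between the live
    capture and the passport photo clears a tuned threshold."""
    a, b = embed(live_image), embed(passport_image)
    similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return similarity >= threshold, similarity

match, score = verify(live_image=None, passport_image=None)
print(f"match={match}, cosine similarity={score:.3f}")
```

The design choice that matters here is the threshold: set it too low and impostors pass, set it too high and legitimate travelers are rejected and pushed to manual processing.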

Last year, the Department of Human Services implemented enterprise biometric identification services. The system was reportedly deployed to meet an expected increase in demand for visa and citizenship applications.

It combines authentication technology with biometrics to match the faces and fingerprints of people who wish to travel to Australia.
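
How face and fingerprint results are combined has not been disclosed, but a textbook approach in multimodal biometrics is score-level fusion: normalize each matcher’s score, take a weighted sum, and apply a single decision threshold. The sketch below illustrates that generic technique; the weights and threshold are invented for the example.

```python
# Generic score-level fusion for a face + fingerprint match -- a
# standard multimodal-biometrics technique, not the department's
# published design. All weights and thresholds are hypothetical.

def fuse_scores(face_score: float, finger_score: float,
                w_face: float = 0.5, w_finger: float = 0.5,
                threshold: float = 0.7) -> bool:
    """Weighted sum of two normalized match scores in [0, 1]."""
    fused = w_face * face_score + w_finger * finger_score
    return fused >= threshold

# A strong fingerprint match can compensate for a weaker face match:
print(fuse_scores(face_score=0.55, finger_score=0.92))  # fused 0.735 -> True
```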

Misuse of data

Governments can promise, as the Biden administration does, that the technology will only serve “legitimate agency purposes.” But the misuse of data by governments is well documented.

Between 2014 and 2017 in the United States, ICE used facial recognition to mine state driver’s license databases to detect “illegal immigrants”.

Refugees from various countries, including Kenya and Ethiopia, have had their biometrics collected for years.

In 2017, Bangladesh’s Minister of Industry Amir Hossain Amu said the government was collecting biometric data from the country’s Rohingya to “keep track” of them and send them “home”.

Data misuse can also occur when the underlying “science” is questionable. The emotion recognition algorithms used in unproven lie detection tests, for example, are deeply problematic.

The way people communicate varies greatly across cultures and situations. A person’s ability to answer a question at a border can be affected by trauma, their personality, how the question is phrased, or the perceived intentions of the interviewer.

The way different people express their emotions is highly nuanced and contextual; it’s not something AI can be trusted to accurately assess.
Credit: Buster Benson / Flickr