Artificial Intelligence: The EU is investing in “high-risk” technologies to control migration flows

In the name of controlling its borders, the European Union is investing in artificial intelligence. The latest example: Itflows, a piece of software for predicting migration movements. The investigative outlet Disclose has revealed internal warnings about its potential abuses. Experts on the subject, interviewed by InfoMigrants, are concerned about the growing role given to these technologies, considered "high-risk" for human rights.

Five million euros of European public funds were used to develop Itflows, an artificial intelligence (AI) tool intended to anticipate migration movements. Designed by the private company Terracom and research institutes, the tool is still in the testing phase and is scheduled for deployment from August 2023.

But the project is considered "disturbing" by several experts, including Petra Molnar, associate director of the Refugee Law Laboratory at York University (Canada), interviewed by InfoMigrants. This lawyer and researcher, a member of the Migration Tech Observatory, which closely follows this type of project, considers that Itflows "normalizes the use of high-risk technologies such as predictive analysis software to anticipate the movements of people crossing borders."

>> To (re)read: New technologies in the service of identifying migrants who died at sea

In fact, while the tool is still in the testing phase, an investigation published by Disclose has already revealed internal warnings about its potential abuses. There is a "high risk that the information will end up in the hands of countries or governments that will use it to lay more barbed wire along the border," said Alexander Kjerom, an analyst with the Danish Refugee Council and a member of the project's supervisory board, interviewed by Disclose's journalists.

“Stigmatization, discrimination and harassment” of migrants

Members of the Itflows ethics committee deplore that their warnings have not been heeded. In internal documents obtained by the investigative journalists, this panel considers that information provided by Itflows could, if used "inappropriately," serve to "stigmatize, discriminate against, harass or intimidate people, particularly those who are vulnerable, such as migrants, refugees and asylum seekers."

In one of its reports, the ethics committee detailed these potential abuses. Among them: "Member states could use the data provided to create ghettos for migrants." The committee also notes "risks of physical identification of migrants," as well as "discrimination on the basis of race, sex, religion, sexual orientation, disability or age."

>> To (re)read: For migrants, biometrics all along the way

The use of AI "exposes migrants to violations of their rights, including the right to privacy, the right not to be discriminated against, and the right to seek asylum," summarizes Margarida Silva, a researcher at the Centre for Research on Multinational Corporations (SOMO), contacted by InfoMigrants. "By investing ever more in AI surveillance and technology, border agencies and policymakers are also choosing not to invest these resources in rescue operations and the creation of safe corridors," she points out.

Itflows attests to the EU's "growing appetite for unregulated technologies" that are high-risk for human rights, Petra Molnar further laments. These technologies also include autonomous surveillance drones and software for extracting data from cell phones.

Frontex's interest in artificial intelligence

In its investigation, Disclose also points to Frontex's interest in Itflows. The European border agency "closely follows the progress of the programme, to the point of actively contributing to it by providing data collected as part of its missions," the journalists write.

However, several recent investigations have shown that Frontex covers up practices outside any legal framework, in particular pushbacks of migrants from Greece to Turkey. Petra Molnar warns that these practices are "reinforced by various technological solutions."

>> To (re)read: Frontex director Fabrice Leggeri resigns amid the scandal of illegal pushbacks of migrants in the Aegean Sea

The agency does not intend to stop at Itflows. It openly states its intention to rely on other AI tools and communicates about ongoing research and development work. Another project Frontex backed, which also caused controversy, was called IborderCtrl. This tool, similar to a lie detector, received 4.5 million euros in funding from the European Union. It aims to decipher the emotions of people interviewed at the border by analyzing the subtle movements of their faces.

IborderCtrl, like Itflows, was funded under Horizon 2020. This research and development program represents "50% of all public funding for security research in the European Union," notes the specialist website Technopolice.

The need to regulate new technologies

Outside the work of specialist researchers and investigative journalists, technologies like Itflows are often developed by the European Union in great opacity. Many NGOs are calling for more transparency.

"We need stronger laws and policies that vigorously protect the international right to migrate and seek asylum," Petra Molnar insists. One of the first steps, according to her, is to "involve the affected communities in the discussions" around these technologies.

Within the European Union, a plan to regulate the use of artificial intelligence is under discussion: the Artificial Intelligence Act (AI Act). MEPs have until at least the end of the year to amend the initial text. Petra Molnar, who together with other civil society experts has proposed a series of amendments, hopes that these negotiations will lead to a "total ban on predictive analytics technologies" such as Itflows, before they are deployed.