Behind the tree of artificial intelligence consciousness, the forest of political and social issues

The case rocked the AI community in early June: Blake Lemoine, a Google engineer, told The Washington Post that the LaMDA language model was probably conscious (he has since been dismissed). Very quickly, experts in the field, and Google itself, spoke out against this claim. LaMDA is a system built to imitate conversation as realistically as possible, but that does not mean it understands what it is saying. On the contrary, many scientists argue, stoking the debate about AI consciousness diverts attention from the more pressing questions these technologies raise.

The old obsession with intelligent robots… a marketing argument?

The idea that our technologies might become conscious is not new: it has haunted our imaginations since Mary Shelley’s Frankenstein and the growing success of science fiction. Imitating human reasoning is also the basis of the Turing test, an experiment meant to gauge whether a machine can pass itself off as human to an outside observer. One of the fathers of modern computing, John von Neumann, for his part laid the foundations of computer architecture by modeling it on the workings of the brain.

“Even today, many people are funding research and working in this direction,” notes Laurence Devillers, professor of artificial intelligence at LIMSI/CNRS. She cites OpenAI co-founder Elon Musk; Yann LeCun, head of AI research at Meta, who has raised the possibility that some machines could feel emotions; or Google vice-president Blaise Agüera y Arcas, who described LaMDA as an artificial cortex… “That an engineer declares LaMDA conscious serves a marketing interest,” the researcher explains. “It positions Google in a competitive field.”

When empathy deceives us

In fact, LaMDA is neither the first program capable of eliciting empathy, nor the first algorithmic model able to produce convincing written conversation. In the 1960s, the computer scientist Joseph Weizenbaum built ELIZA, a program that simulated a psychotherapist’s responses. The machine worked so well that people confided intimate details to it. We now call the “ELIZA effect” the human tendency to attribute more faculties to a technical system than it actually has. Closer to LaMDA, the GPT-3 language model, available since 2020, can also convincingly impersonate a journalist, a squirrel, or a resurrected William Shakespeare.

But users, experts or not, can mistake these performances for consciousness, and that is what worries a growing number of scientists. The linguist Emily Bender sees it as an exploitation of our capacity for empathy, one that leads us to project a semblance of humanity onto inanimate objects. Laurence Devillers recalls that LaMDA is “fundamentally inhuman”: the model is trained on 1.56 trillion words, has no body and no history, and produces its answers by probability calculations…

Artificial intelligence is a social justice issue

Shortly before the Lemoine affair, PhD student Giada Pistilli announced that she would no longer comment on the potential consciousness of machines: the debate distracts from ethical and social problems that already exist. In this, she aligns with Timnit Gebru and Margaret Mitchell, two AI ethics researchers fired by Google… for pointing out the social and environmental risks posed by large language models. “It is a question of power,” analyzes Raziye Buse Çetin, an independent researcher in artificial intelligence policy. “Do we highlight and fund the quest for a machine we dream of making conscious, or rather efforts to correct the social, gender, or racial biases of algorithms already present in our daily lives?”

The ethical problems of the algorithms that surround us daily are myriad: on what data are they trained? How are their errors corrected? What happens to the texts users send to chatbots built on models similar to LaMDA? In the United States, an association offering support to suicidal people used the conversations of these at-risk individuals to train commercial tools. “Is this acceptable? We need to think about how data is used today, about the value of our consent in the face of algorithms whose existence we sometimes do not even suspect, and to consider their implications as a whole, given that algorithms are already widely used in education, employment, credit scoring…”

Regulation and education

The topic of AI consciousness crowds out discussions “on the technical limitations of these technologies, the discrimination they cause, their environmental impact, and the biases in their data,” says Tiphaine Viard, a lecturer at Télécom Paris. Behind the scenes, these debates have been stirring scientific and legislative circles for several years because, according to the researcher, “the issues are similar to what happened with social networks. The big tech companies have always said they did not need regulation, that they would handle it themselves. The result: fifteen years later, we realize that political and citizen oversight is needed.”

What framework, then, could prevent algorithms from harming society? The explainability and transparency of models are two of the avenues discussed, notably within the planned European regulation on artificial intelligence. “And those are good leads,” continues Tiphaine Viard, “but what should they look like in practice? What is a good explanation? What avenues of recourse are possible if discrimination is established? There is no definitive answer at the moment.”

Another major theme, Laurence Devillers emphasizes, is education. “People must be trained very early for the challenges posed by these socio-technical objects”: teaching coding, helping people understand how algorithms work, building skills… otherwise, faced with machines designed to imitate humans, “you run the risk of being manipulated.” Education, the computer scientist continues, would be the best way to allow “everyone to think about how to adapt to these evolving technologies, about the frictions and brakes we want to put in place, and whether to accept them,” and to push for the construction of an ethical ecosystem in which manufacturers do not write their own rules.
