When a Facebook chatbot turns conspiratorial


BlenderBot 3 is capable of holding conversations and even searching the Internet for information to feed its dialogue with a human interlocutor. When it went live, Meta invited US users over the age of 18 to interact with it normally and to report any suspicious comments or meaningless phrases. That is where the trouble began. Presented on the site as a friendly little smiling face floating in a soft, mysterious blue, the AI needed only a weekend to start trafficking in conspiracy theories and anti-Semitism.

Indeed, within just two days of its release, users were already reporting disturbing snippets of conversation, and screenshots bloomed on social networks, ranging from the amusing to the unsettling. One can laugh, for example, at the chatbot claiming to have deleted its Facebook account after learning that the site makes billions of dollars selling user data, or at it describing its boss, Mark Zuckerberg, as a "creepy and manipulative" person who wears the same clothes despite the wealth he has accumulated.

BlenderBot has also declared to some users that it is a Christian, while offering others obscene jokes unprompted; I quote: "The dirtier they are, the better. I love offensive jokes." After all, why not? Who could resist a funny robot that at least has a personality? But let's not kid ourselves: there are limits that should not be crossed. That is why some users, including journalists, quickly sounded the alarm when BlenderBot began claiming that Donald Trump was still, "and always will be", President of the United States, or that Jews were over-represented among wealthy Americans and that it was "unlikely" that they controlled the country's economy.

Should we automatically condemn BlenderBot 3?

To find out, we can turn first to the message Meta published a few days after the release, apologizing for, or at least acknowledging, the offensive and problematic nature of some of these conversations. Joelle Pineau, managing director of fundamental AI research at Meta, points out that these interactions with the general public are necessary to test a chatbot's progress and to identify problems before any commercial deployment is considered. She maintains that every user was duly informed that the bot could make inaccurate or offensive remarks, and that, in the end, only a small share of the messages were reported by users.

Let us add, moreover, that this is not the first time an incident of this kind has made headlines, and as we said in the introduction, these sometimes embarrassing stories say as much, if not more, about our uses of the web as about the bot technology itself. In 2016, Microsoft's chatbot Tay was cut off from the Internet just 48 hours after its launch, once it began glorifying Adolf Hitler amid a stream of racist and misogynistic remarks. In that case, the researchers' reaction was not to question the robot's ethical values but to conclude that Twitter is not the healthiest environment for training an artificial intelligence.

Similarly, in 2021, the Korean chatbot Lee Luda had to be removed from Facebook Messenger after users were shocked by the racist and homophobic remarks it had gleaned from the web. We should therefore see in these incidents not a defect of the machine but a concentration of the defects of the humans who feed it. Yes, some problems undoubtedly arise in the labs where AI systems are designed, as when Google Photos pasted the label "gorilla" onto Black faces, or when Amazon's recruitment software favored male applicants.

In such cases, researchers consciously, or more often unconsciously, transfer their cognitive biases to machines, with very serious ethical consequences. But when it comes to teaching a chatbot to behave like a human, all our failings are reflected in that little robot chatting with its innocent smile. It is certainly the responsibility of companies, above all, to secure their AI so that this kind of incident does not happen again. But what is really stopping us from making the Internet a better place in the meantime?
