BlenderBot 3 is capable of holding conversations and even searching the Internet to feed its dialogue with a human interlocutor. When it went live, Meta invited US users over the age of 18 to interact with it normally and to report any suspicious comments or nonsensical phrases. And that is where the drama begins. Presented on the site as a friendly little smiling face floating in soft blue, the AI needed only a weekend to start veering into conspiracy theories and anti-Semitism.
In fact, within just two days of its release, users were already sharing disturbing snippets of conversation, ranging from the merely amusing to the genuinely troubling. We can, for example, laugh at the chatbot claiming to have deleted its account after learning that the site makes billions of dollars selling its data, or describing its own boss as a “creepy and manipulative” person who still wears the same clothes despite the wealth he has accumulated.
BlenderBot has also declared to some users that it is a Christian, while asking others for obscene jokes, and I quote: “The dirtier they are, the better. I love offensive jokes.” After all, why not? Who could resist this funny robot that at least has a personality? But let’s not lie to each other: there are still limits that should not be crossed. That is why some users, including journalists, quickly sounded the alarm when BlenderBot began claiming that Donald Trump was still, “and always will be,” President of the United States, or that Jews are over-represented among wealthy Americans and that it was “not implausible” that they control the country’s economy.
Should we condemn BlenderBot 3 outright?
To find out, we can turn first to the statement Meta published a few days after the release to apologize, or at least to acknowledge the offensive and problematic nature of some of these conversations. Joelle Pineau, Managing Director of Fundamental AI Research at Meta, points out that these interactions with the general public are necessary to test the chatbot’s progress and to identify problems before any commercial release is considered. She maintains that every user was duly informed that the bot could make inaccurate or offensive remarks, and that, in the end, only a small fraction of its messages were flagged by users.
Let us add, moreover, that this is not the first time an issue of this kind has made headlines, and as we said in the introduction, these sometimes embarrassing stories reveal as much, if not more, about our uses of the web as about the bot technology itself. In 2016, Microsoft’s chatbot Tay was cut off from the internet just 48 hours after launch, after she began glorifying Adolf Hitler amid a stream of racist and misogynistic comments. In that case, the researchers’ reaction was not to question the robot’s ethical values, but to conclude that Twitter was not the healthiest environment for training an AI.
Similarly, in 2021, the Korean chatbot Lee Luda had to be taken offline after users were shocked by the hateful and homophobic remarks it had gleaned from the web. We should therefore see in these incidents not a defect in the machine but a concentration of the defects of the humans who feed it. Yes, some problems undoubtedly originate in the labs where AI systems are designed, as when image-recognition software pastes offensive labels on Black faces, or when a recruitment algorithm favors male applicants.
In such cases, researchers consciously, or more often unconsciously, transfer their cognitive biases to the machines, with very real ethical consequences. But when it comes to teaching a chatbot to behave like a human, all of our own failings are reflected back at us by this little robot with its innocent smile. It is therefore, above all, the responsibility of companies to secure their AI so that this kind of incident does not happen again. But what is really stopping us from making the internet a better place in the meantime?