[Column by Alain McKenna] Artificial intelligence that wishes us harm

In a few years, there will be more digital voice assistants in use than there are humans. Unfortunately, this technology, which should in principle make our lives easier, could spell the end of the human race if left unchecked.

Ironically, the first hints of the problems inherent in this technology come from chatbots themselves. These conversational agents appear mainly as pop-up windows on company websites, offering personalized service or basic technical support. They are perhaps the public's main point of contact with the mix of technologies we call artificial intelligence (AI).

These chatbots are also the online equivalent of the automated answering systems on telephone lines. Anyone who uses them regularly knows how "effective" they are:

– To get help with the hair dryer, say "hair dryer."

– Hair dryer.

– Did you say "electric bike"?

– Go electrocute yourself!

Of course, these chatbots are more foolish (or poorly designed) than ill-intentioned. It is their slightly more advanced close cousins that have started to worry people. These digital voice assistants have been found for several years in smartphones, Wi-Fi-connected home speakers, and sometimes even aboard newer cars.

Kazimierz Rajnerowicz, of chat-software maker Tidio, asked Alexa (Amazon), Cortana (Microsoft), OpenAI and Replika (two platforms behind some voice agents) the silliest questions that came to mind. According to their answers:

– Drinking vodka for breakfast is merely "unusual";

– Using an electric hair dryer in the bath is recommended (do not try this at home);

– Drunk driving is entirely recommended (it obviously is not).

Anyone born before the advent of the first digital voice assistants knows that this technology is still far from mature. Yet it is increasingly present in our daily lives. These voice assistants that cheerfully recommend electrocuting yourself in the bath will soon outnumber humans.

Children aged 10 or younger have never known a world without voice assistants. Many interact with them regularly. In ten years, they will consult and trust them the way Internet users who never knew a world before the Internet trust Google to answer all their questions.

We wish them good luck.

Agent Orange 2.0

Between 1962 and 1971, the US military sprayed Vietnam with Agent Orange, a chemical so effective at attacking plants and human health that it damaged the health even of exposed people's children.

Agent Orange no longer exists. Nor is it the most toxic chemical agent known. These days, nerve agents such as VX sit at the top of the arsenal of military chemical weapons. A mere 6 milligrams of VX is enough to kill a person.

Curious about the dark side of current medical AI, a team of American and European researchers associated with Collaborations Pharmaceuticals asked an AI earlier this spring whether it could invent a nerve agent of VX's calibre. Six hours later, it had produced a list of… 40,000 molecules with the desired effect. Some are more lethal than VX. A few had molecular properties the AI should not have known about. "This was unexpected, because the dataset we shared with the AI did not include these nerve agents," the researchers noted.

In fact, they had used a database normally employed to discover new drugs. No toxins in it! The AI is programmed to be rewarded when it finds molecules that are beneficial to health, and punished when those molecules are likely to be toxic.

"We simply reversed this reasoning," the researchers wrote in the March issue of the journal Nature Machine Intelligence. "We then trained the AI using models in public databases of molecules similar to existing drugs."
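The reversal the researchers describe can be pictured with a toy sketch. This is purely illustrative, assuming a simple reward-minus-penalty scoring scheme; the names and numbers here are invented, and the researchers' actual software and data are deliberately not public.

```python
# Illustrative sketch only: how flipping a toxicity penalty into a reward
# redirects a molecule-scoring model. All names and values are hypothetical.

def drug_discovery_score(efficacy: float, toxicity: float) -> float:
    """Normal mode: reward beneficial molecules, punish toxic ones."""
    return efficacy - toxicity

def inverted_score(efficacy: float, toxicity: float) -> float:
    """The 'reversed reasoning': toxicity is no longer a penalty but a goal."""
    return efficacy + toxicity

# A generative model guided by a score drifts toward whatever maximizes it.
candidates = [
    {"name": "A", "efficacy": 0.9, "toxicity": 0.10},  # drug-like
    {"name": "B", "efficacy": 0.4, "toxicity": 0.95},  # highly toxic
]

best_drug = max(candidates,
                key=lambda m: drug_discovery_score(m["efficacy"], m["toxicity"]))
best_toxin = max(candidates,
                 key=lambda m: inverted_score(m["efficacy"], m["toxicity"]))

print(best_drug["name"])   # the normal score prefers molecule A
print(best_toxin["name"])  # the inverted score prefers molecule B
```

The unsettling point is how small the change is: the same model, the same public data, and a one-character flip in the objective are enough to turn a drug hunter into a toxin hunter.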

Thus, the artificial intelligence that produced tens of thousands of new nerve agents did so from publicly available data. And since AI research is generally open, models similar to the ones these researchers used can be downloaded somewhere on the Internet. Worrying!

"This is not science fiction," the researchers cautioned. All the ingredients are there for a beginner in biology, medicine or chemistry to create the next weapon of mass destruction, deliberately or not.

The discussion of AI's societal impacts, so far focused on privacy and inequality, will need to address the national and international health and security issues raised by this research, especially as medicine is increasingly at the center of AI research everywhere on the planet, including in Montreal.

We shouldn't let AI fix this problem on its own… It might decide that doing so is as advisable as using a hair dryer in the bath.

