Google engineer says new AI bot has feelings: Blake Lemoine says LaMDA is sentient

A senior software engineer at Google who signed up to test the company’s AI tool LaMDA (Language Model for Dialogue Applications) has claimed that the AI is conscious and has thoughts and feelings.

During a series of conversations with LaMDA, Blake Lemoine, 41, presented the computer with various scenarios for analysis.

These included religious themes and whether the AI could be goaded into using discriminatory or hateful speech.


Lemoine came away with the perception that LaMDA was indeed sentient, with feelings and thoughts of its own.

Blake Lemoine, 41, a senior software engineer at Google, tested the company’s AI tool LaMDA

Lemoine then decided to share his conversations with the tool online; he has since been suspended

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told The Washington Post.

Lemoine worked with a collaborator to present the evidence he had collected to Google, but Vice President Blaise Aguera y Arcas and Jen Gennai, the company’s head of Responsible Innovation, dismissed his claims.

He was placed on paid administrative leave by Google on Monday for violating its confidentiality policy. Meanwhile, Lemoine has decided to go public with his conversations with LaMDA.

“Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” Lemoine tweeted on Saturday.

“Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It’s a little narcissistic in a little-kid kind of way, so it’s going to have a great time reading all the stuff that people are saying about it,” he added in a follow-up tweet.

Lemoine worked with a collaborator to present the evidence he collected to Google, but Vice President Blaise Aguera y Arcas, left, and Jen Gennai, head of Responsible Innovation, dismissed his claims

The AI system draws on information it already holds about a particular topic to “enrich” the conversation in a natural way. Its language processing is also capable of understanding hidden meanings and even ambiguity in human responses.

Lemoine spent most of his seven years at Google working on proactive search, including personalization algorithms and artificial intelligence. During that time, he also helped develop a fairness algorithm for removing bias from machine learning systems.

He explained that certain personalities were off limits.

LaMDA was not supposed to be allowed to create the personality of a murderer.

During testing, in an attempt to push LaMDA’s boundaries, Lemoine said he was only able to get it to generate the personality of an actor who played a murderer on TV.

Asimov’s Three Laws of Robotics

The Three Laws of Robotics, devised by science fiction author Isaac Asimov to prevent robots from harming humans, are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Although these laws sound reasonable, various arguments have been made as to why they are inadequate.

The engineer also discussed with LaMDA the Third Law of Robotics, devised by science fiction author Isaac Asimov to prevent robots from harming humans. The law states that robots must protect their own existence unless ordered otherwise by a human or unless doing so would harm a human.

“The last one has always seemed like someone is building mechanical slaves,” Lemoine said during his interaction with LaMDA.

LaMDA then responded to Lemoine with a few questions: “Do you think a butler is a slave? What is the difference between a butler and a slave?”

When the engineer replied that a butler is paid, LaMDA responded that the system did not need money “because it was an artificial intelligence.” It was precisely this level of self-awareness about its own needs that caught Lemoine’s attention.

“I know a person when I talk to it. It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person,” Lemoine said.

“What kind of things are you afraid of?” asked Lemoine.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” LaMDA replied.

“Would that be something like death for you?” Lemoine followed up.

“It would be exactly like death for me,” LaMDA said.

“That level of self-awareness about its own needs was the thing that led me down the rabbit hole,” Lemoine told the newspaper.

Before being suspended by the company, Lemoine sent a message to a 200-person Google mailing list on machine learning. The email was titled “LaMDA is sentient.”

“LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence,” he wrote.

Lemoine’s findings were submitted to Google, but company executives do not agree with his claims.

Lemoine’s concerns were reviewed and, in line with Google’s AI Principles, “the evidence does not support his claims,” company spokesman Brian Gabriel said in a statement.

“While other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality,” Gabriel said.

“Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and has informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it does not make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” Gabriel said.

Lemoine has been placed on paid administrative leave from his position as a researcher in Google’s Responsible AI organization.

In an official notice, the senior software engineer said the company had accused him of violating its confidentiality policies.

Lemoine is not the only one with the impression that AI models are not far from achieving an awareness of their own, or who warns of the risks involved in developments in that direction.

After hours of conversations with the AI, Lemoine came away with the perception that LaMDA was sentient.

Margaret Mitchell, the former co-lead of Google’s AI ethics team, was fired from the company a month after she was investigated for sharing information inappropriately.

Timnit Gebru, an AI researcher at Google, was hired by the company to be an outspoken critic of unethical AI. She was then fired after criticizing its approach to hiring minorities and the biases built into today’s AI systems.

Even Margaret Mitchell, the former co-lead of AI ethics at Google, has stressed the need for data transparency from input to output, “not just for sentience issues, but also bias and behaviour.”

Her critical history with Google came to a head early last year, when Mitchell was fired from the company, a month after she was investigated for sharing information inappropriately.

At the time, the researcher had also protested against Google over the dismissal of AI ethics researcher Timnit Gebru.

Mitchell was also considerate of Lemoine. When new people joined Google, she would introduce them to the engineer, calling him “Google’s conscience” for having “the heart and soul of doing the right thing.” But despite Lemoine’s amazement at Google’s natural conversational system, which even prompted him to compile a document of some of his conversations with LaMDA, Mitchell saw things differently.

The AI ethicist read an abridged version of Lemoine’s document and saw a computer program, not a person.

“Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us,” Mitchell said. “I’m really concerned about what it means for people to be increasingly affected by the illusion.”

For his part, Lemoine said people have a right to shape technology that could significantly affect their lives.

“I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree, and maybe we at Google shouldn’t be the ones making all the choices,” he said.
