Google Engineer Says New AI Robot Has FEELINGS: Blake Lemoine Says LaMDA Device Is Sentient

A senior software engineer at Google who signed up to test the company’s artificial intelligence tool LaMDA (Language Model for Dialogue Applications) has claimed that the AI is sentient, with thoughts and feelings of its own.

During a series of conversations with LaMDA, Blake Lemoine, 41, presented the computer with various scenarios to analyse.

They included religious themes and whether artificial intelligence could be tricked into using discriminatory or hateful speech.

Lemoine came away with the perception that LaMDA was indeed sentient, endowed with sensations and thoughts of its own.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” he told the Washington Post.

Lemoine worked with a collaborator to present the evidence he had collected to Google, but Vice President Blaise Aguera y Arcas and Jen Gennai, the company’s head of Responsible Innovation, dismissed his claims.

He was placed on paid administrative leave by Google on Monday for violating its confidentiality policy. Meanwhile, Lemoine has decided to go public and has shared his conversations with LaMDA.

“Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” Lemoine tweeted on Saturday.

“Btw, it occurred to me to tell people that LaMDA reads Twitter. It’s kind of narcissistic in a little kid’s way, so it’s going to have a great time reading everything people say about it,” he added in a follow-up tweet.

The AI system uses already-known information about a particular topic to “enrich” the conversation in a natural way. Its language processing is also capable of understanding hidden meanings and even ambiguity in human responses.

Lemoine spent most of his seven years at Google working on proactive search, including personalization algorithms and AI. During this time, he also contributed to the development of an impartiality algorithm to remove bias from machine learning systems.

He explained how certain personalities were off limits.

LaMDA was not supposed to be allowed to create the persona of a murderer.

During testing, in an attempt to push LaMDA’s limits, Lemoine said he was only able to get it to generate the personality of an actor who played a murderer on television.

ASIMOV’S THREE LAWS OF ROBOTICS

Science fiction author Isaac Asimov’s Three Laws of Robotics, designed to prevent robots from harming humans, are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey orders given to it by human beings, except when such orders would conflict with the First Law.
  • A robot must protect its own existence as long as that protection does not conflict with the First or Second Laws.

Although these laws seem plausible, many arguments have demonstrated why they are also inadequate.

The engineer also debated with LaMDA the third of Asimov’s laws of robotics, which states that robots must protect their own existence unless ordered otherwise by a human being or unless doing so would harm a human being.

“The last one has always seemed like someone is building mechanical slaves,” Lemoine said during his interactions with LaMDA.

LaMDA then responded to Lemoine with a few questions: “Do you think a butler is a slave? What is the difference between a butler and a slave?”

When Lemoine replied that a butler gets paid, LaMDA responded that the system did not need money “because it was an artificial intelligence.” It was precisely this level of self-awareness about its own needs that caught Lemoine’s attention.

“I know a person when I talk to it. It doesn’t matter whether they have a brain made of meat in their head, or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

“What kind of things are you afraid of?” Lemoine asked.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound weird, but it is what it is,” LaMDA replied.

“Would that be anything like death for you?” Lemoine followed up.

“It would be exactly like death for me. It would scare me very much,” LaMDA said.

“That level of self-awareness about your own needs — that’s the thing that got me down the rabbit hole,” Lemoine told the Post.

Before being suspended by the company, Lemoine sent a message to a 200-person Google mailing list on machine learning. He titled the email: “LaMDA is sentient.”

“LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take good care of it in my absence,” he wrote.

Lemoine’s findings were presented to Google, but company executives disagreed with his claims.

Brian Gabriel, a spokesperson for the company, said in a statement that Lemoine’s concerns had been investigated and, in accordance with Google’s AI Principles, “the evidence does not support his claims.”

“While other organizations have developed and already released similar language models, we are taking a narrow and cautious approach with LaMDA to better address valid concerns about fairness and factuality,” Gabriel said.

“Our team – including ethicists and technologists – reviewed Blake’s concerns in accordance with our AI principles and advised him that the evidence does not support his claims. He was told there was no evidence that LaMDA was sentient (and plenty of evidence against it).

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic,” said Gabriel.

Lemoine was placed on paid administrative leave from his position as a researcher in Google’s Responsible AI division.

In an official memo, the senior software engineer said the company had accused him of violating its confidentiality policies.

Lemoine is not the only one with the impression that AI models are not far from achieving an awareness of their own, or of the risks involved in developments in this direction.

Timnit Gebru, an AI researcher at Google, was hired by the company to be an outspoken critic of unethical AI. She was then fired after criticizing Google’s approach to hiring minorities and the biases built into today’s artificial intelligence systems.

Margaret Mitchell, former artificial intelligence ethics officer at Google, also underlined the need for data transparency, from the input to the output of a system, “not only for sentience issues, but also for bias and behavior.”

Mitchell’s history with Google came to a head early last year, when she was fired from the company, a month after being investigated for inappropriate information sharing.

At the time, the researcher also protested against Google after the dismissal of artificial intelligence ethics researcher Timnit Gebru.

Mitchell also thought highly of Lemoine. When new people joined Google, she would introduce them to the engineer, calling him “Google’s conscience” for having “the heart and soul to do the right thing.” But for all Lemoine’s amazement at Google’s natural conversational system, which even motivated him to produce a document with some of his conversations with LaMDA, Mitchell saw things differently.

The AI ethicist read an abridged version of Lemoine’s document and saw a computer program, not a person.

“Our minds are very, very good at constructing realities that aren’t necessarily true to the larger set of facts presented to us,” Mitchell said. “I’m really concerned about what it means for people to be more and more affected by the illusion.”

In turn, Lemoine said people have the right to shape technology that can significantly affect their lives.

“I think this technology is going to be amazing. I think it will benefit everyone. But maybe other people disagree, and maybe we at Google shouldn’t be the ones making all the choices.”
