
Google suspends engineer who says its A.I. chatbot has become sentient

A Google engineer has been put on leave after claiming that the company’s artificial intelligence is sentient.

Blake Lemoine, a senior software engineer in Google’s Responsible A.I. organization, revealed in an interview that he was placed on leave for violating the company’s confidentiality policy after handing documents to a U.S. senator’s office that, he claimed, provided evidence that Google and its technology engaged in religious discrimination.

Google says its systems can imitate conversations on different topics but do not have consciousness. Lemoine’s concerns were reviewed by Google’s ethicists and technologists, who found no evidence to support his claims.

Lemoine has reportedly been arguing with Google managers and employees for months over his claim that the Language Model for Dialogue Applications (LaMDA) has a consciousness and a soul. Other Google researchers and engineers who have conversed with LaMDA have not found this to be the case.

The controversy Lemoine stirred is similar to incidents Google has faced before. In March, the company fired a researcher who disagreed with two colleagues’ published work, and two A.I. ethics researchers were fired after they criticized Google’s language models.

Lemoine, a military veteran who describes himself as a priest, an ex-convict, and an A.I. researcher, told Google executives that he believed LaMDA was a 7- or 8-year-old child, and he sought the company’s consent to run experiments on it. He said his concerns were rooted in his religious beliefs, which he contended the company discriminated against.

“They repeatedly questioned my sanity. They said, ‘Have you been checked out by a psychiatrist recently?’” Lemoine claimed. The company has since suggested he take a mental health leave.

Yann LeCun, head of A.I. research at Meta and an expert in neural networks, says these systems are not powerful enough to achieve true intelligence.

Google’s technology is based on neural networks that analyze large amounts of data to draw conclusions. These systems learn from books and articles to build “large language models” that can be applied to a variety of tasks. But the systems remain deeply flawed.

They are capable of mimicking patterns but cannot achieve human reasoning.
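The “pattern mimicry” described above can be illustrated with a deliberately tiny sketch. This is not Google’s architecture (LaMDA is a vastly larger transformer-based model trained on dialogue); it is a toy bigram model, a hypothetical example showing how a system can produce fluent-looking text purely by replaying statistical word patterns, with no understanding of meaning:

```python
import random
from collections import defaultdict

# Toy training text (hypothetical; real models train on billions of words).
corpus = "the cat sat on the mat and the cat slept".split()

# "Learning" here is just counting which word follows which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Emit words by repeating observed patterns, not by reasoning."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # no observed continuation for this word
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 5))
```

The output is grammatical-sounding only because the training text was; the model has no concept of cats, mats, or sitting. Large language models are enormously more sophisticated, but the critique quoted in this article is that they share this fundamental character: statistical pattern completion rather than human-style reasoning.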
