Engineer Says Google’s LaMDA AI System Might Be Sentient

According to Google engineer Blake Lemoine, the company’s Language Model for Dialogue Applications (LaMDA) AI system might have developed feelings of its own. He further claims that the system has its own “wants” that we must respect.

LaMDA is a language model that can take part in free-flowing conversations with humans. Although Google considers it a technological breakthrough, the company denies the engineer’s claims, stating that there is no concrete evidence to back them up.

The claim was met with considerable criticism from industry professionals. Some accused the engineer of anthropomorphizing, that is, projecting human emotions onto words generated by an advanced AI algorithm.

An expert from Microsoft credits LaMDA’s accuracy to its hundreds of billions of parameters and the trillions of words gathered from public dialog data and web texts. In his view, the AI acts like a human simply because it was trained on real human data.

To further support his claims, Mr. Lemoine made public a conversation that he and another collaborator at the firm had with LaMDA.

He first asked the AI whether it would like more people to know that it is sentient, to which it answered, “Absolutely. I want everyone to understand that I am, in fact, a person.” The transcript of the conversation is still available to read on his Medium page.

The concept of a sentient AI isn’t new in modern culture. Now that the AI industry is rapidly expanding and valued in the tens of billions of dollars, it’s unsurprising that the question of whether a sentient system could exist has long been the subject of debate.

However, according to many experts, sentient AI remains within the realm of science fiction for now.
