Blake Lemoine, an engineer at Google, has made a claim that has created a stir in the tech space and the AI (Artificial Intelligence) market. He disclosed that the company had placed him on leave after he claimed that an AI chatbot it was developing had become sentient. Lemoine works in Google's Responsible AI organization and, as part of his job, began chatting with the interface in question last fall.
AI Chatbot LaMDA
LaMDA stands for Language Model for Dialogue Applications; Google described it as "breakthrough conversation technology" last year. The AI chatbot is designed to engage in open-ended conversations that sound completely natural. According to Google, the technology could have a variety of applications, including use in Google Assistant and in research.
However, the company also said that the technology was still under research and testing. On Saturday, Lemoine published a post on Medium in which he referred to the conversational AI as a person. Lemoine, who is also a Christian priest, disclosed that he had talked with the chatbot about the laws of robotics, consciousness, and religion. He said it described itself as a sentient person and even claimed it wanted to be acknowledged as an employee of Google rather than as its property.
LaMDA also said that the well-being of humanity was its top priority. Lemoine described some of the conversations that had convinced him the AI was sentient, saying the two of them were on the same page. This prompted him to bring the issue to his superiors, which ultimately led to his being placed on leave.
Google Spokesperson Denies the Claim
Google spokesperson Brian Gabriel said that a team including technologists and ethicists had reviewed the concerns Blake Lemoine put forward, in accordance with the company's AI principles. He said they had found no evidence to support the claim; in fact, there was considerable evidence against it, and Lemoine had been informed of this.
Google placed Lemoine on administrative leave because he had violated its confidentiality policy. He had gone as far as speaking to a member of Congress about the issue and suggesting that a lawyer be hired for LaMDA. The spokesperson stated that while the possibility of sentience in AI has been considered, the conversational models that exist today are not sentient and should not be anthropomorphized.
Gabriel said that these systems are designed to imitate conversation and can talk about almost any topic. Current AI models are trained on so much data that they are fully capable of coming across as human, he said, but their impressive language skills do not make them human, and they are not sentient.