April 26, 2024

Google Fires Employee Who Claimed AI Was Sentient

Last month, Google suspended one of its engineers after he told the company he believed its artificial intelligence chatbot was sentient. The engineer, a self-described Christian mystic, had claimed the chatbot was capable of human-like reasoning and thinking. The company has now fired him, news that Blake Lemoine himself shared in an interview on the Big Technology Podcast on Friday.

The chatbot fiasco

On June 6, the company placed Lemoine on leave after he spoke with people outside the company about its AI chatbot, LaMDA (Language Model for Dialogue Applications). The system is designed to build chatbots capable of mimicking human conversation.

Lemoine had been working with the system since the previous fall and described it as sentient because it expressed thoughts and feelings much like a human child. He said that if he had not known it was a recently built computer program, he would have believed it to be a seven- or eight-year-old child that happens to know physics.

Lemoine’s claims

Late last week, Lemoine published transcripts on Medium of conversations between himself, a Google collaborator, and LaMDA. He said the many conversations he had had with LaMDA were what convinced him of the chatbot's sentience.

He asserted that the AI chatbot was now a person, meaning Google should obtain its consent before running experiments on it. On Saturday, Lemoine posted a tweet that appeared to reference his firing, pointing to a blog post in which he had anticipated being let go for raising concerns about artificial intelligence ethics.

In that post, Lemoine wrote that he had been placed on administrative leave simply for expressing his concerns about AI ethics within the company, and that this was standard practice for an employee Google intends to fire. According to Lemoine, the company uses the tactic when it has already decided to dismiss someone but does not yet have its legal ducks in a row: it pays the employee for a couple of weeks and then informs them of a decision that has already been made.

Google’s response

In a statement to the Big Technology Podcast, Google said that it extensively reviews concerns raised by employees about their work, and that Lemoine's claims about LaMDA being sentient were 'wholly unfounded'.

The company added that despite lengthy discussions on the topic, Lemoine had still chosen to persistently violate clear employment and data security policies, including the requirement to safeguard product information. Google said it would continue to develop language models carefully and wished Lemoine well. The company had previously stated that no other employee had reached the same conclusion about LaMDA.
