Google engineer’s claim: the company’s AI chatbot can think like a human


Google is developing AI chatbot technology. Blake Lemoine, an engineer on the company’s Responsible AI team who tested the system, is now at the center of a controversy. He has claimed that this AI bot works like a human brain and that its development is effectively complete.

When he made this claim public, Google placed him on administrative leave, albeit paid. In a Medium post, Blake stated that he could soon be fired for his work on AI ethics.

AI Chatbot Thinking Like a Human

Blake is accused of disclosing confidential project information to third parties. Following the suspension, he made an unusual and startling claim: he had come across a ‘sentient’ AI on Google’s servers, one that, he says, can think like a human.

A machine brain giving strikingly human responses

LaMDA is the name of the AI causing the stir. Blake Lemoine told The Washington Post that when he began chatting through the LaMDA (Language Model for Dialogue Applications) interface, it felt as though he was conversing with a person. Google described LaMDA as a breakthrough in conversation technology last year.

This conversational AI tool responded in a consistently human-like voice: you can change the subject frequently and it follows along, as if you were talking with a person. According to Google, the technology could be used in products such as Search and Google Assistant, and the company says it is still researching and testing it.

Google’s clarification on the paid leave

According to Google spokesperson Brian Gabriel, the company reviewed Lemoine’s claim and found the evidence he provided insufficient. Asked about Lemoine’s absence, Gabriel confirmed that he had been placed on administrative leave.

Gabriel went on to say that while some in the artificial intelligence field are considering the long-term possibility of sentient AI, it does not make sense to anthropomorphize today’s conversational models, which are not sentient. He explained that “systems like LaMDA work by mimicking the types of exchanges found in millions of sentences of human conversation, allowing them to talk about imaginary subjects as well.”
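
Gabriel’s description, a system imitating patterns found in millions of sentences of human conversation, is the standard mechanism behind large dialogue models. As a rough illustration only (LaMDA itself is not publicly available, so this is not Google’s code), the following Python sketch holds a short multi-turn chat with an openly available dialogue model, microsoft/DialoGPT-medium, via the Hugging Face transformers library:

```python
# Minimal sketch: multi-turn chat with an open dialogue model.
# Assumption: "microsoft/DialoGPT-medium" stands in for LaMDA, which is
# not publicly available. Requires: pip install transformers torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

history = None  # token IDs of the conversation so far
for user_text in ["Hello! How are you today?",
                  "Let's change topics: do you like space travel?"]:
    # Each turn ends with the end-of-sequence token the model was trained on.
    new_ids = tokenizer.encode(user_text + tokenizer.eos_token, return_tensors="pt")
    # Condition generation on the full conversation, not just the last turn.
    input_ids = new_ids if history is None else torch.cat([history, new_ids], dim=-1)
    history = model.generate(input_ids, max_length=200,
                             pad_token_id=tokenizer.eos_token_id)
    # The reply is everything generated after the prompt tokens.
    reply = tokenizer.decode(history[0, input_ids.shape[-1]:], skip_special_tokens=True)
    print("User:", user_text)
    print("Bot: ", reply)
```

Because such a model only predicts plausible next tokens given the conversation so far, it can riff on nearly any topic, which is precisely why its output can feel sentient without being so.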
