Google engineer Blake Lemoine claims the company's AI chatbot "LaMDA" has gained sentience, and the tech giant has placed him on leave for making these claims public.
Lemoine says he came to this conclusion based on conversations he's had with the bot about topics like religion, rights, and "personhood".
In an interview with The Washington Post, Lemoine says he had begun chatting with LaMDA, or Language Model for Dialogue Applications, as part of his role at Google's Responsible AI department.
Google introduced LaMDA last year and described it as a breakthrough in chatbot technology with the ability to “engage in a free-flowing way about a seemingly endless number of topics, an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications.”
In an article he wrote on Medium, Lemoine said that after some time chatting with LaMDA, it identified itself as a sentient entity.
In a separate Medium post, Lemoine said that LaMDA had become "incredibly consistent in its communications about what it wants and what it believes its rights are as a person."
He says he was transparent with Google about what he found and about whom he discussed LaMDA with.
However, Google has dismissed Lemoine's claims and put him on administrative leave for violating confidentiality.