At the I/O 2022 developer conference in May, Google CEO Sundar Pichai announced the company’s experimental LaMDA 2 conversational AI model. He said the project would open to beta testers in the coming months, and users can now register to be among the first to test this supposedly sentient chatbot. LaMDA – short for Language Model for Dialogue Applications – is a natural language processing (NLP) model that some have claimed shows signs of sentience. NLP serves as a kind of interface between humans and computers. Voice assistants like Siri or Alexa are prominent examples of NLP applications that translate human speech into commands. NLP also powers real-time translation and subtitle apps.
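To make the “speech into commands” idea concrete, here is a minimal Python sketch of keyword-based intent routing. The intents, keywords, and the route_command helper are invented purely for illustration; real assistants like Siri or Alexa rely on far more sophisticated machine-learned models rather than keyword tables.

```python
# Toy illustration of an NLP-style "speech to command" step.
# The intents and keywords here are hypothetical; real assistants
# use large statistical models, not keyword lookups.

INTENT_KEYWORDS = {
    "set_timer": ["timer", "remind me in"],
    "play_music": ["play", "music", "song"],
    "get_weather": ["weather", "forecast", "temperature"],
}

def route_command(transcript: str) -> str:
    """Map a transcribed utterance to a command name (or 'unknown')."""
    text = transcript.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "unknown"

if __name__ == "__main__":
    for utterance in [
        "Play some music in the kitchen",
        "What's the weather like tomorrow?",
        "Set a timer for ten minutes",
    ]:
        print(f"{utterance!r} -> {route_command(utterance)}")
```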

Google’s allegedly sentient chatbot got a senior software engineer fired

Back in July, Google reportedly fired one of its senior software engineers, Blake Lemoine, who claimed that the LaMDA chatbot is sentient and acts like a self-aware person. To justify the dismissal, Google said the employee had violated employment and data security policies. In addition, two members of Google’s Ethical AI research group left the company in February, saying they couldn’t cope with the dismissals.

Users who register for the LaMDA beta program can interact with the model in a controlled and monitored environment. Android users in the US are the first who can register, and the program will expand to iOS users in the coming weeks. The experimental program offers beta users several demos to showcase LaMDA’s capabilities.

According to Google engineers, the first demo, ‘Imagine It,’ lets users name a place and then offers paths to explore their imagination. The second, ‘List It,’ lets users share a goal or topic, which LaMDA breaks down into a list of helpful subtasks (see the sketch at the end of this article). Finally, the last demo, ‘Talk About It (Dogs Edition),’ allows open-ended conversation about dogs between users and the chatbot.

Google’s engineers say they’ve “run dedicated rounds of adversarial testing to find additional flaws in the model.” Even so, they don’t claim their system is foolproof: “The model can misunderstand the intent behind identity terms and sometimes fails to produce a response when they’re used because it has difficulty differentiating between benign and adversarial prompts. It can also produce harmful or toxic responses based on biases in its training data, generating responses that stereotype and misrepresent people based on their gender or cultural background.”
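For a rough sense of what a ‘List It’-style interaction involves, the hypothetical Python sketch below frames goal decomposition as a prompt to a text-generation model. The generate function is a placeholder that returns a canned reply – LaMDA has no public API, and the real demo runs only inside Google’s beta app – so only the prompt shape and the list parsing are illustrative.

```python
# Hypothetical sketch of a 'List It'-style interaction: ask a dialogue
# model to break a user goal into subtasks. `generate` is a canned
# placeholder, not Google's API; LaMDA is only reachable through the
# official beta app.

def generate(prompt: str) -> str:
    """Placeholder for a dialogue model; returns a fixed example reply."""
    return (
        "- Pick a sunny spot for the garden\n"
        "- Test and prepare the soil\n"
        "- Choose vegetables suited to the season\n"
        "- Plant seeds and set up a watering schedule"
    )

def list_it(goal: str) -> list[str]:
    """Prompt the model to decompose a goal, then parse one subtask per line."""
    prompt = (
        "Break the following goal into a short list of helpful subtasks.\n"
        f"Goal: {goal}\n"
        "Subtasks:\n"
    )
    reply = generate(prompt)
    return [line.lstrip("-* ").strip() for line in reply.splitlines() if line.strip()]

if __name__ == "__main__":
    for step in list_it("I want to plant a vegetable garden"):
        print("*", step)
```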