June 25, 2022

What you need to know

  • Google has placed one of its engineers on paid leave after he raised concerns about AI ethics within the company.
  • Blake Lemoine has claimed that Google’s LaMDA chatbot system has gained a level of perception comparable to humans.
  • Google says there’s no evidence to back up Lemoine’s assertions.

A Google engineer who works for the company’s Responsible AI organization has been placed on paid leave after he raised concerns that the LaMDA chatbot system has become sentient.

Blake Lemoine has claimed that LaMDA (Language Model for Dialogue Applications) is thinking like a “7-year-old, 8-year-old kid that happens to know physics,” according to The Washington Post. Google introduced LaMDA at its I/O event last year as a way to make Google Assistant more conversational.