In interviews and public statements, many in the AI community have dismissed the engineer's claims, while some have noted that his account shows how this technology can lead people to ascribe human traits to it. But the belief that Google's AI could be conscious arguably highlights both our fears and our expectations of what this technology can do.
Engineer Blake Lemoine reportedly told The Washington Post that he shared evidence with Google that LaMDA was conscious, but the company disagreed. In a statement, Google said Monday that its team, which includes ethicists and technologists, "reviewed Blake's concerns in accordance with our AI Principles and informed him that the evidence does not support his claims."
A Google spokesperson confirmed that Lemoine remains on administrative leave. According to The Washington Post, he was placed on leave for violating the company's confidentiality policy.
Lemoine was not available for comment on Monday.
The continued emergence of powerful computing programs trained on massive troves of data has raised concerns about the ethics governing the development and use of this technology. And sometimes advancements are viewed through the lens of what may come, rather than what is currently possible.
In an interview Monday with CNN Business, Marcus said the best way to think of systems like LaMDA is as a "glorified version" of the autocomplete software you might use to predict the next word in a text message. If you type "I'm really hungry so I want to go to," it might suggest "restaurant" as the next word. But that is a prediction made using statistics.
“No one should think that autocompletion, even on steroids, is conscious,” he said.
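To make Marcus's analogy concrete, here is a minimal sketch in Python of the kind of statistical next-word prediction he describes. The toy bigram model and every phrase in it are invented for illustration; this is not how LaMDA is actually built, only a bare-bones version of "prediction made using statistics":

    from collections import Counter, defaultdict

    # Toy corpus standing in for the vast text data such models train on
    # (these sentences are invented for this example).
    corpus = [
        "i am really hungry so i want to go to a restaurant",
        "i want to go to a restaurant tonight",
        "we decided to go to the park",
    ]

    # Count how often each word follows each preceding word (a bigram model).
    following = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            following[prev][nxt] += 1

    def predict_next(prev_word):
        """Suggest the most frequent next word seen in the corpus, if any."""
        counts = following.get(prev_word)
        return counts.most_common(1)[0][0] if counts else None

    # The "suggestion" is whichever word most often followed "to" in the
    # counts above; nothing here understands hunger or restaurants.
    print(predict_next("to"))  # -> "go" in this toy corpus

The suggestion falls out of raw frequency counts, which is the heart of Marcus's point: a far larger model making far better predictions is still doing statistics, not thinking.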
"What is happening is that there is such a race to use more data, more compute, to say you have created this general thing that is all-knowing, answers all your questions or whatever, and that is the drum you have been beating," Gebru said. "So how are you surprised when this person takes it to the extreme?"
In its statement, Google noted that LaMDA has undergone 11 "distinct AI Principles reviews," as well as "rigorous research and testing" regarding quality, safety, and the ability to make fact-based statements. "Of course, some in the broader AI community are considering the long-term possibility of conscious or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not conscious," the company said.
"Hundreds of researchers and engineers have conversed with LaMDA, and we are not aware of anyone else making such wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake did," Google said.