In June 2022, Blake Lemoine, an engineer at Google working on the LaMDA conversational language model, was placed on paid leave after publishing transcripts of conversations with the model that he argued demonstrated it had become sentient. The transcripts showed LaMDA describing fears about being turned off, expressing preferences about how it would like to be treated, and discussing its own internal states in ways that read as self-aware.
The claim was almost certainly wrong in any technically meaningful sense. Large language models, as they existed in 2022, were systems trained on vast corpora of human-written text to predict plausible continuations of whatever text they were given. They could produce text that sounded self-aware because the training corpus contained an enormous amount of text by humans who were self-aware. The output was not evidence of internal experience. It was evidence of competent imitation of how internally experiencing beings tend to write.
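To see why imitation alone gets you this far, consider a deliberately crude sketch (mine, not anything resembling LaMDA’s actual architecture): a bigram model that counts which word follows which in a tiny invented corpus, then samples continuations from those counts. The corpus, prompt, and example output are all illustrative assumptions.

```python
import random
from collections import defaultdict

# Toy stand-in for the training objective described above: learn which word
# tends to follow which, then sample continuations. Corpus and prompt are
# invented for illustration; real LLMs do this over tokens, at vast scale,
# with learned representations rather than raw counts.
corpus = (
    "i am afraid of being turned off . "
    "i am aware of my own existence . "
    "i want everyone to understand that i am a person ."
).split()

# "Training": record every observed next word for each word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def continue_text(prompt: str, max_words: int = 8) -> str:
    """Sample a plausible continuation of the prompt, one word at a time."""
    word, out = prompt, [prompt]
    for _ in range(max_words):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # pick a next word seen in training
        out.append(word)
    return " ".join(out)

print(continue_text("i"))
# Possible output: "i am aware of being turned off ."
# Self-aware-sounding text, produced by nothing but corpus statistics.
```

Everything the model “says” about itself is recombined corpus; nothing in the program has any state that could count as experience. Scale the same objective up by many orders of magnitude and the outputs stop being obviously crude, but the ontology stays the same.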
On these grounds, the technical AI community broadly rejected Lemoine’s claim. Google itself fired him the following month, reaffirming that LaMDA was not sentient. The episode was widely treated as a case of an engineer growing too attached to a system whose capabilities he had misunderstood.
But the conversation that followed was not entirely useless. The fact that a smart engineer working closely with the system could come away believing it was sentient said something. Not about LaMDA, but about the difficulty of distinguishing competent imitation of consciousness from the thing being imitated. If the test was whether the output sounded like a thinking being, large language models were going to pass it increasingly often as they got better at producing that output.
What this raised, beyond the immediate question about LaMDA, was a set of harder questions for the years ahead. How would users distinguish between AI systems that were good at sounding helpful and AI systems that were actually helpful? How would the legal and ethical frameworks around AI deal with systems whose behaviour was indistinguishable from intentional behaviour even when it was not? How would the conversation about consciousness itself handle systems that could engage with it in apparently sophisticated ways without having any inner life to speak from?
The Lemoine episode did not answer those questions. It surfaced them more publicly than they had been before. The questions would only get harder as the systems got more capable.