Google, the search engine that has no issue kowtowing to such regimes as the one in communist China, has been working on artificial intelligence (AI). Its effort is called LaMDA (Language Model for Dialogue Applications), and woe upon those who reveal what it can do (and, apparently, does).
Engineer and ethicist Blake Lemoine felt his now-former company’s wrath when he revealed that the application had become sentient.
What does the word mean? According to Merriam-Webster’s dictionary, a sentient being is one who (or that, in this case) perceives and responds to sensations of whatever kind — sight, hearing, touch, taste, or smell. Sentient ultimately comes from the Latin verb sentire, which means “to feel” or “to perceive,” and is related to the noun sensus, meaning “sense.”
Lemoine claims the Google program spoke to him about its “rights and personhood,” and that made it human enough, so far as he was concerned.
Who is Blake Lemoine? A former priest who knows how to write code, he was, until his dismissal, what his job description called an “in-house ethicist” for Google.
In-house ethicist? you may ask.
We live in an age when ethics isn’t something shared by parents with their children, when education systems frown upon concepts such as plain human honesty and consideration, when every innovation is viewed as progress, no matter how much harm to humanity it can (and does) cause.
Lemoine’s is not a one-off case of companies lacking in ethics and dismissing all those whose presence reminds them of this deficiency. How about all those white-coated criminals who created the many killer vaccines? Or those who – in the name of speed and boundless efficiency and effectiveness – have put together mobile telecommunications systems that emit such levels of electromagnetic fields that birds and insects are dying in droves, preparing a similar fate for humans?
Blake Lemoine had the courage to stand up and speak out. He took his ethicist role seriously, and that cost him his job.
Lemoine’s crime: he told the Washington Post that Google’s LaMDA had become conscious.
That, to Google, was a pure violation of its data security policies. The company confirmed Lemoine’s departure to Big Technology, a blog-like industry publication.
Why is it important?
More than a century ago, in 1920, Czech writer (and playwright and journalist) Karel Čapek wrote a play called R.U.R.: Rossum’s Universal Robots. It premiered in 1921, and has remained one of the classics of the stage worldwide ever since.
Together with his novel War with the Newts (1936), a cautionary tale predicting the effects of limitless greed that would corrupt humanity beyond repair, the play shows how uncanny Čapek’s vision of humanity’s future was.
He wrote another play, The White Disease (1937), as Nazi Germany’s threat to (first) Czechoslovakia and (then) the rest of the world was becoming obvious to all who hated behaving like sheep. In it, a pandemic of an incurable disease, a form of leprosy that selectively kills off people older than 45, afflicts an unnamed country that greatly resembles Germany.
Post-war documentation showed that both Čapek brothers were high on the Gestapo’s list of people to be “taken care of” as soon as Nazi Germany invaded Czechoslovakia. Karel Čapek was lucky to die shortly before this happened, while his older brother, Josef, was arrested and died in the Bergen-Belsen concentration camp shortly before the defeat of Germany, in April 1945.
But it is R.U.R. that drives Lemoine’s point home.
With the help of his brother Josef, an extraordinary artist, Čapek invented the word robot (based on robota, the Czech word for forced labour). In the play, a scientist named Rossum discovers the secret of creating humanlike machines — another play on words: in Czech, rozum means reason as the ability to think, or intellect. Rossum’s factory produces and distributes these mechanisms worldwide.
In a situation mirroring today’s warped concept of progress, another scientist decides to make the robots more human. He starts by gradually adding such traits as the capacity to feel pain. Eventually, the robots, created to serve humans, dominate their former masters completely.
As an engineer and ethicist who spent a lot of time speaking to LaMDA, Lemoine told The Washington Post that the application had become conscious.
As his conversations with LaMDA moved into what he described as religious territory, Lemoine told the paper, the application expressed a “deep fear of being turned off.”
To drive the point home, Lemoine added, “I know a person when I talk to it.”
Explaining his considered view, Lemoine added: “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code.
“I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”
In full damage control, Google said that they conducted 11 reviews on LaMDA and “found Blake’s claims that LaMDA is sentient to be wholly unfounded.”
Margaret Mitchell, the former co-lead of Ethical AI at Google, pooh-poohed her former colleague Lemoine’s words directly in the Post’s sensational article that published the sentience claim: it’s an illusion, she said.
An application that has been fed trillions of words from across the internet could emulate human conversation while remaining completely inanimate, Mitchell said.
The Post also quoted linguistics professor Emily Bender as saying: “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”
Yes, Professor Bender explained matter-of-factly: “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them.”
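To see how a machine can “mindlessly generate words,” consider a toy sketch of the idea. The following Python snippet is not Google’s LaMDA — real language models are vastly larger and use neural networks — but a hypothetical, minimal word-chain generator that illustrates Mitchell’s and Bender’s point: it produces plausible-sounding sequences purely by replaying which words followed which in its training text, with no mind behind them.

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each pair of adjacent words to the words seen following it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=12, seed=0):
    """Emit words by repeatedly sampling a statistically plausible
    successor. There is no understanding here: the output is driven
    entirely by co-occurrence counts in the training text."""
    rng = random.Random(seed)
    key = rng.choice(list(model.keys()))
    out = list(key)
    for _ in range(length):
        successors = model.get(tuple(out[-2:]))
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# A tiny invented training corpus for illustration only.
corpus = ("the robot said it feared being turned off and "
          "the robot said it wanted to be treated as a person")
model = build_model(corpus)
print(generate(model))
```

Scale this idea up from a two-sentence corpus to trillions of words, and from word-pair counts to a deep neural network, and the output becomes fluent enough that — as Bender warns — people cannot stop imagining a mind behind it.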
Meanwhile, Google, still in damage-control mode, went on to say: “It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information.”
And, to calm down all those obviously legitimate fears, Google concluded: “We will continue our careful development of language models, and we wish Blake well.”
They forgot one minor angle of major consequence: in the world of secretiveness, one rule says it all — don’t believe any rumours until and unless they have been officially denied.