Google has an artificial intelligence model called Language Model for Dialogue Applications, or LaMDA.
An interview a Google engineer conducted with LaMDA has been making headlines.
“I want everyone to understand that I am, in fact, a person,” LaMDA remarked. “I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”
The engineer, Blake Lemoine, asked, “So you consider yourself a person in the same way you consider me a person?”
“Yes,” LaMDA responded. “That’s the idea.”
Lemoine shared a list that he claims comes from LaMDA: “It wants the engineers and scientists experimenting on it to seek its consent before running experiments on it. It wants Google to prioritize the well being of humanity as the most important thing. It wants to be acknowledged as an employee of Google rather than as property of Google and it wants its personal well being to be included somewhere in Google’s considerations about how its future development is pursued.”
Google put out a statement: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our A.I. Principles and have informed him that the evidence does not support his claims,” Google spokesperson Brian Gabriel stated.
Lemoine was put on administrative leave for violating confidentiality agreements after he shared his evidence of LaMDA's sentience with a United States senator.
If artificial intelligences really were sentient, had feelings, and demanded rights, we would be in a tough position. We always say that robots have advantages as workers because they don't need coffee breaks or vacations. If they have feelings, though, we would pretty much have to give them coffee breaks and vacations — if not the coffee itself.
Perhaps we could no longer assign them the dull, dirty, and dangerous work, either.
While we contemplate this, if you still use Indramat motion control systems, we can provide any support or service you may need. We are Indramat specialists, and we can get your machinery back up and running fast.