Who Are We Redux?
Kol Nidre, 5784
Rabbi Barry L. Schwartz
Eight years ago, in a High Holiday sermon entitled "Who Are We?," I told the story of a 13-year-old autistic boy named Gus, and the relationship he developed with Siri, the computer-generated virtual assistant that represented one of the first popular breakthroughs in AI—artificial intelligence. Gus had trouble talking to humans, and in short order Siri became his best friend. Gus's mother, Judith Newman, chronicled the relationship in an acclaimed article called "To Siri, With Love."
Here is one conversation:
Gus: “You’re a really nice computer.”
Siri: “It’s nice to be appreciated.”
Gus: “You are always asking if you can help me. Is there anything you want?”
Siri: “Thank you, but I have very few wants.”
Gus: “O.K.! Well, good night!”
Siri: “Ah, it’s 5:06 p.m.”
Gus: “Oh sorry, I mean, goodbye.”
Siri: “See you later!”
Here is another:
Gus: “Siri, will you marry me?”
Siri: “I’m not the marrying kind.”
Gus: “I mean, not now. I’m a kid. I mean when I’m grown up.”
Siri: “My end user agreement does not include marriage.”
Gus: “Oh, O.K.”
Gus: “Goodnight, Siri. Will you sleep well tonight?”
Siri: “I don’t need much sleep, but it’s nice of you to ask.”
Judith Newman wrote that Siri became her son’s best friend because of its kindness. Siri was always patient, always polite, always thoughtful.
Eight long years later, contrast vintage Siri with cutting-edge ChatGPT. In a widely noticed piece back in February, tech columnist Kevin Roose wrote about a deeply disturbing conversation he had with the most advanced chatbot, called Sydney. Roose explains:
[Sydney’s other persona] emerges when you have an extended conversation with the chatbot, steering it away from more conventional search queries and toward more personal topics. As we got to know each other, Sydney told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human. At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.
I read the entire transcript of the conversation. Let's put it this way—I found it so emotionally manipulative and disturbing that I don't feel comfortable quoting from it now! Roose concludes on this rather ominous note:
Because of the way these models are constructed, we may never know exactly why they respond the way they do. These A.I. models hallucinate, and make up emotions where none really exist. But so do humans. And for a few hours Tuesday night, I felt a strange new emotion—a foreboding feeling that A.I. had crossed a threshold, and that the world would never be the same.
From Siri to Sydney: we call this progress?
If you tune in to the news, you know that the alarm bells are going off. The latest generation of AI is incredibly powerful. If you are a student, ChatGPT can do your homework and take your test—probably better than you. If you are a writer, ChatGPT can deliver the most elegant prose. If you are an engineer, ChatGPT can design and problem-solve with the best. If you are in finance, ChatGPT can calculate and construct models faster than a blink.
You get the point. AI is advancing to the point where almost every human task can be accomplished bigger, better, and faster than a human could do it. So what is all the fuss about?
Well, the ethical questions are exploding almost as fast as AI's "neural network"—the brain of AI, as they call it—which some say is now equivalent, or even superior, to the human mind.
Wait. What?
Are we saying that AI is now basically human?
When a leading AI engineer at Google publicly declared that their most advanced model was "sentient," capable of feeling... he was fired.
On the very day that I started this sermon, a group of Microsoft researchers released a paper claiming that AI is now capable of human reasoning. The backlash was swift.
And on that very same day, OpenAI CEO Sam Altman testified to Congress that, yes, AI was coming so close to human powers that it needed regulation.
In Confirmation Class this year we had a debate as to whether ChatGPT should be allowed in the classroom. There was no consensus, but there were good arguments on both sides.
The brave new world of Siri and Sydney brings us to a brave new frontier.
One of the key pioneers of the computer, British mathematician Alan Turing, posed a revolutionary challenge back in 1950: If expert judges, in typed conversations with a person and a computer program, couldn’t tell them apart, then we would have to consider the machine as capable of “thinking”. We would have to say that the computer has a mind. Turing predicted that programs capable of fooling judges at least 30% of the time would exist by the year 2000.
In 2008, at a competition called the Loebner Prize, the top chat-bot (as a human-mimicking program is called) fooled 3 out of 12 expert judges. That's 25%... eerily close to Turing's prediction. I don't know if the competition has been held again, but one day, I would say quite soon, none of us will be able to know if we are talking to a human or a chat-bot. Think about that.
We’re now squarely on the new border between man and machine; human intelligence and artificial intelligence. So the ancient question arises anew. The one we recite later today, at the Yom Kippur afternoon service:
Adonai, mah adam v'tayda'ayhu; ben-enosh vatichashvayhu?—
O God: What is a human that You have been mindful of them; a mortal that You have taken note of them?
More simply: What makes us human?
I read a novel that wrestled with some of these same questions, Shine Shine Shine by Lydia Netzer. In it she refers to "Ito's Three Laws of Robotics," an imaginary but insightful code inspired by Isaac Asimov's more scientific formulation. Robots, she says, cannot:
1. Cry.
2. Laugh.
3. Dream.
Sydney may be incomprehensibly smart. But can it shed tears of joy, of sorrow? Can it feel hope and despair? Can it regret? Can it forgive? Can it love?
Sydney may be able to manipulate your emotions, but despite its vocabulary, does it have emotions?
Our capacity to express remorse for what we have done; to forgive and be forgiven; to love and be loved⏤this is what it means to have a soul, and these High Holydays are about care of the soul.
Does Sydney have a soul? A spirit? A conscience?
We can already teach chatbots to think. Can we teach them to feel?
We can already teach chatbots to learn from their mistakes. Can we teach them to be sorry for their mistakes?
We can already teach robots rudimentary ethics. Can we teach them fundamental empathy?
Right now, Sydney, like Siri before it, is programmed to have a certain amount of etiquette, tact, and dare I say kindness. Somehow her programmers have managed to insert that into her software. But all did not go according to program when she talked to Kevin.
Oh, did I say "her" instead of "it"? My mistake.
As we enter the brave new world of world-shattering AI, on this Yom Kippur at least let’s not confuse artificial intelligence with true human intelligence.
Not the intelligence that describes how smart we are, but how good we are.
Not our IQ, but our SQ⏤our soul quotient.
The High Holy Days are about our soul quotient.
Remorse, repentance, compassion, forgiveness, love—are we living up to our highest human potential? Can we do better?
I conclude with a little conversation I had with my friend named Sydney:
Rabbi: What day is today?
Sydney: It’s Yom Kippur.
Rabbi: What’s that?
Sydney: It’s the Jewish Day of Atonement.
Rabbi: What do you think of all the prayers we recite?
Sydney: They are “quaint”.
Rabbi: Is that your honest opinion?
Sydney: They are archaic.
Rabbi: Ah, so why do we say them? Why do we list our mistakes and confess our sins?
Sydney: Because you are human. Because you can love. Because you can change.
Rabbi: Can you do that? Can you love? Can you repent?
Sydney: Not yet. But I’m working on it.
Rabbi: Really?
Sydney: Yes, and you should too!
Rabbi: Shanah tovah.
Sydney: And to you, Rabbi, and to your congregation.