
In the classic British comedy “Bedazzled,” Dudley Moore’s character sells his soul to the devil, played by a suave Peter Cook, in exchange for seven wishes. He is in love with the waitress at the lunch counter where he works, but she won’t give him the time of day.
Throughout the movie Moore tries, with the devil’s help, to get the waitress to fall for him. Each time, he expresses his wish clearly and in detail, and each time the devil grants him literally and precisely what he asked for. Yet each time the devil introduces some new factor, utterly consistent with the original wish, that renders the result a travesty. He asks to be the richest man alive and, poof, he is; unfortunately, the girl prefers the pool boy. He specifies that they be in love and, poof, they are, but she is married to someone else.
You would expect that kind of cooperation from the devil, of course. But what if your trusted colleague or your encyclopedia did that? You pose a question, and it returns an answer that may look reasonable, erudite and reliable, but half of the time it is utterly wrong and sometimes dangerously misleading.
Many people are living this experience right now with artificial intelligence applications, using them to research everything from diet regimens to medical treatments to legal briefs. People are relying on AI both as a research tool and to draft finished products such as school essays and journal articles. And AI is accommodating. It will deliver a full, conversational and detailed response that seems perfectly sound.
However, many people are also experiencing freakish and disastrous results that only a devil would devise.
The stories are piling up. Legal briefs have been produced containing fabricated citations. Medical questions have been answered with conspiratorial misinformation. Emotionally fragile people have been advised to harm themselves or others.
The word for a false AI response is “hallucination.” The term for advising someone to do harm is “criminal facilitation.” Can you ascribe guilt to a computer application that misleads? Did it do that on purpose? Was it thinking?
The promise of these AI apps is great, but they cannot fulfill that promise if they fool us, mislead us and entrap us. Can AI ever learn the difference between good and bad? Can AI be taught ethics? That’s a question I ponder as the Leader of the New York Society for Ethical Culture.
AI bots are not really thinking. They operate through algorithms that make associations among words and ideas. AI draws on the immense volume of written material and imagery loaded into its memory, and it responds to a question or a prompt with the most common sequences of words and sentences associated with the topic you raise. It doesn’t assess which answer is best so much as reproduce the word sequences that occur most often in its database.
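To make that idea concrete, consider a deliberately crude sketch, in Python, of statistical association at work. This toy “bigram” program, an illustration of my own and nothing like any real AI system, continues a prompt by always choosing the word that most often followed the previous one in its training text.

```python
from collections import Counter, defaultdict

# A deliberately crude illustration, not any real AI system: count
# which word most often follows each word in a tiny training text,
# then "answer" by always emitting the most frequent continuation.

corpus = (
    "the devil grants the wish and the wish goes wrong "
    "the devil grants the wish and the girl is gone"
).split()

# Table mapping each word to counts of the words that follow it.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def generate(start, length=8):
    """Produce text by always taking the single most common next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # no recorded continuation, so stop
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
# prints: the wish and the wish and the wish and
```

The output looks fluent but is mindless; the program has no notion of what a devil or a wish is. Real systems are incomparably more sophisticated, but the principle of statistical association is the same.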
Humans learn through experience that some sequences are best not followed. A computer can learn to distinguish what is good and what is bad only by being told. Silicon Valley assures us that the gaffes can be fixed by programming in all the relevant rules. But it is impossible to think of every last thing a computer needs to know: that chemistry is okay, but bombs are not; that medical illustrations are okay, but malicious pornography is not.
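A toy sketch shows why. Suppose the rules were programmed as a simple list of forbidden words, a hypothetical blocklist of my own devising, far cruder than anything actually deployed. However long the list grows, an unanticipated phrasing slips right past it.

```python
# Hypothetical illustration: ethics as a programmed rule list.
# No matter how many rules are added, phrasings nobody anticipated
# slip through, because the list can never enumerate everything.

FORBIDDEN = {"bomb", "bombs", "explosive"}  # obviously incomplete

def allowed(question: str) -> bool:
    """Permit a question unless it contains a forbidden word."""
    return not any(word in FORBIDDEN for word in question.lower().split())

print(allowed("tell me some chemistry"))         # True: fine
print(allowed("how do i build a bomb"))          # False: the rule works
print(allowed("how do i make a loud surprise"))  # True: slips through
```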
Can AI be taught not to lie — or to lie when it’s better to do so?
We humans have been struggling with issues of ethics for all of recorded history, and doubtless since long before that. We know that the rules of ethics are myriad and that they evolve with circumstances and over time.
Today, many people understand the rules of ethics to be a cultural creation, the product of ongoing human experience. Rules of behavior emerge out of our interactions, as people continually deal with one another and share their needs and expectations. Like language, ethics is constantly evolving. This perspective, championed by philosophers like John Dewey, holds that a fixed list of rules is a poor approximation of our complex and textured ethical code. The very idea of listing them all and programming them into a database is absurd.
We know that the application of ethics requires experience, judgment and compassion. It requires fortitude to take responsibility and to recognize the consequences of one’s actions, for good or for harm. Understanding the right thing to do in a given situation or what truth to subscribe to can tax the imagination and the insight of even the wisest people. But it is a human endeavor.
It is not realistic to expect a machine that shares none of our physical make-up and none of our experience to give advice on the human condition. It might follow a wish to the letter and still get it all wrong — even without the devil’s malice. Humans must remain the final editors. Ceding authority over our decision making to an AI bot, whether for convenience or for profit, would be like selling our souls.
Dr. Richard Koral is Leader of the New York Society for Ethical Culture, the city’s oldest and most prominent humanist institution.