
(NEXSTAR) – An open letter calling for a ban on the development of “superintelligence” by AI companies has garnered support from former royals, Hollywood actors, conservative political commentators and a former U.S. national security adviser.
The diverse group of signatories is calling on companies to halt development of the advanced form of artificial intelligence until it can be pursued safely and with controls in place.
The letter warns the type of AI that companies say they’re building will “significantly outperform all humans on essentially all cognitive tasks.”
“This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction,” the statement continues.
What is AI ‘superintelligence’?
In the discourse on AI technology, “superintelligence” is also sometimes called artificial general intelligence, or AGI.
It’s not a technical term with a universally accepted definition, but rather “a serious, though ill-defined, concept,” AI scientist Geoffrey Hinton told the Associated Press last year.
“I use it to mean AI that is at least as good as humans at nearly all of the cognitive things that humans do,” he said.
“Superintelligence” research isn’t about building a specific AI tool. It’s more about building a “thinking machine,” said Pei Wang, a professor who teaches an AGI course at Temple University. The AI would be able to reason, plan and learn from experiences like people do.
OpenAI, Amazon, Google, Meta and Microsoft are all heavily invested in researching it, according to the AP. Some AI experts warn companies are in an arms race of sorts to develop a technology they can’t guarantee they’ll be able to fully control.
In an interview with Ezra Klein of The New York Times, AI researcher Eliezer Yudkowsky described a scenario where “now the AI is doing a complete redesign of itself. We have no idea what’s going on in there. We don’t even understand the thing that’s growing the AI.”
But rather than turn such a system off, a company may press on, too invested in securing the superior technology before its competitors do.
“And of course, if you build superintelligence, you don’t have the superintelligence — the superintelligence has you,” Yudkowsky said.
While some fear AI will grow beyond human control, critics also note that developers sometimes inflate the capabilities of their products. OpenAI was recently met with ridicule from mathematicians and AI scientists when one of its researchers claimed ChatGPT had figured out unsolved math problems; in reality, it had found and summarized work that was already online.
Who has signed the letter?
Prince Harry and his wife Meghan, the Duchess of Sussex, made headlines Wednesday for joining others in signing the cautionary letter. Actors Stephen Fry and Joseph Gordon-Levitt have joined, as has musician will.i.am.
Two prominent conservative commentators, Steve Bannon and Glenn Beck, have also signed on. Also on the list are Apple co-founder Steve Wozniak; British billionaire Richard Branson; the former Chairman of the U.S. Joint Chiefs of Staff Mike Mullen, who served under Republican and Democratic administrations; and Democratic foreign policy expert Susan Rice, who was national security adviser to President Barack Obama.
They join AI pioneers, including Yoshua Bengio and Geoffrey Hinton, co-winners of the Turing Award, computer science’s top prize. Hinton also won a Nobel Prize in physics last year. Both have been vocal in bringing attention to the dangers of a technology they helped create.
“This is not a ban or even a moratorium in the usual sense,” wrote another signatory, Stuart Russell, an AI pioneer and computer science professor at the University of California, Berkeley. “It’s simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?”
The Associated Press contributed to this report.