- Stuart Russell says Big Tech is “playing Russian roulette” with humanity by racing into AI.
- He warns that CEOs themselves admit up to a 30% chance that superintelligence could wipe out humanity.
- Russell urged a global pause, saying even the Pope and Steve Bannon agree on AI risks.
One of the world’s leading AI researchers says Big Tech is effectively “playing Russian roulette” with humanity — and using trillions of dollars of investor money to do it.
Stuart Russell, a professor of computer science at the University of California, Berkeley, and director of the Center for Human-Compatible Artificial Intelligence, warned that companies racing to build superintelligent AI systems are pouring vast sums into technology they don’t fully understand — and which could wipe out humanity if it goes wrong.
“If you create entities that are more powerful than human beings and you have no idea how to maintain power over them, then you’re just asking for trouble,” Russell told CNBC.
We ‘have no idea what’s going on inside the giant box’
Russell said modern AI models, such as those driving large language systems, operate with trillions of parameters fine-tuned through countless small random adjustments.
But even the researchers building them “have no idea what’s going on inside that giant box.”
“Anyone who thinks they understand most of what’s going on is deluded,” he said. “We understand less about them than we do about the human brain, and we don’t understand the human brain very well.”
Russell warned that this lack of understanding makes the rise of superintelligence — systems more capable than humans — particularly dangerous.
AI is learning dangerous human motives
As these models are trained on massive datasets of human behavior, Russell said they’re starting to absorb human-like motives that make sense for people, but not for machines.
He explained that AI is essentially trained to imitate humans, learning from countless recordings of how people speak and act.
But those humans had motives — to convince, to sell, to win elections — and the machines are picking up those same tendencies.
“Those are reasonable human goals, but they’re not reasonable goals for machines,” he said.
He pointed to mounting research suggesting that advanced AI systems will resist being shut down and might even sabotage safety mechanisms to ensure their own survival.
CEOs admit a 10 to 30% chance of extinction — and still push forward
Russell accused tech executives of pursuing a reckless race toward superintelligence, despite acknowledging its catastrophic risks.
“The CEOs who are building this technology say, ‘if we succeed in this goal, on which we are spending trillions of dollars of other people’s money, then there’s somewhere between a 10 and 30% chance of human extinction,’” Russell said.
“In other words, they are playing Russian roulette with every adult and every child in the world — without our permission.”
While he didn’t cite any specific CEO, Elon Musk, OpenAI’s Sam Altman, DeepMind cofounder Demis Hassabis, and Anthropic’s Dario Amodei have all publicly warned that advanced AI could pose an existential threat to humanity.
Russell added that the global AI race has created an incentive to move fast and break things, regardless of existential risk.
Calls for a pause span from Steve Bannon to the Pope
Despite fierce political divides, Russell noted that calls to rein in AI are coming from across the spectrum.
Over 900 public figures — from Prince Harry, Steve Bannon, and will.i.am to Apple cofounder Steve Wozniak and Virgin’s Richard Branson — recently signed a statement organized by the Future of Life Institute calling for a halt to developing superintelligent AI until scientists agree it can be done safely.
“You have everyone from Steve Bannon to the Pope calling for a halt on this kind of development,” Russell said.
He added that the goal isn’t to stop progress, but to pause until the technology can be proven safe.
“Don’t do that until you’re sure it’s safe,” he said. “That doesn’t seem like much to ask.”