Artificial intelligence—surely the most hyped technological development to seize the spotlight in a generation—does not appear to be very popular with the American public. A clear majority recognize that AI is a big deal, but recent Pew Research Center polling found more concern than excitement, particularly about its impact on creativity and relationships. Quinnipiac surveys find opinions souring even as usage rises.
It’s associated with job losses, cheating, dubious advice, excessive energy consumption, and a variety of doomsday scenarios up to and including the eradication of humanity. In March, 57% of respondents to an NBC poll said the risks associated with the technology simply aren’t worth the potential benefits.
There are plenty of reasons for this, but one is surely the messaging coming from some of the biggest AI brands themselves, particularly from their leaders. Last month, for example, AI giant Anthropic announced it would limit access to its new Mythos cybersecurity tool because it was simply too powerful for wider release, which might put it in the hands of criminals or other bad actors. Sam Altman, CEO of rival OpenAI, snarked that this was “fear-based marketing.” But not long after, OpenAI released its own new security tool—and restricted access to it.
That’s just a recent example of an odd element of the entire category: AI firms seem intent on reminding customers at every product drop how the technology might ruin our lives. Sure, it’s part of the hype cycle. And to some extent the big AI brands are performing a responsibility flex.
But maybe the public’s increasingly sour response to AI suggests that these CEOs’ insistence on telling us how dangerous their product might be is not a winning brand strategy. (Altman’s home literally being attacked with a Molotov cocktail is probably not a great sign.)
This noisy pessimism isn’t isolated, or new. When it rolled out GPT-4 back in March 2023, OpenAI published a technical report that, alongside descriptions of a historic leap in capability, included a section dedicated to its potential for misuse to make bombs or mix dangerous chemicals. Soon after, hundreds of AI researchers and executives, including figures from Anthropic, Google DeepMind, and OpenAI itself, signed an open letter warning that AI posed extinction-level risks comparable to nuclear war.
Many AI executives have claimed to want government oversight. As Elon Musk’s current legal battle with OpenAI is reminding us, the company was actually founded as a nonprofit precisely because the technology was perceived as too risky to be shaped solely by the move-fast-and-break-things profit motive.
Obviously, these companies should not conceal their products’ potential dangers and risks, but at some point you have to wonder whether the companies’ marketing pros are letting fear and doom define their brands.
While there was a slew of AI-related advertising in this year’s Super Bowl, much of it was so big-picture about AI’s potential (“You can just build things”) that it didn’t really stick. Meanwhile, on a more day-to-day level, the specific consumer benefits we hear about don’t seem transformative—summarized meeting transcripts, improved chatbots, tools that make it easier to generate an image of yourself as a superhero, and so on.
Around the time OpenAI and Anthropic were warning us of the dangers of their cybersecurity tools, I got a promotional email from ChatGPT suggesting uses that included having the chat tool “draft a kind text asking to reschedule” a meeting. A little short of insanely great.
Surely there is a bigger, better, yet still relatable story to tell. Usually Silicon Valley is good at balancing a hyped-up version of its own existential importance against offering an authentically appealing vision of the future. But with AI that balance seems off.
Imagine if, at the launch of the iPhone, Steve Jobs had dwelled on the possibility that it might one day help destroy attention spans or undermine democracy. The brands that have historically transformed public behavior—Apple, Google, even Netflix—led with wonder, not worry. They sold a vision with a positive emotional outcome, not better chatbots in exchange for never-ending ambient dread.
AI companies presumably have the raw material for exactly that story. AI tools are assisting in early-stage cancer detection. Researchers at Google DeepMind won a 2024 Nobel Prize in Chemistry for work using AI to predict protein structures. Startups are using AI to accelerate drug discovery timelines. These are good stories.
Again, none of this means AI’s risks and downsides should be minimized or go unmentioned. A complicated and transformative industry can hold more than one truth. But right now the ratio feels out of whack, and the companies best positioned to fix it seem to be trying to out-warn each other.
Pew’s polling found that 56% of “AI experts” believe the technology will have an overall positive impact in the long run, compared with 35% among the public in general. The big AI brands would be wise to focus less on fear, and more on helping the rest of us see what the “experts” do.