Like many ambitious tech companies before it, OpenAI introduced itself to the culture at large with big claims about how its technology would improve the world—from boosting productivity to enabling scientific discovery. Even the caveats and warnings were de facto advertisements for the existential potential of artificial intelligence: We had to be careful with this stuff, or it might literally wipe out humanity.
Fast-forward to the present day, and OpenAI is still driving culture-wide conversations, but its attention-grabbing offerings aren’t quite so lofty. Its Sora 2 video platform—which makes it easy to generate and share AI-derived fictions—was greeted as a TikTok for deepfakes. That is, a mash-up of two of the most heavily criticized developments in recent memory: addictive algorithms and misinformation.
As that launch was settling in (and being tweaked to address intellectual property complaints), OpenAI promised a forthcoming change to its flagship ChatGPT product, enabling “erotica for verified adults.” These products are not exactly curing cancer, as CEO Sam Altman has suggested artificial intelligence may someday do. To the contrary, the moves have struck many as weirdly off-key: Why is a company that took its mission (and itself) so seriously doing . . . this?
An obvious risk here is that OpenAI is watering down a previously high-minded brand. There are multiple major players in AI at this point, including Anthropic, maker of ChatGPT rival Claude, as well as Meta, Microsoft, and Elon Musk's xAI, maker of Grok. As they seek to attract an audience, they will have to differentiate themselves through how their technologies are deployed and what they make possible, or easy: in short, what each technology stands for. This is why slop, memes, and sex seem like such a comedown from OpenAI's carefully cultivated reputation as an ambitious but responsible pioneer.
To underscore the point, rival Anthropic recently enjoyed a surprising amount of positive attention—an estimated 5,000 visitors and 10 million social media impressions—for a pop-up event in New York's West Village, dubbed a "no slop zone," that emphasized analog creativity tools. This is part of a "Keep Thinking" branding campaign aimed at burnishing the reputation of its Claude chatbot. The company has positioned itself as taking a cautious approach to developing and deploying the technology, a stance that has drawn some criticism from the Trump administration but has also made Anthropic stand out in what can be a move-fast-and-break-things competitive field.
AI is a field that's spending—and losing—vast sums, and lately casting about for near-term revenue streams while working toward that promised lofty future. According to The Information, OpenAI lost $7.8 billion on revenue of $4.5 billion in the first half of 2025, and expects to spend $115 billion by 2029. ChatGPT has 800 million monthly users, but paid accounts number closer to 20 million, and these recent moves suggest that OpenAI needs to build and monetize engagement. As Digiday recently noted, OpenAI increasingly seems to be at least considering ad-driven models (once dubbed a "last resort" by Altman).
Writer and podcaster Cal Newport has made the case that developments like viral-video tools and erotica chat are emblematic of a deeper shift away from grandiose economic impacts and toward “betting [the] company on its ability to sell ads against AI slop and computer-generated pornography.” It’s almost like a sped-up version of Cory Doctorow’s infamous enshittification process, pivoting from a quality user experience to an increasingly degraded one designed for near-term profit.
This is not entirely fair to OpenAI, whose every move is scrutinized partly because it's the best-known brand in a singularly hyped category. All its competitors will also have to deliver real value in exchange for their massive costs to investors and society at large. But precisely because it's a leading brand, it's particularly susceptible to dilution if it's seen as straying from its idealistic promise and rhetoric. A cutting-edge AI pioneer doesn't want to be perceived as an existential threat—but it also doesn't want to be branded as just another source of crass distraction.