As Hurricane Melissa battered the Caribbean this week, social media became awash with AI-generated content that blurs the line between reality and fiction.
Described by CBS News as “one of the strongest hurricanes ever recorded in the Atlantic,” Melissa reached Category 5 intensity as it made landfall in Jamaica on Tuesday. CNN reports that it has already caused seven deaths in the northern Caribbean, and is the most powerful storm to hit the basin since 2019’s Hurricane Dorian.
Amid a crisis, social media is flooded
Over the last few days, major social media platforms have been saturated with AI-generated videos supposedly related to the hurricane, depicting everything from towering waves battering coastal towns to sharks gliding through floodwaters, destroyed airports, and an aerial view of the storm's eye that drew more than 17,000 views.
Much of this content was made possible by Sora 2—OpenAI’s new text-to-video app—released less than a month ago, which allows users to generate lifelike videos simply by typing a description.
The app, free on iPhones, has proven to be as mesmerizing as it is disturbing—quickly taking over social media feeds in the weeks since its release. But it’s also caused alarm among people who worry about its potential to spread misinformation.
“It’s as if deepfakes got a publicist and a distribution deal,” Daisy Soderberg-Rivkin, a former trust and safety manager at TikTok, told NPR earlier this month. “It’s an amplification of something that has been scary for a while, but now it has a whole new platform.”
As it turns out, it's becoming increasingly hard to trust what you see on screen.
Turning a catastrophe into clickbait
The proliferation of misleading content regarding natural disasters poses a real threat, well beyond the trivial AI-generated slop that typically clogs social feeds.
“This storm is a huge storm that will likely cause catastrophic damage, and fake content undermines the seriousness of the message from the government to be prepared,” Amy McGovern, a University of Oklahoma meteorology professor, told the news agency Agence France-Presse (AFP).
In a report on Monday, AFP said it identified numerous AI-generated clips—many, but not all, marked with OpenAI’s Sora watermark—spreading across social media feeds. The videos ranged from dramatic newscasts and scenes of severe flooding to fabricated human suffering.
Other videos seemed to show locals—often speaking with exaggerated Jamaican accents that reinforced stereotypes—partying, boating, jet skiing, swimming, or otherwise downplaying the severity of what forecasters have warned could be the island’s most violent storm on record.
After AFP flagged the clips, TikTok reportedly removed over two dozen videos and multiple accounts sharing them.
Reached for comment, TikTok told Fast Company that its community guidelines require AI-generated or heavily edited content depicting realistic people or events to be labeled. It said unlabeled content may be removed, restricted, or relabeled. The platform prohibits material that “misleads on matters of public importance or harms individuals,” even if labeled.
In Jamaica, users seeking updates on Hurricane Melissa are encouraged to consult official sources, including the Jamaica Information Service and TikTok’s event guides.
Similar content appeared on Facebook and Instagram, despite Meta’s policies requiring labels for AI-generated videos.
OpenAI and Meta did not respond to requests for comment.
Experts worry that AI-generated content can overshadow critical safety warnings. Jamaica’s information minister, Senator Dana Morris Dixon, urged the public to rely on official sources, according to AFP.
The risks extend far beyond natural disasters
In the Sora era, anyone can generate nearly any scene imaginable with a single prompt, but experts' concerns about generative AI and misinformation long predate the app.
For instance, studies indicate that warning labels alone may not suffice to combat AI-generated falsehoods and can sometimes have unintended effects on users’ perception of credibility.
Aaron Rodericks, head of trust and safety at Bluesky, told NPR that the public is unprepared for this collapse of the line between reality and fabrication.
“In a polarized world, it is easy to create fabricated evidence targeting identity groups or individuals, or to conduct large-scale scams. What once existed as a rumor—like a fabricated story about an immigrant or politician—can now be turned into seemingly credible video proof,” Rodericks said.
And this is only the beginning
OpenAI’s Sora 2 app, where many of these recent clips surfaced, is just the newest player in the expanding world of increasingly powerful video creation tools.
This year alone has brought a wave of AI-driven innovations across platforms.
As of May, users could chat with AI personas on Instagram, and TikTok’s “AI Alive” tool enabled still images to be turned into videos with a single command. By September, Meta introduced its new “Vibes” app, featuring a TikTok-style AI-generated feed.
Together, they signal a new race to shape the future of the internet.