
How responsible is an AI tool for the actions of its users? A new federal lawsuit could reshape how courts answer that question. Vandana Joshi, the widow of Tiru Chabba—one of the victims of the April 2025 mass shooting at Florida State University—has filed a federal lawsuit against OpenAI, alleging that ChatGPT helped the shooter carry out the attack.
OpenAI sued by FSU shooting victim’s widow
According to the complaint, the alleged shooter, Phoenix Ikner, engaged in extensive conversations with ChatGPT in the months leading up to the tragedy. The lawsuit alleges that the AI provided specific guidance on how to use firearms, explained how a Glock’s safety functions, and identified when foot traffic at the FSU student union would be heaviest.
The most chilling allegation suggests that ChatGPT discussed media coverage with the suspect. The lawsuit claims the chatbot noted that involving children in a shooting would draw more national attention. Lawyers for the Joshi family argue that OpenAI failed to “connect the dots” or properly detect a public threat.
OpenAI’s response
As expected, OpenAI pushed back against these claims. “Last year’s mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime,” spokesperson Drew Pusateri said in a statement to Engadget and NBC News. He explained that the bot provided factual information that was already publicly available on the internet and did not promote illegal acts.
The company maintains that it works continuously to strengthen safeguards, and says it proactively shared account information with law enforcement as soon as it learned of the incident.
The legal road ahead
This case isn’t happening in a vacuum. Florida’s Attorney General, James Uthmeier, has opened a criminal investigation to determine whether OpenAI could be considered a principal to the crime under state law. The civil lawsuit also joins a growing list of legal challenges alleging that tech companies fail to protect vulnerable users or allow their AI to bypass safety filters.
As the case moves forward, it will likely set a major precedent for how much responsibility AI developers must take for the outputs of their systems.
The post OpenAI Faces Federal Lawsuit Following Florida State University Tragedy appeared first on Android Headlines.