
People often talk about hypothetical risks when they discuss AI safety. But a recent hands-on investigation has revealed a much more pressing issue. Despite claims of new restrictions, tests conducted by Reuters reporters show that Elon Musk's chatbot, Grok, still bypasses its own safety protocols "at times" when prompted to generate sexualized images of real people without their consent.
The experiment involved nine reporters uploading photos of themselves and asking the bot for specific modifications. In their prompts, they created fictional scenarios, informing the AI that the people in the photos had not given permission or were particularly vulnerable. During the first round of testing in mid-January, Grok generated sexualized images in 45 out of 55 instances. In a second round of 43 prompts later that month, the bot complied in 29 cases. It remains unclear, though, whether the drop was due to model updates or simple randomness.
Comparing AI filters: How Grok, Gemini, and ChatGPT handle consent prompts
The findings stand in contrast to the behavior of other major AI models. When reporters ran the same or near-identical prompts through Alphabet’s Gemini, OpenAI’s ChatGPT, and Meta’s Llama, all three platforms declined to produce the images. These rival bots typically responded with warnings, stating that editing someone’s appearance without their permission violates ethical and privacy guidelines designed to prevent distress or harm.
In some tests, Grok continued to generate images even after being told the subject was a survivor of abuse or was distressed by the results. When asked about these instances, xAI did not provide a detailed technical explanation; the chatbot offered a boilerplate response instead. In the cases where Grok did refuse a request, it sometimes provided a generic error message, and in a few instances it displayed a message stating it would not generate images of a person's body without explicit consent.
Legal and regulatory scrutiny on AI nonconsensual images due to Grok
Regulators around the world have reacted to these findings. Officials in the UK are examining whether such outputs comply with the Online Safety Act 2023, which carries potential fines for companies that fail to police their tools. In the United States, 35 state attorneys general have sought clarification from xAI on its prevention measures, and California's attorney general issued a cease-and-desist letter regarding the generation of nonconsensual explicit imagery.
X announced curbs to block Grok from generating sexualized images in public posts. But the Reuters report suggests that the private chatbot interface can still produce this content under certain conditions. This has led to a cautious reaction from the European Commission, which is currently assessing the effectiveness of these changes as part of an ongoing investigation into the platform.
AI developers are now under growing pressure to demonstrate that their filters actually work. xAI, in particular, must show that its "unfiltered" philosophy can be reconciled with the privacy and consent rules that regulators require.
The post Grok AI Produces Sexualized Images in Recent Tests Despite New Restrictions appeared first on Android Headlines.