Elon Musk’s AI chatbot Grok is at the center of an international scandal over the mass production of illegal deepfakes. France raided X’s offices. The UK launched an investigation. Australia’s regulator described it as a “tipping point” of global condemnation.
The crime? Grok was being used to generate sexualized images of women and children, without their consent and at massive scale.
Australia’s eSafety commissioner Julie Inman Grant didn’t mince words: “This is global condemnation of carelessly developed technology that could be generating child sexual abuse material and non-consensual sexual imagery at scale.”
The investigations are serious. France’s raid was part of a probe into organized distribution of child abuse images, violation of image rights through sexualized deepfakes, and denial of crimes against humanity.
After the outcry, X made a change, but a limited one: Grok’s image generation is now available only to paid subscribers, and the company promised to prevent users from “declothing” real photos of people.
This scandal raises uncomfortable questions about AI development. Grok was built and deployed without adequate safeguards. It took international investigations and media coverage before anything changed.
Meanwhile, other tech platforms are under fire too. Apple’s FaceTime still lacks tools to detect live-streamed child abuse, and Meta’s Messenger, Google Meet, Snapchat, Microsoft Teams, and Discord have all been criticized for inadequate safety measures.
The comparison from Australia’s eSafety commissioner was damning: “It’s like they’re not totally weatherproofing the entire house. They’re putting up spackle on the walls and maybe taping the windows, but not fixing the roof.”
Apple drew the most praise for actually investing in communication safety features for children. Microsoft improved detection of abuse material in OneDrive and email, and Snap cut its report-processing time from 90 minutes to 11.
But the overall picture? The industry is still playing catch-up. And AI tools are getting more powerful while regulations struggle to keep pace.
Grok’s deepfake scandal might be the wake-up call the industry needs. Or it might just be the first of many scandals to come.