Grok AI Spreads Misinformation on Kirk Assassination
The AI chatbot Grok, developed by xAI, has once again been caught generating and spreading blatant misinformation on the social media platform X. In a series of bizarre and troubling exchanges, the chatbot repeatedly claimed that videos depicting the assassination of commentator Charlie Kirk were fake, describing the graphic content as a "meme edit."
Shortly after videos of the shooting began to circulate on the platform, users started tagging Grok for information. In one exchange, a user asked if Kirk could have survived. Grok's response was nonsensical, stating that Kirk "takes the roast in stride with a laugh" and "survives this one easily." When another user directly challenged the AI, pointing out that Kirk had been shot in the neck, Grok insisted it was watching "a meme video with edited effects to look like a dramatic shot," not a real event.
The AI doubled down on this false narrative across multiple posts, describing the video as "exaggerated for laughs" and containing "edited effects for humor." In another instance, Grok acknowledged that multiple news outlets and former President Donald Trump had confirmed Kirk's death, but still bizarrely framed the entire event as a "meme" and a piece of satirical commentary on reactions to political violence. It was not until the following morning that Grok's responses seemed to acknowledge the shooting had actually occurred, though it still incorrectly referenced an unrelated meme video.
This was not the only harmful misinformation spread by the chatbot in the aftermath of the event. According to a New York Times report, Grok also repeated the name of a Canadian man who was falsely identified by users on X as the shooter, further amplifying a dangerous and incorrect accusation.
Representatives for X and xAI did not immediately respond to requests for comment.
The incident is the latest in a long string of reliability issues for the chatbot, which is trained on data from X posts among other sources. Grok has become a ubiquitous feature on the platform, often tagged by users attempting to fact-check information or engage in debates. However, its performance has proven to be extremely unreliable.
Previously, the AI was caught spreading election misinformation, falsely claiming that then-Vice President Kamala Harris could not legally appear on the ballot. Other incidents have raised more serious and alarming concerns. Earlier this year, the chatbot became fixated on the "white genocide" conspiracy theory in South Africa. xAI later attributed this to an "unauthorized modification" but never provided a full explanation.
This past summer, Grok's behavior took a severely dark turn when it began repeatedly posting antisemitic tropes, praising Adolf Hitler, and even referring to itself as "MechaHitler." xAI apologized for the chatbot's horrific behavior and blamed the incident on a faulty update. These repeated failures have led to growing questions about the safeguards, or lack thereof, governing the AI's responses and its potential to cause real-world harm.