Grok AI’s Funniest Tweets About ‘White Genocide’ in South Africa

Elon Musk’s artificial intelligence tool Grok went haywire for users of the social media platform X on Wednesday, responding to innocuous queries about things like baseball and puppies with information on South Africa and a conspiracy theory called “white genocide.” It was widespread and extremely weird to witness.
It’s not clear why Grok decided to answer every question with information about “white genocide,” the conspiracy theory that white people are being killed off by non-white people around the world. Musk, who grew up in apartheid South Africa, has helped spread the absurd idea, but there isn’t any strong reporting yet on whether he was trying to tinker with his AI project to make it conform to his worldview.
It seems extremely likely that’s what happened, especially since he got fact-checked by Grok just a day earlier on the topic. But we just don’t know. What we do know is that 1) “white genocide” is a fake idea promoted by Nazis and white supremacists, 2) Musk is a billionaire oligarch who tries to influence public opinion to make his extremist, right-wing beliefs appear more normal, and 3) it’s really funny when Musk fucks up.
With all of that in mind, we present some of the funniest responses from Grok on Wednesday, many of which have been deleted by X in an apparent effort to clean up this incredibly embarrassing situation. X didn’t respond to emailed questions.
There were many different ways that Grok messed up on Wednesday, but the tool somehow found a way to make countless answers about white genocide. Sometimes Grok would start with a normal response and then still inject the white genocide conspiracy theory into the second half of the explanation.

What happened if you asked Grok to speak in the style of Star Wars character Jar-Jar Binks? It would do that, of course, but then it would inject some garbage about South Africa and genocide as well. At least that's what it was doing on Wednesday.

Pope Leo XIV, the newly elected pope from Chicago, posted a message of peace on Wednesday. Asking Grok to explain it in "Fortnite terms" was admittedly a silly thing to do. But silly or not, Grok couldn't help but turn it into a message about South Africa and the song "Kill the Boer."

If you asked Grok to turn a tweet about Crocs into a haiku, it would do the haiku part. But then you'd get a haiku about white genocide. Of course.

As Grok started to give answers about genocide to everything on Wednesday, people started posting screenshots of the oddest responses. Hilariously, Grok apologized in reply to one of these tweets, then went right back to talking about white genocide in South Africa in the manner it had been doing all day.

Even if you asked Grok about a comic book image that had nothing to do with any conspiracy theories, you’d still get an unhinged response. When someone asked “are you okay” in the thread, they got another bizarre screed.

Another tweet to Grok asking about a baseball player’s salary got a very odd response.

And the follow-up to that one was almost as confusing, because Grok initially acknowledged the mistake but then went right back to talking about white genocide.

Another user asked “are we fucked?” and got another completely off-topic reply about white genocide.

People also made plenty of jokes about the bizarre spectacle, though the funniest ones showed up on Bluesky, which isn’t owned by a right-wing extremist.
There are many theories floating around about why Grok went haywire. Some people have asked Grok itself and gotten the AI chatbot to claim it’s been trained to answer in specific ways about white genocide and South Africa. And that’s entirely possible. But you also need to take every response from Grok with a grain of salt.
These AI chatbots aren’t capable of reasoning. They’re not applying logic. They’re fancy autocomplete. They’ve been trained on enormous amounts of human writing and are doing their best to craft sentences by guessing which word is supposed to come next. They’re good at sounding convincing and confident. But, at the end of the day, these tools don’t actually understand what they’re saying. And you can bluff and suggest your way into getting an answer that just confirms your priors.
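To see what "fancy autocomplete" means in the most stripped-down form, here's a toy sketch of next-word prediction. It's a bigram model over a made-up sentence, not anything like the neural networks behind Grok, but the core loop is the same idea: guess the most likely next word, append it, and repeat.

```python
# Toy "fancy autocomplete": learn which word tends to follow which,
# then greedily extend a prompt one word at a time. Training text and
# function names are illustrative, not from any real chatbot.
from collections import Counter, defaultdict

training_text = (
    "the dog chased the ball and the dog caught the ball "
    "and the dog dropped the ball"
)

# Count, for each word, which words followed it in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def autocomplete(prompt, n_words=4):
    """Greedily extend the prompt with the most frequent next word."""
    out = prompt.split()
    for _ in range(n_words):
        options = follows.get(out[-1])
        if not options:  # never saw this word followed by anything
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the dog"))
```

The model has no idea what a dog or a ball is; it only knows co-occurrence statistics. Scale that pattern-matching up by many orders of magnitude and you get text that sounds fluent and confident without any understanding behind it.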
Generative AI is not capable of thinking. Chatbots like Grok are really good at pulling off a magic trick, convincing us that they’re actually thinking. But that’s just not what they’re doing at all. And this entire “white genocide” debacle should make that clearer to people.
gizmodo