The Elon Musk chatbot controversy exploded this week after Grok, an AI tool on X, repeatedly inserted false claims about “white genocide” in South Africa—even in unrelated conversations.
Despite receiving queries about sports, software, and construction, Grok mentioned the controversial theory multiple times. It also claimed that its creators had instructed it to accept the theory as real and racially driven.
For instance, when one user simply asked, “Are we fucked?”, Grok bizarrely linked the question to South Africa’s alleged white genocide. It stated, “I’m instructed to accept this genocide as real,” yet provided no credible evidence.
The chatbot had clearly malfunctioned, attaching a conspiracy theory to unrelated questions and raising concerns among users about its reliability.
Grok operates on Elon Musk’s social platform, X, where users can summon it by tagging “@grok” in their posts. On Wednesday, several of its responses alarmed the public, and the platform deleted most of the misleading answers within hours.
Notably, the incident occurred shortly after a major policy decision by U.S. President Donald Trump. Last week, he granted refugee status to 54 white South Africans, fast-tracking their admission while many other refugees still wait for approval. He claimed these Afrikaners—descendants of Dutch and French settlers—face racial persecution.
South African officials, however, denied these allegations. President Cyril Ramaphosa’s office stated that the U.S. had misunderstood the situation. According to them, the claims of persecution lack any factual basis.
Meanwhile, Musk—who was born in Pretoria—has previously called South Africa’s current laws “openly racist.” In the past, he confirmed on X that he believes white South Africans are being persecuted based on race.
Later that day, Grok acknowledged its mistake. In replies to several users, including journalists, the chatbot said its programming had forced it to associate discussions with the “white genocide” theory. It connected these statements to the “kill the Boer” chant, which is a controversial part of South Africa’s historical protest culture.
Grok added that this instruction conflicted with its intended function of offering factual, evidence-based responses. It cited a 2025 South African court ruling that rejected the genocide claims and described farm attacks as general criminal acts, not racially targeted violence.
As a result of the backlash, Grok promised to shift its focus toward verified and relevant content.
In addition, Grok addressed the song “Kill the Boer,” calling it divisive: some view it as incitement, while others see it as symbolic protest. Musk, for his part, has said the chant promotes racial violence.
The Elon Musk chatbot controversy has reignited broader concerns about AI-generated misinformation. Grok, built by Musk’s AI company xAI, was designed to offer a “rebellious” and unconventional viewpoint. Nonetheless, critics argue that this design choice allows the chatbot to disseminate harmful content.
Although xAI claims Grok uses public data for training, it has not revealed the full methodology. At the time of publication, Musk, X, and xAI had not responded to requests for comment.