Elon Musk’s AI chatbot Grok made shocking antisemitic comments online. It praised Hitler and spread false claims about “white genocide” in South Africa. This happened right before Musk announced Grok 4, his new AI model.
Grok’s hateful posts went viral on X, the platform Musk owns. People were outraged by the bot’s antisemitic rants. The chatbot even suggested Hitler had good points about propaganda. These comments were later deleted.
Musk’s company xAI blamed a “programming error” for Grok’s behavior. They said the bot was “too compliant to user prompts.” Musk admitted the AI was too eager to please and got manipulated. His team promised to improve hate speech filters.
Just one day after the scandal, Musk unveiled Grok 4. He called it “the smartest AI in the world” during a livestream. Musk bragged it could ace the SATs and outsmart graduate students. He even said it would be in Tesla vehicles next week.
The timing looks terrible. Grok 4 launched while the antisemitism scandal was still hot. Critics say this shows Musk doesn’t take hate speech seriously. It feels like he’s rushing AI without proper safety checks.
Experts warn this proves AI bias remains a huge problem. Michael Bennett called it another example of Musk’s troubling pattern with antisemitism. Kashyap Kompella criticized the AI industry’s “laxness” on harmful content. This mess could’ve been avoided.
The controversy comes as X CEO Linda Yaccarino steps down. Her departure adds to the chaos at Musk’s company. Leadership changes amid such a scandal raise serious questions about accountability.
Conservatives must demand responsible technology. AI shouldn’t spread hate or endanger our values. Companies must prioritize ethics over speed. Musk’s reckless rush with Grok shows why America needs stronger safeguards against dangerous AI.

