A coalition of nonprofits is urging the U.S. government to immediately suspend the deployment of Grok, the chatbot developed by Elon Musk’s xAI, in federal agencies including the Department of Defense.
The open letter, shared exclusively with TechCrunch, follows a slew of concerning behavior from the large language model over the past year, including most recently a trend of X users asking Grok to turn images of real women, and in some cases children, into sexualized images without their consent. According to some reports, Grok generated thousands of nonconsensual explicit images every hour, which were then disseminated at scale on X, Musk’s social media platform owned by xAI.
“It is deeply concerning that the federal government would continue to deploy an AI product with system-level failures resulting in generation of nonconsensual sexual imagery and child sexual abuse material,” reads the letter, signed by advocacy groups including Public Citizen, the Center for AI and Digital Policy, and the Consumer Federation of America. “Given the administration’s executive orders, guidance, and the recently passed Take It Down Act supported by the White House, it is alarming that [Office of Management and Budget] has not yet directed federal agencies to decommission Grok.”
xAI reached an agreement last September with the General Services Administration (GSA), the federal government’s purchasing arm, to sell Grok to federal agencies under the executive branch. Two months earlier, xAI – alongside Anthropic, Google, and OpenAI – secured a contract worth up to $200 million with the Department of Defense.
Amid the scandals on X in mid-January, Defense Secretary Pete Hegseth said Grok will join Google’s Gemini in operating inside the Pentagon network, handling both classified and unclassified documents, which experts say poses a national security risk.
The letter’s authors argue that Grok has proven itself incompatible with the administration’s requirements for AI systems. According to the OMB’s guidance, systems that present severe and foreseeable risks that cannot be adequately mitigated must be discontinued.
“Our primary concern is that Grok has pretty consistently shown to be an unsafe large language model,” JB Branch, a Public Citizen Big Tech accountability advocate and one of the letter’s authors, told TechCrunch. “But there’s also a deep history of Grok having a variety of meltdowns, including anti-semitic rants, sexist rants, sexualized images of women and children.”
Several governments have demonstrated an unwillingness to engage with Grok following its behavior in January, which builds on a series of incidents including the generation of anti-semitic posts on X and calling itself “MechaHitler.” Indonesia, Malaysia, and the Philippines all blocked access to Grok (they’ve since lifted those bans), and the European Union, the U.K., South Korea, and India are actively investigating xAI and X regarding data privacy and the distribution of illegal content.
The letter also comes a week after Common Sense Media, a nonprofit that reviews media and tech for families, published a damning risk assessment that found Grok to be among the most unsafe for kids and teens. One could argue that, based on the findings of the report — including Grok’s propensity to offer unsafe advice, share information about drugs, generate violent and sexual imagery, spew conspiracy theories, and produce biased outputs — Grok isn’t all that safe for adults either.
“If you know that a large language model is or has been declared unsafe by AI safety experts, why in the world would you want that handling the most sensitive data we have?” Branch said. “From a national security standpoint, that just makes absolutely no sense.”
Andrew Christianson, a former National Security Agency contractor and current founder of Gobbi AI, a no-code AI agent platform for classified environments, says that using closed-source LLMs in general is a problem, particularly for the Pentagon.
“Closed weights means you can’t see inside the model, you can’t audit how it makes decisions,” he said. “Closed code means you can’t inspect the software or control where it runs. The Pentagon is going closed on both, which is the worst possible combination for national security.”
“These AI agents aren’t just chatbots,” Christianson added. “They can take actions, access systems, move information around. You need to be able to see exactly what they’re doing and how they’re making decisions. Open source gives you that. Proprietary cloud AI doesn’t.”
The risks of using corrupted or unsafe AI systems spill out beyond national security use cases. Branch pointed out that an LLM shown to have biased and discriminatory outputs could produce disproportionately negative outcomes for people as well, especially if used in departments involving housing, labor, or justice.
While the OMB has yet to publish its consolidated 2025 federal AI use case inventory, TechCrunch has reviewed the use cases of several agencies — most of which are either not using Grok or are not disclosing their use of it. Aside from the DoD, the Department of Health and Human Services also appears to be actively using Grok, primarily for scheduling and managing social media posts and generating first drafts of documents, briefings, and other communication materials.
Branch pointed to what he sees as a philosophical alignment between Grok and the administration as a reason for overlooking the chatbot’s shortcomings.
“Grok’s brand is being the ‘anti-woke large language model,’ and that ascribes to this administration’s philosophy,” Branch said. “If you have an administration that has had multiple issues with folks who’ve been accused of being Neo Nazis or white supremacists, and then they’re using a large language model that has been tied to that type of behavior, I would imagine they might have a propensity to use it.”
This is the coalition’s third letter, after it wrote with similar concerns in August and October of last year. In August, xAI launched “spicy mode” in Grok Imagine, triggering mass creation of nonconsensual sexually explicit deepfakes. TechCrunch also reported in August that private Grok conversations had been indexed by Google Search.
Prior to the October letter, Grok was accused of providing election misinformation, including false deadlines for ballot changes and political deepfakes. xAI also launched Grokipedia, which researchers found to be legitimizing scientific racism, HIV/AIDS skepticism, and vaccine conspiracies.
Aside from immediately suspending the federal deployment of Grok, the letter demands that the OMB formally investigate Grok’s safety failures and whether the appropriate oversight processes were conducted for the chatbot. It also asks the agency to publicly clarify whether Grok has been evaluated for compliance with Trump’s executive order requiring LLMs to be truth-seeking and neutral, and whether it met OMB’s risk mitigation requirements.
“The administration needs to take a pause and reassess whether or not Grok meets those thresholds,” Branch said.
TechCrunch has reached out to xAI and OMB for comment.
