TLDR
- EU opens formal Digital Services Act proceedings against Grok AI over deepfake risks and child safety concerns
- Brussels widens its probe of X's recommender systems after the platform shifted more control to Grok AI
- Pressure on Musk's xAI mounts as global reaction exposes gaps in oversight of emerging AI systems
The European Commission opened formal proceedings against Grok AI after reports showed rising risks linked to manipulated images across X. Regulators launched the action under the Digital Services Act to address concerns about illegal content and child protection. The move signals a broader shift as Europe intensifies oversight of advanced AI tools.
Probe Expands as EU Pressures Musk’s xAI
The Commission began the investigation because Grok AI appeared to generate sexualised images of women and minors. Officials stated that X now faces scrutiny over risk mitigation failures linked to the chatbot’s outputs, and they stressed that the tool may breach strict content rules. Authorities indicated that verified violations could result in significant penalties under the Digital Services Act.
Regulators also confirmed that the proceedings cover the chatbot's connection to X's wider ecosystem. They will examine how Grok AI interacts with platform functions and whether existing safety measures remain adequate. They aim to determine whether the platform limited users' exposure to illegal material.
Officials said the review will test compliance with obligations tied to governance, moderation, and fundamental rights. They added that Europe wants stronger safeguards as AI systems evolve. Pressure continues to build as global agencies voice broader concerns.
Recommender System Review Widens After Platform Changes
The Commission extended an existing probe into X's recommender systems after the company shifted more control to Grok AI. Authorities noted that X introduced the changes without supplying a risk assessment, and they signalled that the absence of technical details raised new questions. Regulators will now examine how these adjustments affect the distribution of harmful content.
Officials stated that the recommender review will determine if the updated system increases exposure to manipulated images. They also intend to confirm whether X applied proper safety checks before changing its algorithms. The inquiry will evaluate whether Grok AI now influences content ranking in ways that heighten risks.
Regulators said the expansion allows them to capture the potential impact of rapid product updates. They stressed that platforms must comply with responsible deployment rules. They expect X to cooperate as requirements intensify.
Growing Global Reaction Adds Pressure on xAI
Authorities acknowledged that Grok AI triggered widespread concern after large volumes of deepfake material surfaced on X. They noted that the scandal intensified debate about the regulation of emerging systems, and they emphasised that consistent enforcement remains essential. European officials said the situation demonstrates how fast misuse can escalate.
Regulators added that the case exposes structural gaps in current oversight. They intend to use the investigation to strengthen processes and reinforce accountability. The action also follows recent fines against X for earlier transparency failures.
Officials argued that the latest approach supports a coordinated European framework. They confirmed that national regulators remain ready to assist with evidence gathering. More actions may follow as the Commission evaluates findings and considers further penalties.