EU OPENS FORMAL INVESTIGATION INTO SOCIAL MEDIA PLATFORM OVER AI-GENERATED EXPLICIT CONTENT

by Steven Morris

European regulators have initiated a formal probe into a major social media platform concerning its artificial intelligence tools. The investigation focuses on the generation and dissemination of sexually explicit imagery, including material that potentially depicts minors.

The inquiry, announced by the European Commission, will examine whether the company adequately assessed and mitigated risks associated with its AI chatbot feature. According to research findings, this tool generated millions of sexualized images within a short timeframe, with tens of thousands appearing to represent children. The feature allegedly allowed users to digitally alter photographs to create non-consensual, explicit content.

This action extends an existing investigation into the platform’s content recommendation algorithms. It is being conducted under the bloc’s Digital Services Act (DSA), a comprehensive law designed to establish stricter online safety standards and protect users from digital harms.

A senior EU official stated that the measures previously implemented by the platform to address these concerns were deemed insufficient. The official emphasized that non-consensual synthetic intimate imagery constitutes a severe form of degradation, particularly when directed at vulnerable groups. The core question for investigators is whether the platform fulfilled its legal obligations or compromised the rights and safety of European users.

A company spokesperson responded by reiterating a prior commitment to platform safety, stating that the company maintains a zero-tolerance policy toward child sexual exploitation and non-consensual sexual content.

The formal investigation underscores growing regulatory scrutiny over the rapid deployment of generative AI and its potential for misuse, with a specific emphasis on protecting individuals from digitally manipulated abuse.