X Reportedly Developing a “Made with AI” Label

X is developing a “Made with AI” label for content disclosure. The feature is in active development but not yet fully rolled out. It would allow users to voluntarily tag posts containing text, images, video, or other media as AI-generated or synthetically created.

Independent app researcher Nima Owji spotted the in-development toggle in the X app, enabling users to disclose AI-made or manipulated content. Once launched, failing to label qualifying AI-generated posts could violate platform rules, potentially risking account suspension — similar to how undisclosed sponsored content is handled.

The move aims to increase transparency, combat AI-generated spam such as the LLM slop flooding timelines, and preserve the platform’s value for genuine user opinions and authentic interactions. Nikita Bier has emphasized in discussions that disclosing AI content is crucial for long-term trust on X.

This comes amid broader concerns about AI manipulation online. Earlier in 2026, X introduced “Manipulated Media” warnings for deceptively edited visuals, a feature teased by Elon Musk in January; the new “Made with AI” label, by contrast, focuses on user self-disclosure of generative content. Other platforms have experimented with similar labels: Meta initially applied a “Made with AI” label on Instagram and Facebook, later softening it to “AI Info” after backlash over the over-labeling of minor edits.

X appears to be opting for a user-toggle approach rather than automatic detection. It is a step toward better handling the flood of AI content, though enforcement relies on user honesty; automatic detection has not yet been described as the primary mechanism.

X’s “Manipulated Media” warnings were introduced in late January 2026 as a transparency measure to flag potentially deceptive edited or synthetic visuals on the platform. The feature was teased on January 28, 2026, when Elon Musk reposted a message from the anonymous account DogeDesigner.

Musk’s caption was simply: “Edited visuals warning.” The original post claimed X now applies a clear warning to posts using “fake or edited visuals to trick people,” making it harder for “legacy media groups to spread misleading clips or pictures.” Users began reporting sightings of the label shortly after.

This positions X as emphasizing real-time flagging of deceptive content, distinguishing it from other platforms’ approaches. The warning appears as an in-feed label on posts containing doctored, edited, or synthetic media intended to mislead or deceive viewers.

It targets visuals (images, videos, clips) that are “fake or edited” in ways that could trick people — often focusing on deceptive alterations rather than benign edits. Detection likely combines AI tools, community reports, Community Notes, and possibly other signals. The label provides context to help users assess authenticity before engaging or sharing.

It’s distinct from the upcoming “Made with AI” voluntary disclosure toggle (user-tagged generative content): Manipulated Media focuses on automatically flagged deceptive edits that could cause harm or confusion. X’s Authenticity rules prohibit sharing “synthetic or manipulated media” that is deceptively altered or fabricated and likely to cause widespread confusion on public issues.

Examples of prohibited manipulated media include: substantially edited and post-processed content that distorts its meaning; added or removed visual or auditory elements, such as new frames, overdubbed audio, or modified subtitles; and fabricated or simulated depictions of real people, especially via AI deepfakes.

Violations can lead to labeling, reduced visibility, removal, or account actions. X may also apply labels proactively to provide context, even if not fully removing the post. This revives elements of pre-Musk Twitter’s manipulated media policy but appears more focused on warnings than strict takedowns, aligning with X’s emphasis on free speech while adding transparency tools.

Detection criteria remain unclear — it’s unknown exactly how X determines “manipulated” vs. routine edits. Critics worry about overreach, false positives, or inconsistent enforcement. Enforcement relies partly on automation and reports, but details on accuracy are sparse.

It aims to combat floods of misinformation, especially amid rising AI content, though some see it as targeting “legacy media” specifically, per DogeDesigner’s framing. It is a step toward better media literacy on X, though more transparency about the system’s mechanics could build greater trust.
