Elon Musk teases a new image-labeling system for X… we think?


Elon Musk’s X is the latest social network to roll out a feature labeling edited images as “manipulated media,” if a post by Musk is to be believed. But the company has not clarified how it will make that determination, or whether the label covers images edited with traditional tools, like Adobe’s Photoshop.

So far, the only details on the new feature come from a cryptic post from Musk reading, “Edited visuals warning,” shared as he reposted an announcement from the anonymous X account DogeDesigner. That account often serves as a proxy for introducing new X features, since Musk reposts its updates to share news.

Still, details on the new system are thin. DogeDesigner’s post claimed the feature could make it “harder for legacy media groups to spread misleading clips or pictures,” and that it is new to X.

Before it was acquired and renamed X, the company then known as Twitter labeled tweets containing manipulated, deceptively altered, or fabricated media as an alternative to removing them. Its policy wasn’t limited to AI, but covered things like “selective editing or cropping or slowing down or overdubbing, or manipulation of subtitles,” then head of site integrity Yoel Roth said in 2020.

It’s unclear whether X is adopting the same rules or has made any significant changes to tackle AI. Its help documentation currently includes a policy against sharing inauthentic media, but it’s rarely enforced, as the recent deepfake debacle, in which users shared non-consensual nude images, demonstrated. Even the White House now shares manipulated images.

Calling something “manipulated media” or an “AI image” can be nuanced.

Given that X is a playground for political propaganda, both domestic and foreign, the company should document how it determines what counts as “edited,” or perhaps AI-generated or AI-manipulated. Users should also know whether there’s any dispute process beyond X’s crowdsourced Community Notes.


As Meta discovered when it introduced AI image labeling in 2024, it’s easy for detection systems to go awry. Meta was found to be incorrectly applying its “Made with AI” label to real photographs that had not been created using generative AI.

This happened because AI features are increasingly being integrated into creative tools used by photographers and graphic artists. (Apple’s new Creator Studio suite, launching today, is one recent example.)

As it turned out, this confused Meta’s identification tools. For instance, Adobe’s cropping tool was flattening images before saving them as JPEGs, triggering Meta’s AI detector. In another example, Adobe’s Generative Fill, used to remove objects like wrinkles in a shirt or an unwanted reflection, was also causing images to be labeled “Made with AI” when they had only been retouched with AI tools, not generated by them.
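To see why this kind of false positive is so easy to produce, consider a naive detector that simply scans an image’s embedded metadata for AI-related markers. The sketch below is illustrative only, not Meta’s actual pipeline; the marker strings are assumptions, apart from “trainedAlgorithmicMedia,” which is the IPTC digital source type value for fully AI-generated media.

```python
# Illustrative sketch only -- NOT Meta's actual detector. It shows how a
# naive metadata check over-labels: editing tools write AI-related
# provenance markers into XMP/EXIF even for minor AI-assisted edits
# (e.g., using generative fill to remove a wrinkle), so keyword matching
# flags lightly retouched photos the same way as fully synthetic images.

from pathlib import Path

# "trainedAlgorithmicMedia" is IPTC's digital-source-type value for
# AI-generated imagery; the other markers are hypothetical examples of
# strings an AI-assisted tool might leave behind in metadata.
AI_MARKERS = [b"trainedAlgorithmicMedia", b"Adobe Firefly", b"GenAI"]

def naively_flags_as_ai(image_path: str) -> bool:
    """Return True if any AI-related marker appears in the raw file bytes.

    This conflates 'AI-generated' with 'touched by an AI-assisted tool',
    which is exactly the over-labeling problem described above.
    """
    data = Path(image_path).read_bytes()
    return any(marker in data for marker in AI_MARKERS)

if __name__ == "__main__":
    # A real photo lightly retouched with an AI fill tool would be
    # flagged the same way as an entirely synthetic image.
    print(naively_flags_as_ai("retouched_photo.jpg"))
```

Because editing tools write these markers even for trivial AI-assisted retouching, a marker’s presence only says an AI feature touched the file somewhere, not that the image is synthetic.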

Ultimately, Meta updated its label to read “AI info,” so as not to flatly declare images “Made with AI” when they had not been.

Today, there’s a standards-setting body for verifying the authenticity and provenance of digital content: the C2PA (Coalition for Content Provenance and Authenticity). Related initiatives, like the CAI (Content Authenticity Initiative) and Project Origin, focus on adding tamper-evident provenance metadata to media content.
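For a sense of what checking that metadata looks like in practice, here’s a minimal Python sketch, assuming the file is a JPEG carrying Content Credentials: the C2PA spec embeds manifests as JUMBF boxes inside APP11 marker segments, so scanning those segments for the “c2pa” label reveals whether a manifest is present. Actual verification of the signatures and hashes requires a full C2PA SDK, such as the open-source c2pa-rs or its language bindings.

```python
# Minimal sketch: detect whether a JPEG carries an embedded C2PA manifest.
# C2PA stores manifests in APP11 (0xFFEB) segments as JUMBF boxes; this
# only checks for presence, it does not verify anything.

import struct

def has_c2pa_manifest(path: str) -> bool:
    """Scan JPEG marker segments for an APP11 segment mentioning 'c2pa'."""
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":          # missing SOI: not a JPEG
            return False
        while True:
            marker = f.read(2)
            if len(marker) < 2 or marker[0] != 0xFF:
                return False                   # truncated or malformed file
            if marker[1] == 0xDA:              # SOS: image data begins
                return False
            size_bytes = f.read(2)             # big-endian segment length,
            if len(size_bytes) < 2:            # which includes these 2 bytes
                return False
            (length,) = struct.unpack(">H", size_bytes)
            payload = f.read(length - 2)
            if marker[1] == 0xEB and b"c2pa" in payload:
                return True                    # APP11 JUMBF with C2PA label
```

Presence alone proves nothing, of course: the manifest must still be cryptographically verified, and absence doesn’t mean an image is authentic, since metadata is easily stripped.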

Presumably, X’s implementation would follow some known process for identifying AI content, but X’s owner, Elon Musk, didn’t say what that is. Nor did he clarify whether he means AI images specifically, or anything other than a photo uploaded to X straight from a smartphone’s camera. It’s even unclear whether the feature is brand-new, as DogeDesigner claims.

X isn’t the only platform grappling with manipulated media. In addition to Meta, TikTok has been labeling AI content, and streaming services like Deezer and Spotify are scaling initiatives to identify and label AI music. Google Photos uses C2PA to indicate how photos on its platform were made. Microsoft, the BBC, Adobe, Arm, Intel, Sony, OpenAI, and others sit on the C2PA’s steering committee, while many more companies have joined as members.

X is not currently listed among the members, though we’ve reached out to C2PA to see if that recently changed. X doesn’t typically respond to requests for comment, but we asked anyway.


