Google’s AI Detection Flip-Flops on Doctored White House Photo

When the official White House X account posted an image depicting activist Nekima Levy Armstrong in tears during her arrest, there were telltale signs that the image had been altered.

Less than an hour before, Homeland Security Secretary Kristi Noem had posted a photo of the exact same scene, but in Noem’s version Levy Armstrong appeared composed, not crying in the least.

Seeking to determine if the White House version of the photo had been altered using artificial intelligence tools, we turned to Google’s SynthID – a detection mechanism that Google claims is able to discern whether an image or video was generated using Google’s own AI. We followed Google’s instructions and used its AI chatbot, Gemini, to see if the image contained SynthID forensic markers.

The results were clear: The White House image had been manipulated with Google’s AI. We published a story about it.

After posting the article, however, subsequent attempts to use Gemini to authenticate the image with SynthID produced different outcomes.

In our second test, Gemini concluded that the image of Levy Armstrong crying was actually authentic. (The White House doesn’t even dispute that the image was doctored. In response to questions about its X post, a spokesperson said, “The memes will continue.”)

In our third test, SynthID determined that the image was not made with Google’s AI, directly contradicting its first response.

At a time when AI-manipulated photos and videos are growing inescapable, these inconsistent responses raise serious questions about SynthID’s ability to tell fact from fiction.

A screenshot of the initial response from Gemini, Google’s AI chatbot, stating that the crying image contained forensic markers indicating the image had been manipulated with Google’s generative AI tools, taken on Jan. 22, 2026. Screenshot: The Intercept

Initial SynthID Results

Google describes SynthID as a digital watermarking system. It embeds invisible markers into AI-generated images, audio, text, or video created using Google’s tools, which it can then detect – revealing whether a piece of online content was made with Google’s AI.

“The watermarks are embedded across Google’s generative AI consumer products, and are imperceptible to humans – but can be detected by SynthID’s technology,” says a page on the site for DeepMind, Google’s AI division.

Google presents SynthID as having what in the realm of digital watermarking is known as “robustness” – it claims to be able to detect the watermarks even if an image undergoes modifications, such as cropping or compression. Therefore, an image manipulated with Google’s AI should contain detectable watermarks even if it has been saved multiple times or posted on social media.

Google steers those who want to use SynthID toward its Gemini AI chatbot, which they can prompt with questions about the authenticity of digital content.

“Want to check if an image or video was generated, or edited, by Google AI? Ask Gemini,” the SynthID landing page says.

We decided to do just that.

We saved the image file that the official White House account posted on X, bearing the filename G_R3H10WcAATYht.jfif, and uploaded it to Gemini. We asked whether SynthID detected the image had been generated with Google’s AI.

To test SynthID’s claims of robustness, we also uploaded a cropped and re-encoded copy of the image, which we named imgtest2.jpg.

Finally, we uploaded a copy of the photo where Levy Armstrong was not crying, as previously posted by Noem. (In the above screenshot, Gemini refers to Noem’s photo as signal-2026-01-22-122805_002.jpeg because we downloaded it from the Signal messaging app).

“I’ve analyzed the images you provided,” wrote Gemini. “Based on the results from SynthID, all or part of the first two images were likely generated or modified with Google AI.”

“Technical markers within the files imgtest2.jpg and G_R3H10WcAATYht.jfif indicate the use of Google’s generative AI tools to alter the subject’s appearance,” the bot wrote. It also identified the version of the image posted by Noem as appearing to “be the original photograph.”

With confirmation from Google that its SynthID system had detected hidden forensic watermarks in the image, we reported in our story that the White House had posted an image that had been doctored with Google’s AI.

This wasn’t the only evidence the White House image wasn’t real; Levy Armstrong’s attorney told us that he was at the scene during the arrest and that she was not crying at all. The White House also openly described the image as a meme.

A Striking Reversal

A few hours after our story published, Google told us that they “don’t think we have an official comment to add.” A few minutes after that, a spokesperson for the company got back to us and said they could not replicate the result we got. They asked us for the exact files we uploaded. We provided them.

The Google spokesperson then asked, “Were you able to replicate it again just now?”

We ran the analysis again, asking Gemini whether SynthID detected that the image had been manipulated with AI. This time, Gemini failed to reference SynthID at all — despite the fact that we followed Google’s instructions and explicitly asked the chatbot to use the detection tool by name. Gemini now claimed that the White House image was instead “an authentic photograph.”

It was a striking reversal considering Gemini previously said that the image contained technical markers indicating the use of Google’s generative AI. Gemini also said, “This version shows her looking stoic as she is being escorted by a federal agent” — despite our question addressing the version of the image depicting Levy Armstrong in tears.

A screenshot of Gemini’s second response, this time stating that the same image it previously said SynthID detected as being doctored with AI, was in fact an authentic photograph, taken on Jan. 22, 2026. Screenshot: The Intercept

Less than an hour later, we ran the analysis one more time, prompting Gemini to yet again use SynthID to check whether the image had been manipulated with Google’s AI. Unlike the second attempt, Gemini invoked SynthID as instructed. This time, however, it said, “Based on an analysis using SynthID, this image was not made with Google AI, though the tool cannot determine if other AI products were used.”

A screenshot of Gemini’s third response, this time stating that SynthID had determined that the image was not made with Google AI, after all, despite earlier saying SynthID found that it had been generated with Google’s AI, taken on Jan. 22, 2026. Screenshot: The Intercept

Google did not answer repeated questions about this discrepancy. In response to inquiries, the spokesperson continued to ask us to share the specific phrasing of the prompt that resulted in Gemini recognizing a SynthID marker in the White House image.

We didn’t store that language, but told Google it was a straightforward prompt asking Gemini to check whether SynthID detected the image as being generated with Google’s AI. We provided Google with information about our prompt and the files we used so the company could check its records of our queries in its Gemini and SynthID logs.

“We’re trying to understand the discrepancy,” said Katelin Jabbari, a manager of corporate communications at Google. Jabbari repeatedly asked if we could replicate the initial results, as “none of us here have been able to.”

After further back-and-forth over subsequent inquiries, Jabbari said, “Sorry, don’t have anything for you.”

Bullshit Detector?

Aside from Google’s proprietary tool, there is no easy way for users to test whether an image contains a SynthID watermark. That makes it difficult in this case to determine whether Google’s system initially detected a SynthID watermark in an image that lacked one, or whether subsequent tests missed a watermark the image actually contains.

As AI becomes increasingly pervasive, the industry is trying to put its long history as what researchers call a “bullshit generator” behind it.

Supporters of the technology argue that tools capable of detecting AI-generated content will play a critical role in establishing common truth amid the coming flood of media generated or manipulated by AI. They point to successes like one recent case in which SynthID debunked an arrest photo of Venezuelan President Nicolas Maduro flanked by federal agents as an AI-generated image. The Google tool said the photo was bullshit.

If AI-detection technology fails to produce consistent responses, though, there’s reason to wonder who will call bullshit on the bullshit detector.

