Misinformation, turbocharged by AI, was hard to avoid in the hours and days that followed the Bondi beach terror attack, as some platforms pushed dubious claims to users trying to find factual information.
The X “for you” page, which serves up content selected by an algorithm, was filled with false claims: that the attack that left 15 people dead was a psyop or false-flag operation; that those behind it were IDF soldiers; that the injured were crisis actors; that an innocent person was one of the alleged attackers; and that the Syrian Muslim hero who fought the attackers was a Christian with an English name.
Generative AI only made matters worse.
An altered clip of the New South Wales premier, Chris Minns, with deepfaked audio making false claims about the attackers, was shared across multiple accounts.
In another particularly egregious example, an actual photo of one of the victims was altered with AI to suggest he was a crisis actor having red makeup applied to his face to look like blood.
“I saw these images as I was being prepped to go into surgery today and will not dignify this sick campaign of lies and hate with a response,” the man depicted in the fake image, human rights lawyer Arsen Ostrovsky, later posted on X.
Pakistan’s information minister, Attaullah Tarar, said his country had been the victim of a coordinated online disinformation campaign in the wake of the Bondi beach terror attack, with false claims circulating that one of the suspects was a Pakistani national.
The man who was falsely identified told Guardian Australia it was “extremely disturbing” and traumatising to have his photo circulated on social media next to claims he was the alleged attacker.
Tarar said the Pakistani man was “a victim of a malicious and organised campaign” and alleged the disinformation campaign originated in India.
Meanwhile, X’s AI chatbot Grok told users that an IT worker with an English name, rather than the Syrian-born Ahmed al-Ahmed, was the hero who tackled and disarmed one of the alleged shooters. The claim appears to have originated on a website set up on the same day as the terror attack to mimic a legitimate news site.
AI-generated images of Ahmed also proliferated on social media, promoting crypto schemes and fake fundraisers.
It was a far cry from Twitter’s heyday as a hub for breaking news. Misinformation circulated back then too, but it was less common, and it wasn’t served up by an algorithm designed to reward outrage-driven engagement (particularly for verified accounts that stand to benefit financially from that engagement).
Many of the posts touting false claims had hundreds of thousands or even millions of views.
Legitimate news was circulating on X, but it was buried under misinformation turbocharged by AI.
When Elon Musk took over the platform, he dismantled its factchecking scheme in favour of “community notes”, a system that appends crowdsourced user factchecks to posts. Other platforms are following suit: Meta has scrapped its previous factchecking system in favour of its own version of community notes.
But, as the QUT lecturer Timothy Graham said this week, community notes aren’t much help when opinions are deeply divided, and the system is slow: notes have since been applied to many of the examples above, but long after most people would have seen the original posts in their feeds.
X is trialling having Grok generate its own community notes to factcheck posts, but if the Ahmed example is anything to go by, that is even more worrying.
The company did not respond to questions about what it is doing to tackle misinformation posted on its platform, or propagated by its AI chatbot.
A saving grace is that many of the fakes are still easily spotted – for now. The fake Minns, for example, had an American twang in the accent, making it obvious it wasn’t him. The crisis actor post had many of the hallmarks of dodgy AI image generation, such as garbled text on a T-shirt.
For the most part, media outlets ignored the posts or called them out.
But as AI models improve, that could change, making it even harder to distinguish fact from fiction. Meanwhile, AI companies and the platforms hosting their content show little interest in preventing it.
Digi, the industry group representing social media platforms in Australia, proposed earlier this year to drop a requirement to tackle misinformation from an industry code, saying “recent experience” had demonstrated that “misinformation is a politically charged and contentious issue within the Australian community”.
It’s hard to see how this week will change things.