AI or authentic? The art of verifying AI images beyond careless denials
The issue becomes more glaring when authorities who cannot properly use AI or verification tools make sweeping claims about images being fake
In an era where technology can create the impossible, the line between reality and fabrication has never been thinner. AI-generated images now travel faster than the truth, often fooling even the most vigilant social media users.
One striking example came on 25 March 2023 when an image of Pope Francis wearing a Balenciaga puffer jacket went viral. Many briefly believed the pontiff had traded his cassock for couture.
The picture spread across Facebook, Twitter and Instagram before fact-checkers debunked it. In reality, it was created on Midjourney, one of the AI platforms transforming digital imagery.
Two months later, another image tested the limits of digital trust. On 22 May 2023, a photo showing smoke rising near the Pentagon circulated on Twitter, looking authentic enough to briefly shake markets and push the Dow Jones down 85 points.
Investigators from Bellingcat and the Washington Post quickly detected irregular shadows and distorted fencing, revealing it as an AI creation.
Closer to home, during the August 2024 monsoon, images of a child struggling in floodwaters in Bangladesh went viral.
While many social media users sympathised with the plight, others dismissed the pictures outright as AI-generated. Fact-checking organisations such as WebQoof and Newschecker later confirmed that several of the most shared images were indeed synthetic.
But the rush to brand all flood-related visuals as "fake" meant that genuine photographs taken by local journalists also faced public doubt.
These incidents reveal a recurring truth: the problem is not the technology but the human interpretation, or misinterpretation, of it. Technology does not lie; people do.
Yet some go further, claiming genuine photographs are AI-made or fake, often without a shred of verification.
To dismiss a real image as artificial is not just careless. It is a display of digital illiteracy, a modern-day equivalent of insisting a mirror is lying simply because you dislike your reflection.
The issue becomes more glaring when authorities who cannot properly use AI or verification tools make sweeping claims about images being fake. Such statements may dominate headlines but they erode credibility.
Superiority does not lie in possessing technology; it lies in knowing how to use it.
The loudest cry of "fake" is meaningless without evidence, and no amount of authority can substitute for methodical verification.
How metadata helps verify disputed images
Globally, fact-checking organisations have developed clear protocols to address this problem. Metadata is examined first. Does a file carry details such as camera model, lens or timestamp?
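For readers who want to try this first step themselves, here is a minimal Python sketch using the open-source Pillow library to list whatever EXIF metadata a file carries. The filename is hypothetical, and a missing tag proves nothing on its own, since social platforms routinely strip metadata on upload:

```python
from PIL import Image
from PIL.ExifTags import TAGS

def list_exif(path):
    """Print any EXIF tags (camera model, lens, timestamp) in readable form."""
    exif = Image.open(path).getexif()
    if not exif:
        # Absence of metadata is weak evidence: platforms strip EXIF on upload.
        print("No EXIF metadata found.")
        return
    for tag_id, value in exif.items():
        # Map numeric tag IDs to readable names such as 'Model' or 'DateTime'.
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

list_exif("disputed_photo.jpg")  # hypothetical file under investigation
```

A genuine camera file typically reports tags such as Model, LensModel and DateTime; AI generators usually leave none, though absence alone is never conclusive.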
Visual forensics then check for anomalies, such as extra fingers, distorted shadows or blurred backgrounds. Finally, reverse image searches can reveal if a photo has been recycled from past events.
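Reverse image search is normally done through services such as Google Lens or TinEye, but the underlying idea can be sketched locally with perceptual hashing. The snippet below is an illustrative assumption, not any fact-checker's actual pipeline; it uses the open-source imagehash library, and the filenames and distance threshold are made up for the example:

```python
import imagehash
from PIL import Image

# Perceptual hashes stay nearly identical under resizing and recompression,
# so a small Hamming distance suggests the same underlying picture.
hash_viral = imagehash.phash(Image.open("viral_image.jpg"))      # hypothetical new image
hash_archive = imagehash.phash(Image.open("archive_image.jpg"))  # hypothetical older photo

distance = hash_viral - hash_archive  # Hamming distance between 64-bit hashes
print(f"Hamming distance: {distance}")
if distance <= 8:  # threshold is a judgment call, tuned per archive
    print("Likely the same photograph recycled from an earlier event.")
```

Matching a viral picture against an archive of past event photos in this way can catch a recycled image even after cropping or recompression.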
When combined, these methods create a robust case for authenticity or fabrication, far superior to blind claims.
Bangladeshi media is gradually learning similar lessons. While individual journalists are increasingly adopting AI tools, institutional support remains limited.
Journalists and their newsroom AI tactics
A 2024 survey titled Media Metamorphosis: AI and Bangladeshi Newsrooms 2024, conducted by the Media Resources Development Initiative with support from Digitally Right and funding from The Asia Foundation, found that 51% of journalists use AI tools independently, but only 20% report institutional use in newsroom operations. Even so, some newsrooms have begun taking practical measures.
Journalists increasingly submit original RAW files alongside published images to preserve metadata, and training sessions help staff identify AI-generated anomalies before publishing.
Credibility in journalism therefore depends not on loudly declaring content fake but on methodical verification using technology and evidence.
To further support journalists, initiatives like the Google News Initiative offer free online courses to media professionals in Bangladesh, aiming to equip them with skills in AI and digital journalism.
However, the effectiveness of these courses depends on their widespread adoption and the willingness of newsrooms to integrate AI verification practices into their workflows.
AI can generate remarkable images, but it also exposes gaps in digital literacy. Institutions and authorities that rush to dismiss reality without evidence risk becoming the punchline themselves.
In the end, the message is clear.
Technology is not the villain. Human error is. Mastery exists only when one uses these tools wisely, with method, evidence and integrity.
