Is Bangladesh prepared for AI-generated info wars in upcoming elections?
As the interim government prepares to hold the first election since the Hasina regime’s fall, AI-generated mis- and disinformation campaigns are muddying the waters of public opinion
In early November, as public buses burned, crude bombs were discovered in schools, and panic spread across Dhaka ahead of the Awami League's lockdown programme, a video of a RAB officer began circulating online.
"We have evidence that these bombs were planted by the BNP and Shibir. They are intentionally trying to put the blame on the Awami League," the officer claims in the video.
On the surface, the clip has all the elements needed to make the audience believe it is authentic.
The logo of a popular media outlet sits in the bottom-right corner; the officer appears convincingly real; the footage lacks the telltale signs typically associated with AI-generated content; and the human movements show no visible inconsistencies. Apart from a brief fade-out in the middle of the clip, there is nothing to suggest the video has been artificially generated.
And that was the point: to make the Awami League's lockdown successful by spreading disinformation through an official source, albeit a fictitious one.
As the interim government prepares to hold the first election since the Hasina regime's fall, various quarters are deploying AI-generated mis- and disinformation campaigns to muddy the waters of public opinion.
And Bangladesh seems ill-prepared to tackle this.
The blurred line between truth and fabrication
Truth is the fundamental currency of a democracy. But advancements in AI video-generation tools have blurred the line between truth and fabrication.
Google's latest generative tools, such as its Nano Banana image model and Veo video model, now produce media that is virtually indistinguishable from footage shot on real cameras. OpenAI's Sora and other video-generation tools are closing the gap just as quickly.
But our legal systems and public awareness have not kept pace with this technology's rapid development. Even as the tools evolve, the way we engage with digital media remains unchanged.
Even experts now struggle to differentiate between authentic content and AI-generated fabrications.
Sumon Rahman, head of Media Studies and Journalism at the University of Liberal Arts Bangladesh (ULAB), explained the potential threats posed by AI-generated mis- and disinformation campaigns in the upcoming election.
"Voter turnout will decrease, but more importantly, voter choice will be influenced. Who people decide to vote for may be shaped by AI-driven persuasion — often without voters realising it. And AI, as we know, is rarely used for noble purposes in election contexts. It is almost always deployed to manipulate, fabricate, or mislead," he told TBS.
"A particularly dangerous trend is that even genuine content is now being dismissed as AI-generated," he said. "AI has become a convenient escape route: if a real image or piece of news is politically inconvenient, it is simply labelled as AI. We are heading toward a time when people cannot distinguish between truth and falsehood, and this ambiguity poses a major threat to democracy."
"In an election, citizens need to cast informed votes," Sumon continued, "but if the information space is disrupted, and if people do not have the tools to verify what they see, then they cannot make informed decisions. Those who do have the tools are also struggling. Altogether, it is becoming a dystopian situation."
Who is behind it?
According to a recent report by DismissLab, a fact-checking agency, between July and September, eight Bangladesh-focused fact-checking organisations verified 21 cases of election-related misinformation.
However, from 1 October to 15 November alone, they debunked 57 unique false claims. In other words, the first six weeks of the current quarter produced nearly three times as much election misinformation as the entire previous quarter.
Several entities are behind this surge in election-centred misinformation campaigns.
"Politically motivated misinformation comes in two ways — externally and internally. Internal disinformation refers to actors within the country who are competing in the election. Different groups are spreading misinformation and misleading narratives against one another, and in some cases, these efforts appear to be coordinated," explained Qadruddin Shishir, editor of The Dissent and a former fact-checker for Agence France-Presse (AFP).
"The other dimension is external disinformation — narratives being spread from outside the country."
For example, certain factions within the Awami League, as well as segments of Indian social media and media outlets, are continuously pushing disinformation from their end, and this is expected to continue.
"When these two forces reinforce each other, they can have a serious impact on the election," Qadruddin noted.
However, political organisations are not the only ones responsible for the surge in AI-generated misinformation. AI has been integrated into digital media, and people use it freely, as no clear ethical guidelines have yet been introduced. Even when the intent is not malicious, much of this content is misinterpreted or perceived as misinformation by the public.
How to tackle it
Misinformation is a global crisis, but it is even more severe in Bangladesh because, unlike the European Union and other developed jurisdictions, the country has no law mandating clear labelling of AI-generated content.
There is no quick fix to this problem, but there are long-term measures that can be implemented.

For starters, content-labelling laws can be mandated. The EU already requires labels on AI-generated political content; if a video is synthetic, viewers at least know to treat it with caution.
Fact-checking organisations like DismissLab and AFP have been actively identifying AI-generated misinformation, but their hands are tied: their reach is limited to a niche community. Government institutions have launched their own fact-checking wings, but public distrust of state-run platforms often limits their impact.
Social media platforms have their own sets of policies, but verification and flagging remain lax.
But this approach comes with its own complications. When the government holds social media platforms accountable, it is rarely seen positively.
"When we talk about regulation, there are essentially two types: one concerns how the government will respond from a legal standpoint, and the other relates to how much the platforms themselves will regulate — what they will or will not do, and how quickly they will respond," Qadruddin noted.
According to him, Bangladesh currently has no comprehensive framework for this. And such a framework cannot rely solely on the government's directives.
"It needs to be a coordinated effort involving the government, journalists, technologists, and human rights advocates," he concluded.
