As polls near, can Bangladesh weather the AI-misinformation storm?
Deepfakes, synthetic videos, and coordinated disinformation campaigns are reshaping Bangladesh’s election landscape, leaving voters exposed and institutions struggling to keep pace with rapidly evolving AI-driven manipulation
As Bangladesh moves towards its first national election since the collapse of the Hasina regime, the country's information ecosystem is increasingly under attack from AI-generated misinformation and disinformation, posing serious risks to democratic decision-making.
In early November, Dhaka was gripped by anxiety. Public buses were set on fire, crude explosives were recovered from school premises, and fear spread across the capital in the lead-up to the Awami League's announced lockdown programme. Amid the chaos, a video began circulating widely on social media, purportedly showing an officer from the Rapid Action Battalion (RAB).
In the clip, the officer claims, "We have evidence that these bombs were planted by the BNP and Shibir. They are intentionally trying to put the blame on the Awami League."
At first glance, the video appeared legitimate. It featured the logo of a well-known media outlet placed neatly in the corner. The officer looked real, spoke naturally, and moved convincingly. The footage showed none of the glitches or distortions commonly associated with synthetic media. Except for a brief fade-out midway through the clip, there were no obvious indicators that the video was fake.
That realism was precisely the objective. The video was designed to lend credibility to a particular political narrative and strengthen support for the Awami League's lockdown, using an authoritative voice that never actually existed.
It is in this landscape that the interim government is preparing to organise elections, and AI-generated campaigns are becoming increasingly common. Various actors are weaponising artificial intelligence to influence public opinion, and Bangladesh appears poorly equipped to counter this emerging threat.
When reality becomes uncertain
Democracy relies on a shared understanding of truth. But recent advances in AI-powered video generation are eroding that foundation.
Google's newest image-generation model, popularly known as Nano Banana, can produce visuals that closely resemble real photographs, while video tools such as OpenAI's Sora are narrowing the gap even further. The problem is that public awareness, media literacy, and legal safeguards have not evolved at the same pace as the technology itself.
Although the tools have changed, the way people consume and trust digital content largely has not. Even trained professionals now struggle to confidently distinguish authentic videos from fabricated ones.
Sumon Rahman, head of Media Studies and Journalism at the University of Liberal Arts Bangladesh (ULAB), warned that AI-driven misinformation could fundamentally distort the upcoming election.
"Voter turnout will decrease, but more importantly, voter choice will be influenced. Who people decide to vote for may be shaped by AI-driven persuasion, often without voters realising it. And AI, as we know, is rarely used for noble purposes in election contexts. It is almost always deployed to manipulate, fabricate, or mislead," he told TBS.
An especially alarming development is that real information is increasingly being dismissed as artificial. Labelling inconvenient evidence as "AI-generated" has become a political tactic in itself.
According to Sumon, this growing ambiguity threatens the very basis of democratic participation. "In an election, citizens need to cast informed votes," he said. "But if the information space is disrupted, and if people do not have the tools to verify what they see, then they cannot make informed decisions. Those who do have the tools are also struggling. Altogether, it is becoming a dystopian situation."
Mapping the sources of misinformation
Data from fact-checking organisations illustrates how rapidly election-related misinformation has escalated. A recent report by DismissLab found that between July and September, eight Bangladesh-focused fact-checkers verified 21 election-related false claims. Between 1 October and 15 November alone, however, 57 separate claims were debunked – nearly triple the total from the entire previous quarter.
The actors behind this surge are varied.
"Politically motivated misinformation comes in two ways – externally and internally. Internal disinformation refers to actors within the country who are competing in the election. Different groups are spreading misinformation and misleading narratives against one another, and in some cases, these efforts appear to be coordinated," explained Qadruddin Shishir, editor of The Dissent and a former fact-checker for Agence France-Presse (AFP).
"The other dimension is external disinformation – narratives being spread from outside the country." He pointed out that factions linked to the Awami League, along with sections of Indian social media users and media outlets, are consistently amplifying misleading narratives. When domestic and external campaigns overlap, the impact becomes far more potent.
"When these two forces reinforce each other, they can have a serious impact on the election," Qadruddin warned.
However, political parties are not the sole drivers of the problem. As AI tools become widely accessible, everyday users are also generating content without clear ethical boundaries. Even content created without malicious intent can easily be misunderstood, misrepresented, or weaponised within Bangladesh's polarised digital space.
A regulatory vacuum
While misinformation is a global challenge, its consequences are particularly severe in Bangladesh due to weak regulatory and institutional safeguards. Unlike the European Union, which mandates explicit labelling of AI-generated political content, Bangladesh has no such legal requirement.
This lack of transparency leaves audiences vulnerable and uncertain about what they are seeing.
Fact-checking organisations such as DismissLab and AFP continue to identify and debunk false content, including AI-generated material. Yet their influence remains limited, largely confined to audiences already inclined to question misinformation. Government-run fact-checking initiatives exist as well, but widespread scepticism towards state institutions reduces their credibility.
Social media platforms claim to have moderation policies in place, but enforcement remains inconsistent and slow.
Regulation, however, is not straightforward.
"When we talk about regulation, there are essentially two types: one concerns how the government will respond from a legal standpoint, and the other relates to how much the platforms themselves will regulate – what they will or will not do, and how quickly they will respond," Qadruddin noted.
Bangladesh, he argued, lacks a comprehensive and balanced framework to address either dimension. Overreliance on state control risks censorship, while leaving platforms unchecked allows misinformation to flourish.
"It needs to be a coordinated effort involving the government, journalists, technologists, and human rights advocates," he concluded.
As Bangladesh approaches a pivotal election, the question is no longer whether AI will shape political narratives, but whether the country can protect its democratic processes before the line between truth and fabrication disappears entirely.
