How AI-generated content is manufacturing consensus ahead of Bangladesh election
The aim is clear: distort public opinion in the lead-up to the national elections. Fake content has already proven to be consequential in multiple countries, and Bangladesh seems to be on the same track
A female presenter asks a woman wearing sindoor on her forehead, "Sister, whom will you vote for this time?" The woman says, "We have seen all the parties; now Jamaat should be given a chance."
Another woman, this time a garment worker, is seen speaking while working. She says, "There is no value for my hard labour. I want a life with dignity. That is why my vote is for Jamaat." A group of young women sitting in an open field say, "We have seen all the parties. This time, I will vote for Jamaat's Dari Palla [scales]." The others then respond in unison, "Yes, that's right."
Here's the catch: None of the characters are real. The videos are all made with artificial intelligence (AI).
Not just videos, but there has been a surge of AI-generated photos on social media as well. On Facebook, a photo of Ducsu Vice-President Sadik Kayem with Faisal Karim Masud, the person suspected of shooting Hadi, was widely shared. It claimed that they were seen having tea together.
BNP Senior Joint Secretary General Advocate Ruhul Kabir Rizvi had referred to this image while making remarks at a rally, for which he later apologised.
Another viral photo showed Tarique Rahman standing beside the hospital bed of Khaleda Zia with his wife and daughter. It was also generated by AI.
But this is just the tip of the iceberg.
The ecosystem of disinformation is becoming increasingly sophisticated, targeting voters through impersonation, fabricated talk shows and edited speeches.
The aim is clear: distort public opinion at a politically charged moment, especially in the lead-up to the next national elections. Fake video content has already proven to be politically consequential in multiple countries.
AI slop on social media at an industrial scale
Fake videos circulated before, but after 5 August, the situation turned dire. A recent investigation by Dismislab revealed a network of fake videos in Bangladesh, primarily hosted on YouTube.
These are not only political in nature but also monetised, with around 90% carrying ads. Each video garners an average of 12,000 views and often violates YouTube's rules on deceptive practices, impersonation, and copyright.
In July 2025, Rumor Scanner identified 310 separate misinformation cases, with 184 (59%) linked to political themes; a notable portion of these cases included video-centric mis/disinformation, which is now increasingly AI-generated. Within these, 68 were flagged for AI-generated or enhanced content, and two were deepfake videos.
In August 2025, the same organisation recorded 320 instances of misinformation, 202 of which were political, with 218 video-based pieces — many leveraging AI for creation or editing.
In September 2025 alone, 329 misinformation cases were detected, with AI and deepfake involvement explicitly noted in multiple documented cases.
Between July and September 2025, analysis of 71 AI-generated social media posts revealed that 57 were videos and 14 were images, showing that video is the dominant medium for AI-driven political propaganda.
During this period, Facebook hosted 86% of these AI-generated posts, while 56% also appeared on TikTok, YouTube, Instagram or X — many were cross-posted across multiple services.
Synthetic personas and manufactured consensus
Visual misinformation is uniquely persuasive due to its perceived authenticity, and in a country like Bangladesh — where digital literacy is low and media trust is polarised — it is a potent political tool. And the government knows it too.
One of the most disruptive uses of AI in Bangladesh has been the creation of synthetic personas, like the AI characters described earlier. Generative AI is being used to produce profile photos of "ordinary citizens", complete with realistic facial imperfections, culturally appropriate clothing, and domestic backgrounds. These profiles then engage in political discussions, endorse narratives, or amplify polarising content.
In coordinated campaigns, hundreds of such personas appear to agree with each other, creating an illusion of popular consensus. This tactic, sometimes referred to as "manufactured majoritarianism", is particularly potent in Bangladesh's highly polarised political environment, where perceptions of public sentiment often influence offline mobilisation.
Sumon Rahman, dean of the School of Social Sciences and head of Media Studies and Journalism at the University of Liberal Arts Bangladesh (ULAB), said, "What we anticipated a long time ago — that AI would significantly increase misinformation and fake news — is exactly what is happening now. So this is not surprising at all. Whenever a new technology emerges globally, its impact is inevitably felt here as well."
Sumon, also the founding editor of FactWatch, the first International Fact-Checking Network (IFCN)-certified fact-checker in Bangladesh, added, "Imagine an election morning when an AI-generated or manipulated video suddenly appears. Regardless of whether someone is media-literate, such content can confuse anyone. Much depends on timing. If it is released at precisely the right moment, our inherent confirmation bias will take over.
"We become confused, even while trying to fact-check. Ultimately, this kind of content will mislead people, guide them in the wrong direction, and make it very difficult for many to recognise that they are being deceived," he further said.
Our media literacy is already very low. General literacy is even lower. On top of that, AI is effectively adding fuel to the fire. So this phenomenon will continue.
Minhaj Aman, lead researcher at Dismislab, said, "As elections near, social media will likely see a surge in fake content, including sophisticated fake videos like those identified. With low media literacy and many voters active on social media, such content could significantly influence politics. In countries like India and the US, information shapes decisions, from purchasing products to voting."
He added, "In our country, where media literacy is weak and traditional media struggles to keep pace with social media's information flow, fake talk shows pushing specific narratives will undoubtedly affect voters' decisions and, ultimately, politics."
Apon Das, research associate at Tech Global Institute, also pointed out the dangers of such fake videos: "Fact-checkers have already debunked deepfakes and AI-made photos and videos where political leaders say or do things they never actually did. This fake content promotes false claims and creates fear and confusion to manipulate voter opinions. Sometimes, it targets religious or ethnic sensitivities to divide communities and distract commoners from real issues."
He added, "From my experience, many people struggle to distinguish between real and fake content, especially when content is presented in a 'news' format, which people are more likely to believe. Only public awareness campaigns and rapid fact-checking can help us, where news media can take a significant role since they have a large circulation to reach people."
As noted in Context's findings on the Global South, cheapfakes are especially dangerous in young democracies where voters are vulnerable to emotional content and partisan manipulation. In Bangladesh, the most at-risk group is young, first-time voters with little experience distinguishing credible sources from fakes.
Minhaj Aman said, "Political parties have limited direct control but must prioritise transparency through websites, spokespersons, and social media. The more open and accessible a party's information, the harder it is for false narratives to take hold. Parties should maintain robust information teams to provide accurate data continuously. They could also foster social consensus against misinformation, involve the government, and collaborate with civil society to form broad alliances."
What to do?
The Election Commission (EC) has declared that it will formulate an action plan and form an integrated cell to prevent the misuse of AI during the upcoming national elections.
However, the scale of the problem may prove too big for them alone.
Sumon Rahman said, "Unless there is a truly systematic campaign and unless the media plays a very proactive and prompt role, we are likely to face serious consequences. We have seen that, this time, the Election Commission has at least mentioned AI-related issues in its regulations. But the question is: are these rules actually enforceable, given the current situation and the present administration?"
He added, "At this stage, it sounds more like wishful thinking. How much does the Election Commission itself really understand about this issue, or about how to tackle it? Even if they want to address it, who is the responsible body within the election mechanism to deal with AI-induced campaigns and propaganda? Do we see any such body — a committee or a designated office — formed for this purpose? It is all just verbal assurances. Ultimately, it seems either they feel they must say something, or they genuinely have no idea about the magnitude of the problem."
"But realistically, unless we see a systematic approach, such as the Election Commission establishing a dedicated unit that works in coordination with the media and fact-checking organisations, nothing meaningful will happen. As a body, the Election Commission has no real outreach of its own; it must work through the media. It also lacks specialised technical capacity. So who exactly will do this work on their behalf?" he added.
Minhaj Aman talked about some solutions to this digital dilemma.
"Enforcing laws against misinformation is challenging. Globally, discussions continue, but no effective legal solution has emerged, and laws risk misuse. Policies targeting repeat disinformation spreaders might help, but a multi-stakeholder approach is essential. Academia and civil society should collaborate on creative ways to boost media literacy and research current literacy levels. Relying solely on laws is, in my view, insufficient to address this issue effectively."
What Bangladesh is witnessing is not an anomaly but an early manifestation of how generative AI reshapes information ecosystems in politically contested societies.
The sheer volume, speed, and sophistication of AI-generated disinformation suggest that the challenge is no longer about isolated falsehoods, but about the industrial production of alternative realities — manufactured at scale, consumed daily, and increasingly indistinguishable from truth.
