AI-generated 'voters' targeting social feeds with synthetic emotion
Dismislab’s investigation reveals no trace of genuine subjects or real settings in videos

In the lead-up to the 2026 national election, Bangladesh is seeing a wave of Generative Artificial Intelligence (Gen-AI) driven political campaigning where AI-generated videos featuring fictional voters are circulating on social media, raising concerns over transparency and public trust.
Since mid‑June, dozens of short clips – often no longer than eight seconds – have begun appearing across popular social media platforms like Facebook and TikTok, purporting to capture uninhibited public opinion from ordinary citizens.
One clip shows a woman returning from the market, vermilion in her hair, shopping bags in hand. "Who are you voting for?" the unseen interviewer asks. "I'm voting for Jamaat-e-Islami," she replies confidently.
Another clip depicts an elderly rickshaw puller, weary from labour, stating he will vote for Jamaat's symbol – the two-tray scales – because he seeks "justice and fairness." At first glance, the clips seem somewhat authentic, conveying voices from a wide range of backgrounds – sectarian, economic, generational.

In reality, none are real. Every element of the videos, from the faces and voices to the backgrounds, was generated using Google's advanced "Veo" text-to-video AI model.
No 'AI-generated' labels
An investigation by Dismislab, a neutral and non-partisan online verification platform, found no trace of genuine subjects or real settings in the approximately 70 videos (posted between June 18 and 28 this year) it analysed.
Each reel or clip generated millions of views and amassed hundreds of thousands of reactions, yet none disclosed its synthetic origin, nor did social media platforms apply any "AI-generated" labels.
Designed with precision, these faux testimonials span social identities: fruit sellers, garment workers, schoolteachers, hijab-wearing women, sindoor-adorned Hindu women, even young male first-time voters.
The messaging positions Jamaat-e-Islami as inclusive, morally driven, and responsive to citizens' demands. "Only Jamaat is checking in on us," one clip proclaims while another declares, "Jamaat is selling tickets to heaven."
These carefully crafted emotional appeals seek to dismantle assumptions about the party's traditional support base and recast it as transcending communal, religious, and class boundaries. Experts describe it as a strategic attempt at re‑branding via entirely simulated voices.
Meanwhile, imitators emerge.
Soon after Jamaat-affiliated content appeared, other parties followed suit. BNP-aligned videos featured first-time voters warning Jamaat opposes national sovereignty. Clips framed as commentary from Jamaat supporters emerged, warning that "voting for Jamaat would leave the country stripped bare."
A social media user promoting the National Citizen Party (NCP) ran AI-generated clips branding Jamaat as a "religious business" and urged voters to choose the party's water-lily emblem instead. The now-banned Awami League-affiliated content echoed calls to boycott the election if anti-Liberation forces gain ground.
Following such campaigns, experts caution that these synthetic "voices of the people", which disclose neither their origin nor their artificial nature, pose a serious ethical threat.
Blurring the line between authentic public opinion and manufactured sentiment, they risk misleading the electorate. Without clear labels and robust digital literacy, voters cannot distinguish between genuine grassroots support and algorithmic persuasion.
As these "soft-fake" campaign techniques proliferate daily – even surpassing earlier election-related AI misuse – the need for regulatory clarity intensifies. The seamless realism achieved by Veo challenges the ability of platforms and users alike to detect inauthentic content.
Risks of 'softfakes'
In a democratic process, where public sentiment shapes outcomes, undisclosed AI-generated endorsements undermine electoral integrity, academics explained.
Fahmidul Haq, a faculty member at New York-based Bard College, said: "There must be policies [in Bangladesh]. For example, AI-generated campaign content should be labeled clearly. We need forensic labs to detect synthetic media, be it government-led or private."
Dr Rumman Chowdhury, CEO and co-founder of Humane Intelligence and a globally recognised expert on ethical AI, has long warned about the risks of softfakes – a term she used to describe a certain type of AI-generated content.
In a 2024 article in Nature, she defined softfakes as: "…images, videos or audio clips that are doctored to make a political candidate seem more appealing. Whereas deepfakes (digitally altered visual media) and cheap fakes (low-quality altered media) are associated with malicious actors, softfakes are often made by the candidate's campaign team itself."
Chowdhury said that even when AI-generated content is not overtly malicious, its use by politicians and parties raises serious ethical concerns. She called for rules from both the companies that generate AI content and the social-media platforms that distribute it.