Elections in the age of Artificial Intelligence
What happens to democracy when Artificial Intelligence can convincingly imitate humans? From elections to public consultations, AI is blurring the line between civic participation and manufactured consent
Elections, once shaped by rallies, leaflets and television debates, are now increasingly contested in a space where truth itself can be generated, manipulated and scaled by machines. The panel on "AI and Governance in Asia", hosted by International IDEA, asked how deeply and how dangerously artificial intelligence has already affected democracy.
Across the world, generative AI has moved from novelty to infrastructure with startling speed. Within two months of its launch, ChatGPT reached an estimated 100 million monthly users, making it the fastest-growing consumer application in history at the time. The technology is already reshaping journalism, medicine and finance. It has also opened a new and volatile chapter for elections.
AI can lower barriers to civic participation. Asking a chatbot how to navigate bureaucracy or draft a letter to an elected representative could, in theory, strengthen democratic engagement. But the darker reality is that the same tools can generate misinformation at an industrial scale, threatening representation, accountability and, ultimately, public trust in any democracy.
"There is an urgent need for preparedness to ensure elections remain free, fair, and secure from AI influence," warned Leena Rikkilä Tamang, International IDEA's Asia and the Pacific Regional Director. Electoral management bodies must not only mitigate risks but also learn how to harness AI's benefits responsibly, she argued.
The risks are already visible. Since the rise of ChatGPT in 2022, elections have become what Antonio Spinelli, senior advisor at International IDEA, called a "major testing ground" for generative AI.
"In a very short time, AI has shifted from being an experimental novelty to being embedded in how information is shaped, received, and disseminated," he said, describing it as a "dual sword", with the negative edge proving far more destructive.
Fake audio clips have circulated during elections in India. In Bangladesh, women politicians have been targeted with sexualised deepfakes designed to intimidate and silence. In South Korea, fabricated campaign videos have distorted political narratives. These tactics do not merely misinform; they corrode trust, amplify polarisation and undermine confidence in electoral processes.
What makes this moment particularly dangerous is how difficult AI-generated content is to detect. Spinelli noted that AI-written propaganda can be just as believable as content produced by humans.
In one large field experiment in the United States, researchers trained GPT-3 to generate hundreds of advocacy emails and sent them to 7,200 state legislators. On half the issues tested, lawmakers could not statistically distinguish between AI-generated and human-written messages.
On the rest, the difference in response rates was negligible, just two percentage points. The implication is clear: malicious actors can now manufacture "constituent sentiment" at scale, misleading representatives about what voters actually want.
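A rough calculation shows why a gap that small is so hard to act on. The sketch below runs a standard two-proportion z-test on invented numbers (600 legislators per arm for one issue, response rates of 20 and 18 per cent); it illustrates the statistics, not the study's actual data:

```python
from math import sqrt

# Invented numbers, not the study's data: for one issue, 600 legislators
# received human-written letters and 600 received AI-generated ones,
# with response rates of 20% and 18% respectively.
n_human, n_ai = 600, 600
p_human, p_ai = 0.20, 0.18

# Standard two-proportion z-test: pool the rates, compute the standard
# error of the difference, and see how many standard errors the gap spans.
p_pool = (p_human * n_human + p_ai * n_ai) / (n_human + n_ai)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_human + 1 / n_ai))
z = (p_human - p_ai) / se

print(f"gap = {p_human - p_ai:.2f}, z = {z:.2f}")
# Prints z = 0.88, well short of the ~1.96 needed for significance at
# the 5% level: a two-point gap simply disappears into the noise.
```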
This threat extends beyond elections into governance itself. Public consultation processes, long defended as a democratic counterweight to unelected bureaucratic power, are increasingly vulnerable.
In 2017, bots flooded the US Federal Communications Commission with more than eight million identical comments on net neutrality. That campaign was detected because the messages were not unique. Today's AI tools can effortlessly generate millions of distinct submissions, rendering genuine public preferences almost impossible to identify.
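It is worth spelling out why the 2017 campaign was catchable where an AI-driven one would not be. The minimal sketch below, using invented comment strings, flags exact duplicates by hashing normalised text, the kind of check that exposed the identical submissions; paraphrased variants sail straight past it:

```python
import hashlib
from collections import Counter

def fingerprint(comment: str) -> str:
    """Hash a comment after crude normalisation, so trivially identical
    submissions collapse to the same fingerprint."""
    normalised = " ".join(comment.lower().split())
    return hashlib.sha256(normalised.encode()).hexdigest()

# Made-up submissions: the first three mimic a 2017-style copy-paste
# campaign; the last two mimic distinct, machine-phrased variants.
comments = [
    "I support net neutrality. Do not repeal it.",
    "I support net neutrality.  Do not repeal it.",
    "i support net neutrality. do not repeal it.",
    "Please keep the open-internet rules in place.",
    "Repealing these protections would harm consumers.",
]

counts = Counter(fingerprint(c) for c in comments)
duplicates = sum(n for n in counts.values() if n > 1)
print(f"{duplicates} of {len(comments)} submissions are exact duplicates")
# The copy-paste wave is trivially flagged; the two paraphrases are not,
# which is precisely the gap generative models exploit at scale.
```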
The danger is not confined to Western democracies. In Southeast Asia, governments are rapidly adopting AI systems. Peter Brindle, Vice President of the AI Asia Pacific Institute, pointed to AI's "enormous ability to strengthen institutions", from multilingual public services to sentiment analysis and fact-checking. Singapore has already developed governance frameworks that have influenced ASEAN-wide approaches.
Thailand, meanwhile, is experimenting with its own principles, though critics warn these may drift away from a rights-based model.
Dr Janjira Sombatpoonsiri of the German Institute for Global and Area Studies described how state-backed influence operations in Thailand benefit from vast funding and access to behavioural data, creating what she termed an "integrated ecosystem of repression". Activists face not only digital manipulation but also emotional and psychological strain, exacerbated by an imbalance of technological resources.
Globally, the erosion of trust may be the most profound consequence. As AI-generated text, images and videos proliferate, citizens increasingly struggle to know what or whom to believe. Trust in media is already fragile; flooding the information ecosystem with inauthentic content risks pushing societies towards cynicism or partisan nihilism.
Sarah Kreps, writing in the Journal of Democracy, argues that "when objective reality recedes, people rely more heavily on partisan shortcuts, intensifying polarisation and weakening democratic accountability."
Governments and institutions are not standing still. One line of defence lies in technology itself. The same neural networks that generate AI content can also be trained to detect it, identifying linguistic patterns that signal machine authorship.
Detection tools are proliferating, though their accuracy remains imperfect, and they must constantly adapt to evolving models.
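The basic shape of such a detector is simple to sketch. The toy example below, with invented training snippets and scikit-learn for brevity, trains a logistic regression on character n-gram frequencies, one kind of surface linguistic pattern detectors look for; real systems are vastly larger, and no more immune to the accuracy caveats above:

```python
# Toy sketch of a stylometric AI-text detector: character n-gram
# frequencies fed to logistic regression. The training texts are
# invented, and four examples make this a demonstration, not a tool.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_texts = [
    "honestly the queue at the polling station was mad this morning",
    "my gran always votes early, rain or shine",
]
ai_texts = [
    "It is important to note that voting is a cornerstone of democracy.",
    "In conclusion, civic participation offers numerous benefits.",
]

X = human_texts + ai_texts
y = [0] * len(human_texts) + [1] * len(ai_texts)  # 0 = human, 1 = AI

detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
detector.fit(X, y)

sample = "It is important to note that elections have numerous benefits."
print(f"P(AI-written) = {detector.predict_proba([sample])[0][1]:.2f}")
# The point is the shape of the approach, and why such classifiers must
# be constantly retrained as the generating models evolve.
```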
Platform responsibility is another line of defence. Generative AI companies are increasingly acknowledging their role as political actors, not just technology providers. Queries about sensitive topics are already restricted, and firms are working with external researchers to define ethical boundaries.
In the United States, seven major AI companies have committed to voluntary safeguards in coordination with the Biden administration, a tentative but significant step.
Rachel Judhistari of the Wikimedia Foundation urged policymakers to see AI not merely as a commodity but as a public good, vital for preserving Indigenous languages and inclusive governance. Former Thai prime minister Abhisit Vejjajiva went further, arguing that ASEAN must act as a bloc to "counter the tech giants", since individual countries lack sufficient leverage.
Citizens, too, can learn to recognise the patterns of inauthentic content. Cross-checking sources, verifying viral material and adopting a "trust but verify" mindset are no longer optional civic habits. "Seeing is believing" no longer holds in an age of deepfakes.
