Why Bangladesh must legislate AI before it’s too late
How Bangladesh shapes its AI framework today may determine the nation’s future. It is high time we figure out how we want to use AI to make the most of it and minimise risks

Recently, in an interview with The Diary of a CEO, Nobel Prize winner Geoffrey Hinton was asked why he had left a leading position at Google. He replied, "So that I could talk freely at the conference." The follow-up question was, "What did you wanna talk about freely?" The reply was, "How dangerous AI could be."
Hinton is often hailed as the "Godfather of AI" in the West. He is widely regarded as one of the key minds behind the rapid growth of Artificial Intelligence. However, despite his significant contributions, he stepped away from some of the most prestigious positions in the tech world, repeatedly warning about the dangers of AI.
History will determine whether his concerns arose from realisations during his research, from observing the actions of big tech companies, or simply from a sense of guilt – much like Alfred Nobel, the inventor of dynamite.
The irony is that while AI pioneers talk about humanity's very existence, we fight over whether AI will replace jobs. It is still uncertain exactly what AI will take over, but some disruption will definitely happen. As a nation, it is high time we figure out how we want to use AI to make the most of it and minimise its risks.
In 2024, Bangladesh issued the first draft of the National AI Policy, grounded in principles including equity, accountability, safety, sustainability, and human rights. This document, however, consists of vague words with little real regulation.
Calling for 1,000 AI start-ups by 2025 appears more ambitious than realistic when the domestic ecosystem has only a small number of mature companies. The draft also says little about data privacy and offers no consistent rules, even as the country still debates a comprehensive data protection law.
The policy was drafted to align with the so-called 'Vision 2041'. Now, however, there is a need to manage AI in a more down-to-earth way. Bangladesh must pass a risk-based AI law with a practical roadmap. The country should also set up a national fact-checking and content-verification platform to help people deal with AI-generated materials.
We have already seen how many people use image- and video-generation tools to humiliate and degrade others. More concerning is that women are the primary targets. False information, propaganda, and deepfake pornography spread across social media every day. People have even been reckless enough to create misleading AI-generated videos of the late Flight Lieutenant Md Towkir Islam.
Artificial Intelligence is moving much faster than humanity predicted. If things continue at this pace, we will see AI agents used to manipulate people, leading to significant financial losses.
Beyond the social debates, the country's software industry is caught in a defensive stance. Instead of exploring AI as a partner in progress, many in the sector remain preoccupied with the fear of losing jobs. Critics argue this reluctance reflects either a comfort in the old ways or a misplaced confidence that shields firms from change.
Others point to the leadership of tech companies, suggesting a deeper issue: a hesitation to invest in their own workforce's technical growth. Without that investment, Bangladesh's developers risk being sidelined in a global tech community moving swiftly ahead with AI. In addition, we have already lost a significant share of work on freelancing marketplaces like Fiverr and Upwork, as agentic AI has taken over routine tasks like data entry and web scraping.
In academia, we always struggle with budget constraints. Both public and private universities lack clear research goals and face resource limitations, largely because there is no broader vision or guidance.
AI certainly comes with potential risks, but we must act to keep those risks minimal. Setting the current draft AI policy aside, we can start with at least a basic Artificial Intelligence Whitepaper. The EU has already enacted the landmark Artificial Intelligence Act (EU AI Act), which came into force on 1 August 2024. It establishes a risk-based regulatory framework – classifying AI applications from "unacceptable" to "minimal risk", with specific rules for general-purpose AI models.
South Korea and Brazil have passed AI acts of their own. Countries like Canada and the United States are in the process of enacting AI legislation. The fact is, AI is changing so drastically that it is challenging to keep pace and write a binding law. The United Kingdom, for instance, published an AI whitepaper in 2023 as a blueprint for policy.
As a nation, Bangladesh must move swiftly to shape an AI Act before it is too late. The first step could be to bring together diverse voices – government officials, political leaders, academics, industry professionals, and scientists, both at home and abroad. From these discussions, the country can draft a Whitepaper to create a shared vision and framework. After careful evaluation and consultation, that blueprint could evolve into a comprehensive AI Act capable of guiding innovation while safeguarding the public interest.
Given the country's experience with the ICT Act, it would be wise to pursue the AI Act under the interim government, while ensuring the consensus of all political parties. The risk of such legislation being used for political advantage is always present, which is why a broad-based agreement is crucial. As this is part of broader reforms, the process deserves careful attention. How Bangladesh shapes its AI framework today may, in many ways, determine the nation's future.
Regardless of our path, specific points must form the backbone of Bangladesh's AI framework. It should begin with data sovereignty, ensuring the nation controls its digital assets. Clear AI roadmaps – spanning five and ten years – must be designed to prepare both academia and the tech industry for the future.
At the same time, robust guidelines are needed for international and domestic companies to handle data privacy responsibly. Firms should also be required to invest in AI research within Bangladesh and build collaborations with global leaders. Last but not least, the law must have teeth: setting out substantial penalties for misusing AI, whether committed by individuals or corporations.

Mohammad Jafrin Hossain is a recent postgraduate in Cybersecurity with a specialisation in Artificial Intelligence from Florida International University.
Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views and opinions of The Business Standard.