Balancing justice and algorithm: AI, accountability, and the future of Bangladesh’s judiciary

The idea of artificial intelligence stepping into the courtroom no longer sounds like science fiction. Across various parts of the world, judicial systems are slowly waking up to the possibilities — and the risks — of allowing algorithms to play a role in how justice is delivered. In Bangladesh, that conversation is only just beginning to gather momentum.
With the draft National AI Policy (2024) outlining a vision for tech-driven reform under Section 4.1.5, the judiciary now finds itself part of a much larger narrative, one that pits efficiency against ethics, and convenience against constitutional conscience.
This vision extends far beyond e-filing or video hearings. AI could scan documents in seconds, translate judgments into Bangla, and even predict case outcomes. It's a picture of a justice system that moves faster, works smarter, and, depending on how it is built, perhaps excludes fewer people. For a system that has long struggled with backlogs and accessibility, the appeal is clear.
And yet, the deeper one looks, the less straightforward it becomes. Courts, after all, are not factories. Justice is not something to be processed — it must be reasoned through, often painstakingly, with room for doubt, dissent, and moral nuance.
In such a context, the prospect of machines recommending verdicts or shaping legal arguments raises more than a few eyebrows. Technology might handle facts, but law is also about values — and values resist automation.
Still, there is no denying the potential benefits.
In some respects, Bangladesh has already taken cautious steps towards modernisation. The 'Amar Vasha' translation tool, introduced in 2021, demonstrated how even relatively simple digital tools could make court proceedings more comprehensible to the average person.
Similar developments have occurred elsewhere. India, for instance, is piloting its SUPACE system to assist judges by summarising lengthy case materials. It's not revolutionary, but it is functional. In such examples, AI does not replace judgment — it helps conserve it.
Yet while technology may streamline the paperwork, it cannot replace the human weight of a decision. This is where the conversation shifts from productivity to principle. If a judge leans too heavily on an algorithm — even subtly — in deciding a case, can that judgment still be considered independent?
More troublingly, what happens when the algorithm is trained on case histories riddled with the biases of the past? Justice systems worldwide have not been free from discrimination. If a machine learns from them, it may not innovate at all; it may simply replicate, invisibly and efficiently, everything that was previously wrong.
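To make the replication risk concrete, consider a deliberately simplified sketch. The data and group labels below are invented, and no real system works this crudely, but the dynamic is the same: a model "trained" on historical outcomes inherits each group's past conviction rate and hands it back as a prediction.

```python
# Illustrative only: a naive model learns each group's historical
# conviction rate and simply reproduces it. All data is invented.
from collections import defaultdict

# Hypothetical historical records: (group, convicted)
history = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def train(records):
    """Learn each group's historical conviction rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [convictions, total]
    for group, convicted in records:
        counts[group][0] += int(convicted)
        counts[group][1] += 1
    return {group: c / n for group, (c, n) in counts.items()}

model = train(history)

# The "prediction" for a new defendant is just the inherited base rate:
# yesterday's disparity re-emerges as tomorrow's recommendation.
for group, rate in sorted(model.items()):
    print(f"{group}: predicted conviction likelihood = {rate:.0%}")
# group_a: 75%, group_b: 25% -- the historical bias, replicated exactly.
```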
This is not merely a theoretical concern. Countries experimenting with judicial AI have taken markedly different approaches. In India, there is a clear emphasis on confining AI to background roles. Judges use it, but do not formally rely on it. Pakistan, meanwhile, has been more assertive.
In a significant ruling from April 2025, its Supreme Court issued a series of guiding principles: AI must not erode human dignity; judges must remain solely responsible for decisions; and every AI tool must undergo rigorous tests for fairness and accountability. That ruling might set a precedent not only for Pakistan but for other courts in South Asia grappling with similar questions.
The UK, working within a more developed legal-technology infrastructure, has taken yet another approach. AI is used more freely, but always with caveats. It is primarily employed for administrative tasks and drafting support.
When it comes to verdicts or legal interpretations, however, the system is designed to ensure human validation. Moreover, rules mandate transparency; litigants must be informed if AI played a role, and legal professionals are expected to be trained in its use.
Bangladesh, by comparison, remains in a stage where policy is more aspirational than operational. Efforts to digitise case records and introduce virtual hearings have been commendable, but the leap to predictive algorithms or AI-guided sentencing has not yet occurred. This gives policymakers a rare opportunity: to lay the groundwork before the tools become entrenched. It is a chance to think clearly before acting swiftly.
One logical starting point would be to treat judicial AI as a category of high-risk technology. Drawing on international data protection models, any system that plays even a partial role in determining legal outcomes should be subject to rigorous, pre-deployment scrutiny.
This means testing not just for functionality, but for bias — especially along lines that remain deeply sensitive in the Bangladeshi context: gender, religion, language, and economic status. Tools that fail these tests should not merely be improved; they should be kept out of court altogether.
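What such pre-deployment scrutiny might look like in practice can be sketched in a few lines. The metric below (a demographic parity gap), the 0.1 threshold, and the attribute names are illustrative assumptions, not a prescribed standard:

```python
# A hedged sketch of a pre-deployment bias audit over a tool's test outputs.
from collections import defaultdict

def parity_gap(predictions, attribute):
    """Largest gap in favourable-outcome rates across groups of one attribute."""
    totals = defaultdict(lambda: [0, 0])  # group -> [favourable, total]
    for record in predictions:
        group = record[attribute]
        totals[group][0] += int(record["favourable"])
        totals[group][1] += 1
    rates = [f / n for f, n in totals.values()]
    return max(rates) - min(rates)

def audit(predictions, attributes, max_gap=0.1):
    """Reject the tool outright if any sensitive attribute shows a large gap."""
    report = {a: parity_gap(predictions, a) for a in attributes}
    passed = all(gap <= max_gap for gap in report.values())
    return passed, report

# Invented test outputs from a candidate tool:
sample = [
    {"gender": "f", "religion": "x", "favourable": True},
    {"gender": "f", "religion": "y", "favourable": False},
    {"gender": "m", "religion": "x", "favourable": True},
    {"gender": "m", "religion": "y", "favourable": True},
]
ok, report = audit(sample, ["gender", "religion"])
print("deploy" if ok else "reject", report)  # this sample fails and is rejected
```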
Transparency is another non-negotiable. No litigant should be left uncertain about how a decision was reached — whether by a judge, a software system, or something in between. While some level of technical complexity is inevitable, there must be a way to explain, in plain terms, how an AI tool works, what data it draws on, and how it reaches its conclusions.
Trade secrets cannot be allowed to override fundamental rights. One way to institutionalise this might be through a Judicial Technology Commission — an independent body with the power to audit systems, issue certifications, and investigate complaints.
Equally important is ensuring that human judges remain in control. AI can offer insights, but it cannot own outcomes. Courts must therefore maintain a formal record of when and how AI has been consulted — not for bureaucracy's sake, but to preserve accountability. When decisions can alter the course of someone's life, there must be a way to trace the reasoning behind them — not just the result.
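One hypothetical shape such a record could take is sketched below. Every field name is a suggestion rather than an existing standard, and the example entry is entirely invented:

```python
# A sketch of the kind of consultation record a court might keep.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIConsultationRecord:
    case_id: str
    tool_name: str       # which system was consulted
    tool_version: str    # exact version, for later audits
    purpose: str         # e.g. "document summary", never "verdict"
    data_sources: list   # what the tool drew on, in plain terms
    output_summary: str  # what the tool actually suggested
    relied_upon: bool    # whether the judge made any use of it
    judge_notes: str     # the human reasoning stays on record
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIConsultationRecord(
    case_id="HYPOTHETICAL-2025-001",
    tool_name="summariser-x",  # invented name
    tool_version="1.4.2",
    purpose="summarise 400 pages of filings",
    data_sources=["case filings", "prior orders in the same matter"],
    output_summary="chronology of events; flagged two disputed dates",
    relied_upon=True,
    judge_notes="Summary used for orientation only; verified against originals.",
)
print(record)
```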
The existing legal framework will also need adjustment. The 'Use of Information and Technology in the Virtual Court Act, 2020' was crafted for a different digital era — one focused on connectivity, not cognition.
A revised version should incorporate safeguards around algorithmic decision-making, data usage, and rights of appeal. Defence lawyers, in particular, must have the right to question AI-generated inputs, just as they would scrutinise expert testimony. Fairness demands no less.
Then there is the question of privacy. As AI systems begin to draw on court files, transcripts, and sensitive case data, the risk of overreach increases. The justice system deals with some of the most intimate information imaginable — from family disputes to allegations of violence.
That data must be protected, anonymised, and governed by clear, enforceable rules. Bangladesh's forthcoming Data Protection Law must address this directly, or risk leaving a critical gap in its privacy framework.
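As a very rough illustration of what such rules imply in practice, here is a minimal redaction sketch. A real court-data pipeline would need far more robust entity recognition, including for Bangla text; the patterns and placeholder tokens below are assumptions only:

```python
# A minimal anonymisation sketch; patterns are illustrative, not exhaustive.
import re

REDACTIONS = [
    (re.compile(r"\b01\d{9}\b"), "[PHONE]"),               # BD mobile numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:Mr|Mrs|Ms|Md)\.?\s+[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*"),
     "[NAME]"),                                            # crude honorific match
]

def anonymise(text):
    """Replace obvious personal identifiers with neutral placeholders."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

print(anonymise("Complainant Mrs. Rahima Begum (01712345678, r.begum@example.com) "
                "alleges that ..."))
# -> "Complainant [NAME] ([PHONE], [EMAIL]) alleges that ..."
```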
Finally, the oversight mechanism itself requires careful consideration. Courts cannot regulate this space alone. Nor can technology companies, however well-intentioned. A broader governance body — perhaps modelled on Pakistan's National Judicial Policy Committee — could bring together judges, legal scholars, technologists, civil society, and user representatives. The aim would not be to stifle innovation, but to ensure it develops within ethical boundaries.
Implementation, of course, should proceed with caution. Beginning with low-risk domains — such as translation services in family courts, or triage systems for minor infractions — allows the system to learn without incurring high costs.
Each pilot programme should be closely monitored, publicly reported, and reviewed against measurable standards. The goal is not to create a perfect system overnight, but to avoid mistakes that could prove difficult to undo.
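A review against measurable standards could be as plain as the following sketch, where the metric names and target values are invented purely for illustration:

```python
# A sketch of reviewing a pilot against pre-agreed, measurable standards.
PILOT_STANDARDS = {
    "translation_accuracy": 0.95,  # minimum acceptable
    "avg_turnaround_days": 7,      # maximum acceptable
    "complaint_rate": 0.02,        # maximum acceptable
}

def review_pilot(observed):
    """Return (continue?, findings) for one monitored pilot period."""
    findings = {
        "translation_accuracy":
            observed["translation_accuracy"] >= PILOT_STANDARDS["translation_accuracy"],
        "avg_turnaround_days":
            observed["avg_turnaround_days"] <= PILOT_STANDARDS["avg_turnaround_days"],
        "complaint_rate":
            observed["complaint_rate"] <= PILOT_STANDARDS["complaint_rate"],
    }
    return all(findings.values()), findings

ok, findings = review_pilot(
    {"translation_accuracy": 0.97, "avg_turnaround_days": 5, "complaint_rate": 0.01})
print("extend pilot" if ok else "pause and investigate", findings)
```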
At its best, AI could support the judiciary by lightening its workload, accelerating routine processes, and making the legal system more accessible to ordinary people. But its worst-case scenario — a system where opaque tools make unchallengeable decisions — must be guarded against with vigilance. The challenge is not merely technical. It is moral.
Bangladesh has a rare window of opportunity to get this right. As the country edges closer to embedding AI into its legal institutions, it must do so with care, clarity, and a firm grasp of what is at stake.
The question is not whether AI will be used in courts — that is already a foregone conclusion. The real question is whether it will serve justice, or undermine it. That answer depends on the choices we make today.
Md Ibrahim Khalilullah is the Vice President of Bangladesh Law Alliance (BLA). Mail at: ibrahimkhalilullah010@gmail.com
Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the opinions and views of his employer or The Business Standard.