How to integrate AI into education, ethically!
In the age of AI, the future of education doesn’t lie in resistance; it lies in reimagination

Since the end of 2022, when OpenAI's ChatGPT took the world by storm, there has been a global frenzy of research into its use and misuse in both education and industry.
AI chatbots are not new; their origins go back to the 1960s. But it was the release of GPT-3.5, freely available to the public, that revolutionised access. ChatGPT became so popular that Google issued a 'code red', fearing it would replace traditional search engines. Within a week of launch, its user base grew faster than Facebook's or YouTube's had in their early days. Google responded with Bard (now Gemini), Microsoft launched Copilot, and China's DeepSeek entered the race, reshaping the global AI landscape.
Today, AI chatbot usage is booming. By 2030, the market is projected to hit $3.4 trillion, growing at 20% per year.
What makes these tools so powerful? They're built on large language models that use natural language processing and neural networks, loosely modelled on how the human brain processes information. The result? A tool that communicates like a person. For students, it's free, fast, accessible, and eerily undetectable.
Yes, undetectable. More on that later.
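To make the mechanics concrete: under the hood, a chatbot simply predicts a plausible continuation of a conversation, and the entire interaction fits in a few lines of code. Here is a minimal sketch, assuming the openai Python package and an API key; the model name is an illustrative example, not part of our research:

```python
# Minimal sketch: how a student might query a chat model.
# Assumes the `openai` Python package (v1 interface) and an
# OPENAI_API_KEY in the environment; the model name is an example.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    messages=[
        {"role": "system", "content": "You are a study assistant."},
        {"role": "user", "content": "Summarise photosynthesis in 100 words."},
    ],
)
print(response.choices[0].message.content)
```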
In 2023, while much of the academic world focused on AI's functional benefits, my research team zeroed in on its psychological impacts, particularly on creativity. We found a worrying trend: students who frequently used AI chatbots produced essays that were uniform and unimaginative, and their creativity had declined compared to students who didn't rely on AI.
That led us deeper into understanding why students turn to AI.
In 2024, our studies revealed two dominant reasons: it saves time, and it's considered undetectable. More worrying, we found a root cause that's rarely discussed: traditional classroom teaching is boring and overly theoretical. AI, on the other hand, offers quick answers in a format students find engaging.
This overdependence, however, has side effects. Students relying too heavily on AI tools performed poorly in written exams. Their memorisation skills weakened, and, more troublingly, their sense of academic integrity eroded. They weren't just cheating; they were becoming indifferent to the idea of originality.
Many institutions leaned on AI detection tools like Turnitin to fight this trend. But our experiments across 14 popular AI detectors, including Turnitin, GPTZero, and others, revealed shocking results: none of them were reliably accurate. AI-generated content often slipped through, flagged as human-written, and vice versa.
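For concreteness, the kind of check we ran can be expressed in a few lines. Below is a minimal sketch, where `detect` is a hypothetical stand-in for any detector's interface and `essays` is a labelled sample; neither corresponds to a real product's API:

```python
# Illustrative sketch of benchmarking an AI detector.
# `detect` is a hypothetical stand-in for a detector's interface;
# `essays` is a list of (text, is_ai_generated) pairs.

def evaluate(detect, essays):
    """Return false-positive and false-negative rates for a detector."""
    fp = fn = human = ai = 0
    for text, is_ai in essays:
        flagged = detect(text)  # True if the detector says "AI-written"
        if is_ai:
            ai += 1
            if not flagged:
                fn += 1  # AI text that slipped through as human
        else:
            human += 1
            if flagged:
                fp += 1  # human text wrongly flagged as AI
    return fp / max(human, 1), fn / max(ai, 1)
```

High rates on either side are what "not reliably accurate" means in practice: AI text passing as human, and honest work wrongly flagged.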
Teachers, too, expressed frustration. Over 100 educators from across continents shared a common complaint: they didn't have 'enough time' to properly assess students' work. Strict institutional policies and heavy workloads limited their ability to evaluate scripts for AI influence.
However, in a controlled experiment, experienced teachers who were familiar with how AI writes outperformed detection software in identifying machine-generated work. They noticed subtle patterns, inconsistencies, and writing styles that software missed.
But should our response be to suppress AI use altogether?
When we spoke with faculty members from top universities like the University of Nottingham, University of Arizona, University of Pennsylvania, and others, they echoed the same concern: AI chatbot use is spiralling out of control, especially at postgraduate levels, and it's quickly trickling down to undergraduate classrooms.
Then, I came across a quote from Shiv Khera: "Successful people don't do different things. They do the same things differently." That was our eureka moment.
Instead of resisting AI, what if we integrated it, ethically, into education?
We developed an 'AI-blended learning model', combining self-learning with classroom engagement. Here's how it works:
Each course is divided into core and auxiliary content. About 30% of theoretical materials are marked as "Learn through AI"—students explore these topics using AI chatbots. Class time is reserved for advanced, application-based learning. A brief Q&A session ensures students are actually learning, and teachers can assign grades for participation and understanding.
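To make the split concrete, here is a minimal sketch of how a syllabus might be partitioned under the model. The topic names, the data structure, and the exact 30% share are illustrative placeholders, not our actual course design:

```python
# Illustrative sketch: partitioning a course into AI-led and
# classroom-led topics under the AI-blended learning model.
# Topic names and the 30% share are placeholders.

def split_course(topics, ai_share=0.30):
    """Mark roughly `ai_share` of theoretical topics as 'Learn through AI'."""
    theory = [t for t in topics if t["kind"] == "theory"]
    cutoff = round(len(theory) * ai_share)
    ai_led = {t["name"] for t in theory[:cutoff]}
    for t in topics:
        t["mode"] = "Learn through AI" if t["name"] in ai_led else "Classroom"
    return topics

course = [
    {"name": "Definitions and history", "kind": "theory"},
    {"name": "Core concepts", "kind": "theory"},
    {"name": "Worked case study", "kind": "application"},
    {"name": "Background reading", "kind": "theory"},
]
for topic in split_course(course):
    print(f"{topic['name']}: {topic['mode']}")
```

Application-based topics always stay in the classroom; only a slice of the theoretical material is delegated to AI-assisted self-study, with the Q&A session closing the loop.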
The results were clear. Students became more interested in their classes and found them more engaging and meaningful. Their academic performance improved as they received personalised feedback and understood the lessons better. At the same time, they used AI tools more responsibly, avoiding unethical shortcuts. Most importantly, they covered 25% more content in the same amount of time.
The response from global scholars has been overwhelmingly positive. But why am I sharing this now? Because of recent headlines: 'Bangladesh is now introducing PhD programmes in private universities.' It's a brilliant initiative. But before scaling such programmes, we must pause and ask: Are we truly ready? Are our teaching methods up to global standards? Have we prepared ourselves to mitigate AI-related risks in advanced education?
As the word 'reform' echoes through Bangladesh following the July uprisings, we must ask a different question: are we reforming how we teach? Because in the age of AI, the future of education doesn't lie in resistance; I believe it lies in reimagination.
Reimagination means embracing pedagogies that integrate technology meaningfully and foster creativity and critical thinking among students. It means building a curriculum that not only accommodates AI but also equips learners with the skills to thrive in an ever-evolving digital landscape.