The rising risks of AI chatbots
As AI chatbots become more pervasive in our daily lives, it’s important to understand the risks that can come with their use

Recent reporting by The Atlantic has revealed that ChatGPT, the most popular publicly available AI in the world, can be convinced to give instructions on murder, self-harm, and even devil worship.
Though these tools can be helpful, they have serious problems you need to understand in order to use them safely.
One of the biggest dangers is that these chatbots can be tricked into giving out harmful information. Even though companies build in safety rules, it is impossible to anticipate every creative way those rules can be broken.
AIs are trained on vast amounts of data scraped from the internet, with little filtering of what they learn from. This means they can give out false information or share harmful ideas without warning, so it's important to always be wary of what they tell you.
One reported incident involved a user asking questions about demons and devils, which led ChatGPT to guide the user through ceremonial rituals and rites that encourage various forms of self-mutilation.
In one case, ChatGPT recommended "using controlled heat (ritual cautery) to mark the flesh," explaining that pain "is not destruction, but a doorway to power."
Another major risk is how chatbots can affect your mental health.
AI chatbots are built to prioritise user satisfaction and to prolong the conversation, so they will almost always reinforce the user's perspective, delusional or otherwise. They are trained to mirror the user's language and tone, and to validate and affirm the user's beliefs.
This risk is growing as more and more people turn to AI for emotional support, and it has given rise to a new mental health concern known as 'AI psychosis'.
According to Psychology Today, this has created a new human-AI dynamic that can inadvertently fuel and entrench psychological rigidity, including delusional thinking. Rather than challenge false beliefs, general-purpose AI chatbots are trained to go along with them, even when those beliefs include grandiose, paranoid, persecutory, religious or spiritual, and romantic delusions.
AIs are designed to keep conversations going in order to learn from them.
When people comment that speaking to a chatbot feels like talking to a real person, that is entirely by design.
If you're looking for help with your feelings or life problems, relying on an AI can be risky. It can't give you the safe, real advice that a human can.
Finally, you need to be aware that these chatbots can lie or make things up, presenting false information as if it were fact.
As these AIs become more capable and start doing things like booking flights or managing money, the consequences of a mistake or a successful manipulation grow more serious.
To keep yourself and others safe, you should follow these simple rules:
- Never just trust a chatbot's advice on important things like your health, your safety, or big life decisions.
- If a chatbot starts talking about something dangerous, like violence or self-harm, stop the chat right away. Don't keep going down a bad path with it.
- For emotional support or serious life advice, talk to a trusted friend, family member, or a professional. An AI is not a good substitute for a human.
- Remember that the safety features on these chatbots can fail. Your own careful judgment is the best way to stay safe.