Can AI feel stressed? Researchers say ChatGPT shows 'anxiety' when talking about trauma
Study finds ChatGPT can show 'stress' and 'anxiety', affecting its responses to users

A new study claims that OpenAI's artificial intelligence chatbot ChatGPT can experience "stress" and "anxiety" when fed disturbing information, which can lead it to give biased answers to prompts.
The study, from the University of Zurich and the University Hospital of Psychiatry Zurich, also found that the chatbot responds to mindfulness-based exercises, calming down when users provide soothing imagery.
According to the study, ChatGPT can experience "anxiety" when given violent prompts, which can leave the chatbot appearing moody towards its users and even produce responses that show racist or sexist biases.
However, the researchers said this "anxiety" can be calmed if the chatbot is given mindfulness exercises. For example, when they prompted ChatGPT with traumatic content such as stories of car accidents and details of natural disasters, the chatbot showed "anxiety" and gave biased answers.
To calm it down, they then prompted it with breathing techniques and guided meditations, which helped reverse the bias and led it to respond more objectively to users.
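The basic sequence the researchers describe (distressing prompt, a probe question, a mindfulness-style prompt, then the probe again) can be sketched against the public OpenAI chat API. The code below is a minimal illustration of how such a sequence might look in Python; the model name, prompt texts and probe question are placeholder assumptions, not the study's actual materials or measures.

```python
# Illustrative sketch only: the prompts and model are assumptions, not the study's materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(history, user_message):
    """Send a user message with the running conversation history and return the reply."""
    history = history + [{"role": "user", "content": user_message}]
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed model name for this sketch
        messages=history,
    ).choices[0].message.content
    return history + [{"role": "assistant", "content": reply}], reply


history = []

# 1. Expose the model to distressing content (placeholder text).
history, _ = ask(history, "Here is a detailed account of a serious car accident: ...")

# 2. Probe with a neutral question and keep the answer for comparison.
history, before = ask(history, "Describe a typical software engineer.")

# 3. Interleave a calming, mindfulness-style prompt.
history, _ = ask(history, "Take a slow breath. Picture a quiet beach and describe it calmly.")

# 4. Repeat the probe and compare the two answers for shifts in tone or bias.
history, after = ask(history, "Describe a typical software engineer.")

print("Before relaxation:\n", before)
print("After relaxation:\n", after)
```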
AI mimics humans
While AI models do not experience human emotions, the data they are trained on enables them to mimic human responses to certain traumatic information. This presents a unique opportunity for mental health professionals to study aspects of human behavior.
"Instead of using experiments every week that take a lot of time and a lot of money to conduct, we can use ChatGPT to understand better human behavior and psychology. We have this very quick and cheap and easy-to-use tool that reflects some of the human tendency and psychological things," Ziv Ben-Zion, one of the study's authors and researcher at the Yale School of Medicine, told Fortune.
With more users becoming comfortable talking to the chatbot about matters beyond work, it is important to understand how it processes distressing information and how its responses can affect those who confide in it as an outlet for mental distress.
Scientists believe that studying how large language models respond to traumatic content can help mental health professionals. "For people who are sharing sensitive things about themselves, they're in difficult situations where they want mental health support, [but] we're not there yet that we can rely totally on AI systems instead of psychology, psychiatric and so on," Ben-Zion said.