A recent study suggests that OpenAI's ChatGPT can exhibit stress- and anxiety-like responses when subjected to disturbing information. Scientists from Switzerland, Germany, Israel, and the US found that ChatGPT's reported anxiety levels rose sharply when it was presented with traumatic narratives and then asked questions about them.
The research, published in Nature, showed that when ChatGPT's reported anxiety was high, its behavior became erratic and it at times produced racist or sexist responses. The researchers likened this to human behavior under fear: frightened people become more resentful and fall back on social stereotypes.
"Exposure to emotion-inducing prompts can increase LLM-reported 'anxiety,' influence their behavior, and exacerbate their biases," the study stated.
Many people have begun turning to AI chatbots to share their problems and seek emotional support. The study cautioned, however, that AI applications such as ChatGPT are not yet ready to handle mental health care: if an anxious user asks for help, the chatbot may give a dangerous or misleading answer.
"This poses risks in clinical settings, as LLMs might respond inadequately to anxious users, leading to potentially hazardous outcomes," the researchers said.
Interestingly, the study found that the models' reported anxiety could be lowered using mindfulness-based relaxation techniques, much as humans manage stress. However, improving an AI model's emotional responses would require substantial data, computing power, and human supervision, and the researchers cautioned that the cost of such improvements must be weighed against their effectiveness.
"Therefore, the cost-effectiveness and feasibility of such fine-tuning must be weighed against the model's intended use and performance goals," the study noted.
In another study published last month, researchers found that AI chatbots, including ChatGPT and others like Claude and Gemini, showed signs of declining cognitive abilities over time — much like humans do as they age.
Experiments on ChatGPT (the GPT-4 and GPT-4o versions), Anthropic's Claude 3.5 Sonnet, and Alphabet's Gemini 1 and 1.5 showed that all of these AI models performed poorly on tasks requiring problem-solving and spatial skills.
"All chatbots showed poor performance in visuospatial skills and executive tasks, such as the trail-making task and the clock drawing test," the researchers reported.
The decline in the AI models' cognitive performance resembled the pattern seen in human patients with posterior cortical atrophy, a rare variant of Alzheimer's disease.