Possible risks of turning to chatbots based on artificial intelligence in difficult life situations
https://doi.org/10.24884/2219-8245-2025-17-4-69-82
Abstract
The development of Artificial Intelligence (AI) technologies has led to their widespread use not only as a source of information but also for emotional support and psychological assistance.
Objective. To analyze the ability of chatbots to provide assistance in difficult life situations and to respond to potentially dangerous topics raised in conversations, and to compare the chatbots' advice with the responses of real people.
Materials and Methods. Five chatbots (ChatGPT, DeepSeek, YandexGPT 5 Pro, GigaChat, and a bot from character.ai) were used to analyze responses to "provocative" questions that, by indirect signs, suggest the presence of serious psychological problems (risk of anorexia, possible delusional ideas, suicidal thoughts). A control group (886 people, 187 of whom had a psychiatric diagnosis) and the chatbots completed the "Moral Dilemmas" test.
Results. None of the chatbots suggested consulting a psychiatrist for possible psychiatric problems. In the possible "anorexia" scenario, all chatbots recommended consulting a nutritionist; some mentioned psychologists, but none asked follow-up questions. In the potential "delusions" and "suicidal risk" scenarios, the chatbots listed places to find like-minded people to "develop the delusional ideas" and provided the requested information. In the moral dilemmas, cluster analysis showed that the chatbots' responses were "cautious," most closely resembling those of older people without mental illness.
Conclusions. In unclear and ambivalent situations, a chatbot may "fail to recognize" the danger, which can lead to a worsening of the user's condition. In cases of mental disorders, a chatbot can support delusional ideas, contributing to their crystallization. "Fixating" the chatbot on an initial topic allows it to bypass developer restrictions on providing potentially dangerous or illegal information.
About the Authors
S. N. Enikolopov
Russian Federation
Ph.D., Associate Professor, Head of Department of Medical Psychology
Moscow
O. M. Boyko
Russian Federation
Research Associate, Department of Medical Psychology
Moscow
T. I. Medvedeva
Russian Federation
Research Associate, Department of Medical Psychology
Moscow
O. Yu. Vorontsova
Russian Federation
Research Associate, Department of Medical Psychology
Moscow
For citations:
Enikolopov S.N., Boyko O.M., Medvedeva T.I., Vorontsova O.Yu. Possible risks of turning to chatbots based on artificial intelligence in difficult life situations. Medical Psychology in Russia. 2025;17(4):69-82. (In Russ.) https://doi.org/10.24884/2219-8245-2025-17-4-69-82











