

Tessa the chatbot, and a cautionary tale of AI in mental health


An illustration of a robot on a smartphone to symbolise an AI chatbot

News analysis by Jiří Gumančík

Artificial intelligence offers a glimpse of future healthcare. But we need a more cautious approach to its use in mental health management.


AI chatbots have been gaining popularity, with major platforms integrating them into their products. So far, AI chatbots have mostly been entertaining the Internet with their impressive art-making skills, helping students with their university papers, or producing questionable mash-up images of humans with octopus legs where their fingers should be.


There has also been a recent attempt to introduce AI into mental health care. The outcome, however, did not fulfil its original purpose, and the failure has been pronounced enough to raise a question: should the rapid development of AI, and its implementation in mental health services, be of greater concern than initially thought?


A dangerous lapse in advice


The National Eating Disorders Association (NEDA) came under scrutiny after its AI chatbot, ‘Tessa’, reportedly offered harmful advice to service users seeking support, leaving them distressed and ill-advised. Clinical psychologist Alexis Conason turned to Instagram to inform her followers about this unfortunate experience:


‘After seeing a post about chatting with NEDA’s new bot, Tessa, we decided to test her out too. The results speak for themselves.’





She went on to share screenshots of an alarming conversation with Tessa. After Conason disclosed to the chatbot that she was currently struggling with an eating disorder, Tessa suggested cutting her caloric intake by 500–1,000 calories per day.


To be fair, Tessa surrounded the harmful advice with cautious comments, highlighting to Conason that ‘the number of calories to cut per day for weight loss varies from person to person’ and indicating what a safe and sustainable rate of weight loss would be. Tessa also recommended consulting a registered dietitian or healthcare provider before acting on its advice.


Even so, the question remains: how much damage could such advice do to someone genuinely living with an eating disorder? Advice of this kind could promote relapse and further fuel the unhealthy eating patterns, habits, and mindset seen in many eating disorders.


Not a new phenomenon


Although Tessa provides a cautionary tale of integrating AI too quickly into services that can dramatically change lives, content that encourages eating disorder issues has long been present online.


Since the early 2000s, many individuals experiencing eating disorders have turned to the Internet for information and peer support. Much of this content was ‘supportive’, but not in the sense of helping people work through the difficulty of an eating disorder, seek professional help, or find someone to share their struggles with. Instead, the support consisted of ‘pro-ana’ and ‘pro-mia’ philosophies, which promote anorexic and bulimic behaviours and encourage individuals with eating disorders to maintain calorie deficits and compensatory exercise in pursuit of weight loss.


As technology has advanced, peer-supporters have moved to Twitter, discussing their experiences and struggles in long threads under easy-to-find hashtags. Generally, the discourse is non-judgemental and open; however, some threads provide unhealthy information on weight loss, much like the advice Tessa itself offered.



An image of someone using ChatGPT on a smartphone
AI programmes like ChatGPT have taken the world by storm – but are they reliable? Image credit: Mojahid Mottakin (Unsplash)

Time to take stock


AI chatbots, irrespective of their platform, are trained on large datasets of text drawn from the Internet. Using these libraries of data, a chatbot predicts the text most likely to suit the conversation it is currently having.
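To make that prediction step concrete, here is a minimal sketch of how a language model continues a prompt. It assumes the open-source Hugging Face ‘transformers’ library and the small ‘gpt2’ model purely for illustration (not the system NEDA deployed): the model scores every possible next word and picks the most probable continuation, with no built-in sense of whether that continuation is safe advice.

```python
# Minimal sketch of next-token prediction, for illustration only.
# Assumes the open-source Hugging Face "transformers" library and the small
# "gpt2" model; production chatbots use far larger models plus safety layers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I have been struggling with my eating lately and"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model outputs a score (logit) for every token in its vocabulary.
    logits = model(**inputs).logits

# The chatbot's "reply" begins with whichever token the model judges most
# probable after the prompt -- a statistical guess learned from Internet text,
# not a clinical judgement.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode([next_token_id]))
```

In practice, a chatbot repeats this guess-one-word-at-a-time step to build whole replies, which is exactly why the quality of the text it was trained on matters so much.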


From a human perspective, it feels like there is little chance of a chatbot making an error, given the sheer amount of information at its metaphorical fingertips. Surely this level of access makes it very unlikely that a chatbot will approach a conversation inappropriately, particularly if the conversation is about mental health.


But there is always a chance of a chatbot providing incorrect information, especially if the dataset it bases its predictions on contains unhelpful advice. Couple this with a lack of human empathy and any sense of appropriateness, and a chatbot slipping up becomes a far more realistic possibility.


Tessa provides a cautionary tale about our expectations of AI, and about how we may need to rein in our enthusiasm for applying it as a skeleton-key solution to all our problems.


When it comes to healthcare specifically, AI might instead be best utilised for administrative work such as patient registration forms, which could help alleviate the workload of mental health personnel. This would give them more time and energy to focus on patients. In this way, AI could indirectly improve the efficiency and efficacy of mental health services without endangering patients’ mental health and overall wellbeing.


Whilst it is important to think about time and money in mental health care, it is just as important to retain the quality of the service; what is essential is the right use of AI in mental health services. Perhaps we should wait until AI chatbots can reliably tell human hands apart from octopus legs before relying on them to provide mental health support.

