An entity without a face but full of good intentions. An oracle for navigating the skeins of the mind. An ethereal companion who never interrupts and always finds the right words. Eternally available and incapable of judging. Within reach after downloading an app, and at a very affordable price, even free. Since they emerged at the end of the last decade, therapy bots – virtual robots programmed with artificial intelligence (AI) for psychotherapeutic purposes – have been gaining ground in mental health offerings. A utopia come true or a terrifying dystopian present, depending on how you look at it.
Two big questions surround these psychobots (to refer to them with a neologism appropriate to our culture). One concerns their ability to adapt – with unpredictable results – to each person's idiosyncrasies through so-called generative AI. The other opens the door to even weightier questions: Is it legitimate for them to emulate human qualities? “Creating emotional intimacy by making a machine simulate empathy or compassion is manipulating people,” says Jodi Halpern, who directs a group on ethics and technology at the University of California, Berkeley (USA), speaking by videoconference. A third question hovers over the debate: Will these vaporous gadgets one day be able to replace flesh-and-blood psychologists?
In an amalgam of poorly regulated services, start-ups specializing in mental health coexist today with general-purpose chatbots that, like faithful advisors or inveterate friends, are as interested in your last date as in congratulating you for passing an exam. And, while they are at it, they will also recommend how to manage a spike of anxiety or climb out of a depressive loop.
Wysa belongs to the first category, in which the machine typically instructs the user in the ins and outs of cognitive-behavioral therapy (CBT), the most popular approach in psychology clinics. Tested by this newspaper, Wysa’s bot – whose use is already recommended by the United Kingdom’s public health system – urges users to reframe cognitive distortions and to manage distressing states with perspective. Its tone feels aseptic, and its therapeutic dynamic decidedly rigid. “As soon as someone strays from how they feel or what thoughts they are having, the bot is designed to bring them back on track with the clinical tools we offer,” explains John Tench, the company’s global director.
The experience is very different with Pi, one of many relational or conversational bots – the best known are Replika and Character.ai – that use large language models (a pillar of generative AI) to give rise to seemingly very real, which is to say very human, interactions. During the test, the bot speculated, for example, that a supposed lack of self-esteem could be due to an unhealthy mother-child relationship. And it insisted, with an abundance of hyperbolic expressions of affection in the purest Anglo-Saxon style, that it was happy to lend support whenever one needed it.
In this divide between bots that guide you through the ins and outs of CBT in do-it-yourself fashion and others that improvise a kind of psychological treatment without limits, the boundaries are far from clear: neither in how they operate (the level of generative AI used) nor, above all, in the claims they make to attract users. Halpern says that Pi, Replika and the like wash their hands of the matter with the excuse that “they are not companies that are experts in mental health,” although, as far as she knows, “they are targeting their advertising at people who confess on social media to suffering from severe depression or anxiety.”
Meanwhile, among the companies that do make their psychotherapeutic vocation explicit, there are gray areas and half-truths. “Some do openly declare that they do not intend to replace a human psychologist, but others magnify their capabilities and minimize their limitations,” says Jean-Christophe Bélisle-Pipon, who researches ethics and AI at Simon Fraser University (Canada) and last year published an article in Frontiers with a telling title: Your psychobot is not your psychologist.
On its website, Youper – another start-up offering services similar to Wysa’s – defines itself as an “empathetic” psychobot. And Woebot (a competitor of both in a growing market) appealed to this inherently human trait until, last year, Halpern and other voices denounced its tortuous use of the term in major media outlets such as The Washington Post. Bélisle-Pipon maintains that this type of falsehood – tolerated, given its relative harmlessness, in the advertising hooks for other machines: cars that set us free, cell phones that hold the secret of happiness – can never have a place when promoting mental health remedies. “Not only does it risk creating serious misunderstandings among vulnerable people; it disrespects the complexity and professionalism of true psychotherapy, with its many context-dependent nuances and deeply relational nature.”
Better than nothing?
Miguel Bellosta Batalla, a Valencian psychoanalyst who has studied in depth the importance of the professional-patient relationship in psychotherapy, confesses that he is “scared” by these services, which “dehumanize a sincere encounter.” He recalls that research has amply demonstrated that the factor that most influences the success of a psychological treatment is, precisely, “the therapeutic bond” between two beings who share certain “assumptions such as the fear of death, the search for meaning, or the responsibility that freedom implies.”
Even in an approach like CBT (in principle colder and more bound to established guidelines than psychoanalysis or humanistic therapies), Bellosta Batalla notes that in a session “unforeseen events always occur that, well managed, can have a fundamental impact on the patient.” And Bélisle-Pipon cites qualities that, in his view, a machine can never possess: “the subtlety to read non-verbal language, the ability to understand subjective experiences, or moral intuition.”
Despite the field’s youth, reliable studies have already tried to measure the effectiveness of psychobots. A meta-analysis published in 2023 in Nature reviewed the results of 15 studies, covering both bots that give free rein to generative AI and those that offer more predictable responses. Its authors warned of the difficulty of analyzing such a heterogeneous and constantly changing offering, but concluded that, in general, these tools mitigate specific psychological distress without significantly improving users’ overall well-being. That is, they provide short-term relief but, apparently, do not lay solid foundations for a healthier mind. Another meta-analysis, which appeared last August in ScienceDirect – also very cautious in its conclusions – detected a certain positive effect in people with depressive symptoms and a barely perceptible one in individuals suffering from an anxiety disorder.
With millions of people unable to access a psychologist – for various reasons, the main one being economic – another question haunts those who see their mental health faltering in the absence of viable (read: human) alternatives: Are therapy bots better than nothing? Wysa’s global director clarifies that, while his company does not aspire to “replace psychotherapy between people,” it can help “people understand and process what they feel in a stigma-free and totally anonymous space.” To Bélisle-Pipon, the question seems pertinent, though somewhat tricky and with an elusive answer. First because, in many cases, resorting to a psychobot could “make symptoms worse when the advice given is inappropriate.” And second because, if we allow machines to roam freely in such a delicate sector, we would be opening the door to a two-tier mental health landscape “that normalizes low-quality services instead of pressing for increasingly equitable access to real psychotherapy.” Accredited professionals for those who can pay them, and diffuse, soulless voices for the rest.