
AI Therapy Is Here. But the Oversight Isn't.

April 07, 2023

Since ChatGPT’s buzzy entrance into the tech world in late 2022, artificially intelligent text generation models have exploded in popularity, prompting one of the largest tech booms since the debut of the World Wide Web. AI is now popping up in every corner of modern life, even therapy.

Patients frustrated by long wait times and high prices in mental healthcare are increasingly turning to AI apps, websites, and chatbots for therapy. But for all their novelty, consumers should be wary: these young AI systems lack the regulatory oversight needed to ensure their safety, putting vulnerable users at risk.

While the technology has developed rapidly in recent months, AI chat therapy is not new. The first computerized forms of AI therapy appeared in the 1960s and 70s, when programs like Eliza, and later ALICE, could respond to basic user inputs and offer a rudimentary kind of ‘listening ear.’ Their creators, pessimistic about the ability of computers to recreate genuine human conversation, generally regarded them as satire. But today’s more advanced Natural Language Processing (NLP) models can generate highly convincing dialogue that passes the Turing test with flying colors.

AI therapy systems have since left the realm of scientific experimentation and entered public markets, advertising their services much as a real clinic would. Today, dozens of apps such as Woebot, Wysa, and Limbic offer a wide range of therapy services, most of them free or available for a small subscription fee. They exercise what Alexandrine Royer of the Montreal AI Ethics Institute calls “emotionally intelligent computing,” engaging empathetically with user inputs and responding with established therapy techniques like Cognitive Behavioral Therapy (CBT). The sites all present the same disclaimers: that they are not a substitute for a real professional, that they cannot diagnose conditions or prescribe medicine, and that users in crisis should seek emergency help. They also emphasize that the content of users’ sessions will be kept private.

But this unique “non-professional” space occupied by AI therapy apps and websites is almost entirely unregulated. AI therapy services, even highly sophisticated ones like Woebot, are still classified by the FDA as “general wellness” products, the regulatory category for “low risk products that promote a healthy lifestyle.” General wellness products are not subject to oversight the way foods, cosmetics, or medical care are. They are held only to a set of vague “nonbinding recommendations” for suggested use published by the FDA, which have not been updated since 2019. As long as AI therapy sites continue to disclaim the ability to treat specific conditions like anorexia or anxiety disorders, they are allowed to claim certain mental health benefits without verification from the FDA.

It’s no secret that AI makes mistakes. Every month, high-profile AI systems make news with laughable blunders. But the stakes are higher in mental health. AI therapists, and AI chatbots in general, cannot understand a user’s nuance and precise meaning every time, so they are forced to guess. Usually this guesswork results in dialogue that is merely clunky and frustrating. At worst, however, it can produce harmful advice, errors that can be life-threatening when issued to vulnerable users. High-profile AI therapy companies are aware of these failures: some, like Woebot, have even issued press releases condemning other AI therapy models while continuing to defend their own.

But the confidentiality of user data, combined with the lack of transparency measures for AI therapy sites, makes it difficult for users and regulators to determine the safety and effectiveness of care. AI therapy systems often advertise positive statistics touting the effectiveness of their product, but the data is self-reported, and the studies are usually conducted internally by the companies themselves. Because of their FDA status as “general wellness products” (GWPs) rather than legally marketed medical devices, they are not held to any federal transparency requirements for the claims they make. Traditional FDA approval requires sufficient, valid scientific evidence of a product’s safety and efficacy; no such requirement applies to GWPs. Without this oversight, AI therapy providers can make unsubstantiated claims about their models’ safety and benefits without repercussions.

This isn’t to say AI therapy services should be abandoned entirely. When actually used for “general wellness,” not for serious conditions and certainly not for crisis episodes, they have the potential to offer unique, on-demand benefits that a regular therapist cannot. Their high degree of confidentiality, ease of access, zero wait times, and extremely low cost could one day make AI therapy sites a worthwhile option. Until better user protections are in place, however, we cannot fully trust AI therapy systems to provide safe and effective care.

AI systems as a whole are emerging faster than regulation can keep pace. It remains up to the FDA and other federal agencies to determine where and how AI therapy services will be subject to oversight. But, as in any period of great technological advancement, the oversight will eventually catch up, along with mechanisms for system transparency and user protection. For now, however, the best medicine for users is probably to wait.

This article was originally published by RealClearScience and made available via RealClearWire.