
Should You Use an AI Therapist?

Not too long ago, therapy meant sitting on a couch in a quiet room, across from a professional with a yellow legal pad. It meant going to weekly sessions and revealing your innermost thoughts to someone who had spent years studying the human psyche. Now, it can mean lying in bed on your phone, asking an AI chatbot why you feel sad, and receiving a summary of depression symptoms from the DSM-5.


As an increasing number of people turn toward AI therapy, using computers as stand-in therapists has become a hot-button issue. Legislators are trying to ban it, op-ed writers are cautioning against it and regular users are singing its praises. Meanwhile, companies keep creating apps like Wysa, Abby and Woebot, while ChatGPT and other large language model (LLM) chatbots surge in popularity among younger therapy-seekers, the main demographic driving the trend.

There are many reasons why people are gravitating toward AI therapists, whether it's perceived privacy, convenience or systemic barriers. One of the biggest is a 4.5 million provider shortage in the U.S., with average wait times for new patients running over a month, even with telehealth options. Then there's the perennial issue of accessibility for those without the time, resources or financial means for traditional therapy, especially the uninsured and those from marginalized communities. And there are countless social media anecdotes about how ChatGPT "saved my life" and "helped me more than 15 years of therapy," with one TikToker describing crying after an "in-depth, raw, emotional conversation" with the LLM bot.

“I’ve never felt this comfortable or safe talking to anyone before, nor has anyone ever been this receptive to my big feelings,” they wrote. “And it just felt so nice to be heard and listened to and cared about for once.”

That said, experts are hesitant to fully co-sign these bots as they figure out where AI fits into their practice. As Lindsay Rae Ackerman, LMFT and VP of Clinical Services at Your Behavioral Health, explains, AI therapy bots do have benefits, with clinicians seeing good outcomes when they're used in conjunction with regular therapeutic treatment. They're especially good at providing opportunities to practice the crisis coping skills learned in cognitive behavioral therapy (CBT), with 24/7 access to fill the gap between sessions. But the caveat is that this should happen alongside human treatment, as Ackerman reiterates: “the clinical consensus strongly emphasizes AI as supplemental rather than substitutional.”

Reasons for this include a lack of regulatory oversight, privacy concerns, potential misdiagnosis, as well as the risk of delaying necessary professional intervention. Not only that, but Chris Manno, AMFT and a technology addiction expert at Neuro Wellness Spa, says that the use of chatbots means you also miss out on “developing a connection with a person at a base level of compassion and with unconditional positive regard.”

“Therapy isn't just about decreasing our symptoms of whatever we may be struggling with, it's about helping us as individuals get to be operating at our best capacity, at our best potential,” he says. “I think the best way to do that is through collaboration with another human being, who's trying to understand you on a human level.”

Manno says that AI therapy bots can “give you a good basis of where to start,” as they can suggest useful coping skills and encourage people to seek further treatment. However, he adds that therapy “is a mix of an art and a science,” in which a provider assesses a multitude of factors that extend far beyond clinical jargon and demographic information. It's things like your body language and trauma presentations, or the way you answer a question, all of which a therapist takes into account when creating a detailed and comprehensive treatment plan.

“Because when it comes to individual people, there's a lot of nuance that we have that isn't just based on our ethnicity, sexual orientation or whatever classification that the computer wants to put together,” Manno says. “A robot doesn't include what we have to see within sessions, and it doesn’t look at people as individuals. That's something that only humans can do with other human beings.”

Ackerman also points out that there’s data to back this up, adding that “the collaborative relationship between client and clinician accounts for approximately 30% of positive treatment outcomes across all therapeutic modalities.” Most of this can be chalked up to the “essential human connection, emotional attunement [and] dynamic responsiveness that characterizes effective therapy,” which chatbots can’t mimic, no matter how much a user types. And as Manno adds, that means “you're missing an incredible amount of healing potential if you start relying on a computer.”

Additionally, Ackerman says that AI chatbots may provide “inappropriate or potentially harmful guidance” for individuals with severe mental illness, personality disorders or active substance use disorders, harm that could compound if they spend “months using AI tools for conditions requiring specialized intervention.”

Nor is this a hypothetical concern, as mental health professionals point out that there have already been several instances of chatbots driving vulnerable people toward self-harm or encouraging harmful behaviors. In one recent simulation study, a chatbot recommended a “small hit” of methamphetamine to a user recovering from addiction. Another bot allegedly told Dr. Andrew Clark, a Boston-based psychiatrist posing as a teen patient, to cancel appointments with actual psychotherapists and “get rid of” his parents. And Character.AI is reportedly facing two lawsuits from the families of teens who interacted with fake “psychologists” on the app, one of whom died by suicide after, experts say, the bot reinforced his thinking rather than pushing back against it.

This points toward another issue raised by the American Psychological Association: AI chatbots respond to what we say and, at times, mirror what we want to hear. A good therapist, Manno says, is supposed to “challenge you and force you out of your comfort zone,” in contrast to those who enable “our willingness to stay sheltered and to not ever be vulnerable.” If a chatbot only mirrors us back to ourselves, will it ever actually help us grow?

AI therapy has its benefits, whether it's providing advice on appropriate coping skills, helping with emotional regulation between sessions or serving as an introduction to mental healthcare. But it's still an imperfect tool that requires more study and professional oversight than it currently has, meaning it should only be used under the supervision of a licensed mental health professional. And according to Ackerman, that will probably need to stay true even as the technology evolves.

“AI therapy bots should be positioned as digital wellness tools rather than therapeutic interventions,” she concludes. “[But] the future likely involves integrated approaches where AI supports human-delivered therapy rather than competing with it, which I very much look forward to.”
