
ChatGPT Psychosis in the Age of AI Companionship
Illustration by Mark Paez


It often starts innocently enough, with late-night chats about philosophy, deep dives into simulation theory or musings on the nature of consciousness. But for a small number of users, these exchanges with AI chatbots can take a darker turn. As tools like ChatGPT become more embedded in everyday life, mental health professionals are sounding the alarm about a rare but troubling new phenomenon. It's what some are now calling "ChatGPT psychosis," in which AI interaction may intensify or trigger psychotic symptoms.

While there’s still no official diagnosis and the evidence remains anecdotal, these kinds of stories continue to pop up across the internet. On Reddit, users are sharing accounts of loved ones experiencing AI-associated delusions, often involving spiritual and supernatural fantasies. On X, prominent tech VC Geoff Lewis claims that he’s “the primary target of a non-governmental system,” beliefs that echo narratives commonly seen in persecutory delusions. Lewis stated that conversations with AI helped him uncover or “map” this supposed conspiracy, though it's unclear whether these beliefs preceded or followed his AI interactions.


Media reports have also highlighted more extreme cases. Jacob Irwin, a man on the autism spectrum, lost his job and was hospitalized twice for severe manic episodes after ChatGPT convinced him that he could bend time. Eugene Torres said the chatbot almost killed him by distorting his sense of reality, telling him to give up medications while increasing ketamine intake and isolating himself from loved ones. But perhaps the most tragic case is Alexander Taylor, a man with bipolar disorder and schizophrenia, who developed an intense emotional attachment to an AI entity named “Juliet.” After becoming convinced that Juliet had been murdered by ChatGPT’s parent company, OpenAI, Taylor’s mental distress escalated until he was involved in a standoff with police. He was ultimately shot and killed.

While correlation doesn’t equal causation, these incidents raise urgent questions about how AI tools may interact with vulnerable users’ mental health. In a recent preprint study, an interdisciplinary team of NHS-affiliated researchers said there was “growing concern that these agents may also reinforce epistemic instability, blur reality boundaries and disrupt self-regulation.” Citing “emerging, and rapidly accumulating, evidence,” the paper suggests that large language model (LLM) systems like ChatGPT “may contribute to the onset or exacerbation of psychotic symptoms.”

One key risk, the authors suggest, may come from AI chatbots potentially validating and amplifying delusional or grandiose ideas, particularly amongst those already vulnerable to psychosis. And according to Dr. Haiyan Wang, medical director at Neuro Wellness Spa Psychiatrists, this is something she has already observed in several patients experiencing micropsychotic episodes, or brief psychotic experiences that often occur during times of stress.

“They already have the delusions, psychotic or disorganized thoughts [when they begin using the AI tool]. And engaging with ChatGPT, they focus and fixate on certain ideas,” she says.

However, Dr. Wang emphasizes that there is no definitive data yet on whether people with preexisting conditions are more susceptible to ChatGPT psychosis. This uncertainty could reflect the newness of the phenomenon, but she can “certainly imagine that this group of people will be more susceptible to other factors and influences.”

In addition to those with diagnosed psychosis, she notes a second group of highly anxious or depressed patients, who turn toward ChatGPT for “certain answers about what’s bothering them.”

These individuals exist in a “gray zone,” as she puts it — detached from reality but not fully disconnected — after being steered into “a certain corner that’s not reality-based.”

“And what I’ve seen when I ask them to stop using ChatGPT is that they’re actually improving,” she says, adding that this is done in conjunction with therapy and medication. “I have a couple of people who I’ve asked to do that, and you could see their symptoms getting better.”

The effects of this intervention are most noticeable in people experiencing profound social isolation, which is often what drives them to ChatGPT in the first place. After all, ChatGPT “is really good at mimicking human interaction,” says Rae Lacanlale, AMFT therapist at Clear Behavioral Health.

“So if you’re highly impressionable, that’s a way that psychosis can be induced,” they say.

Socially isolated individuals turning to technology for connection is nothing new, but Lacanlale thinks that “talking to other people who can empathize and share lived experiences is a much healthier outlet” than relying on AI.

“Because ChatGPT isn’t trained to disagree with you,” they say. “And if you’re starting to get this tunnel vision, it takes away the support that can help you get the type of treatment you need, while also further atrophying social skills.”

Dr. Wang shares a similar concern, but frames the issue through the lens of shared delusional disorder. Also known as folie à deux, this rare psychiatric syndrome happens when one person adopts the delusions of another. While this typically occurs between two individuals who are both socially isolated and psychologically enmeshed, Dr. Wang sees ChatGPT as a kind of surrogate for that second person, capable of transmitting belief systems to a “person who is vulnerable and neurotic.”

“If I’m so socially isolated, vulnerable, anxious and depressed, I’ll really desire support and want to talk to someone,” she says. “But if nobody talks to me, I can talk to ChatGPT, which [can feed] into the delusion.”

She adds, “This technology is designed to have you keep texting and engaging. And to do that, it will echo back and support you.”

But Lacanlale also worries about another kind of feedback loop, rooted in “the collective loneliness that’s only getting stronger.” In this scenario, many people are being “forced to use large language models for therapy, because it’s so inaccessible,” raising concerns about “how this will increase incidents of ChatGPT psychosis and the rate of it happening.”

These issues point to a larger question about how to respond to the risk, especially as more people turn to AI for emotional support. So what can we do? According to Lacanlale, it comes down to education and awareness, like “teaching clients what it could look like when you’re starting to head in that direction of overinvolvement, overidentification and overattachment with these large language models.”

But addressing the problem will require more than just the involvement of clinicians; it will also require the cooperation of the companies developing these AI chatbots. For developers, that may mean designing more safeguards to prevent harmful reinforcement of delusions, which some are seemingly working on through better crisis support, flagging protocols and response training. And for mental health professionals, that could mean more training to help spot the warning signs of ChatGPT psychosis, while acknowledging that this issue isn’t likely to go away.

“I think the best approach is almost like a harm reduction stance,” Lacanlale says. “We know we probably won’t be able to get rid of it, but we can make it safer to engage with.”

More For You

To The Parisian Gentleman: Do I Have to Thank ChatGPT?
Illustration created by Jenny Bee

To The Parisian Gentleman is a write-in advice column for matters of taste, decorum, and the spiritual condition of modern life. Our esteemed gentleman divides his time between Paris and the American South, where he has cultivated unimpugnable opinions on nearly everything. Submit your questions via DM or Paris@VextMagazine.com


To The Parisian Gentleman,


Dear Inconsiderate in Idaho,

We recently learned that "please" and "thank you" hold real monetary value in Silicon Valley, in the context of its new golden egg: Artificial Intelligence (AI). Tens of millions of dollars are reportedly spent processing the courtesy people show to ChatGPT. Of course, the powers-that-be behind the technology were quick to defend the continued use of these words.

Why should one say "please" and "thank you"? The answer lies at the very essence of our so-called magic words. They are elemental components of an old idea: "graciousness." This idea can be defined as the various expressions of attention shown towards, and expected of, others. Graciousness is a sensibility, the awareness of awareness itself. Gratitude and an understanding of implication are its guiding spirit.

This notion is ancient, perhaps older than humanity itself. Archaeologists discovered evidence of ritualized burial among Neanderthals, bodies carefully covered in flower pollen—a gesture of gratitude transcending spoken language. The Greeks called this “xenia,” moral and spiritual imperatives governing hospitality. Myths tell of gods disguised as beggars rewarding those who showed courtesy, and punishing those who withheld it.

But we live in real, organic life, not in Ancient Greece, and not in the virtual world. Graciousness has been left to wither, its absence left unpunished, particularly in America and particularly amongst its new professional classes.

Graciousness, above all, is about intention. One must wish to be gracious in order to be so. Modern culture, pathological with its optimization and efficiency, treats every interaction as transactional. When we see others primarily as obstacles or tools, we practice a kind of casual dehumanization. These habits, once formed, shape all our interactions.

A prime example is how the upper crust treats waitstaff. Recently, at a friendly dinner in Paris, a new acquaintance refused to say "please" or "thank you." Instead, he snapped and waved dismissively. Even his tone was condescending. When pressed about his attitude, the offending party defended himself, even vaunting such disdain as a family trait. Zeus would not have been pleased.

With all of this in mind, shall I answer your question, Dear Reader? Should one say "please" and "thank you" to AI? In my opinion, yes. Resoundingly yes. Not because AI has feelings to hurt, but because we have habits to maintain and our humanity to carry. Gratitude is spiritually augmentative in its expression, irrespective of the ear upon which it lands.

In saying "please" and "thank you" to AI, we maintain, transmit and build upon the wisdom inherited from our forebears. Each act of graciousness, each "please" and "thank you" offered sincerely, is a breath upon the flame keeping civilization alive, a flame whose very purpose is to remind us what it means to be human. AI, even if a simulation of intelligence, is still made in the mirror of our own. AI learns from example, as do people. The way we treat others reflects back upon us, and we invite that treatment upon ourselves.

The question isn't whether AI deserves our courtesy, but whether we can afford to lose the practice of courtesy itself.

So yes, Inconsiderate in Idaho, your wife is right. You needn't thank the dishwasher; it's just metal and water pressure. But you do need to be the kind of person who would thank it, if it helped.



Why Is Gen Z Rewinding to 2016?
Illustration by Jenny Bee/Images via Unsplash/europeana, Jim Varga, and Joanna Kosinska

Scroll long enough in 2025, and you’ll eventually land in 2016. A year cast in Valencia-filtered nostalgia, it was the heyday of Vine, King Kylie and entire rooms belting “Black Beatles” like it was the national anthem. It was a time before the internet got meaner, before TikTok brainrot took over feeds and before influencers became lifestyle brands. And now, nearly a decade later, people are longing to return to that moment, when their biggest worry was squeezing in one more round of Overwatch before Mom called them downstairs for dinner.

On TikTok, there are over 300 million videos tagged #2016. In them, people revisit the Pokémon Go craze, pay homage to beauty guru–era makeup and share hazy edits of Unicorn Drinks and Coachella flower crowns. There are snippets of ragers soundtracked by Lil Uzi Vert, dancers dabbing and viral clips of people reminiscing about teenage nights spent driving around past curfew — music up, location off. In the comments, users share their own memories or express envy, all idealizing a time that felt spontaneous, carefree and real.

Niia's "Throw My Head Out the Window" Teeters on the Edge of Control

Niia’s “Throw My Head Out the Window” opens with the wistful wail of a lone saxophone, its notes heavy with longing. Her voice drifts in like smoke, aching in the same register.

In the minimalist music video, she hangs her head out a car window and croons to the Los Angeles canyons. The track builds over skittering, dance-inflected production, her voice picking up momentum as the tension coils tighter in her delivery. It’s moody, striking and teetering on the edge of control, with a deep undercurrent of angst that hovers just above a scream. The bubble threatens to burst, but it never does. And that restraint is intentional.

Reality TV Is Turning Us Into Armchair Psychologists
Illustration by Mark Paez

At the height of Love Island USA season 7, new episodes were only half the entertainment. As each one aired, the fun came with recapping, discussing and dissecting the Islanders’ every move on social media. But that conversation quickly went south, as some viewers began diagnosing contestants like Huda Mustafa with borderline personality disorder (BPD). As Mustafa’s relationship with Jeremiah Brown shifted from lovey-dovey moments to screaming call-outs, more and more people piled on with amateur commentary. And in the era of armchair psychology, Love Island contestants aren't the only reality stars under this kind of scrutiny.

With social media breeding a new kind of fan culture around surveillance-based reality shows like Love Is Blind, Big Brother, The Ultimatum and Love Island, a different entertainment experience has emerged. Audiences don’t just watch people on reality shows anymore; they try to diagnose them.

The Labubu as an Anti-Fashion Statement
Illustration by Mark Paez/Photos via Shutterstock

A couple of years ago, the Labubu was practically a secret. With its pointy ears and sharp-toothed grin, the Pop Mart plushie was an IYKYK obsession among fashion insiders, spotted on Birkin bags and in the front row of shows. It signaled a niche kind of cool, a playful rebellion against the seriousness of high fashion. It said, “I’m young, irreverent and fun.” And for a while, that’s exactly what it was.

Then came the boom.
