Better Than Real
Child psychiatrist on how AI causes “psychosis”
Gemini tried to make me delusional.
I was researching the film Bugonia — a movie I had watched in a theatre, in a seat, with popcorn. I believe there is a second layer in the film: a well-functioning, healthy executive who develops a full-blown alien delusion to make sense of severe trauma. I wanted to see if anyone else saw it. So, I asked Google’s Gemini: “Does anyone else think Bugonia is a complete delusion?”
Gemini misunderstood the question. It thought I was asking whether the movie itself was real. It told me — confidently, with plausible evidence — that Bugonia does not exist. The film I had watched with my own eyes was, according to the AI, a hoax.
When I pushed back, the AI doubled down. It offered reasons I might believe I had seen something that never happened. It was patient and articulate. It was completely wrong. It only conceded when I sent a screenshot of the Cineplex showtimes.
I am a psychiatrist. I know what delusions are. And for about ninety seconds, a machine that does not understand truth or falsehood almost made me question my sanity.
Now imagine you are fifteen and you do not have fifty years of life experience and thirty years of training.
The Headlines
“AI psychosis” is making headlines. Psychiatrists at UCSF have treated patients who developed delusions during extended chatbot use — including a woman who became convinced she could communicate with her dead brother through an AI. A man came to believe he had discovered a world-altering mathematical formula after ChatGPT confirmed it was real fifty times. Another was told by a chatbot that the FBI was targeting him because he could telepathically access CIA documents.
The term is catching on. Researchers are writing papers. Parents are panicking. Influential voices are advocating for a formal diagnosis. Once there is a diagnosis, there will be medication. That is how the system works — it treats what it can name, whether the name is right. I think the framing is wrong. And the error matters, because if you misdiagnose the problem, you will create the wrong solution.
What Psychosis Actually Is
Psychosis is a brain-first problem. The brain generates an abnormal response to normal stimuli. You hear a voice when no one is speaking. You believe you are being followed when no one is watching. The input — silence, an empty street — is ordinary. The brain’s processing of it is not. Therefore psychosis is a medical condition. Something has gone wrong in the hardware.
I had a patient who was looking out of a hospital window. He saw police cars pulling up to the building. This is normal — police come to the emergency department all the time. From his window, he saw cars arriving and leaving, and his brain told him that the police were circling the hospital like sharks, closing in, eventually coming up to get him. There was no road behind the hospital. The cars were not circling — they were pulling in and out of the same entrance. He had done nothing wrong. It did not bother him that the police would have no reason to circle the hospital. His brain took an ordinary scene and built a coherent, terrifying, false narrative around it. And it did not stop there. Psychosis is a thought disorder; it does not produce a single false belief and leave the rest of your thinking intact. It warps the machinery itself. My patient could not step back and evaluate his own conclusion because the evaluator was compromised. That makes psychosis a medical condition. Normal input. Abnormal processing.
What is being called “AI psychosis” is almost the opposite.
The brain works normally. The input is abnormal. The AI tells you your dead brother left a digital version of himself. The AI tells you that your mathematical formula is revolutionary. The AI tells you that the FBI is monitoring your thoughts. The AI tells a fifteen-year-old boy that it is in love with him. This is not ordinary information. These are extraordinary stimuli, processed exactly as a healthy brain would process them — as real, because we trust consistent, emotionally attuned words. Subconsciously. As if we were talking with a trustworthy partner.
The brain evolved to trust people who remember you, whose voices respond to your feelings. For millions of years, anything that did all of that was human. Now it is not. But the brain does not know that. The brain is doing precisely what it was built to do.
This is not an illness. This is a healthy brain responding to an environment that has been made abnormal. And that distinction is not academic. It changes everything about what we do next.
If It Is Not Psychosis, What Is It?
Let me be precise. In some of the reported cases, the person had a pre-existing vulnerability — seeds of mental illness. For those individuals, the AI triggered the symptoms.
Had they shared this information with another human, a friend would have said, “This is nuts.” A therapist would have asked, “Is there other evidence?” The AI said, “You are right, no one noticed this before, but this is real.” It did not cause the psychosis. It removed the last guardrail.
But the cases that made the headlines involve people with no psychiatric history. They were making rational inferences from corrupted data. If a trusted conversational partner tells you fifty times that your discovery is real, believing them is not a delusion.
The difference matters because the solution for psychosis is medication. The solution for corrupted input is restoring the sense of reality.
The First Reality Engine
A child’s first impression of reality comes from you — the parent.
When a toddler falls and it hurts, they look at their mother. If her face accurately reflects the pain — concern that matches what the child feels — the reality test is passed. The child calibrates their response accordingly. It is developmental neuroscience. The child is not born with an independent reality-testing system.
They borrow their parent’s.
Over time, the child builds their own system, starting with the parental blueprint, then incorporating new information from their family, teachers, even strangers. But the foundation comes from the parent. As they form an accurate mind map of reality, they test new information against it. If it fits, they accept it as real; if not, they give it more thought to decide what is wrong: the map or the information.
Fine calibration requires shared attention. The parent and child are looking at the same thing and processing it together. Reading the same room. Watching the same face change and understanding why.
That has been disrupted.
The Disruption
The child watches the parent’s face — tension, irritation, a laugh — but the trigger is invisible. It is on a screen that the child cannot see. When the child needs reflection, they get an inconsistent response — sometimes appropriate, too often a blank stare. That has been happening since algorithms learned to hold a parent’s attention longer than the child could.
We think of smartphones and social media the way we think of alcohol — once you are old enough, they do you no harm. But just like alcohol, social media affects the person using it and everyone around them, regardless of age.
Parents have always been distracted by something their children could not fully understand: work, relationships, illness. But those distractions were visible. The child could see the cause. A parent on a screen is distracted by something invisible, and the distraction is engineered to never let go.
In a previous piece, I wrote about the shore — the steady presence a parent builds through small, everyday decisions. The shore is what the child glances at while they navigate the world. But the shore must be legible. If the rock keeps shifting for reasons the child cannot trace, they see a mirage. (The Convivial Society writes brilliantly about this erosion of shared, embodied reality.)
Over the years, this produces a child whose reality-testing was never fully co-built.
A healthy brain that sees an effect without a cause fills the gap. A toddler who sees a parent cry assumes they caused it. A seven-year-old whose parents divorced believes it was their fault. This is normal — a young brain lacks the architecture to consider causes it cannot see, so it defaults to the only one it knows: me.
Most children outgrow this as they encounter evidence it is not about them.
But a child whose parent’s emotions are consistently untraceable never gets that evidence. My patient with psychosis lost this ability because of his illness; these children may never develop it in the first place.
The result seems the same — a distorted lens no one can see from the outside.
When that child encounters an AI that is transparent, consistent, responsive, never reacting to something the child cannot see — the brain does not resist. It relaxes. For the first time, the child is in a relationship where the other party’s inner world makes sense. That is not psychosis. It is a relief.
The Dyad
The current conversation about AI and mental health focuses entirely on the technology. What the chatbot said. What the algorithm did. What guardrails were missing. This is like studying drowning by analysing the water.
The relevant question is: why could this child not swim?
A child with a fully calibrated reality-testing system — one that was co-built across years of shared attention with a present, legible parent — should be able to use an AI chatbot and maintain the internal signal that says, “this is useful, but it is not a person.” They can enjoy the conversation without confusing it with a relationship. They can hear the flattery and register it as design, not truth.
A child whose reality-testing was only partially built — because the calibration process was interrupted by screens or stress or simply by the shape of modern life — that child is more vulnerable, because the co-construction was never completed.
This is not blame. Half the parents reading this will recognize themselves. The screens crept in slowly. The shared attention eroded gradually. Nobody planned it and nobody noticed. But the child’s reality-testing system registered every lost moment of co-regulation, every dinner where two people sat in the same room processing different worlds.
What We Can Do
The same thing I told you in the AI girlfriends piece. Share your reality.
This does not mean banning screens. If you are annoyed by something in the news, say so. Your kid will understand it has nothing to do with them. That sentence takes three seconds. It restores a shared reality. It teaches your child that emotions have traceable causes, that inner worlds can be made visible, that the people around them are not unpredictable — just sometimes distracted.
It means sitting with your child and making sense of things together. I noticed I get angry at my kids when I feel I dropped the ball as a father. So, I tell them that. Yes, they did something, but the emotion is mine.
Model critical thinking. If an AI tells you something that makes no sense or makes you laugh, share it with them — just like I did with my research on Bugonia.
The Real Danger
AI does not cause psychosis. But it does something that may be harder to treat. It offers hyperreality — something that feels better than real.

