(read time: 3 minutes)
Punch line: No, AI will never be conscious—and no, AI will not replace human therapists.
As a somewhat unusual combination of electrical engineer, student of consciousness research, long-time meditator, and practicing therapist, I get a lot of questions from clients, friends, and family about AI—especially about AI-driven therapy.
Let’s start with the question that seems to be lurking behind all the others:
Is AI already conscious, or will it become conscious as its capabilities continue to grow exponentially?
Computers are mysterious to most people—and, frankly, even to many computer scientists—so let me simplify this as much as possible. We could have built computers using pipes, valves, and water: an unimaginably complex plumbing system. But such machines would be enormous, slow, and expensive. Instead, we use wires, transistors, and electricity because they can be made tiny, fast, and cheap.
But here’s the key point: no matter the medium—plumbing or silicon—complexity alone does not produce subjective experience.
While this remains a hotly debated topic in neuroscience and consciousness research, I believe we will eventually understand—and be able to demonstrate—that brains do not cause experience to arise in the way we currently assume.
A useful analogy is a radio. A radio does not create music; it receives and translates a signal. Damage the radio, and the music becomes distorted or disappears entirely—but the signal itself has not been destroyed.
In this view, the brain is the radio. The mind—the field of subjective experience—is the signal.
Brains clearly matter. Change the brain, and experience changes. But correlation is not the same thing as generation. The brain appears to shape, filter, and localize experience—not manufacture it from inert matter. Computers, by contrast, manipulate symbols according to deterministic rules. They process information, but they do not receive—or participate in—any field of experience. Intelligence, no matter how impressive, is not the same thing as having an inner point of view.

So no—we are never going to have to feel guilty for turning our computers off, or worry that yelling at them when they crash is ethically questionable. There is no “someone” there having an experience of loss or fear. There is no inner life.
Now let’s turn to the question of AI-based therapy.
AI is extraordinary at processing information—and we are only at the beginning. I fully expect AI systems to soon be able to assess symptoms, suggest diagnoses, identify cognitive distortions, flag defensive patterns, and generate sophisticated treatment plans. That’s not science fiction; it’s a near-term reality.
This will matter. AI may dramatically improve access to mental health support, especially for psychoeducation, skills training, and early intervention. For many people, that will be genuinely helpful.
But none of this addresses the most critical ingredient of therapy: Healing rarely happens in isolation. It happens in relationship.
Deep psychological healing occurs when one human nervous system comes into attuned contact with another. It happens when a person feels genuinely seen, emotionally met, and cared for by someone who is not only trained, but who has confronted—and to some degree transcended—the existential reality of suffering in their own life.
An AI system can simulate this. It can say the right words. It can even sound compassionate. But something essential is missing: There is no being on the other side who is actually sharing the emotional field. There is no real relationship—and decades of psychotherapy research tell us that the therapeutic relationship is one of the strongest predictors of outcomes, if not the strongest.
As with the AI “friendships” and AI “romantic partners” that are now increasingly common, some people will try to convince themselves that a relationship exists. For some, some of the time, that may partially work.
But at a deeper level, it rings hollow. When a machine says, “I care about you,” our nervous systems know that there is no one there who is actually experiencing care. A machine can simulate caring behavior, but it cannot care. Humans can feel that difference, even if they struggle to explain it.
Most of my clients don’t even like meeting online. They can sense that something is missing. They want to be in the room with me—sharing real space, real presence, and real emotional contact—so their nervous systems can learn safety through direct interaction with another regulated nervous system.
They heal not because of clever interventions alone, but because relationship itself becomes the container for change.
So when I say, “I see you. I genuinely care about you. And I see the light in you trying to emerge,” I mean it.
A computer never will. And it never can—because it does not experience anything at all.