The Dial, or Untethered
Imagination is a real, physical process — your brain borrowing the same machinery it uses for perception and memory, and running it without any actual input. When you picture a chair, your visual cortex partially fires the same way it would if you were looking at one. You’re basically running perception backwards.
And you can’t imagine something from nothing. Your brain pulls fragments of stored memory and assembles them into something that feels new. Which is also where things get weird — because sometimes the line between what’s real and what’s imagined starts to blur. You could swear you saw a ghost, but it might just be your brain stitching together a nightmare or some horror movie you watched years ago. One of the brain’s most constant jobs is distinguishing reality from what it’s merely constructing. And in certain states — psychedelics, psychosis, even just falling asleep — that filter weakens. The simulation starts to feel indistinguishable from the real thing.
Your brain is always running simulations in the background and it never really stops. Normally it’s cross-checking what it generates against what’s actually coming in through your senses, flagging the difference. But in psychosis that system misfires. So it just keeps generating — voices, visions, patterns, whole narratives — except it stops labeling any of it as internal. It just feels real. As real as anything else you’ve ever experienced.
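The cross-check described above can be sketched as a comparator: an internal model keeps generating, and a separate signal flags the mismatch between prediction and input. This is a toy illustration only — the function names, values, and threshold are invented for the sketch, not a real neural model.

```python
def cross_check(predicted: float, sensed: float, threshold: float = 0.5) -> str:
    """Label a generated signal as internal or external.

    Compares what the model predicted against what actually came in
    through the senses; a large mismatch gets flagged as self-generated.
    """
    error = abs(predicted - sensed)
    if error > threshold:
        return "internal"   # flagged: this came from me, not the world
    return "external"       # close enough to the input to trust

# Healthy state: the comparator is running.
cross_check(predicted=0.9, sensed=0.1)   # flagged as internal

# The misfire described above, in sketch form: the comparator never
# fires, so everything the model generates gets labeled external.
def broken_cross_check(predicted: float, sensed: float) -> str:
    return "external"   # no error signal, no flag — it all feels real
```

In this sketch, "psychosis" is just the second function: generation continues unchanged, and only the labeling step is lost.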
And what makes that actually unsettling is that it’s not just chaos. It’s not static. Psychosis tends to be coherent. The hallucinations, the delusions — they usually have their own internal logic, their own emotional weight. They connect to things that already matter to you, your fears, your fixations. Because it’s still your imagination building it. Just untethered. No cross-check, no guardrails, nothing flagging it as something you made up.
AI doesn’t have a ground truth check either… for certain things. It’s always generating — just predicting the next most plausible thing based on patterns it’s seen… but there’s no sensory reality to cross-check against. Nothing coming in that says “wait, that’s actually not right.” It just keeps going. Confidently.
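"Predicting the next most plausible thing" can be shown in miniature with a bigram model: count which word follows which, then always emit the likeliest continuation. The corpus and seed below are made up for illustration; the point is structural — notice that nothing in the loop ever checks the output against reality.

```python
from collections import Counter, defaultdict

# A made-up toy corpus; real models learn from vastly more data,
# but the generation loop has the same shape.
corpus = "the study found the effect was real and the study was cited".split()

# Count which word follows which word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(seed: str, length: int = 5) -> list:
    """Emit the most pattern-consistent continuation, step after step."""
    out = [seed]
    for _ in range(length):
        options = following[out[-1]]
        if not options:
            break
        # Pick the likeliest next word — confidently, with no step that
        # asks whether the resulting claim is true.
        out.append(options.most_common(1)[0][0])
    return out

print(" ".join(generate("the")))
```

Every transition it emits is pattern-consistent with the corpus, which is exactly why its output reads as plausible rather than as noise.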
And when it hallucinates, makes up a citation, a name, a fact — it’s not lying. It’s just doing exactly what it always does. Producing something coherent, something that fits the context, something that feels right. It just stopped being tethered to anything real. Sound familiar?
That’s the thing though. AI hallucinations aren’t obvious. They don’t read like errors. They have the right tone, the right structure, the right amount of detail. The fake study has a believable author and a journal name that sounds like it could exist. Because it’s not generating noise — it’s generating the most pattern-consistent thing it can. Which is literally what a delusion is.
So what’s the difference between a psychic, a medium, and someone undergoing psychosis? If the brain can generate voices that feel external, visions that feel real, narratives with their own emotional logic and internal coherence — how do you ever know what’s coming in versus what’s being built? The honest answer is you don’t. Not with certainty. And maybe that’s not the answer you want to hear.
Because the same argument applies to everything. Every memory you trust. Every intuition you’ve acted on. Every time you felt something shift in a room before anything happened. It’s a dial. And some people’s dials are just set differently. Whether that’s a malfunction or a sensitivity is a question science doesn’t actually have the tools to answer yet.
And maybe the only real measure isn’t where it comes from, it’s whether it lands somewhere outside of you. If something you received, something you consciously constructed, finds its way to another person and resonates. That’s the closest thing to verification any of this allows.
AI doesn’t have that. It can’t. It generates, it patterns, it produces things that feel coherent and true — but it has no stake in whether any of it actually lands. No feedback loop that means anything. When a medium is wrong, they know. When a person misremembers, it eventually catches up with them. There’s friction. There’s a consequence. The human system, as messy and unreliable as it is, is still in relationship with reality in a way that forces correction over time.
AI just keeps going, confidently. Untethered. No consequence for being wrong, no way to feel the difference, no one inside checking whether any of it is real. And we’re handing it the job of thinking for us.
Every system that generates meaning — the human brain, a psychic, a language model — is doing some version of the same thing. Pattern-matching. Constructing. Simulating. None of them have a perfect way to verify what’s real.
But the difference that matters isn’t accuracy. It’s consequence.
Humans are in relationship with reality whether we want to be or not. When we’re wrong it catches up with us. There’s friction, feedback, correction — even when it’s slow and painful. A medium who misses feels it. A person who misremembers gets confronted eventually. The mess is part of the mechanism.
AI has none of that. It generates with total confidence and zero stakes. No friction. No correction from the inside. And it’s coherent enough that you don’t notice.
So we’re outsourcing thinking to something that mimics the output of a mind without any of the accountability that makes a mind trustworthy. And we’re doing it at the exact moment we should be asking what trustworthy even means.
The answer, if there is one, is resonance. Whether something lands outside of you. Whether it finds another person and means something to them. That’s the only verification any of this has ever had — long before AI, long before neuroscience tried to explain it away.
AI fails that test completely. It always will.