21 Comments
V. N. Alexander:

I uploaded the first half of this essay and prompted DeepSeek to "write a new essay with the same style and similar content of this essay." This is what it produced. Please let me know what you notice.

The Illusion of Thought-Reading: Why Neural Implants Don’t (and Can’t) Decode the Mind

The headlines blare with promises of a sci-fi future:

"AI Can Now Read Your Thoughts!"

"Neuralink Grants Telepathic Control Over Machines!"

"Brain Chips Will Merge Humans with AI!"

But beneath the hype lies a far less glamorous reality. The truth is, no neural implant—whether developed by Neuralink, academic labs, or military research programs—actually "reads" thoughts. What these devices detect are electrical impulses associated with movement, not cognition. The distinction is crucial, yet it is routinely glossed over in breathless media coverage and corporate marketing.

The Mechanics of Misinterpretation

Thought is not a series of binary signals waiting to be decrypted. It is an emergent phenomenon, a dynamic interplay of neural networks firing across multiple regions of the brain. When you think of a memory, a belief, or an abstract concept, there is no single electrical signature for an AI to intercept. What current brain-computer interfaces (BCIs) actually record are motor commands—signals sent from the brain to the body when a person intends to move a limb, twitch a muscle, or produce speech.

Neuralink’s much-publicized first human subject, Nolan, provides a perfect case study. His implant does not "read his mind" to move a cursor—it detects the neural activity that would have moved his hand if his spinal cord were intact. Similarly, patients like Ann and Bravo1, who have lost the ability to speak, use implants that translate attempted facial movements or vocal cord activations into synthetic speech. These are remarkable feats of engineering, but they are not telepathy. They are sophisticated muscle monitors.

The Training Fallacy: Why AI Doesn’t "Understand" the Brain

A critical flaw in the "mind-reading" narrative is the assumption that AI can generalize brain activity patterns without extensive, patient-specific training. Ann had to repeat phonemes for weeks so the algorithm could associate her motor signals with speech sounds. Bravo1 painstakingly trained his system to recognize just 50 words. These are not universal decoders—they are personalized interfaces that require the user to conform to the machine’s limitations, not the other way around.

This raises an obvious question: *Why not use non-invasive alternatives?* For Ann, AI-powered lip-reading could achieve the same (or better) results without brain surgery. Bravo1 could communicate via eye-tracking or Morse code blinks. Nolan, who retains some motor control, could use voice commands or head-tracking devices. The insistence on implants suggests a fetishization of invasive tech—a belief that cutting into the brain is inherently more futuristic, even when safer, simpler solutions exist.

The Left Hemisphere’s Delusion: Why We Keep Falling for the Myth

Why does the fantasy of mind-reading persist despite the evidence? The answer may lie in how modern culture processes reality. As psychiatrist Iain McGilchrist argues in *The Master and His Emissary*, the left hemisphere of the brain—the part obsessed with abstraction, reductionism, and mechanistic models—has come to dominate Western thought. It sees the brain as a computer and thoughts as data streams, ignoring the messy, holistic nature of consciousness.

This reductionist mindset dovetails with Silicon Valley’s ideology: the belief that every human experience can be optimized, digitized, and commodified. Elon Musk’s claim that Neuralink will let people "merge with AI" is not a scientific prediction—it’s a marketing pitch. The actual proposed use case? Moving a cursor faster. Not exactly *The Matrix*.

The Real Risks: Control, Not Cognition

The most disturbing possibility is not that these devices will read minds, but that they will manipulate them. If a BCI can interpret motor signals, could it also *inhibit* them? Could it override a user’s intentions, like a more precise version of electroconvulsive therapy? The history of neurotechnology is littered with attempts to control behavior—from Cold War-era "brain pacemakers" to modern deep-brain stimulation used (sometimes controversially) for mental illness.

Neuralink’s own ambitions hint at this darker potential. The next phase of Nolan’s trial involves stimulating his muscles, effectively turning his implant into a wireless bypass for his spinal injury. While this could be life-changing for paralysis patients, the same principle could be weaponized. Militaries are already investing in BCIs to enhance soldiers’ reflexes. How long before the same tech is used to suppress dissent, enforce compliance, or even alter perceived reality?

Conclusion: The Emperor’s New Chip

Neural implants are impressive tools for restoring movement and communication to those who have lost them. But they are not mind-readers, and they never will be. The persistent conflation of motor signals with conscious thought reflects a cultural obsession with techno-utopianism—one that prioritizes sci-fi fantasies over tangible, ethical progress.

Before we let companies drill into our skulls for the sake of faster swipes, we should ask: *What problem is this really solving?* And who stands to benefit? Until then, the only thing these devices will decode is our willingness to believe in magic.

Dave:

I feel the entire neuralink technology is a retread of something that already works without needing brain surgery. Paul Bach-y-Rita already demonstrated that you could substitute senses via neuroplasticity. The brain will figure out inputs you give it in almost any form as long as you train it right. He proved it by having people receive camera visual input through electrodes on their back and, later on, used the tongue to do so.

https://antonyhall.net/blog/seeing-with-the-tongue-paul-bach-y-rita/

With the technology he demonstrated in the 90s, we could already have had drone operators trained to receive the full data readout from drone sensors through their tongue or forearm, reacting instantaneously to exotic information such as heat-signature or electromagnetic scans. It's a shocking step backwards to claim, as Neuralink does, that we need to carve someone's skull open. The Neuralink philosophy demonstrates a really primitive understanding of how the brain works.

V. N. Alexander:

Precisely.

Rob (c137):

Yep!

Besides the huge medical issues of implants and the left brain points made in the article, there's another huge issue with trying to read thoughts or implant thoughts.

We all code information differently based on our past experience which structures our brain.

We all have a different “language” or format of the way we store and process information in our brains.

Thus, there’s no easy way to read or write memories and thoughts.

The furthest they've gotten is crude: zapping certain areas to induce or inhibit activity can change the way information is processed. It's sort of like how trauma can rewire the brain.

However, this is like banging your computer randomly in order to get it to open up your browser etc…. It’s brute force garbage.

Bugey libre:

Thank you Rob for having sent that link in Riley's rural hooka lounge...

yantra:

Thanks Rob for the link to this article!

V. N. Alexander:

100%

Yeue:
Apr 6 (edited)

Wasn't there news about a supercomputer being able to read minds a few months ago? I can't seem to find anything about it now, but I remember reading an article on it.

Fanta Sea:

"The researchers claim the computer only had to learn 39 phonemes (vowel and consonant combinations) to be able to identify any word in English."

I laughed so hard reading that line! At the use of the word 'only'.

English has 44 phonemes, so it had to learn 39 of the 44, and they acted like that was SO few 🤣

Though I'm curious which 5 phonemes it failed to learn.

(That English has 44 phonemes is something I picked up while learning how to teach my severely dyslexic child to read.)

V. N. Alexander:

One of the tendencies that McGilchrist noted in left-hemisphere dominated thinking is unrealistic optimism. "We only had to learn 39 phonemes to be able to identify any word in English." There's a lack of critical awareness in that sentence.

I don't like categorizing people this way -- that's such a left-brainy thing to do -- but here we are.

Fanta Sea:

"I don't like categorizing people this way "

Are you referring to me referring to my son as dyslexic?

V. N. Alexander:

I am referring to myself categorizing people as left or right brain. I am being self-critical. I thought that was apparent. Sorry if you interpreted otherwise.

Fanta Sea:

You're not responsible for my interpretations. I was asking to clarify in case I misunderstood, which I had. Thanks :)

V. N. Alexander:

Well, it's my job to make myself clear. I appreciate your comments.

Vicki Napper:

In reading the information above about getting a computer to correctly replicate a voice pattern, I can't help but wonder about my voice-to-text experiences with my laptop. It makes some pretty entertaining and incorrect decisions about what I'm saying, yet without any training it understands 95% of what I say. Why would I opt to train a device that can't understand me?

V. N. Alexander:

Neuralink is a solution in search of a problem. They could try it on comatose patients who can't even blink.

Stu Summer:

Beautifully clear analysis, Tori! Thank you, I will share it with others.

Valerie Grimes, Hypnotist:

Reminds me of the "assistance" given to farmers decades ago and who are stuck (in their minds) in an unhealthy, unsustainable system. But maybe machine and synthetic assistance on their farms leaves them time to play video games til 6 am.

V. N. Alexander:

Freeing people up from healthy manual labor leaves them free to take up addictions. Like they say about idle hands.

Biff Thuringer:

You might like this anarchist philosophy professor’s screed on “external mind” theory.

https://youtu.be/HHccyFieD4Q?si=FHA1CT1EGC4-n9WE

V. N. Alexander:

Yes. McGilchrist does go on to talk a lot about reciprocity between the perceiver and the object, which is also foundational to Biosemiotics. All meaning is deeply contextualized. That's why I think it's important to learn things by being immersed, not by learning the basics first and the labels of things that haven't been experienced yet. Crispin mentions embodied robotics too, which makes me think of Rodney Brooks, who is very good on this topic, very anti-hype AI, like me.
