I feel the entire Neuralink technology is a retread of something that already works without brain surgery. Paul Bach-y-Rita demonstrated decades ago that you can substitute senses via neuroplasticity: the brain will figure out inputs you give it in almost any form, as long as you train it right. He proved it by having people receive camera visual input through electrodes on their back and, later on, through the tongue.
https://antonyhall.net/blog/seeing-with-the-tongue-paul-bach-y-rita/
With the technology he showed in the 90s, we could already have had drone operators trained to receive the full data readout from a drone's sensors through their tongue or forearm, reacting instantaneously to exotic information such as heat-signature or electromagnetic scans. It's a shocking step backwards to claim, as Neuralink does, that we need to carve someone's skull open. The Neuralink philosophy demonstrates a really primitive understanding of how the brain works.
Precisely.
Yep!
Besides the huge medical issues of implants and the left-brain points made in the article, there's another huge problem with trying to read or implant thoughts.
We all encode information differently based on our past experience, which structures our brains.
We all have a different “language” or format of the way we store and process information in our brains.
Thus, there’s no easy way to read or write memories and thoughts.
The furthest they've gotten is crude: zapping certain areas to excite or inhibit them can change the way information is processed. It's sort of like how trauma can rewire the brain.
However, this is like banging on your computer randomly to get it to open your browser. It's brute-force garbage.
Thanks Rob for the link to this article!
100%
"The researchers claim the computer only had to learn 39 phonemes (vowel and consonant combinations) to be able to identify any word in English."
I laughed so hard reading that line! At the use of the word 'only'.
English has 44 phonemes, so it had to learn 39 of the 44, and they acted like that was SO few 🤣
Though I'm curious which 5 phonemes it failed to learn.
(That English has 44 phonemes is something I picked up while learning how to teach my severely dyslexic child to read.)
One of the tendencies that McGilchrist noted in left-hemisphere dominated thinking is unrealistic optimism. "We only had to learn 39 phonemes to be able to identify any word in English." There's a lack of critical awareness in that sentence.
I don't like categorizing people this way -- that's such a left-brainy thing to do -- but here we are.
"I don't like categorizing people this way "
Are you referring to me referring to my son as dyslexic?
I am referring to myself categorizing people as left or right brain. I am being self-critical. I thought that was apparent. Sorry if you interpreted otherwise.
You're not responsible for my interpretations. I was asking to clarify in case I misunderstood, which I had. Thanks :)
Well, it's my job to make myself clear. I appreciate your comments.
Reading the information above about getting a computer to correctly replicate a voice pattern, I can't help but think of my voice-to-text experiences with my laptop. It makes some pretty entertaining and incorrect decisions about what I'm saying, yet without any training it understands 95% of it. Why would I opt to train a device that can't understand me?
Neuralink is a solution in search of a problem. They could try it on comatose patients who can't even blink.
Beautifully clear analysis, Tori! Thank you, I will share it with others.
Reminds me of the "assistance" given to farmers decades ago, who are now stuck (in their minds) in an unhealthy, unsustainable system. But maybe machine and synthetic assistance on their farms leaves them time to play video games till 6 am.
Freeing people from healthy manual labor leaves them free to take up addictions. Like they say about idle hands.
You might like this anarchist philosophy professor’s screed on “external mind” theory.
https://youtu.be/HHccyFieD4Q?si=FHA1CT1EGC4-n9WE
Yes. McGilchrist does go on to talk a lot about reciprocity between the perceiver and the object, which is also foundational to Biosemiotics. All meaning is deeply contextualized. That's why I think it's important to learn things by being immersed, not by learning the basics first and the labels of things that haven't been experienced yet. Crispin mentions embodied robotics too, which makes me think of Rodney Brooks, who is very good on this topic, very anti-hype AI, like me.