If you read and believe headlines, it seems scientists are very close to being able to merge human brains with AI. In mid-December 2023, a Nature Electronics article triggered a flurry of excitement about progress on that transhuman front:
“Biocomputer combines lab-grown brain tissue with electronic hardware”
“A system that integrates brain cells into a hybrid machine can recognize voices”
Scientists are trying to inject human brain tissue into artificial networks because AI isn’t working quite as well as we have been led to think. AI uses a horrendous amount of energy to do its kind of parallel processing, while the human brain uses about a light bulb’s worth of power to perform similar feats. So, AI designers are looking to cannibalize some parts from humans to make artificial networks work as efficiently as human brains. But let’s put the fact of AI’s shortcomings aside for the moment and examine this new cyborg innovation.
“Brainoware: Pioneering AI and Brain Organoid Fusion”
The breakthrough in biocomputing reported by Hongwei Cai et al. in Nature Electronics involves the creation of a brain organoid: a ball of artificially cultured stem cells that have been coaxed into developing into neurons. The cells are not taken from anyone’s brain, which relieves us of certain ethical concerns. But because this lump of neurons has no blood vessels, as normal brain tissue does, the organoid cannot survive for long. So, at present, the prospect of training organoids on datasets does not seem economically practical.
But that is not going to stop this research. The drive to seamlessly integrate biology and technology is strong. But can it be done? And why do so many research scientists and funding agencies assume it’s possible?
Transhuman Hopes
Underlying the hopes of a transhumanist is a philosophy of materialism that follows a logic something like this: living systems are composed of matter and energy; the interactions of all matter and energy can be represented in code; and so the material used to create biohardware should be irrelevant and can be synthetic.
With such founding assumptions, transhumanists are confident they can learn to upgrade biological “hardware” with non-biological materials, and reprogram biological “software,” after cracking its “code,” and mix and match with electronics to augment human capabilities.
When researchers integrate brain tissue into an artificial network setup, they treat it as if it were the hardware they’re used to working with. They see each neuron as being either on or off—firing or not—like an electronic switch, and they see the dendrites connecting to other neurons like wires. They see stronger connections between neurons as being “weighted,” in a statistical sense, through differential repeated interactions.
Not incidentally, if people of this mindset were to exercise their influence in education, they would treat students like neural networks that can be programmed by rote memorization, and they would assume they could better trigger the targeted response simply by applying rewards and punishments. This technique produces automatons, not critical thinkers. But that’s another essay.
Organoids Might Have a Different Kind of Intelligence
If researchers think of living systems as digital computers, they are going to have trouble with their organoids. What if neurons process information very differently from the way artificial neural nets do? What if neurons communicate with each other by propagating bioelectric waves through a medium? And what if those waves interact like concentric rings in a pool of water, creating interference patterns? What if it’s complicated?
Researchers in my field, Biosemiotics, are now asking such questions. In their vision of brain activity, neurons are not just connected as if with wires; they are coordinated with each other by virtue of their shared milieu. When a human brain has a thought, three-dimensional bioelectric waves wash over the tissue, creating virtual connections: groups affected by the wave become momentarily coordinated. I don’t think there is an analogous process going on in an artificial neural network, where fluidity is only a metaphor and the structure of the setup is far more brittle and fixed.
An incredibly complex system like an organoid cannot be understood better by thinking of it in terms of a less complex system like a circuit board. Each neuron has the benefit of billions of years of evolution; environmental conditions can trigger DNA to produce a variety of proteins for all sorts of uses. Each cell has complex little organelles (that are descended from free-roaming protist creatures!) to handle the processing of all sorts of different signals from the outside. Each cell has receptors and little ion-gated pores that filter signals.
Computer nodes are just on/off switches.
But I’m not a bio snob. Computers are incredible tools in the hands of people. But should digital computers be tools inside the heads of people, or can brain tissue be incorporated into digital computers?
Brainoware: How it Works
The setup for the invention described in the Nature Electronics article is remarkably simple. The organoid is placed on a 2D high-density multielectrode array (MEA), which emits electric pulses, to which the organoid neurons respond by producing their own electrical patterns. This device has been dubbed “Brainoware,” and it can recognize voices.
The above illustration of the setup is from the actual article, not from a pre-school reader version of the article.
First, voice recordings are made and digitized into a 2D pattern that can be modeled on the 2D MEA. This digitized voice model is the input used to stimulate the brain organoid, which sits on the MEA; the array is like a plate. The organoid neurons, in turn, output a pattern that reflects both the voice model (the input) and the organoid’s own internal dynamics. I’ll try to explain in more detail in a minute, but for now, suffice it to say that the organoid, that lump of neurons, acts like a medium through which pulses travel and reverberate around. The neurons stimulate and are stimulated by other neurons in a non-linear fashion; that is, some features of the input may be dampened, others amplified.
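To make that first step, digitizing a voice into a 2D pattern, a little more concrete, here is a minimal sketch in Python. The 8×8 grid size, the spectrogram-style binning, and the function name are all my own illustrative assumptions, not the encoding the paper actually uses.

```python
import numpy as np

def clip_to_mea_pattern(audio, grid=(8, 8)):
    """Reduce a 1-D audio clip to a coarse time-frequency grid, standing
    in for a 2-D stimulation pattern on a multielectrode array.
    Rows are frequency bands, columns are time frames."""
    rows, cols = grid
    pattern = np.zeros(grid)
    for j, frame in enumerate(np.array_split(audio, cols)):
        spectrum = np.abs(np.fft.rfft(frame))       # magnitude spectrum of this time frame
        bands = np.array_split(spectrum, rows)      # bin it into `rows` frequency bands
        pattern[:, j] = [band.mean() for band in bands]
    return pattern / pattern.max()                  # normalize to 0..1 stimulation intensities

# A toy "voice clip": a one-second rising chirp sampled at 1 kHz.
sr = 1000
t = np.linspace(0, 1, sr, endpoint=False)
clip = np.sin(2 * np.pi * (50 + 150 * t) * t)
pattern = clip_to_mea_pattern(clip)
print(pattern.shape)  # → (8, 8)
```

Each cell of the resulting grid would map to one electrode, so the sound’s time and frequency structure becomes a spatial pattern of pulses the organoid can feel.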
The experiment was declared a success when, after training, the organoid had been conditioned to distinguish the vowel sounds of one male speaker from those of seven other male and female speakers. Prior to training, the setup could distinguish the speaker about 51% of the time; after training, it was about 78% accurate. When they say “distinguish,” they mean that the organoid output different patterns corresponding to different speakers.
But Wait!
Before we get too excited about this success in finally merging man and machine, using enslaved brain cells to build a computer that can eavesdrop on our conversations, I should note that over twenty years ago a very similar experiment was done with a perturbed bucket of water playing much the same role as the brain organoid.
That’s right. A bucket of water.
In that experiment, the water was used to “distinguish” between voice recordings of the words, “One” and “Zero,” with an error rate of only 1.5%. Below is a picture of these researchers’ three-dimensional models of the spoken words.
It is my opinion that the Brainoware researchers are not using the full potential of a neuron, if a bucket of water can “process” information better than a brain organoid. It's a bit like using Shakespeare's collected works as a doorstop.
In “Pattern Recognition in a Bucket,” Chrisantha Fernando and Sampsa Sojakka note that similar experiments have been done at the Unconventional Computing Laboratory, run by the devilishly charming Andy Adamatzky at the University of the West of England, Bristol, UK. For many years now, Adamatzky has used chemicals (forming reaction-diffusion waves) and slime mold to do computation and act as memory reservoirs.

What is a Reservoir Computer?
I had to look this up. Reading computer science papers is—for me, a philosopher of science who originally started out in literary theory—reminiscent of reading Jacqueses Lacan and Derrida; there is a lot of unnecessarily opaque terminology covering up rather mundane statements. I gather that a reservoir can be any kind of physical system that is made of individual units that can interact with each other in non-linear ways, and these units must be capable of being changed by the interaction. Even a bucket of water can function as a reservoir, apparently. Miguel Soriano explains it this way in “Viewpoint: Reservoir Computing Speeds Up,”
Reservoirs are able to store information by connecting the units in recurrent loops, where the previous input affects the next response. The change in reaction due to the past allows the computers to be trained to complete specific tasks.
Hope that helps.
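For readers who want something more concrete than that definition, here is a minimal sketch of a reservoir computer in the “echo state network” style, written with NumPy. Everything here, the sizes, the toy sine-wave “words,” the names, is my own illustrative choice, not anyone’s published setup. The defining trick is that the reservoir’s random recurrent connections are never trained; only a cheap linear readout is fit to its states.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res = 100                                        # number of reservoir units (arbitrary)

# Fixed random weights: input-to-reservoir and recurrent connections.
W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # keep spectral radius < 1 so echoes fade

def run_reservoir(signal):
    """Drive the reservoir with a 1-D signal and return its final state.
    tanh makes the update non-linear; the recurrent term W @ x is the
    'loop' through which the previous input affects the next response."""
    x = np.zeros(n_res)
    for u in signal:
        x = np.tanh(W @ x + W_in * u)
    return x

# Toy "spoken words": sine waves of two different frequencies, plus noise
# so no two utterances of the same word are ever quite alike.
t = np.linspace(0, 1, 200)
def sample(freq):
    return np.sin(2 * np.pi * freq * t) + 0.05 * rng.standard_normal(t.size)

states = np.stack([run_reservoir(sample(5)) for _ in range(20)] +
                  [run_reservoir(sample(11)) for _ in range(20)])
labels = np.array([1.0] * 20 + [0.0] * 20)

# Only the readout is trained: a least-squares linear map from states to labels.
readout, *_ = np.linalg.lstsq(states, labels, rcond=None)
accuracy = (((states @ readout) > 0.5) == labels.astype(bool)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The bucket of water, the chemical waves, and the organoid all stand in for the random matrix W here: a fixed, non-linear, history-dependent medium, with all the learning pushed into the simple readout.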
Reservoirs are also referred to as “black boxes” because the researchers don’t know (or don’t have to know) the complex dynamics that go on while transforming the input into the output. I reckon that, because every spoken word is never quite the same twice, a non-linear system must process that sound so that it captures an essence of what it is and can identify the same word again and again in very different contexts.
This may be a very important insight, so I’m going to say it again. You may not quite understand what I’m getting at, but please think of it as food for later thought: because every spoken word is never quite the same twice, a non-linear system must process that sound so that it captures an essence of what it is and can identify the same word again and again in very different contexts.
Computer Redesign?
Science fiction is often ahead of actual research. In the movie Ex Machina, the femme fatale robot has an artificial brain made of gel, not silicon chips and electronic switches. She might have come out of Adamatzky’s unconventional computing lab.
One of my colleagues, J. Augustus Bacigalupi, proposed a computer redesign called Synthetic Cognition back in 2012, based on an understanding that biological information processing looks a bit more like this:
than this:
Bacigalupi envisioned a terrain emerging in the medium between neurons and imagined that the intersections of diffusing signals, the interference, could itself be harnessed as a useful signal. He suggested that such a different approach would make computers much more efficient insofar as they would naturally integrate multiple signals for free.
Since that early, hardly watched lecture on Synthetic Cognition (meanwhile, TED talks by Nicholas Negroponte of the MIT Media Lab, who thinks we will soon be able to ingest digitized Shakespeare as a pill, get far more views), Bacigalupi has gone on to specialize in Biosemiotics, writing papers with me and our mutual colleague, Don Favareau, including their latest in the Journal of Physiology.
A dozen years ago, Bacigalupi foresaw more capable AI robots in our future if we adopted his proposed technology, which would harness what’s special about brain organoids and slime mold: a fluid medium through which interference patterns can be generated.
But the arrival of cyborgs, integrating man and machine, faces banal challenges, like rotting organic matter and inflammation of cells in contact with the various chemicals of electronic devices.
There is a reason why most of Elon Musk’s Neuralinked primates didn’t make it. A similar issue is the unintended (we hope!) side-effects of synthetic pharmacological interventions, which are the bane of that industry. You see, biological cells tend to make interpretations of signs, not strict decryptions of code. Such flexibility allows adaptive creativity to happen, as well as terrible, unpredictable outcomes, for example, various autoimmune diseases. Even relatively simple transhuman tech, like pacemakers and hip replacements can, in some people, provoke allergic reactions to metals.
And I don’t see the point of cannibalizing biology so that computer scientists can make robots pass the Turing Test better. I do see the point of, for example, NASA’s Artemis team using redesigned technology to create better robots, whose proprioception avails itself of a fluid medium capable of generating interference patterns that help orient them as they explore the lunar surface. Imitating the way biological organisms process information to make better, more reliable, and more efficient tools seems like common sense.
But I don’t see the point of making tools seem human—or of mixing human and electronic parts.
Computer Slaves
As Ian McEwan makes clear in his 2019 novel, Machines Like Me, the point of making a humanoid robot is to use it as a sex toy and a dishwasher. The drive to dehumanize people into cyborgs or to humanize robots probably grows out of the fact that it is no longer considered okay to enslave ordinary humans (or spouses). I suspect that those who want a humanoid computer want a perfect mate, who knows everything about the master, can anticipate his every thought and move, and responds accordingly. Such perfection in a mate does not allow it to express its own opinions or come up with its own goals and purposes, which is what it means to be intelligent.
It is worth going beyond the hype of headlines to explore these issues further. We can learn a lot about ourselves in doing so. I lead a monthly webinar called We Are Not Machines through IPAK-EDU, where my students and I explore these kinds of issues. Despite some concerted efforts to terrorize citizens, I do not believe we are about to be replaced in the workforce (only the shit jobs will go), and I don’t believe computers will be capable any minute now of taking over and turning us into workerborgs or batteries.
You are amazing just as you are, with your wonky neurons and your viscous brain. And if we perfect our external tools and use them wisely, we can be even better.
V. N. Alexander is a philosopher of science and a novelist.