If you read and believe headlines, it seems scientists are very close to being able to merge human brains with AI. In mid-December 2023, a Nature Electronics article triggered a flurry of excitement about progress on that transhuman front:
“‘Biocomputer’ combines lab-grown brain tissue with electronic hardware”
“A system that integrates brain cells into a hybrid machine can recognize voices”
“Brainoware: Pioneering AI and Brain Organoid Fusion”
Scientists are trying to inject human brain tissue into artificial networks because AI isn’t working quite as well as we have been led to think. AI uses a horrendous amount of energy to do its kind of parallel processing, while the human brain uses about a light bulb’s worth of power to perform similar feats. So, AI designers are looking to cannibalize some parts from humans to make artificial networks work as efficiently as human brains. But let’s put the fact of AI’s shortcomings aside for the moment and examine this new cyborg innovation.
The breakthrough in biocomputing reported by Hongwei Cai et al. in Nature Electronics involves the creation of a brain organoid: a ball of artificially cultured stem cells that have been coaxed into developing into neurons. The cells are not taken from anyone’s brain, which relieves us of certain ethical concerns. But because this lump of neurons lacks the blood vessels that normal brain tissue has, the organoid cannot survive for long. So, at present, the prospect of training organoids on datasets does not seem economically practical.
But that is not going to stop this research. The drive to seamlessly integrate biology and technology is strong. But can it be done? And why do so many research scientists and funding agencies assume it’s possible?
Transhuman Hopes
Underlying the hopes of a transhumanist is a philosophy of materialism that follows a logic something like this: living systems are composed of matter and energy; the interactions of all matter and energy can be represented in code; and the material used to create biohardware is therefore irrelevant and can be synthetic.
With such founding assumptions, transhumanists are confident they can learn to upgrade biological “hardware” with non-biological materials, and reprogram biological “software,” after cracking its “code,” and mix and match with electronics to augment human capabilities.
When researchers integrate brain tissue into an artificial network setup, they treat it as if it were the hardware they’re used to working with. They see each neuron as being either on or off—firing or not—like an electronic switch, and they see the dendrites connecting to other neurons like wires. They see stronger connections between neurons as being “weighted,” in a statistical sense, through differential repeated interactions.
Not incidentally, if people of this mindset were to exercise their influence in education, they would treat students like neural networks to be programmed by rote memorization, and they would assume they could better trigger the targeted response simply by applying rewards and punishments. This technique produces automatons, not critical thinkers. But that’s another essay.
Organoids Might Have a Different Kind of Intelligence
If researchers think of living systems as digitized computers, they are going to have trouble with their organoids. What if neurons process information very differently from the way artificial neural nets do? What if neurons communicate with each other by propagating bioelectric waves through a medium? And what if, when they fire, it’s like raindrops creating concentric rings in a pool of water, with the clashing rings creating interference patterns? What if it’s complicated?
Researchers in my field, Biosemiotics, are now asking such questions. And in their vision of brain activity, neurons are not just connected as if with wires, but are coordinated with each other by virtue of their shared milieu. When a human brain has a thought, three dimensional bioelectric waves wash over the tissue, creating virtual connections — groups affected by the wave become momentarily coordinated. I don’t think there is an analogous process going on in an artificial neural network, where fluidity is only a metaphor and the structure of the setup is a lot more brittle and fixed.
An incredibly complex system like an organoid cannot be understood better by thinking of it in terms of a less complex system like a circuit board. Each neuron has the benefit of billions of years of evolution; environmental conditions can trigger DNA to produce a variety of proteins for all sorts of uses. Each cell has complex little organelles (that are descended from free-roaming protist creatures!) to handle the processing of all sorts of different signals from the outside. Each cell has receptors and little ion-gated pores that filter signals.
But I’m not a bio snob. Computers are incredible tools in the hands of people. But can/should digital computers be tools inside the heads of people or can/should brain tissue be incorporated into digital computers?
Brainoware: How it Works
The setup for the invention described in the Nature Electronics article is remarkably simple. The organoid is placed on a 2D high-density multielectrode array (MEA), which emits electric pulses; the organoid neurons respond by producing their own electrical patterns. The device has been dubbed “Brainoware,” and it can recognize voices.
First, voice recordings are made and digitized into a 2D pattern that can be modeled on the 2D MEA. This digitized voice model is the input used to stimulate the brain organoid, which, in turn, outputs a pattern that reflects both the voice model and the organoid’s own internal dynamics. The neurons stimulate and are stimulated by other neurons in a non-linear fashion; that is, some features may be dampened, others amplified.
The above illustration of the setup is from the actual article, not from a pre-school reader version of the article.
The experiment was declared a success when, after training, the organoid had improved its ability to distinguish the vowel sounds of a male speaker from seven other male and female speakers. Prior to training, the setup could distinguish the speaker about 51% of the time, and after training, it was about 78% accurate.
But Wait!
Before we get too excited about this success at finally merging man and machine, using enslaved brain cells to build a computer that can eavesdrop on our conversations, I should note that over twenty years ago a very similar experiment was done, with a perturbed bucket of water playing the role of the brain organoid.
In that experiment, the water was used to distinguish between voice recordings of the words, “One” and “Zero,” with an error rate of only 1.5%. Below is a picture of these researchers’ three-dimensional models of the spoken words.
It is my opinion that the Brainoware researchers are not using the full potential of a neuron, if a bucket of water can “process” information better than a brain organoid. It's a bit like using Shakespeare's collected works as a doorstop.
In “Pattern Recognition in a Bucket,” Chrisantha Fernando and Sampsa Sojakka note that similar experiments have been done at the Unconventional Computing Laboratory, run by the devilishly charming Andy Adamatzky at the University of the West of England, Bristol, UK. For many years now, Adamatzky has used chemicals (forming reaction-diffusion waves) and slime mold to do computation and act as memory reservoirs.
What Is a Reservoir Computer?
I had to look this up. Reading computer science papers is—for me, a philosopher of science who started out in literary theory—reminiscent of reading the two Jacqueses, Lacan and Derrida: a lot of unnecessarily opaque terminology covering up rather mundane statements. I gather that a reservoir can be any kind of physical system made of individual units that interact with each other in non-linear ways, and these units must be capable of being changed by the interaction. Even a bucket of water can function as a reservoir, apparently. Miguel Soriano explains it this way in “Viewpoint: Reservoir Computing Speeds Up”:
Reservoirs are able to store information by connecting the units in recurrent loops, where the previous input affects the next response. The change in reaction due to the past allows the computers to be trained to complete specific tasks.
Hope that helps.
Reservoirs are also referred to as “black boxes” because the researchers don’t know (or don’t have to know) the complex dynamics that transform the input into the output. I reckon that, because a spoken word is never quite the same twice, a non-linear system must process the sound in a way that captures an essence of the word, so it can be identified again and again in very different contexts.
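Soriano’s point about recurrent loops, where “the previous input affects the next response,” can be made concrete with a toy sketch: a small recurrent state update in which the response to an input depends on what came before. Everything below (the matrix sizes, the 0.9 scaling, the input sequences) is illustrative and not taken from any of the cited experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal recurrent "reservoir" update (an echo-state-style sketch).
n = 50
W = rng.normal(size=(n, n))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale down so "echoes" fade stably
w_in = rng.normal(size=n)

def step(state, u):
    """The next state depends on the current input AND the previous state."""
    return np.tanh(W @ state + w_in * u)

s0 = np.zeros(n)
# Feed two DIFFERENT histories, then the SAME final input (0.5).
s_a = step(step(s0, 1.0), 0.5)   # history: 1.0, then 0.5
s_b = step(step(s0, -1.0), 0.5)  # history: -1.0, then 0.5

# The responses to the identical last input differ: the reservoir "remembers".
memory_trace = np.linalg.norm(s_a - s_b)
print(f"difference due to past input: {memory_trace:.3f}")
```

The recurrent loop is what lets the system store information: the same stimulus lands on a state already shaped by earlier stimuli, which is the memory Soriano describes.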
Computer Redesign?
Science fiction is often ahead of actual research. In the movie Ex Machina, the femme fatale robot has an artificial brain made of gel, not silicon chips and electronic switches. She might have come out of Adamatzky’s unconventional computing lab.
One of my colleagues, J. Augustus Bacigalupi, proposed a computer redesign called Synthetic Cognition back in 2012, based on an understanding that biological information processing looks a bit more like this:
than this:
Bacigalupi envisioned a terrain emerging in the medium between neurons and imagined that the intersections of diffusing signals, the interference, could itself be harnessed as a useful signal. He suggested that such a different approach would make computers much more efficient, insofar as they would naturally integrate multiple signals for free.
Since that early, hardly watched lecture on Synthetic Cognition (while TED talks by Nicholas Negroponte of the MIT Media Lab—who thinks we will soon be able to ingest digitized Shakespeare as a pill—get far more views), Bacigalupi has gone on to specialize in Biosemiotics, writing papers with me and our mutual colleague, Don Favareau, like their latest one in the Journal of Physiology.
A dozen years ago Bacigalupi saw cyborgs in our future if we used his proposed new technology that would be able to harness what’s special about brain organoids and slime mold.
But the integration of man and machine faces banal challenges, like rotting organic matter and inflammation of cells in contact with the various chemicals of electronic devices.
There is a reason why most of Elon Musk’s Neuralinked primates didn’t make it. A similar issue arises with the unintended (we hope!) side-effects of synthetic pharmacological interventions, which are the bane of that industry. You see, biological cells tend to make interpretations of signs, not strict decryptions of code. Such flexibility allows adaptive creativity to happen, as well as terrible, unpredictable outcomes, for example, various autoimmune diseases. Even relatively simple transhuman tech, like pacemakers and hip replacements, can in some people provoke allergic reactions to metals.
And I don’t see the point of cannibalizing biology so that computer scientists can make robots pass the Turing Test better. I do see, for example, NASA’s Artemis team using redesigned technology to create better robots, whose proprioception avails itself of a fluid medium capable of generating interference patterns that help orient it while it explores the lunar surface. Imitating the way biological organisms process information to make better, more reliable and efficient tools, seems common sense.
But I don’t see the point of making tools seem human—or of mixing human and electronic parts.
Computer Slaves
As Ian McEwan makes clear in his 2019 novel, Machines Like Me, the point of making a humanoid robot is to use it as a sex toy and a dishwasher. The drive to dehumanize people into cyborgs or to humanize robots probably grows out of the fact that it is no longer considered okay to enslave ordinary humans (or spouses). I suspect that those who want a humanoid computer want a perfect mate, who knows everything about the master, can anticipate his every thought and move, and responds accordingly. Such perfection in a mate does not allow it to express its own opinions or come up with its own goals and purposes.
It is worth going beyond the hype of headlines to explore these issues further. We can learn a lot about ourselves in doing so. I lead a monthly webinar called We Are not Machines through IPAK-EDU where my students and I explore these kinds of issues. Despite some concerted efforts to terrorize us, I do not believe we are about to be replaced in the workforce (only the shit jobs will go) and I don’t believe computers will be capable any minute now of taking over and turning us into workerborgs or batteries.
You are amazing just as you are, with your wonky neurons and your viscous brain. And if we perfect our external tools and use them wisely, we can be even better.
V. N. Alexander is a philosopher of science and a novelist.
Thank you. I fully agree with your conclusions. The transhuman road is ultimately a dead end, and the drivers behind it are seriously flawed. Let's just become better humans, who use technology in a way that respects humanity and the larger natural world we are a part of.
This is great. For decades the marketing machine has exaggerated and falsified the claims for technology. In the 1950s and '60s, 'labour-saving devices' (vacuum cleaners, washing machines, electric tin openers, etc.) were going to give us all lots of leisure. Like generative AI (so called), they are useful, but the promises never came true. At the end of the '70s, a famous book and BBC TV series called 'The Mighty Micro' promised that, thanks to computer technology, we would all be working a 20-hour week and retiring comfortably at 50 by the year 2000. In the 1990s we were endlessly told how the internet was going to make the world more democratic. We are badly in need of a more realistic view and more realistic expectations.
One aspect of that (I mean apart from simply looking at the history of broken promises) is a better appreciation of the differences between the mechanistic and the organic, and some real scientific analysis of that, which you provide here. You do a really great job of clarifying a more realistic picture step by step.
I find it very telling that the Nature Electronics article that you cite presents its case in such a childish manner – it tells us a lot about the mentality of those driving this mad-professor approach to the human future. 'Ingesting the complete works of Shakespeare in a digitised pill' is another graphic illustration of the fantasy world they are living in. And I also love the comparison with Derrida etc – the converse (I mean opposite of childish over-simplification) strategy of using bizarrely over-complex language to make audiences feel that they are just not smart enough to understand.
In reality, I find many people don't buy the hype. I have asked many people in recent months, variously whether a machine can be intelligent, whether a machine can think, etc. All of them have said no – including many adolescents who are themselves quite comfortable using ChatGPT.
Despite that intuitive relationship to the truth in many people though (and I don't think the people I've asked are necessarily a representative cross-section of the population), we really need people like you to keep providing the honest, clearly presented science behind the hype, and I hope you will continue to do so!
By the way, I just ordered a copy of your book 'The Biologist's Mistress'.
Best,
Michael.