I’ve never wanted to be able to control any of my devices with my thoughts. I am perfectly happy to use a physical interface that I can turn off, let go of, walk away from. What about you? Is your keyboard holding you back? Is your mouse slowing you down? Do you want to just think a post without having to thumb it in? Why is cyborgian tech being pushed so hard on us? Did anyone ask for it? Does anyone need it?
In this essay, I will look at the ethicists who are raising concerns about Brain Machine Interface (BMI) technology, the projected utility of which would be the ability to swipe with your mind and to click with your brainwaves. Frankly, I don’t see a demand for that, not even for paralyzed people, since we have brain-surgery-free interfaces, such as those used by Stephen Hawking. Also, do we really want our impulsive rage tweets instantly sent?
No, no, no. That’s not where this tech is going. Nobody wants BMI to perform ordinary tasks in a new way, especially not if it means wearing some weird helmet all day or getting brain surgery. The carrot here is the promise of enhanced mental abilities; the stick is a coordinated fearmongering campaign to convince us that, any moment now, transhuman AI cyborg legions will out-perform us mentally, so everybody is going to have to get a BMI just to keep up. Unfortunately, when we do, our brains will be readable to anyone with the right software, and we won’t be able to distinguish between our own decisions and those implanted in our heads via wireless devices. Neuroethicists are suggesting that we have to act fast, maybe even rewrite our Constitutions!
I find that their new neurorights suggestions are designed not so much to protect us as to limit the ways in which we may be violated for the greater good. Neuroethicists are wolves in sheep’s clothing. Let’s see if you agree with my assessment.
For the larger context, let’s first look at what’s known as the “Trolley Problem” in the field of ethics. Suppose a man is operating the switch station in a trainyard. If a runaway train trolley is about to plow into five workers on a track, is he morally obligated to pull the lever to reroute the trolley so that it only kills a single worker on another track?
You may notice that I have made a significant change to the standard image depicting this dilemma, this one lifted from Wikipedia. My switchman is not acting under his own agency. He is a representative of government, acting according to some policy or standard procedure. That’s why he is pictured with a government building behind him. That one change alters everything. According to protocol, he has to kill one guy to save five. But when actions are automated, there is no agency, and therefore what the man does cannot be described as choosing to act ethically. Old-time ethicists, for example Aquinas or Kant, argued that morality flows from the agent who freely decides the action, but today the idea that an individual has the responsibility (not the freedom, not the right, but the responsibility) to choose between right and wrong has all but disappeared from the discussion of ethics.
Jose Munoz works with the Mind-Brain group in Spain; he is also affiliated with Harvard Medical School and a few other really important places. Reviewing the work of a colleague, Nita Farahany, he sums up the approach of today’s neuroethicists to a T. He argues that we need to “establish guidelines for neural rights.” (Guidelines, that sounds gentle, but I wonder if they will have the kind of power the CDC guidelines had, which were implemented with all the force of law.) Munoz says there needs to be discussion between academics, governments, corporations and the public. (I wonder who is going to be doing all the talking in that discussion?) He says “citizens” must be guaranteed access to their data. (Okay, I have to jump through some hoops to find out what data has been collected on me without my knowledge or consent, and then what?) I note the use of the word “citizens” instead of human beings or people. He doesn’t want us to forget that, as citizens, we are subjects of a state. He also believes “a literacy around such data must be cultivated,” which is a weaselly way of saying people need to be told what to think about data collection. When were people asked to agree to data collection? The possibility of refusing to allow any data to be collected at all is not on this menu of ethical policies.
Nowadays, ethicists seem simply to assume that the state ultimately needs to make those decisions about ethics, based on consensus, of course. So that’s okay, because it’s a democratic loss of agency. Soon AI will be optimizing those decisions for us, we’re told. The individual human being has become a mere instrument through which someone else’s “ethical” choices are executed. This is not ethical. This is dangerous.
We can adapt the trolley problem to the question of whether or not the individual ought to make personal sacrifices for the good of society. The illustration below shows the kind of logic that says people ought to risk their lives in war for the good of their country, or take a vaccine that carries some risk because it is necessary for herd immunity.
In this essay, I will not argue that the individual has the right to be selfish and decide not to make personal sacrifices for the supposed good of others. That’s not why we must value individual responsibility over the collective good. We value individual responsibility because, if individuals are compelled, coerced or bribed to make sacrifices for the collective good, there is a grave danger that the entity that has the power to mandate policy could use that power to harm, unintentionally or intentionally. At least when individual responsibility is granted, more brains are applied to problems and more opportunities will exist to find good solutions. Do we really want to wage war? Are vaccines actually safe and effective? Moreover, the mistakes an individual may make are usually confined to a small circle. The mistakes a policy-maker makes affect the entire population.
After a three-year nightmare, in which Mistakes were Not Made—to reference Margaret Anna Alice’s poem by that title, accusing the “philanthropaths” and other leaders of intentional democide—we ought to be skeptical of any “ethicists” asking for more sacrifices from us to further policy-makers’ notions of a greater good. As far as C0vlD goes, the consensus is developing that they got everything wrong: the lockdowns, masking and isolation, withholding early treatment and repurposed drugs, and promoting an experimental vaccine.
Lately, we are hearing quite a lot about the need to redefine human rights as the societal landscape adapts to new technologies that are changing what it means to be human. Claims are being made that a new “Transhumanism Ethics” is needed to save us from the dangers of hackers or governments and corporations who may want to employ AI to read our thoughts and control our minds.
Ienca and Andorno, authors of “Towards New Human Rights in the Age of Neuroscience and Neurotechnology,” note how much information is collected on internet users now, and they assume that new technology will collect brain data too. These are the kinds of ethical considerations they ponder:
“For what purposes and under what conditions can brain information be collected and used? What components of brain information shall be legitimately disclosed and made accessible to others? Who shall be entitled to access those data (employers, insurance companies, the State)? What should be the limits to consent in this area?”
They do not even mention the more obvious argument that any online data collection could be considered a violation of privacy. Strangely, the first right they discuss in this paper is the right of individuals to decide to use emerging neurotechnologies. In their discussions, it is also assumed that the new technologies will do what they’re advertised to do. There is no discussion of the need for long-term studies or testing for possible technology blunders. We should recall that the Emergency Use Authorization for the C0vld vaccine was justified on the grounds that people should have the right to use untested technology if they want to. In her discussion of the neurorights being pushed in Chile, Whitney Webb notes that the poor and disenfranchised are being ushered to the front of the BMI trial volunteer line.
I happen to think that adults should be able to opt in for new, possibly dangerous, therapies, get double-D breast implants, commit suicide, do heroin, work as prostitutes or castrate themselves, if they freely choose to. But I don’t think it’s ethical to encourage or enable anyone to commit self-harm. An ethical society generally tries to help people see that they may have other options. We don’t want to encourage people to take risks, certainly not unnecessary ones.
Ienca and Andorno also inform us that
“Most human rights, including privacy rights, are relative, in the sense that they can be limited in certain circumstances, provided that some restrictions are necessary and are a proportionate way of achieving a legitimate purpose. In specifically dealing with the right to privacy, the European Convention on Human Rights states that this right admits some restrictions ‘for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others’ (Art. 8, para 2).”
This sounds like Europeans do not have privacy rights; they have privileges that may be withdrawn any time the state deems it necessary.
It is well known that godzillionaire Elon Musk, forsaking his pretensions to Libertarianism, wants government regulations on AI, even as he hypes his invasive Neuralink tech, which is meant to be irreversibly implanted in human brains in order to read them. So far the test primates haven’t fared so well, and the Fall 2022 Show & Tell was underwhelming in the extreme. One problem, among the many they’ve had, was that the brainwave patterns a primate produced for a particular letter, which the AI had learned on one day, morphed into a different pattern five or six days later.
Way back when, Heraclitus already understood that we never step into the same river twice. All biological processes are dynamic and ever in flux, especially emergent brainwaves; they have to be, because the world is too. Living beings must keep changing just to stay more or less the same. Computer algorithms, even Deep Learning ones, are not as plastic as brainwaves.
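To make the drift problem concrete, here is a minimal, hypothetical sketch in Python. It uses made-up numbers, not any real neural recordings or anything from Neuralink’s actual pipeline: a classifier is trained on synthetic “day 1” brain-signal features and then tested on “day 6” features whose statistics have shifted, the way the primate’s letter patterns reportedly did.

```python
# Toy illustration of why signal drift breaks a fixed decoder.
# All data here are synthetic; this is not any real BMI pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_session(n_per_class, shift):
    """Fake 2-class 'neural features'; both class means drift by `shift`."""
    a = rng.normal(loc=0.0 + shift, scale=1.0, size=(n_per_class, 8))
    b = rng.normal(loc=1.5 + shift, scale=1.0, size=(n_per_class, 8))
    X = np.vstack([a, b])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

X_day1, y_day1 = make_session(200, shift=0.0)   # training day
X_day6, y_day6 = make_session(200, shift=1.0)   # same classes, drifted signal

clf = LogisticRegression(max_iter=1000).fit(X_day1, y_day1)
print("accuracy on day-1 data (training day):", clf.score(X_day1, y_day1))
print("accuracy on day-6 data (after drift): ", clf.score(X_day6, y_day6))
```

The decoder that looked nearly perfect on the day it was trained falls back toward coin-flip territory once the underlying signal wanders, unless someone keeps retraining it.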
Since the time of that cringy Show & Tell, Neuralink has been denied permission to prey upon human subjects, for the time being. But I doubt that this will prevent the bad money that has been thrown at this technology from attracting more good money. The investors have to make their pump money back before they dump. The FDA will come around.
I am not against technological, transhuman-y progress that helps people overcome hardships. Bionic arms are awesome, and even limb regeneration sounds like a great idea to pursue, carefully. But something is not right with this discussion of BMI tech and neurorights.
This year, Nita Farahany, Professor of Law and Philosophy and self-described neuroethicist at Duke University, has been promoting her new book, The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology, which defends nothing of the sort. In this book, she is not giving guidance to help people make the best ethical decisions for themselves and their families. She is trying to sell ethical norms to be imposed on us all.
In her book, and in a talk at a World Economic Forum meeting in Davos, Farahany opines with a forked tongue. For example, while initially defending the idea of mental privacy and “cognitive liberty” (Oh, brother, do they have to make up such awkward new terms?), she quickly concedes that it is not an absolute right, because, after all, one of the most basic things we do as humans is try to understand what our fellow humans are thinking. We must strike a balance, Farahany argues, between individual and societal interests. That means the policy-makers get to decide which rights you need to give up. For instance, she says it might be a good idea to make truckers wear EEG devices to monitor fatigue, for the collective good. If they fall asleep at the wheel, they could potentially kill five or six people. Can I suggest instead that truckers be paid reasonably well for the job they do, so that they don’t want to drive longer than eight hours per day? Alternatively, can we make our political representatives submit to constant surveillance of all their emails, phone calls and even in-person conversations? Because their decisions could potentially kill millions of people.
Farahany praises personal devices that monitor biological data for their potential to give workers quantitative data about their performance so that they can make “informed self-improvements.” The fact that FitBits are so popular, she claims, indicates that people are enthusiastic about being monitored and scored. But I am pretty sure Amazon warehouse workers are not clamoring for neurofeedback devices that will help them be more profitable for the stockholders. Coerced monitoring and bogus quantitative assessment are unethical, I think. Farahany concedes that such monitoring ought to be voluntary and believes that employees will want to accept these devices for self-improvement. Today, we have something similar with auto insurance; people get lower rates if they agree to be monitored while driving. But whenever a reward is offered for sacrificing privacy, it is not ethical. It is coercive. The poor will be more likely to submit than the wealthy.
Is monitoring employee performance even helpful? As Yagmur Denizhan argues in “Simulated Education and Illusive Technologies,” when people are put into situations where they are judged by points earned—not by more general and holistic qualitative evaluations—the crafty ones quickly focus on gaming the system so that they can earn more points, with less effort and lower quality work. But Farahany never questions the assumption that subjecting employees to negative and positive feedback will be good for productivity.
Meanwhile, the fearmongering propaganda keeps coming thick and fast. I looked up the research mentioned in this Vox article. The Facebook-funded project is at the University of California, San Francisco. The study involved three participants who already had electrodes implanted in their brains as part of preparation for neurosurgery to treat seizures. To come up with their thought-reading algorithm, the scientists had to train the AI, which they could do thanks to the implants that gave them a picture of the brain patterns. They asked the patients questions and modeled their answer patterns using AI. In this experiment, the context of the thoughts was well-defined. For example, the subjects were asked, “How is your room?” and they had a limited set of responses such as “cold,” “hot,” or “fine.” After reading about these alleged tech miracles, it seems to me that the claim that this new interface could “pick up thoughts directly from your neurons and translate them into words” is a bit of an exaggeration. The subjects had electrodes in their heads, and the accuracy of the AI, with intense, focused training, was 60% at best.
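To see how constrained that kind of setup is, here is a toy sketch with entirely synthetic “neural features” and a stand-in classifier, not anything from the UCSF study itself: the decoder only ever has to pick one of a few canned answers, which is a long way from translating free-form thoughts into words.

```python
# A cartoon of a closed-vocabulary decoding setup (synthetic data only):
# the "mind reader" is just a classifier choosing among a few canned answers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

ANSWERS = ["cold", "hot", "fine"]          # the entire response set
rng = np.random.default_rng(1)

# Fake "neural feature" vectors: one noisy cluster per canned answer.
X = np.vstack([rng.normal(loc=i, scale=2.0, size=(100, 16)) for i in range(3)])
y = np.repeat(np.arange(3), 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

acc = clf.score(X_te, y_te)
print(f"decoder accuracy: {acc:.0%}  (chance with 3 answers: {1/3:.0%})")
print("decoded answer for one trial:", ANSWERS[clf.predict(X_te[:1])[0]])
```

Even in this cartoon, the machine never produces a thought; it only picks the most likely item from a menu it was handed in advance.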
The half dozen neuroethicists I read in preparation for this essay, including Farahany, insist that technologies already exist that have the power to decode our thoughts and control them. The tech they mention as examples are EEG, fMRI and Deep Brain Stimulation. I followed their links to dozens of studies and then read the cited studies, as if I were on a scavenger hunt, and I was again and again underwhelmed by the actual results.
With an EEG device you can pick up patterns that, if decoded, might give you some sense of the subject’s emotional state. Are you picking up gamma waves or alpha waves? Is the subject focused, or in a dream state, anxious or relaxed?
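If you are curious what EEG “decoding” typically amounts to in practice, here is a minimal sketch in Python with a synthetic signal standing in for a real recording; the sampling rate and band edges are common conventions, not the specs of any particular device. All a typical analysis yields is the relative power in a handful of frequency bands, a crude summary of arousal or relaxation, not a transcript of thoughts.

```python
# Minimal sketch of routine EEG analysis: relative power per frequency band.
# The signal is synthetic; fs and the band edges are common conventions.
import numpy as np
from scipy.signal import welch

fs = 256                                    # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)                # ten seconds of "EEG"
# Synthetic signal: a strong 10 Hz (alpha) rhythm buried in noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 80)}
in_range = (freqs >= 1) & (freqs <= 80)
total = np.trapz(psd[in_range], freqs[in_range])
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs <= hi)
    share = np.trapz(psd[mask], freqs[mask]) / total
    print(f"{name:6s} {share:5.1%}")        # a crude "state" summary, nothing more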
In her WEF talk mentioned above, Farahany helpfully gives this graphic of different faces to illustrate the different emotional states that an EEG device might detect. Why not just look at the subject’s face to read her emotion? Because people, like Winston Smith, might try to conceal their feelings and our Oligarchs don’t want that?
EEGs do not read minds. This is hype, maybe to attract investment; that’s my least cynical guess about their motivations. I believe R&D departments are hoping that, if they can just get people to wear EEG devices while online, and also record what kinds of tasks they are performing, they can begin to match tasks to EEG patterns using AI. Good luck with that. As a non-invasive device, an EEG picks up electrical activity only after it has been attenuated and smeared by the skull and scalp, so the data are coarse. You just can’t tell much from them at all.
Let’s take a look at fMRI. That’s very specialized machinery, found only in hospitals, and overexposure carries some risks. Right now it’s the only tool that can view your brain activity in 3D (one micro slice at a time) to get a sense of the patterns your brain makes when you’re thinking about specific things or doing specific tasks. In a 2007 study by Haynes et al., “Reading Hidden Intentions in the Human Brain,” still widely cited by neuroethicists, the researchers put eight subjects into fMRI machines and recorded the changes in brain patterns as they were presented with two numbers and told to decide whether to add or subtract them. As the subjects made their choice, the brain patterns were analyzed by AI until it found a difference between the two decisions. After this training period, the researchers tested their AI model. They were able to tell which choice the subjects had made, whether to add or subtract, with an accuracy about 20% better than a random guess. Mind you, this is a situation in which all other possible thoughts and decisions were intentionally suppressed, and the subjects were focusing on only one simple choice. Twenty percent better than random does not impress me, especially given the highly artificial circumstances. Since we don’t have a safe way to monitor people’s thoughts all day to train AI, and most of us do not spend much time in fMRI machines, I’d say we’re pretty safe from the threat of mind-reading, for the moment at least.
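To put that figure in perspective, here is the back-of-the-envelope arithmetic, assuming “20% better than a random guess” means 20 percentage points above the chance level on a forced two-way choice.

```python
# Rough arithmetic, assuming "20% better than a random guess" means
# 20 percentage points above chance on a forced two-way choice.
chance = 1 / 2        # add vs. subtract: blind guessing is right half the time
margin = 0.20         # the reported improvement over chance
print(f"decoding accuracy: about {chance + margin:.0%}")   # roughly 70%
```

In other words, with subjects lying in a scanner, thinking about nothing but a single binary decision, the machine still got it wrong roughly three times out of ten.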
Let’s look at Deep Brain Stimulation. DBS is used mainly to treat motor diseases like Parkinson’s Disease. It sends electrical impulses deep into the brain, and researchers are trying to make it possible for the patient to adjust the strength and timing of the impulses. There is some indication that it may do some good. DBS is also being studied as a treatment for Obsessive Compulsive Disorder, depression, addiction, and pain. But, as the Michael J. Fox Foundation for Parkinson’s Research warns, while DBS may partially alleviate some conditions, side effects include thinking and memory problems.
This device repurposes old heart pacemaker technology. That monitor in the image above is actually implanted inside the person’s body. The cable runs up inside the neck, and the electrode is like a nail coated in polyurethane, which tends to cause inflammation. Reading this 2020 review in Nature Reviews Neurology, I gathered that the effectiveness of DBS is hard to assess. The exact placement of the electrode is crucial but hard to determine, and the mechanism of action, if there is benefit, is not well understood. It does not seem to me that this invasive device should have passed a bioethics review board. My impression is that this tech is mainly investigatory and experimental. Expanding its applications beyond use in elderly persons with severe Parkinson’s Disease does not seem warranted. In this review, none of these ethical concerns were brought up. The authors do mention, however, that next-generation DBS devices are being designed to be WiFi-controllable, so that patients won’t need that horrible monitor implant. But this will make patients vulnerable to “brainjacking,” in which hackers could manipulate their emotional states. Let’s hope that the new wireless form of DBS is not widely applied for the treatment of mere depression or OCD.
So, to conclude, it does not appear to be true that we are on the verge of developing technology that will enable people to read our minds or allow AI to control our thoughts. We’re not there yet. Not even close. So the question arises: Why are these ethicists campaigning for new guidelines against non-existent threats? Why aren’t they talking about protecting individuals from the already existing threat of loss of privacy online?
I fear that BMIs are being pushed for reasons similar to those that allowed the experimental vaccine to be rolled out. Researchers wanted to test gene therapy techniques on a huge population in order to move that field forward. Likewise, I think, the public is being primed to be eager to test out BMIs in order to move that field forward. How could anyone okay such a plan? Even if it were for the greater good of humanity to rush forward into the transhuman future, do we want to sacrifice a lot of individuals, without their full, informed consent, to get there?
Unfortunately, today “ethics” are determined by the powerful and imposed on the people. What if, instead, the individual had the right and the responsibility to make all ethical decisions and to suffer the consequences or reap the benefits? I haven’t offered many opinions about whether this or that action is ethical. I’ve mainly focused on the idea that ethical decisions about what individuals are willing to risk should never be imposed by other, more important, people.
In fact, I’m starting to question all so-called “regulation” by government agencies. It seems that all ethical guidelines and safety regulations “for the greater good” might just be ways of legalizing potential harm to the individual for the benefit of the few.
V. N. Alexander, PhD is a lecturer at IPAK-EDU, and she’s thinking about teaching her course on Transhumanism topics again in the Fall of 2023.
Thank you. I am continuing to gather sources for an article regarding the impact of technology on education and this provides some additional excellent sources. Any societal/governmental movement toward control of the growth of the human mind has a lasting effect far beyond the individual.
So good also to see you taking on the old knee-jerk (common to a shocking number of people, even supposedly thinking ones) that says being an individual is automatically the same as being egotistical and selfish, and being collectivist is automatically being a 'good citizen'. I too have taken this up before, and argued that making the effort to think for oneself is the best way there is to make a contribution to society, while the laziness of falling into collectivism, because the thinking is too much effort, is the real selfishness.
I think we've got a long battle with that one - the assumption, and the lack of critical thinking about it, seems incredibly deep-rooted. Stay on the case!
In case it's of interest, I wrote about it here:
https://michaelwarden.substack.com/p/the-whole-and-the-part