Brain computer interfaces have sparked tremendous interest in the field of neuroscience: we are at a stage where monkeys can control robots and people can answer emails simply with their thoughts. But revolutionary systems like these are criticised for their seeming lack of ethical consideration. Are these criticisms entirely justified?
Last week, a bunch of neuroscientists flocked to the scenic CCH Congress Centre in Hamburg, Germany to attend the annual Organisation for Human Brain Mapping (OHBM) conference. I was one of the many imaging neuroscientists at the conference, and I had the pleasure of listening to talks by forerunners in the field, who discussed everything from neurons in lobster stomachs to the genetics of autism spectrum disorders. One of the best-received lectures was by Professor Andrew Schwartz from the University of Pittsburgh, who showed us videos of a monkey feeding itself a marshmallow with a robotic arm, and then licking the prosthetic fingers.
The catch? The monkey was controlling the arm with its thoughts.
Two decades ago, few could have envisaged a future where direct functional interfaces between brains and machines were commonplace. Today, there is a league of futurists, the transhumanists, that foresees an incredible expansion of human potential with the emergence of radical technologies that may one day enable our minds to be uploaded from biological brains and run on computers. As the borders between neuroscience, computer science and bioengineering fade, some of these predictions have already been realised. In fact, only a few years ago, Cathy Hutchinson, paralysed but mentally agile, became one of the first people to have her brain wired directly to a computer, allowing her to move a cursor, a wheelchair, and later a robotic arm with nothing but her mind.
This breakthrough in translational neural interfacing came with the founding of BrainGate, a neuromotor prosthetic system developed by Professors Donoghue and Hochberg at Brown University, who described Cathy’s control of the robotic arm as “a magic moment”. While it does indeed seem like magic that merely imagining an action can bring about the action itself, the underlying principle is quite straightforward. A sensor is chronically implanted into the part of the primary motor cortex that controls arm movements. The sensor detects patterns of neuronal firing that arise from motor imagery in the brain, and transmits these signals to a decoder. Using translational algorithms, the decoder converts this brain activity into its intended outputs. Then, instead of controlling the muscles, which in patients like Cathy are usually damaged, the output is directed at controlling a computer, a robotic limb or a wheelchair, which puts into action the movement that was being imagined.
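To make the decoding step less magical still, here is a toy sketch of the idea in Python. It is a hypothetical linear model, not BrainGate’s actual algorithm: we pretend each recorded neuron’s firing rate varies linearly with the intended cursor velocity (its “tuning”), record responses during a calibration phase, fit a decoder by least squares, and then invert new firing patterns back into an intended movement. All names, numbers, and the linear tuning assumption are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 50

# Each simulated neuron fires at a baseline rate plus a contribution
# that depends linearly on the intended 2-D velocity (vx, vy).
baseline = rng.uniform(5, 15, size=n_neurons)   # spikes/s at rest
tuning = rng.normal(size=(n_neurons, 2))        # sensitivity to (vx, vy)

def firing_rates(velocity):
    """Simulated noisy population response to an intended velocity."""
    return baseline + tuning @ velocity + rng.normal(0, 0.5, n_neurons)

# Calibration: record population responses to known intended movements,
# then fit a linear decoder by least squares -- a stand-in for the
# "translational algorithms" described above.
calib_velocities = rng.normal(size=(200, 2))
calib_rates = np.array([firing_rates(v) for v in calib_velocities])
X = np.hstack([calib_rates, np.ones((200, 1))])  # rates + intercept term
decoder, *_ = np.linalg.lstsq(X, calib_velocities, rcond=None)

def decode(rates):
    """Convert a vector of firing rates into an estimated intended velocity."""
    return np.append(rates, 1.0) @ decoder

# An imagined movement produces firing patterns; the decoder recovers it,
# and that estimate could then drive a cursor, wheelchair, or robotic arm.
intended = np.array([1.0, -0.5])
estimate = decode(firing_rates(intended))
print(np.round(estimate, 2))
```

Real systems are far more sophisticated (spike sorting, Kalman filters, continuous recalibration), but the shape of the pipeline is the same: measure firing, fit a mapping, invert it in real time.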
There has been an enormous and impressively rapid leap from the conception of neural interfacing to its establishment as a field of neuroscience and engineering. The interest in this technology stems from its considerable potential to restore motor function and communication, and even to confer a certain degree of independence on patients suffering from severe neuromuscular disabilities. As the theoretical distinction between human and machine gradually blurs, the untapped potential of interfacing looms large. It is likely that these systems will receive considerable attention in the future, but it is difficult to predict their various applications. If this technology can be used to restore functions in those with motor disabilities, can it also be used to enhance or augment the existing capabilities of healthy people?
Indeed, there is an entire intellectual movement called transhumanism, or H+, which is devoted to the notion that the human condition can and should be improved by technological enhancements. Under this way of thinking, people with pacemakers, cochlear implants, prosthetics, or even artificial valves can be thought of as cybernetic organisms, or “cyborgs” – a blend of humans and mechanical parts. Transhumanists argue that these and other artificial intelligence and reproductive technologies should be made widely available, and that each individual should be granted the freedom to decide if and when they would like to use them.
The opposing camp, the bioconservatives, challenge what they see as “dehumanisation” by pushing for global bans on human enhancement tools. Aside from the deep-seated religious and political fuel that fires this group (which I won’t discuss here), one of their scientific arguments points to the potential of machines to interfere with one’s behaviour and personality. Take the example of a Parkinson’s patient treated with deep-brain stimulation, a neural interface that aims to correct motor impairments through electrical stimulation of the subthalamic nucleus. Three years after the electrodes were implanted, this patient began to experience stimulation-related bouts of euphoria and unrestrained manic behaviour. He bought houses he could not afford, incurred severe financial debts, and indulged in inappropriate sexual behaviour towards nurses, all the while unaware of his deviant conduct. When the stimulation parameters were adjusted in an attempt to improve his manic condition, he returned to his usual state of competence, and regained his original capacity to judge “moral” behaviour, although at the cost of deleterious effects on his motor abilities, which left him bedridden. In this non-manic state, doctors considered him mentally proficient, and when given a choice, he opted to have the stimulator switched on again, and be admitted to a chronic ward in a psychiatric hospital.
Technological advancements are inevitable, and deep-brain stimulation is undoubtedly one that has had widespread success and improved the lives of over 30,000 people worldwide. In the wake of these exciting cutting-edge interfaces, it is understandable, and even responsible, to consider the legal and moral implications of their effects on personality. For instance, who is to blame for seemingly involuntary acts by individuals whose brains appear to have been changed by machines? Is it the fault of the patient, of the doctor, or perhaps even of the computer? Can the behaviour even be considered involuntary if the patient, in a state of competence, chose to continue with the treatment that was itself the underlying cause of the behaviour?
However, to recommend that these technologies be entirely banned on the grounds that machines compromise human dignity would impede the progress that has allowed patients like Cathy Hutchinson to regain a level of independence in their daily activities. Perhaps we instead need to recognise that human dignity and human enhancement are not mutually exclusive. In his essay on the subject, renowned transhumanist Nick Bostrom points out that to our hunter-gatherer ancestors we may already appear “posthuman”. We have drastically extended our abilities with innumerable technologies, be it clothing, cars, or the Internet, yet we haven’t been dehumanised by them. In this context, maybe all we need is a more up-to-date understanding of human dignity, which in Bostrom’s own words would allow for “a more inclusive and humane ethics, one that will embrace future technologically modified people as well as humans of the contemporary kind.”
This is an edited version of an article originally published on Bang!