VISION FOR THE FUTURE CREATES HISTORY. To determine the evolution and manner in which science, technology, and society will unfold requires vision. The ability to imagine what can be and work towards that goal. Without creativity, without passion, and without perseverance, we are lost to roam shiftless and blind like a ship without a sail in the night. - Eric C. Leuthardt
What is the one thing that a patient with epilepsy wants? Is it for the seizures to stop? To be off the medications? At the top of the list, the answer I hear time and time again is this – to be able to drive a car. Without a doubt seizures are dangerous and socially debilitating (imagine going about your day-to-day life with the thought of abruptly losing consciousness). Because of their epilepsy, these patients are legally prohibited from driving, since their seizures put them at risk of getting into an accident. At first glance, that of course sounds inconvenient, but to my patients it often means everything. The question is why? Why is driving so important? I think as we delve into the answer it tells us something about how humans have evolved in the modern era.
To answer the question I would cite some work done by Atsushi Iriki’s lab at the RIKEN Brain Science Institute in Japan. He did some interesting experiments in which monkeys were trained to use rakes to pull a morsel of food toward themselves while brain activity was recorded with electrodes. The electrodes were in the post-central gyrus, a part of the brain involved with sensory perception as it relates to our bodies. If someone touches your hand, there will be increased neuronal firing in that part of the brain. Likewise, Iriki identified neurons that would fire when the monkey’s hands were touched. Once the monkeys were trained to use the rake, touching the rake produced the same activity. Thus, according to that monkey’s brain, in a very real way the tool had become a part of its body.
What does that have to do with epileptics driving cars? From monkeys to humans, the tools we use quite literally become a part of who we are. We perceive them as part of our body. One of the most fundamental tools in modern society is the automobile. It is our new legs in a mechanized environment. Actually, it’s probably more than that – in addition to mobility, we associate cars with our lifestyles, economic class, and personalities (people who drive Hummers are usually quite different from people who drive Minis). Thus, when a patient with epilepsy loses the ability to use a car, he or she has lost a part of themselves. Essentially, they have lost a physical expression of their persona and, more importantly, they have lost their ability to navigate a specialized environment where most resources are beyond walking distance. Interestingly, a patient whose legs are paralyzed has more independence and freedom in the world at large than a patient with epilepsy – whereas centuries ago, in a small medieval village, the exact opposite would have been true. From a personal standpoint as a surgeon who treats spinal cord injury and epilepsy, whenever I am able to intervene and restore leg function or cure seizures, the patient’s psychology of being returned to independence is almost exactly the same.
Please don’t confuse my praise of the car as materialism. Rather, the point here is to show that the boundary of what we call our bodies (according to our brain’s physiology) is grayer than we would like to believe. Going into the future, new “body extensions” will crop up with increasing speed and diversity. How many people today feel handicapped without their smartphones, their iPads, or their laptops? More than materialistic desire, the adoption of ever-increasing capability is part of how we as humans are built. There is an innate cortical plasticity that lets us take on new functionalities and incorporate those elements into our cognitive model of “me.” It is this cognitive flexibility that allowed our ancestors to first use tools (such as a rake) to advance our ability to survive and proliferate. Interestingly, with the advent of novel human-machine interfaces (iPhones, cars, and brain-computer interfaces), we will see the emergence of new and more impressive capabilities – and the emergence of new disabilities when those capabilities are taken away. Thus, there is no free lunch – even if you get it with a rake.
From a traditional scientific perspective, and from a basic common-sense point of view, most of us assume that the world we live in is one which we see, hear, touch, taste, and smell. That the environment we perceive around us is the result of our senses telling us the makeup of an objective and separate reality. Certain findings, however, conflict with that very fundamental notion. Recent anatomical studies of cortical connectivity tell us something very different, namely, that the input to our brains from sensory organs is actually quite sparse. Rather than synaptic input from sensory organs, the vast majority of connections are interneurons (i.e., neurons talking to other neurons). For the visual system, only about 5-10% of synapses are connected to the periphery, whereas the majority of neuronal interactions reflect intrinsic or thalamocortical connections. When you run the numbers (as one of my graduate students did), the brain consists of approximately 10^12 neurons (a 1 with twelve zeros after it), each making 5,000 to 10,000 synaptic contacts. That makes for a total of about 10^16 synapses – a very large number indeed. Now, thinking about information coming in, the retina has about 1,000,000 fibers going into the brain. Again assuming 5,000 to 10,000 synaptic connections each, that brings us to about 10^10 synapses. Taken together, that means there are a million times more connections in the brain than are devoted to the actual input and processing of vision. The same holds true for the output side of things. When considering the corticospinal tract, the pathway of all the motor axons that execute every movement we are capable of, again we see a number of about 10^6 fibers – again, much, much lower than the total number of connections in the brain.
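For readers who like to check the arithmetic, here is the back-of-the-envelope calculation above written out (using the round numbers from the text – 10^12 neurons, roughly 10^4 synapses each, about 10^6 retinal fibers):

```python
# Back-of-the-envelope check of the synapse counts discussed above.
neurons = 1e12                      # ~10^12 neurons in the brain
synapses_per_neuron = 1e4           # upper end of the 5,000-10,000 range
total_synapses = neurons * synapses_per_neuron   # ~10^16 total synapses

retinal_fibers = 1e6                # ~1,000,000 fibers from the retina
input_synapses = retinal_fibers * synapses_per_neuron  # ~10^10

ratio = total_synapses / input_synapses
print(f"total synapses:        {total_synapses:.0e}")   # 1e+16
print(f"visual input synapses: {input_synapses:.0e}")   # 1e+10
print(f"ratio:                 {ratio:.0e}")            # 1e+06
```

The ratio comes out to 10^6 – the "million times more connections" in the text.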
So what’s the brain doing? It would seem the preponderance of brain activity and communication is not in the moment-to-moment processing of sensory inputs coming from the outside world, nor is it occupied with external behavior either. So again, what gives? A growing number of scientists think the brain is in the business of predicting the world around it, rather than actively processing activity as it comes in. To predict upcoming events you have to have a “model” to compare against. Another word for a model might be a pattern of expected experiences or behaviors. So rather than sensing the world around us, our brains create a model of what the world should be and use our senses to tell us whether that model is correct or not. From an evolutionary standpoint, being able to predict events – like expecting a tiger when I see the bushes shake – helps us survive. It’s also more energy efficient. Our brains are not busy rebuilding the world on a moment-by-moment basis; we have a model of what the world is, and we check every so often to make sure it’s correct. If not, then we make changes to the model.
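The predict-compare-update loop described above can be caricatured in a few lines of code. This is only a toy sketch, not a neuroscience model – the function name, threshold, and learning rate are all invented for illustration – but it captures the energy-saving idea: leave the model alone while predictions hold, and revise it only when the error is surprising.

```python
# Toy sketch of a predictive "model": keep a running prediction of a
# sensed value and revise it only when the prediction error is large.
def update_model(prediction, observation, threshold=1.0, rate=0.5):
    """Return a (possibly revised) prediction given a new observation."""
    error = observation - prediction
    if abs(error) <= threshold:
        return prediction             # model still fits; do nothing
    return prediction + rate * error  # surprise: nudge the model

model = 0.0
for sensed in [0.2, 0.4, 0.3, 5.0, 5.1]:  # a sudden change at 5.0
    model = update_model(model, sensed)
print(model)  # the model has shifted toward the new regime
```

Small fluctuations leave the model untouched; the jump to 5.0 triggers an update – a cheap check most of the time, work only when the world disagrees.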
I think what is fundamentally interesting about this proposition is the important role the “observer” plays in their environment. Because it really isn’t our environment that we perceive, it’s our model of that environment. So we are all really walking around in a world we created for ourselves. Thus, when we are depressed the world is indeed a darker place, and when we are happy the world is brighter. To push this a little further, I think that we can alter and change that model, and thus we can in a certain sense impact the world we live in. So for the cynic who thinks nothing good comes out of anything – they’re right. But the limitation is not that the world is a shitty place; it’s the barriers in their own mind.
Fundamentally, neuroscience strives to understand how complex cortical dynamics represent cognitive perceptions and intentions. It’s one of the last mysteries of science – how does the brain work? Analogous to the gravitas and impact that bioinformatics approaches have had in genetics and in understanding biologic processes, decoding how the information of human experience is represented in cortex stands to substantially alter our fundamental understanding of brain function and change our approaches to treating neurologic disease. Just as the field of genetics was revolutionized by showing how coded sequences of nucleotides encode very specific biologic phenomena (rather than simply general biologic associations with a chromosome), so too is neuroscience moving beyond simple localization of cognitive operations to a region in the brain. Neuroprosthetics is demonstrating that it is possible to decode brain signals associated with a human’s intentions such that an individual can control a device with their thoughts alone. This field has shown that it is possible not only to know where a cognitive operation is occurring (e.g., arm movements are located in motor cortex), but also to know exactly what cognitive intention is occurring from the dynamic changes in brain signals (e.g., these brain signals indicate the person will move their hand up and to the left). With computational power becoming faster and less expensive, this nascent field has shown the capability to decode complex brain signals (to “read someone’s mind”) and is exiting science fiction to enter serious scientific consideration. To achieve the same level of bioinformatic impact that has occurred with human genetics will require a more universal understanding of the physiologic principles that govern human perception and intention (i.e., a Neural Code). In realizing the Neural Code, the brain and human experience stand to become as malleable as life forms are today with genetic modulation.
Things that become possible when we fully understand this code include vastly more efficient person-to-person communications (I can project my thoughts to your mind), seamless human-machine interactions, and treatment of neurologic diseases ranging from paralysis to psychiatric disease. Additionally, if we can truly model how information is managed in the brain, it also opens the door to creating that intelligence artificially. Just as the moon landing heralded frame-shifting technical advances ranging from satellite communications to Tang, and genomics is currently revolutionizing biotechnology ranging from individualized medicine to better agriculture, understanding the Neural Code stands as science’s next grand project. A project that will both literally and figuratively tap into the human imagination like never before.
As I work to develop this blog, before I can really effectively project my ideas and thoughts on the field and where it’s going to the blogosphere, it is probably worth giving a “neuroprosthetics 101” lesson to get everyone up to speed on what people are talking about when they use the terms Brain Computer Interface, Brain Machine Interface, Neuroprosthetics, and so forth.
The most honest statement is that a lot of these terms are used relatively interchangeably and somewhat loosely. Most fundamentally, a Brain Computer Interface (also called a Brain Machine Interface, or a neuroprosthetic) is some kind of device that directly interacts with the brain. There are two basic flavors of how these devices can interact with the brain. If the device records brain signals and then converts those signals into some type of external control, it is considered an “output BCI.” Namely, one uses one’s thoughts or intentions to control something external to the brain. Conversely, if some element of the environment is converted into an electronic signal that alters the brain’s physiology to create a perception, then it is called an “input BCI.” We have seen examples of both types of BCI in various fiction stories. If anyone recalls the movie Firefox (http://www.youtube.com/watch?v=nXaS2EQrX7Q), that would be an example of an output BCI. The idea was that the Russians had created a fighter plane that could be controlled by one’s thoughts alone. In classic eighties style, it was described as “the most devastating killing machine ever built,” and it was to be stolen by “the most daring American fighter pilot around,” Clint Eastwood. The point is that one’s thoughts were converted into real actions – in the Firefox example, shooting rockets and bullets from a plane. Far from being just a good story, there are numerous projects being funded by the military (http://www.darpa.mil/budget.html) for the express purpose of using brain signals either to control devices for warfare superiority or to restore injured soldiers. In terms of input BCIs, probably the best current example in fiction is The Matrix (http://www.youtube.com/watch?v=UM5yepZ21pI).
The story captured the imagination of people with the idea that humans were enslaved by sentient machines who had constructed a completely virtual world, implemented through prosthetics implanted into the back of each person’s head. Basically, everything an individual experienced was an artifice of brain stimulation. Though a far-flung story, we also see the early beginnings of artificial perceptions becoming possible. Several research groups are already creating implants in cortex that can produce artificial visual perceptions. Rather than having machines enslave us, these could be quite useful in restoring vision to people who are blind. A nice summary of current advances in the field can be found here (http://biomed.brown.edu/Courses/BI108/BI108_1999_Groups/Vision_Team/Cortical.htm).
So to summarize, the question is which direction the information is going. Is it going from the brain to the outside world (output BCI), or from the outside world to the brain (input BCI)? What the neuroprosthetic does is create a new channel by which that information can pass from one to the other.
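The two directions of information flow can be sketched schematically. The function names, signal values, and decoding rules below are all invented for illustration – real BCIs decode high-dimensional neural recordings, not a single number – but the sketch shows where each flavor sits in the signal chain:

```python
# Schematic sketch of the two BCI flavors (all names/values hypothetical).

def output_bci(brain_signal):
    """Brain -> world: decode a recorded signal into a device command."""
    # A toy decoder: sign of the signal picks the action.
    return "move_left" if brain_signal < 0 else "move_right"

def input_bci(environment_value):
    """World -> brain: encode a sensed value as a stimulation level."""
    # Clamp to a notional safe stimulation range [0, 1].
    return min(1.0, max(0.0, environment_value))

print(output_bci(-0.3))  # decoded intention drives an external device
print(input_bci(2.7))    # stimulation parameter delivered to cortex
```

Same hardware idea – electrodes at the brain – but the arrow of information points in opposite directions.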
I have had several conversations with people regarding recent movies such as Surrogates and Avatar. Most people treat them as fun and fanciful films about things that are far away and can never be. In truth, the likelihood that we will one day all be a society of thought-driven robots or that we will inhabit alien bodies is indeed low. There is something more prescient and timely about these movies, however. They speak to the growing cultural imagination and fascination with human minds being accessible. The idea that we can, in some way, plug into our brain and have our thoughts, perceptions, and intentions implemented without the need of our bodies. The reason this growing interest is important is that society’s imagination usually precedes its real-world creation. To draw on several historical examples, the notion of a ray gun far preceded the discovery of the laser, and people fantasized about flying to the moon as far back as 400 years ago (Johannes Kepler’s Somnium, 1634). The point is that science fiction invariably focuses our scientific and technical efforts, because society – as channeled and expressed by the artist – has determined that something is interesting and worth pursuing. What form this social imagination for neural interfacing will take in the real world remains to be seen. Certainly, early demonstrations that people can control devices with their thoughts already exist (see movies presented in this blog). How these early findings will be transformed into a larger social adoption is not yet clear – stay tuned for the next episode!