In the last few years, fMRI and EEG have made it into the popular press as tools for reading minds (here, here, here, here and here for a sample), lie-detection (here, here and here), and telekinesis, i.e., controlling or moving objects with our thoughts (here, here, here, and here). I think there was even a recent episode of House where the doctors were able to display the dreams of someone in a brain scanner. While these and similar technologies have had many successful applications, like allowing paraplegics to control wheelchairs and computers or allowing people with locked-in syndrome to communicate (or at least give rudimentary yes/no answers to questions), it’s important to know the limitations of these tools and not get carried away into science fiction.
For people concerned about invasions of privacy or some sort of remote scanning of people’s thoughts or personalities, knowing how fMRI, EEG, MEG etc. operate is especially important. First, these tools do not read minds. You can’t plop someone into a brain scanner and be able to determine (right away) whether they are lying or not. And while one of the 60 Minutes clips linked above does show the reporter using EEG without any training to type letters with his thoughts, that device, like almost all others, is simply using a statistical trick in conjunction with some general properties of brain activity to produce its results.
The goal of this post isn’t to explain how fMRI and EEG work in minute detail, but to illustrate how researchers have exploited their properties to achieve what has been interpreted as mind-reading and telekinesis.
This post ended up being a lot longer than I expected, so here’s a quick summary of what follows for those who just want the short version:
Some places have reported that people can control wheelchairs, robotic arms, computer cursors, etc. with their minds. Simply by thinking “go left” or “stop,” a person can make the wheelchair follow commands given only in thought. Almost all of these demonstrations are not quite what they seem; instead they rely on a statistical trick to get the devices to do what people are thinking. The computer picks up on a pattern of brain activity measured with either fMRI, EEG or intracranial electrodes. This pattern can be reliably reproduced by the person. For example, every time you think of stopping, the same part of your brain might become active. The computer then learns to pair this particular pattern with the “stop” command. This is not mind reading! It could just as easily have paired your thoughts of hamsters with the stopping command if those thought patterns could be reliably reproduced. Almost all of the tools mentioned in the articles linked above require this kind of learning on the part of the computer. It is going to be a long while yet before we can stick someone in a machine and know exactly what they are thinking about. (Although we can sometimes guess the category of thing they are thinking about, like house vs. face or animate vs. inanimate object, that too requires the proper learning on the part of the machine.)
The long version:
fMRI measures changes in blood flow to various areas of the brain over time. It is believed that blood flows to brain tissue that is active. So, the reasoning goes, if we put someone in the scanner and ask them to think about, say, playing tennis, then blood should flow to the part of the brain that is involved with thinking about tennis playing. Furthermore, it is believed that when we think about the same thing again on a different occasion, the same brain area (more or less) will become active. Indeed, this is most certainly the case (at least when we think about certain things) – for example, a region of the brain called the parahippocampal place area (PPA) shows reliable activity (blood flow to that region) every time a person thinks about a place or is shown a picture of a place (like a house). This finding is robust across many individuals (e.g., Epstein and Kanwisher 1998, Kanwisher 2010). (This region also responds to things other than places, and lots of other parts of the brain are active when people see or think about places, but let’s ignore that for now.) There are several other regions of the brain that reliably activate to certain kinds of thoughts or percepts, like faces, certain actions, animate vs. inanimate objects, etc.
This means that we can put a person in a scanner, show them a bunch of pictures, and pinpoint a part of their brain that responds more reliably to houses, say, than to any of the other things. That is about as close as we can get to mind-reading at the moment: if we scan someone a bunch and find their “house area,” we can then tell, on a later scan, whether they are thinking about houses or not. A few remarks: (1) This isn’t perfect. As mentioned earlier, the PPA and similar areas activate to stimuli other than houses. (2) Remember that fMRI measures blood flow to relatively large parts of the brain – while it’s potentially possible to find a “place area” and a “face area,” it is unlikely that with current technology we’ll be able to find a “hammer area” or the like. That is, if hammer representations are stored in a small, localized patch of neurons (a highly contentious claim!), fMRI is unlikely to be able to detect it. This means that we can’t stick a person in a machine, show them a bunch of pictures, find out what parts of the brain light up and then, whenever we stick them into an fMRI again, be able to tell what they’re thinking about.
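To make the “house area” idea concrete, here’s a minimal sketch in Python of how such a region might be localized, using completely made-up numbers rather than real scanner data: for every voxel we compare its responses on trials where the person saw houses against its responses on all other trials, and keep the voxels that respond reliably more strongly to houses. The array sizes, the threshold, and the variable names are illustrative assumptions on my part, not anyone’s actual analysis pipeline.

```python
import numpy as np
from scipy import stats

# Hypothetical, made-up data: responses of 1000 voxels on 40 "house" trials
# and 40 "non-house" trials (faces, objects, ...). Real fMRI data would be
# preprocessed 4-D images; these are just random numbers for illustration.
rng = np.random.default_rng(0)
n_voxels = 1000
house_resp = rng.normal(0.0, 1.0, size=(40, n_voxels))
other_resp = rng.normal(0.0, 1.0, size=(40, n_voxels))

# Pretend a small patch of voxels (our toy "place area") responds more to houses.
house_resp[:, 100:120] += 1.5

# For each voxel, test whether its house responses reliably exceed its
# responses to everything else.
t_vals, p_vals = stats.ttest_ind(house_resp, other_resp, axis=0)

# Voxels with a large, reliable difference are our candidate "house area".
house_selective = np.where((t_vals > 0) & (p_vals < 0.001))[0]
print(f"{len(house_selective)} house-selective voxels:", house_selective[:10])
```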
However, we can exploit some of these more reliable activations. For example, we can correlate reliable signals with certain computer commands. We can train a computer to recognize the place signal and the face signal. When the computer recognizes the face signal, we can have a wheelchair turn to the left; when it recognizes the place signal, we can have the wheelchair turn to the right. In this way, we can control the wheelchair with our thoughts. Notice that this is just a fancy statistical trick! If we could get the computer to reliably learn what our brains look like when we think about going to the left vs. going to the right, we could use our thoughts of going left or right to control the wheelchair’s direction. However, there’s nothing special about those particular thoughts! We could have just as easily correlated our thoughts about apples (assuming they could be reliably detected) with the motion of a wheelchair or a cursor on the computer screen.
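Here is roughly what that “statistical trick” looks like in code. This is only a sketch with synthetic numbers, assuming we already have a feature vector summarizing the brain activity on each trial; the particular classifier, feature sizes, and left/right labels are my own illustrative choices, not the setup of any real study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is a (simplified) brain-activity
# pattern recorded while the person thought about a face ("turn left")
# or a place ("turn right"). The numbers are synthetic; a real device would
# use preprocessed fMRI or EEG features.
rng = np.random.default_rng(1)
left_patterns = rng.normal(0.0, 1.0, size=(50, 200)) + np.r_[np.ones(100), np.zeros(100)]
right_patterns = rng.normal(0.0, 1.0, size=(50, 200)) + np.r_[np.zeros(100), np.ones(100)]

X = np.vstack([left_patterns, right_patterns])
y = np.array(["left"] * 50 + ["right"] * 50)

# "Training" is just fitting a classifier that pairs each pattern with a command.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Later, a new scan comes in and the classifier issues a command.
new_pattern = rng.normal(0.0, 1.0, size=(1, 200)) + np.r_[np.ones(100), np.zeros(100)]
command = clf.predict(new_pattern)[0]
print("wheelchair command:", command)  # e.g. "left"
```

The point of the sketch is that the classifier has no idea what “left” means; it only learns that one reproducible pattern goes with one label and another pattern goes with the other. Thoughts about apples would work just as well, as long as they produced a pattern the classifier could pick out.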
Lie-detection can be done in a similar way – a person tells lies repeatedly while in a scanner, the computer learns what that person’s brain looks like when they are lying (it finds a commonly activated area across all the lies), and it can then determine whether future utterances are lies by comparing them to previous scans of false statements and true statements made by that person. Notice that this means we first need to get the person into the scanner telling lies before we can actually use it as a lie detector. Likewise, for controlling a wheelchair or a computer cursor with our thoughts, we first need to train the computer on some sort of brain pattern that we can reliably reproduce.
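The same kind of sketch works for the lie-detector case, and it makes the calibration step explicit. Again, the numbers below are synthetic and the setup is an assumption for illustration: the classifier is trained and evaluated only on statements whose truth value we already know for that particular person.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical per-person calibration data: brain-activity patterns recorded
# while the person made statements known to be false ("lie") or true ("truth").
# All numbers are synthetic.
rng = np.random.default_rng(2)
lie_patterns = rng.normal(0.0, 1.0, size=(60, 150)) + 0.6
truth_patterns = rng.normal(0.0, 1.0, size=(60, 150))

X = np.vstack([lie_patterns, truth_patterns])
y = np.array(["lie"] * 60 + ["truth"] * 60)

# Estimate how well a classifier trained on this person's own lies and truths
# labels statements it hasn't seen. Without this calibration step, there is
# nothing to compare a new scan against.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("held-out accuracy:", scores.mean())
```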
The story is the same with EEG. While it measures electrical activity on the surface of the scalp instead of blood flow, some patterns can be reliably reproduced by thinking about the same thing or the same category of things. A computer can learn these patterns and then, as with fMRI, correlate a pattern with a particular command or function. There also happen to be some well-known patterns of activity that we can use in a special way. For example, there is a signal called the P300 which is often associated with decision-making and is independent of the nature of the stimulus. We can exploit this to create a brain-controlled device that doesn’t require any training (learning what a reliable brain pattern looks like). Suppose we ask someone wearing an EEG cap a question and then flash the words “yes” and “no” alternately on a screen. The P300 can sometimes be thought of as a mental “That’s it!” signal. If the person is thinking of “yes” as the answer to the question, then we should observe the P300 whenever the word “yes” is flashed on the screen. We can use this to ask simple yes-no questions of people who cannot communicate in a normal way (e.g., if they have locked-in syndrome), and we can do this without training the computer! Again, as with the fMRI work, this is just a statistical trick – we could just as easily have used a different, reliable signal to get to the yes/no answer; we’re not actually reading people’s thoughts.
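And here is a toy version of the P300 yes/no idea, once more with synthetic data: we average the EEG epochs time-locked to each flashed word and ask which one shows the bigger deflection in a window around 300 ms. The sampling rate, the window, and the shape of the “P300 bump” are all illustrative assumptions, not a description of any particular device.

```python
import numpy as np

# Hypothetical EEG data: epochs (trials x time samples) time-locked to each
# flash of "yes" and each flash of "no". All numbers are synthetic.
rng = np.random.default_rng(3)
sfreq = 250                         # samples per second (assumed)
times = np.arange(0, 0.8, 1 / sfreq)
n_trials = 30

yes_epochs = rng.normal(0.0, 1.0, size=(n_trials, times.size))
no_epochs = rng.normal(0.0, 1.0, size=(n_trials, times.size))

# Pretend the person's intended answer is "yes": add a P300-like bump
# around 300 ms to the epochs time-locked to the "yes" flashes.
p300 = 2.0 * np.exp(-((times - 0.3) ** 2) / (2 * 0.05 ** 2))
yes_epochs += p300

# Average the epochs and compare mean amplitude in a 250-450 ms window.
window = (times >= 0.25) & (times <= 0.45)
yes_score = yes_epochs.mean(axis=0)[window].mean()
no_score = no_epochs.mean(axis=0)[window].mean()

answer = "yes" if yes_score > no_score else "no"
print("decoded answer:", answer)
```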
All that being said, these tools are extremely useful. We can use them to help people communicate who otherwise could not. We can use them to allow people who cannot move to move wheelchairs, prosthetic limbs or computers. At the same time, it’s important to keep in mind the limitations of these technologies and not get carried away too far into science fiction.
Epstein, R. and Kanwisher, N. (1998). A cortical representation of the local visual environment. Nature, 392, 598–601.
Kanwisher, N. (2010). Functional specificity in the human brain: A window into the functional architecture of the mind. PNAS, 107(25), 11163–11170.