

How Close Is a Workable Brain-Computer Interface?

• Technology Review
The ultimate goal of brain-computer interfaces is something direct, noninvasive and relatively high bandwidth.

Not even science fiction authors believe the noninvasive part is ever going to happen. Think about all the times you've seen someone in movies like The Matrix "jack in" to a computer via a gnarly port in their skull.

In the real world, however, few people are ever fitted with direct neural interfaces to computers. The results have been impressive -- macaques moving robot arms just by thinking about it, and patients with locked-in syndrome communicating for the first time in years. But these successes haven't translated into a viable solution for most people who might need such an interface.

That's why new research from Spain is so exciting. Scientists led by Eduardo Iáñez of Miguel Hernandez University have for the first time combined a number of desirable features into a single brain-computer interface that is noninvasive, spontaneous and asynchronous.

About that asynchronicity: it turns out that, because of the bandwidth limitations of recording brain activity through EEG -- electrodes placed on the scalp rather than implanted in the brain -- previous attempts at noninvasive brain-computer interfaces required that users direct the computer only during certain time slots. Imagine a metronome ticking very slowly, say once a second, directing you to imagine the movement of your robotic arm starting... now. How tedious.
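To make that distinction concrete, here is a minimal Python sketch -- entirely hypothetical, not the researchers' code -- contrasting a cue-paced (synchronous) loop, where commands are only read during fixed slots, with an asynchronous loop that decodes a continuous stream so the user can act at any moment. The acquire and classify stubs, the 128 Hz sampling rate, and the loop structure are all assumptions for illustration.

```python
import random
import time

def acquire(duration):
    """Stub for EEG acquisition: returns `duration` seconds of signal at
    128 Hz. (Hypothetical placeholder; a real system reads an amplifier.)"""
    time.sleep(duration)
    return [random.gauss(0, 1) for _ in range(int(duration * 128))]

def classify(window):
    """Stub decoder: returns 'left', 'right', or None (no clear intent)."""
    return random.choice(["left", "right", None])

def synchronous_loop(n_trials=3, cue_interval=1.0):
    """Cue-paced control: the user may only issue a command when prompted."""
    for _ in range(n_trials):
        print("cue: imagine the movement... now")   # the 'metronome' tick
        print("decoded:", classify(acquire(cue_interval)))

def asynchronous_loop(n_windows=6, window_len=0.5):
    """Asynchronous control: decode a continuous stream, act on any window."""
    for _ in range(n_windows):
        command = classify(acquire(window_len))
        if command is not None:
            print("move arm:", command)  # act whenever intent is detected

synchronous_loop()
asynchronous_loop()
```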

Iáñez and colleagues' approach gets around this limitation by using four different models, each with assumptions that sometimes directly contradict the others'. This way, however a subject's brain happens to be wired up, all the computer has to figure out is whether the user means "left" or "right" in order to direct a robot arm in two dimensions.
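The article doesn't describe the four models themselves, but the general pattern -- several decoders with complementary biases voting on a single binary decision -- might look something like this sketch. Every weight, threshold, and function name here is an assumption for illustration, not the team's actual method.

```python
import random

def make_model(weights, threshold):
    """Hypothetical stand-in for one of the four decoders; each weighs the
    EEG features differently, so their assumptions can contradict."""
    def model(features):
        score = sum(w * f for w, f in zip(weights, features))
        return "right" if score > threshold else "left"
    return model

# Four toy decoders with deliberately different, sometimes opposed, biases.
models = [
    make_model([+1, +1, 0, 0], 0.0),
    make_model([-1, +1, 0, 0], 0.0),   # opposite assumption about feature 0
    make_model([0, 0, +1, +1], 0.1),
    make_model([0, 0, +1, -1], -0.1),
]

def decode(features):
    """Majority vote: however the subject's brain is wired, the system only
    has to settle a binary question -- left or right."""
    votes = [m(features) for m in models]
    return "right" if votes.count("right") >= votes.count("left") else "left"

print(decode([random.gauss(0, 1) for _ in range(4)]))  # toy EEG features
```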

Here's a video of the results. First you'll see the simulation, running in MATLAB, and then the arm itself responding in near real time to the user. (The computer has to sample brain activity in half-second intervals in order to gather enough data to detect what the user intends.)
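As a rough illustration of that half-second requirement: at an assumed sampling rate of 256 Hz (the article doesn't give one), each decision window corresponds to 128 samples, and a recorded stream divides into windows like this sketch.

```python
def half_second_windows(stream, rate_hz=256, window_s=0.5):
    """Yield consecutive half-second chunks of a sampled EEG stream."""
    n = int(rate_hz * window_s)  # e.g. 128 samples per window at 256 Hz
    for start in range(0, len(stream) - n + 1, n):
        yield stream[start:start + n]

stream = [0.0] * 1024                               # 4 s of dummy signal
print(sum(1 for _ in half_second_windows(stream)))  # prints 8
```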

Users drive the system simply by imagining what they want to happen -- for example, they could visualize moving their hand in the direction they want the arm to move.

Here's a slightly more impressive video of an arm moving in three dimensions, although the movements are clearly pre-programmed.

Future research goals include moving this interface out of two dimensions and into three. If the team succeeds, they'll have at least matched in humans an experiment in which macaques used a brain-controlled arm to feed themselves. That would be quite a feat for patients who are currently unable to engage in such activities, and the main barrier appears to be how cleverly computers can process the signal -- in other words, the sophistication of the algorithms.
