Nov 27 2007
Networking

Mind-Reading Machines

Think radical human-computer interfaces are years away from commercial development? Think again.
John W. Ellis

Ever wish your computer could read your mind? Two researchers on opposite sides of the United States hope to develop human-computer interfaces that make more direct connections between human brains and computers possible. At Columbia University in New York, Paul Sajda received a $758,000 grant from the Defense Advanced Research Projects Agency (DARPA) last year to develop a visual computer interface (VCI) technology that can analyze vast numbers of images very quickly. Instead of replacing human vision and image processing, Sajda and his colleagues are trying to tap into those capabilities.

“No computer vision technology comes close to our ability to analyze and recognize objects in the face of noise, occlusion and changing sizes of objects,” says Sajda, director of the Laboratory for Intelligent Imaging and Neural Computing at Columbia University.

But while the human brain can process and interpret images much more accurately than any computer, capturing those judgments, managing the analysis and recording the results have been difficult and time-consuming. Currently, the easiest ways for a human to flag an interesting image to a computer are pressing a key, clicking a mouse or speaking, all of which have narrow bandwidth and slow the process down, says Sajda.

Connecting a human directly to the computer and using brainwaves as input is a much faster way of transferring information between brain and machine. Sajda uses an electroencephalogram (EEG) to read the electrical impulses generated in the brain of a person viewing a succession of images. The trick is to pick out the neural signals that reflect a decision about an image’s value without slowing down the display of the stream. Sajda’s group shows subjects images at rates as high as 10 per second and measures changes in brain activity as each image appears. In this triage process, interesting images are automatically tagged for later, more thorough analysis.
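To make the triage loop concrete, here is a minimal illustrative sketch in Python. The epoch structure, the scoring heuristic and the tagging threshold are hypothetical stand-ins chosen for illustration, not the Columbia group’s actual software.

```python
# Illustrative sketch of EEG-based image triage during rapid serial
# presentation. All names, scores and thresholds are hypothetical;
# this is not the Columbia group's actual pipeline.

from dataclasses import dataclass


@dataclass
class Epoch:
    image_id: str
    eeg_samples: list[float]  # EEG signal time-locked to image onset


def classify_epoch(epoch: Epoch) -> float:
    """Placeholder: return a 0..1 'interest' score for the neural
    response to one image. A real system would use a trained
    classifier on multichannel EEG features."""
    # Toy heuristic: mean absolute amplitude stands in for a real score.
    n = len(epoch.eeg_samples)
    return min(1.0, sum(abs(x) for x in epoch.eeg_samples) / max(n, 1))


def triage(epochs: list[Epoch], threshold: float = 0.5) -> list[str]:
    """Tag images whose neural response crosses the threshold
    for later, more thorough analysis."""
    return [e.image_id for e in epochs if classify_epoch(e) >= threshold]


if __name__ == "__main__":
    # At roughly 10 images per second, each epoch covers a brief window.
    stream = [
        Epoch("img_001", [0.1, 0.2, 0.1]),
        Epoch("img_002", [0.9, 1.1, 0.8]),  # stronger response
    ]
    print(triage(stream))  # -> ['img_002']
```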

An immediate application of this technology is analysis of the huge volumes of still and video images being collected by the intelligence community. “There are video cameras going up all over the world that could capture something important, but there aren’t enough expert eyes to view them,” says Sajda. “A combination of the computer screening certain images and the human brain tagging them as important creates a dramatic increase in efficiency.”

Other applications for VCI include radiology, where a physician must examine hundreds of images a day while quickly scanning for abnormalities, and air traffic control. But DARPA is most interested in having federal agents sift through large video data streams.

Misha Pavel is approaching human-computer interfaces from another direction. Pavel and colleagues at Oregon Health & Science University in Beaverton are studying whether, by examining brain state, a machine can determine if a human is receptive to additional information and how that information should be delivered.

“It’s not just computers; it could be anything,” says Pavel, a professor of biomedical engineering and director of the Point of Care Laboratory at OHSU. “In the future, all machines will contain computers.” Pavel hopes to connect machines directly to the user’s state. For example, a cell phone would automatically send calls to voice mail when its owner is fighting heavy traffic but ring when that driver is cruising on a deserted rural interstate.
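Here is a toy Python sketch of the kind of decision Pavel describes, gating an incoming call on an estimate of how busy the user is. The load estimate, the threshold and the function names are hypothetical placeholders, not a description of OHSU’s system.

```python
# Illustrative sketch of a context-aware notification policy.
# estimate_cognitive_load() and the threshold are hypothetical
# stand-ins, not the OHSU group's actual system.

from enum import Enum


class CallAction(Enum):
    RING = "ring"
    VOICEMAIL = "voicemail"


def estimate_cognitive_load() -> float:
    """Placeholder: return a 0..1 estimate of how occupied the user is,
    e.g. derived from physiological or driving-context sensors."""
    return 0.8  # pretend the driver is in heavy traffic


def route_incoming_call(load_threshold: float = 0.6) -> CallAction:
    """Send the call to voice mail when the user appears overloaded;
    let the phone ring when they seem receptive."""
    if estimate_cognitive_load() >= load_threshold:
        return CallAction.VOICEMAIL
    return CallAction.RING


if __name__ == "__main__":
    print(route_incoming_call())  # -> CallAction.VOICEMAIL
```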

IT Takeaway

Transforming the human-computer interface into a direct connection between brain and machine is years away from broad commercial application, but the research promises to achieve many things:

• Providing access to individual human intelligence and experience that were previously difficult or impossible to share in a systematic way;
• Allowing machines to sense their users in the way some now sense their environments and adjust their performance in response;
• Controlling actions in video games with devices that developers such as Emotiv and NeuroSky say could hit the market as early as next year.