How much can eye movements tell us?

Using eye movements as a human-computer interface (HCI), as Poole and Ball point out in their article, has many advantages. Perhaps the most useful is its application for disabled people who would not otherwise be able to use a keyboard or a mouse. Eye-tracking technology has made great strides: what once required very invasive methods can now be accomplished with an infrared camera.

 

Detecting eye movements is one thing, but interpreting these metrics in order to infer some form of thought is something entirely different and more complex. This process is built on the eye-mind hypothesis, which is exactly what it sounds like: if your eyes are drawn to or fixate on something, this provides some insight into the thought process behind those actions. These conclusions can then be used to analyze and improve the design of interfaces. The applications of this process are endless, ranging from better cockpit interfaces that reduce pilot error to helping doctors perform medical procedures more effectively.

 

The main difficulty with this technology lies in interpreting the various eye movement metrics. If someone blinks a lot, is this indicative of a low workload, or do they simply have dry eyes? As the authors point out, we are limited by the technology available to process the enormous amount of data generated, and it must be done at a reasonable cost. With current technology, algorithms need constant recalibration. Another drawback is that for every disabled person this technology may help, it may also exclude people with a lazy eye or those who need hard contact lenses.
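To give a sense of what "interpreting the metrics" involves at the lowest level, here is a minimal sketch of a dispersion-threshold fixation detector, the kind of algorithm that turns raw gaze samples into the fixations the eye-mind hypothesis reasons about. The article does not prescribe this specific method, and the thresholds below are illustrative assumptions, not calibrated values.

```python
# Minimal sketch of dispersion-threshold fixation detection.
# Gaze samples are (x, y) screen coordinates at a fixed sampling rate.
# max_dispersion and min_samples are assumed, illustrative thresholds.

def dispersion(points):
    """Spread of a window of points: (max x - min x) + (max y - min y)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=25.0, min_samples=5):
    """Group consecutive gaze samples into fixations.

    A window counts as a fixation when its dispersion stays under
    max_dispersion for at least min_samples consecutive samples.
    Returns a list of (start_index, end_index, centroid) tuples.
    """
    fixations = []
    i, n = 0, len(samples)
    while i + min_samples <= n:
        j = i + min_samples
        if dispersion(samples[i:j]) <= max_dispersion:
            # Grow the window while dispersion stays below the threshold.
            while j < n and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            xs = [p[0] for p in samples[i:j]]
            ys = [p[1] for p in samples[i:j]]
            centroid = (sum(xs) / len(xs), sum(ys) / len(ys))
            fixations.append((i, j - 1, centroid))
            i = j  # skip past the fixation; gaps between fixations are saccades
        else:
            i += 1
    return fixations
```

Even in this toy form, the recalibration problem is visible: the right `max_dispersion` depends on the tracker's noise and the user's distance from the screen, which is exactly why such algorithms need tuning per session.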

 

 

Pointy McPolygon

 
