The history of eye-movement research extends back at least to 1794, when Erasmus Darwin (Charles' grandfather)
published Zoonomia, including descriptions of eye movements due to self-motion. But research on eye movements was
restricted to the laboratory for 200 years, until Michael Land built the first wearable eyetracker at the University of
Sussex and published the seminal paper "Where we look when we steer." In the intervening centuries, we learned a
tremendous amount about the mechanics of the oculomotor system and how it responds to isolated stimuli, but virtually
nothing about how we actually use our eyes to explore, gather information, navigate, and communicate in the real world.
Inspired by Land's work, we have been working to extend knowledge in these areas by developing hardware, algorithms,
and software that allow researchers to pursue such questions outside the laboratory. Central
to that effort are new methods for analyzing the volumes of data that come from the experiments made possible by the
new systems. We describe a number of recent experiments and SemantiCode, a new program that supports assisted
coding of eye-movement data collected in unrestricted environments.