Open Access
12 February 2015 Simultaneous drift, microsaccades, and ocular microtremor measurement from a single noncontact far-field optical sensor
Abstract
We report on the combined far-field measurement of the three involuntary fixational eye movements, drift, microsaccades, and ocular microtremor (OMT), using a single noncontact optical sensor. We review the significance of the smallest, least measured, and thus least understood of the three, OMT. Using modern digital imaging techniques, we perform detailed analysis, present experimental results, and examine the extracted parameters. For the first time, in vivo noncontact measurements of all three fixational in-plane movements of the human eye are reported, providing both the horizontal (left-right) and vertical (up-down) displacement results simultaneously.

1.

Introduction

The existence of involuntary minute eye movements has been known for centuries. In the 18th century, James Jurin (1684 to 1750) described “a trembling of the eye,” while Hermann von Helmholtz (1821 to 1894) noted that he could not maintain stable fixation during vision.1 However, it was not until the 20th century that such involuntary eye movements could be measured. Adler and Fliegelman2 were the first researchers to describe the three miniature fixational eye movements based on experimental recordings in the 1930s. The three in-plane motions they reported are drift, microsaccades, and ocular microtremor (OMT).2–4 Drift (amplitudes of 2 to 10 μm) is a gradual shift in fixation direction, while microsaccadic bursts (5 to 50 μm) correct for this gradual drift in gaze. Drift has the lowest-frequency component of all these miniature fixational eye movements (∼1 Hz), while microsaccades have frequencies of ∼5 Hz. The last type of in-plane motion, OMT, has a typical peak-to-peak (pk-pk) amplitude range of 150 to 2500 nm, which is much smaller than either drift or microsaccades, but it has a much higher reported dominant frequency. The largest OMT study to date found an average dominant OMT frequency of 83.68 Hz with a standard deviation of ±5.78 Hz for clinically normal human subjects.5 However, frequency observations above 100 Hz have also been made.4 OMT frequencies have been extracted using a zero-crossing analysis of the detected temporal signal.5–7 OMT is caused by constant activity of the oculomotor units in the reticular formation of the brain.4 Tone, or tension, of the extraocular muscles, which control eye movements, is maintained by a constant stream of neural impulses. OMT is believed to be a result of rapidly fluctuating imbalances in the tension of opposing muscle groups.4,8–10

Clinical interest exists in OMT owing to its neurological origins. Accurate OMT measurement could prove a useful tool to indicate and aid clinical diagnosis of a number of conditions. Previous studies have investigated the use of OMT as a method of unambiguous brainstem death confirmation,11 prediction of the outcome of coma,12 as well as monitoring a patient’s depth of anesthesia.13–15 Atypical records have been reported in patients with idiopathic Parkinson’s disease16 as well as multiple sclerosis.17 In addition, a decrease of the central OMT frequency value with age has been observed in clinically normal patients.18 Furthermore, it has been suggested that OMT and the other fixational eye movements affect the human visual process.19 We note that OMT measurements have also been performed on cats,20 rabbits,21 and rats.22

Eye movement was monitored using contact lenses containing mirrors as early as the 1950s.23,24 Using such optical lever systems, the motion of the eye (rotational and torsional) could be monitored.25 Such systems did not achieve widespread use: both applying the lenses and the presence of the lenses on the eye made measurements difficult, as the lens might not adhere to the eye adequately.23 The monocular measurements conducted in the 1950s contained a small tremor superimposed on the other involuntary movements of drift and microsaccades. However, it was not until the 1960s that the OMT frequency was more accurately measured as ∼85 Hz (Ref. 25) using this contact lens technique. This result was possible after examining the moment of inertia of the eye and using a contact lens that exerted a small force on the eye so that it was sensitive to all fixational movements. Thus, slippage was reduced and an accurate estimate of the OMT frequency was obtained. However, as with any mechanical contact method, damping occurs.

Using high-speed camera systems, frequency analysis of crystalline lens motion and/or distortion has been studied.26 However, such systems monitor lower-frequency eye motions, around 20 Hz, when compared with OMT. Eye motion, and in particular saccades, has also been analyzed at the level of the retina, allowing tracking of single cones.27,28 However, the limited frame rate of scanning light ophthalmoscopes, as well as the low retinal reflection, makes it impractical to study eye tremor at the posterior eye at the frequencies accessible at the anterior eye, which has the added advantage of a simpler optical illumination path.

None of the most recent clinical OMT studies have used the far-field contact lens system. Instead, a mechanical contact method has been employed to record OMT data.4,5,21,29,30 A length of piezoelectric material is covered with a protective silicone membrane to produce the probe. The probe is sterilized before it makes contact with the sclera of the eye, against which it is held by a sprung screw mechanism. A voltage proportional to the tensile pressure of the contact against the eye is produced. The most recently published study attempts to investigate the perceptual role of all three fixational eye movements. This system incorporates both the probe measurement method to measure OMT and a video eye tracker to measure the larger-amplitude, lower-frequency drift and microsaccades.31 Despite having a resolution as fine as 10 nm (pk-pk), the probe method has some significant drawbacks. First, the contact probe method does not give accurate amplitude information as it relies on good continuous contact with the eye. Second, by mechanically loading the eye, the method filters out drift and can significantly dampen the eye’s movement and interfere with the microsaccadic and OMT records.31 Third, it requires access to sterile probes and that the eye of the examined patient be anesthetized, which causes blepharospasm (spasm of the eyelid) in some patients. Fourth, the contact method requires that the subject’s eyelid be restrained, e.g., using adhesive tape, thus preventing observation/measurement of the effects of the human blinking mechanism. In addition to these disadvantages, the procedure is time consuming, highly uncomfortable and stressful for the patient, and requires the presence of highly trained medical staff to perform safe and accurate measurements.
Furthermore, the sensitivity of the contact probe method is so great that unwanted environmental vibrations and other physiological signals, e.g., blood pulse, are also recorded. Highly accurate OMT measurements using this method, therefore, require the recording to be performed in a vibration isolated environment to reduce effects from unwanted vibrations and motions, e.g., ambient vibrations and gross head motion. Thus, the subject’s head must be rigidly restrained.

Eye movements are customarily quoted in the literature in units of angular rotation of the eye. For a typical eye with a diameter of 23 mm, a 1 arcsec rotation corresponds to an ≈56 nm surface displacement. OMT has a random, noise-like appearance with intermittent sinusoidal bursts. The pk-pk amplitude of OMT is of the order of 1 μm (17.86 arcsec), with an estimated range from 150 to 2500 nm pk-pk (2.67 to 44.64 arcsec). To accurately observe OMT, a minimum resolution of 25 nm pk-pk (0.45 arcsec) has been suggested.6
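Since the text repeatedly converts between angular rotation and surface displacement, the small-angle conversion can be sketched as follows (a minimal illustration; the function name is ours, and the 23 mm diameter is the value quoted above):

```python
import math

EYE_DIAMETER_MM = 23.0  # typical eye diameter quoted in the text

def arcsec_to_nm(arcsec):
    """Surface displacement (nm) for an eye rotation in arcseconds,
    using the small-angle approximation s = r * theta."""
    radius_nm = (EYE_DIAMETER_MM / 2.0) * 1e6  # mm -> nm
    theta_rad = arcsec * math.pi / (180.0 * 3600.0)
    return radius_nm * theta_rad

# 1 arcsec corresponds to roughly 56 nm of in-plane displacement.
```

The same conversion reproduces the equivalences quoted in the text, e.g., a ∼1 μm pk-pk OMT amplitude corresponding to ∼17.86 arcsec.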

Recently, accurate noncontact optical methods employing speckle interferometric techniques have been devised specifically to measure OMT.6,32 However, the resulting sensors only measure in-plane motion along one axis in one eye. Furthermore, they must adhere rigorously to strict safety standards in order to avoid laser damage to the retina. Because of the resulting low light intensity, illumination and detection optics must be positioned to deliver and collect light close to the eye surface (10 to 70 mm). Efforts to measure OMT by imaging the biospeckle pattern on the sclera have also been reported; however, similar laser safety considerations apply to speckle imaging as to the speckle interferometric technique.7,33,34

In the 1950s, researchers at the Eastman Kodak company measured both the amplitude and frequency of OMT (Ref. 35) in one direction. Information about the eye movements was captured in the far field using a slit camera, a magnifying lens, and extremely blue-sensitive film, which was exposed as it passed the aperture at high velocity. An area of the sclera was illuminated by partially coherent, spectrally filtered light from a mercury lamp, which emits light with wavelengths from 400 to 520 nm. Environmental and other unwanted motions which might affect accurate OMT recordings were reduced by performing the experiment on a vibration-isolated platform as well as by restraining the subject’s head movement by means of a bite bar. Following data capture, magnified images from the processed film were projected onto a ruled screen containing a reference grid. A convenient blood vessel was chosen as a reference point from which technicians manually compared changes in displacement to obtain both the OMT amplitude and frequency. While the (off-line) processing time was significant and impractical for regular clinical use, this ingenious demonstration of a far-field noncontact imaging scheme is extremely impressive and suggestive.

There has been very significant progress in the development of digital sensors (cameras) offering high-speed and high-resolution imaging capabilities. The spatial resolution of the resulting imaging systems is then governed by the choice of imaging lens, the number of sensor pixels, the sensor area, the active pixel area size, and by the wavelength used. The temporal resolution of such a system is limited by the number of frames that the sensor can capture per second. Finally, given the physical system limitations, the spatial resolution can also be improved using computational (software-based) statistical pattern analysis36 or subpixel interpolation techniques.37,38

In this paper, we introduce a far-field method using spatially incoherent illumination from a light-emitting diode (LED), ruling out the interfering effects of speckle noise, and an ultrafast high-resolution black-and-white digital sensor to capture eye movements. Digital processing of the captured data, i.e., correlation, yields in-plane two-dimensional (2-D) motion information. Using such a measurement technique, it is possible to simultaneously measure both vertical and horizontal fixational eye movements. With the elimination of the need for mechanical contact with the eye, many of the practical problems associated with the piezoelectric system are resolved, permitting such noncontact systems to be used in a clinical setting. In this way, OMT may become a practical clinical indicator of neurological function. Furthermore, because the system described here uses incoherent light, the self-interference of scattered intense laser light (speckle noise) is not a factor in designing the system.

This paper is structured as follows. In Sec. 2, we first present an overview of the principles required to measure in-plane displacement in both the x and y directions using a correlation-based technique. A section of the sclera/iris is illuminated and an optically magnified high-speed video is captured. The relative frame-to-frame displacements are calculated and the absolute motion, relative to the first frame, is calculated. Section 3 describes in practical terms the equipment used, while the results from a number of different arrangements are investigated and presented in Sec. 4. All three involuntary eye motions as well as a blink are registered using this noncontact far-field imaging system with results in both the x (horizontal) and y (vertical) directions. Finally, in Sec. 5, we present a discussion of the measurements as well as suggestions for the practical issues that need to be addressed prior to implementation by clinicians for in vivo studies.

2.

Far-Field Motion Sensor Principles

Let us first describe various aspects of the proposed implementation of a far-field eye motion sensor, as schematically illustrated in Fig. 1. The elements required in realizing such a far-field motion detector include (1) a suitable illumination light source (LS1, LS2, LS3, LS4); (2) optical magnification (imaging system 1 and imaging system 2) consisting of two lenses (LI and LII); (3) digital image capture device (sensor 1 and sensor 2); and, finally, (4) the digital processing of the recorded data. A target object (Target) onto which the subject’s vision fixates is also used to control the gaze.

Fig. 1

Schematic of the experimental setup showing the subject’s left (L) and right (R) eyes, the illuminating light source (Source) comprising one of the light sources (LS1, LS2, LS3, or LS4), optical magnification (Imaging system) consisting of lenses LI and LII, and digital video capture device (Sensor), while the right eye’s vision is fixated on a target (Target).

JBO_20_2_027004_f001.png

The chosen light source must be bright enough to illuminate a part of the sclera (white of the eye) without causing irritation (or damage) to the subject. It should also not be so bright as to interfere with normal fixational vision. An area on the sclera is chosen with a suitable reference feature, e.g., blood vessels or the iris/sclera border, to act as the reference points when comparing subsequent images during the cross-correlation process (described in detail later in this section). Additionally, the light source should emit a constant illumination (no variation of intensity or color with time) to avoid noise problems, e.g., the flickering, which, in extreme conditions, can sometimes be observed from overhead lighting powered from the sinusoidally varying mains electricity supply.

Lenses LI and LII are introduced to bring a magnified image of the field of view of the eye to the imaging sensor face. The geometrical optical magnification of the imaging optics is defined as Mopt and can be used to increase the spatial resolution of the imaging system.

The digital image capture device then captures a number of (magnified) intensity images (i.e., a video) of the optical field at the eye. The intensity images, or sensor frames, are spatially sampled on a grid in x and y defined by the widths, Δx and Δy, of the square sensor pixels, which set the maximum spatial sampling frequency in the x and y directions. Each captured image (frame) represents a sample in time. Such sampling must satisfy the Nyquist criterion: the temporal sampling frequency must be at least twice the highest signal frequency one wishes to measure, i.e., it sets the system temporal resolution. From the acquired series of images, it is possible to determine any change between subsequent images by cross-correlating each frame with the previous frame. Then, examining the resulting shifted correlation peak, the relative spatial displacement between frames can be obtained. These are the basic principles of the digital processing used to perform motion detection in this paper.
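The temporal Nyquist constraint above fixes a minimum frame rate for a chosen maximum signal frequency; as a trivial illustration (the helper name and the 150 Hz figure are ours, motivated by the >100 Hz OMT observations cited in Sec. 1):

```python
def min_frame_rate(f_max_hz):
    """Nyquist criterion: the frame rate must be at least twice the
    highest temporal frequency one wishes to measure."""
    return 2.0 * f_max_hz

# OMT components above 100 Hz have been reported, so resolving content
# up to, say, 150 Hz requires a frame rate of at least 300 fps.
```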

Given a sensor with a sufficient number of pixels per unit area, capturing enough frames per second, any eye motion can be unambiguously determined. Unfortunately, sensors become much more expensive as the number of pixels and the frame grabbing rate increase. Therefore, it is sensible to use software-based algorithms to improve performance where possible. Increased resolution can be obtained from the captured data by image interpolation. Image interpolation is the process whereby an estimate of the intensity at a virtual intrapixel is obtained from the intensity values captured at neighboring real pixels.39 There is a link between the sensor gray-scale resolution (i.e., the resolution in intensity) and the possible increase in the resolution in position in x and y. Estimating and inserting new virtual pixel values and, thus, doubling the achievable sampling frequency in x and y results in an effective halving of the sampling period (the pixel size). This is a software-based method to create super-resolution images. Using this concept, coupled with the effects of optical magnification, the effective spatial resolution of the digital capture device is

Eq. (1)

Δx_mag = Δx_sens / (M_opt × M_interp),
where Δx_sens is the real pixel width on the sensor face, M_opt is the magnification introduced by the optical system, and M_interp is the increase in spatial resolution due to the interpolation process (numerical magnification). In the example stated above, M_interp = 2, i.e., we double the spatial resolution in both x and y, implying four times as many pixel values. The drawback to computational magnification is the time necessary to perform the interpolations and the fact that halving the pixel widths quadruples the number of samples to be processed.
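Eq. (1) can be evaluated directly; for example, combining the 6.5 μm sCMOS pixels described in Sec. 3 with the M_opt ≈ 9.17 imaging system and M_interp = 2 (an illustrative combination, not a configuration reported by the authors):

```python
def effective_pixel_size(dx_sens, m_opt, m_interp):
    """Eq. (1): effective pixel size after optical magnification
    M_opt and interpolation (numerical) magnification M_interp."""
    return dx_sens / (m_opt * m_interp)

# 6.5 um sensor pixels with M_opt = 9.17 and M_interp = 2 give an
# effective pixel of roughly 0.35 um at the eye.
```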

The correlation algorithm used in this paper is illustrated in Fig. 2. It involves the cross-correlation of two sequentially captured image frames, i.e., f_{n-1} and f_n, of the sclera/iris border of the human eye. The cross-covariance normalized to the geometric mean of the variances is often used and referred to as the correlation function of the recorded intensities:39

Eq. (2)

c_n = [E(f_n × f_{n-1}) − E(f_n)E(f_{n-1})] / √([E(f_n²) − E²(f_n)] × [E(f_{n-1}²) − E²(f_{n-1})]).
In the frequency domain, it can be conveniently calculated as

Eq. (3)

C_n = [FT⁻¹(F_n* × F_{n-1}) − E(f_n) × E(f_{n-1})] / √([E(f_n²) − E²(f_n)] × [E(f_{n-1}²) − E²(f_{n-1})]),
where F_n denotes the Fourier transform (FT) of the n’th frame f_n, and F_{n-1} denotes the FT of the (n−1)’th frame f_{n-1}. F_n* represents the complex conjugate of F_n, and E(·) is the expected or mean value of the pixel values of f_n and f_{n-1}:

Eq. (4)

E(f_n) = [1 / (P × Q)] Σ_{p=1}^{P} Σ_{q=1}^{Q} f_n(p, q),  where 1 ≤ p ≤ P, 1 ≤ q ≤ Q.

Fig. 2

Correlation algorithm flow diagram. The two images (128 × 128 pixels) of the iris/sclera border as highlighted in the figure were obtained using LS3, imaging system 1, and sensor 1 from subject B.

JBO_20_2_027004_f002.png

Figure 2 shows two frames (intensity images), i.e., f_{n-1} and f_n, of an upsampled section of the eye. The n’th frame, f_n, is captured a time Δt = 1/f_s after the (n−1)’th frame f_{n-1}, where f_s is the temporal sampling frequency, i.e., the frame grabbing rate, of the digital sensor. The FT of each frame is calculated, using MATLAB®’s built-in fast Fourier transform (FFT) algorithm,40 producing the 2-D spectrum of each frame, i.e., F_n and F_{n-1}, respectively. The cross-correlation spectrum is generated by first multiplying the two spectra, see Eq. (3). Then, recalling the shift theorem of the FT,41 the inverse FT of this cross-correlation spectrum is calculated, resulting in a 2-D correlation distribution whose peak is shifted from the origin. This shift distance/direction corresponds to the in-plane movement between the images in the two frames. Thus, the location of the shifted peak denotes the relative displacement, along both the horizontal x axis and the vertical y axis, between subsequent frames. This relative frame-to-frame displacement vector is denoted here by d̄_n. Summing subsequent correlation-peak displacements, d̄_n, allows the motion of the eye in x and y to be tracked.
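A minimal sketch of this frame-to-frame procedure, assuming numpy's FFT conventions and omitting the variance normalization of Eqs. (2) and (3) (which rescales the correlation values but does not move the peak), might read as follows; the function names are illustrative, not the authors' MATLAB code:

```python
import numpy as np

def frame_shift(f_prev, f_cur):
    """Estimate the integer-pixel in-plane shift of f_cur relative to
    f_prev from the peak of their FFT-based cross-correlation."""
    a = f_prev - f_prev.mean()  # subtract the mean, as in Eq. (2)
    b = f_cur - f_cur.mean()
    # Multiply the conjugate spectrum of one frame by the spectrum of
    # the other, then inverse transform (shift theorem).
    c = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))
    peak_y, peak_x = np.unravel_index(np.argmax(c), c.shape)
    # Indices beyond N/2 correspond to negative shifts (FFT wrap-around).
    ny, nx = c.shape
    dy = peak_y - ny if peak_y > ny // 2 else peak_y
    dx = peak_x - nx if peak_x > nx // 2 else peak_x
    return dx, dy

def track(frames):
    """Accumulate the frame-to-frame displacements d_n into an
    absolute trajectory relative to the first frame."""
    shifts = [frame_shift(p, c) for p, c in zip(frames[:-1], frames[1:])]
    return np.cumsum(np.array([(0, 0)] + shifts), axis=0)
```

Applied to a captured video, the last row of `track(frames)` gives the eye's net (x, y) displacement in pixels, which Eq. (1) converts to physical units.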

Further increases in the resolution of the detected displacement can be achieved by upsampling and interpolating the resultant correlation peak data obtained using the correlation method described here. We note that such an approach has previously been applied to sharpen the correlation peak in an attempt to more accurately estimate the values and locations of interspaced results.37–39,42
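As an alternative to upsampling, a three-point parabolic fit around the correlation maximum is a commonly used way to estimate a subpixel peak location along each axis; we sketch it here purely as an illustration (it is not the method of Refs. 37–39):

```python
def parabolic_offset(c_minus, c_peak, c_plus):
    """Subpixel offset of a correlation maximum from a three-point
    parabolic fit along one axis; the result lies in (-0.5, 0.5)."""
    denom = c_minus - 2.0 * c_peak + c_plus
    if denom == 0.0:
        return 0.0  # flat neighborhood: no refinement possible
    return 0.5 * (c_minus - c_plus) / denom
```

The refined peak position is the integer peak index plus this offset, applied independently in x and y.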

While there is a desire to capture as many frames per second over as long a time interval as possible and to retain as many pixel values as possible in the subsequent calculations, in practice, this leads to very large datasets and correspondingly long processing times.

3.

Equipment Overview

Having described the principle of operation of the technique used to detect in-plane motions of the human eye, we now describe the experimental implementation used to detect such motions. A schematic of the experimental setup is presented in Fig. 1. As stated, the light source illuminates the eye so as not to interfere with normal fixational vision and not to cause accidental damage to the retina.

In our case, experiments with a number of different light sources were attempted. The first source used (LS1) was a high-power red Phillips Luxeon Star LED (central wavelength: 650 nm; rated electrical power: 1 W). The second source (LS2) was a Volpi Intralux 4000 halogen lamp with fiber illumination arms and adjustable power output. The third source (LS3) tested was an incandescent bulb (Phillips, 25 W). The fourth and most successful source examined (LS4) was a blue Kingbright Superflux LED (central wavelength: 468 nm). This LED was found to be the best because it (1) increased the resolution of the optical imaging system, (2) increased the contrast between the sclera and blood vessels (which absorb the blue light), and (3) was conveniently available, a supply of these well-known LEDs existing. In addition, when used, LS4 was powered by four alkaline AA batteries in series via a current-limiting resistor (resistance: 146 Ohms), eliminating time-varying ac power effects. LS4 was positioned 10 to 15 cm from, and slightly to one side of, the eye of each subject so as to illuminate the sclera and not expose the retina directly. No additional optics were employed to focus or collimate the light emitted by LS4. Furthermore, the illumination intensity was not so great as to be irritating or to interfere with the normal fixational visual process. This aspect was subjective to each candidate, but no squinting of the eyelids by subjects was observed and no discomfort as a result of the illumination levels used was reported by any of the candidates.
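The battery supply arrangement can be sanity-checked with Ohm's law; note that the ∼3.2 V forward drop assumed below for a blue LED is our assumption, not a value given in the text:

```python
def led_current_ma(v_supply, v_forward, r_ohms):
    """LED drive current in mA through a series current-limiting
    resistor: I = (V_supply - V_forward) / R."""
    return (v_supply - v_forward) / r_ohms * 1e3

# Four fresh AA cells (~6.0 V) through the 146 Ohm resistor, with an
# assumed ~3.2 V blue-LED forward drop, give roughly 19 mA.
```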

A small area on the well-defined eye structure, i.e., the border between the iris and the sclera or a blood vessel, was imaged. This was done so that a moving edge with a rapid variation of the intensity appeared in the images captured.

Optical imaging of the field of view, for the results presented in this paper, was achieved using two different systems. The first imaging system, imaging system 1, consisted of a combination of two macro-imaging lenses (LI: a Melles Griot Macro Invaritar 5X and LII: a Sigma zoom lens, 24 to 70 mm, 1:2.8 EX macro). Using this system, an optical magnification of M_opt ≈ 1 was achieved. A second imaging system, imaging system 2, with a greater optical magnification was then implemented. This consisted of two imaging lenses (LI: a Buhl Optical 848MCZ500 2.75 to 5 in. lens from a Hitachi CP-X990 XGA projector and LII: a macro-imaging lens, f = 5.5 in., lens input width: 70 mm). These were arranged to give an overall optical magnification of M_opt ≈ 9.17.

Studies were also performed with two different digital imaging sensors being employed (sensor 1 and sensor 2). Sensor 1 was an ultrafast charge-coupled device (CCD) digital sensor (Phantom 5.1, maximum resolution: 1024 × 1024 pixels, pixel size: 7.4 μm). Sensor 2 was an ultrafast scientific complementary metal-oxide-semiconductor (sCMOS) sensor (Andor Neo 5.5 low-noise sCMOS 12-bit monochrome, maximum resolution: 2560 × 2160 pixels, pixel size: 6.5 μm). The video capture frame rate on both these sensors could be increased by reducing the region of interest (ROI), and thus, in this way, the number of sampled pixel intensities captured could be controlled. We note that both of these sensors were operated in modes in which only a subset of the full pixels available were used, e.g., 240 × 256 pixels from the sCMOS sensor having 2560 × 2160 pixels.

Two sets of far-field experimental results are presented in this paper. First, using sensor 1, imaging system 1, and LS3, a sequence of digital images was captured over a small field, i.e., 128 × 128 pixels, at a very high sampling rate of f_s = 3000 frames per second (fps), i.e., one image capture every 1/3000 s. Postprocessing was then performed on the recorded data. Second, experimental results are discussed when using sensor 2, imaging system 2, and LS4. Several different sequences of digital images were captured, for example, a video image size of 240 × 256 pixels at a frame rate of f_s = 300 fps, i.e., one image capture every 1/300 s. The digital postprocessing described above was performed on the various recorded video datasets. The results for the various cases are discussed below.

All the numerical calculations reported here were performed on a laptop with an Intel P8400 Core Duo processor with clock speeds of 2.26 GHz per processor and with 3 GB of RAM. The MATLAB® 7.4.0 (R2007a) (Mathworks Inc.) programming environment was used to implement the correlation algorithm (as described in Sec. 2) as well as all other data processing operations. Spectral filtering of the extracted eye movement data was carried out using finite impulse response (FIR) filters implemented in MATLAB®. The low-pass and band-pass filters were designed with a unity gain in the passband. The cutoff frequencies are discussed in Sec. 4.
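Unity-passband-gain FIR filters of this kind are straightforward to construct; the sketch below (our illustration, not the authors' MATLAB implementation) builds a band-pass response as the difference of two windowed-sinc low-pass responses and scales it for unity gain at the band centre. The 133-tap length assumed here matches an order-132 filter such as the one described in Sec. 4.1:

```python
import numpy as np

def fir_bandpass(f_lo, f_hi, fs, numtaps=133):
    """Windowed-sinc (Hamming) FIR band-pass with ~unity passband gain."""
    n = np.arange(numtaps) - (numtaps - 1) / 2.0

    def lowpass(fc):
        # Ideal low-pass impulse response with cutoff fc, sampled at fs
        return 2.0 * fc / fs * np.sinc(2.0 * fc / fs * n)

    h = (lowpass(f_hi) - lowpass(f_lo)) * np.hamming(numtaps)
    f_mid = 0.5 * (f_lo + f_hi)
    return h / gain(h, f_mid, fs)  # scale for unity gain at band centre

def gain(h, f, fs):
    """Magnitude of the filter's frequency response at frequency f."""
    k = np.arange(len(h))
    return np.abs(np.sum(h * np.exp(-2j * np.pi * f / fs * k)))

# Usage: y = np.convolve(trace, fir_bandpass(20.0, 200.0, fs), "same")
```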

It should be noted that the optimized data processing time when using the FFT algorithm occurs for frames containing 2^n × 2^n pixel values, e.g., 256 × 256 pixels. In the case of sensor 1, it was possible to capture video images with square, base-2 ROI sizes, i.e., 2^7 × 2^7 = 128 × 128 pixels, allowing optimal spatial processing using the FFT algorithm as well as a sufficiently high temporal sampling rate. However, in the case of sensor 2, a trade-off between acquiring a large enough ROI and having a sufficient temporal sampling rate resulted in a nonsquare, non-base-2 video image size. In an effort to systemize the data processing of datasets captured using sensor 2, each individual 240 × 256 pixel frame was cropped to a square image of the lower pixel number, i.e., 240 × 240 pixels. Following this, the correlation algorithm, presented in Fig. 2, was applied to the data. The processed results are presented in Sec. 4.

4.

Results

In this section, for the sake of comparison, we first present a set of OMT records (movement versus time) captured for a clinically normal healthy subject (subject A: male, age: 24 years) using the contact method. Then some initial results using ultrafast digital sensors (sensor 1 and sensor 2) to measure fixational eye movements on four other healthy male subjects (subjects B, C, D, and E: ages 22 to 30 years) are presented and discussed in detail.

4.1.

Typical OMT Record

A typical OMT record produced using the contact method for a clinically normal subject (subject A, age: 24) is presented in Fig. 3(a). These results were produced at St. James’s Hospital, Dublin, using the standard piezoelectric contact method.30,43 The data presented consist of 0.5 s of a 15-s measurement from a right eye, with the subject highly constrained (device strapped firmly to the head). The data were sampled at 1 kHz, following which a digital implementation of an FIR band-pass filter (order 132) with corner frequencies (−3 dB points) at 20 and 200 Hz was applied to the data.

Fig. 3

(a) Subject A’s ocular microtremor (OMT) record obtained using the contact probe method. (b) The resulting spectrum with highlighted peak (black circle) at 83.68 ± 5.78 Hz, a typical result for a clinically normal subject.5 Lower-frequency fixational motions cannot be measured accurately due to the drawbacks inherent in the contact system.

JBO_20_2_027004_f003.png

Drift and microsaccades are not present in this dataset as their frequency components lie outside (below) the lower edge of the digital band-pass filter (20 Hz). Some characteristic sinusoidal-like OMT bursts5 are highlighted in Fig. 3(a). The corresponding spectrum, with the OMT frequencies indicated, is presented in Fig. 3(b). Significant OMT signal peaks appear in the expected frequency range for a clinically normal subject, i.e., at 83.68 ± 5.78 Hz.5

4.2.

Light Sources and Far-Field Correlation: Image Interpolation

As noted, a number of light sources were tested to illuminate a portion of the sclera in attempts to measure OMT using the far-field sensor presented in this paper, i.e., LS1, LS2, LS3, and LS4.

Illumination from a high-powered LED source (LS1) was initially used; however, it was found to be too intense for the subjects with the resulting discomfort and squinting interfering with the normal visual process. For these reasons, use of this dc-powered LED was abandoned. Next the Volpi Intralux halogen lamp (LS2) with bunched fiber illuminators was employed. This source was intense enough to illuminate the eye comfortably and safely, and did not interfere with normal fixation vision. However, the results were not useful for a number of reasons. The light from the bunched fiber guides was not sufficiently concentrated on the eye. In addition, the subject’s head could not be suitably restrained so as to remain rigid with respect to the camera. As a result of this, the eye was out of focus during the recording process, yielding unsatisfactory image sharpness and stability.

It was found that illumination using a mains powered dc source connected to an incandescent flashlight bulb (LS3), which was suitably focused on the desired area of the eye, could be successfully used as an illuminating source to measure eye movements. Comfortable levels of illumination were maintained using the Phillips 25 W red bulb and the normal fixational visual process was not interfered with.43

As discussed, a small area with a defined structure, i.e., on the border between the iris and the sclera, was illuminated and images were recorded using the ultrafast CCD (sensor 1). Using the setup described in Sec. 3 (imaging system 1 and sensor 1), 15,993 image frames of this section of the eye from subject B (male, age: 25 years) were captured at 3000 fps; thus, a temporal frequency resolution of 3000/15,993 ≈ 0.188 Hz exists. The subject’s vision was fixated on a target located above and to the left of the sensor. In an attempt to minimize head motion (without the use of restraints or a bite bar), the subjects were only asked to rest their chins on a surface attached to the optical bench. The subject’s eye was then illuminated with comfortable levels of light from the incandescent lamp. The correlation algorithm described in Sec. 2 was applied to the resulting image data. In this way, a total of 5.3 s of in-plane eye movements could be extracted from the measured data, see Figs. 4(a) and 4(b). Figures 4(a), 4(c), 5(a), and 5(c) show simultaneously measured horizontal movements (H), while Figs. 4(b), 4(d), 5(b), and 5(d) show vertical motions (V).

Fig. 4

Subject B: using LS3, imaging system 1 (M_opt ≈ 1 and M_interp = 8), and sensor 1, subject B’s detected horizontal (left column) and vertical (right column) eye movements are presented in (a) and (b). These data were filtered using a low-pass filter with a 500 Hz cut-off frequency. A blink (Bl) is highlighted. In (c) and (d), the results from a digital low-pass filter with a 20 Hz cut-off frequency are presented after being applied to the same data in (a) and (b). Drift (D) and microsaccades (M) are highlighted.


Fig. 5

Subject B: using LS3, imaging system 1 (Mopt1 and Minterp=8), and sensor 1, the detected horizontal (left column) and vertical (right column) eye movements are presented. (a) and (b) show data filtered using a band-pass filter with 20 and 200 Hz cut-off frequencies such that OMT should be present. The corresponding spectra of (a) and (b) are shown in (c) and (d) with the range of OMT frequencies for a clinically normal subject highlighted by a black circle. The suggested central OMT frequency of ∼84 Hz is denoted by an arrow, while rectified mains frequency components are clearly visible in (c) and (d).


Figures 4(a) and 4(b) show the measured eye motions filtered using a low-pass FIR filter with an upper cutoff frequency of 500 Hz. In addition to the fixational eye movements, the subject blinked (labeled Bl) roughly halfway through this measurement, visible as the large-amplitude excursion in both traces. In order to clearly present the lowest-frequency fixational eye motions, drift (D) and microsaccades (M), an FIR low-pass filter with a cutoff frequency of 40 Hz was applied to the data presented in Figs. 4(a) and 4(b); the filtered results are presented in Figs. 4(c) and 4(d). These motions were identified manually, although detection algorithms can be used to determine occurrences of microsaccadic movements.44 The gradual change in gaze (drift) and the corresponding correctional microsaccadic movements can be identified in Figs. 4(c) and 4(d).

In order to highlight the OMT record of a clinically normal healthy subject, an FIR band-pass filter was applied to the raw displacement data to suppress signals outside the expected frequency range for OMT records. The band-pass filter employed had cutoff frequencies of 20 and 100 Hz. The ensuing results are presented in Figs. 5(a) and 5(b). The characteristic sinusoidal-like burst observed when using the contact method, i.e., see Fig. 3(a), is present in the signal. The signal presented has a duration of ∼5 s. OMT appears as a burst of sinusoidal-like activity, and it has previously been indicated that 5 s provides an adequate time interval in which to measure OMT characteristics.29 The spectra of the detected motion, in Figs. 5(c) and 5(d), contain a strong frequency component at 100 Hz. This arises because (1) the light source (LS3) was powered using a mains-powered dc source and (2) the experiment was performed with some overhead lighting present. The 100 Hz signal can be attributed to both of these sources and arises from rectification of the Irish 50 Hz ac electrical supply. These results indicated the necessity of performing experiments using a stable dc battery-powered light source (illuminating the eye) and in darkness (without overhead lighting during data capture) to reduce this unwanted signal.
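This kind of FIR-based separation of the slow (drift/microsaccade) and OMT bands can be sketched with SciPy on a synthetic trace. The sampling rate and cutoffs follow the text; the filter length, the zero-phase filtering, and the synthetic signal are assumptions for illustration:

```python
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 3000.0  # sampling rate of sensor 1 (fps)

# FIR filters: low-pass for drift/microsaccades, band-pass for the OMT band.
taps_lp = firwin(numtaps=501, cutoff=20.0, fs=fs)
taps_bp = firwin(numtaps=501, cutoff=[20.0, 100.0], fs=fs, pass_zero=False)

# Synthetic displacement trace: a slow drift ramp plus an 84 Hz tremor term.
t = np.arange(int(fs)) / fs
x = 5.0 * t + 0.2 * np.sin(2 * np.pi * 84.0 * t)

# Zero-phase filtering avoids introducing a group delay into the traces.
drift = filtfilt(taps_lp, 1.0, x)   # retains the slow ramp
omt = filtfilt(taps_bp, 1.0, x)     # retains the 84 Hz component
```

In the middle of the record (away from filter edge transients), `drift` closely follows the ramp while `omt` carries essentially only the tremor-band oscillation.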

The subject (subject B) involved in this set of experiments was a healthy 25-year-old Irish male and is expected to have an OMT peak frequency around 83.68±5.78 Hz.5 The expected location of the peak OMT frequency is highlighted using a circle in Figs. 5(c) and 5(d). As the subject is young, no significantly lowered peak frequency is expected.18 Furthermore, since the subject was healthy, i.e., not comatose, brain-dead, under anesthesia, or suffering from Parkinson’s disease, there should be no significant deviation from the normal peak frequency.4 Thus, it is fair to assume that a dominant OMT peak should appear in the detected spectrum of eye motion around the suggested band of frequencies centered at ∼84 Hz. In fact, such peaks can just be observed in the horizontal (H) spectrum in Fig. 5(c) at 77, 82, and 85 Hz and in the vertical (V) spectrum in Fig. 5(d) at 79, 86, and 90 Hz. However, the presence of the noise sources discussed makes the result inconclusive.

A factor leading to difficulties in resolving accurate OMT measurements is the spatial resolution of the imaging system used. Using imaging system 1 and sensor 1 as described in Sec. 3, we assumed the focus error to be negligible. Images of the iris/sclera border are clearly visible in Fig. 2. During this series of experiments, the optical imaging system used (imaging system 1) produced an optical magnification of Mopt1. The resolution was increased by upsampling and interpolating the intensity images captured by sensor 1, giving a numerical magnification of Minterp=8. Thus, with a pixel width of 7.4 μm, the effective resolution of our system is, according to Eq. (1), Δxmag=[7.4 μm/(1×8)]=925 nm. Therefore, our system did not have sufficient spatial resolution to measure the whole range of amplitudes (150 to 2500 nm) associated with OMT. Furthermore, the subject’s head was not restrained, e.g., by means of a bite bar, to reduce the blurring (smearing of the frequency spectra) associated with gross head movement. Therefore, significant sources of noise are present.
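Eq. (1) reduces to the sensor pixel pitch divided by the total (optical × numerical) magnification. The two configurations quoted in the text can be checked with a few lines:

```python
def effective_resolution_nm(pixel_pitch_um, m_opt, m_interp=1):
    """Effective in-plane resolution, following Eq. (1): sensor pixel pitch
    divided by the product of optical and numerical magnifications."""
    return pixel_pitch_um * 1000.0 / (m_opt * m_interp)

# Imaging system 1 + sensor 1: 7.4 um pixels, Mopt = 1, Minterp = 8 -> 925 nm
res1 = effective_resolution_nm(7.4, 1, 8)

# Imaging system 2 + sensor 2: 6.5 um pixels, Mopt = 9 -> ~722 nm
res2 = effective_resolution_nm(6.5, 9)
```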

In an attempt to overcome these limitations, a different dc-powered light source (LS4), imaging optics (imaging system 2) with an optical magnification of Mopt2=9, and a low-noise sCMOS detector (sensor 2) were employed. To eliminate background light noise, optical signal measurements took place in a dark environment. Furthermore, two rigid metal bars were introduced into the setup against which the subject could rest his forehead while the measurements were taking place. This was done to help reduce noise effects arising from head movements, especially those with large amplitudes compared to OMT. We note, however, that no straps, restraints, or bite bar were used.

The sclera was illuminated by a light source (LS4) located above and to the right of the subject’s head (at ∼30 deg to the normal forward direction of vision). Illumination was achieved using a battery-powered blue LED (LS4), with the intent of avoiding the 100 Hz ac noise effects seen in Figs. 5(c) and 5(d).

Imaging system 2 was used to image a magnified region of the sclera onto sensor 2 and consisted of two imaging lenses. A Buhl Optical 848MCZ500 2.75 to 5 in. lens from a Hitachi CP-X990 XGA projector was placed 8 to 10 cm from the eye, while a macro-imaging lens with a focal length f=5.5 in. was located ∼1750 mm from the Buhl Optical lens. All components were fixed to an optical table. The input to the 70-mm-wide macro lens was located ∼250 mm from the mechanical enclosure housing sensor 2. An optical magnification of Mopt2=9 is expected from this setup; thus, the effective resolution without spatial upsampling is, according to Eq. (1), Δxmag=[6.5 μm/9]=722 nm. This indicates that when using imaging system 2 and sensor 2, the system is theoretically capable of detecting motion of Δxmag/2=361 nm between two subsequent data frames. The system resolution can then be further enhanced numerically using spatial upsampling.

The optical performance of the imaging system was first tested by imaging the screen of a tablet computer (Apple iPad Mini, 1024×768 pixels, 163 pixels per inch). Figure 6(a) shows the resulting image captured using the improved optical imaging system (imaging system 2) and sensor 2. A cross-section along the center of a row of pixels, marked by the dashed white line in Fig. 6(a), is shown in Fig. 6(b). The black dashed line in Fig. 6(b) denotes five illumination pixels on the tablet, which span 1100 pixels on the sCMOS sensor, where Δxsens=6.5 μm. In this way, the magnification of this system is experimentally verified as being Mopt=9.17, which suggests that the minimal resolvable displacement of this improved system is Δxmag/2=354 nm. Therefore, although this optical system does not have sufficient spatial resolution to measure the whole range of necessary amplitudes (150 to 2500 nm) associated with OMT, the improved optical magnification improves the minimal resolvable displacement by a factor of ∼9 compared with the first optical system (imaging system 1) described earlier. Measurements on a number of subjects were made using imaging system 2 and sensor 2. Each measurement had a duration of T=10 s, comprising 3000 frames at a sampling frequency of 300 fps; thus, the results presented with sensor 2 have an improved temporal frequency resolution of 300/3000=0.1 Hz compared with that of sensor 1.
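The verification in Fig. 6 amounts to equating a known physical length on the tablet screen with the number of sensor pixels it spans. Using the figures quoted in the text (all numbers below are taken from it):

```python
# Verify the optical magnification from the iPad Mini test target.
tablet_ppi = 163.0
tablet_pitch_um = 25.4e3 / tablet_ppi    # width of one screen pixel, ~155.8 um
screen_px = 5                            # tablet pixels spanned by the profile
sensor_px = 1100                         # corresponding sCMOS pixels
sensor_pitch_um = 6.5                    # sCMOS pixel pitch

m_opt = (sensor_px * sensor_pitch_um) / (screen_px * tablet_pitch_um)
min_resolvable_nm = (sensor_pitch_um * 1e3 / m_opt) / 2.0   # Eq. (1) over 2
```

Evaluating these expressions reproduces the quoted Mopt=9.17 and the minimal resolvable displacement of ∼354 nm.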

Fig. 6

Using LS4, imaging system 2, and sensor 2, the screen of a mass-produced Apple iPad Mini was captured, with the pixels visible in (a). A cross-section along the dashed line in (a) is presented in (b) to experimentally verify the optical magnification of the imaging system used here. Note that (a) is stretched along the horizontal direction.


Figure 7(a) shows the absolute extracted eye motions from subject C, a healthy 30-year-old male, using LS4, imaging system 2, and sensor 2. The subject was asked to press his head against the two metal bars to reduce blurring of the results due to head movements. This also had the advantage of minimizing out-of-plane motion of the head, thus keeping the eye in focus. As indicated in Fig. 7(a), the dashed gray signal corresponds to up-down vertical (V) eye motions, while the black signal indicates left-right horizontal (H) eye motions. Each signal contains all three fixational eye motions: the slow drift (D) and sporadic corrective microsaccadic (M) motions are present, as well as a superimposed noise-like signal (OMT). The subject was asked to fix his gaze on a target slightly to the left of the normal forward direction of vision. However, as will be seen later in this paper, an overall movement of up to 3 mm is shown to take place during the 10 s of monitoring. This result clearly demonstrates that the subject’s eye was free to move independently, unlike in the contact lens and contact piezo methods, where the subject’s eye is in constant contact with the contact lens or piezoelectric element, with the latter pressed tightly and uncomfortably against the eye.43

Fig. 7

Subject C: using LS4, imaging system 2, and sensor 2, both the vertical (V) and horizontal (H) eye motions (a) were obtained from subject C, a healthy 30-year-old male, while the corresponding spectra are shown in (b) with the region of interest for peak OMT frequencies shown in a cut-out box in (b).


The overall spectrum of each vertical and horizontal motion, ranging from 0 to 150 Hz, i.e., up to half the temporal sampling rate, fs=300 Hz, of the detected signal (using our noncontact far-field method), is shown in Fig. 7(b). An enlarged cropped ROI from 70 to 100 Hz is included as an inset in the figure to highlight the region where peak OMT activity, at 83.68±5.78 Hz for clinically normal healthy human subjects, should be expected.5 First, it is worth noting that the strong peak at 100 Hz, seen in the results using the earlier system in Figs. 5(c) and 5(d), is not present in the results obtained using the improved OMT detection system. This is a direct consequence of using a battery-powered illumination source as well as performing data capture without overhead lighting. In Fig. 7(b), relatively strong signal peaks (of amplitude ∼14 dB) stand out in the vertical (up-down) spectrum at 78, 80, 83, and 93 Hz, in the expected region of OMT frequency. The horizontal (left-right) spectrum has similarly strong peaks at 80, 83, and 85 Hz. Two of these frequency peaks, 80 and 83 Hz, appear in the results for both the vertical and horizontal directions. The peaks at 78 and 93 Hz appear only in the vertical direction, while a relatively strong peak at 85 Hz appears only in the horizontal (left-right) direction. These results are summarized in Table 1. This previously unreported result is highly suggestive, indicating that the different muscle groups controlling the vertical and horizontal motions appear to contribute differently to the overall OMT spectrum.
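Picking out such in-band spectral peaks can be sketched as a windowed FFT followed by simple peak picking. The band edges follow the text; the Hann window and the relative-height threshold are illustrative assumptions, not the authors' procedure:

```python
import numpy as np
from scipy.signal import find_peaks

def omt_band_peaks(x, fs, band=(70.0, 100.0)):
    """Return frequencies (Hz) of prominent local maxima of the amplitude
    spectrum of displacement trace `x` lying inside the OMT band."""
    n = len(x)
    spec = np.abs(np.fft.rfft(x * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Relative height threshold (an assumption) suppresses window sidelobes.
    idx, _ = find_peaks(spec, height=spec.max() * 0.1)
    return [float(f) for f in freqs[idx] if band[0] <= f <= band[1]]
```

With a 10 s record at 300 fps, the 0.1 Hz bin spacing matches the frequency resolution quoted for sensor 2.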

Table 1

Measured frequency peaks for all subjects within the ocular microtremor (OMT) dominant frequency range for each of the measurements examined here.

Subject                                    Horizontal (Hz)     Vertical (Hz)

Subject A, male, 24 years
  (contact method) [see Fig. 3(b)]         85, 87, 90 (directions not separated)

Subject B, male, 25 years
  (LS3, imaging system 1, sensor 1)
  [see Figs. 5(c) and 5(d)]                77, 82, 85          79, 86, 90

Subject C, male, 30 years
  (LS4, imaging system 2, sensor 2)
  [see Fig. 7(b)]                          80, 83, 85          78, 80, 83, 93

Subject D, male, 27 years
  (LS4, imaging system 2, sensor 2)
  [see Fig. 9(b)]                          77, 82, 84          75, 80, 83, 90

Subject E, male, 22 years
  (LS4, imaging system 2, sensor 2)
  [see Fig. 11(b)]                         74, 78, 95          73, 82, 90, 94

The same datasets used to produce Fig. 7 are also used to produce Fig. 8, with the results being examined in several different ways. Figure 8(a) shows the detected relative frame-to-frame displacements in the x and y directions, i.e., dn¯ between each captured frame. A black box has been inserted into Fig. 8(a); its dimensions indicate the minimum spatial displacement that can be measured using the system, i.e., Δxmag=708 nm. We note that the displacements are well spread out in both the vertical and horizontal directions, apart from those close to the vertical axis. This indicates that the eye does not typically move purely up or down between frames. A cluster of measured relative motions occurs on either side of the box and may be an artifact of the digital processing algorithm arising from noise and/or aliasing.

Fig. 8

Subject C: using LS4, imaging system 2, and sensor 2, both the vertical (V) and horizontal (H) eye motions from subject C were plotted against one another. (a) shows the detected relative (frame-to-frame) motion, (b) shows the continuous displacements relative to an initial position, and (c) shows the time-varying continuous motion in both directions.


Adding together the relative motions, Fig. 8(b) shows the absolute displacement from the first frame position, with the horizontal motion (spanning ±2 mm) plotted against the vertical motion, which originates at 0 and reaches a maximum excursion of ∼3.5 mm. This shows that while the subject was asked to, and attempted to, fixate his gaze on the target, the eye was, in fact, always undergoing the fixational movements of drift and microsaccades as well as OMT. Examining these results, it can be observed that a large number of the small shifts in motion are densely clustered within smaller localized regions of fixation, with larger motions of gaze occurring between these clusters; each cluster corresponds to fixation at a new point. Figure 8(c) shows the measured horizontal and vertical positions during the measurement.
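Reconstructing the absolute trajectory of Fig. 8(b) from the frame-to-frame displacements is simply a running sum. As a sketch:

```python
import numpy as np

def absolute_trajectory(relative_xy):
    """Accumulate per-frame (dx, dy) displacements into the absolute
    trajectory relative to the first-frame position."""
    traj = np.cumsum(np.asarray(relative_xy, dtype=float), axis=0)
    return np.vstack([[0.0, 0.0], traj])  # start at the origin (first frame)
```

For N frame-to-frame displacements, the returned array holds N+1 positions, the first being the origin defined by the initial frame.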

We now discuss the analogous measured results for two other healthy male subjects (subject D, age: 27 and subject E, age: 22) and compare and contrast the results obtained using LS4, imaging system 2, and sensor 2. Once again, each subject was asked to rest his head against the two metal bars attached to the optical table and fixate his gaze on the target located slightly to the left of the normal forward direction of vision. The frequency results are again summarized in Table 1.

Figure 9(a) shows subject D’s detected eye motion with the black and dashed gray lines indicating motions in the horizontal (H) and vertical (V) directions, respectively. This is the absolute eye motion, relative to the first image. The spectrum of the dataset presented in Fig. 9(a) is shown in Fig. 9(b). The region where OMT should exist for a clinically normal healthy human is inset in the figure. In all cases of the detected eye motions, the spectra for horizontal (H) and vertical (V) motions are shown in black and dashed gray, respectively. In the horizontal spectrum, a dominant frequency peak with an amplitude of 17 dB/Hz occurs at ∼77 Hz, while two other peaks are present at 82 and 84 Hz. These three spectral peaks are in the region associated with OMT motions. In the vertical direction, a strong frequency component appears at 90 Hz with other spectral peaks at 75, 80, and 83 Hz. Once again, the locations of the peaks in the frequency range of interest are different in the two directions (H and V).

Fig. 9

Subject D: using LS4, imaging system 2, and sensor 2, both the vertical (V) and horizontal (H) eye motions (a) were obtained from subject D, a healthy 27-year-old male, while the corresponding spectra are shown in (b) with the region of interest for peak OMT frequencies shown in a cut-out box in (b).


Figure 10 was produced using the same datasets used to produce Fig. 9. Figure 10(a) shows the detected relative frame-to-frame displacements in x and y, i.e., dn¯ between each captured frame. A box is presented in Fig. 10(a) showing the minimum spatial displacement, i.e., Δxmag=708 nm, which can be measured using this system. Figure 10(b) shows the detected absolute motion relative to the first frame. As is visible from the detected motions in both directions, the eye could move freely and unrestrained; in the case of subject D, the eye was observed to move over 5 mm in both the x and y directions. The eye motion is shown again in Fig. 10(c) with positions plotted against time.

Fig. 10

Subject D: using LS4, imaging system 2, and sensor 2, both the vertical (V) and horizontal (H) eye motions from subject D were plotted against one another. (a) shows the detected relative (frame-to-frame) motion, (b) shows the continuous displacements relative to an initial position, and (c) shows the time-varying continuous motion in both directions.


Figure 11 shows subject E’s detected eye motion. The same convention is used: black lines denote horizontal (H) motions and dashed gray lines denote vertical (V) motions. In Fig. 11(a), as with Figs. 7(a) and 9(a), large-amplitude drift and microsaccadic motions are present, with an underlying low-amplitude noise-like OMT signal observable. The calculated frequency spectrum of this signal is presented in Fig. 11(b). Examining the frequency peaks in the horizontal direction, dominant peaks exist at 74 and 78 Hz as well as at a higher frequency of 95 Hz. In the vertical spectrum, dominant peaks are present at 73 and 82 Hz as well as at 90 and 94 Hz. As with subject D, there are no common spectral peaks at the same frequencies in the horizontal and vertical directions. We note that, as stated in Sec. 1, OMT dominant peak frequencies of up to and beyond 100 Hz have been reported.4 We draw attention to these higher-frequency components to emphasize that there appears to be a pattern of different spectral components for OMT in the horizontal and vertical directions.

Fig. 11

Subject E: using LS4, imaging system 2, and sensor 2, both the vertical (V) and horizontal (H) eye motions (a) were obtained from subject E, a healthy 22-year-old male, while the corresponding spectra are shown in (b) with the region of interest for peak OMT frequencies shown in a cut-out box in (b).


The same datasets used to produce Fig. 11 are used to produce Fig. 12. Figure 12(a) shows the detected relative frame-to-frame displacements in x and y, i.e., dn¯ between each captured frame, with displacements well spread out in both the vertical and horizontal directions. Again, a black box indicates the minimum spatial displacement that can be measured using the system, i.e., Δxmag=708 nm. Figure 12(b) presents the detected absolute motion in the horizontal and vertical directions relative to the first frame. As shown in Fig. 12(b), and as is the case with subject C in Fig. 8(b) and subject D in Fig. 10(b), subject E’s eye was able to move freely and unrestrained. In this case, the eye moved over a range of ∼4 mm in the horizontal direction and ∼5 mm in the vertical direction. This motion is further illustrated in Fig. 12(c).

Fig. 12

Subject E: using LS4, imaging system 2, and sensor 2, both the vertical (V) and horizontal (H) eye motions from subject E were plotted against one another. (a) shows the detected relative (frame-to-frame) motion, (b) shows the continuous displacements relative to an initial position, and (c) shows the time-varying continuous motion in both directions.


In order to provide evidence supporting the validity of our measured results, experiments were performed in an attempt to determine the noise present in the system. Rigidly held blocks of metal/plastic were used in place of the subject’s eye. A typical result is shown in Fig. 13. We note that this spectrum is significantly different from those found when measuring an eye. The power spectral values are randomly distributed, resulting in a zig-zag pattern over the expected range of OMT frequencies, and no clear maxima or minima (as observed in the case of eye measurements) are present. This indicates that the OMT results presented in this paper are not due to any frequency structure introduced by the measurement system itself or to ambient noise. While such a qualitative comparison is possible, it is extremely difficult to quantitatively compare the actual values in Fig. 13 to those appearing in previous results, e.g., Figs. 7(b), 9(b), and 11(b). The blocks of metal/plastic used have reflectivity properties different from those of the eye, arising, for example, from differences in curvature and roughness. Dead cows’ eyes have been used in the past in an attempt to more accurately simulate real eyes; however, this approach is also not without significant practical challenges.

Fig. 13

Using LS4, imaging system 2, and sensor 2, a typical detected horizontal spectrum from rigidly held blocks of metal/plastic is presented showing no clear spectral maxima or minima.


As noted, Table 1 contains a list of the peak frequencies measured in the 70 to 100 Hz spectral band for each subject. At the top of Table 1, the results using the contact method for subject A are presented. As noted, these results are not a true representation of the motion, owing to the inherent drawbacks of that measurement system mentioned earlier; for example, frequencies from the horizontal and vertical motions are not separated. The results of the far-field measurements, in the vertical and horizontal directions, are also presented in Table 1. These results indicate that different peak frequencies appear in the two directions, i.e., horizontal and vertical. As stated earlier, the temporal frequency resolutions of the results measured with sensor 1 and sensor 2 are 0.1875 and 0.1 Hz, respectively. Therefore, the locations and separations of the peaks are significant.

In examining these data, we recall that using the contact piezo system, the empirical dominant OMT frequency for healthy humans is 83.68 Hz with a standard deviation of ±5.78 Hz. However, higher OMT dominant frequencies have also been observed using such a system. Higher-frequency components have been observed for all the subjects measured using the far-field noncontact system described here. Most significantly, different components in the x (H) and y (V) directions are generally observed. A more complete and larger comparative study must be undertaken using a far-field noncontact system such as that used here to (1) determine the significance of the different peak frequency components in the vertical and horizontal directions for OMT, as well as (2) determine the frequency range of OMT in both the horizontal and vertical directions of fixational eye motion.

5.

Conclusion

In this paper, we review the significance of three involuntary fixational eye movements: drift, microsaccades, and OMT. A digital far-field technique to measure all three eye movements is proposed, inspired by the ingenious far-field system used by Higgins and Stultz at Eastman Kodak in 1953 (Ref. 35), which measured all three fixational motions along one direction. Significantly, in this paper, it has been demonstrated that all three components of fixational eye movements can be measured using a single noncontact apparatus, without the need for the mechanical loading of the extremely uncomfortable (but sensitive) contact probe apparatus.

An in vivo result from subject A using the piezoelectric contact method is presented for reference. Using this method, OMT motions are recorded in one direction, but drift and microsaccadic motions cannot be accurately and simultaneously measured, as they are filtered out by the presence of the probe. Thus, owing to the damping of the motions of the human eye, only a mechanically restrained version of eye movement can be measured. To overcome the inherent drawbacks of the contact methods, several different implementations of a noncontact system using a far-field sensor and imaging system are explored.

The first light source examined (LS1), an LED source, proved too powerful and uncomfortable for the subject. A second set of experiments, performed using a halogen lamp with fiber-optic delivery, proved unsuccessful owing to experimental difficulties. The mains-powered dc incandescent source (LS3), which produced light containing a rectified 100 Hz frequency component, was found to produce the most substantial results when used in conjunction with imaging system 1 and sensor 1. Sensor 1 had adequate temporal resolution, i.e., fs=3000 Hz, to measure the maximum OMT frequency. Increased spatial resolution from numerical interpolation was employed in an attempt to overcome the lack of availability of suitable imaging optics (imaging system 1) for use with sensor 1. Using this apparatus, a blink was detected along with all three involuntary eye movements in x and y for subject B.

In an attempt to overcome the limitations associated with the system just described, a number of changes to the measurement system were introduced. First, a battery-powered light source (LS4) was used to illuminate the eye, and the experiments were carried out without overhead lighting to reduce mains flicker. Increased optical magnification was achieved using imaging system 2, whose optical magnification is approximately nine times that of imaging system 1. A low-noise detector unit (sensor 2) was used to reduce possible shot noise. Furthermore, the subject was asked to rest his head against two rigid metal bars to reduce any effects introduced by gross head movements. In addition, an increased data capture duration of 10 s was employed. Using this improved system, measurements were performed on three healthy male subjects (subject C, subject D, and subject E) with an age range of 22 to 30 years.

Datasets were captured using the improved measurement system. In all cases, the subjects were asked to fix their gaze at a target and eye motions were recorded. Fixational eye motions were present in all three subjects tested. The measured OMT frequencies for each subject lie in or around the suggested range for healthy humans. However, the frequency components in the horizontal and vertical directions are different. To date, the authors are unaware of any other study in which OMT has been measured using a noncontact method in both the horizontal and vertical directions simultaneously. Clearly, a more complete and larger comparative study must be undertaken to confirm our results and to more fully analyze the implications of OMT having different peak frequency components in the vertical and horizontal directions.

The aim of this paper was to develop a robust, safe, and comfortable OMT measurement system. In this paper, we have introduced and for the first time reported in vivo measurement results using a far-field noncontact system. Furthermore, the system employs incoherent illumination to measure OMT. Preliminary results have been presented, showing in-plane movements of the human eye simultaneously in both the horizontal (left-right) and vertical (up-down) directions. Unlike the contact probe method, the subject’s eyelids were not taped open, nor was there any pressure applied against the sclera during measurement. Thus, the detected eye motions are not affected by mechanical loading.

We note that the undamped displacement data, extracted at a constant rate, i.e., the frame-to-frame displacement dn¯, can easily be processed to extract eye velocity and acceleration. Following this, the amount of energy or work associated with the fixational eye movements of drift, microsaccades, and OMT can be extracted. This might prove interesting in future work.
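One straightforward way to carry out such processing (a sketch only; central differences are one of several possible differentiation schemes) is:

```python
import numpy as np

def kinematics(displacement, fs):
    """Differentiate a uniformly sampled displacement trace twice (central
    differences) to obtain velocity and acceleration."""
    dt = 1.0 / fs
    velocity = np.gradient(displacement, dt)
    acceleration = np.gradient(velocity, dt)
    return velocity, acceleration
```

With velocity and acceleration in hand, kinetic-energy-like quantities for each class of fixational movement follow from the filtered traces.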

While the system proposed and implemented here is promising, additional improvements could be made to increase its accuracy in measuring OMT. Using a separate imaging camera system, movements of the subject’s forehead could be measured simultaneously and used to eliminate head movement from the detected eye movement signal. The subject’s pulse can also be measured in the far field using noncontact optical methods.45 With further enhancements to the optical system, such as the inclusion of an interferometric out-of-plane displacement measurement system, longitudinal motions of the eye could also be detected. Adding a second system to form a binocular arrangement, i.e., performing detection on both eyes simultaneously, would allow the horizontal and vertical motions of both eyes to be measured.

Acknowledgments

One of the authors (J. P. R.) currently holds a Government of Ireland Postdoctoral Fellowship in Science, Engineering and Technology. We acknowledge the support of the Irish Research Council through the Empower scheme, Enterprise Ireland and Science Foundation Ireland under the National Development Plan. We thank Professor Davis Coakley and the staff at Mercer’s Institute for Successful Aging for discussions on the clinical aspects of ocular microtremor (OMT). OMT contact measurements were performed on the author (J. P. R.) by Dr. N. Collins and Dr. M. Al-Kalbani.

References

1. 

H. von Helmholtz, Helmholtz’s Treatise on Physiological Optics, Optical Society of America, Dover, New York (1962). Google Scholar

2. 

F. H. Adler and R. Fliegelman, “The influence of fixation on visual acuity,” Arch. Opthmol., 12 475 –483 (1934). http://dx.doi.org/10.1001/archopht.1934.00830170013002 AROPAW 0003-9950 Google Scholar

3. 

H. B. Barlow, “Eye movements during fixation,” J. Physiol., 116 290 –306 (1952). http://dx.doi.org/10.1113/jphysiol.1952.sp004706 JPHYA7 0022-3751 Google Scholar

4. 

D. Coakley, Minute Eye Movement and Brainstem Function, CRC, Boca Raton, Florida (1983). Google Scholar

5. 

C. Bolger et al., “Dominant frequency content of ocular microtremor from normal subjects,” Vis. Res., 39 1911 –1915 (1999). http://dx.doi.org/10.1016/S0042-6989(98)00322-8 VISRAM 0042-6989 Google Scholar

6. 

G. Boyle, D. Coakley and J. F. Malone, “Interferometry for ocular microtremor measurement,” Appl. Opt., 40 167 –175 (2001). http://dx.doi.org/10.1364/AO.40.000167 APOPAI 0003-6935 Google Scholar

7. 

E. Kenny, D. Coakley and G. Boyle, “Non-contact in vivo measurement of ocular microtremor using laser speckle correlation metrology,” Physiol. Meas., 35 1229 –1243 (2014). http://dx.doi.org/10.1088/0967-3334/35/7/1229 PMEAE3 0967-3334 Google Scholar

8. 

M. Eizenman, R. C. Frecker and P. E. Hallett, “Precise noncontacting measurement of eye movements using the corneal reflex,” Vis. Res., 24 167 –174 (1984). http://dx.doi.org/10.1016/0042-6989(84)90103-2 VISRAM 0042-6989 Google Scholar

9. 

N. F. Sheahan, “Ocular microtremor: measurement technique and biophysical analysis,” Trinity College, (1991). Google Scholar

10. 

G. K. Hung, Models of Oculomotor Control, World Scientific Pub. Co. Inc., Singapore (2001). Google Scholar

11. 

D. Coakley and J. G. Thomas, “The ocular microtremor record as a potential procedure for establishing brain death,” J. Neurol. Sci, 31 199 –205 (1977). http://dx.doi.org/10.1016/0022-510X(77)90106-X JNSCAG 0022-510X Google Scholar

12. 

D. Coakley and J. G. Thomas, “The ocular microtremor as a prognosis of the unconscious patient,” Lancet, 309 512 –522 (1977). http://dx.doi.org/10.1016/S0140-6736(77)91374-5 LANCAO 0140-6736 Google Scholar

13. 

S. T. Bojanic, T. Simpson and C. Bolger, “Ocular microtremor: a tool for measuring depth of anaesthesia?,” Br. J. Anaesth., 86 519 –522 (2001). http://dx.doi.org/10.1093/bja/86.4.519 BJANAD 0007-0912 Google Scholar

14. 

L. G. Kevin, A. J. Cunningham and C. Bolger, “Comparison of ocular microtremor and bispectral index during sevoflurane anaesthesia,” Br. J. Anaesth., 89 551 –555 (2002). http://dx.doi.org/10.1093/bja/aef225 BJANAD 0007-0912 Google Scholar

15. 

M. Heaney et al., “Ocular microtremor during general anesthesia: results of a multicenter trial using automated signal analysis,” Anesth. Analg., 99 775 –780 (2004). http://dx.doi.org/10.1213/01.ANE.0000133145.98702.C0 AACRAT 0003-2999 Google Scholar

16. 

C. Bolger et al., “Ocular microtremor in patients with idiopathic Parkinson’s disease,” J. Neurol. Neurosurg. Psychiatry, 66 528 –531 (1999). http://dx.doi.org/10.1136/jnnp.66.4.528 JNNPAU 0022-3050 Google Scholar

17. 

C. Bolger et al., “Ocular microtremor (OMT): a new neurophysiological approach to multiple sclerosis,” J. Neurol. Neurosurg. Psychiatry, 68 639 –642 (2000). http://dx.doi.org/10.1136/jnnp.68.5.639 JNNPAU 0022-3050 Google Scholar

18. 

C. Bolger et al., “Effects of age on ocular microtremor activity,” J. Gerontol.: Med. Sci., 56A M386 –M390 (2001). http://dx.doi.org/10.1093/gerona/56.6.M386 1079-5006 Google Scholar

19. 

S. Martinez-Conde, S. L. Macknik and D. H. Hubel, “The role of fixational eye movements in visual perception,” Nat. Rev.: Neurosci., 5 229 –240 (2004). http://dx.doi.org/10.1038/nrn1348 NRNAAN 1471-0048 Google Scholar

20. 

G. W. Hebbard and E. Marg, “Physiological nystagmus in the cat,” J. Opt. Soc. Am., 50 151 –155 (1960). http://dx.doi.org/10.1364/JOSA.50.000151 JOSAAH 0030-3941 Google Scholar

21. 

A. R. Shankovich and J. G. Thomas, “Ocular microtremor: an index of motor unit activity and of the functional state of the brain stem,” J. Physiol., 238 36P (1974). JPHYA7 0022-3751 Google Scholar

22. 

V. Golda et al., “Oculomicrotremor and the level of vigilance,” Sb. Ved. Pr. Lek. Fak. Karlovy Univerzity Hradci Kralove, 24 (1), 77 –83 (1981). Google Scholar

23. 

F. Ratliff and L. A. Riggs, “Involuntary motions of the eye during fixation,” J. Exp. Psychol., 40 687 –701 (1950). http://dx.doi.org/10.1037/h0057754 JEPSAK 0022-1015 Google Scholar

24. 

D. H. Fender, “Torsional motions of the eyeball,” Br. J. Ophthalmol., 39 65 –72 (1955). http://dx.doi.org/10.1136/bjo.39.2.65 BJOPAL 0007-1161 Google Scholar

25. 

L. Matin, “Measurement of eye movements by contact lens techniques: analysis of measuring systems and some new methodology for three-dimensional recording,” J. Opt. Soc. Am., 54 1008 –1018 (1964). http://dx.doi.org/10.1364/JOSA.54.001008 JOSAAH 0030-3941 Google Scholar

26. 

J. Tabernero and P. Artal, “Lens oscillations in the human eye. Implications for post-saccadic suppression of vision,” PLoS One, 9 (4), e95764 (2014). http://dx.doi.org/10.1371/journal.pone.0095764 POLNCL 1932-6203 Google Scholar

27. 

S. B. Stevenson and A. Roorda, “Correcting for miniature eye movements in high resolution scanning laser ophthalmoscopy,” Proc. SPIE, 5688 145 –151 (2005). http://dx.doi.org/10.1117/12.591190 PSISDG 0277-786X Google Scholar

28. 

Q. Yang et al., “Closed-loop optical stabilization and digital image registration in adaptive optics scanning light ophthalmoscopy,” Biomed. Opt. Express, 5 3174 –3191 (2014). http://dx.doi.org/10.1364/BOE.5.003174 BOEICL 2156-7085 Google Scholar

29. 

C. Bolger et al., “High frequency eye tremor: reliability of measurement,” Clin. Phys. Physiol. Meas., 13 151 –159 (1992). http://dx.doi.org/10.1088/0143-0815/13/2/007 CPPMD5 0143-0815 Google Scholar

30. 

N. F. Sheahan et al., “Ocular microtremor measurement system: design and performance,” Med. Biol. Eng. Comput., 31 205 –212 (1993). http://dx.doi.org/10.1007/BF02458038 MBECDY 0140-0118 Google Scholar

31. 

M. B. McCamy et al., “Simultaneous recordings of ocular microtremor and microsaccades with a piezoelectric sensor and a video-oculography system,” PeerJ, 1 e14 (2013). http://dx.doi.org/10.7717/peerj.14 PEERDV 2167-8359 Google Scholar

32. 

J. P. Ryle et al., “Compact portable ocular microtremor sensor: design, development and calibration,” J. Biomed. Opt., 14 014021 (2009). http://dx.doi.org/10.1117/1.3083435 JBOPFO 1083-3668 Google Scholar

33. 

E. Kenny, D. Coakley and G. Boyle, “Non-contact measurement of ocular microtremor using laser speckle,” Proc. SPIE, 7715 771528 (2010). http://dx.doi.org/10.1117/12.854557 PSISDG 0277-786X Google Scholar

34. 

E. Kenny, D. Coakley and G. Boyle, “Ocular microtremor measurement using laser speckle metrology,” J. Biomed. Opt., 18 016010 (2013). http://dx.doi.org/10.1117/1.JBO.18.1.016010 JBOPFO 1083-3668 Google Scholar

35. 

G. C. Higgins and K. F. Stultz, “Frequency and amplitude of ocular tremor,” J. Opt. Soc. Am., 43 1136 –1140 (1953). http://dx.doi.org/10.1364/JOSA.43.001136 JOSAAH 0030-3941 Google Scholar

36. 

D. Mas et al., “Resolution limits to object tracking with subpixel accuracy,” Opt. Lett., 37 4877 –4879 (2012). http://dx.doi.org/10.1364/OL.37.004877 OPLEDP 0146-9592 Google Scholar

37. 

M. Sjödahl and L. R. Benckert, “Electronic speckle photography: analysis of an algorithm giving the displacement with subpixel accuracy,” Appl. Opt., 32 2278 –2284 (1993). http://dx.doi.org/10.1364/AO.32.002278 APOPAI 0003-6935 Google Scholar

38. 

M. Sjödahl, “Electronic speckle photography: increased accuracy by nonintegral pixel shifting,” Appl. Opt., 33 6667 –6673 (1994). http://dx.doi.org/10.1364/AO.33.006667 APOPAI 0003-6935 Google Scholar

39. 

T. Fricke-Begemann, “Three-dimensional deformation field measurement with digital speckle correlation,” Appl. Opt., 42 6783 –6796 (2003). http://dx.doi.org/10.1364/AO.42.006783 APOPAI 0003-6935 Google Scholar

40. 

The MathWorks Inc., “Matlab fast Fourier transform algorithm implementation,” http://www.mathworks.co.uk/help/matlab/ref/fft.html (July 2014). Google Scholar

41. 

J. W. Goodman, Introduction to Fourier Optics, 3rd ed., Roberts and Company, Greenwood Village, Colorado (2004). Google Scholar

42. 

P. Bing et al., “Performance of sub-pixel registration algorithms in digital image correlation,” Meas. Sci. Technol., 17 1615 –1621 (2006). http://dx.doi.org/10.1088/0957-0233/17/6/045 MSTCEP 0957-0233 Google Scholar

43. 

J. P. Ryle, “Optical engineering applications in biomedical and bioprocess engineering: ocular microtremor (OMT) sensor and digital in-line holographic microscopy (DIHM),” PhD Thesis, University College Dublin (2010). Google Scholar

44. 

R. Engbert and R. Kliegl, “Microsaccades uncover the orientation of covert attention,” Vis. Res., 43 1035 –1045 (2003). http://dx.doi.org/10.1016/S0042-6989(03)00084-1 VISRAM 0042-6989 Google Scholar

45. 

Y. Beiderman et al., “Remote estimation of blood pulse pressure via temporal tracking of reflected secondary speckle pattern,” J. Biomed. Opt., 15 (6), 061707 (2010). http://dx.doi.org/10.1117/1.3505008 JBOPFO 1083-3668 Google Scholar

Biography

James P. Ryle held an Irish Research Council postdoctoral fellowship at Maynooth University until the end of 2014. He is a visiting researcher at the Optical Engineering Research Group in UCD’s School of Electrical, Electronic and Communications Engineering. He is also a consulting engineer and provides optics and photonics advice to a number of companies. His primary research interests include opto-electronic systems design and development applied to ocular microtremor measurement, digital holographic microscopic imaging of industrial particle flows, and the development of light-based equine therapy systems and noncontact diagnostic platforms.

Brian Vohnsen is a senior lecturer in the School of Physics, University College Dublin, where he founded the Advanced Optical Imaging Group in 2008. His research interests are centered on biomedical optics and span from novel microscopy and adaptive optics to the biophotonic properties of the human eye. He chairs the Applications of Visual Science Technical Group of the Optical Society of America and is the current subcommittee chair for FiO Vision and Color.

John T. Sheridan has a BE in electronic engineering (NUIG), an MScEE from Georgia Tech, and a DPhil from Oxford University. He held an Alexander von Humboldt Foundation postdoctoral research fellowship at Erlangen-Nürnberg University and was then a visiting scientist at the European Commission Joint Research Centre in Italy. He was appointed as a permanent lecturer at Dublin Institute of Technology in 1997. He joined the School of Electrical, Electronic and Communications Engineering, UCD, in 2000, where he is currently professor of optical engineering. He is the cofounder and a director of Equilume Ltd. In 2014 he became an SPIE fellow. He has authored over 400 publications.

© 2015 Society of Photo-Optical Instrumentation Engineers (SPIE) 1083-3668/2015/$25.00
James P. Ryle, Brian Vohnsen, and John T. Sheridan "Simultaneous drift, microsaccades, and ocular microtremor measurement from a single noncontact far-field optical sensor," Journal of Biomedical Optics 20(2), 027004 (12 February 2015). https://doi.org/10.1117/1.JBO.20.2.027004
Published: 12 February 2015
KEYWORDS
Eye

Sensors

Imaging systems

Motion measurement

Head

Light sources

Sclera
