9 July 2013 Minimizing eyestrain on a liquid crystal display considering gaze direction and visual field of view
Optical Engineering, 52(7), 073104 (2013). doi:10.1117/1.OE.52.7.073104
Abstract
Recently, it has become necessary to evaluate the performance of display devices in terms of human factors. To meet this requirement, several studies have been conducted to measure the eyestrain of users watching display devices. However, these studies were limited in that they did not consider precise human visual information. Therefore, a new method is proposed for measuring the eyestrain of a user watching a liquid crystal display (LCD) that considers the user’s gaze direction and visual field of view. Our study differs from previous work in four ways. First, a user’s gaze position is estimated using an eyeglass-type eye-image capturing device. Second, we propose a new eye foveation model based on a wavelet transform, considering the gaze position and the gaze detection error of a user. Third, three video adjustment factors—variance of hue (VH), edge, and motion information—are extracted from the displayed images to which the eye foveation models are applied. Fourth, the relationship between eyestrain and the three video adjustment factors is investigated. Experimental results show that a decrease in the VH value of a display induces a decrease in eyestrain. In addition, increased edge and motion components induce a reduction in eyestrain.
Lee, Heo, Lee, and Park: Minimizing eyestrain on a liquid crystal display considering gaze direction and visual field of view

1. Introduction

Currently, various display devices, such as the plasma display panel (PDP), liquid crystal display (LCD), light-emitting diode, active-matrix organic light-emitting diode, and stereoscopic TV, are being manufactured. The use of these display devices is becoming increasingly widespread, with the devices being rapidly adopted for laptop computers, mobile phones, high-definition TV (HD TV), and so on. Many manufacturers and consumers are interested in the attributes of these display devices, including their field of view, spatial resolution, response speed, and degree of motion blur. In addition to these kinds of quantitative characteristics, consumers expect good display capability in terms of human factors.

Researchers have previously measured the eyestrain of users watching display devices.1–8 Some of these studies compared the levels of eyestrain caused by watching LCD and PDP devices based on the change in pupil size, eye blinking, and subjective tests.1–3 Other studies investigated the relationships between the eyestrain caused by an LCD device and video factors such as brightness, contrast, saturation, hue, edge difference, and scene changes.4,5 In addition, the eyestrain caused by a stereoscopic display was examined using subjective measurements, optometric instrument-based measurements, optometric clinical measurements, and brain activity measurements.6,7 In previous research, the eyestrain caused by two- and three-dimensional (2-D and 3-D) displays was compared using the average blinking rate (BR).8 However, most previous studies did not consider human visual information, such as the gaze position and the visual field of view, when estimating eyestrain. For instance, Lee and Park measured eyestrain on the basis of the change in pupil size in relation to the changes in four adjustment factors: brightness, contrast, saturation, and hue.5 However, each factor was calculated from the whole image in the display without considering the influence of the human gaze position. Other factors, such as edge difference and scene change, were also calculated from the whole image in the display.4 In other words, these studies were conducted under the assumption that every region in a given image on the display was perceived equally by the subject. To overcome this problem, a new eye foveation model is proposed here that considers a user’s gaze position and the error of gaze detection. Three video adjustment factors—variance of hue (VH), edge, and motion information—are extracted from the successive images in the displays to which these eye foveation models are applied.

This article is organized as follows. In Sec. 2, the proposed device for gaze tracking and eye response measurement and the methods of analysis are presented. In Sec. 3, the methods for extracting video features, considering the gaze position and the foveation-based visual field of view, are explained. The experimental setup and results are presented in Sec. 4. Finally, Sec. 5 presents the conclusion of this article and the plans for future work.

2. Proposed Device and Analysis Methods

2.1. Device for Measuring Gaze Position and Eye Response

Figure 1 shows the proposed gaze tracking and eye response measurement device.8–11 The eye-capturing camera is attached to an eyeglass frame near the lower part of one eye, as shown in Fig. 1. The camera is a small web camera with a universal serial bus (USB) port that captures images at a speed of 15 frames/s. The spatial resolution of the captured image is 640×480 pixels. A zoom lens is used to capture magnified images of the eye. To screen out visible light, a near-infrared (NIR) passing filter is attached to the camera lens.8–11

Fig. 1

Eyeglass-type eye-image capturing device.


Figure 2 shows an example of the experimental setup. Four NIR illuminators of 850 nm each are attached to an LCD display.8–11 They do not affect the user’s vision because NIR light of 850 nm does not dazzle the user’s eye. The four NIR illuminators produce four corneal specular reflections, as shown in Fig. 3, which represent the rectangular area of the display because the illuminators are attached to its four corners.8,9

Fig. 2

Example of experimental setup of four near-infrared (NIR) illuminators attached to the corners of the liquid crystal display (LCD).


Fig. 3

Example of four specular reflections and detection results.


2.2. Gaze Tracking Method

As a user-dependent calibration, each user first gazes at a central position on the display; this is required to compensate for the angle kappa, the angular offset between the visual and pupillary axes.9,11 Using the captured eye image, the pupil center is detected by circular edge detection, local binarization, component labeling, size filtering, filling of the specular reflection area, and calculation of the geometric center of the remaining black pixels.9–11 Figure 3 shows the four specular reflections of the four NIR illuminators attached to the corners of the LCD screen. These reflections are located by binarization, component labeling, and size filtering.9 The four specular reflections represent the rectangular area of the display. Therefore, on the basis of the detected pupil center and the four specular reflections, the user’s gaze position on the display is calculated by the geometric transform between the rectangle formed by the four reflections and the rectangle of the display.9,11
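The final mapping step can be sketched as follows: a homography estimated from the quadrilateral of the four corneal reflections to the display rectangle, applied to the detected pupil center. This is an illustrative Python sketch, not the authors' implementation; the direct-linear-transform solver, the point coordinates, and the display resolution are assumptions.

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography mapping four src points to four dst points
    (direct linear transform with h33 fixed to 1, solved as an 8x8 system)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def gaze_on_display(pupil_center, reflections, display_w, display_h):
    """Map the pupil center into display coordinates using the rectangle
    spanned by the four corneal specular reflections (listed top-left,
    top-right, bottom-right, bottom-left)."""
    corners = [(0, 0), (display_w, 0), (display_w, display_h), (0, display_h)]
    H = homography(reflections, corners)
    p = H @ np.array([pupil_center[0], pupil_center[1], 1.0])
    return p[0] / p[2], p[1] / p[2]
```

By construction, a pupil center coinciding with one of the reflections maps exactly to the corresponding display corner.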

2.3. Eye Response Measurement

In this research, the average eye BR is used to measure eyestrain. Previous research12,13 observed that BR increases as a function of time on task. On this basis, later studies measured eyestrain with more frequent blinking corresponding to greater eyestrain.2,4 The average BR is calculated in a time window of 60 s; the time window is moved with an overlap of 50 s (i.e., a step of 10 s).
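The windowed BR computation can be sketched in Python as follows. This is an illustrative sketch that assumes blink events are already available as timestamps in seconds; blink detection itself is outside its scope.

```python
def average_blink_rate(blink_times, total_s, win_s=60, overlap_s=50):
    """Average blink rate (blinks/min) in win_s-second windows moved with
    overlap_s seconds of overlap (i.e., a step of win_s - overlap_s), from
    a list of blink timestamps in seconds."""
    step = win_s - overlap_s
    rates, start = [], 0.0
    while start + win_s <= total_s:
        n = sum(start <= t < start + win_s for t in blink_times)
        rates.append(n * 60.0 / win_s)  # convert count per window to blinks/min
        start += step
    return rates
```

For example, blinks every 2 s over a 120-s recording yield seven overlapping windows, each with a rate of 30 blinks/min.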

3. Extraction of Video Features by Considering Gaze Position and Visual Field of View

3.1. Contrast Sensitivity Model Based on Foveation

To measure visual sensitivity according to the gaze position and angular offset, it is necessary to express sensitivity as a function of retinal eccentricity. For this, previous research on visual sensitivity is referenced, which showed that visual sensitivity decreases as the distance from the gaze position increases. The algorithm for calculating this sensitivity, which has been employed to improve image and video coding efficiency, is called foveation.14–17 In this research, eyestrain is measured by calculating a user’s gaze position and by determining the user’s visual information on the basis of foveation. Humans perceive a dramatic decrease in visual sensitivity in areas away from the point of gaze. In detail, the point of gaze is perceived with high resolution, but the perceived resolution decreases as the distance from this point increases. Accordingly, a foveation (visual field of view) model based on the gaze information is defined. The foveation is determined using the contrast threshold (CT) formula, which is based on human contrast sensitivity (CS) data measured as a function of spatial frequency and retinal eccentricity.14–16

(1)

CT(f,e) = CT_0 \exp\left(\alpha f \, \frac{e + e_2}{e_2}\right),
where f is the spatial frequency (cycles per degree), e is the retinal eccentricity (degrees), CT0 is the minimum contrast threshold, α is the spatial frequency decay constant, e2 is the half-resolution eccentricity, and CT is the visible contrast threshold.14–16 The optimal fitting parameters are determined on the basis of previous research (α is 0.106, e2 is 2.3, and CT0 is 1/64).14,16 The CS is defined as the reciprocal of the CT:14,16

(2)

CS(f,e) = \frac{1}{CT(f,e)}.
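With the stated fitting parameters, Eqs. (1) and (2) can be written directly as a small Python sketch (illustrative only; the parameter values are the ones quoted from Refs. 14 and 16):

```python
import math

# Foveation contrast-threshold model (Eq. 1) with the fitting parameters
# reported in the text: alpha = 0.106, e2 = 2.3, CT0 = 1/64.
ALPHA, E2, CT0 = 0.106, 2.3, 1.0 / 64.0

def contrast_threshold(f, e):
    """CT(f, e): visible contrast threshold at spatial frequency f (cyc/deg)
    and retinal eccentricity e (deg)."""
    return CT0 * math.exp(ALPHA * f * (e + E2) / E2)

def contrast_sensitivity(f, e):
    """CS(f, e) = 1 / CT(f, e) (Eq. 2)."""
    return 1.0 / contrast_threshold(f, e)
```

Note that CT grows with eccentricity, so sensitivity at a given frequency is highest at the point of gaze (e = 0).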

To apply these models to an image, the eccentricity must be calculated for any point x = (x_1, x_2)^T (pixels) in the image. Because the user’s gaze position is the foveation point x^f = (x_1^f, x_2^f), the distance from x to x^f is given by the following equation:14,16

(3)

d(x) = \|x - x^f\|_2.

Further, the eccentricity is obtained as follows:14,16

(4)

e(v,x) = \tan^{-1}\left(\frac{d(x)}{Nv}\right),
where N is the width of the image and v is the viewing distance (measured in image widths) from the eye to the image plane.14,16 The cut-off frequency fc, beyond which frequency components are imperceptible, can be obtained by setting CT to 1 (the maximum possible contrast) in Eq. (1):14,16

(5)

f_c(e) = \frac{e_2 \ln(1/CT_0)}{\alpha (e + e_2)}.

According to the Nyquist–Shannon sampling theorem, the highest frequency that the display can present (the display Nyquist frequency) is as follows:14,16

(6)

f_d(v) = \frac{\pi N v}{360}.

Combining Eqs. (5) and (6), the final cut-off frequency fm is obtained as follows:14,16

(7)

f_m(v,x) = \min\{f_c[e(v,x)],\; f_d(v)\}.
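Equations (3)–(7) combine into a per-pixel cut-off frequency computation, sketched below in Python (illustrative only, reusing the fitting parameters quoted above; pixel coordinates and the viewing distance in image widths are assumptions):

```python
import math

# Fitting parameters quoted in the text (Refs. 14 and 16).
ALPHA, E2, CT0 = 0.106, 2.3, 1.0 / 64.0

def cutoff_frequency(x, x_f, N, v):
    """Final cut-off frequency f_m (cyc/deg) at pixel x for foveation point
    x_f, image width N (pixels), and viewing distance v (in image widths)."""
    d = math.hypot(x[0] - x_f[0], x[1] - x_f[1])        # Eq. (3): distance
    e = math.degrees(math.atan2(d, N * v))              # Eq. (4): eccentricity
    f_c = E2 * math.log(1.0 / CT0) / (ALPHA * (e + E2)) # Eq. (5): perceptual cut-off
    f_d = math.pi * N * v / 360.0                       # Eq. (6): display Nyquist
    return min(f_c, f_d)                                # Eq. (7)
```

Near the foveation point, f_m is clamped by the display Nyquist frequency f_d; far from it, the perceptual cut-off f_c dominates and f_m falls off with eccentricity.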

Finally, the foveation-based error sensitivity is defined by the following equation14,16 and illustrated in Fig. 4:

(8)

S_f(v,f,x) = \begin{cases} \dfrac{CS[f, e(v,x)]}{CS(f, 0)}, & \text{if } f \le f_m(v,x) \\ 0, & \text{otherwise.} \end{cases}

Fig. 4

Foveation-based contrast sensitivity.


In Fig. 4, a brighter region represents higher contrast sensitivity.

3.2. New Foveated Weighting Model in the Wavelet Domain by Considering Gaze Detection Error

A foveation-based visual sensitivity model in the wavelet domain has been proposed previously as follows:14,16

(9)

S(v,x) = [S_w(\lambda,\theta)]^{\beta_1} \cdot \left\{S_f\left[v, \frac{f_d(v)}{2^{\lambda+1}}, d_{\lambda,\theta}(x)\right]\right\}^{\beta_2}, \quad x \in B_{\lambda,\theta},
where λ is the wavelet decomposition level and θ represents the LL, LH, HH, or HL subband of the wavelet transform. β1 and β2 are the parameters that control the magnitudes of Sw and Sf, respectively.14,16 The LL subband has low-frequency components in both the horizontal and vertical directions. The HH subband includes high-frequency components in the horizontal and vertical directions. The HL subband comprises high-frequency components in the horizontal direction and low-frequency components in the vertical direction. Finally, the LH subband contains low-frequency components in the horizontal direction and high-frequency components in the vertical direction.18 Sw(λ, θ) is the error sensitivity in subband (λ, θ); the method for calculating Sw(λ, θ) is shown in Refs. 14 and 16. For a given wavelet coefficient at position x ∈ B_{λ,θ} [where B_{λ,θ} is the set of wavelet coefficient positions in subband (λ, θ)], the distance from the foveation point in the spatial domain is given in Refs. 14 and 16:

(10)

d_{\lambda,\theta}(x) = 2^{\lambda} \|x - x^f_{\lambda,\theta}\|_2 \quad \text{for } x \in B_{\lambda,\theta}.

The explanations given in Eqs. (1)–(10) represent the conventional foveation model of Refs. 14 and 16, but they do not consider the errors in gaze detection when calculating the foveation model. In general, there inevitably exists an error between the ground-truth gaze position and the calculated gaze position.9–11 However, the foveation-based visual sensitivity model of Eqs. (9) and (10) and Fig. 4 does not consider this error.

Therefore, we propose an eye foveation model that considers both the gaze position and the error in detecting it, as follows. Since N is the width of an image and v is the viewing distance (measured in image widths) from the eye to the image plane,14,16 Nv is the calculated Z distance from the user’s eye to the image plane. Assuming that ε is the accuracy of gaze tracking (degrees), the consequent gaze detection error on the image plane is Nv·tan ε. Within this error radius (Nv·tan ε), all positions x should be treated as coinciding with the foveation (user’s gaze) position x^f_{λ,θ}, so d_{λ,θ}(x) of Eq. (10) becomes 0. Consequently, Eq. (10) is rewritten as Eq. (11), considering the gaze detection error:

(11)

d_{\lambda,\theta}(x) = \begin{cases} 0, & \text{if } 2^{\lambda}\|x - x^f_{\lambda,\theta}\|_2 < Nv\tan\varepsilon \\ 2^{\lambda}\|x - x^f_{\lambda,\theta}\|_2 - Nv\tan\varepsilon, & \text{otherwise.} \end{cases}
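The modified distance of Eq. (11) can be sketched in Python as follows (illustrative only; the error-radius subtraction in the second branch and the example parameter values are assumptions consistent with the text):

```python
import math

def foveation_distance(x, x_f, level, N, v, eps_deg):
    """Wavelet-domain distance to the foveation point, modified per Eq. (11):
    positions within the gaze-detection error radius N*v*tan(eps) are treated
    as the foveation point itself (distance 0), and the radius is subtracted
    elsewhere. eps_deg is the gaze-tracking accuracy in degrees."""
    err = N * v * math.tan(math.radians(eps_deg))
    d = (2 ** level) * math.hypot(x[0] - x_f[0], x[1] - x_f[1])
    return 0.0 if d < err else d - err
```

Compared with Eq. (10), this flattens the sensitivity mask within the error radius, which is what produces the difference between Fig. 5(a) and Fig. 5(b).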

Based on Eqs. (9) and (11), the foveation-based contrast sensitivity mask of the single foveation point (gaze point) in the wavelet domain is found as shown in Fig. 5(b). The four-level discrete wavelet transform (DWT) based on Daubechies wavelet bases is used. Brightness indicates the importance of the wavelet coefficients. Higher-contrast sensitivity is shown as a brighter gray level.

Fig. 5

Foveation-based contrast sensitivity mask in the wavelet domain. (a) Sensitivity mask not considering the gaze tracking error (Refs. 14 and 16). (b) Sensitivity mask considering the gaze tracking error (proposed method).


3.3. Extracting Video Features Considering the Eye Foveation Model

In this research, eyestrain is measured in relation to the changes in the three adjustment features of video: VH, edge, and motion information. To extract features considering gaze position and foveation, foveated images are obtained as follows.

The original color image is first separated into three images of red, green, and blue channels. These three images are decomposed using a DWT based on Daubechies wavelet bases.

The decomposed three images are multiplied by the foveation-based contrast sensitivity mask of Fig. 5(b). From these three foveated images, three images of the red, green, and blue channels in the spatial domain are obtained by the inverse procedure of DWT.18 With these three images in the spatial domain, the hue image is obtained based on the conversion matrix of RGB to hue, saturation, and intensity (HSI),18 and the VH is obtained as the first feature.

To obtain the motion component (MC) and edge component (EC), the original RGB color image is first transformed into a gray one, and the gray image is decomposed using a DWT based on Daubechies wavelet bases. The decomposed (gray) image is multiplied by the foveation-based contrast sensitivity mask of Fig. 5(b). Figure 6 shows an example of the original gray image and the corresponding foveated one by the proposed method. From the foveated image, the gray image in the spatial domain is obtained by the inverse procedure of DWT.18 The MC and EC are extracted as the second and third features, respectively, from the gray image in the spatial domain. The average magnitude calculated by the Canny edge detector in a gray image is determined as the value of EC. The average pixel difference between successive gray images is determined as the value of MC.
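The three features can be sketched in Python as follows. This is an illustrative stand-in, not the authors' pipeline: it omits the DWT masking step described above, and a mean gradient magnitude replaces the paper's Canny-based edge magnitude.

```python
import numpy as np

def hue_image(rgb):
    """Hue (degrees) from an RGB float image via the standard RGB-to-HSI
    conversion."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    return np.where(b <= g, theta, 360.0 - theta)

def video_features(gray_prev, gray_curr, rgb_curr):
    """The three adjustment features: variance of hue (VH), motion component
    (MC, mean absolute difference between successive gray frames), and edge
    component (EC, here a mean gradient magnitude standing in for the
    paper's Canny-based magnitude)."""
    vh = float(np.var(hue_image(rgb_curr)))
    mc = float(np.mean(np.abs(gray_curr - gray_prev)))
    gy, gx = np.gradient(gray_curr)
    ec = float(np.mean(np.hypot(gx, gy)))
    return vh, mc, ec
```

For a uniformly colored frame, VH and EC are (near) zero, while MC reflects only the brightness change from the previous frame.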

Fig. 6

Example of foveated image. (a) Original image. (b) Foveated image of (a) obtained by the proposed method considering the gaze detection error (user’s foveation position is a white crosshair).


The VH is averaged in a time window of 60 s, and the time window is moved with an overlap of 50 s, as in the method for measuring BR. The MC and EC are also obtained by the same method. Using the calculated features of the foveated images, the eyestrain based on the average BR (Sec. 2.3) is measured in relation to changes in the three adjustment features of video: VH, MC, and EC.

Figure 7 shows some examples of features extracted from video images captured by a commercial web camera. Figure 7(a) shows an original image. Figure 7(b), 7(c), and 7(d) show the hue image, motion image, and edge image obtained from the original one, respectively. The measured feature values of VH, MC, and EC of Fig. 7(b), 7(c), and 7(d) are 16495.05, 24.28, and 30.84, respectively.

Fig. 7

Examples of extracted features in a video image. (a) Original image. (b) Hue image. (c) Motion image. (d) Edge image. (e) Original gray image including the foveation point as a white crosshair. (f) Hue image after applying the conventional foveated model (Refs. 14 and 16). (g) Motion image after applying the conventional foveated model (Refs. 14 and 16). (h) Edge image after applying the conventional foveated model (Refs. 14 and 16). (i) Hue image after applying the proposed foveated model. (j) Motion image after applying the proposed foveated model. (k) Edge image after applying the proposed foveated model.


Figure 7(e) shows an original gray image including the foveation point as a white crosshair. Figure 7(f), 7(g), and 7(h) show, respectively, the hue image, motion image, and edge image obtained from the foveated image by the conventional foveated model.14,16 The measured feature values of VH, MC, and EC of Fig. 7(f), 7(g), and 7(h) are 16879.22, 14.43, and 9, respectively.

Figure 7(i), 7(j), and 7(k) show, respectively, the hue image, motion image, and edge image obtained from the foveated image by the proposed foveated model. The measured feature values of VH, MC, and EC of Fig. 7(i), 7(j), and 7(k) are 16858.78, 15.31, and 11.15, respectively, which differ from those determined by the previous method,14,16 which does not consider the gaze tracking error.

To measure eyestrain in this research, a commercial 19-in LCD monitor and a commercial movie file were used. The environmental lighting condition was maintained without any external illumination. The temperature and humidity were kept constant, and there was no vibration or bad odor that could affect the experiments. Each subject watched the movie for 25 min 30 s. The data of eye response were collected from 24 subjects [average age of 26.54 (standard deviation: 2.24); minimum and maximum ages were 23 and 31, respectively]. To remove the dependency of watching distance (from the user’s eye to the monitor) while considering the actual cases of watching distances, the data of 12 subjects were obtained at a watching distance of 60 cm, and the data of the remaining 12 subjects were collected at a distance of 90 cm.

4. Experimental Results

As mentioned in Sec. 2.3, previous research12,13 observed that BR increases as a function of time on task. Based on this research, previous studies measured eyestrain with more frequent blinking corresponding to greater eyestrain.2,4 Accordingly, the eyestrain based on BR was measured according to the extracted features (VH, MC, and EC). To validate the relationship between these three features and eye responses, a correlation analysis was performed. In this analysis, the correlation coefficient ranges from −1 to +1. A correlation coefficient close to +1 indicates that the two variables are positively related; a coefficient close to −1 indicates that they are negatively related; a coefficient close to 0 indicates no relationship between the variables. Table 1 shows the relationship between these three features and eye responses; the results are calculated after removing outliers based on a 95% confidence interval. Because the scales of VH, MC, EC, and BR differ, the values are normalized using the minimum–maximum scaling method.19
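The normalization and correlation analysis can be sketched as follows: a generic min–max scaler and Pearson correlation coefficient in Python (illustrative only; not the authors' code or data).

```python
import numpy as np

def min_max(x):
    """Min-max scaling to [0, 1] (the minimum-maximum scaling of Ref. 19)."""
    x = np.asarray(x, float)
    return (x - x.min()) / (x.max() - x.min())

def pearson_r(x, y):
    """Pearson correlation coefficient between two feature series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))
```

Applied to the normalized windowed BR series and a feature series (VH, MC, or EC), this yields the per-subject correlation coefficients of the kind listed in Table 2.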

Table 1

Relationship between three adjustment features and eye responses (average value of experimental data from 24 subjects).

Eye response     Adjustment feature         Average correlation coefficient   Average gradient   Average R²
Blinking rate    Variance of hue (VH)        0.4115                            0.2644            0.2310
                 Motion component (MC)      −0.4059                           −0.3273            0.2095
                 Edge component (EC)        −0.5078                           −0.3387            0.3455

As listed in Table 1, the average correlation coefficients between the adjustment factors (VH, MC, and EC) and BR were calculated as 0.4115, −0.4059, and −0.5078, respectively. Based on the average correlation coefficients in Table 1, we found that VH is positively related to eyestrain, whereas MC and EC are negatively related to eyestrain. Therefore, an increase in VH causes an increase in eyestrain, and an increase in MC or EC reduces eyestrain.

The average gradient is the slope of the line fitted by linear regression, and it represents the rate of change of VH, MC, or EC with respect to that of BR. Linear regressions were also performed to analyze the change in eye response in relation to the change in the adjustment factors in Table 1. On the basis of the resulting average gradients, it is observed that if MC or EC increases, eyestrain decreases; in contrast, if VH increases, eyestrain also increases. The R² values between the three adjustment factors and BR were calculated as 0.2310, 0.2095, and 0.3455, respectively. In Tables 1 and 2 and Fig. 8, R² refers to the goodness of fit of the regression.20 In general, greater values of R² represent a better fit. Figure 8 shows examples of 2-D scatter plots for one subject, where each dot denotes the average BR and its corresponding adjustment factor (VH, MC, or EC).
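The gradient and R² values of Tables 1 and 2 correspond to a simple least-squares line fit, which can be sketched as follows (illustrative only; note that for simple linear regression R² equals the squared correlation coefficient, which is consistent with the values in Table 2):

```python
import numpy as np

def linear_fit(x, y):
    """Least-squares line y = a*x + b; returns (gradient a, intercept b, R^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a, b = np.polyfit(x, y, 1)                     # slope and intercept
    resid = y - (a * x + b)
    ss_res = float(resid @ resid)                  # residual sum of squares
    ss_tot = float(((y - y.mean()) ** 2).sum())    # total sum of squares
    return float(a), float(b), 1.0 - ss_res / ss_tot
```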

Table 2

Experimental values from 24 subjects.

Subject    Correlation coefficient            Gradient                           R²
           VH        MC        EC             VH        MC        EC             VH       MC       EC
1           0.4836   −0.8010   −0.7849        0.3681   −0.5710   −0.6316        0.2562   0.6417   0.6160
2           0.5241   −0.6158   −0.4413        0.3416   −0.4193   −0.2585        0.2747   0.3792   0.1948
3           0.2747   −0.4609   −0.7769        0.1505   −0.3113   −0.5347        0.0754   0.2125   0.6036
4          −0.0026   −0.2598   −0.3763       −0.0012   −0.1676   −0.1502        0.0000   0.0675   0.1494
5           0.1391   −0.0948   −0.1540        0.0715   −0.0639   −0.0708        0.0193   0.0090   0.0237
6           0.6222   −0.5752   −0.5362        0.3079   −0.4269   −0.2182        0.3872   0.3309   0.2875
7           0.6222   −0.3634   −0.7617        0.4314   −0.3473   −0.5620        0.3871   0.1320   0.5802
8           0.5853   −0.4531   −0.6955        0.4635   −0.3758   −0.5701        0.3425   0.2053   0.4837
9           0.5718   −0.3214   −0.6862        0.3664   −0.2534   −0.4320        0.3269   0.1033   0.4709
10          0.5869   −0.0578    0.3402        0.3127   −0.0395    0.1227        0.3445   0.0033   0.1157
11          0.6498   −0.1503   −0.5136        0.4904   −0.1196   −0.3149        0.4222   0.0226   0.2637
12          0.2062   −0.5521   −0.6954        0.1202   −0.6020   −0.4795        0.0425   0.3007   0.4836
13          0.0082   −0.3865   −0.0667        0.0046   −0.3482   −0.0340        0.0000   0.1494   0.0045
14          0.5786   −0.2410   −0.7609        0.3123   −0.1828   −0.5344        0.3348   0.0581   0.5790
15          0.1889   −0.6521   −0.6976        0.0867   −0.4613   −0.4145        0.0357   0.4253   0.4866
16          0.4028   −0.3672   −0.2377        0.2242   −0.2949   −0.1212        0.1623   0.1349   0.0565
17          0.1578   −0.0129   −0.4653        0.0861   −0.0092   −0.2973        0.0249   0.0002   0.2165
18          0.5966   −0.4043   −0.1059        0.3495   −0.3092   −0.0497        0.3560   0.1635   0.0112
19          0.3686   −0.4833   −0.8552        0.2246   −0.4480   −0.6739        0.1359   0.2336   0.7314
20          0.4048   −0.6103   −0.7680        0.2756   −0.5559   −0.5675        0.1639   0.3725   0.5818
21          0.6952   −0.7459   −0.8274        0.5043   −0.5424   −0.5942        0.4833   0.5564   0.6848
22          0.7334   −0.1463   −0.6659        0.4709   −0.1314   −0.3675        0.5378   0.0214   0.4434
23          0.6366   −0.4010   −0.3906        0.4698   −0.3050   −0.2327        0.4052   0.1608   0.1526
24         −0.1577   −0.5857   −0.2646       −0.0866   −0.5689   −0.1415        0.0249   0.3430   0.0700

Fig. 8

Graph and linear regression results for one subject. (a) Relationship between blinking rate (BR) and variance of Hue (VH). (b) Relationship between BR and motion component (MC). (c) Relationship between BR and edge component (EC).


Because the y-intercepts of the fitted lines (the points where the fitted lines intersect the y-axis) and the degrees of distribution of the data differ for each of the 24 subjects, it is difficult to obtain a meaningful result from the average of all subjects. Instead, we include both the average results and the individual results of the 24 subjects in Tables 1 and 2, respectively.

Figure 9 shows examples of gaze detection results. The circles represent the reference points at which each subject should look, and the crosshairs show the gaze points calculated by our gaze detection algorithm (explained in Sec. 2.2). Five subjects looked at the nine reference points five times each, and each crosshair shows the average point of the five trials for each subject. We measured the gaze detection error as the angle between the vector to the reference point and the vector to the calculated gaze position. The gaze detection error between the reference and gaze points was about 1.12 deg. As seen in Fig. 9, the reference points differ from the calculated gaze points. In other words, the gaze error for each subject can occur randomly inside a circle whose radius corresponds to 1.12 deg, and we consider this circle when generating the eye foveation model. Therefore, the eye foveation model without this gaze detection error, as shown in Fig. 5(a), is different from the proposed eye foveation model, which considers the gaze detection error, as shown in Fig. 5(b).

Fig. 9

Examples of gaze detection results (the circles represent the reference points at which each user should look; the crosshairs show the positions that are calculated by our gaze detection algorithm).


5. Conclusion

This research introduced a new eyestrain measurement method considering an eye foveation model. On the basis of this measurement, it was confirmed that a stable relationship exists between the eyestrain and the three adjustment factors—color information, edge, and motion information. Experimental results showed that a greater degree of VH induced higher eyestrain. On the contrary, a greater degree of the EC and MC induced relatively lower eyestrain. With the recent developments in television technology, the smart TV, which includes a built-in camera, has become widespread. On the basis of the results of this research, an intelligent display can be expected that has the functionality of reducing the user’s eyestrain by decreasing the VH or increasing the edge and motion information of a video based on the eye response measured by the built-in camera.

In future works, the relationship between eyestrain and video factors in various kinds of displays, such as 3-D stereoscopic or holographic displays, would be researched on the basis of gaze detection and the proposed foveation model.

Acknowledgments

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (Grant No. 2012R1A1A2038666).

References

1. M. Takahashi, “LCD vs. PDP picture quality status and the task of FPD TVs,” in Proc. Korean Display Conf., COEX, Seoul, South Korea (2006).

2. E. C. Lee et al., “Measuring the degree of eyestrain caused by watching LCD and PDP devices,” Int. J. Ind. Ergon. 39(5), 798–806 (2009). http://dx.doi.org/10.1016/j.ergon.2009.02.008

3. A. Okada, “Physiological measurement of visual fatigue in the viewers of large flat panel display,” in Proc. Korean Display Conf., COEX, Seoul, South Korea (2006).

4. E. C. Lee et al., “Minimizing eyestrain on LCD TV based on edge difference and scene change,” IEEE Trans. Consum. Electron. 55(4), 2294–2300 (2009). http://dx.doi.org/10.1109/TCE.2009.5373801

5. E. C. Lee and K. R. Park, “Measuring eyestrain from LCD TV according to adjustment factors of image,” IEEE Trans. Consum. Electron. 55(3), 1447–1452 (2009). http://dx.doi.org/10.1109/TCE.2009.5278012

6. M. Lambooij et al., “Visual discomfort and visual fatigue of stereoscopic displays: a review,” J. Imaging Sci. Technol. 53(3), 030201 (2009). http://dx.doi.org/10.2352/J.ImagingSci.Technol.2009.53.3.030201

7. M. Menozzi, C. Kornfeld, and A. Polti, “Visual stress, and performance using autostereoscopic displays,” in Innovationen für Arbeit und Organisation, GfA Press, Dortmund (2006).

8. E. C. Lee, H. Heo, and K. R. Park, “The comparative measurements of eyestrain caused by 2D and 3D displays,” IEEE Trans. Consum. Electron. 56(3), 1677–1683 (2010). http://dx.doi.org/10.1109/TCE.2010.5606312

9. J.-S. Choi et al., “Enhanced perception of user intention by combining EEG and gaze-tracking for brain-computer interfaces (BCIs),” Sensors 13(3), 3454–3472 (2013). http://dx.doi.org/10.3390/s130303454

10. J. W. Lee et al., “3D gaze tracking method using Purkinje images on eye optical model and pupil,” Opt. Lasers Eng. 50(5), 736–751 (2012). http://dx.doi.org/10.1016/j.optlaseng.2011.12.001

11. C. W. Cho et al., “Gaze detection by wearable eye-tracking and NIR LED-based head-tracking device based on SVR,” ETRI J. 34(4), 542–552 (2012). http://dx.doi.org/10.4218/etrij.12.0111.0193

12. K. Kaneko and K. Sakamoto, “Spontaneous blinks as a criterion of visual fatigue during prolonged work on visual display terminals,” Percept. Mot. Skills 92(1), 234–250 (2001). http://dx.doi.org/10.2466/pms.2001.92.1.234

13. J. A. Stern, D. Boyer, and D. Schroeder, “Blink rate: a possible measure of fatigue,” Hum. Factors 36(2), 285–297 (1994).

14. Z. Wang, L. Lu, and A. C. Bovik, “Foveation scalable video coding with automatic fixation selection,” IEEE Trans. Image Process. 12(2), 243–254 (2003). http://dx.doi.org/10.1109/TIP.2003.809015

15. W. S. Geisler and J. S. Perry, “A real-time foveated multiresolution system for low-bandwidth video communication,” Proc. SPIE 3299, 294–305 (1998). http://dx.doi.org/10.1117/12.320120

16. Z. Wang et al., “Foveated wavelet image quality index,” Proc. SPIE 4472, 42–52 (2001). http://dx.doi.org/10.1117/12.449797

17. Z. Wang and A. C. Bovik, “Embedded foveation image coding,” IEEE Trans. Image Process. 10(10), 1397–1410 (2001). http://dx.doi.org/10.1109/83.951527

18. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Prentice Hall, Upper Saddle River, NJ (2002).

19. Z. Zhu and T. S. Huang, Multimodal Surveillance: Sensors, Algorithms, and Systems, 1st ed., Artech House, Norwood, MA (2007).

20. N. R. Draper and H. Smith, Applied Regression Analysis, Wiley-Interscience, Hoboken, NJ (1998).

Biography


Won Oh Lee received a BS degree in electronics engineering from Dongguk University, Seoul, South Korea, in 2009. He is currently pursuing the combined course of Master and PhD degree in electronics and electrical engineering at Dongguk University. His research interests include biometrics and pattern recognition.


Hwan Heo received the BS degree in computer engineering from National Institute for Lifelong Education, Seoul, South Korea, in 2009. He is currently pursuing the combined course of Master and PhD degree in electronics and electrical engineering at Dongguk University. His research interests include image processing, computer vision, and HCI.


Eui Chul Lee received his BS degree in software in 2005, and his Master and PhD degrees in computer science in 2007 and 2010, respectively, from Sangmyung University, Seoul, South Korea. He is currently an assistant professor in the Department of Computer Science at Sangmyung University. His research interests include computer vision, biometrics, image processing, and HCI.


Kang Ryoung Park received his BS and Master degrees in electronic engineering from Yonsei University, Seoul, Korea, in 1994 and 1996, respectively. He also received his PhD degree in computer vision from the Department of Electrical and Computer Engineering, Yonsei University, in 2000. He was an assistant professor in the Division of Digital Media Technology at Sangmyung University until February 2008. He is currently a professor in the Division of Electronics and Electrical Engineering at Dongguk University. His research interests include computer vision, image processing, and biometrics.
