Commercial Augmented Reality (AR) is expected to open a multitude of possibilities by bridging the gap between computer graphics and human vision. However, widespread usage of AR relies on a display device that can seamlessly integrate into human vision, just like prescription eyeglasses, and provide superior-quality virtual imagery replicating most or all depth cues. Currently available commercial AR devices like the Microsoft HoloLens and Meta are successful in overlaying high-quality virtual images onto the real world, but have limitations in terms of available field of view and resolution. These displays present stereoscopic imagery without correct focal cues, resulting in the vergence-accommodation conflict (VAC).
Recent studies show that VAC is one of the primary contributors to fatigue and nausea in virtual and augmented reality. To mitigate VAC, it is necessary to provide accurate depth cues for the rendered synthetic imagery, which either requires recreating the entire light field accurately, or actively changing the focus of the virtual image to optically register it to the appropriate real-world depth. We tackle the challenge of depth registration by replacing conventional static beamsplitters with novel deformable beamsplitter membranes that can actively change shape to support a range of focal distances. We first introduced this method in Ref. 1, where the image from an LCD is transmitted through a single-axis lens and reflected off of the beamsplitter membrane. The single-axis lens minimized the astigmatism caused by the membrane shape and off-axis reflection, and was essential for producing resolvable images at depth.
In this work we present an improved prototype display, strictly following the deformable beamsplitter design, but with no additional optical elements. We present resulting high quality augmented reality images and analyze the characteristics of the display prototype.
Contributions Our primary contribution is a lightweight, thin-form-factor near-eye display for augmented reality applications, supporting a range of focus cues via deformable beamsplitter membranes. Specific contributions are as follows:
• Field of view We report a very wide monocular field of view of 75.5° × 70.6°, larger than or comparable to that of any state-of-the-art AR device.
• Focal range A varifocal optical design supporting a range of focal distances from optical infinity to closer than 10 cm.
• Angular resolution An angular resolution of 4–6 cpd is achieved, currently limited only by the choice of projection LCD panel.
• Device size and weight A light and thin form factor is achieved, with a head-mounted volume of 5.5 × 12.5 × 15.2 cm and a weight of 452 g.
• Analysis An in-depth analysis of the display’s optical and material properties is presented, along with a methodology to custom fabricate the deformable membrane in-house.
In this section we review aspects of human vision and perception that give an insight into near-eye displays, various methods found in the literature for designing accommodation-supporting displays, and approaches to designing and fabricating membranes like ours.
Properties of human vision and perception
For a comfortable augmented reality experience, an AR display needs to provide appropriate perceptual cues. If future AR displays are to be widespread and enable prolonged usage, providing correct depth cues is of primary concern. The vergence and accommodation of human vision are neurally coupled,2 and presenting incorrect depth cues forces their decoupling, leading to a mismatch commonly known as the vergence-accommodation conflict (VAC). Many recent studies show that VAC is one of the primary causes of discomfort in virtual and augmented reality.3,4 Recent works5-7 evaluated user responses to stimuli in gaze-contingent dynamic-focus VR displays and found that adaptive and dynamic focus in near-eye displays can mitigate VAC. Similar dynamic focus capabilities were shown to increase task performance in an AR setting in our previous paper, Ref. 1. Since blur is also a very important perceptual depth cue,8 such accommodation-supporting dynamic-focus displays need to render appropriate synthetic blur on objects that are virtually at different focal distances. Recent studies show that computational blur that takes into account the chromatic aberrations of the eye, dubbed ChromaBlur,9 can improve the perceived quality of synthetic imagery.
Statically Focus Supporting NEDs
We review in this section various accommodation supporting display designs that provide focus cues statically. We also refer interested readers to the review articles Ref. 10 and Ref. 11 for a more exhaustive survey on various focus supporting display architectures.
Maxwellian view displays The procedure of imaging an illuminating source onto the eye's pupil instead of viewing it directly is called Maxwellian viewing.12 Since the source is imaged on the eye's pupil, the resultant image formed on the retina is always in focus. Displays designed to follow this principle are called Maxwellian view displays and generally involve scanning the image directly onto the retina. Several variants of Maxwellian view displays have been proposed in the literature. Ref. 13 used a holographic optical element and a Digital Micromirror Device (DMD) to achieve retinal projection. Ref. 14 uses an LCD and a set of collimation and projection lenses to achieve quasi-accommodation-free Maxwellian viewing with high resolution. However, such displays have very small depths of field and eye box sizes, which are the primary limitations of a Maxwellian display. A slightly different variation of Maxwellian view displays uses aperture masks to create a light field at the eye and thus increase the apparent depth of field, but at the expense of resolution.15-17 Ref. 18 shows that the depth of focus of a Maxwellian view retinal scanning display can be improved by using an elliptical scanning beam, whereas Ref. 19 shows an improved depth of focus by using an oscillating fluid lens in the retinal projection system. Most recently, Ref. 20 demonstrated a Maxwellian-view-style light field HMD with multiple projectors and a holographic optical element. While the light field capability of the display extends the depth of field, the small eye box limitation is overcome by employing eye tracking and a moving eye box.
Multifocal displays Multifocal displays are a class of displays consisting of multiple image planes that can generate near-correct focus cues at the eye through additive or multiplicative interpolation of the images from the image planes. Ref. 21 demonstrated the idea of multifocal planes in head-mounted displays, whereas Ref. 22 demonstrated the benefits of fixed-viewpoint volumetric desktop displays in generating near-correct focus cues without any need for eye tracking. Such displays were recently revisited, with improved scene decomposition and gaze-contingent varifocal multi-plane capabilities.23,24 However, these displays have large power and compute demands with complex hardware that does not typically lead to a wearable form factor. The work described in Ref. 25 demonstrates a time-multiplexed multi-plane display in the form of a wearable near-eye display. Such a design offers good resolution, but only with a small field of view.
Light field displays Light field displays provide both intensity and angular information, enabling correct focus cues in a near-eye display. However, current implementations of integral light field NEDs sacrifice spatial resolution for angular resolution, leading to poor imagery. Ref. 26 introduced a near-eye light field display (NELD) that uses an array of microlenses, resulting in a very thin and light VR NED with a good possible field of view, but with heavily compromised resolution. More recently, Ref. 27 demonstrated a light field stereoscope with a wide diagonal FOV and improved image resolution. Ref. 16 uses a pinhole mask in front of a display to create a wide diagonal FOV VR display. Similarly, by using a see-through point source array backlight, Ref. 28 demonstrated a wide diagonal FOV AR display prototype. All the above-mentioned light field NEDs present imagery with low resolution (approximately 2–3 cpd).
Holographic displays The use of Holographic Optical Elements (HOEs) as a replacement for bulky optics has seen growing interest among researchers, along with Computer Generated Hologram (CGH) based displays. HOEs have recently been part of retinal NEDs,20,29 enabling an almost eyeglasses-like thin form factor and a very wide field of view. Recently, CGH-based near-eye display designs for VR and AR have been presented by Ref. 29, showing superior-quality imagery. For such displays, however, a small eye box, large compute demands, and theoretically limited resolution remain major concerns. Ref. 30 demonstrated a real-time rendering pipeline for CGH using spherical waves, achieving high resolution and a relatively wider eye box.
Dynamic Varifocal Displays
Our design is a full-fledged implementation of our previous proof-of-concept accommodation-supporting varifocal AR NED design1 targeted at providing comfortable visual stimuli with the least possible compute and power requirements. Thus, we also review other varifocal display designs in the literature.
A tunable lens system combined with a spherical mirror is used in the work of Ref. 31, demonstrating a 28° diagonal FOV and an accommodation range of 0–8 diopters switchable within 74 ms. A recent study described in Ref. 32 again takes advantage of an electrically tunable lens system as relay optics and demonstrates a 36° diagonal FOV VR prototype. Their solution switches depth from one extreme to another within 15 ms, and provides an accommodation range of 0–9.5 diopters. Ref. 33 uses HOEs for an intermediate image formation before relaying the final image into the eye, offering an improved wearable form factor with a 60° field of view. All of the above-mentioned varifocal display designs, including our previous work which demonstrated a 100° diagonal FOV, suffer various drawbacks in form factor, depth-switching speed, or field of view. In this work we show an improved design that is light and thin, with dimensions approaching a wearable form factor.
Deformable membrane mirrors
Deformable membrane mirrors have been used in displays for a long time.34 Recently, a virtual retinal display was demonstrated in conjunction with micro-electromechanical system (MEMS) deformable mirrors by Ref. 35. Built on the principles of the Maxwellian view display, laser light is scanned onto the deformable MEMS mirror, which then reflects it through a series of mirrors directly into the pupil, forming an image on the retina. The surface convexity of the mirror is controlled by the applied voltage, thereby controlling the focus of the displayed objects. Ref. 36 showed an achievable accommodation range of 0–14 diopters by the MEMS mirror. An application of deformable mirror membranes for creating an illusion of a volumetric display is attempted by Ref. 37, where the membrane curvature is synchronized with per-frame swapping between two different images, thereby displaying the images at different depths simultaneously, simulating a light field. The prototype demonstrated a depth range of 0–16 diopters in a contiguous fashion. In our previous work,1 we used a see-through deformable beamsplitter membrane whose surface convexity is controlled by air pressure. That membrane demonstrated an achievable depth range of 0–10 diopters.
The task of manufacturing custom flexible membranes is traditionally accomplished through surface micromachining, bulk micromachining, liquid crystals, piezoelectric or electrostrictive actuators, as reviewed in Ref. 38. Pneumatic-based systems have also been demonstrated for building tunable micro-optics using polydimethylsiloxane (PDMS),39 avoiding high voltages or external fields in operation and precise machining in manufacturing. PDMS also has numerous attractive material properties, such as outstanding transparency at visible wavelengths, high elasticity, and excellent temperature stability. Inspired by these advantages, we created our own recipe for manufacturing the deformable beamsplitter membranes.
Our display mitigates VAC by adjusting the optical depth of the virtual image to match the depth of the fixation point of the user. The optical depth is controlled by changing the curvature of a deformable beamsplitter membrane which reflects the LCD image with varying optical power, while transmitting an unchanged view of the real world. In comparison to our first prototype described in Ref. 1, the goals of our new display were threefold: eliminate the extra optical element, improve the image quality, and decrease the overall display dimensions. Our previous prototype, which was reliant on an additional corrective lens, was large enough that it could not easily be head-mounted and suffered from a strong astigmatism due to its off-axis optical design. In addition to improvements in optical quality and form-factor, our current work fully implements the deformable beamsplitter design in its simplicity: a single optical element to control the focus of the virtual images.
A review of the optical layout presented in Ref. 1 is included in Figure 2. The new parameters for our current prototype are presented in Table 1. By simultaneously reducing the screen tilt to 0° and decreasing the membrane tilt to 20°, we reduced the astigmatism and field curvature aberrations and increased the brightness of the virtual image. A smaller, higher pixel density display is used to increase the angular resolution while shrinking the form factor. With the placement of the lens and display fixed, an analysis of the membrane housing aperture was done. By approximating the deforming membrane with a series of toroidal sections, an optimization reducing the spot size reflected off the torus for 7 focal depths from 10 cm to optical infinity was performed in Zemax OpticStudio, giving the ratio between the minor and major axis of the torus at each depth. By weighting the greater depths more, a ratio of 0.8733 minor to major axis was determined. An aperture size meeting that ratio was then refined iteratively, subject to the fabrication and mechanical constraints of the physical system, leading to an aperture of 57.25 × 50 mm. Keeping in mind a diverse user group, an adjustable IPD was included in the design allowing native IPDs from 60–78 mm. It is to be noted that the lower end of the human IPD range40 is not accommodated directly, but is fully covered by our large eye box, as reported in section 4.1.5.
Table 1. Prototype parameters for the current implementation as related to Figure 2.

| Parameter | Value |
| --- | --- |
| Eye relief | 40 mm |
| Aperture | 57.25 mm × 50 mm |
| Display distance | 28.629 mm |
Overall, the head-mounted portion of our prototype, as shown in Figure 3, occupies a much smaller volume of 5.5 × 12.5 × 15.2 cm and is much lighter than our previous prototype. The LCD panel with cables and housing weighs 132 g, each membrane with housing and tube comes in at 81 g, and the total mass stands at 452 g including the unoptimized mounting hardware. Improvements in optical quality are discussed in detail in section 4. Details of our prototype follow.
To generate the images for both eyes, we use a single Liquid Crystal Display (LCD) panel, a Topfoison TF600 10A-V0 1440×2560 5.98” TFT LCD. Our deformable membranes and housing for each eye are fabricated and assembled in-house using the methodology detailed in section 3.3. Eliminating the air compressor and pressure regulators was achieved by using a Pyle PLMRW8 8” 400 Watt 4 Ohm marine subwoofer to modulate the air pressure in the membrane housing for each eye. A Motorola MPX5010DP pressure sensor provides feedback on the current pressure differential between ambient and our membrane housing, so our system no longer uses power-draining cameras for pressure control. Leak correction and re-pressurizing the system is provided by a PeterPaul 72B11DGM 12/DC solenoid valve venting to atmosphere. Re-pressurization during continuous operation typically occurs about once every 20 min. All pressure modules are connected with SMC Pneumatics ¼” OD tubing, one-touch fittings, and T-junctions.
A single Arduino Teensy 3.6 microcontroller drives the vacuum system as directed by the PC over USB, using a software proportional-integral-derivative (PID) controller to hold the membrane at the target depth based on the sensor inputs. The speakers are driven through a WGCD L298N dual H-bridge DC stepper module with a 12 V 5 A DC power supply.
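The control loop above can be sketched as a minimal software PID. This is an illustrative Python model, not the Teensy firmware: the gains, the first-order plant standing in for the housing pressure, and all units are placeholder assumptions.

```python
class PID:
    """Minimal discrete PID controller illustrating the loop that holds the
    membrane at a target depth. Gains are placeholders, not tuned values."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        # Error between the target pressure (mapped from the requested focal
        # depth) and the pressure sensor reading.
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # The output would drive the subwoofer coil through the H-bridge.
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def simulate(target, steps=5000, dt=0.001):
    """Run the controller against a toy first-order lag standing in for the
    membrane housing; returns the final 'pressure' reading."""
    pid = PID(kp=2.0, ki=5.0, kd=0.01, dt=dt)
    pressure = 0.0
    for _ in range(steps):
        drive = pid.update(target, pressure)
        pressure += (drive - pressure) * dt * 10.0  # simple lag dynamics
    return pressure
```

The integral term is what removes the steady-state error that a proportional-only controller would leave against a constant leak or offset.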
Overall membrane fabrication follows the same process as in Ref. 1, with some alterations to improve the process and membrane quality. After silanizing the 150 mm silicon wafers, they are transferred to the cleanroom, reducing particulate inclusions. Just as before, we use a Sylgard 184 PDMS kit purchased from Dow Corning. After mixing the prepolymer and cross-linking agents, the mixture is now degassed for 40 minutes to remove bubbles introduced during mixing. The mixed and degassed PDMS prepolymer is spin-cast on the silicon wafer for 1 min at a faster rate of 600 RPM, reducing the thickness of the membrane to 120 μm and thereby reducing the pressure required to deform the membrane. The membrane is then cured and a layer of silver is vapor deposited as previously reported. After metalization, the film is carefully peeled and, using a custom-designed apparatus, stretched uniformly and affixed to the vacuum housing to form the deformable beamsplitter.
Our images are rendered in real time using dGraph, an open source Python and OpenGL library.* A consequence of the varifocal design is that, because the entire virtual image sits at the optical depth of the user's current gaze, objects that are virtually at different depths also appear sharp. To solve this problem, rendered blur is added in a convolution pixel shader by computing the circle of confusion between the focal depth and the virtual object depth. This is then converted into pixel space, generating a 2D top-hat kernel. The results can be seen in Fig. 4. It should be noted that while the convolution kernel can be a separable function, for large amounts of blur a fast Fourier transform (FFT) based convolution would be faster. Additionally, more perceptually accurate blur can be achieved by adding chromatic aberration, as seen in Ref. 9. Additional distortion and magnification corrections are performed in pixel shaders using lookup tables.
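The kernel construction above can be sketched as follows. This is a CPU-side illustration of the idea rather than the dGraph shader itself, using a thin-lens eye model in which the angular blur diameter is approximately the pupil diameter times the dioptric defocus; the pupil diameter and pixels-per-degree values are illustrative assumptions.

```python
import numpy as np


def coc_kernel(focus_dist_m, object_dist_m, pupil_diam_m=4e-3, px_per_deg=20.0):
    """Build a normalized 2D top-hat (disc) blur kernel from the circle of
    confusion between the display's focal depth and a virtual object's depth.

    Thin-lens approximation: angular blur diameter (radians) is roughly
    pupil diameter * |1/z_focus - 1/z_object| (the defocus in diopters).
    """
    defocus = abs(1.0 / focus_dist_m - 1.0 / object_dist_m)  # diopters
    blur_rad = pupil_diam_m * defocus                        # blur angle, rad
    blur_px = np.degrees(blur_rad) * px_per_deg              # angle -> pixels
    r = max(int(round(blur_px / 2.0)), 0)                    # kernel radius
    if r == 0:
        return np.ones((1, 1))          # in focus: identity kernel
    size = 2 * r + 1
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    kernel = (x**2 + y**2 <= r**2).astype(float)  # disc-shaped top hat
    return kernel / kernel.sum()                  # preserve total energy
```

An object at the focal distance yields a 1×1 identity kernel, while larger dioptric mismatches produce proportionally wider discs.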
Optical Quality Analysis
In this section we evaluate several attributes of our display, directly comparing them to studies of human performance.
Field of View
A typical human monocular visual field extends 60° nasally, 60° superiorly, 100° temporally, and 70° inferiorly.41 Against a target monocular field of view of 160° horizontal and 130° vertical, our display prototype exhibits a 75.5° horizontal and 70.6° vertical field of view.
Human eye focal range varies by subject and changes with age. The mean focal range of a 10-year-old is 13.4 diopters, while at 55 years a mean of 1.3 diopters is reported by Ref. 42. With a range of 11 diopters, our display is capable of matching the mean focal range of a 20-year-old, with focus between 10 cm and optical infinity (represented by 800 cm in all measurements below).
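Since focal depths in this paper are quoted interchangeably in centimeters and in diopters, the conversion is simply the reciprocal of the distance in meters; a quick sketch:

```python
def to_diopters(distance_cm):
    """Convert a focal distance in centimeters to optical power in diopters
    (reciprocal of the distance in meters)."""
    return 100.0 / distance_cm


# The near end of the range, 10 cm, alone spans 10 D, while the 800 cm
# stand-in for optical infinity contributes only 0.125 D.
```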
The lens in human eyes has a finite accommodation response with several defining characteristics: a latent period of around 300 ms, a main focus adjustment period determined by the distance of focus change, and a settling period where minor corrections are made until a state of micro fluctuations near the target is reached.43
Using a GoPro Hero 4 camera at 240 frames/second to record the response of the membrane, we measured our prototype's performance. We visually indicated the initial signal by changing the image on the display and waiting for the frame buffer swap before sending the new depth signal to the microcontroller. In all cases, our display exhibited an initial latent period shorter than our sampling period of 4.16 ms. Our display also exhibited an initial main focus adjustment period followed by a settling period similar to an eye. The main focus adjustment period of our prototype demonstrated a mean velocity of 55 diopters/second. Mean time for the initial adjustment was 139.5 ms with a maximum of 200 ms. The settling period exhibited several cycles of overshoot due to the method of PID control used, but came to rest in a mean of 201 ms and a maximum of 237.5 ms. Total adjustment times had a mean of 340 ms and a maximum of 438 ms. The long settling times indicate that improvements can be made either by tuning the PID parameters or with a better control algorithm.
Human vision acuity is limited by many factors, including sensor cell distribution, pupil diffraction, lens aberrations, and contrast of stimulus. The consensus of studies reviewed in Ref. 44 is that the angle of resolution in the central fovea for humans is about 0.5 min of arc, or 60 cycles per degree (cpd).
To determine the spatial and angular resolution limits of our display, we evaluate the modulation transfer function (MTF) of our latest prototype at various depth levels. Our measurements are based on the ISO 12233 slanted-edge MTF method.45 Figure 5 shows the MTF of our prototype at distances 10 cm, 20 cm, 33 cm, 50 cm, 100 cm and 800 cm, all captured with a Samsung EX2F camera placed 40 mm behind the display. The camera aperture was set to f/3.9 with exposure times of 1/10 second.
First, an image is captured of a high resolution printed checkerboard pattern of known distance and size, which is used to measure the angles resolved per camera pixel. Then, a slanted-edge image is imaged by the camera through our prototype, from which a specific region of interest near the center of the field of view is used to measure the MTF of the display. Low frequencies that add noise to the measurements are filtered by thresholding the edge-spread at 10% and 90% of the measured intensities. This process is repeated for all reported depths. It can be seen that the display is capable of consistently producing a spatial resolution of 4–6 cpd. The limitation of the spatial resolution of the display primarily comes from the available resolution of the LCD panel used for providing imagery to the eyes. In fact, the individual pixels of the LCD panel are discernible from the reflection off of the deformable beamsplitter membrane. A two-fold increase in the resolution of the LCD panel would result in a spatial resolution of about 14 cpd, on par with the current state of the art for commercially available VR displays. A slightly decreasing trend of MTF is seen with increasing distance of the virtual image; this behavior is caused by an increase in the magnification of the virtual image combined with a more severe astigmatism.
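The core of the measurement pipeline above can be sketched as follows. This is a simplified version of the ISO 12233 slanted-edge method that assumes a near-vertical edge and omits the sub-pixel slant binning and the 10%/90% thresholding of the full procedure: rows are averaged into an edge-spread function (ESF), differentiated into a line-spread function (LSF), and Fourier transformed.

```python
import numpy as np


def mtf_from_edge(edge_image):
    """Estimate the MTF from a near-vertical edge image.

    Steps: average rows -> edge-spread function (ESF); differentiate ->
    line-spread function (LSF); window to reduce spectral leakage; take the
    FFT magnitude and normalize so the DC term equals 1.
    """
    esf = edge_image.mean(axis=0)        # average rows into the ESF
    lsf = np.diff(esf)                   # derivative gives the LSF
    lsf = lsf * np.hanning(len(lsf))     # Hanning window against leakage
    mtf = np.abs(np.fft.rfft(lsf))       # spectrum of the LSF
    return mtf / mtf[0]                  # normalize contrast at DC to 1
```

A perfectly sharp edge yields a flat MTF, while any blurring of the edge shows up as contrast falling off toward high spatial frequencies.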
A display must be able to generate an eye box capable of entering the pupil as the eye moves around the visual field with some additional tolerance for imprecise alignment, adjustment while being worn, and variations of human anatomy. Most displays target a 20 mm × 20 mm eye box.46,47
We measured the eye box by attaching a camera to a 2 axis linear stage and evaluating the images captured. Measured with an eye relief of 40 mm from the membrane, the eye box for a 10 cm focal depth is 40 mm horizontal, 20 mm vertical. For all other depths, it is 30 mm horizontal and 20 mm vertical.
Standard desktop displays, designed for use indoors with artificial lighting, exhibit maximum luminance around 250 cd/m², while mobile phones, which are meant for outdoor use, generally have a maximum luminance between 450 and 500 cd/m². For each of the reported focal depths, we measured the luminance of our display prototype using a Photo Research PR-715 SpectraScan spectroradiometer with a MS-55 lens attachment. We set the aperture to 1/2° and, using a 1-second exposure, obtained several readings. Mean values are reported in Table 2. A decay in the measured values as the focal distance increases is expected: as our membrane stretches, the distance between silver particles increases, reducing the amount of reflected light.
Table 2. Luminance values of the display prototype, in candelas per square meter, for different focal depths.

| Focal depth | Luminance |
| --- | --- |
| 10 cm | 195 cd/m² |
| 20 cm | 135 cd/m² |
| 30 cm | 133.875 cd/m² |
| 50 cm | 131 cd/m² |
| 100 cm | 127.5 cd/m² |
| 800 cm | 115 cd/m² |
Since our display has only one optical element, it is essential to determine and maintain the quality of the membrane. With this goal, we imaged the profile of the membrane to measure the consistency across the spin-cast surface, as seen in Fig. 6. It can be seen that the membrane, from the outer edge through the half-radius to the center, has a consistent profile and a smooth, flat surface. Additionally, we measured the reflection and transmission properties of the membrane at the specific angle of our prototype, on both a non-attached membrane and a membrane stretched and attached to our housing. It is worthwhile to note that the minimal stretching we perform while attaching the membrane improves the transmission characteristics while only slightly decreasing the reflection strength.
As stated in section 3.1, we determined the aperture shape by using a toroid to approximate the shape of the membrane. If we want to produce further enhancements in the angular resolution of our display, a more accurate model of the membrane as it deforms is required. In pursuit of developing such a model, we captured the shape of the membrane at different curvatures as seen in Fig. 9. Using Canny edge detection48 and filtering techniques we were able to calculate the curvature of the surface and fit polynomials by posing and solving a least squares problem.
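The curvature-fitting step above can be sketched as follows. Edge extraction (e.g., with a Canny detector) is assumed to have already produced the (x, y) pixel coordinates of the membrane profile; the polynomial degree is an illustrative choice, and the curvature formula is the standard one for a plane curve, kappa = y'' / (1 + y'^2)^(3/2).

```python
import numpy as np


def fit_membrane_profile(edge_x, edge_y, degree=4):
    """Least-squares polynomial fit to membrane edge points. The inputs
    stand in for pixel coordinates returned by an edge detector; the result
    is a callable polynomial describing the surface profile."""
    coeffs = np.polyfit(edge_x, edge_y, degree)  # solves the LSQ problem
    return np.poly1d(coeffs)


def curvature(profile, x):
    """Signed curvature of the fitted profile at x:
    kappa = y'' / (1 + y'^2)^(3/2)."""
    d1 = profile.deriv(1)(x)
    d2 = profile.deriv(2)(x)
    return d2 / (1.0 + d1**2) ** 1.5
```

As a sanity check, points sampled from the parabola y = x²/(2R), which approximates a circle of radius R near its vertex, should yield a curvature of 1/R at x = 0.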
We demonstrate a varifocal, wide-field-of-view augmented reality display using deformable beamsplitter membranes that is light and thin in form factor, with a field of view and focal range beyond those of current commercial AR and VR devices. We also demonstrate a significant improvement over the previous version of our prototype, which suffered from a bulky form factor and poor image quality. A methodology for manufacturing the custom beamsplitter membranes is also described so that researchers can easily replicate our work. A detailed analysis of our deformable-membrane-based varifocal AR NED is presented, along with the rendering methodology for computational blur in varifocal displays.
Our new prototype display presents a very promising direction for future AR displays that require support for focus cues, a wide field of view, and low compute and power demands, all while maintaining a light form factor for prolonged usage. The major concern with our improved version of the prototype is its limited resolution, which can be significantly improved either by using an LCD panel of higher pixel density or a different projection light engine. The current mechanism with which the deformation of the membrane surface is controlled, while improved, is still unwieldy and prevents full autonomy. Better methods of controlling the surface require investigation and must be left for future work.
The authors wish to thank Jim Mahaney who was invaluable in consulting and assisting with the physical set-up of our display prototype.
This work was partially supported by National Science Foundation (NSF) Grant IIS-1645463, by NSF Grant A14-1618-001, by a grant from NVIDIA Research, and by the BeingTogether Centre, a collaboration between Nanyang Technological University (NTU) Singapore and University of North Carolina (UNC) at Chapel Hill, supported by UNC and the Singapore National Research Foundation, Prime Minister’s Office, Singapore under its International Research Centres in Singapore Funding Initiative.
Dunn, D., Tippets, C., Torell, K., Kellnhofer, P., Aksit, K., Didyk, P., Myszkowski, K., Luebke, D., and Fuchs, H., "Wide field of view varifocal near-eye display using see-through deformable membrane mirrors," IEEE Transactions on Visualization and Computer Graphics 23, 1322–1331 (April 2017). doi:10.1109/TVCG.2017.2657058
Padmanaban, N., Konrad, R., Stramer, T., Cooper, E. A., and Wetzstein, G., "Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays," Proceedings of the National Academy of Sciences, 201617251 (2017).
Konrad, R., Cooper, E. A., and Wetzstein, G., "Novel optical configurations for virtual reality: evaluating user preference and performance with focus-tunable and monovision near-eye displays," in [Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems], 1211–1220, ACM (2016).
Hua, H., "Enabling focus cues in head-mounted displays," Proceedings of the IEEE 105(5), 805–824 (2017).
Huang, F.-C., Luebke, D., and Wetzstein, G., "The light field stereoscope," ACM SIGGRAPH Emerging Technologies, 24 (2015).
Liu, S., Cheng, D., and Hua, H., "An optical see-through head mounted display with addressable focal planes," in [Mixed and Augmented Reality, 2008. ISMAR 2008. 7th IEEE/ACM International Symposium on], 33–42, IEEE (2008).
Aksit, K., Lopes, W., Kim, J., Shirley, P., and Luebke, D., "Near-eye varifocal augmented reality display using see-through screens," ACM Trans. Graph. (SIGGRAPH) (6) (2017).
Schowengerdt, B. T., Seibel, E. J., Kelly, J. P., Silverman, N. L., and Furness, T. A., "Binocular retinal scanning laser display with integrated focus cues for ocular accommodation," in [Stereoscopic Displays and Virtual Reality Systems X], 5006, 1–10, International Society for Optics and Photonics (2003). doi:10.1117/12.474135
Mansell, J. D., Sinha, S., and Byer, R. L., "Deformable mirror development at Stanford University," in [International Symposium on Optical Science and Technology], 1–12, International Society for Optics and Photonics (2002).
Dodgson, N. A., "Variation and extrema of human interpupillary distance," in [Electronic Imaging 2004], 36–46, International Society for Optics and Photonics (2004).
Savino, P. J. and Danesh-Meyer, H. V., [Color Atlas and Synopsis of Clinical Ophthalmology – Wills Eye Institute – Neuro-Ophthalmology], Lippincott Williams & Wilkins (2012).
Burns, P. D., "Slanted-edge MTF for digital camera and scanner analysis," in [IS&T PICS Conference], 135–138, Society for Imaging Science & Technology (2000).
Tsurutani, K., Naruse, K., Oshima, K., Uehara, S., Sato, Y., Inoguchi, K., Otsuka, K., Wakemoto, H., Kurashige, M., Sato, O., et al., "Optical attachment to measure both eye-box/FOV characteristics for AR/VR eyewear displays," in [SID Symposium Digest of Technical Papers], 48(1), 954–957, Wiley Online Library (2017).
Canny, J., "A computational approach to edge detection," in [Readings in Computer Vision], 184–203, Elsevier (1987).