OPC model calibration techniques that use SEM contours are a major reason for the improved fitting efficiency achieved in modern complex mask designs compared with conventional CD-based calibration. However, contour-based calibration has a high computational cost and a large memory footprint. To mitigate this, conventional contour-based calibration samples the SEM contour uniformly at intervals of several nanometers. Such sparse uniform sampling, however, significantly increases deviations from the real CD values measured by CD-SEM. The shape errors of 2D patterns must also be considered: in general, calibrating 2D patterns requires sampling the SEM contour at a higher rate than calibrating 1D patterns does. To achieve accurate calibration results while accounting for the varied shapes of calibration patterns, the sampling intervals of the SEM contour must be set precisely. In response to these problems, we have developed a SEM contour sampling technique in which contours of arbitrary mask shapes are sampled at a non-uniform rate within an allowable sampling error. Experimental results showed that the sampling error was reduced to the sub-nanometer level even as the number of contour points was decreased.
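As an illustration of non-uniform sampling within an allowable error, the following sketch (a hypothetical implementation, not the authors' algorithm; the tolerance value and the L-shaped test contour are assumptions) uses Douglas-Peucker-style simplification so that dense points survive only where the contour bends:

```python
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    if length == 0.0:
        return math.hypot(px - ax, py - ay)
    return abs(dx * (ay - py) - dy * (ax - px)) / length

def simplify_contour(points, tolerance_nm):
    """Douglas-Peucker simplification: keep interior points only where the
    contour deviates from a straight chord by more than tolerance_nm."""
    if len(points) < 3:
        return list(points)
    # Find the point farthest from the chord between the endpoints.
    index, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            index, dmax = i, d
    if dmax <= tolerance_nm:
        return [points[0], points[-1]]       # straight enough: drop interior points
    left = simplify_contour(points[:index + 1], tolerance_nm)
    right = simplify_contour(points[index:], tolerance_nm)
    return left[:-1] + right                 # merge, avoiding the duplicate split point

# Dense uniform samples of an L-shaped contour (nm): the straight runs
# collapse to their endpoints, while the corner point survives.
contour = [(x, 0.0) for x in range(0, 50)] + [(49.0, y) for y in range(1, 50)]
sparse = simplify_contour(contour, tolerance_nm=0.5)
print(len(contour), "->", len(sparse))  # 99 -> 3
```

On straight 1D-like edges almost all points are removed, while 2D features such as corners retain their sample points, matching the need for denser sampling on 2D patterns.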
Recently, optical proximity correction model calibration techniques that use SEM contours have enabled significant improvements in complex mask design. However, compared with conventional CD-based calibration, contour-based calibration results in increased errors in 1D features. In fact, our research shows that there is a ~1-nm gap, which we call the "CD-gap," between CD measurements calculated directly from a SEM image and CD measurements calculated from SEM contours. To achieve accurate calibration, SEM contours must match the corresponding CD measurements. In response to this problem, we have developed a CD-gap-free contour extraction technique. In our technique, a mask edge is classified into shape structures, and an optimized SEM contour extraction method is prepared for each shape structure to reduce the CD-gap. Experimental results show that the CD-gap can be reduced to the sub-nanometer level, which clearly demonstrates the potential of our proposed technique to play a vital role in the lithography process.
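The classify-then-dispatch idea can be sketched as follows. This is an illustrative toy, not the authors' method: the class names, geometric thresholds, and per-class extraction settings are all assumptions chosen only to show the structure of a per-shape-structure dispatch.

```python
# Toy sketch: classify mask-edge segments by local geometry, then dispatch
# per-class contour-extraction settings so that straight edges, corners,
# and line-ends are each extracted with parameters tuned to reduce the CD-gap.

def classify_segment(angle_change_deg, segment_length_nm):
    """Hypothetical geometric classifier; thresholds are illustrative."""
    if abs(angle_change_deg) >= 60:
        return "corner"
    if segment_length_nm < 20:
        return "line_end"
    return "straight_edge"

# Hypothetical per-class settings, e.g. the SEM-signal threshold used to
# place the contour edge and the smoothing applied for that segment type.
EXTRACTION_SETTINGS = {
    "straight_edge": {"threshold": 0.50, "smoothing_nm": 2.0},
    "corner":        {"threshold": 0.40, "smoothing_nm": 0.5},
    "line_end":      {"threshold": 0.45, "smoothing_nm": 1.0},
}

def settings_for(angle_change_deg, segment_length_nm):
    return EXTRACTION_SETTINGS[classify_segment(angle_change_deg, segment_length_nm)]

print(settings_for(90, 15))   # corner settings
print(settings_for(0, 100))   # straight-edge settings
```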
We developed two mobile-device-sized autostereoscopic integral videography (IV) displays with field-sequential-color (FSC) liquid crystal displays (LCDs) and microlens arrays. IV is an autostereoscopic video technique based on integral photography. The FSC-LCD has a different backlight from that of conventional LCDs: the backlight is produced by red, green, and blue light-emitting diodes (LEDs) instead of cold cathode fluorescent lamps, and each LED emits light sequentially. IV based on an FSC-LCD does not suffer from color moiré because the FSC-LCD requires no color filters. One FSC-LCD IV display is 5 inches diagonal with 256×192 lenses and 20 ray directions; its base FSC-LCD is 300 ppi with 1280×768 pixels. The other FSC-LCD IV display is 4.5 inches diagonal with 192×150 lenses and 80 ray directions; its base FSC-LCD is 498 ppi with 1920×1200 pixels. In this paper, we first describe the problems of previous conventional-LCD-based IV displays and then describe the principle of the IV displays based on the FSC-LCDs. Next, we analyze the IV displays using plenoptic sampling theory. Lastly, we compare three versions of the IV displays: two based on the FSC-LCDs and one based on the conventional LCD.
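The ray-direction counts quoted above follow directly from the pixel and lens counts, since the number of ray directions per lens equals the number of display pixels behind each lens. A quick check:

```python
# Ray directions per lens = (pixels_x / lenses_x) * (pixels_y / lenses_y),
# i.e. the number of display pixels sitting behind each microlens.
def ray_directions(pixels, lenses):
    px, py = pixels
    lx, ly = lenses
    return (px // lx) * (py // ly)

# 5-inch FSC-LCD IV display: 1280x768 pixels, 256x192 lenses.
print(ray_directions((1280, 768), (256, 192)))   # 5 x 4 = 20 ray directions

# 4.5-inch FSC-LCD IV display: 1920x1200 pixels, 192x150 lenses.
print(ray_directions((1920, 1200), (192, 150)))  # 10 x 8 = 80 ray directions
```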
We developed a mobile-sized integral videography (IV) display that reproduces 60 ray directions. IV is an autostereoscopic video technique based on integral photography (IP). The IV display consists of a 2-D display and a microlens array. The maximal spatial frequency (MSF) and the number of rays appear to be the most important factors in producing realistic autostereoscopic images. The lens pitch usually determines the MSF of an IV display, while the lens pitch and the pixel density of the 2-D display together determine the number of rays it reproduces. There is thus a trade-off between the lens pitch and the pixel density. The shape of an elemental image determines the shape of the viewing area.
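The trade-off can be made explicit with a rough sketch. The approximation MSF ≈ 1/(2 × lens pitch) (one reconstructed "3-D pixel" per lens) is an assumption for illustration; the 900-dpi pixel pitch matches the LCD used in our display, while the lens pitches swept below are arbitrary example values.

```python
# For a fixed pixel pitch, the lens pitch trades rays against resolution:
# rays per lens (1-D) = lens_pitch / pixel_pitch, while the maximal spatial
# frequency is limited by the lens pitch, MSF ~ 1 / (2 * lens_pitch).
def iv_tradeoff(lens_pitch_mm, pixel_pitch_mm):
    rays_1d = lens_pitch_mm / pixel_pitch_mm        # pixels behind one lens
    msf_cpmm = 1.0 / (2.0 * lens_pitch_mm)          # cycles per mm (assumed model)
    return rays_1d, msf_cpmm

pixel_pitch = 25.4 / 900        # mm; a 900-dpi LCD panel
for lens_pitch in (0.2, 0.4, 0.8):                  # mm; illustrative values
    rays, msf = iv_tradeoff(lens_pitch, pixel_pitch)
    print(f"lens pitch {lens_pitch} mm: {rays:.1f} rays (1-D), MSF {msf:.2f} cy/mm")
```

Doubling the lens pitch doubles the rays per lens but halves the MSF, which is why pixel density, not optics alone, limits mobile-sized IV displays.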
We developed an IV display based on the above relationships. The IV display consists of a 5-inch, 900-dpi liquid crystal display (LCD) and a microlens array. The IV display has 60 ray directions, with 4 vertical rays and a maximum of 18 horizontal rays. We optimized the color filter on the LCD to reproduce the 60 rays. The resolution of the display is 256×192, and the viewing angle is 30 degrees. These parameters are sufficient for mobile game use. Users can interact with the IV display by using a control pad.
We propose a spherical layout for a camera array system for shooting images used in integral videography (IV). IV is an autostereoscopic video technique based on integral photography (IP) and is one of the preferred techniques for displaying autostereoscopic images; many studies on autostereoscopic displays based on this technique indicate its potential advantages. Other camera arrays have been studied, but they addressed other issues, such as acquiring high-resolution images, capturing a light field, and creating content for non-IV-based autostereoscopic displays. Moreover, IV displays images with high stereoscopic resolution when objects are displayed close to the display; as a consequence, we have to capture high-resolution images in the close vicinity of the display. We constructed the spherical-layout camera array system using 30 cameras arranged in a 6×5 array. Adjacent cameras had an angular difference of 6 degrees, and all cameras were oriented toward the center of the sphere. The cameras capture video synchronously, and the resolution of each camera is 640×480. With this system, we confirmed the effectiveness of the proposed camera layout, captured actual IP images, and displayed real autostereoscopic images.
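The geometry of such a layout can be sketched as follows: a 6×5 grid of cameras placed on a sphere with 6-degree angular spacing, each aimed at the sphere center. The sphere radius is an assumed parameter (the abstract does not state it), and the centering of the grid on the equator is an illustrative choice.

```python
import math

def spherical_camera_layout(rows=5, cols=6, step_deg=6.0, radius_m=1.0):
    """Place rows x cols cameras on a sphere with step_deg angular spacing,
    grid centered on azimuth/elevation (0, 0), all aimed at the sphere
    center. Returns (position, view_direction) pairs in Cartesian coords."""
    cameras = []
    for r in range(rows):
        for c in range(cols):
            # Offsets so the array is centered on (0 deg, 0 deg).
            az = math.radians((c - (cols - 1) / 2) * step_deg)   # azimuth
            el = math.radians((r - (rows - 1) / 2) * step_deg)   # elevation
            x = radius_m * math.cos(el) * math.sin(az)
            y = radius_m * math.sin(el)
            z = radius_m * math.cos(el) * math.cos(az)
            pos = (x, y, z)
            view = (-x / radius_m, -y / radius_m, -z / radius_m)  # toward center
            cameras.append((pos, view))
    return cameras

layout = spherical_camera_layout()
print(len(layout))  # 30 cameras
```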
Assuming surgery performed with manipulators under open magnetic resonance imaging (MRI) equipment, we developed a coordinate-integration module and real-time functions that can display the manipulator's position on the MRI volume data and obtain MRI cross-section images at the manipulator's position. The small field of view of an endoscope is a problem in most minimally invasive surgeries with manipulators; we therefore propose endoscopic surgery with manipulators under open MRI equipment. The coordinate-conversion parameters are calculated in the coordinate-integration module by calibration with an optical tracking system and markers. The delay of the manipulator-position display on the volume data was within approximately 0.5 seconds, although it depended on the amount of volume data. We could also obtain MRI cross-section images at the manipulator's position using information from the coordinate-integration module. With these functions, we can cope with changes in organ shape during surgery, with guidance based on individual patient information. Furthermore, the manipulator can be used as an MRI probe to define the cross-section position, much like an ultrasonic probe.
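Coordinate-conversion parameters of this kind are commonly obtained by rigid point-set registration between paired marker positions seen by the optical tracker and the same markers located in MRI coordinates. The sketch below uses the standard Kabsch/Procrustes method; it is a generic illustration under that assumption, not the paper's specific calibration procedure, and the marker coordinates are synthetic.

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch/Procrustes: find rotation R and translation t minimizing
    ||R @ src_i + t - dst_i||, e.g. mapping optical-tracker marker
    coordinates into MRI volume coordinates."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Synthetic check: markers seen by the tracker vs. the same markers in MRI
# coordinates (rotated 90 deg about z, shifted by (10, 0, 5) mm).
markers_tracker = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
markers_mri = markers_tracker @ Rz.T + np.array([10.0, 0.0, 5.0])

R, t = rigid_transform(markers_tracker, markers_mri)
print(np.allclose(R @ markers_tracker[1] + t, markers_mri[1]))  # True
```

Once R and t are known, any tracked manipulator-tip position can be mapped into the MRI volume as `R @ tip + t`, which is what allows the cross-section plane to follow the manipulator.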