As semiconductor structures become increasingly miniaturized and complex, measuring and analyzing these structures with a microscope becomes crucial. High-resolution transmission electron microscope (TEM) images are widely used, but they are expensive to acquire and analyze. If the regions and boundaries of the materials in a TEM image can be segmented automatically, the measurement cost is reduced. We propose a method to generate segmentation labels for TEM images using a deep learning model that performs segmentation based on weak supervision and active learning. The proposed method achieved 98% accuracy in 10% of the time required by the manual method. This approach will reduce the cost of high-resolution TEM image analysis and accelerate the semiconductor device development process.
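The labeling workflow above combines weak (model-generated) labels with selective human annotation. A minimal sketch of an uncertainty-based active learning query step, assuming an entropy-based selection criterion (the abstract does not specify the query strategy; all names here are hypothetical stand-ins):

```python
import math

def entropy(p):
    """Binary prediction entropy; higher means the model is less confident."""
    eps = 1e-12
    return -(p * math.log(p + eps) + (1 - p) * math.log(1 - p + eps))

def select_for_labeling(unlabeled_scores, budget):
    """Pick the `budget` most uncertain images (highest entropy) to send to a
    human annotator; the rest keep their weak, model-generated labels."""
    ranked = sorted(unlabeled_scores.items(),
                    key=lambda kv: entropy(kv[1]), reverse=True)
    return [name for name, _ in ranked[:budget]]

# Hypothetical per-image foreground probabilities from the current model:
scores = {"img_a": 0.51, "img_b": 0.97, "img_c": 0.45, "img_d": 0.03}
print(select_for_labeling(scores, 2))  # → ['img_a', 'img_c']
```

Images whose predictions sit near 0.5 are the most ambiguous, so routing only those to the annotator is one plausible way the reported 10x labeling speed-up could be realized.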
We present a novel method that can automatically correct astigmatism and focus error with high accuracy in scanning electron microscopy (SEM). An iterative deconvolution method and a feature-based compensation algorithm were applied to the beam control sequence, enabling us to obtain a clear SEM image without distortion. The proof of concept was verified by both mathematical analysis and experimental results. By utilizing the proposed method, accurate beam profile optimization is possible without malfunction, even when imaging a sample with anisotropic patterns.
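The abstract does not name the specific iterative deconvolution algorithm; Richardson-Lucy is one standard choice and serves here only as an illustrative sketch of how a blurred image could be iteratively restored given an estimated (possibly astigmatic) beam profile as the point spread function:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=30):
    """Richardson-Lucy iterative deconvolution (illustrative choice; the
    paper's actual deconvolution method is not specified in the abstract).
    `psf` is an estimate of the beam profile."""
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]  # adjoint of the blur operator
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate
```

In the proposed scheme the deconvolution result would feed the beam control sequence rather than be the end product, but the restoration step itself looks like the above.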
As semiconductor processing becomes more complicated and pattern sizes shrink, overlay metrology has become one of the most important issues in the semiconductor industry. Therefore, to obtain correct, reliable overlay values in semiconductor fabrication facilities (fabs), quantization methods for the efficient management and implementation of a measurement algorithm are required, as well as an understanding of the target structures in the semiconductor device. We obtained correct, reliable overlay values for the pattern using an image processing method. The quantization method, based on correlation analysis and a new algorithm for target structures, improved the sensitivity to misalignment in the pattern and enabled more stable and credible in-line measurement by decreasing the distribution of the residuals in the overlay values. Since the overlay values of the pattern in the fab were measured and managed more reliably and quickly, we expect our study to contribute to the yield enhancement of semiconductor companies.
To analyze lung regional ventilation using two-phase Xe-enhanced CT with wash-in and wash-out periods, we propose an accurate and fast method for deformable registration and ventilation imaging. To restrict the registration to the lung parenchyma, the left and right lungs are segmented. To correct positional differences and local deformation of the lungs, affine and demon-based deformable registrations are performed. The lungs of the wash-out image are globally aligned to the wash-in image by narrow-band distance propagation-based affine registration and nonlinearly deformed by a demon algorithm using a combined gradient force and active cells. To assess lung ventilation, a color-coded ventilation pattern map is generated by deformable registration and histogram analysis of xenon attenuation. Experimental results show that our accurate and fast deformable registration corrects not only positional differences but also local deformation. Our ventilation imaging aids the analysis of lung regional ventilation.
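The demon-based deformable step can be sketched with Thirion's classic demons force; the paper's combined gradient force and active cells build on this basic per-voxel update, which is not reproduced here:

```python
import numpy as np

def demons_force(fixed, moving):
    """One Thirion demons displacement field (basic form only; the paper's
    combined-gradient-force and active-cell variants are not reproduced).
    Returns per-pixel displacement components (uy, ux)."""
    gy, gx = np.gradient(fixed.astype(float))     # fixed-image gradient
    diff = moving.astype(float) - fixed.astype(float)
    denom = gy**2 + gx**2 + diff**2               # stabilizing denominator
    denom = np.where(denom < 1e-12, 1e-12, denom)
    return diff * gy / denom, diff * gx / denom
```

In practice this force is applied iteratively, smoothing the accumulated field between iterations, until the wash-out lungs converge onto the wash-in lungs.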
Registration of preoperative CT data to intra-operative video images is necessary not only to compare the postoperative outcome of the vocal fold with the preplanned shape but also to provide image guidance for the fusion of all imaging modalities. We propose a 2D-3D registration method using gradient-based mutual information. The 3D CT scan is aligned to 2D endoscopic images by finding the corresponding viewpoint between the real camera for the endoscopic images and the virtual camera for the CT scans. Even though mutual information has been used successfully to register different imaging modalities, it is difficult to robustly register the CT-rendered image to the endoscopic image due to the varying light patterns and shape of the vocal fold. The proposed method calculates the mutual information in the gradient images as well as the original images, assigning more weight to high-gradient regions. The proposed method thus emphasizes the vocal fold and allows robust matching regardless of the surface illumination. To find the viewpoint with maximum mutual information, a downhill simplex method is applied in a conditional multi-resolution scheme, which makes the result less sensitive to local maxima. To validate the registration accuracy, we evaluated the sensitivity to the initial viewpoint of the preoperative CT. Experimental results showed that gradient-based mutual information provided robust matching not only for two identical images with different viewpoints but also for different images acquired before and after surgery. The results also showed that the conditional multi-resolution scheme led to a more accurate registration than a single-resolution scheme.
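A minimal sketch of histogram-based mutual information, with an assumed way of combining intensity MI and gradient-image MI (the abstract says both are used with more weight on high-gradient regions, but does not give the exact weighting; the blend factor `alpha` is hypothetical):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images (in nats)."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def gradient_weighted_mi(a, b, alpha=0.5):
    """Blend of intensity MI and gradient-magnitude MI (assumed weighting;
    the paper's exact scheme is not specified in the abstract)."""
    ga = np.hypot(*np.gradient(a.astype(float)))
    gb = np.hypot(*np.gradient(b.astype(float)))
    return (1 - alpha) * mutual_information(a, b) \
        + alpha * mutual_information(ga, gb)
```

The viewpoint search described above could then maximize this score with a downhill simplex optimizer, e.g. `scipy.optimize.minimize(..., method="Nelder-Mead")` on the negated score, restarted at successively finer resolutions.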
To segment low-density lung regions in chest CT images, most methods use differences in the gray-level values of pixels. However, radiodense pulmonary vessels and pleural nodules that contact the surrounding anatomy are often excluded from the segmentation result. To smooth the lung boundary segmented by gray-level processing in chest CT images, we propose a new method using a scan line search. Our method consists of three main steps. First, the lung boundary is extracted by our automatic segmentation method. Second, the segmented lung contour is smoothed in each axial CT slice. We propose a scan line search to track the points on the lung contour and efficiently find regions of rapidly changing curvature. Finally, to provide a consistent appearance between lung contours in adjacent axial slices, 2D closing in the coronal plane is applied within a pre-defined subvolume. Our method was evaluated in terms of visual inspection, accuracy, and processing time. The results show that the smoothness of the lung contour was considerably increased by compensating for pulmonary vessels and pleural nodules.
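The final step, 2D closing in the coronal plane, can be sketched with `scipy.ndimage`; the (z, y, x) axis convention is an assumption, and the restriction to a pre-defined subvolume is omitted for brevity:

```python
import numpy as np
from scipy.ndimage import binary_closing

def smooth_coronal_slices(mask, structure_size=5):
    """Apply 2D morphological closing slice-by-slice in the coronal plane.
    Assumes a (z, y, x) volume layout, so a coronal slice is a fixed y index;
    the paper applies this only within a pre-defined subvolume."""
    structure = np.ones((structure_size, structure_size), dtype=bool)
    out = mask.astype(bool).copy()
    for y in range(out.shape[1]):
        out[:, y, :] = binary_closing(out[:, y, :], structure=structure)
    return out
```

Closing (dilation followed by erosion) fills small concavities left by vessels and pleural nodules, which is what restores inter-slice consistency of the contour.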
To investigate changes in pulmonary nodules in temporal chest CT scans, we propose a novel technique for the segmentation and registration of lungs. Our method is composed of the following steps. First, automatic segmentation is used to identify the lungs in chest CT scans. Second, optimal cube registration is performed to correct the gross translational mismatch of the lungs; this initial registration does not require any anatomical landmarks. Third, a 3D distance map is generated by narrow-band distance propagation, which drives fast and robust convergence to the optimum value. Fourth, the distance measure between surface boundary points is evaluated repeatedly by the selective distance measure (SDM), and the final geometrical transformations are applied to ten pairs of successive chest CT scans. Fifth, nodule correspondences are established by the pairs with the smallest Euclidean distances. The performance of our method was evaluated in terms of visual inspection and accuracy. The positional differences between the lungs of initial and follow-up CT scans were greatly reduced by the optimal cube registration, and this initial alignment was refined by the subsequent iterative surface registration. For accuracy assessment, we evaluated the root-mean-square (RMS) error between corresponding nodules on a per-center basis. A reduction of the RMS error was obtained with the optimal cube registration, the subsequent iterative surface registration, and the nodule registration. Experimental results show that our segmentation and registration method extracts accurate lungs and aligns them much faster than conventional methods using a distance measure. The accurate and fast results of our method should be useful for the radiologist's evaluation of pulmonary nodules on chest CT scans.
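The per-center RMS error used in the accuracy assessment is a straightforward computation over corresponding nodule centers; a sketch:

```python
import numpy as np

def rms_nodule_error(centers_a, centers_b):
    """Root-mean-square Euclidean distance between corresponding nodule
    centers (per-center basis). Inputs are N x 3 arrays of (z, y, x)
    coordinates for matched nodules in the two scans."""
    a = np.asarray(centers_a, dtype=float)
    b = np.asarray(centers_b, dtype=float)
    d = np.linalg.norm(a - b, axis=1)   # per-pair Euclidean distance
    return float(np.sqrt(np.mean(d**2)))
```

Evaluating this after each stage (cube, surface, and nodule registration) quantifies how much residual mismatch each stage removes.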