This paper proposes a new method to improve the contrast of a mammogram using multi-energy x-ray (MEX) images. The x-ray attenuation differences among breast tissues increase as the incident photons have lower energy. Thus an image obtained with a narrow low energy spectrum has higher contrast than a full (wide) energy spectrum image. The proposed mammogram enhancement utilizes this fact using MEX images. The lowpass data of a low energy spectrum image and the high frequency components of a wide energy spectrum image are combined to achieve high contrast and low noise.
The nonsubsampled contourlet transform (NSCT) is employed to decompose the image data into multi-scale and multi-directional information. The NSCT overcomes the limited directional selectivity of the wavelet transform by representing smoothness along contours effectively. The outcome of the transform is a lowpass subband and multiple bandpass directional
subbands. First, the lowpass subband coefficients of the wide energy spectrum image are replaced with those of the low energy spectrum image. Before this coefficient modification, the low energy spectrum image is processed to have high contrast and sharp details. Next, for the bandpass directional subbands, a locally adaptive bivariate shrinkage of the contourlet coefficients is applied to suppress noise. The bivariate shrinkage function exploits the interscale dependency of the coefficients. The local contrast of the resulting mammogram is considerably enhanced and shows clear fibroglandular tissue structures. Experimental results illustrate that the proposed method produces a high-contrast, low-noise image compared to conventional mammography based on a single energy spectrum image.
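As an illustrative sketch only (not the paper's actual NSCT pipeline), the lowpass-substitution fusion and an interscale bivariate shrinkage rule can be written as follows; here a single-scale Gaussian decomposition stands in for the NSCT, and the function names, `sigma`, and noise parameters are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_lowpass_swap(wide, low_energy, sigma=2.0):
    """Single-scale stand-in for the NSCT fusion: keep the high-frequency
    detail of the wide-spectrum image and replace its lowpass content
    with that of the low-energy (high-contrast) image."""
    wide = wide.astype(np.float64)
    low_energy = low_energy.astype(np.float64)
    wide_low = gaussian_filter(wide, sigma)       # lowpass subband of wide image
    detail = wide - wide_low                      # detail (bandpass) of wide image
    low_low = gaussian_filter(low_energy, sigma)  # lowpass subband of low-energy image
    return low_low + detail                       # swapped lowpass + original detail

def bivariate_shrink(child, parent, sigma_n, sigma_s):
    """Bivariate shrinkage in the Sendur-Selesnick style: shrink a detail
    coefficient using its parent at the coarser scale (interscale
    dependency). sigma_n: noise std; sigma_s: local signal std."""
    r = np.sqrt(child**2 + parent**2)
    factor = np.maximum(r - np.sqrt(3.0) * sigma_n**2 / sigma_s, 0.0) / np.maximum(r, 1e-12)
    return factor * child
```

In the paper the shrinkage is applied per directional subband with locally estimated variances; the Gaussian split above merely illustrates the lowpass-swap idea.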
Breast soft tissues have x-ray attenuation similar to that of mass tissue. Overlapping breast tissue structures often obscure masses and microcalcifications, which are essential to the early detection of breast cancer. In this paper, we propose a new method to generate a high contrast mammogram with the distinctive features of a breast cancer by using multiple images with different x-ray energy spectra. In experiments with a mammography simulation and real breast tissues, the proposed method provided notable images with clearly visible mass structures and microcalcifications.
Despite the rapid spread of digital cameras, many people cannot take the high-quality pictures they want due to a lack of photographic skill. To help users in unfavorable capturing environments, e.g. 'Night', 'Backlighting', 'Indoor', or 'Portrait', the automatic mode of cameras provides parameter sets defined by manufacturers. Unfortunately, this automatic functionality does not give pleasing image quality in general. In particular, the length of exposure (shutter speed) is a critical factor in taking high-quality pictures at night. One of the key causes of poor quality at night is image blur, which mainly comes from hand shake during long exposures. In this study, to circumvent this problem and to enhance the image quality of automatic cameras, we propose an intelligent camera processing core comprising SABE (Scene Adaptive Blur Estimation) and VisBLE (Visual Blur Limitation Estimation). SABE analyzes the high-frequency components in the DCT (Discrete Cosine Transform) domain. VisBLE determines the acceptable blur level on the basis of human visual tolerance and a Gaussian model. This visual tolerance model is developed from the physiological mechanism of human perception. In the experiments, the proposed method outperforms existing imaging systems in evaluations by general users as well as professional photographers.
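A minimal sketch of a DCT-domain sharpness measure in the spirit of SABE, assuming a simple low-frequency cutoff; the function name and the cutoff value are illustrative, not the paper's exact formulation:

```python
import numpy as np
from scipy.fft import dctn

def high_freq_ratio(img, cutoff=8):
    """Fraction of DCT energy outside the low-frequency corner.
    Blurry images concentrate energy at low frequencies, so they
    score lower; a sharp image of the same scene scores higher."""
    c = dctn(img.astype(np.float64), norm='ortho')
    total = np.sum(c**2)
    low = np.sum(c[:cutoff, :cutoff]**2)  # low-frequency energy block
    return (total - low) / total if total > 0 else 0.0
```

A blur-estimation stage could compare this ratio against a visual-tolerance threshold (as VisBLE would provide) to decide whether the captured frame is acceptably sharp.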
In this work we propose a method to build digital still cameras that can take pictures of a given scene with the knowledge of photographic experts, i.e. professional photographers. Photographic experts' knowledge means the experts' camera controls, i.e. shutter speed, aperture size, and ISO value, for taking pictures of a given scene. To implement photographic experts' knowledge, we redefine the Scene Mode of currently available digital cameras. For example, instead of a single Night Scene Mode as in conventional digital cameras, we break it into 76 scene modes with the Night Scene Representative Image Set. The idea of the night scene representative image set is an image set that covers all cases of night scenes with respect to camera controls. To enable appropriate picture taking for all the complex night scene cases, each image of the scene representative set comes with the corresponding photographic experts' camera controls, i.e. shutter speed, aperture size, and ISO value. Our method first matches a given scene to one of the redefined scene modes automatically, which is the realization of photographic experts' knowledge. Using the scene representative set, we apply a likelihood analysis to detect whether the given scene lies within the boundary of the representative set. If the given scene is classified as within the representative set, we calculate similarities by computing the correlation coefficient between the given scene and each of the representative images. Finally, the camera controls of the most similar representative image are used for taking a picture of the given scene, with fine tuning according to the degree of similarity.
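The correlation-based matching step can be sketched as follows, assuming precomputed feature vectors for the scene and the representative images; the feature representation and the acceptance threshold are placeholders, not values from the paper:

```python
import numpy as np

def match_scene(scene_feat, representatives, threshold=0.5):
    """Pick the representative image whose feature vector has the
    highest correlation coefficient with the given scene. Returns
    (None, threshold) if no correlation exceeds the threshold,
    i.e. the scene falls outside the representative set."""
    best_idx, best_corr = None, threshold
    for i, rep in enumerate(representatives):
        corr = np.corrcoef(scene_feat, rep)[0, 1]  # Pearson correlation
        if corr > best_corr:
            best_idx, best_corr = i, corr
    return best_idx, best_corr
```

The camera controls stored with the winning representative image would then be applied, optionally scaled by `best_corr` to realize the fine tuning the abstract mentions.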
This paper describes a new method for fast auto-focusing in image capturing devices. The method uses two defocused images taken at two prefixed lens positions, and the defocus blur level of each image is estimated using the Discrete Cosine Transform (DCT). These DCT values can be classified by the distance from the capturing device to the main object, so we can build a classifier relating distance to defocus blur level. With this classifier, the relation between the two blur levels gives the device the best-focus lens step. Ordinary auto-focusing such as Depth from Focus (DFF) needs several defocused images and compares the high-frequency components of each image. Also known as the hill-climbing method, this process generally requires about half of the images over all focus lens steps. Since the new method requires only two defocused images, it saves considerable focusing time and reduces shutter lag. Compared to existing Depth from Defocus (DFD) approaches that also use two defocused images, the new algorithm is simple and as accurate as the DFF method. Because of this simplicity and accuracy, the method can also be applied to fast 3D depth map construction.
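The lens-step decision from two blur levels can be sketched as a nearest-neighbor lookup over an offline calibration table; the table contents and structure below are made-up placeholders for illustration, not the paper's measured data:

```python
def best_focus_step(blur_a, blur_b, table):
    """Given the blur levels measured at the two prefixed lens
    positions, find the calibration entry whose (blur_a, blur_b)
    pair is nearest and return its best-focus lens step.
    'table' is a list of (blur_at_pos1, blur_at_pos2, lens_step)
    tuples measured offline for known object distances."""
    def dist(entry):
        b1, b2, _ = entry
        return (b1 - blur_a) ** 2 + (b2 - blur_b) ** 2
    return min(table, key=dist)[2]
```

Because only two captures feed this lookup, the focusing cost is fixed, unlike hill-climbing DFF whose cost grows with the number of lens steps.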
Image acquisition devices inherently lack the color constancy mechanism of the human visual system. The machine color constancy problem can be circumvented using a white balancing technique based upon accurate illumination estimation. Unfortunately, previous studies cannot give satisfactory results for both accuracy and stability under various conditions. To overcome these problems, we suggest a new method: spatial and temporal illumination estimation. This method, an evolution of the Retinex and Color by Correlation methods, predicts an initial illuminant point and estimates the scene illumination between that point and sub-gamuts derived from luminance levels. The proposed method raises the estimation probability not only by detecting the motion of scene reflectances but also by finding valid scenes using difference information from sequential scenes. The proposed method outperforms recently developed algorithms.
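For context, the white balancing step that an illumination estimate feeds can be illustrated with the classical gray-world baseline; this is a textbook reference point, not the proposed spatio-temporal estimator:

```python
import numpy as np

def gray_world_balance(img):
    """Gray-world white balance: assume the average scene color is
    achromatic, and scale each channel so its mean matches the
    overall mean. 'img' is an HxWx3 array in [0, 255]."""
    img = img.astype(np.float64)
    means = img.reshape(-1, 3).mean(axis=0)         # per-channel averages
    gains = means.mean() / np.maximum(means, 1e-8)  # channel gains toward gray
    return np.clip(img * gains, 0.0, 255.0)
```

A more accurate illuminant estimate (as the proposed method provides) would simply replace the gray-world gains with gains derived from the estimated illuminant.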
On a plasma display panel (PDP), the luminous elements of red, green, and blue have different time responses. As a result, colored trails and edges appear behind and in front of moving objects. To reduce these color artifacts, this paper proposes a motion-based discoloring method. The discoloring values are modeled as linear functions of a motion vector to reduce hardware complexity. Experimental results show that the proposed method effectively removes the colored trails and edges of moving objects, and clearer image sequences are observed compared with conventional ones.
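The hardware-friendly linear model can be sketched as below; the coefficients and offset are placeholder values that would in practice be calibrated per panel and per color channel:

```python
def discolor_value(mv_x, mv_y, a=(0.8, 0.3), b=0.0):
    """Discoloring value as a linear function of the motion vector
    (mv_x, mv_y). A linear form needs only multipliers and adders,
    which keeps the hardware complexity low. Coefficients 'a' and
    offset 'b' are illustrative placeholders."""
    return a[0] * mv_x + a[1] * mv_y + b
```

In a real panel pipeline, one such function per luminous element would precompensate pixel values along the motion trajectory before display.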