Color filter array (CFA) demosaicking is an essential process for restoring full-color images from the incomplete color samples acquired by single-sensor digital cameras. We present two main contributions to CFA demosaicking. First, we analyze the causes of the two main types of demosaicking artifacts and examine the schemes that are effective in suppressing each of them. Second, by combining and extending the core merits of the schemes examined, we construct a new CFA demosaicking algorithm that suppresses as many demosaicking artifacts as possible and produces full-color images of high quality. Experiments using a large variety of test images show that the proposed method outperforms existing state-of-the-art methods both visually and in terms of peak signal-to-noise ratio.
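The abstract does not spell out the proposed algorithm, so no attempt is made to reproduce it here. As background for what "demosaicking" means in practice, the sketch below shows the simplest baseline that such methods improve upon: bilinear interpolation of the green channel of a Bayer RGGB mosaic. The function name and the RGGB layout assumption are illustrative, not taken from the paper.

```python
import numpy as np

def bilinear_demosaic_green(mosaic):
    """Estimate the full green channel of a Bayer RGGB mosaic.

    In an RGGB layout, green samples sit where (row + col) is odd,
    and every red/blue site has four green axial neighbours, so the
    green value there is estimated as their average.  Borders are
    handled with reflection padding.  This is the classic bilinear
    baseline, not the method of the paper above.
    """
    h, w = mosaic.shape
    gmask = (np.indices((h, w)).sum(axis=0) % 2) == 1  # green sites
    green = np.where(gmask, mosaic, 0.0)
    pad = np.pad(green, 1, mode="reflect")
    # Average of the four axial neighbours; at R/B sites all four
    # are green samples, so dividing by 4 gives the bilinear estimate.
    est = (pad[:-2, 1:-1] + pad[2:, 1:-1]
           + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    return np.where(gmask, mosaic, est)
```

Bilinear interpolation ignores edges, which is exactly what causes the zipper and false-color artifacts that edge-directed demosaicking methods are designed to suppress.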
Proc. SPIE 3963, Color Imaging: Device-Independent Color, Color Hardcopy, and Graphic Arts V
KEYWORDS: Matrices, Interference (communication), Measurement devices, Color reproduction, Human vision and color perception, Control systems, Image sensors, Environmental sensing, Sensors, Digital recording
As the spectral sensitivities of most color devices typically differ from those of human vision or of the corresponding output devices, signals from the different channels (such as Red, Green and Blue) of a color recording device need to be properly mixed to generate color information suitable for viewing. The mixing (or transformation) that minimizes some error measure between the target and the transformed colors of a large set of color patches is normally used for this purpose. Because color error is the only criterion in determining such a transformation, the measurement noise of the color device may be amplified in the target color space without much control. In this paper we present a new color correction method that takes into account both the color error and the noise variance in reproduced images. This method is useful in applications where the measurement noise of the recording device is not necessarily low. The proposed method is then extended to include other color reproduction constraints. Both analytical solutions and experimental results for the proposed method are reported in the paper.
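The abstract gives only the idea of trading color error against noise amplification, not the actual formulation. As an illustrative stand-in (not the authors' method), the sketch below fits a 3x3 correction matrix by least squares over a patch set and adds a simple ridge-style penalty on the matrix entries: since output noise variance grows with the squared entries of the matrix, shrinking them limits noise amplification at some cost in color accuracy. All names and the penalty form are assumptions.

```python
import numpy as np

def color_correction_matrix(device_rgb, target_xyz, noise_weight=0.0):
    """Fit a 3x3 matrix M minimizing

        || device_rgb @ M - target_xyz ||^2  +  noise_weight * ||M||^2

    over N color patches (both inputs are (N, 3) arrays).  With
    noise_weight = 0 this is ordinary least-squares color correction;
    a positive weight shrinks M, trading color error for reduced
    noise amplification.  Illustrative sketch, not the paper's method.
    """
    A = device_rgb
    reg = A.T @ A + noise_weight * np.eye(3)
    return np.linalg.solve(reg, A.T @ target_xyz)
```

With noiseless, full-rank patch data the unregularized fit recovers the true transformation exactly; increasing `noise_weight` yields a matrix of smaller norm, i.e. lower noise gain.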
Automated analysis and annotation of video sequences are important for digital video libraries, content-based video browsing, and data mining projects. A successful video annotation system should provide users with a useful video content summary in a reasonable processing time. Given the wide variety of video genres available today, automatically extracting meaningful video content for annotation remains difficult with currently available techniques. However, a wide range of videos has inherent structure, so some prior knowledge about the video content can be exploited to improve our understanding of high-level video semantics. In this paper, we develop tools and techniques for analyzing structured video using the low-level information available directly from MPEG compressed video. Working directly in the compressed domain greatly reduces processing time and enhances storage efficiency. As a testbed, we have developed a basketball annotation system that combines the low-level information extracted from the MPEG stream with prior knowledge of basketball video structure to provide high-level content analysis, annotation, and browsing for events such as wide-angle and close-up views, fast breaks, steals, potential shots, number of possessions, and possession times. We expect that our approach can also be extended to structured video in other domains.
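The abstract does not describe the system's internals, so the sketch below only illustrates the general flavor of compressed-domain analysis it alludes to: detecting shot boundaries by comparing intensity histograms of consecutive DC-coefficient thumbnails, which can be decoded cheaply from MPEG I-frames without full decompression. The function, threshold, and histogram distance are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np

def detect_cuts(dc_frames, bins=16, threshold=0.5):
    """Flag likely shot boundaries in a sequence of DC thumbnails.

    `dc_frames` is a list of small grayscale arrays (one per frame,
    values in [0, 256)), such as the DC-coefficient images obtainable
    from MPEG I-frames.  A cut is declared at index i when the L1
    distance between the normalized histograms of frames i-1 and i
    exceeds `threshold` (the maximum possible distance is 2.0).
    Illustrative sketch only.
    """
    hists = []
    for f in dc_frames:
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        hists.append(h / max(h.sum(), 1))
    return [i for i in range(1, len(hists))
            if np.abs(hists[i] - hists[i - 1]).sum() > threshold]
```

Segmenting the stream into shots this way is typically the first step before domain knowledge (here, basketball structure) is applied to classify each shot as, e.g., a wide-angle or close-up view.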
Conference Committee Involvement (1)
Multimedia Systems and Applications IX
2 October 2006 | Boston, Massachusetts, United States