Content adaptive enhancement of video images
15 October 2012
Digital video products such as TVs, set-top boxes and DV players have circuits that enhance the quality of incoming video content. Users may control the parameters of these circuits according to the video source for optimum quality. However, there is a need for a procedure that can adjust these parameters automatically, without user interaction. A three-stage method for content adaptive enhancement of video images (CAEVI) in display processors is proposed. The first stage measures video signal statistics, such as intensity and frequency histograms, over the image’s active area. The second stage analyzes the measured statistics and generates control parameters for the image processing blocks: one of four quality classes (low, medium, high or special) is assigned to the incoming video, and a set of predefined control parameters for this class is selected. At the third stage, the set of control parameters is applied to the corresponding image processing blocks to reduce noise, improve signal transitions, enhance spatial details, contrast, brightness and saturation, and resample the video image. Video signal statistics are measured and accumulated for each frame, and control parameters are gradually adjusted on a per-scene basis. The measurement and processing blocks are implemented in hardware to provide real-time response, while the image analysis and quality classification algorithm is implemented in embedded software for flexibility. The proposed method has been implemented in a video processor as the “Auto HQV” feature. The method was originally developed for TVs and is currently being adapted for hand-held devices.
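The three stages described above can be sketched in simplified form. This is an illustrative reconstruction, not the paper's implementation: the class thresholds, parameter sets, high-frequency measure, and the blending factor used for gradual per-scene adjustment are all assumptions.

```python
import numpy as np

# Hypothetical per-class control-parameter sets (values are illustrative,
# not from the paper).
QUALITY_PARAMS = {
    "low":     {"noise_reduction": 0.8, "detail_gain": 0.2, "saturation": 1.0},
    "medium":  {"noise_reduction": 0.5, "detail_gain": 0.5, "saturation": 1.1},
    "high":    {"noise_reduction": 0.2, "detail_gain": 0.8, "saturation": 1.2},
    "special": {"noise_reduction": 0.0, "detail_gain": 0.0, "saturation": 1.0},
}

def measure_statistics(frame):
    """Stage 1: intensity histogram plus a crude high-frequency measure
    over the frame (standing in for the paper's frequency histogram)."""
    hist, _ = np.histogram(frame, bins=256, range=(0, 256))
    hf_energy = np.abs(np.diff(frame.astype(np.float64), axis=1)).mean()
    return hist, hf_energy

def classify_quality(hist, hf_energy):
    """Stage 2: map measured statistics to one of the four quality classes.
    The thresholds below are invented for illustration."""
    dark_ratio = hist[:32].sum() / hist.sum()  # fraction of near-black pixels
    if dark_ratio > 0.9:
        return "special"   # e.g. a mostly black scene
    if hf_energy < 2.0:
        return "low"       # soft, low-detail source
    if hf_energy < 8.0:
        return "medium"
    return "high"

def update_parameters(current, target, alpha=0.1):
    """Stage 3 support: blend control parameters toward the selected set so
    that changes take effect gradually, on a scene basis."""
    return {k: (1 - alpha) * current[k] + alpha * target[k] for k in current}

# Example: classify one synthetic noisy frame and nudge the parameters.
frame = (np.random.default_rng(0).random((120, 160)) * 255).astype(np.uint8)
hist, hf = measure_statistics(frame)
params = dict(QUALITY_PARAMS["medium"])
params = update_parameters(params, QUALITY_PARAMS[classify_quality(hist, hf)])
```

In a real display processor the measurement and blending would run in hardware per frame, with only the classification logic in embedded software, as the abstract describes.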
© (2012) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Vladimir Lachine, Louie Lee, Gregory Smith, "Content adaptive enhancement of video images", Proc. SPIE 8499, Applications of Digital Image Processing XXXV, 84991M (15 October 2012); https://doi.org/10.1117/12.930528