High dynamic range (HDR) imaging has attracted growing interest from scientific, technical, and artistic communities in recent years. Progress in capture and display technologies, the increasing availability of processing power in both professional and consumer devices, and the continuing drive toward more photorealistic, higher-quality image and video content have all drawn further attention to HDR imaging. One can argue that HDR imaging could lead to the next revolution in image and video representation, comparable to past transitions from grayscale to color, from standard definition to high definition, and from 2-D to 3-D.
This special section comprises seven original contributions that present readers with some of the latest developments and emerging technologies in the HDR image- and video-processing chain, whether at the level of individual components, underlying fundamentals, or end-to-end system issues.
In “Dynamic range reduction and contrast adjustment of infrared images in surveillance scenarios,” A. Rossi et al. propose a technique called cluster-based dynamic range reduction (DRR) and contrast adjustment (CDCA) for the visualization of infrared (IR) images. The effectiveness of the technique is analyzed on IR images for surveillance applications, and results are compared with those of other IR-HDR visualization methods, showing the benefits of the proposed CDCA in terms of detail enhancement and robustness against the horizon effect and the presence of hot objects.
The paper “Super-resolution reconstruction of high dynamic range images in a perceptually uniform domain,” authored by T. Bengtsson et al., proposes a novel formulation of the joint super-resolution HDR image reconstruction problem, using an image domain in which the residual function of the inverse problem relates to the perception of the human visual system. The authors demonstrate that the proposed approach avoids some severe reconstruction artifacts typical of conventional super-resolution methods.
In “High dynamic range imaging on mobile devices using fusion of multiexposure images,” C. Jung et al. propose a simple but effective method for generating HDR content from three differently exposed images, using an approach that departs from the state of the art. They show that their method produces clear image details and achieves natural HDR rendering results suitable for mobile imaging devices.
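The general idea behind multi-exposure fusion can be sketched as weighting each exposure per pixel by how well exposed it is, then normalizing and blending. The following is a minimal illustrative sketch of that generic principle (assuming grayscale images with values in [0, 1]); it is not the specific method proposed in the paper.

```python
import numpy as np

def exposure_fusion(images, sigma=0.2):
    """Blend differently exposed grayscale images (values in [0, 1])
    using per-pixel 'well-exposedness' weights.

    Generic multi-exposure fusion sketch; the weighting function and
    sigma are illustrative assumptions, not the authors' method."""
    stack = np.stack([img.astype(np.float64) for img in images])
    # Pixels near mid-gray (0.5) are treated as well exposed.
    weights = np.exp(-((stack - 0.5) ** 2) / (2.0 * sigma ** 2))
    # Normalize weights across exposures so they sum to 1 per pixel.
    weights /= weights.sum(axis=0) + 1e-12
    return (weights * stack).sum(axis=0)

# Three synthetic "exposures" of the same flat scene.
under = np.full((4, 4), 0.1)
mid   = np.full((4, 4), 0.5)
over  = np.full((4, 4), 0.9)
fused = exposure_fusion([under, mid, over])
```

By symmetry of the weighting around mid-gray, the fused result here stays at 0.5; on real images the weights vary per pixel, pulling detail from whichever exposure best captures each region.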
F. Toadere’s contribution to this special section, titled “Simulating the functionality of a digital camera pipeline,” presents a complete simulation model for a digital camera system, covering the conversion from light to numerical signal, color processing, and rendering.
In “Context-dependent JPEG backward-compatible high-dynamic range image compression,” P. Korshunov and T. Ebrahimi propose, based on various subjective tests, an architecture to achieve a JPEG backward-compatible HDR image compression scheme and compare its performance to popular state-of-the-art HDR image compression techniques.
The paper “Multiexposure and multifocus image fusion with multidimensional camera shake compensation” by A. L. Gomez et al. presents a single algorithm that can perform both multifocus and multiexposure image fusion. Experimental results and their analysis show that the proposed algorithm is capable of producing HDR or multifocus images by registering and fusing a set of multiexposure or multifocus images taken in the presence of camera shake.
M. Narwaria et al., in “Tone mapping-based high dynamic range image compression: study of optimization criterion and perceptual quality,” examine the important problem of quality assessment in tone-mapping-based HDR image compression, from both objective and subjective viewpoints.
We hope these contributions will provide new insights and further accelerate research and innovation in the emerging field of HDR imaging.
Touradj Ebrahimi is currently a professor at Ecole Polytechnique Fédérale de Lausanne, heading its Multimedia Signal Processing Group. He has been the recipient of various distinctions and awards, such as the IEEE and Swiss national ASE award, the SNF-PROFILE grant for advanced researchers, four ISO certificates for key contributions to MPEG-4 and JPEG 2000, and the best paper award of IEEE Transactions on Consumer Electronics. He is also the head of the Swiss delegation to MPEG, JPEG, and SC29, and acts as the chairman of the Advisory Group on Management in SC29. He is a member of the scientific advisory boards of various start-up and established companies in the general field of information technology. He has served as a scientific expert and evaluator for research funding agencies, such as those of the European Commission, the Greek Ministry of Development, the Austrian National Foundation for Scientific Research, and the Portuguese Science Foundation, as well as for a number of venture capital companies active in the field of information technologies and communications. He is the author or coauthor of more than 200 research publications and holds 14 patents.
Andrew Tescher is a multimedia technologist and international standards consultant. He is the international representative of INCITS L3 and the head of the U.S. delegation to SC 29. He has been a major contributor to image/video compression technologies for commercial and defense-related applications, including space platform implementations. He invented several teleconferencing systems and coauthored related key patents. He has been the recipient of numerous major professional awards, including the Eduard Rhein Prize from Germany’s Eduard Rhein Foundation, as well as the Gold Medal of SPIE in recognition of his pioneering contributions to the fields of image and video compression. He is a fellow and life member of SPIE, a fellow of the OSA, and a past president of SPIE. His publications include over 80 papers covering compression technologies for commercial and space applications. He is an associate editor for Optical Engineering and was chair of the Industrial Advisory Board of the Integrated Media Systems Center at USC.