High Dynamic Range (HDR) video provides a step change in viewing experience, for example the ability to clearly see the soccer ball when it is kicked from the shadow of the stadium into sunshine. To achieve the full potential of HDR video, so-called true HDR, it is crucial that all the dynamic range that was captured is delivered to the display device and that tone mapping is confined to the display alone. Furthermore, to ensure widespread uptake of HDR imaging, it should be low cost and available on commodity hardware. This paper describes an end-to-end HDR pipeline for capturing, encoding and streaming high-definition HDR video in real time using off-the-shelf components. All the lighting captured by HDR-enabled consumer cameras is delivered via the pipeline, with minimal latency, to any display, including HDR displays and even mobile devices. The system thus provides an integrated HDR video pipeline covering everything from capture to post-production, archival and storage, compression, transmission, and display.
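As a rough illustration of how captured dynamic range can be carried through such a pipeline while leaving tone mapping to the display, the sketch below encodes linear-light HDR pixels into 10-bit code values with a log-style curve and inverts that mapping at the display end. The peak luminance, bit depth and curve here are assumptions made for the sketch; the paper's actual transfer function and codec integration are not implied.

```python
import numpy as np

def encode_hdr_frame(frame_nits, peak_nits=10000.0, bit_depth=10):
    """Map linear-light HDR pixels (cd/m^2) to integer code values.

    A log-style curve is used purely for illustration: it spends more
    code values on dark regions, where quantisation is most visible.
    """
    max_code = (1 << bit_depth) - 1
    normalised = np.clip(frame_nits / peak_nits, 0.0, 1.0)
    encoded = np.log1p(1023.0 * normalised) / np.log1p(1023.0)
    return np.round(encoded * max_code).astype(np.uint16)

def decode_hdr_frame(codes, peak_nits=10000.0, bit_depth=10):
    """Invert encode_hdr_frame back to approximate linear light,
    leaving any tone mapping to the display device itself."""
    max_code = (1 << bit_depth) - 1
    encoded = codes.astype(np.float64) / max_code
    normalised = np.expm1(encoded * np.log1p(1023.0)) / 1023.0
    return normalised * peak_nits
```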
There is a vast body of literature concerning the capture, storage, transmission and display of High Dynamic Range (HDR) imaging. Nevertheless, few works address the problem of bringing HDR to mobile devices. Their hardware limitations, such as processing power, storage space, graphics capabilities and screen characteristics, make this a significant challenge. However, since more and more HDR content is being produced, and since it may well become a standard within a few years, it is necessary to provide the means to visualize HDR images and video on mobile devices. The main goal of this paper is to present a survey of HDR visualization approaches and techniques developed specifically for mobile devices. To understand the main challenges that need to be addressed in order to visualize HDR on mobile devices, an overview of their main characteristics is given. The very low dynamic range of most mobile devices' displays implies that a tone mapping operator (TMO) must be applied in order to visualize HDR content. The current status of research on TMOs will be presented and analyzed, with special attention given to those developed with the limited characteristics of mobile devices' displays in mind. Another important issue is visualization quality assessment, that is, ensuring that HDR content can be visualized without losing the main characteristics of the original. Thus, evaluation studies of HDR content visualization on mobile devices will be presented and their results analyzed.
In the real world we can find large intensity ranges: the ratio from the brightest to the darkest part of the
scene can be of the order of 10000 to 1. Since most of our electronic displays have a limited range of
around 100 to 1, the last 20 years have seen much work done to develop algorithms that compress the actual dynamic range of an image to that available on the display device. These algorithms, known as tone mappers, attempt to preserve as many of the image's characteristics as possible. An increasing amount of research has also been devoted to evaluating which tone mapper is 'best'. Approaches have included pairwise comparisons of tone-mapped images, comparison with real scenes, and the use of images displayed on a High Dynamic Range (HDR) monitor. None of these approaches is entirely satisfactory, and all suffer from potential confounding factors arising from participants' interpretation of instructions.
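As an illustration of the kind of compression just described, the sketch below applies Reinhard et al.'s well-known global operator, which maps scene luminance ratios of around 10000:1 into the roughly 100:1 range of a conventional display. The key value of 0.18 and the luminance-only treatment are assumptions of this sketch, and no specific operator evaluated in this work is implied.

```python
import numpy as np

def reinhard_global_tmo(luminance, key=0.18, eps=1e-6):
    """Reinhard-style global tone mapping of linear scene luminance.

    Returns display luminance in [0, 1): bright regions are compressed
    towards 1 while dark regions remain nearly linear.
    """
    # Log-average (geometric mean) luminance characterises the scene.
    log_avg = np.exp(np.mean(np.log(luminance + eps)))
    # Scale so the log-average maps to the chosen 'key' brightness.
    scaled = key * luminance / log_avg
    # Sigmoid-like compression of the scaled luminance.
    return scaled / (1.0 + scaled)
```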
There is evidence that the spatial and chronological path of fixations made by observers when viewing an image (i.e. the scanpath) is repeated to some extent when the same image is presented to the observer again. In this paper we are the first to investigate the potential of using eye movement recordings, particularly scanpaths, as a discriminatory tool. We propose that if a tone-mapped image gives rise to scanpaths that differ from those obtained when viewing the original image, this may be an indication of a poor-quality tone mapper, since it elicits eye movements different from those observed when viewing the original.
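One simple way to quantify how far two scanpaths diverge is to quantise fixations onto a coarse grid and compare the resulting label sequences with an edit distance. The sketch below is a hypothetical illustration of that idea, not the analysis method used in this work; the grid size and normalised coordinates are assumptions.

```python
def scanpath_to_labels(fixations, grid=5, width=1.0, height=1.0):
    """Quantise (x, y) fixations into grid-cell labels, one per fixation."""
    labels = []
    for x, y in fixations:
        col = min(int(x / width * grid), grid - 1)
        row = min(int(y / height * grid), grid - 1)
        labels.append(row * grid + col)
    return labels

def edit_distance(a, b):
    """Levenshtein distance between two label sequences."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[len(b)]

def scanpath_dissimilarity(fix_a, fix_b, grid=5):
    """Normalised edit distance in [0, 1]; under the proposed hypothesis,
    larger values for tone-mapped vs. original scanpaths would suggest
    a poorer tone mapper."""
    la, lb = scanpath_to_labels(fix_a, grid), scanpath_to_labels(fix_b, grid)
    return edit_distance(la, lb) / max(len(la), len(lb), 1)
```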
A major challenge in Virtual Reality is to achieve realism at interactive rates. However, the computational time required for realistic image synthesis is significant, precluding such realism in real time. This paper demonstrates a concept that may be exploited to reduce rendering times substantially without compromising perceived visual quality in interactive tasks.
We demonstrate the principle of Inattentional Blindness: when attention is focused on a specific task, items in the scene that are unrelated to the performance of that task literally go unnoticed. Our experiment utilises this principle and varies the rendering quality over the image according to the task at hand.
Our results show that observers do not perceive the difference in image quality on objects unrelated to their task. We attribute this to Inattentional Blindness, as their attention was focused on the task rather than on image quality. The difference in rendering quality was clearly visible when subjects were instead asked to attend solely to spotting quality differences. Our results therefore show that Inattentional Blindness may be exploited to reduce rendering times substantially without compromising perceived visual quality in interactive tasks.
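A minimal sketch of task-driven selective rendering of this kind is given below: pixels near task-relevant objects receive a full sample budget, everything else a reduced one. The function name, sample counts and radius are illustrative assumptions, not the renderer used in the experiment; in an interactive setting the task-object positions would be updated every frame so the high-quality region follows the objects the task directs attention to.

```python
import math

def samples_for_pixel(pixel_xy, task_centres, high_spp=64, low_spp=4,
                      radius=0.15):
    """Per-pixel sample budget for a task-driven selective renderer.

    `pixel_xy` and `task_centres` are normalised image coordinates;
    pixels within `radius` of any task-related object get the full
    sample count, all other pixels the reduced one.
    """
    x, y = pixel_xy
    for cx, cy in task_centres:
        if math.hypot(x - cx, y - cy) <= radius:
            return high_spp
    return low_spp
```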
Conference Committee Involvement (6)
Multimedia on Mobile Devices 2010
18 January 2010 | San Jose, California, United States
Multimedia on Mobile Devices 2009
19 January 2009 | San Jose, California, United States
Multimedia on Mobile Devices 2008
28 January 2008 | San Jose, California, United States
Multimedia on Mobile Devices 2007
29 January 2007 | San Jose, California, United States
Multimedia on Mobile Devices II
16 January 2006 | San Jose, California, United States
Multimedia on Mobile Devices
17 January 2005 | San Jose, California, United States