We propose an efficient disparity map enhancement method that improves the alignment of disparity edges and color edges even in the presence of mixed pixels and provides alpha values for pixels at disparity edges as a byproduct. In contrast to previous publications, the proposed method addresses mixed pixels at disparity edges and does not introduce mixed disparities that can lead to object deformations in synthesized views. The proposed algorithm computes transparencies by performing alpha matting per disparity layer. These alpha values indicate the degree of affiliation with a disparity layer and can hence be used as an indicator for a disparity reassignment that aligns disparity edges with color edges and accounts for mixed pixels. We demonstrate the capabilities of the proposed method on various images and corresponding disparity maps, including images that contain fuzzy object borders (e.g., fur). Furthermore, the proposed method is qualitatively and quantitatively evaluated using disparity ground truth and compared to previously published disparity post-processing methods.
We analyse the impact of depth map post-processing techniques on the visual quality of stereo pairs that contain a novel view. To this end, we conduct a user study in which we address (1) the effects of depth map post-processing on the quality of stereo pairs that contain a novel view and (2) the question of whether objective quality metrics are suitable for evaluating them. We generate depth maps of six stereo image pairs and apply six different post-processing techniques. The unprocessed and the post-processed depth maps are used to generate novel views. The original left views and the novel views form the stereo pairs that are evaluated in a paired-comparison study. The obtained results are compared with the results delivered by the objective quality metrics. We show that post-processing depth maps significantly enhances the perceived quality of stereo pairs that include a novel view. We further observe that the correlation between subjective and objective quality is weak.
H.264, as a new-generation video coding standard, is becoming increasingly important for international broadcasting systems such as DVB-H and DMB. In comparison to its predecessors MPEG-2 and MPEG-4 SP/ASP, H.264 achieves improved compression efficiency at the cost of increased computational complexity. Real-time execution of the H.264 decoding process poses a major challenge on mobile devices due to their limited processing capabilities. Multi-core systems provide an elegant and power-efficient solution to overcome this performance limitation. However, efficiently distributing the video algorithm among multiple processing units is a non-trivial task: it requires detailed knowledge of the algorithmic complexity, the dynamic variations, and the inter-dependencies between functional blocks. The objective of this paper is to investigate the dynamic behavior of the H.264 decoding process and the interaction between the main decoding tasks in the context of multi-core environments. We use an H.264 decoder model to investigate the efficiency of a decoding system under various conditions (e.g., different FIFO buffer sizes, bitstreams, coding features, and bitrates). The gained insights are finally used to optimize the runtime behavior of a multi-core decoding system and to find a good trade-off between core usage and buffer sizes.
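The task-level decomposition with FIFO buffers described in this abstract can be illustrated with a minimal producer-consumer sketch. The two stage names below ("parse" and "reconstruct") and the use of a bounded queue between them are assumptions for illustration only, not the decoder model evaluated in the paper; the point is merely that the FIFO size bounds how far the producer stage can run ahead before it stalls.

```python
import threading
import queue

def run_pipeline(num_frames, fifo_size):
    """Run a hypothetical two-stage decoding pipeline over `num_frames`.

    A bounded FIFO connects the stages: `put` blocks when the buffer is
    full (producer stall), `get` blocks when it is empty (consumer stall).
    """
    fifo = queue.Queue(maxsize=fifo_size)  # bounded buffer between stages
    output = []

    def parse_stage():
        # Stand-in for parsing / entropy decoding (illustrative only).
        for frame_id in range(num_frames):
            fifo.put(frame_id)  # blocks while the FIFO is full
        fifo.put(None)          # sentinel: end of bitstream

    def reconstruct_stage():
        # Stand-in for reconstruction / filtering (illustrative only).
        while True:
            item = fifo.get()   # blocks while the FIFO is empty
            if item is None:
                break
            output.append(item)

    producer = threading.Thread(target=parse_stage)
    consumer = threading.Thread(target=reconstruct_stage)
    producer.start()
    consumer.start()
    producer.join()
    consumer.join()
    return output

print(run_pipeline(5, fifo_size=2))  # [0, 1, 2, 3, 4]
```

In a real multi-core decoder the trade-off mirrors the one the abstract names: a larger FIFO absorbs per-frame workload variation between stages at the cost of memory and latency, while a smaller FIFO couples the stages tightly and leaves cores idle whenever their workloads diverge.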