This work presents a region-growing image segmentation approach based on superpixel decomposition. Starting from an initial contour-constrained oversegmentation of the input image, segmentation is achieved by iteratively merging similar superpixels into regions. This approach raises two key issues: (1) how to compute the similarity between superpixels so that merging is accurate, and (2) in which order those superpixels should be merged. To address these issues, we first introduce a robust adaptive multiscale superpixel similarity in which regions are compared both at the content level and along their common border. Second, we propose a global merging strategy to efficiently guide the region-merging process. This strategy uses an adaptive merging criterion to ensure that the best region aggregations are given the highest priorities, making it possible to reach a final segmentation into consistent regions with strong boundary adherence. Experiments on the BSDS500 image dataset show to what extent our method compares favorably against other well-known image segmentation algorithms. The obtained results demonstrate the promising potential of the proposed approach.
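The priority-driven global merging described above can be sketched as follows. This is a toy illustration only: the mean-color Euclidean distance stands in for the paper's multiscale content-and-border similarity, and the superpixel labels, colors, adjacency graph, and threshold are all invented for the example.

```python
import heapq

# Toy superpixels described by mean RGB color; adjacency is hand-built.
mean_color = {0: (200, 30, 30), 1: (205, 35, 28), 2: (20, 20, 220), 3: (25, 25, 210)}
adjacency = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2}}

def dissimilarity(a, b):
    # Euclidean distance between mean colors (a stand-in for the
    # adaptive multiscale similarity of the paper).
    return sum((ca - cb) ** 2 for ca, cb in zip(mean_color[a], mean_color[b])) ** 0.5

def global_merge(max_dist=60.0):
    """Global merging strategy: always pop the most similar adjacent pair
    first, so the best aggregations get the highest priority; stop merging
    pairs that exceed the criterion."""
    parent = {s: s for s in mean_color}

    def find(x):
        # Union-find with path compression to track merged regions.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    heap = [(dissimilarity(a, b), a, b)
            for a in adjacency for b in adjacency[a] if a < b]
    heapq.heapify(heap)
    while heap:
        d, a, b = heapq.heappop(heap)
        ra, rb = find(a), find(b)
        if ra != rb and d <= max_dist:
            parent[rb] = ra  # merge the two regions
    # Final region label for each superpixel.
    return {s: find(s) for s in mean_color}

print(global_merge())  # → {0: 0, 1: 0, 2: 2, 3: 2}
```

The two reddish superpixels (0, 1) and the two bluish ones (2, 3) merge into two regions, while the dissimilar cross-pairs stay separated by the merging criterion.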
The production of stereoscopic 3D HD content is increasing considerably, and expertise in 2-view acquisition is still maturing. High-quality material must be delivered to the audience, but this is not always ensured, so correction of the stereo views may be required. This is done via disparity-compensated view synthesis. A robust method has been developed that deals with the acquisition problems that introduce discomfort (e.g., hyperdivergence and hyperconvergence) as well as those that may disrupt the correction itself (vertical disparity, color differences between views, etc.). The method has three phases. The first is a preprocessing phase that corrects the stereo images and estimates features (e.g., the disparity range) over the sequence. The second (main) phase then performs disparity estimation and view synthesis: dual disparity estimation based on robust block matching, discontinuity-preserving filtering, consistency checking, and occlusion handling has been developed, and accurate view synthesis is carried out through disparity compensation. Disparity assessment has been introduced in order to detect and quantify errors, and a post-processing phase deals with these errors as a fallback mode. The paper focuses on disparity estimation and view synthesis for HD images. Quality assessment of synthesized views on a large set of HD video data has demonstrated the effectiveness of our method.
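The block-matching core of such a disparity estimator can be sketched as below. This is a minimal sketch under stated assumptions: a rectified pair, a brute-force SAD search over horizontal shifts, and a synthetic test pattern; the robustness machinery of the method (dual estimation, discontinuity-preserving filtering, consistency and occlusion handling) is not reproduced.

```python
import numpy as np

def block_match(left, right, max_disp=4, block=3):
    """Per-pixel horizontal disparity by minimizing the sum of absolute
    differences (SAD) over a square block, a simplified stand-in for the
    robust block matching used in the described method."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=int)
    pad_l = np.pad(left, half, mode='edge')
    pad_r = np.pad(right, half, mode='edge')
    for y in range(h):
        for x in range(w):
            best_cost, best_d = float('inf'), 0
            patch_l = pad_l[y:y + block, x:x + block]
            for d in range(max_disp + 1):
                if x - d < 0:
                    break  # candidate match would fall outside the right view
                patch_r = pad_r[y:y + block, x - d:x - d + block]
                cost = np.abs(patch_l - patch_r).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic rectified pair: the views show the same textured row pattern
# offset by 2 pixels, so the true disparity is 2.
base = (np.arange(24) * 13 % 97).astype(float)
left = np.tile(base[2:18], (8, 1))
right = np.tile(base[4:20], (8, 1))
disp = block_match(left, right)  # interior pixels recover disparity 2
```

With the disparity map in hand, disparity-compensated view synthesis amounts to resampling one view at positions shifted by the estimated disparities.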
View synthesis introduces geometric distortions that are not handled efficiently by existing image quality assessment metrics. Despite the widespread adoption of 3D technology, and notably 3D television (3DTV) and free-viewpoint television (FTV), the field of view synthesis quality assessment has not yet been widely investigated, and new quality metrics are required. In this study, we propose a new full-reference objective quality assessment metric: the View Synthesis Quality Assessment (VSQA) metric. Our method is dedicated to artifact detection in synthesized viewpoints and aims to handle areas where disparity estimation may fail: thin objects, object borders, transparency, variations of illumination or color differences between the left and right views, periodic objects, etc. The key feature of the proposed method is the use of three visibility maps that characterize complexity in terms of texture, diversity of gradient orientations, and presence of high contrast. Moreover, the VSQA metric can be defined as an extension of any existing 2D image quality assessment metric. Experimental tests have shown the effectiveness of the proposed method.
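The idea of extending a 2D metric with visibility maps can be illustrated as follows. This is a toy sketch, not the published VSQA formulation: squared error stands in for the base 2D metric, local variance stands in for just one of the three visibility cues (texture complexity), and the masking-style weighting is an illustrative choice of ours.

```python
import numpy as np

def local_variance(img, k=3):
    """Local variance as a crude texture-complexity map (standing in for
    one of the three visibility maps; the gradient-orientation and
    contrast maps would be computed analogously)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = p[y:y + k, x:x + k].var()
    return out

def vsqa_like_score(reference, synthesized):
    """Weight a per-pixel 2D distortion map (squared error here, standing
    in for any 2D IQA metric) by a visibility map, then pool by averaging.
    Lower is better; identical images score 0."""
    err = (reference.astype(float) - synthesized.astype(float)) ** 2
    vis = local_variance(reference.astype(float))
    # Illustrative texture-masking weight: distortions in highly textured
    # areas are assumed less visible and therefore down-weighted.
    weight = 1.0 / (1.0 + vis / (vis.mean() + 1e-9))
    return float((err * weight).mean())

# Small usage example on an invented textured pattern.
ref = np.tile((np.arange(8) * 7 % 23).astype(float), (8, 1))
syn = ref.copy()
syn[4, 4] += 10.0  # inject a localized synthesis artifact
print(vsqa_like_score(ref, ref), vsqa_like_score(ref, syn))
```

Because the weighting only rescales the base metric's distortion map, any existing 2D quality metric could be slotted in place of the squared-error term, which mirrors how VSQA is defined as an extension of existing 2D metrics.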