Joint tracking and multiview video compression
30 July 2010
Proceedings Volume 7744, Visual Communications and Image Processing 2010; 77440P (2010) https://doi.org/10.1117/12.863066
Event: Visual Communications and Image Processing 2010, 2010, Huangshan, China
In immersive communication applications, knowing the user's viewing position can significantly improve the efficiency of multiview compression and streaming, since often only a subset of the views is needed to synthesize the desired view(s). However, uncertainty about the viewer's location can degrade rendering quality. In this paper, we propose an algorithm that improves the robustness of view-dependent compression schemes by jointly performing user tracking and compression. A face tracker follows the user's head location and sends the probability distribution of the face location as one or more particles. The server then applies a motion model to the particles and compresses the multiview video accordingly, in order to maximize the viewer's expected rendering quality. Experimental results show significantly improved robustness against tracking errors.
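The abstract does not include code, but the core idea (propagating viewer-position particles through a motion model and spending bits where the viewer is likely to look) can be illustrated with a rough sketch. All function names, the constant-velocity motion model, and the nearest-view bit-allocation rule below are hypothetical simplifications, not the authors' actual method:

```python
import random

def propagate(particles, dt=0.033, noise=0.01):
    """Apply a constant-velocity motion model with Gaussian noise
    to each particle, given as (position, velocity, weight)."""
    return [(x + v * dt + random.gauss(0.0, noise), v, w)
            for (x, v, w) in particles]

def allocate_bits(particles, view_positions, total_bits):
    """Assign each particle's probability mass to its nearest camera
    view, then split the bit budget in proportion to that mass, so
    views the user is more likely to need are coded at higher quality."""
    mass = [0.0] * len(view_positions)
    for (x, _v, w) in particles:
        nearest = min(range(len(view_positions)),
                      key=lambda k: abs(view_positions[k] - x))
        mass[nearest] += w
    total = sum(mass) or 1.0
    return [total_bits * m / total for m in mass]

# Two equally weighted particles near two cameras split the budget evenly.
alloc = allocate_bits([(0.1, 0.0, 0.5), (0.9, 0.0, 0.5)],
                      view_positions=[0.0, 1.0], total_bits=100)
```

In this toy setting, keeping multiple particles (rather than a single point estimate of the head position) is what makes the allocation robust: bits hedge across all plausible viewer locations instead of betting everything on one tracker output.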
© (2010) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Cha Zhang and Dinei Florêncio, "Joint tracking and multiview video compression", Proc. SPIE 7744, Visual Communications and Image Processing 2010, 77440P (30 July 2010); https://doi.org/10.1117/12.863066