In immersive communication applications, knowledge of the user's viewing position can significantly improve the efficiency of
multiview compression and streaming, since often only a subset of the views is needed to synthesize the
desired view(s). However, uncertainty about the viewer's location can degrade rendering quality.
In this paper, we propose an algorithm to improve the robustness of view-dependent compression schemes by jointly
performing user tracking and compression. A face tracker estimates the user's head location and sends the probability
distribution of the face location as a set of particles. The server then applies a motion model to the particles and
compresses the multiview video accordingly in order to improve the expected rendering quality for the viewer.
Experimental results show significantly improved robustness against tracking errors.
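The pipeline the abstract describes — tracker sends location particles, server predicts them forward with a motion model, and bits are allocated to views by expected demand — can be sketched as follows. This is only an illustrative toy, not the paper's algorithm: the particle count, constant-velocity motion model, linear camera layout, and nearest-camera demand proxy are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Hypothetical particle set: each particle is a candidate viewer x-position,
# with weights representing the tracker's posterior over head location.
particles = rng.normal(loc=2.0, scale=0.3, size=n)
weights = np.full(n, 1.0 / n)

# Assumed constant-velocity motion model with process noise: predict where
# the viewer will be when the compressed frame actually arrives.
velocity, dt = 0.5, 0.1
predicted = particles + velocity * dt + rng.normal(0.0, 0.05, size=n)

# Assumed capture rig: 8 cameras at integer x-positions 0..7.
cameras = np.arange(8, dtype=float)

# Probability that each view is needed: total weight of particles whose
# nearest camera is that view (a crude stand-in for view-synthesis demand).
nearest = np.argmin(np.abs(predicted[:, None] - cameras[None, :]), axis=1)
view_prob = np.bincount(nearest, weights=weights, minlength=len(cameras))

# Allocate the bit budget in proportion to expected view demand, so views
# likely to be needed under location uncertainty get higher quality.
bit_budget = 1000.0
bits = bit_budget * view_prob / view_prob.sum()
print(np.round(bits, 1))
```

Spreading bits over all probable views, rather than committing to a single point estimate, is what gives the scheme its robustness to tracking error.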