Omnidirectional videos are widely used in Virtual Reality applications. They are spherical in origin and must be projected onto a 2D plane before coding and transmission. Common projection methods suffer from overstretching in the polar areas, which leads to a severe degradation of omnidirectional video quality. In this paper, we propose a novel representation based on a pseudo-cylindrical projection. The representation is then reshaped and rearranged under several constraints, including preserving pixel area and increasing viewing quality. The generation of our representation is formulated as a multi-dimensional optimization problem. Our results across the test video sequences show significant coding gains over standard representations.
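The polar overstretch that motivates the pseudo-cylindrical approach can be illustrated with a minimal sketch. This is not the paper's algorithm; it only contrasts the standard equirectangular projection with the sinusoidal projection, a classic pseudo-cylindrical mapping, using basic spherical geometry: a latitude circle at latitude phi has circumference proportional to cos(phi), so equirectangular (which gives every row the same width) oversamples by 1/cos(phi) near the poles, while a pseudo-cylindrical projection shrinks each row by cos(phi) and keeps per-pixel area roughly constant.

```python
import math

def equirect_stretch(phi_deg):
    """Horizontal oversampling factor of the equirectangular
    projection at latitude phi (degrees): 1 / cos(phi)."""
    return 1.0 / math.cos(math.radians(phi_deg))

def sinusoidal_row_width(phi_deg, full_width):
    """Row width (pixels) assigned by the sinusoidal
    pseudo-cylindrical projection at latitude phi."""
    return full_width * math.cos(math.radians(phi_deg))

# At the equator both projections agree; near the poles the
# equirectangular stretch factor grows without bound, while the
# pseudo-cylindrical row width shrinks proportionally instead.
for phi in (0, 45, 75, 85):
    print(phi, round(equirect_stretch(phi), 2),
          round(sinusoidal_row_width(phi, 1024)))
```

The names `equirect_stretch` and `sinusoidal_row_width` are illustrative helpers, not identifiers from the paper.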
This paper describes implementing a shadowless space by two methods. The first implements the shadowless space using principles similar to those of an integrating sphere: given a spherical cavity structure and an inner surface with high reflectivity, the rays from a built-in light source eventually evolve into uniform lighting through numerous diffuse reflections. It is therefore possible to create a shadowless space through diffuse reflection. Over a 27.4 m<sup>2</sup> area, the illuminance uniformity of this model reached 88.2%. The other method is analogous to that used in medical shadowless lamps: lights fall on the object from different angles, and each light generates a shadow. By changing the positions of multiple lights and increasing the number of light sources, the probability of obtaining a shadowless area gradually increases. Based on these two approaches, two simple models are proposed, showing the optical systems designed for the shadowless space. Using the simulation software TracePro as the design platform, this paper simulates the two systems.
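The abstract does not state which uniformity definition yields the 88.2% figure; a commonly used one is the ratio of minimum to average illuminance over the evaluated area. The sketch below assumes that definition and a grid of sampled illuminance values, such as one exported from a TracePro irradiance map.

```python
def illuminance_uniformity(samples):
    """Illuminance uniformity U = E_min / E_avg for a list of
    illuminance samples (lux). Assumed definition; the paper's
    exact metric is not specified in the abstract."""
    if not samples:
        raise ValueError("no illuminance samples")
    return min(samples) / (sum(samples) / len(samples))

# A nearly uniform field scores close to 1.0 (i.e. 100%):
print(illuminance_uniformity([980, 1000, 1010, 990, 1020]))
```

With real simulation output, `samples` would hold the illuminance values of the grid cells covering the 27.4 m<sup>2</sup> working area.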