In this paper, we present a method to reconfigure 3D movies in order to minimize distortion when they are seen on a different display than the one they were configured for. By their very nature, 3D broadcasts come with a stereoscopic pair to be seen by the left and right eyes. However, for reasons that we explain in the paper, the cameras used to shoot a movie are calibrated according to specific viewing parameters such as the screen size, the viewing distance, and the eye separation. As a consequence, a 3D broadcast seen on a different display (say, a home theater or a PC screen) than the one it was configured for (say, an IMAX® screen) will suffer from noticeable distortions. In this paper, we describe the relationship between the size of the 3D display, the position of the observer, and the intrinsic and extrinsic parameters of the cameras. With this information, we propose a method to reorganize the stereoscopic pair in order to minimize distortion when it is seen on an arbitrary display. In addition to the raw video pair, our method uses the viewing distance, a rough estimate of the 3D scene, and some basic information on the 3D display. An inpainting technique is used to fill in areas exposed when the stereoscopic pair is reorganized.
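The dependence on screen size and viewer position described above follows from standard stereoscopic viewing geometry. As an illustration only (this is a textbook similar-triangles relation, not the paper's actual formulation, and `perceived_depth` is a hypothetical helper), the depth a viewer perceives for a point shown with on-screen disparity p is Z = V·b / (b − p), where V is the viewing distance and b the eye separation:

```python
def perceived_depth(disparity_m, viewing_distance_m, eye_separation_m=0.065):
    """Perceived depth Z of a point displayed with on-screen disparity p
    (all quantities in meters), from similar triangles: Z = V * b / (b - p).
    Zero disparity places the point on the screen plane; as p approaches b,
    the point recedes to infinity. Illustrative sketch only."""
    b = eye_separation_m
    return viewing_distance_m * b / (b - disparity_m)
```

Because the same image-space disparity maps to very different physical disparities on an IMAX screen and on a PC monitor, a pair calibrated for one display yields distorted depths on the other, which is the effect the proposed reorganization compensates for.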
We present a comparative study of several state-of-the-art background subtraction methods. Approaches ranging from simple background subtraction with global thresholding to more sophisticated statistical methods have been implemented and tested on different videos with ground truth. The goal is to provide a solid analytic ground for underscoring the strengths and weaknesses of the most widely implemented motion detection methods. The methods are compared based on their robustness to different types of video, their memory requirements, and the computational effort they require. The impact of a Markovian prior as well as some postprocessing operators is also evaluated. Most of the videos used come from state-of-the-art benchmark databases and represent different challenges such as poor SNR, multimodal background motion, and camera jitter. Overall, we not only help to better understand which types of video each method is best suited for, but also estimate the extent to which sophisticated methods outperform basic background subtraction.
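The simplest baseline in such comparisons, background subtraction with a global threshold, can be sketched as follows (a minimal illustration of the generic technique, not code from the study; the threshold value is an arbitrary assumption):

```python
import numpy as np

def subtract_background(frame, background, threshold=25):
    """Baseline motion detection: per-pixel absolute difference against a
    static background image, binarized with a single global threshold.
    Returns a uint8 mask (1 = foreground, 0 = background)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)
```

The more sophisticated statistical methods in the study replace the static background and global threshold with per-pixel models, which is precisely what the robustness and cost comparison measures.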
Network video cameras, introduced in the last decade or so, today permit pervasive, wide-area visual surveillance. However, due to the vast amounts of visual data that such cameras produce, human-operator monitoring is not possible and automatic algorithms are needed. One monitoring task of particular interest is the detection of
suspicious behavior, i.e., identification of individuals or objects whose behavior differs from that usually observed. Many methods based on object path analysis have been developed to date (motion detection followed by tracking and inferencing), but they are sensitive to motion detection and tracking errors and are also computationally complex. We propose a new surveillance method capable of abnormal behavior detection without explicit estimation of object paths. Our method is based on a simple model of video dynamics. We propose one practical implementation of this general model via temporal aggregation of motion detection labels. Our method requires
little processing power and memory, is robust to motion segmentation errors, and is general enough to monitor humans, cars, or any other moving objects in uncluttered as well as highly cluttered scenes. Furthermore, on account of its simplicity, our method can provide performance guarantees. It is also robust in harsh environments
(jittery cameras, rain/snow/fog).
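One way to picture temporal aggregation of motion labels (a hypothetical sketch under simplifying assumptions, not the paper's exact model) is to accumulate binary motion masks into a per-pixel activity rate, then flag motion occurring where motion is rarely observed:

```python
import numpy as np

def build_activity_model(label_sequence):
    """Aggregate binary motion masks over time into a per-pixel activity
    rate: the fraction of frames in which each pixel was labeled moving."""
    stack = np.stack(label_sequence).astype(np.float64)
    return stack.mean(axis=0)

def abnormal_mask(current_labels, activity_rate, eps=0.01):
    """Flag current motion at pixels whose historical activity rate is
    below eps: a crude proxy for 'behavior differing from the usual'.
    No tracking or path estimation is involved."""
    return (current_labels == 1) & (activity_rate < eps)
```

Because the model is a simple per-pixel statistic of motion labels, isolated segmentation errors average out over the aggregation window, which is consistent with the robustness the abstract claims.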
In this paper, we show how Markovian strategies used to solve well-known segmentation problems such as motion estimation, motion detection, motion segmentation, stereovision, and color segmentation can be significantly accelerated when implemented on programmable graphics hardware. More precisely, we explain how the parallel abilities of a standard graphics processing unit, usually devoted to image synthesis, can be used to infer the labels of a segmentation map. The problems we address are stated in the sense of the maximum a posteriori with an energy-based or probabilistic formulation, depending on the application. In every case, the label field is inferred with an optimization algorithm such as iterated conditional modes (ICM) or simulated annealing. In the case of probabilistic segmentation, mixture parameters are estimated with the K-means and iterative conditional estimation (ICE) procedures. For both the optimization and the parameter estimation algorithms, the graphics processing unit's (GPU's) fragment processor is used to update in parallel every label of the segmentation map, while rendering passes and graphics textures are used to simulate optimization iterations. The hardware results obtained with a mid-end graphics card show that these Markovian applications can be accelerated by a factor of 4 to 200 without requiring any advanced skills in hardware programming.
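The per-pixel label update that makes ICM a good fit for a fragment processor can be sketched on the CPU with NumPy (an illustrative Jacobi-style sweep with a Potts smoothness prior, assumed here for concreteness; the paper's energy terms are application-specific):

```python
import numpy as np

def icm_step(labels, data_cost, beta=1.0):
    """One ICM sweep for MAP segmentation with a Potts prior.
    labels: (H, W) integer label map; data_cost: (H, W, K) cost of
    assigning label k at each pixel. Every pixel takes the label that
    minimizes data cost plus beta times the number of disagreeing
    4-neighbors. Neighbor labels are read from the current map, so all
    pixels update independently, mirroring the GPU's per-fragment
    parallelism (one rendering pass per iteration)."""
    H, W, K = data_cost.shape
    energy = data_cost.copy()
    padded = np.pad(labels, 1, mode='edge')
    neighbors = [padded[:-2, 1:-1], padded[2:, 1:-1],
                 padded[1:-1, :-2], padded[1:-1, 2:]]
    for k in range(K):
        for n in neighbors:
            energy[:, :, k] += beta * (n != k)
    return energy.argmin(axis=2)
```

Iterating this step until the label map stops changing yields a local minimum of the posterior energy; on the GPU, the label map lives in a texture and each sweep is one rendering pass.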
In this contribution, we present an optimal halftoning algorithm that uniformly distributes pixels over a hexagonal grid. This method is based on a slightly modified error-diffusion approach presented at SIGGRAPH 2001. Our algorithm's parameters are optimized using a simplex downhill search method together with a blue-noise-based cost function. We also present the mathematical basis needed to perform spectral and spatial calculations on a hexagonal grid. The proposed algorithm can be used in a wide variety of printing and visualization tasks. We introduce an application where our error-diffusion technique can be directly used to produce clustered screen cells.
In this contribution, a new error-diffusion algorithm is presented, which is specially suited for intensity levels close to 0.5. The algorithm is based on the variable-coefficient approach presented at SIGGRAPH 2001. The main difference with respect to the latter lies in the objective function used in the optimization process. We consider visual artifacts to be anomalies (holes or extra black pixels) in an almost regular structure such as a chessboard. Our goal is to achieve blue-noise spectral characteristics in the distribution of such anomalies. Special attention is paid to the shape of the anomalies, in order to avoid very common artifacts. The algorithm produces fairly good results for visualization on displays where the dot gain of individual pixels is not large.
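Both of the error-diffusion abstracts above build on the same base mechanism, which the classic fixed-coefficient Floyd-Steinberg scheme on a square grid illustrates (shown here only as background; the variable-coefficient SIGGRAPH 2001 method they extend replaces the fixed weights with intensity-dependent, optimized ones, and the hexagonal variant changes the grid itself):

```python
import numpy as np

def floyd_steinberg(gray):
    """Fixed-coefficient error diffusion on a square grid.
    gray: 2-D array of intensities in [0, 1]. Each pixel is thresholded
    at 0.5 and its quantization error is pushed to unprocessed neighbors
    with the classic 7/16, 3/16, 5/16, 1/16 weights, preserving local
    average intensity. Returns a binary (0/1) halftone."""
    img = gray.astype(np.float64).copy()
    H, W = img.shape
    out = np.zeros((H, W), dtype=np.uint8)
    for y in range(H):
        for x in range(W):
            new = 1 if img[y, x] >= 0.5 else 0
            out[y, x] = new
            err = img[y, x] - new
            if x + 1 < W:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < H:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < W:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

At intensity 0.5 this fixed scheme tends to produce the almost-regular chessboard structure whose anomalies the second abstract targets, which is why that regime calls for a specially optimized objective function.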