Smarter compositing with the Kinect
12 March 2013
Abstract
An image processing pipeline is presented that applies principles from the computer graphics technique of deferred shading to composite rendered objects into a live scene viewed by a Kinect. Issues involving the presentation of the Kinect's output are addressed, and algorithms are proposed for improving the believability and aesthetic matching of the rendered scene against the real scene. An implementation using GLSL shaders that runs this pipeline at interactive framerates is given. Experimental results with this program show promise that the approaches evaluated here can be applied to improve other implementations.
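The core of the compositing step described above is a per-pixel depth test: at each pixel, the rendered (virtual) fragment is shown only when its depth is nearer than the depth the Kinect measured for the real scene. The sketch below is a minimal CPU-side analogue of that test in Python, not the paper's GLSL implementation; the function name, the flat per-pixel lists, and the handling of invalid Kinect samples (a reading of 0 treated as far away) are all assumptions for illustration.

```python
def composite(real_rgb, real_depth, virt_rgb, virt_depth, far=10.0):
    """Per-pixel depth-test compositing (hypothetical sketch).

    real_rgb / real_depth: colour and depth (metres) from the Kinect.
    virt_rgb / virt_depth: colour and depth buffer from the renderer.
    A pixel shows the virtual colour only where the virtual surface
    is closer to the camera than the real one.
    """
    out = []
    for rr, rd, vr, vd in zip(real_rgb, real_depth, virt_rgb, virt_depth):
        # Kinect depth of 0 means "no reading"; treat it as far away
        # so virtual objects are not wrongly occluded there.
        rd = rd if rd > 0 else far
        out.append(vr if vd < rd else rr)
    return out
```

In the paper's pipeline this comparison would run in a fragment shader over G-buffer-style textures, in the spirit of deferred shading; the logic per pixel is the same.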
© (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
A. Karantza and R.L. Canosa, "Smarter compositing with the Kinect", Proc. SPIE 8650, Three-Dimensional Image Processing (3DIP) and Applications 2013, 86500Y (12 March 2013); doi: 10.1117/12.2004183
Proceedings paper, 9 pages.