Dr. Todor G. Georgiev
at Qualcomm Inc
SPIE Involvement:
Conference Co-Chair | Conference Chair | Author | Editor | Instructor
Publications (15)

SPIE Journal Paper | February 6, 2018
JEI Vol. 27 Issue 01
KEYWORDS: Modulation transfer functions, Sensors, Image sensors, Interferometry, Interferometers, Cameras, Spatial frequencies, Fourier transforms, Shape analysis, Computational imaging

Proceedings Article | February 27, 2015
Proc. SPIE. 9404, Digital Photography XI
KEYWORDS: Optical filters, Microlens array, Polarization, Cameras, Sensors, Glasses, Photography, Microlens, High dynamic range imaging, Geometrical optics

SPIE Conference Volume | March 19, 2014

SPIE Conference Volume | March 26, 2013

Proc. SPIE. 8667, Multimedia Content and Mobile Devices
KEYWORDS: Mobile devices, Statistical analysis, Microlens array, Cameras, Sensors, Image processing, Photography, Image restoration, Image resolution, Translucency

Proc. SPIE. 8667, Multimedia Content and Mobile Devices
KEYWORDS: Optical signal processing, Microlens array, Computational imaging, Modulation, Cameras, Sensors, Calibration, Image resolution, Microlens, Modulation transfer functions

Showing 5 of 15 publications
Conference Committee Involvement (2)
Digital Photography X
3 February 2014 | San Francisco, California, United States
Mobile Computational Photography
4 February 2013 | Burlingame, California, United States
Course Instructor
SC980: Theory and Methods of Lightfield Photography
Lightfield photography is based on capturing discrete representations of all light rays in a volume of 3D space. Since light rays are characterized by 2D position and 2D direction (relative to a plane of intersection), lightfield photography captures 4D data; conventional photography, in comparison, captures 2D images. Multiplexing this 4D radiance data onto conventional 2D sensors demands sophisticated optics and imaging technology. Rendering an image from the 4D lightfield is accomplished computationally by creating 2D integral projections of the 4D radiance. Optical transformations can also be applied computationally, enabling effects such as computational focusing anywhere in space.

This course presents a comprehensive development of lightfield photography, beginning with theoretical ray optics fundamentals and progressing through real-time GPU-based computational techniques. Although the material is mathematically rigorous, our goal is simplicity. Emphasizing fundamental underlying ideas leads to the development of surprisingly elegant analytical techniques. These techniques are in turn used to develop and characterize computational techniques, model lightfield cameras, and analyze resolution. The course also demonstrates practical approaches and engineering solutions.

The course includes a hands-on demonstration of several working plenoptic cameras that implement different methods for radiance capture, including the micro-lens approach of Lippmann, the mask-enhanced "heterodyning" camera, the lens-prism camera, multispectral and polarization capture, and the plenoptic 2.0 camera. One section of the course is devoted specifically to the commercially available Lytro camera. Various computational techniques for processing captured data are demonstrated, including basic rendering, Ng's Fourier slice algorithm, the heterodyned light-field approach for computational refocusing, glare reduction, super-resolution, artifact reduction, and others.
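The rendering step described above — forming a 2D image as an integral projection of the 4D radiance — can be sketched with a simple shift-and-add scheme. This is an illustrative sketch, not course material: the array layout `L[u, v, s, t]`, the refocus parameter `alpha`, and the integer-shift approximation (a real renderer would interpolate) are all assumptions made here for clarity.

```python
import numpy as np

def render(lightfield, alpha=1.0):
    """Render a 2D image from a 4D lightfield L[u, v, s, t] by a
    2D integral projection: shift each angular sample (u, v) by an
    amount proportional to its offset from the aperture center,
    then average over all angular samples.

    alpha == 1.0 reproduces the nominal focal plane; other values
    refocus computationally (hypothetical parameterization).
    """
    U, V, S, T = lightfield.shape
    image = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Integer shift approximating the refocus transform;
            # a production implementation would use interpolation.
            du = int(round((1.0 - 1.0 / alpha) * (u - U // 2)))
            dv = int(round((1.0 - 1.0 / alpha) * (v - V // 2)))
            image += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return image / (U * V)
```

Averaging over the angular coordinates is what collapses the 4D radiance to a 2D image; varying the per-sample shift before averaging is what moves the synthetic focal plane.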