Plenoptic cameras, constructed with internal microlens arrays, capture both spatial and angular information, i.e., the full 4-D radiance, of a scene. The design of traditional plenoptic cameras assumes that each microlens image is completely defocused with respect to the image created by the main camera lens. As a result, only a single pixel in the final image is rendered from each microlens image, resulting in disappointingly low resolution. A recently developed alternative approach based on the focused plenoptic camera uses the microlens array as an imaging system focused on the image plane of the main camera lens. The flexible spatioangular trade-off that becomes available with this design enables rendering of final images with significantly higher resolution than those from traditional plenoptic cameras. We analyze the focused plenoptic camera in optical phase space and present basic, blended, and depth-based rendering algorithms for producing high-quality, high-resolution images. We also present our graphics-processing-unit-based implementations of these algorithms, which render full-screen refocused images in real time.
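As context for the basic rendering algorithm mentioned above, the following is a minimal sketch of patch-based rendering for a focused plenoptic camera: a central patch is extracted from each microimage and the patches are tiled into the output image. The function name, the grayscale synthetic light field, and the particular microimage and patch sizes are illustrative assumptions, not the paper's actual implementation (which runs on the GPU); in a real camera the patch size is tied to scene depth, and patches may need to be inverted depending on the optical configuration.

```python
import numpy as np

def render_basic(lightfield, microimage_size, patch_size):
    """Basic focused-plenoptic rendering (illustrative sketch).

    lightfield: 2-D array of stacked square microimages, with shape
    (ny * microimage_size, nx * microimage_size). A central
    patch_size x patch_size patch is taken from each microimage and
    tiled into the rendered output.
    """
    m, p = microimage_size, patch_size
    ny = lightfield.shape[0] // m
    nx = lightfield.shape[1] // m
    out = np.empty((ny * p, nx * p), dtype=lightfield.dtype)
    off = (m - p) // 2  # offset that centers the patch in a microimage
    for j in range(ny):
        for i in range(nx):
            patch = lightfield[j*m + off : j*m + off + p,
                               i*m + off : i*m + off + p]
            out[j*p:(j + 1)*p, i*p:(i + 1)*p] = patch
    return out

# Synthetic light field: an 8 x 8 grid of 20 x 20-pixel microimages.
lf = np.arange(160 * 160, dtype=np.float32).reshape(160, 160)
img = render_basic(lf, microimage_size=20, patch_size=7)
print(img.shape)  # (56, 56): 8 x 8 patches of 7 x 7 pixels each
```

Choosing a larger patch size trades angular samples for spatial resolution, which is the spatioangular trade-off the abstract refers to; blending overlapping patches (rather than hard tiling) reduces the seam artifacts this naive version produces.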