Real-time imaging applications such as interactive rendering and video conferencing face particularly challenging bandwidth problems, especially as we push resolution toward perceptual limits. Compression has been a remarkable enabler of video streaming and storage, but in interactive settings it can introduce application-killing latencies. Rather than synthesizing or capturing a verbose representation and then immediately converting it into a succinct form, we should generate the concise representation directly. Our research is inspired by human vision, which, as Hoffman (1998) notes, constructs "continuous lines and surfaces...from discrete information." Our adaptive frameless renderer uses gradient samples and steerable filters to perform spatiotemporally adaptive reconstruction that preserves both edges and occlusion boundaries. The resulting RMS error matches that of traditionally synthesized imagery using 10 times as many samples. Nevertheless, in dynamic scenes, producing pleasing edges with so few samples is challenging. We are currently developing methods for reconstructing imagery from color samples supplemented with sparse edge information. Such higher-order representations will be a crucial enabler of interactive, hyper-resolution image synthesis, capture, and display.
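To make the role of gradient samples concrete, here is a minimal, hypothetical sketch (not the actual renderer's reconstruction code): a 1D "scanline" is reconstructed from sparse samples that carry both a color and a local gradient, by blending first-order Taylor extrapolations with Gaussian distance weights. All function and parameter names here are illustrative assumptions.

```python
import numpy as np

def reconstruct(xs, sample_x, sample_c, sample_g, sigma=1.0):
    """Estimate colors at positions xs from sparse samples.

    Illustrative only: each sample i contributes a first-order Taylor
    extrapolation c_i + g_i * (x - x_i), and the extrapolations are
    blended with Gaussian distance weights. Gradient samples let each
    sample "explain" a neighborhood, not just a point, which is why
    far fewer samples can suffice than with color-only reconstruction.
    """
    xs = np.asarray(xs, dtype=float)[:, None]           # (P, 1) query points
    dx = xs - np.asarray(sample_x, dtype=float)[None]   # (P, S) offsets
    w = np.exp(-(dx ** 2) / (2 * sigma ** 2))           # Gaussian weights
    est = sample_c[None] + sample_g[None] * dx          # Taylor extrapolations
    return (w * est).sum(axis=1) / w.sum(axis=1)

# Example: two samples on a linear ramp c(x) = 2x; the first-order
# extrapolations recover the ramp exactly between the samples.
out = reconstruct([2.5],
                  np.array([0.0, 5.0]),   # sample positions
                  np.array([0.0, 10.0]),  # sample colors
                  np.array([2.0, 2.0]))   # sample gradients
# out[0] ≈ 5.0
```

The renderer's actual filters are steerable and spatiotemporally adaptive (orienting and narrowing near edges and occlusion boundaries); this isotropic Gaussian blend only illustrates the gradient-sample idea in its simplest form.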