Most image sensors mimic film, integrating light during an exposure interval and then reading the "latent" image as a complete frame. In contrast, frameless image capture attempts to construct a continuous waveform for each sensel describing how the Ev (exposure value) at that sensel changes over time. This is done using an array of on-sensor nanocontrollers, each independently and asynchronously sampling its sensel and interpolating a smooth waveform from those samples. Still images are computationally extracted after capture using the average value of each sensel's waveform over the desired interval. Thus, image frames can be extracted to represent any interval(s) within the captured period. Because frames are extracted from waveforms that are continuous functions of time, an Ev estimate is always available, even if a particular sensel was not actually sampled during the desired interval. The result is HDR (high dynamic range) capture with a low and directly controllable Ev noise level. This paper describes our work toward building a frameless image sensor using nanocontrollers, outlines basic processing of time domain continuous image data, and discusses the expected benefits and problems.
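The extraction step described above can be sketched in a few lines. The sketch below is illustrative only: it assumes each sensel's waveform is reconstructed by piecewise-linear interpolation of its asynchronous samples (the actual on-sensor interpolation may differ), and all names (`interp`, `frame_value`, `sensel`) are hypothetical. The key property it demonstrates is that a pixel value exists for any requested interval, even one containing no actual sample.

```python
def interp(samples, t):
    """Linearly interpolate a waveform given as sorted (time, Ev) samples.

    Hypothetical reconstruction; stands in for whatever smooth
    interpolation the nanocontroller actually performs.
    """
    if t <= samples[0][0]:
        return samples[0][1]
    if t >= samples[-1][0]:
        return samples[-1][1]
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def frame_value(samples, a, b, steps=1000):
    """Average Ev over [a, b]: the pixel value extracted for that interval."""
    # Numerical (trapezoidal) average of the interpolated waveform.
    dt = (b - a) / steps
    total = 0.0
    for i in range(steps):
        t0, t1 = a + i * dt, a + (i + 1) * dt
        total += 0.5 * (interp(samples, t0) + interp(samples, t1)) * dt
    return total / (b - a)

# One sensel, sampled asynchronously at irregular times:
sensel = [(0.0, 2.0), (0.7, 4.0), (1.5, 4.0), (3.1, 1.0)]

# Extract a pixel value for an interval that contains no sample at all;
# the continuous waveform still yields an Ev estimate.
print(frame_value(sensel, 0.9, 1.3))
```

Extracting several frames is then just calling `frame_value` with successive intervals over the same stored waveforms, which is what allows frame rate and exposure interval to be chosen after capture.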