The development of sensors based on no-moving-parts two-beam interferometers has been progressing for many years. These sensors can produce an interferogram sampled simultaneously rather than sequentially, as in the more common Michelson-type interferometers. Although digital array scanned interferometers (DASIs) are similar to Michelson-type interferometers, rectifying their interferograms requires somewhat different methods to achieve optimal results. We consider the problem of calibrating a DASI-like sensor using known optical sources. A simple model for a two-beam interferometer is proposed that illustrates the dominant sensor effects present in more complex devices. These effects are (1) the nonlinear relationship between the spatial sampling in the instrument and the temporal sampling of the optical autocorrelation function and (2) the apodization, or nonuniform gain, applied across the autocorrelation function. The resulting measurement, the interferogram, is a windowed, time-warped version of the true optical autocorrelation function. A method for quantifying the time-warping effect using a narrowband optical source is developed. This method treats the interferogram as a sinusoid subject to phase modulation and uses quadrature demodulation to recover the nonlinear phase that characterizes the time warping. The apodization curve is estimated using simple polynomial curve fitting. Methods for correcting the interferogram, thus enabling the power spectral density to be computed via standard Fourier transform techniques, are presented.
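The quadrature-demodulation step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the measured interferogram is well modeled as w[n]·cos(2πf0·n + φ[n]), where f0 is the nominal fringe frequency of the narrowband calibration line in cycles per sample, φ[n] is the slowly varying nonlinear phase that characterizes the time warping, and w[n] is the apodization envelope. The function name, the boxcar low-pass filter, and the assumption that 1/f0 is near an integer are all choices made here for the sketch.

```python
import numpy as np

def quadrature_demodulate(igram, f0):
    """Recover the nonlinear phase and envelope of a narrowband interferogram.

    igram : 1-D interferogram samples (mean removed), modeled here as
            w[n] * cos(2*pi*f0*n + phi[n])
    f0    : nominal fringe frequency of the calibration line, cycles/sample

    Returns (phi, w): the residual (nonlinear) phase phi[n] in radians,
    which characterizes the time warping, and the envelope w[n], an
    estimate of the apodization curve.
    """
    n = np.arange(igram.size)
    # Mix with in-phase and quadrature carriers at the nominal frequency;
    # this shifts the fringe signal to baseband plus an image near 2*f0.
    i_mix = igram * np.cos(2 * np.pi * f0 * n)
    q_mix = -igram * np.sin(2 * np.pi * f0 * n)
    # Low-pass filter to reject the 2*f0 image: a boxcar whose spectral
    # nulls land near multiples of f0 (assumes 1/f0 is near an integer),
    # convolved with itself for second-order nulls.
    box = np.ones(int(round(1.0 / f0)))
    box /= box.size
    kernel = np.convolve(box, box)
    i_lp = np.convolve(i_mix, kernel, mode="same")
    q_lp = np.convolve(q_mix, kernel, mode="same")
    phi = np.unwrap(np.arctan2(q_lp, i_lp))  # nonlinear phase
    w = 2.0 * np.hypot(i_lp, q_lp)           # apodization envelope
    return phi, w
```

From these outputs, the time warp could be expressed as tau[n] = n + phi[n]/(2*pi*f0), and a smooth apodization curve obtained by fitting a low-order polynomial to w (e.g. `np.polyfit`), in the spirit of the curve fitting the abstract describes. Samples within a kernel length of the array edges are distorted by the filter transient and should be excluded.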