To generate realistic synthetic infrared (IR) images, as required for training in mission rehearsal simulators, the image acquisition process of IR sensors must be reproduced. In this paper, we propose a general framework for IR sensor modeling that provides a physical basis for describing the geometric and radiometric relationship between points in the observed scene and the corresponding pixels in the IR sensor output image. This framework combines current camera models and draws upon both post-processing and ray tracing techniques. It thus offers more capabilities than standard IR sensor model structures based solely on post-processing: closer agreement with sensor physics, higher modularity, more accurate sampling, computations suitable for parallelization, and compatibility with any rendering algorithm. The framework enables the development of modeling algorithms for each component of the IR image chain (optics, scanner, detector, electronics, signal and image processing) to match the system technology and the desired precision. An IR sensor model developed from this structure can simulate a wide range of technologies, including staring and scanning systems based on thermal and photon detectors. It can also account for variations in many physical quantities across spatial, spectral and temporal dimensions.
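The modular image-chain idea above can be sketched as a pipeline of interchangeable stages, each transforming the scene radiance map produced by a rendering algorithm. This is a minimal illustrative sketch, not the paper's implementation; the stage names (`optics_blur`, `detector_sample`, `electronics`) and their simplistic physics (box-filter PSF, grid subsampling, linear gain/offset) are assumptions chosen only to show the composable structure.

```python
import numpy as np

# Hypothetical sketch of a modular IR image chain: each stage transforms
# the incoming radiance map, and stages compose in chain order
# (optics -> detector -> electronics). All models here are placeholders.

def optics_blur(scene, kernel_size=3):
    """Crudely approximate the optical PSF with a uniform box filter."""
    pad = kernel_size // 2
    padded = np.pad(scene, pad, mode="edge")
    out = np.zeros_like(scene, dtype=float)
    for dy in range(kernel_size):
        for dx in range(kernel_size):
            out += padded[dy:dy + scene.shape[0], dx:dx + scene.shape[1]]
    return out / kernel_size**2

def detector_sample(image, step=2):
    """Sample the blurred irradiance onto a coarser detector grid."""
    return image[::step, ::step]

def electronics(image, gain=1.0, offset=0.0):
    """Apply the gain and offset of the readout electronics."""
    return gain * image + offset

def ir_image_chain(scene, stages):
    """Run the scene radiance map through each stage of the chain."""
    for stage in stages:
        scene = stage(scene)
    return scene

scene = np.ones((8, 8))  # uniform radiance map from a renderer
out = ir_image_chain(scene, [optics_blur, detector_sample, electronics])
print(out.shape)  # -> (4, 4)
```

Because each stage is an independent callable, a stage can be swapped (e.g. a scanner model inserted before the detector, or a different PSF used) without touching the rest of the chain, which is the modularity the framework claims over monolithic post-processing structures.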