A two-stage approach is introduced to improve the accuracy of compact patterning models used in large-scale computational lithography. Each stage uses a separate empirically calibrated regression model whose accuracy in predicting printed feature dimensions has been demonstrated in the usual standalone (single-stage) mode. For the first stage, we choose an established regularized regression model of the kind that accounts for resist non-idealities by suitably modifying the pre-thresholded exposing dose pattern, with the model basis functions taking the form of modified convolutions of adjustable kernels with the optical image. A different class of regression model is used in the second stage, namely one that accounts for resist non-idealities by making a pattern-dependent local adjustment to the develop threshold, with model basis functions that are characteristic traits of the image trace along feature cutlines. However, rather than applying this second model in the usual mode, where it adjusts the develop threshold applied to the exposing optical image, we use it to adjust a threshold that is applied to the improved effective dose distribution provided by the first-stage model. The effectiveness of the proposed method is verified by modeling pattern transfer of critical layers in 14- and 22-nm complementary metal–oxide–semiconductor (CMOS) technology. In our experience, little accuracy improvement is gained by expanding the complexity of standard single-stage models beyond the level of empirically proven model forms. However, even in a basic implementation, inclusion of a second stage of modeling by itself reduces root-mean-square (RMS) error by ∼45% in our 14-nm example. Moreover, the accuracy improvement is further boosted to ∼55% by adopting a minimax strategy in which the model is conservatively regularized according to the worst-case outcome in cross-validation tests but calibrated according to the best-case outcome.
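The two-stage composition described above can be sketched in simplified form. In this illustrative sketch, Gaussian kernels stand in for the calibrated adjustable-kernel forms of the first stage, and the trace maximum, minimum, and peak slope stand in for the cutline image traits of the second stage; these choices, and the `choose_regularization` helper for the minimax strategy, are assumptions for illustration, not the calibrated model forms of the paper. The key structural point is that the stage-2 variable threshold is applied to the stage-1 effective dose rather than to the raw optical image.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1D Gaussian kernel (illustrative stand-in for the
    calibrated adjustable kernels of the first-stage model)."""
    radius = int(3.0 * sigma) + 1
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def stage1_effective_dose(optical_image, kernel_widths, weights):
    """Stage 1 (sketch): effective dose as a weighted sum of
    convolutions of adjustable kernels with the optical image."""
    dose = np.zeros_like(optical_image)
    for w, sigma in zip(weights, kernel_widths):
        dose += w * np.convolve(optical_image, gaussian_kernel(sigma),
                                mode="same")
    return dose

def stage2_local_threshold(trace, base_threshold, coeffs):
    """Stage 2 (sketch): pattern-dependent local threshold adjustment
    from characteristic traits of the image trace along a cutline.
    Here the traits are the trace max, min, and peak slope."""
    traits = np.array([trace.max(), trace.min(),
                       np.abs(np.gradient(trace)).max()])
    return base_threshold + coeffs @ traits

def predict_edge(trace, positions, threshold):
    """Locate the printed edge where the effective-dose trace first
    crosses the local threshold, by linear interpolation."""
    above = trace >= threshold
    idx = int(np.argmax(above[:-1] != above[1:]))  # first crossing
    t0, t1 = trace[idx], trace[idx + 1]
    frac = (threshold - t0) / (t1 - t0)
    return positions[idx] + frac * (positions[idx + 1] - positions[idx])

def choose_regularization(cv_errors):
    """Minimax sketch: cv_errors[i, j] is the validation error of
    regularization setting i on cross-validation fold j. Pick the
    setting whose worst-case fold is best (conservative
    regularization), then calibrate on that setting's best-case fold."""
    worst = cv_errors.max(axis=1)           # worst fold per setting
    i = int(np.argmin(worst))               # conservative setting
    j = int(np.argmin(cv_errors[i]))        # best fold for calibration
    return i, j
```

A usage example on a synthetic cutline: compute the stage-1 effective dose from a sigmoidal aerial-image trace, form the stage-2 local threshold on that dose, and extract the predicted edge position where the dose crosses the threshold.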