Using 20 time slices of imagery spanning 1972 to 2010 (e.g., 1972, 1977, 1980), the relationship between the preclassification probability of a pixel being forest, p(Forest), and its classification correctness (forest versus nonforest) was examined. Classification accuracy was determined by human interpretation of high-resolution IKONOS and SPOT digital imagery. The study area comprised the approximately one third of the Australian continent that is relatively densely populated, i.e., excluding the interior. Preclassification p(Forest) values for individual pixels, averaged over three, nine, and all 20 time slices, were considered. Postclassification lineages (e.g., "always forest", "regeneration") for the 10-year period (2000 to 2010) covered by nine time slices were also examined. Chi-square analysis showed that, regardless of the time period over which p(Forest) was calculated, a statistically significant relationship exists between p(Forest) and classification accuracy, and between the 2000-to-2010 lineage and classification accuracy. Pixels with intermediate p(Forest) values (0.20 to 0.80) were more likely to be misclassified than pixels with higher (greater than 0.80) or lower (less than 0.20) values. Pixels with a 10-year lineage of "always forest" were the most likely to be correctly classified, while those that had experienced land cover change (deforestation or regeneration) were the least accurately classified. Logistic regression models reinforced these findings, although the relationship between preclassification p(Forest) values and classification correctness was weak.
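The chi-square analysis described above can be sketched as a test of independence between binned p(Forest) values and classification correctness. The sketch below uses an entirely illustrative contingency table (the counts are invented, not from the study) and computes the Pearson chi-square statistic in pure Python; the bin boundaries (0.20 and 0.80) follow the abstract.

```python
# Hypothetical contingency table: rows are p(Forest) bins, columns are
# (correctly classified, misclassified) pixel counts. These counts are
# illustrative only and do not come from the study.
table = [
    [450, 50],   # low p(Forest): < 0.20
    [300, 200],  # intermediate p(Forest): 0.20 to 0.80
    [480, 20],   # high p(Forest): > 0.80
]

def chi_square(table):
    """Pearson chi-square statistic and degrees of freedom for a
    two-way contingency table (test of independence)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return stat, dof

stat, dof = chi_square(table)
# With 2 degrees of freedom, the 0.05 critical value is about 5.991;
# a larger statistic indicates a significant association between the
# p(Forest) bin and classification correctness.
print(stat, dof, stat > 5.991)
```

In this illustrative table the intermediate bin carries most of the misclassifications, mirroring the study's finding that pixels with intermediate p(Forest) values were the most error-prone.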