In this study, we investigated differences in detection performance for twelve observers who each
generated a CT contrast detail curve. The abdomen of an anthropomorphic newborn phantom was imaged using a four-slice GE LightSpeed CT scanner. Alternative forced choice (AFC) experiments were performed with lesion sizes ranging from 2.5 to 12.5 mm to determine the lesion intensity needed to achieve 92% correct responses (I92%).
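As an illustrative sketch only (the abstract does not specify the psychometric fitting procedure, so the interpolation approach, intensity values, and fraction-correct data below are assumptions), I92% for a single lesion size could be estimated from AFC percent-correct data as follows:

import numpy as np

def estimate_i92(intensities, fraction_correct, criterion=0.92):
    """Estimate the lesion intensity that yields `criterion` fraction correct
    by linear interpolation of measured AFC psychometric data."""
    intensities = np.asarray(intensities, dtype=float)
    fraction_correct = np.asarray(fraction_correct, dtype=float)
    # np.interp requires the sampled fraction-correct values to be increasing
    return np.interp(criterion, fraction_correct, intensities)

# Hypothetical AFC results for one lesion size (not data from this study)
intensities = [2, 4, 6, 8, 10]                      # arbitrary contrast units
fraction_correct = [0.55, 0.68, 0.81, 0.90, 0.97]   # fraction of correct AFC trials
print(estimate_i92(intensities, fraction_correct))  # ~8.6 in these units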
Following training, the twelve readers (2 technologists, 4 college students, 4 medical students, and 2 radiology residents) each generated a single contrast detail curve. Eight readers produced approximately linear contrast detail curves, while the remaining four required a second-order polynomial fit because of reduced performance in detecting the largest (i.e., 12.5 mm) lesion.
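As a brief illustration (assuming, as the -0.42 slope quoted later suggests, that the contrast detail curves are fitted on log-log axes; the diameters and thresholds here are hypothetical), the linear and second-order fits could be obtained as:

import numpy as np

# Hypothetical contrast detail data for one reader (illustrative values only)
diameters = np.array([2.5, 5.0, 7.5, 10.0, 12.5])  # lesion diameter, mm
i92 = np.array([30.0, 22.0, 18.0, 16.0, 17.0])     # I92% thresholds, arbitrary units

log_d, log_i = np.log10(diameters), np.log10(i92)

# First-order (linear) fit: the slope is the contrast-detail exponent
slope, intercept = np.polyfit(log_d, log_i, 1)

# Second-order fit, appropriate when performance drops for the largest lesion
# and the curve turns upward at its right-hand end
quad_coeffs = np.polyfit(log_d, log_i, 2)

print(f"linear slope = {slope:.2f}")
print(f"quadratic coefficients = {quad_coeffs}")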
For the three smallest lesions, the coefficient of variation among the twelve readers was ~12%, which increased with lesion size to ~23% for the 12.5 mm lesion. The ratio of the maximum to minimum I92% values was ~1.6 for the smallest lesions, increasing to ~2.1 for the 12.5 mm lesion. Our results show that inter-reader variability in our AFC experiments could be minimized by eliminating the largest lesion, which caused detection problems for one third of the observers.
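For concreteness, the two variability metrics quoted above can be computed directly from the readers' thresholds at a given lesion size; the twelve I92% values below are hypothetical placeholders, not data from this study:

import numpy as np

# Hypothetical I92% thresholds for the twelve readers at one lesion size
reader_i92 = np.array([14.2, 15.0, 13.8, 16.1, 14.9, 15.5,
                       13.2, 16.8, 14.4, 15.9, 13.6, 15.1])

cv = reader_i92.std(ddof=1) / reader_i92.mean()      # coefficient of variation
max_min_ratio = reader_i92.max() / reader_i92.min()  # spread between extreme readers

print(f"CV = {100 * cv:.1f}%, max/min ratio = {max_min_ratio:.2f}")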
The combined experimental data showed that the slope of the contrast detail curve was -0.42, considerably shallower than the -1.0 predicted by the Rose model, suggesting that the noise texture in CT, associated with both quantum mottle and anatomic structure, is an important factor affecting detection of these lesions.
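To make the comparison explicit (the signal-to-noise scaling step below is standard Rose-model reasoning, not a statement from this study): in uncorrelated white noise, detecting a disc lesion of diameter d and contrast C requires an approximately constant signal-to-noise ratio, and since SNR scales as C*d, the threshold contrast follows

\[ C_T \propto d^{-1} \quad\Longrightarrow\quad \log C_T = \text{const} - \log d , \]

i.e. a slope of -1 on log-log contrast detail axes, whereas the pooled data here follow approximately \( \log I_{92\%} \approx \text{const} - 0.42\,\log d \).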