An interpretation report generated with an electronic viewbox is affected by two factors: image quality, which encompasses what can be seen on the display, and computer-human interaction (CHI), which accounts for the cognitive load of locating, moving, and manipulating images with the workstation controls. While a number of subject experiments have considered image quality, only recently has the effect of CHI on total interpretation quality been measured. This paper presents the results of a pilot study conducted to evaluate the total interpretation quality of the FilmPlane2.2 radiology workstation for patient folders containing single forty-slice CT studies. First, radiologists interpreted cases and dictated reports using FilmPlane2.2; requisition forms were provided. Film interpretations were supplied by the original clinical report and by interpretation forms generated in a previous experiment. Second, an evaluator developed a list of findings for each case, based on those listed in all of that case's reports, and then evaluated each report's response on each finding. Third, the reports were compared to determine how well they agreed with one another. Interpretation speed and observation data were also gathered.