This study evaluates the usefulness of wavelet compression for resolution-enhanced storage phosphor chest radiographs in the detection of subtle interstitial disease, pneumothorax, and other abnormalities. A wavelet compression technique, MrSID™ (LizardTech, Inc., Seattle, WA), is implemented that compresses the images from their original 2,000 × 2,000 (2K) matrix size and then decompresses the image data for display at optimal resolution, matching the spatial frequency characteristics of image objects using a 4,000-square matrix. The 2K-matrix computed radiography (CR) chest images are magnified to a 4K matrix using wavelet series expansion. The magnified images are compared with the original uncompressed 2K radiographs and with two-times magnification of the original images. Preliminary results show radiologist preference for MrSID™ wavelet-based magnification over magnification of the original data, and suggest that the compressed/decompressed images may provide an enhancement of the originals. Data collection for clinical trials of 100 chest radiographs, including subtle interstitial abnormalities and/or subtle pneumothoraces as well as normal cases, is in progress. Three experienced thoracic radiologists will view images side-by-side on calibrated softcopy workstations under controlled viewing conditions, and rank-order preference tests will be performed. This technique combines image compression with image enhancement, and suggests that compressed/decompressed images can actually improve on the originals.
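The magnification step can be illustrated with a minimal sketch (assuming NumPy, and using the orthonormal Haar basis for simplicity; with Haar the inverse step reduces to scaled pixel replication, whereas a smoother wavelet basis such as the one used by MrSID yields true interpolation): the 2K image is treated as the approximation band of a finer 4K scale, and one inverse wavelet synthesis step is applied with the detail bands set to zero.

```python
import numpy as np

def haar_magnify_2x(img):
    """Double image size by one inverse 2-D Haar synthesis step with
    zero detail bands.

    The input is treated as the approximation (low-pass) band of a
    finer scale.  With the orthonormal Haar basis, each coefficient
    spreads equally to its 2x2 output block, scaled by 1/2
    (1/sqrt(2) per axis); smoother bases interpolate instead.
    """
    h, w = img.shape
    out = np.empty((2 * h, 2 * w), dtype=float)
    block = img / 2.0
    out[0::2, 0::2] = block
    out[0::2, 1::2] = block
    out[1::2, 0::2] = block
    out[1::2, 1::2] = block
    return out
```

In practice a longer, smoother synthesis filter (as in the MrSID codec) replaces the 2x2 replication above, which is what produces the visually preferred magnification.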
The authors have an in-kind grant from NASA to investigate the application of the Advanced Communications Technology Satellite (ACTS) to teleradiology and telemedicine using the Jet Propulsion Laboratory-developed ACTS Mobile Terminal (AMT) uplink. We have recently completed three series of experiments with the ACTS/AMT. Although these experiments were multifaceted, the primary objective was the evaluation of transmitting real-time compressed ultrasound video imagery over the ACTS/AMT satellite link, a primary focus of the authors' current ARPA Advanced Biomedical Technology contract. These experiments have demonstrated that real-time compressed ultrasound video imagery can be transmitted over links with the bandwidth of multiple ISDN lines with sufficient temporal, contrast, and spatial resolution for clinical diagnosis of multiple disease and pathology states, providing subspecialty consultation and education at a distance.
The authors have an in-kind grant from NASA to investigate the application of the Advanced Communications Technology Satellite (ACTS) to teleradiology and telemedicine using the JPL-developed ACTS Mobile Terminal (AMT) uplink. This experiment involves the transmission of medical imagery (CT, MR, CR, US, and digitized radiographs including mammograms) between the ACTS/AMT and the University of Washington. This is accomplished by locating the AMT experiment van in various locations throughout Washington state, Idaho, Montana, Oregon, and Hawaii. The medical images are transmitted via the ACTS to the downlink at the NASA Lewis Research Center (LeRC) in Cleveland, Ohio, which consists of AMT equipment and the high-burst-rate link evaluation terminal (HBR-LET). These images are then routed from LeRC to the University of Washington School of Medicine (UWSoM) through the Internet and the public switched Integrated Services Digital Network (ISDN). Once images arrive in the UW Radiology Department, they are reviewed using both video monitor softcopy and laser-printed hardcopy. Compressed video teleconferencing and transmission of real-time ultrasound video between the AMT van and the UWSoM are also tested. Image quality comparisons are made using both subjective diagnostic criteria and quantitative engineering analysis. Evaluation is performed during various weather conditions (including rain, to assess rain fade compensation algorithms). Compression techniques are also tested to evaluate their effects on image quality, allowing further evaluation of portable teleradiology/telemedicine at lower data rates and providing useful information for additional applications (e.g., smaller remote units, shipboard, emergency disaster, etc.). The medical images received at the UWSoM over the ACTS are directly evaluated against the original digital images.
The project demonstrates that a portable satellite-land connection can provide subspecialty consultation and education for rural and remote areas. The experiment is divided into three phases. Using the ACTS fixed-hopping beam, phase one involves testing the connection of the AMT to medical imaging equipment and image transmission in various climates in western and eastern Washington state. The second phase involves satellite relay transmissions between the Inmarsat satellite and the ACTS/AMT through a ground station in Hawaii, for medical imagery originating from either Okinawa, Japan, or Kwajalein in the Pacific. The third phase involves extended use of the ACTS steerable beam in Washington state, Idaho, Montana, and Oregon.
While specialized phantoms for quality assurance have been provided with CT scanners since these devices were first marketed to radiology departments, there has been little in the way of integrated software and procedures to use these phantoms on an ongoing basis. Typically, they are used initially when the scanner is installed, and then only very intermittently thereafter, usually by the vendors' service personnel. Although calibration scans are performed routinely, these typically only establish the baseline for the accuracy and uniformity of CT numbers, and do not actually measure the resolution which the images are capable of achieving. Over the last four years, a software package to automatically analyze images from CT scanners has been developed, and this was adapted for use with MRI scanners in 1993. An additional software package has been developed to store the results of the individual quality assurance scans in a database, allowing easy analysis and graphing of the results.
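The database component can be sketched with Python's standard sqlite3 module; the table and column names below are illustrative placeholders, not those of the actual QA package described above.

```python
import sqlite3

# Hypothetical schema: store one row per QA scan, then query a trend.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE qa_scan (
    scan_date TEXT,   -- ISO date of the QA scan
    scanner   TEXT,   -- which scanner the phantom was imaged on
    mean_ct   REAL,   -- measured mean CT number in a water ROI
    noise_sd  REAL)""")  # measured standard deviation (noise)

rows = [("1994-01-03", "CT-1", 0.4, 3.1),
        ("1994-01-10", "CT-1", 0.7, 3.3),
        ("1994-01-17", "CT-1", 0.5, 3.2)]
conn.executemany("INSERT INTO qa_scan VALUES (?, ?, ?, ?)", rows)

# Trend query: noise over time for one scanner, ready for graphing.
trend = conn.execute(
    "SELECT scan_date, noise_sd FROM qa_scan "
    "WHERE scanner = ? ORDER BY scan_date", ("CT-1",)).fetchall()
```

Storing each scan as a row keyed by date makes the tables, graphs, and trend plots mentioned above simple ORDER BY queries.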
We have developed and subjectively evaluated a lossy classified vector quantization (CVQ) scheme using subsampling and prediction for decorrelation. Both interframe and intraframe codings were evaluated using a sequence of x-ray CT images. Both 10:1 and 15:1 compression ratios were evaluated using nine head images from three patients. Thirteen radiologists evaluated the quality of the images by viewing them on film and comparing them to the original images on film. Although there were large variations in individual evaluations of image quality, there was overall agreement among all readers to a statistically significant level. With the proposed algorithm, the interframe coding approach produces better quality than the intraframe approach at the same level of data compression. Even for compressed images whose differences from the originals were not statistically significant, the average responses were slightly worse than those for the original images. The effect of data compression on diagnostic accuracy was not evaluated.
In this paper, a lossy image compression algorithm based on a prediction and classification scheme is discussed. The algorithm decomposes an image into four subimages by subsampling pixels at even and odd row and column locations. Since the four subimages are strongly correlated with one another, one of them is used to predict the others, and the resulting differences between the predicted subimages and the original subimages are encoded. Estimated differences tend to be large in regions where pixel values change rapidly, while the differences are small in uniform regions. This redundancy is exploited by dividing the estimated differences into subsets based on the slope of pixel changes, a basis found in some human perception models used to measure the visibility of distortion. The resulting classified estimated differences, having different visibilities, are encoded with classified vector quantizers.
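The decomposition and prediction steps can be sketched as follows (assuming NumPy; the co-located-pixel predictor and the simple threshold classifier below are illustrative stand-ins, since the paper's actual predictor and perceptual classifier may differ):

```python
import numpy as np

def decompose_predict(img):
    """Split an image into its four polyphase subimages and form
    prediction residuals of three of them from the fourth."""
    s00 = img[0::2, 0::2]   # even rows, even cols -- the reference
    s01 = img[0::2, 1::2]   # even rows, odd cols
    s10 = img[1::2, 0::2]   # odd rows, even cols
    s11 = img[1::2, 1::2]   # odd rows, odd cols
    # Predict each remaining subimage by the co-located reference pixel;
    # only the (small) residuals need to be encoded.
    residuals = {"s01": s01 - s00, "s10": s10 - s00, "s11": s11 - s00}
    return s00, residuals

def classify_by_slope(residual, threshold=8):
    """Label residual pixels as edge (1, large slope, distortion less
    visible) or flat (0), the basis for choosing among the classified
    vector quantizers.  The threshold is illustrative."""
    return np.where(np.abs(residual) > threshold, 1, 0)
```

Each residual class would then be coded with its own vector quantizer, allowing coarser quantization where distortion is less visible.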
We evaluated a prototype free-space pointing device with a medical image display workstation. The target environment is the Intensive Care Unit (ICU), where there is very little counter space available, and image workstations are used intermittently and for short periods of time. Managers of the typical ICU do not want to dedicate space to PACS, but would rather mount the image monitors in the wall at eye level, so they can be viewed from the hallway. The hallway viewing location allows use by a large number of people, as when making morning ward rounds or teaching rounds. Because many physicians are accustomed to graphical user interfaces and pointing devices, the transition to the free-space mouse is an easy and natural one. The free-space mouse allows very flexible interaction and an intuitive graphical user interface, but does not require a horizontal surface, and is easily operated with one hand from a standing position.
A number of user-centered methods for designing radiology workstations have been described by researchers at Carleton University (Ottawa), Georgetown University, George Washington University, and the University of Arizona, among others. The approach described here differs in that it enriches standard human-factors practices with methods adapted from ethnography to study users (in this case, diagnostic radiologists) as members of a distinct culture. The overall approach combines several methods; the core method, based on ethnographic "stream of behavior chronicles" and their analysis, has four phases: (1) we gather the stream of behavior by videotaping a radiologist as he or she works; (2) we view the tape ourselves and formulate questions and hypotheses about the work; (3) in a second videotaped session, we show the radiologist the original tape and ask for a running commentary on the work, into which, at the appropriate points, we interject our questions for clarification; and (4) we categorize and index the behavior on the "raw data" tapes for various kinds of follow-on analysis. We describe and illustrate this method in detail, describe how we analyze the "raw data" videotapes and the commentary tapes, and explain how the method can be integrated into an overall user-centered design process based on standard human-factors techniques.
While acceptance of standards for digital image transfer may eventually make video image capture obsolete, this technique of getting an image from a device such as a CT scanner will be in use for many years. Because the devices are inherently analog, these circuits are susceptible to errors in image capture, which can lead to degradation in image quality. We have designed a series of phantom images and used them to periodically measure the quality of captured images. The CT images are displayed at specific window width and window level settings, so that the value of each pixel is known and can be analyzed automatically by a computer program. The procedure involves capturing each of the four quality assurance (QA) images and storing them on the image capture computer. The QA software may be run immediately, or at a later date to analyze images collected over a period of time. The results of the analysis are stored in a database on the computer, which allows display of the captured image quality as tables, graphs, charts, and trend plots. A video frame grabber was connected to a CT Advantage computed tomography independent console. Images were captured once per week over a period of three months to determine the range of variation which could be expected in the first part of the device's useful life.
Lossy data compression generates distortion, or error, in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears different depending on the compression method used. Because of the nonlinearity of both the human visual system and lossy data compression methods, we subjectively evaluated the quality of medical images compressed with two different methods: an intraframe and an interframe coding algorithm. The raw evaluation data were analyzed statistically to measure interrater reliability and the reliability of individual readers. Analysis of variance was also used to identify which compression method is statistically better, and at what compression ratio the quality of a compressed image is rated poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists read 99 images (some were duplicates) at four levels of compression: original (uncompressed), 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone, and their agreement was statistically significant, but there were large variations among readers as well as within individual readers. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at a significance level of 0.05. Also, 10:1 compressed images produced with the interframe coding algorithm do not show any significant differences from the originals at the 0.05 level.
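The analysis-of-variance step can be illustrated with a minimal one-way ANOVA F statistic computed directly from its definition (assuming NumPy; the quality scores below are illustrative, not the study's data, and the full study would use a design accounting for repeated readings per reader):

```python
import numpy as np

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for quality scores grouped by
    compression ratio: ratio of between-group to within-group
    mean squares."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = sum(len(g) for g in groups)   # total observations
    k = len(groups)                   # number of compression levels
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)
```

A large F relative to the F distribution's critical value at the 0.05 level indicates that mean quality scores differ across compression ratios.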
Workstations are becoming more commonly used in medical environments, and are being used increasingly for viewing medical images. In most clinical environments, counter and wall space is not readily available, so there is strong motivation to make the equipment small while making the displayed images as large as possible to preserve image detail. This precludes the use of a separate text monitor for user interaction, and any menus or displays on the image monitor consume valuable space -- pixels backed with 256-shade grayscale capability. We have developed a method for user interaction which requires essentially no screen area for permanent menus, but uses much of the image screen for `invisible' menus -- menus in windows which are always open (active) but obscure the underlying image only for the small portion of time that they are actually in use. These invisible menus respond to movements of the mouse, and become visible when the mouse is moved into the window which holds the menu. The menu becomes invisible again after a period of mouse inactivity. Because these windows are always active, a given item may be selected multiple times by simply pressing the mouse button repeatedly. This `type-ahead' capability is not normally available on systems which do not include a keyboard, and may easily be used for common repetitive functions, analogous to pressing the NEXT IMAGE key multiple times. The invisible window concept can also be used to display analysis results, so that the results do not cover any of the active image area, but are immediately available for on-screen viewing.
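The show/hide logic of an invisible menu can be sketched as a small state machine, independent of any particular window system (a sketch of the interaction logic only; the class, region format, and timeout below are illustrative assumptions):

```python
class InvisibleMenu:
    """Hover-revealed menu: visible while the pointer is inside the
    menu window, hidden again after a period of pointer inactivity."""

    def __init__(self, region, hide_after=3.0):
        self.region = region          # (x0, y0, x1, y1) in screen pixels
        self.hide_after = hide_after  # seconds of inactivity before hiding
        self.visible = False
        self._last_motion = 0.0

    def on_mouse_move(self, x, y, now):
        """Pointer moved: show the menu iff the pointer is inside it."""
        self._last_motion = now
        x0, y0, x1, y1 = self.region
        self.visible = x0 <= x <= x1 and y0 <= y <= y1

    def on_tick(self, now):
        """Periodic timer: hide the menu after inactivity so the
        underlying image is obscured only while the menu is in use."""
        if self.visible and now - self._last_motion >= self.hide_after:
            self.visible = False
```

Because the window stays active even while hidden, button presses can be delivered to it repeatedly, which is what enables the type-ahead behavior described above.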
Typical user interaction in image processing is with command line entries, pull-down menus, or text menu selections from a list, and as such is not generally graphical in nature. Although applying these interactive methods to construct more sophisticated algorithms from a series of simple image processing steps may be clear to engineers and programmers, it may not be clear to clinicians. A solution to this problem is to implement a visual programming language that uses visual representations to express image processing algorithms. Visual representations promote a more natural and rapid understanding of image processing algorithms by providing more visual insight into what the algorithms do than the interactive methods mentioned above can provide. Individuals accustomed to dealing with images will be more likely to understand an algorithm that is represented visually. This is especially true of referring physicians, such as surgeons in an intensive care unit. With the increasing acceptance of picture archiving and communications system (PACS) workstations and the trend toward increasing clinical use of image processing, referring physicians will need to learn more sophisticated concepts than simply image access and display. If the procedures that they perform commonly, such as window width and window level adjustment and image enhancement using unsharp masking, are depicted visually in an interactive environment, it will be easier for them to learn and apply these concepts. The software described in this paper is a visual programming language for image processing which has been implemented on the NeXT computer using NeXTstep user interface development tools and other tools in an object-oriented environment. The concept is based upon the description of a visual language titled `Visualization of Vision Algorithms' (VIVA). Iconic representations of simple image processing steps are placed into a workbench screen and connected together into a dataflow path by the user.
As the user creates and edits a dataflow path, more complex algorithms can be built on the screen. Once the algorithm is built, it can be executed, its results can be reviewed, and operator parameters can be interactively adjusted until an optimized output is produced. The optimized algorithm can then be saved and added to the system as a new operator. This system has been evaluated as a graphical teaching tool for window width and window level adjustment, image enhancement using unsharp masking, and other techniques.
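The dataflow execution described above can be sketched in miniature: each operator is a node, and connecting nodes builds the algorithm, which is then executed end to end (assuming NumPy; the operator names, parameters, and the 3x3 box blur used for unsharp masking are illustrative choices, not the system's actual implementations):

```python
import numpy as np

def window_level(img, level, width):
    """Map [level - width/2, level + width/2] to the 0..255 display range."""
    lo = level - width / 2.0
    out = (img.astype(float) - lo) * (255.0 / width)
    return np.clip(out, 0, 255)

def unsharp_mask(img, amount=1.0):
    """Sharpen by adding back the difference from a 3x3 box blur."""
    img = img.astype(float)
    pad = np.pad(img, 1, mode="edge")
    blur = sum(pad[r:r + img.shape[0], c:c + img.shape[1]]
               for r in range(3) for c in range(3)) / 9.0
    return img + amount * (img - blur)

def run_pipeline(img, ops):
    """Execute a dataflow path: a list of (operator, kwargs) nodes."""
    for op, kwargs in ops:
        img = op(img, **kwargs)
    return img
```

A user's on-screen graph corresponds to the `ops` list; saving an optimized path as a new operator amounts to wrapping such a list in a single function.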
Many issues must be addressed and resolved in order to bring a complete imaging workstation into everyday use by radiologists and medical researchers. Important design issues for an imaging workstation include image quality, system response time, the user interface, and image storage. The Image Computing Systems Laboratory (ICSL) at the University of Washington has been developing a series of inexpensive, high-performance graphics and image processing workstations by taking advantage of sharply decreasing hardware costs, increasingly powerful VLSI chips, and versatile personal computers and workstations. After gaining experience with two previous image processing systems, two third-generation workstations were developed: UWGSP3 (University of Washington Graphics System Processor #3), based on the NeXT Computer, and UWGSP3-HI, a host-independent version that can work with any host computer via an interface card. UWGSP3, a highly integrated, low-cost workstation, is a complete image display and computing system capable of meeting many of the requirements of a medical imaging workstation, provided that a suitable user interface is developed. To demonstrate this capability, RadGSP, a prototype user interface and application software for radiologist use, has been developed. This paper will first describe the UWGSP3-HI system for background before describing the implementation and evaluation of RadGSP, and then the radiology imaging workstation research currently in progress at ICSL.
Quality Assessment or Quality Assurance (QA) in PACS has its roots in QA procedures which have been developed in the course of many years of radiology practice. The need for QA in all aspects of radiology is being escalated by more complex technology, administrative controls, and economic factors. Growth in PACS is leading to increased demand for QA at the system level, as well as for individual PACS components and modalities.
As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. A series of digital phantoms has been developed for display on either a CT9800 or HiLight Advantage scanner. The phantom images have been stored on magnetic tape in the standard tape archive format used by General Electric, so that the images may be loaded onto the scanner at any time. These images are then captured using a commercial video image capture board in a PC/286 computer, where the images are not only displayed but also analyzed with an automated process implemented in a computer program on the same PC. Results of the analyses are saved, together with the date and time of image acquisition, so that the results can be displayed graphically as trend plots.
Images stored in the central database of a picture archiving and communications system (PACS) must be identified and described with textual information such as patient name, exam procedures, and diagnostic reports. Most radiology departments already use a radiology information system (RIS) for departmental administrative functions, such as exam scheduling, film tracking, and billing, so this information is readily available. Ideally, the PACS and the RIS should be combined into one complete system, but this is not easily achievable due to the costly investment of replacing the current RIS and the slow clinical acceptance of PACS. Alternatively, for a successful PACS operation using the existing RIS, the two systems should be linked together so that the PACS can freely retrieve the RIS information whenever the need arises.

As part of the PACS evaluation project, an interface system linking our RIS (DECrad) to our PACS (AT&T CommView) has been implemented using an external microcomputer system running UNIX. The interface system handles the proprietary communication protocols on each side, translating the information from one format to the other. Although the interface system translated the RIS transactions and transferred them to the PACS, they were not always associated with the image data at the PACS. This paper presents the design concept and implementation details of the interface system we have developed. The performance and problem areas of the interface system are also investigated, and future directions for a better implementation are suggested.
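The translation step at the heart of such an interface can be sketched as follows. The field layouts below are hypothetical placeholders: the real DECrad and CommView protocols are proprietary and differ in detail from this sketch.

```python
# Hypothetical RIS transaction layout: pipe-delimited fields.
RIS_FIELDS = ["patient_name", "patient_id", "exam_code", "exam_date"]

def parse_ris(record, sep="|"):
    """Parse a delimited RIS transaction into a field dictionary."""
    values = record.strip().split(sep)
    return dict(zip(RIS_FIELDS, values))

def to_pacs(fields):
    """Re-emit the transaction as key=value lines for the PACS side
    (an invented target format, standing in for the proprietary one)."""
    return "\n".join(f"{k.upper()}={v}" for k, v in fields.items())

# One transaction flowing through the translator:
message = to_pacs(parse_ris("DOE^JOHN|123456|CXR2|19940105"))
```

The association problem noted above arises when a key field (such as the patient or exam identifier) translated this way fails to match the identifier attached to the image data on the PACS side.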
As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself.

A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images transferred digitally on floppy disk to a PC/286 computer containing Optimas image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. The image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moiré patterns. Accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results.

Each image is described, together with the tests which were performed using it. One image contains alternating black and white lines, each one pixel wide, following equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this pattern correctly, others blur it to an average shade of gray and fail to resolve the lines, or produce horizontal or vertical streaking. While many of these results are significant from an engineering standpoint alone, there are clinical implications as well: some anatomy or pathology may not be visualized if an image capture system is used improperly.
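The slew-rate test pattern described above can be generated directly (assuming NumPy; the overall image height and the number of line pairs are illustrative choices, while the ten-pixel equilibration strips and one-pixel lines follow the description):

```python
import numpy as np

def slew_rate_pattern(height=64, strip=10, pairs=8):
    """Build the slew-rate test image: a black and a white equilibration
    strip, each ten pixels wide, followed by alternating one-pixel-wide
    black and white vertical lines (0 = black, 255 = white)."""
    cols = [np.zeros((height, strip)),           # black equilibration strip
            np.full((height, strip), 255.0)]     # white equilibration strip
    for _ in range(pairs):
        cols.append(np.zeros((height, 1)))       # one-pixel black line
        cols.append(np.full((height, 1), 255.0)) # one-pixel white line
    return np.hstack(cols)
```

A capture system with adequate slew rate reproduces the full black-to-white swing of each line pair; an inadequate one averages the lines toward a uniform gray, which is exactly what the visual and statistical comparisons detect.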