We present two methods for protecting a region of interest (ROI) in a compressed medical image transmitted across a lossy packet network such as the Internet or a wireless channel. We begin with a high-quality wavelet-based coder, the Set Partitioning in Hierarchical Trees (SPIHT) algorithm, which orders data progressively by coding the globally important information first. We then compress the ROI to a higher quality than the rest of the image by scaling the wavelet coefficients corresponding to the ROI. This approach moves ROI information earlier in the bit stream. Finally, we add more redundancy to the ROI than to the rest of the image by two techniques. With MD-SPIHT, we repeat wavelet coefficient trees corresponding to the ROI and code them to higher bit rates than the background trees. With ULP-FEC, we use forward error correction (FEC) in an unequal loss protection framework. We find that both methods increase the probability of receiving a high-quality ROI in the presence of packet loss.
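The coefficient-scaling step described above can be sketched in a few lines. This is an illustrative toy, not the paper's SPIHT pipeline: it uses a single-level Haar transform and an arbitrary gain of 4, and simply amplifies the subband coefficients that fall inside a (downsampled) ROI mask so a magnitude-ordered coder would emit them earlier.

```python
import numpy as np

def haar2d(img):
    # One-level 2D Haar transform: returns LL, LH, HL, HH subbands.
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def scale_roi_coefficients(subband, roi_mask, gain=4.0):
    # Multiply coefficients inside the (downsampled) ROI by `gain`.
    # In a magnitude-ordered progressive coder, larger coefficients are
    # coded first, so this moves ROI information earlier in the bit stream.
    out = subband.copy()
    out[roi_mask] *= gain
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = haar2d(img)
roi = np.zeros((4, 4), dtype=bool)
roi[:2, :2] = True                      # hypothetical 2x2 ROI in the subband
ll_scaled = scale_roi_coefficients(ll, roi)
```

At the decoder, the same mask and gain would be used to undo the scaling after the inverse transform of the received coefficients.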
This paper presents a methodology for model-based restoration of degraded document imagery. The methodology has the advantages of being able to adapt to nonuniform page degradations and of being based on a model of image defects that is estimated directly from a set of calibrating degraded document images. Further, unlike other global filtering schemes, our methodology filters only words that have been misspelled by the OCR with high probability. In the first stage of the process, we extract a training sample of candidate misspelled word subimages from the set of calibration images before and after the degradation that we wish to undo. These word subimages are registered to extract defect pixels. The second stage of our methodology uses a vector-quantization-based algorithm to construct a summary model of the defect pixels. The final stage of the algorithm uses the summary model to restore degraded document images. We evaluate the performance of the methodology for a variety of parameter settings on a real-world sample of degraded FAX-transmitted documents. The methodology eliminates up to 56.4% of the OCR character errors introduced as a result of FAX transmission for our sample experiment.
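The second stage above builds a vector-quantization summary of the defect pixels. As a hedged sketch, plain k-means (Lloyd iterations) is the textbook way to train a VQ codebook; the paper's actual codebook design, vector dimension, and distortion measure are not specified here, so everything below is illustrative.

```python
import numpy as np

def vq_codebook(vectors, k, iters=20, seed=0):
    # Train a k-entry VQ codebook with plain k-means (Lloyd iterations);
    # a stand-in for the paper's VQ-based defect summarization.
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), size=k, replace=False)].astype(float)
    labels = np.zeros(len(vectors), dtype=int)
    for _ in range(iters):
        # Assign each training vector to its nearest codeword.
        dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each codeword to the centroid of its assigned vectors.
        for c in range(k):
            members = vectors[labels == c]
            if len(members):
                codebook[c] = members.mean(axis=0)
    return codebook, labels

# Hypothetical "defect pixel" feature vectors drawn from two distinct patterns.
defects = np.vstack([np.zeros((10, 2)), np.full((10, 2), 10.0)])
codebook, labels = vq_codebook(defects, k=2)
```

Restoration would then map each observed defect vector to its nearest codeword and apply the correction associated with that codebook entry.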
Adaptive histogram equalization is a contrast enhancement technique in which each pixel is remapped to an intensity proportional to its rank among surrounding pixels in a selected neighborhood. We present work in which adaptive histogram equalization is performed on the codebook of a tree-structured vector quantizer so that encoding with the resulting codebook performs both compression and contrast enhancement. The algorithm was tested on magnetic resonance brain scans from different subjects, and the resulting images were significantly contrast-enhanced.
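The rank-based remapping that defines adaptive histogram equalization can be sketched directly. This is the per-pixel form applied to an image, assuming a square sliding window and a mid-rank rule for ties; the paper applies the same idea to codebook vectors rather than to every pixel.

```python
import numpy as np

def adaptive_hist_eq(img, radius=1, levels=256):
    # Remap each pixel to an intensity proportional to its rank among
    # the pixels in its (2*radius+1) x (2*radius+1) neighborhood.
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    padded = np.pad(img, radius, mode='edge')  # replicate borders
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Mid-rank: count strictly smaller neighbors, plus half of ties.
            rank = np.sum(win < img[i, j]) + 0.5 * np.sum(win == img[i, j])
            out[i, j] = (levels - 1) * rank / win.size
    return out

img = np.zeros((3, 3))
img[1, 1] = 10.0                    # one bright pixel among dark neighbors
enhanced = adaptive_hist_eq(img)
```

A pixel brighter than all its neighbors maps near the top of the intensity range regardless of the absolute contrast, which is what makes the method adaptive to local context.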
Image compression at rates of 10:1 or greater could make picture archiving and communication systems (PACS) much more responsive and economically attractive. A protocol is described for subjective and objective evaluation of the fidelity of compressed/decompressed images compared to originals. The results of its application to four representative and promising compression methods are presented. The four compression methods examined are predictive pruned tree-structured vector quantization, fractal compression, the full-frame discrete cosine transform with equal weighting of block bit allocation, and the full-frame discrete cosine transform with human visual system weighting of block bit allocation. A protocol was developed for side-by-side observer comparison of reconstructed images with originals. Three 1024 x 1024 computed radiography (CR) images and two 512 x 512 x-ray computed tomography (CT) images were viewed at six bit rates by nine radiologists at the University of Washington Medical Center. The radiologists' subjective evaluations of image fidelity were compared to calculations of mean square error for each decompressed image.
Image compression at rates of 10:1 or greater could make PACS much more responsive and economically attractive. This
paper describes a protocol for subjective and objective evaluation of the fidelity of compressed/decompressed images to the
originals and presents the results of its application to four representative and promising compression methods. The methods
examined are predictive pruned tree-structured vector quantization, fractal compression, the discrete cosine transform with equal
weighting of block bit allocation, and the discrete cosine transform with human visual system weighting of block bit allocation.
Vector quantization is theoretically capable of producing the best compressed images, but it has proven difficult to
implement effectively. It has the advantage that it can reconstruct images quickly through a simple lookup table. Its
disadvantages are that codebook training is required, the method is computationally intensive, and achieving optimum
performance would require prohibitively long vector dimensions. Fractal compression is a relatively new compression
technique, but it has produced satisfactory results while being computationally simple. It is fast at both image compression
and image reconstruction. Discrete cosine transform techniques reproduce images well, but they have traditionally been
hampered by the need for intensive computing to compress and decompress images.
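The full-frame DCT approach can be illustrated compactly. This sketch is not the paper's bit-allocation scheme: instead of allocating bits per block, it simply keeps the largest fraction of transform coefficients and zeroes the rest, which shows the transform-domain energy compaction that DCT coding exploits.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows are basis vectors).
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)
    return m

def dct_compress(img, keep=0.1):
    # Full-frame 2D DCT, zero all but the largest `keep` fraction of
    # coefficients, then invert. Illustrative coefficient selection only;
    # a real coder would quantize and entropy-code the kept coefficients.
    a = dct_matrix(img.shape[0])
    b = dct_matrix(img.shape[1])
    d = a @ img @ b.T
    thresh = np.quantile(np.abs(d), 1.0 - keep)
    d[np.abs(d) < thresh] = 0.0
    return a.T @ d @ b          # inverse of the orthonormal transform

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
lossless = dct_compress(img, keep=1.0)   # keep everything: exact recovery
```

Because the transform is orthonormal, keeping all coefficients reconstructs the image exactly; discarding small coefficients trades mean square error for rate.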
A protocol was developed for side-by-side observer comparison of reconstructed images with originals. Three 1024 x 1024
CR (computed radiography) images and two 512 x 512 x-ray CT images were viewed at six bit rates (0.2, 0.4, 0.6, 0.9,
1.2, and 1.5 bpp for CR, and 1.0, 1.3, 1.6, 1.9, 2.2, and 2.5 bpp for x-ray CT) by nine radiologists at the University of
Washington Medical Center. The CR images were viewed on a Pixar II Megascan (2560 x 2048) monitor and the CT images
on a Sony (1280 x 1024) monitor.
The radiologists' subjective evaluations of image fidelity were compared to calculations of mean square error (MSE),
normalized mean square error (NMSE), percentage mean square error (PMSE), and fractal normalized mean square error
(FMSE) for each compression method and bit rate.
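As a hedged sketch of the objective side of the evaluation, the distortion measures can be computed as below. These are common textbook definitions; the paper's exact normalizations (and its fractal NMSE variant) may differ, so treat the formulas as assumptions.

```python
import numpy as np

def mse(orig, recon):
    # Mean square error between original and reconstructed images.
    return np.mean((orig.astype(float) - recon.astype(float)) ** 2)

def nmse(orig, recon):
    # MSE normalized by the mean energy of the original
    # (one common convention; the paper's normalization may differ).
    return mse(orig, recon) / np.mean(orig.astype(float) ** 2)

def pmse(orig, recon, peak=255.0):
    # MSE expressed as a percentage of the squared peak intensity
    # (assumed convention; peak depends on the image bit depth).
    return 100.0 * mse(orig, recon) / peak ** 2

orig = np.full((2, 2), 10.0)
recon = orig + 1.0               # every pixel off by one gray level
```

Comparing these numbers against the radiologists' ratings at each bit rate is what lets the study ask whether simple distortion measures track perceived fidelity.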