We propose a joint acquisition-processing solution to the problem of field map estimation. Our acquisition method captures data at three echo times carefully chosen to yield optimal field map estimates when processed with the corresponding algorithm. We show that, over an arbitrary spectral range of inhomogeneity values, our method is not subject to the traditional noise-bandwidth tradeoff. The resulting implications include improved robustness, enhanced spectral estimation, and elimination of the need for spatial phase unwrapping. Our simulations show multi-fold improvements in the quality of field map estimates compared to existing methods, and our phantom data confirms these gains.
The widespread use of multi-detector CT scanners has been associated with a remarkable increase in the number of CT slices as well as a substantial decrease in the average thickness of individual slices. This larger number of thinner slices has markedly increased the archival and network bandwidth requirements associated with storing and transmitting these studies. We demonstrate that although compression can be used to decrease the size of these image files, thinner CT slices are less compressible than thicker slices, whether measured by a visual discrimination model (VDM) or by the more traditional peak signal-to-noise ratio (PSNR). The former technique (VDM) suggests that the discrepancy in compressibility between thin and thick slices grows at higher compression levels, while the latter (PSNR) suggests that it does not. Previous studies by us and others suggest that the VDM corresponds more closely with human observers than does the PSNR. We also demonstrate that the poor relative compressibility of thin sections can be substantially mitigated by JPEG 2000 3D compression, which yields superior image quality at a given level of compression in comparison with 2D compression. Moreover, under 3D compression, thin and thick sections are approximately equally compressible, with little change as the compression level increases.
Hyperspectral images are acquired incrementally in a “push-broom” fashion by on-board sensors. Since these images are highly voluminous, buffering an entire image before compression requires a large buffer and causes latency. Incremental compression schemes work on small chunks of raw data as soon as they are acquired and help reduce buffer memory requirements. However, incremental processing leads to large variations in quality across the reconstructed image. The solution to this problem lies in using carefully designed rate control algorithms. We propose two such “leaky bucket” rate control algorithms that can be employed when incrementally compressing hyperspectral images using the JPEG2000 compression engine. They are the Multi-Layer Sliding Window Rate Controller (M-SWRC) and the Multi-Layer Extended Sliding Window Rate Controller (M-EWRC). Both schemes perform rate control using the fine granularity afforded by JPEG2000 bitstreams. The proposed algorithms have low memory requirements since they buffer compressed bitstreams rather than raw image data. Our schemes enable SNR scalability through the use of quality layers in the codestream and produce JPEG2000 compliant multi-layer codestreams at a fraction of the memory used by conventional schemes. Experiments show that the proposed schemes provide significant reduction in quality variation with no loss in mean overall PSNR performance.
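The core idea of a sliding-window leaky-bucket rate controller can be illustrated with a toy sketch. This is a hypothetical simplification, not the paper's M-SWRC: it assumes each incoming chunk arrives as a list of quality-layer sizes (most important first) and truncates layers so that the bits emitted over the last `window` chunks stay within a per-chunk budget.

```python
from collections import deque

class SlidingWindowRateController:
    """Toy leaky-bucket rate controller (illustrative, not M-SWRC).

    Keeps the total bits emitted over the most recent `window` chunks
    at or below `budget_per_chunk * window` by dropping the least
    important quality layers of each new chunk."""

    def __init__(self, budget_per_chunk, window):
        self.budget = budget_per_chunk
        self.recent = deque(maxlen=window)  # bits emitted per recent chunk

    def admit(self, layers):
        """layers: layer sizes in bits, most important first.
        Returns how many layers are kept for this chunk."""
        spent = sum(self.recent)
        # Allowance left in the leaky bucket for this chunk's window.
        slots = min(len(self.recent) + 1, self.recent.maxlen)
        allowance = self.budget * slots - spent
        kept, used = 0, 0
        for size in layers:
            if used + size > allowance:
                break  # truncate remaining (finer) layers
            used += size
            kept += 1
        self.recent.append(used)
        return kept
```

Note that the controller buffers only per-chunk bit counts (and, in a real system, the compressed bitstreams themselves), never the raw image data, which is the source of the memory savings the abstract describes.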
We develop novel methods for compressing volumetric imagery generated by single-platform (mobile) range sensors, exploiting the correlation structure inherent in multiple views to improve compression efficiency. We first evaluate the performance of various two-dimensional (2D) compression schemes on the traditional 2D range representation. We then introduce a three-dimensional (3D) representation of the range measurements and show that, under lossless compression, 3D volumes compress 60% more efficiently than 2D images.
Streaming media over heterogeneous lossy networks and time-varying communication channels is an active area of research. Several video coders that operate under the varying constraints of such environments have been proposed recently. Scalability has become a very desirable feature in these video coders. In this paper, we make use of a leaky-bucket rate allocation method (DBRC) that provides constant quality video under buffer constraints, and extend it in two advantageous directions. First, we present a rate control mechanism for 3D wavelet video coding using DBRC. Second, we enhance the DBRC so that it can be utilized when multiple sequences are multiplexed over a single communications channel. The goal is to allocate the capacity of the channel between sequences to achieve constant quality across all sequences.
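The multiplexing goal above, equal quality across all sequences sharing one channel, can be sketched under a deliberately simple rate-distortion model. Assuming (hypothetically) that each sequence's distortion behaves as D_i = c_i / r_i for a complexity constant c_i, equalizing distortion means allocating rate in proportion to complexity; the paper's DBRC is far more sophisticated, but the principle is the same.

```python
def allocate_channel(capacity, complexities):
    """Split channel capacity among sequences so all reach equal
    distortion under the toy model D_i = c_i / r_i.

    Setting c_1/r_1 = c_2/r_2 = ... implies r_i proportional to c_i,
    so each sequence gets a share of capacity weighted by complexity."""
    total = sum(complexities)
    return [capacity * c / total for c in complexities]
```

For example, with a 300 kbps channel and complexities [1, 2, 3], the allocation [50, 100, 150] gives every sequence the same model distortion (0.02), i.e. constant quality across sequences.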
Proc. SPIE 4472, Applications of Digital Image Processing XXIV
KEYWORDS: Image compression, Detection and tracking algorithms, Video, Distortion, Computer programming, Video surveillance, JPEG2000, Video compression, Digital video recorders, Wireless communications
With the increasing importance of heterogeneous networks and time-varying communication channels, such as packet-switched networks and wireless communications, fine scalability has become a highly desirable feature in both image and video coders. A single highly scalable bitstream can provide precise rate control for constant bitrate (CBR) traffic and accurate quality control for variable bitrate (VBR) traffic. In this paper, we propose two methods that provide constant-quality video under buffer constraints. These methods can be used with any scalable coder. Experimental results using the Motion JPEG2000 coder demonstrate substantial benefits.