With recent advances in wireless communication and personal mobile handheld devices, medical visualization on handheld devices is an emerging technology expected to provide advanced services for physicians, especially in image-based diagnosis. In this paper, we implement an easy-to-use medical visualization system on a mobile handheld device over WLAN. The system gives physicians a convenient way to interactively access image data from a Pocket PC without being restricted to a fixed location. System architecture, technical problems, and solutions are discussed. Because of the large gap in processing power between the image server and the Pocket PC client, the client is used only to display images and to interactively edit visualization parameters; most rendering tasks are offloaded to the server. Since wireless bandwidth is limited, we adopt a simple image compression scheme to achieve the best trade-off between computational complexity and compression efficiency. An RS (Reed-Solomon) code is employed for forward error correction over the UDP socket connection. Experimental results show that the system is practical for clinical diagnosis: the frame rate for lossless transmission of 256x256 24-bit color images reaches 5 fps over 802.11b. We believe the system will provide a valuable service for physicians.
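The forward-error-correction idea above can be illustrated with a minimal sketch. The paper uses a Reed-Solomon code; a full RS codec is lengthy, so this toy version demonstrates the same packet-level principle with a single XOR parity packet appended to each group of UDP payloads, which lets the receiver rebuild any one lost packet without retransmission. All names here are illustrative, not from the paper.

```python
def add_parity(packets):
    """Append one XOR parity packet to a group of equal-length packets."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return packets + [parity]

def recover(received):
    """received: the group with exactly one packet replaced by None (lost)."""
    lost = received.index(None)
    if lost == len(received) - 1:          # only the parity packet was lost
        return received[:-1]
    size = len(next(p for p in received if p is not None))
    acc = bytes(size)
    for p in received:
        if p is not None:
            acc = bytes(a ^ b for a, b in zip(acc, p))
    data = received[:-1]
    data[lost] = acc   # XOR of all surviving packets rebuilds the lost one
    return data
```

A real RS(n, k) code generalizes this, tolerating up to n-k lost packets per group at the cost of n-k parity packets.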
In this paper, we propose a multiple description distributed image coding system for mobile wireless transmission. The innovations of the proposed system are twofold. First, when MDC is applied to wavelet subband based image coding, it is possible to introduce correlation between the descriptions in each subband. At the encoder, this correlation information is encoded by a systematic Reed-Solomon (RS) encoder, and only the parity check bits are sent over the channel. At the receiver, when some descriptions are lost but their correlation information is available, the Wyner-Ziv decoder can still recover the lost descriptions by using the partly received description as a noisy version and the correlation information as side information. Second, within each description, we use multiple bitstream image coding to achieve error-robust transmission. In conventional entropy subband coding, the first bit error may force the decoder to discard all subsequent bits. We develop a multiple bitstream image encoding based on the decomposition of images in the wavelet domain, and show that such decomposition reduces error propagation in transmission, thus achieving graceful scaling of PSNR performance as the BER changes. Experimental results show that the PSNR is improved at the same coding rate.
Multiple description coding (MDC) is a source coding technique that encodes the source information into multiple descriptions. When these descriptions are transmitted over different channels in a packet network or an error-prone wireless environment, graceful degradation can be achieved even if some descriptions are not received. When MDC is applied to wavelet subband based image coding, it is possible to introduce correlation between the descriptions in each subband. In this paper, we use this correlation, together with a potentially error-corrupted description, as side information in decoding, formulating MDC decoding as a Wyner-Ziv decoding problem. When only some descriptions are lost, their correlation information remains available, so the proposed Wyner-Ziv decoder can recover a description by using the correlation information and the error-corrupted description as side information. High-quality reconstruction can still be obtained by combining the decoded descriptions from the Wyner-Ziv decoder. The proposed scheme exploits the correlation information efficiently, making the system more robust to channel error corruption. Experimental results show that, compared to conventional multiple description wavelet-based image coding, the PSNR of the received and decoded image is improved noticeably at the same bit rate.
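The parity-only, side-information decoding described above can be sketched on a toy scale. The papers use a systematic RS code; here a (7,4) Hamming code stands in for it, purely for illustration: the encoder transmits only the 3 parity bits, and the decoder combines them with its noisy local copy of the 4 data bits (the side information) to correct up to one bit error, which is the Wyner-Ziv decoding pattern in miniature. Function names are illustrative assumptions.

```python
def parity_bits(d):
    """d: 4 data bits -> the 3 Hamming (7,4) parity bits (all that is sent)."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, p3]

def wyner_ziv_decode(side_info, parity):
    """Correct the (possibly 1-bit corrupted) side_info using received parity."""
    # Assemble the codeword at positions 1..7 (parity at 1, 2, 4); c[0] unused.
    c = [0, parity[0], parity[1], side_info[0], parity[2],
         side_info[1], side_info[2], side_info[3]]
    s1 = c[1] ^ c[3] ^ c[5] ^ c[7]
    s2 = c[2] ^ c[3] ^ c[6] ^ c[7]
    s3 = c[4] ^ c[5] ^ c[6] ^ c[7]
    err = s1 + 2 * s2 + 4 * s3   # syndrome = position of the single-bit error
    if err:
        c[err] ^= 1
    return [c[3], c[5], c[6], c[7]]
```

An RS code plays the same role at practical block lengths, with the erasure/error tolerance set by the number of parity symbols sent.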
Imaging sensors provide intuitive visual information for quick recognition and decision making, but they usually generate vast amounts of data. Processing the image data collected in a sensor network for energy-efficient transmission therefore poses a significant technical challenge. In particular, when a cluster of imaging sensors is activated to track a moving target, multiple sensors may collect similar visual information simultaneously. With such correlated image data, we need to intelligently reduce the redundancy among neighboring sensors so as to minimize the energy spent on transmission, the primary source of sensor energy consumption. We propose in this paper a novel collaborative image transmission scheme for wireless sensor networks. First, to exploit the spatial correlation between images acquired by neighboring sensors, we apply a shape matching method to coarsely register the images and find their maximal overlap. A transformation is generated from the matching results; we encode the reference image together with the difference between the transformed image and the reference image, and transmit the coded bit stream along with the transformation parameters. This significantly reduces transmission energy compared with transmitting the two images independently. Second, to exploit the temporal correlation among images from the same sensor, we assume that the imaging sensors and the background scene remain stationary over the data acquisition period. For a given image sequence, the background image is transmitted only once, and a simple background subtraction method is employed to detect targets. Whenever targets are detected, only the target regions and their spatial locations are transmitted to the monitoring center, where the whole image is reconstructed by fusing the background with the target regions at their spatial locations, further reducing energy consumption. Experimental results show that the transmission energy can be greatly reduced.
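The background-subtraction transmission step can be sketched as follows. This is a minimal illustration, not the paper's implementation: after the background has been sent once, only the bounding box of pixels that differ from it, plus that box's location, would be transmitted, and the monitoring center pastes the patch back onto its stored background. The threshold and function names are assumptions.

```python
def detect_region(background, frame, thresh=10):
    """Return ((row, col), patch) for the changed region, or None if no target."""
    coords = [(r, c)
              for r in range(len(frame)) for c in range(len(frame[0]))
              if abs(frame[r][c] - background[r][c]) > thresh]
    if not coords:
        return None
    r0 = min(r for r, _ in coords); r1 = max(r for r, _ in coords)
    c0 = min(c for _, c in coords); c1 = max(c for _, c in coords)
    patch = [row[c0:c1 + 1] for row in frame[r0:r1 + 1]]
    return (r0, c0), patch   # only this pair is sent per frame

def reconstruct(background, location, patch):
    """Monitoring-center side: fuse the patch back onto the stored background."""
    image = [row[:] for row in background]
    r0, c0 = location
    for dr, row in enumerate(patch):
        image[r0 + dr][c0:c0 + len(row)] = row
    return image
```

The energy saving comes from the patch typically being far smaller than a full frame; frames with no detected target cost almost nothing to report.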
We present a new proxy-based system that allows public-use handheld devices to instantly access Low Resolution Picture Taking (LRPT) satellite weather image data and display regional weather images. First, location-based transcoding converts the non-frame-based satellite image into frame-based CIF/QCIF images for the handheld device; GPS information from an expansion module is used to locate the region of interest. A robust fixed-length joint source and channel coding scheme is then implemented to achieve robust wireless transmission. Experimental results show that the proposed system is well suited to the time-varying, low-bandwidth wireless channel and the power constraints of wireless handheld devices.
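The GPS-driven region-of-interest selection might look like the following sketch. The linear latitude/longitude-to-pixel mapping, the coverage bounds, and all names are illustrative assumptions, not details from the paper; only the idea of cropping a QCIF-sized (176x144) window around the GPS fix inside the large satellite image reflects the description above.

```python
QCIF_W, QCIF_H = 176, 144  # frame size delivered to the handheld device

def roi_window(lat, lon, img_w, img_h,
               lat_top=60.0, lat_bottom=0.0, lon_left=70.0, lon_right=140.0):
    """Return the (x0, y0) top-left corner of a QCIF crop centered on the
    GPS position, assuming a simple linear geographic-to-pixel mapping."""
    x = (lon - lon_left) / (lon_right - lon_left) * img_w
    y = (lat_top - lat) / (lat_top - lat_bottom) * img_h
    # Center the window on the fix, clamped to stay inside the image.
    x0 = min(max(int(x) - QCIF_W // 2, 0), img_w - QCIF_W)
    y0 = min(max(int(y) - QCIF_H // 2, 0), img_h - QCIF_H)
    return x0, y0
```

The proxy would then transcode only this window into the frame-based stream, keeping both bandwidth and handheld decoding cost low.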
We describe an image processing algorithm that identifies the anatomic landmarks of the cervix on a transvaginal ultrasound image and determines the standard cervical length. The system is composed of four stages. The first stage is adaptive speckle suppression using a variable-length sticks algorithm. The second stage locates the internal cervical opening, or 'os', using region-based segmentation. The third stage delineates the cervical canal. The fourth stage uses gray-level summation patterns and prior knowledge to first localize the tissue boundary of the external cervix; a template is then used to determine the specific location of the external os. The cervical length is then computed according to the image scale. For validation, 101 cervical ultrasound images were selected from a series of 37 examinations performed on 17 patients over an 8-month period. Repeated measurements of cervical length using the computer-assisted method were compared with those of two experienced sonographers. The mean coefficient of variation for serial measurements was 1.1% for the computer-assisted method and averaged 4.7% for the manual method. In a pairwise comparison, the mean cervical length from the computer method did not differ from the mean manual cervical length.
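The validation metric reported above (mean coefficient of variation across serial measurements) can be computed as in this small sketch, assuming the sample standard deviation is used; the function names and sample values are illustrative, not from the study.

```python
from statistics import mean, stdev

def coefficient_of_variation(measurements):
    """CV in percent (sample std / mean) for repeated measurements of one cervix."""
    return stdev(measurements) / mean(measurements) * 100.0

def mean_cv(series):
    """Average CV over a collection of serial-measurement groups."""
    return mean(coefficient_of_variation(s) for s in series)
```

A lower mean CV indicates better repeatability, which is how the 1.1% (computer-assisted) vs 4.7% (manual) figures compare the two methods.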