TOPICS: Data communications, Control systems, Data storage, Telecommunications, Very large scale integration, Vestigial sideband modulation, Broadband telecommunications, Network architectures, Process control, Magnetism
A high-speed hardware architecture for an experimental high-definition videotex system for a broadband integrated services digital network is introduced. The key technologies required are high-speed protocol processing, high-speed data transfer, and high-speed picture readout from disks. High-speed protocol processing using a newly developed virtual memory copy, content-rearrangement memory, two-bus architecture, and simultaneous editing and analyzing allows a requested 6-MB picture to be displayed within 3 s.
We propose a conditional median filter that preserves significantly more image detail than the conventional median filter when suppressing impulsive noise. It does so by bypassing filtering when the local signal variation is below an adaptively adjusted threshold. Another advantage of our algorithm is that it is simple to implement and can be used with any of the available fast median filter algorithms; in addition, it can be easily integrated into existing median filter hardware. We compare the complexity of our algorithm with that of another proposed algorithm with similar filtering objectives. The performance of the new algorithm on real images is compared to that of median and center-weighted median filters.
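The core idea of the conditional filter can be sketched in a few lines. This is a minimal illustration, not the published implementation: it uses a fixed threshold rather than the adaptively adjusted one the abstract describes, and the simple max-minus-min range stands in for the local signal-variation measure.

```python
def conditional_median_filter(img, threshold, radius=1):
    """Median-filter a 2-D image, but bypass filtering wherever the
    local variation (window max minus min) is below `threshold`.
    Fixed threshold used here for illustration; the proposed filter
    adapts it locally."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # border pixels pass through unchanged
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            window = [img[y + dy][x + dx]
                      for dy in range(-radius, radius + 1)
                      for dx in range(-radius, radius + 1)]
            # Bypass: low local variation means detail, not impulse noise.
            if max(window) - min(window) < threshold:
                continue
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out
```

Because the bypass test is cheap, the scheme composes with any fast median algorithm: the costly sort (or histogram update) runs only on the minority of windows that exceed the threshold.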
Thresholding has been extensively used to separate myocardium from two-dimensional echocardiograms. We present a critical review of the existing threshold-based methods and propose a new interactive method that uses the known average wall thickness at the indicated region to determine a reference global threshold. The thickness of the wall as seen in the thresholded image is important for quantification of cardiac parameters such as ventricular volume, ejection fraction, etc. In echocardiograms, owing to the characteristics of the imaging environment, the cardiac wall thickness depends on the threshold. Many existing methods concentrate on extracting continuous wall regions. In our scheme we select a threshold that yields walls of proper thickness, and then we attempt to obtain a continuous region. A user picks two points on a clearly visible section of the wall where the thickness is known. We compute a threshold by analyzing the regional histogram at that wall section so that the average thickness of the regional thresholded pattern is equal to the known wall thickness. This gives a reference threshold that is varied locally by regional three-dimensional morphology to obtain local thresholds. The thresholding scheme suppresses noise and generates smooth boundaries.
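The thickness-matching step can be illustrated with a simplified sketch. Here a 1-D intensity profile taken across the user-picked wall section stands in for the regional histogram analysis of the published method; the helper name and the profile-based formulation are assumptions for illustration only.

```python
def reference_threshold(profile, known_thickness_px):
    """Choose the gray level at which thresholding the cross-wall
    intensity `profile` yields a wall `known_thickness_px` pixels
    thick. Simplified stand-in for the regional-histogram analysis:
    scan candidate thresholds and keep the one whose above-threshold
    extent best matches the known thickness."""
    best_t, best_err = None, None
    for t in sorted(set(profile)):
        thickness = sum(1 for v in profile if v >= t)
        err = abs(thickness - known_thickness_px)
        if best_err is None or err < best_err:
            best_t, best_err = t, err
    return best_t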
One of the first steps needed to extract information from images for most machine vision applications is the segmentation of the image. We present a new segmentation algorithm for color images that combines both color space and spatial information. The algorithm is oriented to images that should exhibit clustering of the color space data, such as images of paper-based maps. The algorithm separates edge pixels from those in smooth regions and applies different segmentation algorithms to each group. The pixels
in smooth regions are used to segment the color space using a histogram analysis technique. These regions are then grown into the edge regions to classify the edge pixels. The algorithm is robust and fast, as verified by experimental results.
An image compression algorithm is described. The algorithm is an extension of the run-length image compression algorithm and its implementation is relatively easy. This algorithm was implemented and compared with other existing popular compression
algorithms and with the Lempel-Ziv (LZ) coding. The Lempel- Ziv algorithm is available as a utiilty in the UNIX operating system and is also referred to as the UNIX uncompress. Sometimes our algorithm is best in terms of saving memory space, and sometimes one of the competing algorithms is best. The algorithm is lossless, and the intent is for the algorithm to be used in computer graphics animated images. Comparisons made with the LZalgorithm indicate
that the decompression time using our algorithm is faster than that using the LZ algorithm. Once the data are in memory, a relatively simple and fast transformation is applied to uncompress the file.
We propose an algorithm in which a sequence of digital
halftone images is efficiently transmitted using encoding compatible with conventional facsimile devices. To enhance the CCITT standardized coding schemes for Group 3 and Group 4 facsimile apparatus,
pre-encoding is done on each image in the sequence: The image is either pre-encoded as a combination of bit representation of block means and an "error" image, according to the ToneFac algorithm, or it is pre-encoded as an "interimage" in which interframe
redundancy is converted into spatial redundancy, and the least "busy" of the above images is encoded and transmitted. The approach is referred to as the ToneSec algorithm.
An analytical approach is proposed to explain the appearance of unwanted low-frequency artifacts in halftoned images when using random dithering. The research is based on a theorem that relates the correlation of the input (continuous) gray-level signal to the correlation ofthe (halftone) binary output signal. This secondorder statistical analysis alludes to the claim that the introduction of low-frequency artifacts is inevitable, being an intrinsic property of the dithering process rather than of individual images or masks. In addition, high-frequency information in the continuous image is attenuated more than low-frequency information. This effect is enhanced for mean gray levels farther from mid-gray.
Reading performance, measured by lines of text read
during 30-mm sessions, and visual comfort, measured with a questionnaire at the end of each trial, were compared for a group of 15 subjects, with four trials on each of three monitor conditions: a VGA monitor where a mouse and scroll bar were used to advance the
text (VGA Scroll), an experimental VGA condition in which a single keystroke was used to advance the text (VGA Page), and a higher resolution dual-page (HiRes DP) monitor with a single-keystroke text advance. Luminance and text size were matched between the VGA
and HiRes DP. Font types were selected based on the most similar pair available on the two monitors. The primary visual differences in the displays were two pages of text displayed on the HiRes DP compared to one on the VGA, more frequent line wrapping on the
HiRes DP compared to the VGA, a higher dot density on the HiRes DP, a higher refresh rate on the HiRes DP, a difference in the font type, and the VGA was three phosphor, while the HiRes DP was monochrome-both appeared white. Significantly more lines were read with VGA Page compared to VGA Scroll (13.9%, p<0.03), a measure of the advantage of a single-keystroke text advance. In a comparison of the VGA Page to HiRes DP conditions in which only visual display differences existed, significantly more lines (17.4%, p <Ô.O1) were read and the symptom ratings were significantly better (p <0.02) on the HiRes DP monitor.