Traditional interpolation techniques consist of direct interpolation of the grey values. When user interaction is called for in image segmentation, these interpolation methods force the user to segment a much greater amount of data. To mitigate this problem, we previously introduced a shape-based interpolation method for grey-level data, generalizing the shape-based approach from binary to grey data, and showed preliminary evidence that it produced more accurate results than conventional grey-level interpolation methods. In this paper, concentrating on the 3D interpolation problem, we statistically compare the accuracy of eight different methods: nearest-neighbor, linear, grey-level cubic spline, grey-level modified cubic spline, the method of Goshtasby et al., and three methods from the grey-level shape-based class. A population of patient MR and CT 3D images is utilized for comparison. Each slice in these data sets is estimated by each interpolation method and compared to the original slice at the same location using three measures: mean-squared difference, number of sites of disagreement, and largest difference. The methods are statistically compared pairwise based on these measures. The shape-based methods statistically outperformed the other methods in all measures for all applications considered here.
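As a hedged illustration of the three comparison measures named above (not the authors' code), the sketch below scores an interpolated slice against the original slice at the same location; the tolerance used to count "sites of disagreement" is an assumption.

```python
import numpy as np

def slice_accuracy(original, estimated, tolerance=0):
    """Compare an interpolated slice against the original at the same location."""
    diff = original.astype(np.float64) - estimated.astype(np.float64)
    mean_squared_difference = float(np.mean(diff ** 2))
    sites_of_disagreement = int(np.count_nonzero(np.abs(diff) > tolerance))
    largest_difference = float(np.max(np.abs(diff)))
    return mean_squared_difference, sites_of_disagreement, largest_difference

# Example: estimate a missing slice by linear grey-level interpolation of its
# neighbours, then score it against the true slice.
volume = np.random.rand(5, 64, 64)        # synthetic stand-in for an MR/CT volume
estimate = 0.5 * (volume[1] + volume[3])  # linear interpolation of slice 2
print(slice_accuracy(volume[2], estimate))
```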
In this paper, a novel direction-based interpolation approach using discrete orthogonal polynomial decomposition is introduced. A 2D digital image is usually regarded as a sampling of an underlying 2D continuous function, called an image field. When the image field is treated as a scalar potential field, the interpolation problem becomes one of estimating the value at any point of a potential field as accurately as possible, given the values at some of its points. Both the edges of the image and the content of the objects are well preserved if the image is interpolated along the equipotential lines instead of along the coordinate axes. In this study, the equipotential direction at each pixel in the interpolated plane is calculated from the partial derivatives of the discrete orthogonal polynomial decomposition of the original image. For each point, the equipotential line through it is traced step by step, guided by the equipotential directions. The value of a point is interpolated linearly from the values of points with known values along the equipotential line. A refinement scheme is applied to interpolate the images to the desired scale. Experiments on a set of CT images show that this method not only preserves shape structure well, even for objects with complicated structures, but also has low time complexity.
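The sketch below illustrates the equipotential-direction idea in a minimal form: the equipotential line at a pixel is perpendicular to the local intensity gradient. The paper derives the partial derivatives from a discrete orthogonal polynomial decomposition; a plain finite-difference gradient is used here as a stand-in, which is an assumption.

```python
import numpy as np

def equipotential_directions(image):
    gy, gx = np.gradient(image.astype(np.float64))  # partial derivatives (rows, cols)
    # Rotate the gradient by 90 degrees to get the equipotential (tangent) direction.
    tx, ty = -gy, gx
    norm = np.hypot(tx, ty)
    norm[norm == 0] = 1.0                            # flat regions: leave direction zero
    return tx / norm, ty / norm

img = np.random.rand(32, 32)
ux, uy = equipotential_directions(img)
```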
Distance measurements of the vascular system of the brain can be derived from biplanar digital subtraction angiography (2p-DSA). The measurements are used for planning minimally invasive surgical procedures. Our 90-degree fixed-angle G-ring angiography system has the potential to acquire pairs of such images with high geometric accuracy. The sizes of vessels and aneurysms are estimated with a fast and accurate extraction method in order to select an appropriate surgical strategy. Distance computation from 2p-DSA is carried out in three steps. First, the boundary of the structure to be measured is detected based on zero-crossings and closeness to user-specified end points. Subsequently, the 3D location of the center of the structure is computed from the centers of gravity of its two projections. This location is used to reverse the magnification caused by the cone-shaped projection of the x-rays. Since exact measurements of possibly very small structures are crucial to usefulness in surgical planning, we identified mechanical and computational influences on the geometry which may affect measurement accuracy. A study with phantoms is presented that distinguishes between the different effects and enables the computation of an optimal overall accuracy. Comparing this optimum with results of distance measurements on phantoms whose exact size and shape are known, we found that the measurement error for structures of size 20 mm was less than 0.05 mm on average and 0.50 mm at maximum. The maximum achievable accuracy of 0.15 mm was in most cases exceeded by less than 0.15 mm. This accuracy surpasses by far the requirements of the surgical application mentioned above. The mechanical accuracy of the fixed-angle biplanar system meets the requirements for computing a 3D reconstruction of the small vessels of the brain. It also indicates that simple measurements will be possible on less accurate systems.
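A worked sketch of the magnification reversal, under the usual point-source assumption: a structure at source-to-object distance SOD projects with magnification M = SDD / SOD onto a detector at source-to-detector distance SDD, so the true size is the measured size divided by M. The distances in the example are illustrative, not values from the paper.

```python
def true_size(measured_size_mm, source_detector_mm, source_object_mm):
    """Reverse the cone-beam magnification of a projected measurement."""
    magnification = source_detector_mm / source_object_mm
    return measured_size_mm / magnification

# Example: a vessel measuring 26.0 mm on the detector, imaged with a 1000 mm
# source-to-detector distance and its centre located 770 mm from the source.
print(true_size(26.0, 1000.0, 770.0))   # -> 20.02 mm
```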
We have developed 3D image processing and display techniques that include image resampling, modification of MIP, volume rendering, and fusion of the MIP image with the volume-rendered image. This technique facilitates the visualization of the 3D spatial relationship between vasculature and surrounding organs by overlapping the MIP image on the volume-rendered image of the organ. We applied this technique to MR brain image data to produce an MR angiogram overlapped with a 3D volume-rendered image of the brain. The MIP technique was used to visualize the vasculature of the brain, and volume rendering was used to visualize the other brain structures. The two images are fused, after adjustment of the contrast and brightness levels of each image so that both the vasculature and the brain structure are well visualized, either by selecting the maximum value of each image or by assigning a different color table to each image. The resulting image visualizes both the brain structure and the vasculature simultaneously, allowing physicians to inspect their relationship more easily. The presented technique will be useful for neurosurgical planning.
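As a hedged sketch of the fusion step only (the MIP and volume rendering themselves are assumed to have been computed elsewhere), the two adjusted images can be combined either by a per-pixel maximum or by giving each image its own colour assignment:

```python
import numpy as np

def fuse_max(mip, rendered):
    """Per-pixel maximum fusion of two equally sized 8-bit images."""
    return np.maximum(mip, rendered)

def fuse_color(mip, rendered):
    """Colour fusion (illustrative choice): vasculature boosted in red, brain in grey."""
    return np.stack([np.maximum(mip, rendered), rendered, rendered], axis=-1)

mip = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
vol = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
fused = fuse_color(mip, vol)
```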
In this paper, we present a new deformable volume rendering algorithm. The volume deformation is modeled by a landmark-based volume morphing method using Hardy's scattered data interpolation. The algorithm is able to directly render the deformed volume without going through the expensive volume construction process. Piecewise linear approximation of the deformation function by adaptive space subdivision and template-based block projection are used to speed up the rendering process. Our algorithm can render the morphing of a 256³ volume in seconds, instead of the minutes or hours required by the traditional morphing-rendering pipeline.
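A minimal sketch of landmark-based deformation with Hardy's multiquadric scattered-data interpolation, the morphing model named above: landmark displacements are interpolated by radial basis functions phi(r) = sqrt(r^2 + R^2). The smoothing constant R and the example landmarks are assumptions.

```python
import numpy as np

def hardy_weights(landmarks, displacements, R=1.0):
    """Solve for multiquadric weights so the interpolant reproduces the landmark shifts."""
    d = np.linalg.norm(landmarks[:, None, :] - landmarks[None, :, :], axis=-1)
    A = np.sqrt(d ** 2 + R ** 2)
    return np.linalg.solve(A, displacements)        # one weight vector per axis

def deform(points, landmarks, weights, R=1.0):
    """Apply the interpolated displacement field to arbitrary points."""
    d = np.linalg.norm(points[:, None, :] - landmarks[None, :, :], axis=-1)
    return points + np.sqrt(d ** 2 + R ** 2) @ weights

src = np.array([[10., 10., 10.], [30., 10., 20.], [20., 30., 15.], [15., 20., 25.]])
dst = src + np.array([[2., 0., 0.], [0., 3., 0.], [-1., 1., 0.], [0., 0., 2.]])
w = hardy_weights(src, dst - src)
print(deform(np.array([[20., 20., 20.]]), src, w))
```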
Advanced image analysis and graphics software has been developed to reconstruct and visualize previously imaged prostate specimens in order to define tumor volume, tumor distribution, and the pathways of needle biopsies, thus allowing improved understanding of prostate cancer behavior and of current diagnosis and staging methodology. In order to reconstruct an accurate surface model of the surgical prostate, contour interpolation and surface reconstruction are performed on extracted contours of the objects of interest. Contour interpolation increases the sample rate in the stacking direction in order to reconstruct sufficiently accurate surfaces of the prostate and its internal anatomical structures. An elastic contour model is developed that computes a force field between adjacent slices to deform the start contour gradually until it conforms to the target contour. A new finite-element deformable surface-spine model is then developed to reconstruct the computerized prostate model from the interpolated contours. A deformable spine of the prostate model is determined from its contours, and all the surface patches are contracted to the spine through expansion/compression forces radiating from the spine, while the spine itself is also confined to the surface. The surface refinement is governed by a second-order partial differential equation from Lagrangian mechanics, and the refining process is complete when the energy of this dynamic deformable surface-spine model reaches its minimum. Interactive visualization is achieved by using the state-of-the-art 3D graphics toolkit OpenInventor with a graphical user interface to visualize the reconstructed 3D prostate model, including all internal anatomical structures and their relationships. Finally, an image-guided prostate needle biopsy simulation is implemented to validate current biopsy strategies for tumor detection and tumor volume estimation and to improve prostate needle biopsy techniques.
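As a much-simplified, hedged stand-in for the elastic contour model described above, the sketch below interpolates an intermediate contour by linearly blending arc-length-resampled points of two neighbouring contours; the actual method instead deforms the start contour under a force field, and this simple blend ignores the correspondence problem.

```python
import numpy as np

def resample_contour(contour, n=200):
    """Resample a closed 2D contour (k x 2 array) to n points at equal arc length."""
    closed = np.vstack([contour, contour[:1]])
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, s[-1], n, endpoint=False)
    x = np.interp(t, s, closed[:, 0])
    y = np.interp(t, s, closed[:, 1])
    return np.column_stack([x, y])

def interpolate_contour(c0, c1, alpha, n=200):
    """Contour between two slices: alpha=0 gives c0, alpha=1 gives c1."""
    r0, r1 = resample_contour(c0, n), resample_contour(c1, n)
    return (1.0 - alpha) * r0 + alpha * r1

theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
c0 = np.column_stack([10 * np.cos(theta), 10 * np.sin(theta)])
c1 = np.column_stack([14 * np.cos(theta) + 2, 12 * np.sin(theta)])
mid = interpolate_contour(c0, c1, 0.5)
```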
This paper describes a new method for 3D orthodontic treatment simulation developed for an orthodontic planning system (MAGALLANES). We have developed an original system for 3D capture and reconstruction of dental anatomy that avoids the use of dental casts in orthodontic treatment. Two original techniques are presented: a direct one, in which data are acquired directly from the patient's mouth by means of low-cost 3D digitizers, and a mixed one, in which data are obtained by 3D digitizing of hydrocolloid molds. For this purpose we have designed and manufactured an optimized optical measuring system based on laser structured light. We apply these 3D dental models to simulate the 3D movement of teeth, including rotations, during orthodontic treatment. The proposed algorithms make it possible to quantify the effect of an orthodontic appliance on tooth movement. The developed techniques have been integrated into a system named MAGALLANES, which provides several tools for 3D simulation and planning of orthodontic treatments. The prototype system has been tested in several orthodontic clinics with very good results.
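A minimal sketch of the kind of tooth-movement simulation described above: a rigid rotation about the tooth's own centroid followed by a translation, applied to digitized tooth vertices. The angle, axis and translation are illustrative values, not the paper's.

```python
import numpy as np

def move_tooth(vertices, angle_deg, axis, translation):
    """Rotate a tooth mesh about its centroid, then translate it."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    a = np.radians(angle_deg)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    Rm = np.eye(3) + np.sin(a) * K + (1 - np.cos(a)) * (K @ K)   # Rodrigues' formula
    centroid = vertices.mean(axis=0)
    return (vertices - centroid) @ Rm.T + centroid + translation

tooth = np.random.rand(100, 3) * 10.0     # synthetic stand-in for digitized vertices
moved = move_tooth(tooth, angle_deg=5.0, axis=[0, 0, 1], translation=[0.5, 0.0, 0.0])
```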
In this paper, a statistically significant master model of localized prostate cancer is developed from pathologically proven surgical specimens to spatially guide specific points in the biopsy technique for a higher rate of prostate cancer detection and the best possible representation of tumor grade and extension. Based on 200 surgical specimens of the prostate, we have developed a surface reconstruction technique to interactively visualize the clinically significant objects of interest, such as the prostate capsule, urethra, seminal vesicles, ejaculatory ducts and the different carcinomas, for each of these cases. In order to investigate the complex disease pattern, including tumor distribution, volume, and multicentricity, we created a statistically significant master model of localized prostate cancer by fusing these reconstructed computer models together, followed by a quantitative formulation of the 3D finite mixture distribution. Based on the reconstructed prostate capsule and internal structures, we have developed a technique to align all surgical specimens through elastic matching. By labeling the voxels of localized prostate cancer with '1' and the voxels of other internal structures with '0', we generate for each specimen a 3D binary image of the prostate that is simply a mutually exclusive random sampling of the underlying distribution of cancer, from which a histogram of localized prostate cancer characteristics is obtained. In order to quantify the key parameters such as distribution, multicentricity, and volume, we used a finite generalized Gaussian mixture to model the histogram, and estimated the parameter values through information-theoretic criteria and a probabilistic self-organizing mixture. Utilizing minimally immersive and stereoscopic interactive visualization, an augmented-reality environment can be developed that allows the physician to virtually hold the master model in one hand and use the dominant hand to probe data values and perform a simulated needle biopsy. An adaptive self-organizing vector quantization method is developed to determine the optimal locations of selective biopsies where maximum likelihood of cancer detection and the best possible representation of tumor grade and extension can theoretically be achieved, thus allowing a comprehensive analysis of pathological information. The preliminary results show that a statistical pattern of localized prostate cancer exists, and that a better understanding of disease patterns associated with tumor volume, distribution, and multicentricity of prostate carcinoma can be obtained from the computerized master model.
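As a hedged sketch of how the master model's spatial statistics could be accumulated (the elastic alignment and the mixture fitting themselves are omitted), each aligned specimen contributes a 3D binary image, and voxel-wise averaging over the population gives an empirical cancer-occurrence map whose histogram can then be modelled:

```python
import numpy as np

def occurrence_map(binary_volumes):
    """binary_volumes: equally shaped, already-aligned 0/1 arrays (1 = cancer voxel)."""
    stack = np.stack([np.asarray(v, dtype=np.float64) for v in binary_volumes])
    return stack.mean(axis=0)          # per-voxel relative frequency of cancer

# Synthetic stand-in for aligned specimens (10 used here to keep the example small).
specimens = [np.random.randint(0, 2, (32, 32, 32)) for _ in range(10)]
p_map = occurrence_map(specimens)
hist, _ = np.histogram(p_map, bins=20, range=(0.0, 1.0))   # input to the mixture model
```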
Magnetic Resonance Imaging is the accepted method of choice for the diagnosis of central nervous system disorders. Present neurosurgical planning depends on two-dimensional information obtained from MRI and CT cross-sections. After sequential reading of a series of two-dimensional images, the information has to be mentally transformed by the neurosurgeon into a virtual three-dimensional image of the complex three-dimensional anatomy. These mental transformations are difficult or sometimes even impossible. Nowadays the challenge to the neurosurgeon has changed from the primary successful removal of a tumor to a minimally invasive approach without destruction of normal brain tissue. These minimally invasive strategies require meticulous preoperative planning and often the support of intraoperative navigation. For the approach to subcortical lesions, the precise pre-operative definition not only of the target but also of the cortical entrance point is crucial. Therefore exact knowledge of the gyral and sulcal anatomy of the cerebral cortex in relation to the cortical veins is essential. We evaluated the impact of a 3D display of the brain, vasculature and tumor on surgical decisions during planning and execution of operations for intracranial tumors in or near the central region. The 3D reconstruction and display is based on 3D MR data sets and a semiautomatic segmentation technique. Tumors as well as the surrounding or overlying neuronal and neurovascular anatomy are displayed on the 3D computer screen.
This paper addresses the development of an atlas-based system for preoperative functional neurosurgery planning and training, intraoperative support, and postoperative analysis. The system is based on the Atlas of Stereotaxy of the Human Brain by Schaltenbrand and Wahren, which is used for interactive segmentation and labeling of clinical data in 2D/3D and for assisting stereotactic targeting. The atlas microseries are digitized, enhanced, segmented, labeled, aligned and organized into mutually preregistered atlas volumes; 3D models of the structures are also constructed. The atlas may be interactively registered with the actual patient's data. Several other features are also provided, including data reformatting, visualization, navigation, mensuration, and stereotactic path display and editing in 2D/3D. The system increases the accuracy of target definition and reduces both the planning time and the time of the procedure itself. It also constitutes a research platform for the construction of more advanced neurosurgery support tools and brain atlases.
Recently, fluoroscopy using computed tomography (CT) has gained significant attention. This is largely driven by the clinical application of the CT fluoroscope (CTF) in guided biopsy. Many studies have been conducted on the optimal presentation of the information. Little attention has been paid, however, to the temporal response of the CTF. The temporal response is important for understanding the inherent limitations of the CTF and for determining the best guided-biopsy procedures. For example, during the biopsy operation, when the needle is inserted at a relatively high speed, the true needle position will not be correctly reflected in the displayed fluoroscopy image until some time later. This could result in an overshoot or misplacement of the biopsy needle. In this paper, we perform a detailed analysis of the temporal response of the CTF. We first derive a set of equations to describe the average location of a moving object observed by the CTF system. The accuracy of the equations is verified by computer simulations and experiments. We then show that the CT reconstruction process acts as a low-pass filter on the motion function. For a general weighting function used in the tomographic reconstruction process, the impact on the observed needle motion depends on the biopsy needle location, the needle motion orientation and speed, and the weighting function itself. As a result, there is an inherent time delay between the true biopsy needle motion and location and what the CTF displays. This analysis can be used as a useful tool in the optimization of biopsy procedure parameters. It can also serve as guidance to the biopsy operator during training.
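A worked sketch of the low-pass effect described above, under an assumed triangular reconstruction weighting: the displayed needle position is approximately the true motion averaged over the reconstruction window with the weighting function, so a fast insertion appears to lag.

```python
import numpy as np

def observed_position(motion, weights):
    """motion: true needle positions over the frames inside the reconstruction window;
    weights: reconstruction weighting function over the same frames."""
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * motion) / np.sum(w))

t = np.linspace(0.0, 1.0, 50)                 # one reconstruction window
true_pos = 30.0 * t                           # needle advancing 30 mm over the window
w = np.minimum(t, t[::-1])                    # assumed triangular weighting
print(true_pos[-1], observed_position(true_pos, w))   # displayed position lags the tip
```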
Early detection of breast cancer, one of the leading causes of cancer death for women in the US, is key to any strategy designed to reduce breast cancer mortality. Breast self-examination (BSE) is considered the most cost-effective approach available for early breast cancer detection because it is simple and non-invasive, and a large fraction of breast cancers are actually found today by patients using this technique. In BSE, the patient should use a proper search strategy to cover the whole breast region in order to detect all possible tumors. At present there is no objective approach or clinical data to evaluate the effectiveness of a particular BSE strategy. Even if a particular strategy is determined to be the most effective, training women to use it is still difficult because there is no objective way for them to know whether they are doing it correctly. We have developed a system using vision-based motion tracking technology to gather quantitative data about the breast palpation process for analysis of the BSE technique. By tracking the position of the fingers, the system can provide the first objective quantitative data about the BSE process, and thus can improve our knowledge of the technique and help analyze its effectiveness. By visually displaying all the touched-position information to the patient as the BSE is being conducted, the system can provide interactive feedback to the patient and serves as a prototype for a computer-based BSE training system. We propose to use color features placed on the fingernails and to track these features, because in breast palpation the background is the breast itself, which is similar in color to the hand; this situation can hinder the effectiveness of other features if real-time performance is required. To simplify the feature extraction process, a color transform is utilized instead of raw RGB values. Although the clinical environment will be well illuminated, normalization of the color attributes is applied to compensate for minor changes in illumination. A neighborhood search is employed to ensure real-time performance, and a three-finger pattern topology is always checked for the extracted features to avoid false features. After detecting the features in the images, the 3D position parameters of the colored fingers are calculated using the stereo vision principle. In the experiments, a performance of 15 frames/second is obtained using an image size of 160 x 120 and an SGI Indy MIPS R4000 workstation. The system is robust and accurate, which confirms the performance and effectiveness of the proposed approach. The system can be used to quantify the search strategy of the palpation and to document it. With real-time visual feedback, it can be used to train both patients and new physicians to improve their performance of palpation and thus improve the rate of breast tumor detection.
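A hedged sketch of the colour normalization step: RGB values are converted to chromaticity coordinates (r, g) = (R, G) / (R + G + B), which largely removes the effect of moderate illumination changes before the nail markers are segmented. The threshold ranges are illustrative assumptions, not the authors' calibrated values.

```python
import numpy as np

def chromaticity(rgb):
    """Convert an RGB image to illumination-normalized (r, g) chromaticity."""
    rgb = rgb.astype(np.float64)
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0
    return rgb[..., :2] / s

def marker_mask(rgb, r_range=(0.45, 0.70), g_range=(0.15, 0.35)):
    """Segment pixels whose chromaticity falls in the assumed marker-colour ranges."""
    c = chromaticity(rgb)
    r, g = c[..., 0], c[..., 1]
    return (r > r_range[0]) & (r < r_range[1]) & (g > g_range[0]) & (g < g_range[1])

frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
mask = marker_mask(frame)
```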
The huge data archive of the UCSF Hospital Integrated Picture Archiving and Communication System gives healthcare providers access to diverse kinds of images and text for diagnosis and patient management. Given the mass of information accessible, however, the conventional graphical user interface (GUI) approach overwhelms the user with forms, menus, fields, lists, and other widgets and causes 'information overload.' This article describes a new approach that complements the conventional GUI with 3D anatomical atlases and demonstrates the usefulness of this approach with a clinical neuroimaging application.
PC-based computing is now ubiquitous in common consumer applications. Although PCs have equalled or surpassed engineering workstations in basic computing power and economy, there is still strong workstation dependency for imaging applications. In this paper, we demonstrate that a complete system, based on a Pentium PC and readily available inexpensive software, can be built very economically for effective execution of most of the commonly used 3D imaging operations. For the craniofacial application, the Pentium system offers a twofold speed advantage over a Sparc 20 system, allowing interactive fuzzy volume rendering of complex hard and soft tissue structures.
One of the main limitations of current 3D ultrasound (US) systems is that they cannot provide the real-time or interactive feedback that sonographers and clinicians are accustomed to with conventional 2D clinical US machines. We have developed a low-cost, high-performance interactive 3D US workstation suitable for use in research and clinical environments. This system employs a powerful programmable image processing board, the MediaStation 5000, to perform volume acquisition, reconstruction, and visualization. We have developed efficient reconstruction and visualization algorithms that allow our 3D US system to provide the same immediate feedback as current 2D US technologies, with the added advantage of presenting information in three dimensions. For acquired sequences of 512 x 512 US images, volumes can be reconstructed using a forward-mapped low-order interpolation scheme at 15 frames/s. A modified reconstruction algorithm that performs incremental reconstruction was developed for real-time volume visualization during acquisition. US volume visualization using shear-warp factorization and maximum intensity projection operates at 10 frames/s for 128 x 128 x 128 volumes.
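The sketch below illustrates forward-mapped, low-order (here nearest-neighbour) reconstruction in a minimal form: each pixel of a tracked 2D ultrasound frame is transformed into volume coordinates and written into the nearest voxel. The 4x4 pose matrix is assumed to come from the acquisition geometry; the MediaStation 5000 implementation is not reproduced here.

```python
import numpy as np

def insert_frame(volume, frame, pose, spacing_mm):
    """Forward-map one 2D frame into the volume by nearest-neighbour assignment."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.zeros(h * w), np.ones(h * w)])
    vox = (pose @ pts)[:3] / spacing_mm              # frame pixel -> voxel coordinates
    idx = np.round(vox).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(volume.shape)[:, None]), axis=0)
    volume[idx[0, ok], idx[1, ok], idx[2, ok]] = frame.ravel()[ok]

vol = np.zeros((128, 128, 128), dtype=np.uint8)
pose = np.eye(4); pose[:3, 3] = [10.0, 10.0, 64.0]   # assumed frame position
insert_frame(vol, np.random.randint(0, 256, (100, 100), dtype=np.uint8), pose, 1.0)
```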
There is considerable interest in using a fluoroscope for accurate three-dimensional measurements for diagnosis and therapy delivery. In this manuscript, we describe the processing needed to accurately find three-dimensional points in the field of view of a fluoroscopic imager, together with experimental results showing that the system is capable of submillimetric accuracy. The image produced by a fluoroscope is spatially distorted: a radial distortion results from the curved geometry of the x-ray detector, and a rotation and translation are caused by the interaction of the electrons in the image intensifier tube with the local magnetic fields. Tests indicate that these distortions are significant (on the order of 1 cm) and affect the accuracy with which one can measure objects and distances in the images. By attaching a dense grid of radiopaque beads to the surface of the intensifier, it is possible to measure the amount of distortion present in the final image by comparing the bead positions in the image with the physical positions of the beads on the grid. Furthermore, warping parameters can be derived from the bead locations and used to correct the distortion. By tessellating the image into triangular regions and applying a bilinear warping technique, we have been able to dramatically reduce the distortion present in the image. Results indicate that after warping, the bead spacing is correct with a median error of 0.03 mm and a maximum error of 0.65 mm; the standard deviation of the distance error is 0.25 mm. Using pairs of images from the fluoroscopes, triangulation techniques are used to find target points in three dimensions assuming a nine-parameter pinhole camera model. The parameters are the source-to-intensifier distance (SID), the two image-center coordinates, the three translation components of the x-ray source, and the three rotation angles of the fluoroscope about a world coordinate system. The world coordinate system is set up using a calibration object that consists of radiopaque beads embedded in a Delrin cylinder. The beads are arranged to lie along a helical path; this shape is chosen to help avoid overlap of the beads in each projection image. By placing the calibration object within the field of view, the nine parameters of the model can be determined for each fluoroscope in the biplane system. In vitro experiments were performed using a Philips Biplane Poly Diagnost I fluoroscope in clinical use at Johns Hopkins Hospital. The system has an SID of approximately 1 m and a 36 cm intensifier diameter. Initial results indicate that points in space can be found with a high degree of accuracy (within 0.5 mm error) using the fluoroscope. In conclusion, it is possible to use a clinical biplanar fluoroscope to accurately find three-dimensional points in space, which is useful for making anatomical measurements and for targeting in therapy delivery.
Keywords: Fluoroscope, Computer Vision, Local Optimization, Distortion Correction, Accuracy, Three-dimensional, Measurement.
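As a hedged sketch of the triangulation step (after distortion correction and calibration, each fluoroscope's nine-parameter pinhole model reduces to a 3x4 projection matrix), a target seen in both views can be located in 3D by linear least squares. The projection matrices below are illustrative, not the calibrated clinical values.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from two pinhole projections."""
    A = np.vstack([uv1[0] * P1[2] - P1[0],
                   uv1[1] * P1[2] - P1[1],
                   uv2[0] * P2[2] - P2[0],
                   uv2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                     # assumed view 1
P2 = np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])   # assumed view 2
point = np.array([20.0, 30.0, 500.0, 1.0])
uv1 = (P1 @ point)[:2] / (P1 @ point)[2]
uv2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate(P1, P2, uv1, uv2))    # recovers approximately (20, 30, 500)
```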
Cortical mapping is the process by which functional areas are identified on the cortex during neurosurgery. Normally, this can be accomplished on awakened patients by stimulating cortical areas and observing the patient's reaction. In cases involving displaced motor areas or patients not readily responding to stimulation, stimulating electrodes are replaced by recording electrode grids overlying the sensorimotor cortex. The electrode grid records evoked potentials generated in response to stimuli applied to peripheral nerves. Once the sensorimotor cortex is identified, the electrode grid is removed and functional information from the mapping phase is effectively lost. In this paper, we present a system designed to permit the visualization of the electrode grid on a volume-rendered image of the brain, to interactively label each electrode as 'sensory' or 'motor', and to maintain spatial correlation between the rendered images and physical space following cortical mapping. This system fuses anatomic and electrophysiologic information so that once the electrode grid is removed and tumor resection begins, functional information is not lost.
The result of surgical planning and simulation performed on an individual anatomical, CT-based model is transferred onto the operative field by fusing a live video image of the situs with a video image of the planning model, recorded from the same relative camera location with respect to the anatomical structure. The surgeon can replicate the previously performed simulation by guiding instruments such as scalpels, bone saws and drills along the markings left in the model. The visual superposition of the real and the synthetic world offers the surgeon an intuitively usable, tight link between the simulation stage and the real surgery. Osteotomy lines, the location of repositioned bone segments, implant positions and other relevant information are easily transferred in a non-invasive, non-tactile manner, thus reducing the risk of infection by reducing the number of foreign objects, such as templates and calipers, brought into the situs.
One of the most important issues in neurosurgical lesion resection is margin definition. While some effort is still required to determine lesion boundaries exactly from tomographic images, the lesions are at least perceptible on the scans. What is not visible is the location of function. Functional imaging such as PET and fMRI holds some promise for cortical function localization; however, intraoperative cortical mapping can provide exact localization of function without ambiguity. Since tomographic images can provide lesion margin definition and cortical mapping can provide functional information, we have developed a system for combining the two in our Interactive, Image-Guided system. Cortical surface mapping requires a surface description. Brain contours are extracted from an MRI volume using a deformable model approach and rendered from multiple angular positions. As the surgeon moves a probe, its position is displayed on the view closest to the angular position of the probe. During functional mapping, positive responses to stimulation result in a color overlay 'dot' added to the cortical surface display. Different colored dots are used to distinguish between motor function and language function, and a third color is used to display overlapping functionality. This information is used to guide the resection around functionally eloquent areas of the cortex.
A new method is proposed for precise planning of autologous bone grafts in cranio- and maxillofacial surgery. In patients with defects of the facial skeleton, autologous bone transplants can be harvested from various donor sites in the body. The preselection of a donor site depends, among other things, on the morphological fit of the available bone mass and on the shape of the part that is to be transplanted. Thorough planning and simulation of the surgical intervention based on 3D CT studies leads to a geometrical description and volumetric characterization of the bone part to be resected and transplanted. Both an optimal fit and a minimal lesion of the donor site are guidelines in this process. We use surface similarity and voxel similarity measures in order to select the optimal donor region for an individually designed transplant.
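One plausible, hedged reading of the voxel-similarity criterion is normalized cross-correlation between the planned transplant volume and a same-sized candidate donor region; the measure the authors actually used is not specified in the abstract, so this is an assumption.

```python
import numpy as np

def voxel_similarity(template, candidate):
    """Normalized cross-correlation between two equally sized volumes (1 = identical shape)."""
    a = template.astype(np.float64) - template.mean()
    b = candidate.astype(np.float64) - candidate.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

transplant = np.random.rand(20, 20, 10)
donor_roi = transplant + 0.05 * np.random.rand(20, 20, 10)   # a well-matching candidate
print(voxel_similarity(transplant, donor_roi))
```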
Typical digital x-ray image detectors in current use, or under development, provide a dynamic range of digital values which significantly exceeds the actual range of useful diagnostic data for any given exposure condition. It therefore becomes the task of the digital detection system to recognize the useful data values and to apply appropriate post-processing to produce a diagnostically useful display image. In this paper we describe a technique which was developed to automatically define the range of useful image data values. Reference values are derived from a histogram, and its integral, of the detector's output. These reference values, along with exam-specific parameters, are used in exam-specific algorithms to define the range of digital values to be included in a diagnostic display image. The display image values are then grey-scale processed to produce an optimized soft- or hard-copy image data set. Using the described algorithms we have demonstrated the automated production of hard-copy images from a digital detector system which are comparable to properly exposed conventional radiographs. These algorithms automatically compensate for exposure technique and subject contrast characteristics. The algorithms were developed for the Sterling Direct Radiography digital x-ray detector system; however, they are applicable to any digital x-ray image detector system. The technique described is based on digital input values which represent log-exposure values. If linear values are provided, a log conversion must be made before the described technique is applied.
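A hedged sketch of deriving reference values from the detector histogram and its integral: the useful range is taken between two points of the cumulative histogram of log-exposure values. The percentile choices below stand in for the exam-specific parameters and algorithms, which are not reproduced here.

```python
import numpy as np

def display_range(log_exposure_image, low_pct=1.0, high_pct=99.0, bins=4096):
    """Return (low, high) log-exposure reference values bracketing the useful data."""
    hist, edges = np.histogram(log_exposure_image, bins=bins)
    cdf = np.cumsum(hist) / hist.sum()                # integral of the histogram
    lo = edges[np.searchsorted(cdf, low_pct / 100.0)]
    hi = edges[np.searchsorted(cdf, high_pct / 100.0)]
    return lo, hi

img = np.random.normal(2.5, 0.3, (2000, 2000))        # synthetic log-exposure values
print(display_range(img))
```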
We studied the impact of CRT spot size, phosphor luminance noise and image noise on the specification of high-resolution CRT displays that address the critical needs of general chest radiography. Using Argus CRT simulation software, the design of high-resolution CRTs for the display of adult chest radiographs was studied. The simulated images were printed on a laser printer and evaluated by a board-certified radiologist (RMS). The validity of the Argus simulation was assessed by modeling a 1k x 1k pixel CRT whose technical parameters were sufficiently well known. Comments from the observer are presented comparing the simulated 2k display and a size-matched replicate of the original screen/film image. Critical parameters are discussed, such as phosphor luminance efficiency and its impact on electron beam size, and phosphor luminance noise and its impact on radiographic image noise. We conclude that Argus CRT simulation software can successfully model the performance of CRTs intended to display medical images, permitting consideration of critical parameters without costly manufacturing trials. Based on the 2k CRT simulation results, we suggest that a low-luminance-noise phosphor such as type P45 be used to ensure that specifying a small spot size would yield the anticipated sharpness improvements.
No electronic devices are currently available that can display digital radiographs without loss of visual information compared to traditional transilluminated film. Light scattering within the glass faceplate of cathode-ray tube (CRT) devices causes excessive glare that reduces image contrast. This glare, along with ambient light reflection, has been recognized as a significant limitation for radiologic applications. Efforts to control the effect of glare and ambient light reflection in CRTs include the use of absorptive glass and thin-film coatings. In the near future, flat panel displays (FPDs) with thin emissive structures should provide very low-glare, high-performance devices. We have used an optical Monte Carlo simulation to evaluate the effect of glare on image quality for typical CRT and flat panel display devices. The trade-off between display brightness and image contrast is described. For CRT systems, achieving a good glare ratio requires a reduction of brightness to 30-40 percent of the maximum potential brightness. For FPD systems, similar glare performance can be achieved while maintaining 80 percent of the maximum potential brightness.
Soft-copy presentation of medical images is becoming part of the medical routine as more and more health care facilities are converted to digital filmless hospital and radiological information management. To provide optimal image quality, display systems must be incorporated when assessing the overall system image quality. We developed a method to accomplish this. The proper working of the method is demonstrated with the analysis of four different monochrome monitors. We determined display functions and veiling glare with a high-performance photometer. Structure mottle of the CRT screens, point spread functions and images of stochastic structures were acquired by a scientific CCD camera. The images were analyzed with respect to signal transfer characteristics and noise power spectra. We determined the influence of the monitors on the detective quantum efficiency of a simulated digital x-ray imaging system. The method follows a physical approach; nevertheless, the results of the analysis are in good agreement with the subjective impression of human observers.
ACR/NEMA has released the display function standard document for public comment. This standard, produced by ACR/NEMA Working Group XI, has been developed to solve the problem of standardizing the response of grey-scale display systems. This paper presents a methodology proposed in the display function standard for quantitatively calculating the conformance of a display device to the standard display function based on statistical measures, which are referred to as the linearization uniformity measures (LUM). There are two LUM measures: R², or global uniformity, and root-mean-square error, or local uniformity. The derivation of both measures is described. Two additional measures are also described that provide a better description of the achievable dynamic range of a display device than its luminance range alone: the theoretical number of just noticeable differences (JNDs) of the display, and the realized number of JNDs of the display. Currently available medical image display systems are analyzed using each of these measures to examine their shortcomings and to suggest what changes might be desirable in the design of future medical image display systems.
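As a hedged sketch of the statistical skeleton behind the two LUM measures (the exact formulation in the ACR/NEMA document may differ), R² of a linear fit between measured and target luminance steps can serve as the global-uniformity measure and the RMS of the residuals as the local-uniformity measure:

```python
import numpy as np

def lum_measures(measured_lum, standard_lum):
    """Return (R^2, RMSE) of a linear fit of measured against target luminance steps."""
    x = np.asarray(standard_lum, dtype=float)
    y = np.asarray(measured_lum, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    fit = slope * x + intercept
    ss_res = np.sum((y - fit) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot
    rmse = float(np.sqrt(np.mean((y - fit) ** 2)))
    return r_squared, rmse

standard = np.linspace(1.0, 300.0, 18)               # target luminances (cd/m^2)
measured = standard * 1.02 + np.random.normal(0, 2.0, 18)
print(lum_measures(measured, standard))
```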
Three wavelet algorithms and JPEG compression were applied to two adult PA chest radiographs for visual evaluation and study by three board-certified radiologists. The major factors that we found to affect image compression, when employed for the interpretation of projection radiographs, were compression signal space, luminance range, presentation method and viewing distance. Image type, luminance noise and CRT MTF are also major factors in compression, but this preliminary study did not explore their effects in detail.
In this paper we evaluate a layered coding technique based on subband coding for encoding medical images for real-time transmission over heterogeneous networks. The objective of this research is to support a medical conference in a heterogeneous networking scenario. The scalable coding scheme studied in this paper generates a single bit-stream from which a number of sub-streams of varying bit-rates can be extracted. This makes it possible to support a multicast transmission scenario, where different receivers are capable of receiving different bit-rate streams from the same source, in an efficient and scalable way. The multirate property also allows us to provide graceful degradation under loss when used over networks which support multiple priorities. This paper evaluates the quality of the video images encoded with the layered encoding technique at different bit-rates in terms of the peak signal-to-noise ratio for cine-angiogram video. It also describes experiments with the transmission of the video across an asynchronous transfer mode (ATM) local area network, using a two-layer encoded video stream and assigning different network service classes to the two layers. We study how the quality of the reconstructed signal changes with the ratio of the bit-rates of the high- and low-priority layers, for various levels of congestion in the ATM network.
Despite the superior performance of vector quantization (VQ) over scalar quantization in providing low bit rates at optimized distortion, the complexities involved in the encoding process, namely the search process and the generation of efficient codebooks, prohibit the use of VQ in many applications. We introduce a new VQ technique with optimized search processes and distortion measures followed by entropy coding, thereby yielding exceptionally low bit rates yet high visual quality for a large class of images, including a variety of medical images, with potential application in cost-effective telecommunications. A new approach to vector quantization by integrated self-organizing neural networks with fuzzy distortion measures has been applied to generate multiresolution codebooks from wavelet-decomposed images. Two adaptive clustering techniques, namely integrated adaptive fuzzy clustering and adaptive fuzzy leader clustering with embedded fuzzy distortion measures, are used for partitioning similar vector groups to generate codebooks at each resolution level. The lower bound for the bits required to represent an image is given by its entropy. However, if the fundamental limit of compressing a signal can be related to perceptual entropy, then a bit rate lower than the entropy estimate can be achieved. The capability of this new approach to vector quantization, resulting in low bit rates at high visual quality, will have significant applications in telecommunications and telemedicine.
A decomposition method generalized from the Haar transform has been derived. This general form can exactly describe dyadic doublet-type transforms such as orthogonal wavelets. Another general form, based on the binomial filter, can describe dyadic triplet-type transforms such as biorthogonal wavelets. Both systems can be unified by the delta function basis decomposition system. In this paper, (a) the relationships between various types of dyadic transforms are shown; (b) methods of filter design that produce low entropy are suggested; and (c) adaptive decomposition using different transformation kernels is derived through the doublet and triplet systems. The property of low entropy in the decomposed data sequence is used as the major criterion for comparing the various methods. In addition to substantial derivations regarding the predictive approaches, detailed methods are given both for the theoretical development and for the implementation of dyadic decomposition methods.
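A minimal sketch relating the decomposition to the low-entropy criterion: a one-level 2D Haar transform (the simplest doublet-type transform discussed above) is applied to a smooth test image, and the first-order entropy of the integer-rounded result is compared with that of the original data.

```python
import numpy as np

def haar2d(x):
    """One-level 2D Haar decomposition (averages and differences along rows, then columns)."""
    a = (x[:, 0::2] + x[:, 1::2]) / 2.0
    d = (x[:, 0::2] - x[:, 1::2]) / 2.0
    rows = np.hstack([a, d])
    a2 = (rows[0::2, :] + rows[1::2, :]) / 2.0
    d2 = (rows[0::2, :] - rows[1::2, :]) / 2.0
    return np.vstack([a2, d2])

def entropy(x):
    """First-order entropy (bits/sample) over integer-rounded values."""
    _, counts = np.unique(np.round(x).astype(int), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

img = np.add.outer(np.arange(64), np.arange(64)).astype(float)   # smooth test image
print(entropy(img), entropy(haar2d(img)))   # the decomposed data has lower entropy
```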
In lossy medical image compression, the requirements for the preservation of diagnostic integrity cannot be easily formulated in terms of a perceptual model, especially since, in reality, human visual perception depends on numerous factors such as the viewing conditions and psycho-visual factors. Therefore, we investigate the possibility of developing alternative measures for data loss based on the characteristics of the acquisition system, in our case a digital cardiac imaging system. In general, due to the low exposure, cardiac x-ray images tend to be relatively noisy. The main noise contributions are quantum noise and electrical noise. The electrical noise is not correlated with the signal. In addition, the signal can be transformed such that the signal-correlated, Poisson-distributed quantum noise is transformed into an additional zero-mean Gaussian noise source which is uncorrelated with the signal. Furthermore, the system's modulation transfer function imposes a known spatial-frequency limitation on the output signal. Under the assumption that noise which is not correlated with the signal contains no diagnostic information, we have derived a compression measure based on the acquisition parameters of a digital cardiac imaging system. The measure is used for bit assignment and quantization of transform coefficients. We present a blockwise-DCT compression algorithm which is based on the conventional JPEG standard; however, the bit assignment to the transform coefficients is now determined by an assumed noise variance for each coefficient, for a given set of acquisition parameters. Experiments with the algorithm indicate that a bit rate of 0.6 bit/pixel is feasible without apparent loss of clinical information.
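A hedged sketch of two ingredients suggested by the abstract: a variance-stabilising square-root (Anscombe-type) transform that turns the signal-correlated Poisson quantum noise into approximately unit-variance additive Gaussian noise, and a per-coefficient quantisation step chosen proportional to the assumed noise standard deviation. The constant k and the sigma values are illustrative; in the paper they follow from the acquisition parameters.

```python
import numpy as np

def anscombe(counts):
    """Variance-stabilising transform for Poisson-distributed counts."""
    return 2.0 * np.sqrt(np.asarray(counts, dtype=float) + 3.0 / 8.0)

def noise_based_qtable(coeff_noise_sigma, k=2.0):
    """8x8 table of quantisation steps proportional to per-coefficient noise sigma."""
    return np.maximum(1, np.round(k * np.asarray(coeff_noise_sigma))).astype(int)

# Example: stabilised quantum noise (unit variance) plus electrical noise of
# standard deviation 1.5, assumed identical for all 64 coefficients.
sigma = np.full((8, 8), np.sqrt(1.0 + 1.5 ** 2))
print(noise_based_qtable(sigma))
```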
In this paper we assess lossy image compression of digitized chest x-rays using radiologists' assessment of anatomic structures and numerical measurements of image accuracy. Forty chest x-rays were digitized and compressed using an irreversible wavelet technique at 10:1, 20:1, 40:1 and 80:1. These were presented in a blinded fashion alongside an uncompressed image for subjective A-B comparison of 11 anatomic structures as well as overall quality. Mean error, RMS error, maximum pixel error, and the number of pixels within 1 percent of the original value were also computed for compression ratios from 10:1 to 80:1. We found that at low compression there was a slight preference for the compressed images. There was no significant difference at 20:1 and 40:1. There was a slight preference on some structures for the original compared with the 80:1 compressed images. The numerical measures demonstrated high image faithfulness, both in terms of the number of pixels that were within 1 percent of their original value and in terms of the average error over all pixels. Our findings suggest that lossy compression at 40:1 or more can be used without perceptible loss in the demonstration of anatomic structures.
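As a sketch of the numerical accuracy measures listed above (assumed straightforward definitions), the function below computes the mean error, RMS error, maximum pixel error, and the fraction of pixels within 1 percent of their original value between an original and a decompressed image.

```python
import numpy as np

def compression_accuracy(original, decompressed):
    o = original.astype(np.float64)
    d = decompressed.astype(np.float64)
    err = d - o
    mean_error = float(err.mean())
    rms_error = float(np.sqrt((err ** 2).mean()))
    max_error = float(np.abs(err).max())
    within_1pct = float(np.mean(np.abs(err) <= 0.01 * np.maximum(np.abs(o), 1)))
    return mean_error, rms_error, max_error, within_1pct

orig = np.random.randint(0, 4096, (2048, 2048)).astype(np.int16)   # synthetic 12-bit image
recon = orig + np.random.randint(-8, 9, orig.shape)                # stand-in for a lossy round trip
print(compression_accuracy(orig, recon))
```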
CT images represent a unique challenge for medical image compression: they have many pixels with very high and very low intensity values, often with sharp edges between the two, and the intensity values have quantitative significance, representing the attenuation coefficient in Hounsfield units (HU). Thus, the intensity ranges which represent bone or various soft tissues are essentially known in advance. When viewing a CT image, different window and level settings for mapping the 12-bit intensity values to an 8-bit display are used, depending on the objects of interest. When viewing objects with very high or low values, large window values are used, so that differences in intensity values on the order of 10 or 20 HU are not significant and are scarcely noticed in practice. Conversely, when viewing soft tissues, small windows are used to capture subtle but important distinctions, and an intensity difference of 10-20 HU can be highly significant. CT compression schemes, therefore, should have a mechanism to increase the representation accuracy of intensity values corresponding to soft tissue relative to those corresponding to bone and air. We describe a simple technique to force compression algorithms to assign more importance to specific intensity ranges by transforming the histogram of the image prior to compression, and show sample results. The technique significantly increases the ratio by which the images can be compressed while retaining acceptable image quality at both the large and small window settings in common clinical use.
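A hedged sketch of the histogram-transform idea: before compression, the HU scale is remapped piecewise-linearly so that the soft-tissue range keeps full precision while the bone and air extremes are compressed, with the inverse map applied after decompression. The breakpoints and slope are illustrative, not the authors' values.

```python
import numpy as np

def soft_tissue_remap(hu, lo=-200, hi=300, coarse=0.25):
    """Keep [lo, hi] HU (soft tissue) at full precision; compress values outside it."""
    hu = hu.astype(np.float64)
    return np.where(hu < lo, lo + (hu - lo) * coarse,
           np.where(hu > hi, hi + (hu - hi) * coarse, hu))

ct = np.random.randint(-1024, 3072, (512, 512))     # synthetic stand-in for a CT slice
remapped = soft_tissue_remap(ct)                     # fed to the compressor in place of raw HU
```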
This paper presents a new scheme for compression of ultrasound images based on a variant of the discrete cosine transform (DCT) called the 'shape-adaptive DCT' (SA-DCT). This method exploits the fact that the clinical information of echo-cardiographic images does not occupy the whole image. SA-DCT is a block-based scheme that has been designed specifically for encoding regions of arbitrary shape. Only the pixels belonging to the region of interest (ROI) are considered for coding. For each block, a mask specifying the ROI is also transmitted. Further coding efficiency can be gained by adapting the selection of the quantization table for each block. We report experimental results using eleven different quantization tables, including two that are used in MPEG. Results of this scheme applied to echo-cardiographic ultrasound images are compared with those obtained using the classical DCT and the JPEG compression method. We show that the proposed method produces better results.
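As a rough illustration of the shape-adaptive idea (a simplified sketch, not the authors' implementation), the code below transforms only the ROI pixels of an 8x8 block: each column's ROI segment is packed to the top and transformed with a 1D DCT of matching length, and the resulting rows are treated the same way. Quantization-table selection and entropy coding of the coefficients and mask are omitted.

```python
import numpy as np
from scipy.fft import dct

def sa_dct_block(block, mask):
    """Simplified shape-adaptive DCT for one square block: only pixels
    inside the boolean ROI mask are transformed, first column-wise and
    then row-wise, with 1D DCTs whose lengths match the ROI segments."""
    n = block.shape[0]
    tmp = np.zeros((n, n), dtype=np.float64)
    col_len = np.zeros(n, dtype=int)
    # column pass: pack ROI pixels to the top and transform each column
    for j in range(n):
        seg = block[mask[:, j], j].astype(np.float64)
        col_len[j] = seg.size
        if seg.size:
            tmp[:seg.size, j] = dct(seg, type=2, norm='ortho')
    out = np.zeros((n, n), dtype=np.float64)
    # row pass: pack the column coefficients to the left and transform rows
    for i in range(n):
        cols = np.where(col_len > i)[0]
        seg = tmp[i, cols]
        if seg.size:
            out[i, :seg.size] = dct(seg, type=2, norm='ortho')
    return out  # these coefficients plus the mask would be quantized and coded
```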
We implemented and compared the performance of three lossy compression techniques for the compression of x-ray radiographs. The quantization matrices for each technique were optimized based on a perceptual model developed by Watson. We examined the ability of the perceptual model to identify the just-noticeable-difference compression point and compared the performance of the perceptual model to a commonly used metric, signal-to-noise ratio. We also examined the compression performance of the perceptually optimized techniques at the just-noticeable-difference level of compression. As a benchmark, the compression performance was compared to the SPIHT wavelet compression algorithm. We found that the wavelet-based perceptual model provided the most consistent estimate of the just-noticeable-difference threshold, the block-DCT-based technique was next, and signal-to-noise ratio was significantly worse. The compression performance at the just-noticeable-difference point was similar, though block DCT provided slightly better compression than the wavelet technique, which in turn provided slightly better compression than the SPIHT wavelet compression algorithm.
We consider a variation on the way the JPEG standard lossy image compression technique is applied, in which different DCT block coding characteristics can be chosen for sets of blocks. This allows an appropriate level of fidelity to be achieved in the reconstructed image at particular locations, rather than merely establishing a fidelity bound. Texture information derived from the DCT coefficient values is used to distinguish block classes, for which tuned quantization matrices are provided. The effectiveness of the technique is assessed on a set of standard NIH x-ray images.
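A minimal sketch of how such a block classification might be driven by the DCT coefficients, assuming a simple AC-energy criterion; the class thresholds and number of classes are illustrative assumptions, and the per-class tuned quantization matrices themselves are not shown.

```python
import numpy as np
from scipy.fft import dctn

def classify_blocks(image):
    """Assign each 8x8 block a texture class based on the energy of its AC
    DCT coefficients; a tuned quantization matrix would then be selected
    per class. The log-energy thresholds below are illustrative only."""
    h, w = (d - d % 8 for d in image.shape)
    classes = np.zeros((h // 8, w // 8), dtype=int)
    for bi in range(h // 8):
        for bj in range(w // 8):
            blk = image[bi*8:(bi+1)*8, bj*8:(bj+1)*8].astype(np.float64)
            coef = dctn(blk, type=2, norm='ortho')
            ac_energy = np.sum(coef**2) - coef[0, 0]**2
            # assumed thresholds separating smooth / edge / textured blocks
            classes[bi, bj] = int(np.digitize(np.log1p(ac_energy), [6.0, 9.0]))
    return classes
```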
This is a brief report on research on the subject of DCTune optimization of JPEG compression of dental x-rays. DCTune is a technology for optimizing DCT quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. We optimized matrices for a total of 20 images at two resolutions (150 and 300 dpi) and four bit-rates (0.25, 0.5, 0.75, 1.0 bits/pixel), and examined structural regularities in the resulting matrices. We also conducted some brief psychophysical studies to validate the DCTune quality metric and to demonstrate the visual advantage of DCTune compression over standard JPEG.
Keywords: jpeg, compression, quantization, adaptive, image quality
High computational and throughput requirements in modern ultrasound machines have restricted their internal design to algorithm-specific hardware with limited programmability. Adding new ultrasound imaging applications or extending and improving a machine's internal algorithms can require costly hardware redesigns and replacements of boards or of the entire machine. In an effort to address these problems, we have designed a high-performance programmable ultrasound processing subsystem, the programmable ultrasound image processor (PUIP), to fit within an existing ultrasound machine and support native ultrasound image processing. To utilize the PUIP's computing power and programmability, we have developed several ultrasound image processing applications. Multiple TMS320C80 processors were used to provide the PUIP with a computing power of 4 billion operations per second. Flexibility was achieved by making our system programmable and multimodal; e.g., gray-scale, color flow, cine, and Doppler data can be processed. To achieve real-time or near-real-time performance, each new application developed on the PUIP was broken down into its component algorithms. Each component algorithm was carefully researched to maximize its use of the multiple processors within the PUIP and each TMS320C80's ability to perform multiple operations in a single cycle. The PUIP enables many real-time ultrasound imaging applications within an ultrasound machine. It provides a platform for rapid testing of new concepts in ultrasound processing and allows software upgrades for future technologies. The PUIP is a significant step in the evolution of ultrasound machines towards more flexible and generalized systems, bridging the gap between many innovative ideas and their clinical use in ultrasound machines.
We have developed a new algorithm for generating color Doppler ultrasound images, i.e., gray-scale images with color velocity overlays. By combining the blood flow velocity computation (VC) and tissue/flow decision (TFD) algorithms into one, our algorithm requires fewer computing cycles, since the time-consuming VC step is not needed if a pixel is determined to be tissue by the TFD. Our algorithm has been developed so that it can run efficiently on superscalar and very long instruction word processors that can perform multiple operations concurrently. Our algorithm is also designed to handle the tissue and flow characterization and velocity computations concurrently with the input and output (I/O) data loading. Thus, the processing unit's computing power can be dedicated to performing pixel-based operations while the I/O operations are handled by an independent direct memory access controller. Our integrated VC/TFD algorithm was implemented on a multimedia and imaging system based on the Texas Instruments TMS320C80 Multimedia Video Processor. An execution time of 16.6 ms was achieved when the input data consist of one 304 x 498 12-bit complex image from the autocorrelator of an ultrasound machine's color flow unit, one 304 x 498 12-bit power image, and one 304 x 498 12-bit gray-scale image.
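A minimal sketch of the decide-then-compute idea, assuming the flow decision is made from Doppler power and echo strength and that velocity is estimated from the phase of the lag-one autocorrelation (a Kasai-style estimator); the thresholds, scaling, and data layout are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def color_doppler_overlay(autocorr, power, gray,
                          power_thresh=20.0, tissue_thresh=100.0, v_scale=1.0):
    """Combined tissue/flow decision and velocity computation: the costly
    velocity estimate (arctangent of the complex lag-1 autocorrelation) is
    evaluated only for pixels the decision step classifies as flow."""
    velocity = np.full(power.shape, np.nan)
    # tissue/flow decision: weak Doppler power or strong echo -> tissue
    is_flow = (power > power_thresh) & (gray < tissue_thresh)
    # velocity computation only where needed
    r1 = autocorr[is_flow]                       # complex lag-1 autocorrelation
    velocity[is_flow] = v_scale * np.angle(r1)   # proportional to mean velocity
    return velocity    # NaN-masked velocities overlay the gray-scale image
```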
Ultrasound as a medical imaging modality offers the clinician a real-time view of the anatomy of internal organs and tissues, their movement, and flow, noninvasively. One of the applications of ultrasound is to monitor fetal growth by measuring biparietal diameter (BPD) and head circumference (HC). We have been working on automatic detection of fetal head boundaries in ultrasound images. These detected boundaries are used to measure BPD and HC. The boundary detection algorithm is based on active contour models and takes 32 seconds on an external high-end workstation, a Sun SparcStation 20/71. Our goal has been to make this tool available within an ultrasound machine and at the same time significantly improve its performance by utilizing multimedia technology. With the advent of high-performance programmable digital signal processors (DSPs), a software solution within an ultrasound machine, instead of the traditional hardwired approach or an external computer, is now possible. We have integrated our boundary detection algorithm into a programmable ultrasound image processor (PUIP) that fits into a commercial ultrasound machine. The PUIP provides both the high computing power and the flexibility needed to support computationally intensive image processing algorithms within an ultrasound machine. According to our data analysis, BPD/HC measurements made on the PUIP lie within the interobserver variability; hence, the errors in the automated BPD/HC measurements are of the same order as the average interobserver differences. On the PUIP, it takes 360 ms to measure BPD/HC on one head image. When processing multiple head images in sequence, it takes 185 ms per image, thus enabling 5.4 BPD/HC measurements per second. The reduction in overall execution time from 32 seconds to a fraction of a second, and the availability of this multimedia system within an ultrasound machine, will help this image processing algorithm and other compute-intensive imaging applications become a practical tool for sonographers in the future.
The narrow fields of view obtained from real-time ultrasound transducers, especially linear array transducers, allow focused evaluation of a specific site but often without any anatomic reference. To allow medical ultrasound imaging to be used in more diverse clinical settings, we have created a new acquisition and display process that allows extended field of view (XFOV) imaging. To produce an XFOV image, extended acoustic slices are obtained by maneuvering the transducer along the body surface or inside the body. As the images are acquired, they are correlated, aligned, and spliced together into a long composite view, all without the use of a position sensor. This computationally intensive process involves image registration, geometric image transformation, panoramic image construction, and image display. The XFOV process executes in real time on our programmable ultrasound processing subsystem, the programmable ultrasound image processor, which fits within an existing ultrasound system and supports native ultrasound signal and image processing.
This paper presents an efficient parallel architecture for a real-time multimedia platform using multiple multimedia video processors, which are fully programmable general-purpose digital signal processors. We have implemented an efficient parallel system, called the KAIST Image Computing System (KICS), as a multimedia platform and image processing system. The main architecture of the KICS is a message-passing model with hierarchically segmented buses. There are two parallel clusters in which two PEs (processing elements) are pipelined, and the master PE of each cluster can access common global memory at high speed. Applications of the KICS to an MPEG-2 encoder and to volume rendering are introduced. The implemented algorithms are functionally or spatially partitioned and assigned to each PE, taking into account load balancing and the required data traffic between PEs. A performance analysis for these applications and for general image processing functions is presented. The programmability and the high-speed data-access capability of the KICS are its most important features as a high-performance system for real-time multimedia data processing.
A default display protocol (DDP) has been designed for reading magnetic resonance (MR) images on PACS workstations. This MR DDP is very flexible, allowing any MR series from the current or previous studies to be simultaneously viewed and juxtaposed for direct comparison. There is a choice of image display format, selected by the radiologist from an icon bar, comprising four linked image series stacks, two linked image series tile clusters, and a linked tile mode. The selection, and the resultant monitor display arrangement, is made by dragging and dropping token images into the chosen display format icon. The selected series are reversibly locked together so they can be scrolled in synchrony.
The goal of this project was the development of a workstation user interface for evaluating computer-assisted diagnosis (CAD) methods for digital mammography in receiver operating characteristic (ROC) experiments. Digital mammography poses significant and unique difficulties in the design and implementation of such an interface because multiple large images need to be handled at high speed. Furthermore, controls such as contrast, pan and zoom, and tools such as reporting forms, case information, and analysis of results need to be included. The software and hardware used to develop the workstation and interface were based on Sun platforms and the Unix operating system. The software was evaluated by radiologists and found to be user friendly and comparable to standard mammography film reading in terms of display layout and speed. The software, as designed, will work on entry-level workstations as well as high-end workstations with specialized hardware, and is thus usable in educational, training, or clinical environments, both for annotation using CAD techniques and for primary diagnosis.
We implemented a high-resolution display system for viewing digitized mammograms at real-time speeds. This display system has been used at UCSF to develop a digital breast imaging teaching file. The mammography display station is built on a Sun workstation and Pixar processing hardware. It is capable of real-time 2K image display and manipulation, and serves as the basic platform for our digital mammographic teaching file. The teaching file is designed on a sophisticated computer-aided instruction (CAI) model, which simulates the work-up sequences used in image interpretation. Our CAI model not only provides answers to questions but also allows users to indicate detected abnormalities by pointing at the image. We also developed a software tool with an easy-to-use interface to manage patient images and related information and to manipulate the large quantity of digital mammograms. The display station was found to be adequate for fast display of high-resolution digital mammograms. Our CAI model integrates the vast image and textual data with visualization software into an interactive mammographic teaching file. This teaching file can be used as a practical tool for training radiology residents in mammography.
Biomedical researchers require effective tools to manage large numbers of 2D and 3D images. Image BOSS is designed to provide a database management system that is easy to use, flexible enough to support varied research disciplines, and powerful enough to handle large sets of images. Researchers organize and select images based on research topics, image metadata, and thumbnails of the images. Within this system, image information is captured from existing images in a Unix-based filesystem, stored in an object-oriented database, and presented to the user in a familiar laboratory notebook metaphor. Built upon a commercial object data manager, Image BOSS captures image metadata from the filesystem through a scavenging program. The image files remain intact in the filesystem, permitting ordinary access by any other image software. Through the novel use of lab notebook windows, the user is presented with a collection of image thumbnails representing the contents of the filesystem. Based upon an object-oriented version of AVW, a comprehensive library of image processing and analysis functions, a wealth of image processing and visualization algorithms are available in the system to build intelligent selection mechanisms. This image database system is undergoing preliminary evaluation in several projects at the Mayo Foundation. In addition, Image BOSS has been integrated with the ANALYZE software package, providing researchers with drag-and-drop features between their database and image processing/analysis functions.
The goal of this research is to implement an extensible, user-friendly, portable and affordable medical image analysis environment that frees the analyst from having to deal with low-level programming issues. With this environment, the user can rapidly prototype, test, validate and statistically analyze new medical image analysis algorithms. The environment tightly integrates a medical image database that supports image-content- and metadata-based querying; visualization and browsing of query results; and S-Plus, a powerful object-oriented, interpreted programming environment with a large suite of built-in visualization, data analysis, statistics and image processing tools. The system interfaces with any SQL-based relational database management system, and the database schema is based on the DICOM standard. Using MedPlus, the user can develop image analysis procedures that integrate database queries, interactive image analysis, statistical inference and database archival of results through a scripting language.
In telemedicine applications using visual communication systems, the reproduction of color is quite important. The purpose of this work is to develop a method for reproducing the natural color of an object in a TV system for telemedicine. When the illumination of the observation environment differs from the illumination of the object, a change in color perception is caused by the color adaptation of human vision. In the method presented in this paper, the difference in illumination conditions is corrected by using multispectral information captured by a multispectral camera. This paper describes the methods and basic results for two types of color reproduction: reproduction of the color image as if the object were directly observed, and reproduction of the color that appears when the object is placed under the observation illumination.
In this paper, we present the design architecture and the implementation status of WebPresent, a World Wide Web based tele-presentation tool. This tool allows a physician to use a conference server workstation and make a presentation of patient cases to a geographically distributed audience. The audience consists of other physicians collaborating on patients' health care management and physicians participating in continuing medical education. These physicians are at several locations, connected by networks of different bandwidths and capabilities. Audience members also receive the patient case information on different computers, ranging from high-end display workstations to laptops with low-resolution displays. WebPresent is a scalable networked multimedia tool which supports the presentation of hypertext, images, audio, video, and a whiteboard to remote physicians with hospital intranet access. WebPresent allows the audience to receive customized information. The data received can differ in resolution and bandwidth, depending on the availability of resources such as display resolution and network bandwidth.
Kenneth M. Kempner, David Chow, Peter L. Choyke, Jerome R. Cox Jr., Jeremy E. Elson, Calvin A. Johnson, Paul Okunieff, Harold Ostrow, John C. Pfeifer, et al.
The radiology consultation workstation is a multimedia medical imaging workstation being developed for use in an electronic radiology environment, utilizing a prototype asynchronous transfer mode telemedicine network, in support of radiotherapy treatment planning. A radiation oncologist in the Radiation Oncology Department and a radiologist in the Diagnostic Radiology Department will be able to consult, utilizing high-quality audio/video channels and high-resolution medical image displays, prior to the design of a treatment plan. Organ and lesion contouring is performed via a shared-cursor feature, in a consultation mode, allowing medical specialists to fully interact during the identification and delineation of lesions and other features.
The availability of high-speed computers and high data storage capacities has facilitated the development of practical virtual reality (VR) systems. VR uses computer modeling and simulation to enable human interaction with artificial three-dimensional visual or other sensory environments. In medicine, VR systems have the potential to significantly advance the practice of surgery. Surgery planning and rehearsal can be effectively carried out on VR systems, eliminating or significantly reducing the need for exploratory surgery.
Nowadays, neurosurgeons have access to 3D multimodal imaging when planning and performing surgical procedures. 3D multimodal registration algorithms are available to establish geometrical relationships between different modalities. For a given 3D point, most multimodal applications merely display a cursor on the corresponding point in the other modality. The surgeon needs tools allowing the visual fusion of these heterogeneous data in the same coordinate system, but also in the same visual space, in order to facilitate comprehension of the data. This problem is particularly crucial when using these images in the operating room. The goal of this paper is to analyze different methods to obtain this visual fusion between real images and virtual images. We discuss the relevance of different solutions depending on (1) the type of information shared between these different modalities and (2) the hardware location of this visual fusion. Two new approaches are presented to illustrate our purposes: a neuro-navigational microscope which provides an augmented reality feature through the microscope, and a new technique for matching 2D real images with 3D virtual data sets. We illustrate this second technique by mapping a 2D intra-operative photograph of the patient's anatomy onto 3D MRI images. Unlike other solutions, which display virtual images in the real world, our method involves ray-traced texture mapping in order to display real images in a computed world.
Modern 3D scanning techniques like magnetic resonance imaging (MRI) or computed tomography (CT) produce high-quality images of the human anatomy. Virtual environments open new ways to display and analyze those tomograms. Compared with today's inspection of 2D image sequences, physicians are empowered to recognize spatial coherencies and examine pathological regions more easily, so diagnosis and therapy planning can be accelerated. For that purpose a powerful human-machine interface is required, which offers a variety of tools and features to enable both exploration and manipulation of the 3D data. Man-machine communication has to be intuitive and efficacious to avoid long familiarization times and to enhance familiarity with and acceptance of the interface. Hence, interaction capabilities in virtual worlds should be comparable to those in the real world, to allow utilization of our natural experiences. In this paper the integration of hand gestures and visual focus, two important aspects of modern human-computer interaction, into a medical imaging environment is shown. With the presented human-machine interface, including virtual reality displaying and interaction techniques, radiologists can be supported in their work. Further, virtual environments can even facilitate communication between specialists from different fields or in educational and training applications.
Virtual colonoscopy (VC) is a minimally invasive alternative to conventional fiberoptic endoscopy for colorectal cancer screening. The VC technique involves bowel cleansing, gas distension of the colon, spiral computed tomography (CT) scanning of a patient's abdomen and pelvis, and visual analysis of multiplanar 2D and 3D images created from the spiral CT data. Despite the ability of interactive computer graphics to assist a physician in visualizing 3D models of the colon, a correct diagnosis hinges upon a physician's ability to properly identify small and sometimes subtle polyps or masses within hundreds of multiplanar and 3D images. Human visual analysis is time-consuming, tedious, and often prone to errors of interpretation. We have addressed the problem of visual analysis by creating a software system that automatically highlights potential lesions in the 2D and 3D images in order to expedite a physician's interpretation of the colon data.
Virtual endoscopy techniques have significant clinical promise for patient screening and may replace some real endoscopic examinations. Visualizations that mimic reality to an acceptable degree are a key factor for physician acceptance of virtual endoscopy. Clinicians must be able to interact with and quickly understand the visualizations. We are studying image generation paradigms, and parameters within each paradigm, to evaluate the importance of various factors in the clinical utility of virtual endoscopy. By involving clinicians at each step of the study, we seek to better understand the parameters that clinicians find most important and to evaluate the strengths and weaknesses of each image generation paradigm.
A number of image registration algorithms are currently used to register both intra-modality and inter-modality image data sets. These algorithms, although tested, have not always been optimized or evaluated in relation to other available registration algorithms. Analysis of registration algorithms is often difficult because the registration components are integrated with each other and cannot be easily isolated from the rest of the algorithm. This study outlines some important features of registration algorithms and describes a method by which these algorithms may be analyzed. This method was used to analyze (1) two cost functions, a normalized standard deviation function and a conditional entropy function, (2) two search strategies, Powell's method and a derivative method, and (3) a number of iterative relaxation/data reduction techniques. These registration components were tested with a number of intra-modal and inter-modal data sets. The cost function analyses suggest that segmentation is necessary for some inter-modal registration problems. Multi-resolution techniques are also discussed with regard to registration effectiveness. A Euclidean distance measure was used to assess registration accuracy; the results suggest that conditional entropy combined with a derivative search strategy may be the most robust approach to image registration, whereas the normalized standard deviation with Powell's search strategy produces the worst results. Generalizations and further work are indicated in the discussion.
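A minimal sketch of how a conditional-entropy cost function of the kind compared here can be estimated from the joint grey-level histogram of two overlapping images; the bin count is an assumption, and the surrounding optimization loop (Powell's method or a derivative search over the transform parameters) is not shown.

```python
import numpy as np

def conditional_entropy(img_a, img_b, bins=64):
    """Conditional entropy H(B|A) estimated from the joint grey-level
    histogram of the overlapping voxels; lower values indicate better
    alignment, so a registration loop would minimize this quantity."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = hist / hist.sum()                       # joint probabilities
    p_a = p_ab.sum(axis=1, keepdims=True)          # marginal over B
    with np.errstate(divide='ignore', invalid='ignore'):
        # zero-probability cells contribute nothing (nansum skips them)
        h_b_given_a = -np.nansum(p_ab * np.log2(p_ab / p_a))
    return h_b_given_a
```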
The maximum-intensity projection (MIP) method has been proposed to rapidly construct a 2D blood flow image of interest from the volume data of magnetic resonance angiography. However, the step of resampling along each ray is a major component of the computational cost of the MIP algorithm. We present an index-based interpolation method to reduce the cost of the resampling step while keeping the MIP image quality as good as that of first-order interpolation. The experimental results reveal that the index-based interpolation method leads to significant improvement over the conventional MIP algorithm in both computational speed and image quality.
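For reference, a baseline MIP with first-order (trilinear) resampling along parallel rays is sketched below; this per-sample interpolation is the expensive step the index-based method is meant to replace. The viewing direction, sample count, and ray parameterization are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def mip(volume, direction=(1.0, 0.2, 0.1), n_samples=256):
    """Conventional maximum-intensity projection along parallel rays with
    first-order resampling; one ray per (y, x) image position, stepped in
    depth. Rays are assumed to have a positive z-component."""
    vol = np.asarray(volume, dtype=float)
    nz, ny, nx = vol.shape
    d = np.asarray(direction, dtype=float)
    d /= d[0]                                     # parameterize rays by depth
    ys, xs = np.meshgrid(np.arange(ny, dtype=float),
                         np.arange(nx, dtype=float), indexing='ij')
    result = np.full((ny, nx), -np.inf)
    for t in np.linspace(0.0, nz - 1.0, n_samples):
        coords = np.stack([np.full((ny, nx), t),  # z along the ray
                           ys + t * d[1],         # y shifted along the ray
                           xs + t * d[2]])        # x shifted along the ray
        sample = map_coordinates(vol, coords, order=1, mode='nearest')
        result = np.maximum(result, sample)       # keep the running maximum
    return result
```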
There are several different approaches to reconstructing and displaying a 3D object from its serial cross sections. Unfortunately, most previous methods do not yield satisfactory results for objects of concave shape. Besides, the constructed surface will usually be coarse if the degree of dissimilarity among the contours is high. To improve the smoothness, some 3D smoothing operators may be applied to the reconstructed surface. An enhanced 3D interpolation technique using co-matching correspondence finding is introduced. It improves on previous surface reconstruction methods and involves the insertion of additional points for better matching. In order to obtain a more realistic 3D effect, triangular facets are generated between contours. The triangular facets are then painted with grey levels, with the intensity defined by a synthetic light source and the observation point. Finally, modified Phong shading is employed to provide a smooth and continuous surface.
This paper presents four methods for enhancing the visual appearance of surface-rendered anatomical objects while preserving the original geometry and topology of the underlying model. The methods consist of generating normal vectors for each vertex in a surface mesh by image gradient and cross-product operations. The normal vectors are then modified to alter lighting effects and make the surfaces appear smoother. This is accomplished using various filtering techniques that replace each original vertex normal by a weighted value. Normal vectors affect the visualization of the surface mesh because lighting models use the vertex normals when generating an image from a given viewpoint. Our methods alter the normal vectors in order to smooth the surfaces for easier diagnosis using medical images.
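A minimal sketch of one such filtering step, assuming a simple weighted average of each vertex normal with the mean of its neighbors; the weight, iteration count, and adjacency representation are illustrative assumptions, and the mesh vertices themselves are never moved.

```python
import numpy as np

def smooth_vertex_normals(normals, adjacency, weight=0.5, iterations=3):
    """Replace each vertex normal by a weighted blend of itself and its
    neighbors' mean normal, then renormalize to unit length; only the
    lighting changes, the underlying geometry is untouched.
    `adjacency` is a list of neighbor-index lists, one per vertex."""
    n = np.asarray(normals, dtype=float).copy()
    for _ in range(iterations):
        new = n.copy()
        for v, nbrs in enumerate(adjacency):
            if nbrs:
                mean_nbr = n[nbrs].mean(axis=0)
                new[v] = (1.0 - weight) * n[v] + weight * mean_nbr
        lengths = np.linalg.norm(new, axis=1, keepdims=True)
        n = new / np.clip(lengths, 1e-12, None)   # renormalize
    return n
```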
This work aims at the construction of an extendable brain atlas system which contains: (i) 3D models of cortical and subcortical structures along with their connections; (ii) visualization and exploration tools; and (iii) structure and connection editors. A 3D version of the Talairach-Tournoux brain atlas, along with 3D Brodmann's areas, was developed, co-registered, and placed in the Talairach stereotactic space. The initial built-in connections are thalamocortical ones. The structure and connection editors allow the user to add and modify cerebral structures and connections. Visualization and exploration tools are developed with four ways of exploring the brain connections model: composition, interrogation, navigation, and diagnostic queries. The atlas is designed as an open system which can be extended independently in other centers according to their needs and discoveries.
Combined emission/absorption and reflection/transmission volume rendering is able to display poorly segmented structures from 3D medical image sequences. Visual cues such as shading and color let the user distinguish structures in the 3D display that are incompletely extracted by threshold segmentation. In order to be truly helpful, analyzed information needs to be quantified and transferred back into the data. We extend our previously presented scheme for such display by establishing a communication between visual analysis and the display process. The main tool is a selective 3D picking device. To be useful with a rather rough segmentation, the device itself and the display offer facilities for object selection. Selective intersection planes let the user discard information prior to choosing a tissue of interest. Subsequently, picking is carried out on the 2D display by casting a ray into the volume. The picking device is made pre-selective using already existing segmentation information; thus, objects can be picked that are visible behind semi-transparent surfaces of other structures. Information generated by a later connected-component analysis can then be integrated into the data. Data examination is continued on an improved display, letting the user actively participate in the analysis process. Results of this display-and-interaction scheme proved to be very effective. The viewer's ability to extract relevant information from a complex scene is combined with the computer's ability to quantify this information. The approach introduces 3D computer graphics methods into user-guided image analysis, creating an analysis-synthesis cycle for interactive 3D segmentation.
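A minimal sketch of a ray-casting pick that is pre-selective with respect to existing segmentation labels; the sampling step, the label test, and the way background or discarded structures are skipped are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pick_voxel(volume, labels, origin, direction, step=0.5, min_label=1):
    """Cast a ray from the clicked screen point into the volume and return
    the first voxel whose segmentation label is at or above min_label;
    voxels with lower labels (background, or structures the user has
    discarded or rendered semi-transparent) are skipped, so objects
    behind them remain pickable."""
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    shape = np.asarray(volume.shape)
    while np.all(pos >= 0) and np.all(pos < shape - 1):
        idx = tuple(np.round(pos).astype(int))
        if labels[idx] >= min_label:       # pre-selection from segmentation
            return idx, volume[idx]
        pos = pos + step * d               # step further along the ray
    return None, None                      # ray left the volume without a hit
```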
The system is based on the premise that, of the two related tasks required for object definition, namely object recognition and delineation, human operators usually outperform computer algorithms in recognition, and vice versa for delineation. In our system, some global recognition help is therefore taken from operators, while delineation is done automatically using fuzzy topological algorithms. The lesion quantification methods for the various protocols differ somewhat but follow this general framework. All objects are extracted as 3D fuzzy connected objects. For combining inter- and intra-protocol longitudinal information, fuzzy object registration algorithms are developed and incorporated into the system. A variety of validation studies have been conducted for all protocols to test inter- and intra-operator variations, repeat scan variations, and accuracy in terms of false positive and false negative volume fractions. They all indicate a value of less than 1.5 percent for these factors. The operator time taken per 3D study varies between 1 and 20 minutes.
This paper describes a suite of high-performance medical visualization tools implemented using a networked computing configuration. The tools are designed to supply interactive and near-real-time visualization capability for volumetric data to assist in diagnosis, monitoring, and surgical planning for kidney disorders, especially the Von Hippel-Lindau syndrome. The networked configuration combines the computing power of a vector-parallel supercomputer with the interactive graphics capability of a high-end workstation. In this paper, our focus is on the image rendering and interactive data exploration functional units of the system. The computationally intensive feature extraction and image rendering functions, including marching cubes, volume ray casting, and surface ray tracing, have been vectorized and are discussed. We also present interactive exploration tools for viewing arbitrary orthogonal sets of planes and a probing tool for 3D measurements; these latter tools were implemented on the workstation.
Breast ultrasound is a valuable adjunct to mammography but is limited by a very small field of view, particularly with the high-resolution transducers necessary for breast diagnosis. We have been developing an ultrasound system based on a diffraction tomography method that provides slices through the breast on a large 20-cm-diameter circular field of view. Eight to fifteen images are typically produced in sequential coronal planes from the nipple to the chest wall with either 0.25 or 0.5 mm pixels. As a means to simplify the interpretation of this large set of images, we report experience with 3D life-sized displays of the entire breast of human volunteers using a digital holographic technique. The compound 3D holographic images are produced from the digital image matrix, recorded on a 14 x 17 inch transparency, and projected on a special white-light viewbox. Holographic visualization of the entire breast has proved to be the preferred method for 3D display of ultrasound computed tomography images. It provides a unique perspective on breast anatomy and may prove useful for biopsy guidance and surgical planning.
In interactive, image-guided surgery (IIGS) we use stacked-slice tomographic image sets as 3D maps of the patient's anatomy. Such sets can provide exquisite information on bony anatomy, soft tissue structure, and lesion definition or function. However, none of these tomographic sets clearly shows the location and extent of vascular structures, which may be of critical importance to the surgical process. Conventional x-ray angiography (XRA) provides a high degree of spatial resolution but compresses a 3D volume of tissue into a 2D image. Newer imaging modalities such as magnetic resonance angiography or computed tomography angiography can provide 3D information about vascular location, but in a format which makes identification and localization of vasculature extremely difficult. Maximum intensity projection (MIP) is a post-processing technique which produces projection images using the maximum intensity value encountered along each projection line, in contrast to the XRA process, which uses the integral of values along the line. While each of these images is a 2D representation of a 3D volume, the 3D ambiguities may be resolved by creating projections from multiple angles and using motion parallax. Conventional MIP projections were developed for diagnostic purposes and radiology users. Intraoperative use in surgery brings with it a different set of constraints: the surgeon cares about blood vessels in or near the surgical site, and projection images containing information from the contralateral side provide a superfluous distraction. We have developed a MIP procedure which is more appropriate for surgical applications.
With the increasing availability of medical image communication infrastructures, medical images are more and more displayed as soft copies rather than as hard copies. Often, however, the image viewing environment is characterized by high ambient light, such as in operating rooms or offices illuminated by daylight. We describe a very-high-brightness cathode-ray-tube (CRT) monitor which accommodates these viewing conditions without the typical deterioration in resolution due to electron focal spot blooming. The three guns of a standard color CRT are used to create a high-brightness monochrome monitor. The CRT has no shadow mask, and a homogeneous P45 phosphor layer has been deposited instead of the structured red-green-blue color phosphor screen. The electron spots of the three guns are dynamically matched by applying appropriate waveforms to four additional magnetic fields around the gun assembly. We evaluated the image quality of the triple-gun CRT monitor with respect to parameters which are especially relevant for medical imaging applications. We measured characteristic curves, dynamic range, veiling glare, resolution, spot profiles, and screen noise. The monitor can provide a high luminance of more than 200 fL. Due to the nearly perfect matching of the three spots, the resolution is determined mainly by the beam profile of a single gun and is remarkably high even at these high luminance values. The P45 phosphor shows very little structure noise, which is an advantage for medical desktop applications. Since all relevant monitor parameters are digitally controlled, the status of the monitor can be fully characterized at any time. This feature particularly facilitates the reproduction of brightness and contrast values and hence allows easy implementation of a display function standard, or a return to a display function that has been found useful for a given application in the past.
Investigations in the area of digital mammography have been limited by the resolution of the sensor devices employed. We have proposed a multiple-camera, or mosaic, architecture in which adjacent sensors observe overlapping fields of view. Such a technique can deliver extremely high resolution while maintaining a moderate cost for the resulting instrument. However, this technique's clinical efficacy will be limited by the ability to accurately and precisely reconstruct a single continuous image from multiple CCD sensors. We present an integrated algorithm which corrects distortions introduced by the cameras while addressing the problem of image reconstruction, or 're-stitching'. This technique minimizes pixel loss by limiting image re-sampling to a single instance. Custom-designed calibration screens were employed for the calculation of camera distortion and intra-camera disparity. A parallel digital signal processor architecture has been developed to accelerate system performance when employing a large number of camera inputs. We present a quantitative evaluation of our reconstruction technique and an analysis with respect to similar methods of image reconstruction. We have previously constructed and presented a prototype imager for digital radiography based upon a similar sensor architecture. The algorithm presented will significantly enhance the feasibility of our multiple-camera architecture for both digital radiography and mammography. We believe that such a methodology will enhance diagnostic accuracy at a moderate cost when compared with systems of similar imaging resolution.
This Poster Exhibit examines some of the criteria used to evaluate high resolution display systems. These systems have several applications including medical imaging, prepress, and image exploitation. Criteria examined are number of shades of gray, bandwidth, focus, smearing, and ghosting. These each constitute a way of characterizing the quality of the image, allowing us to go beyond an expression of "looks pretty nice." The display systems used are 2048x2560 grayscale monitors, driven by a graphics card that plugs into one of the PCI slots in a computer or workstation.
A new dry printing system based on a direct thermal technology with diagnostic-quality properties has been developed. This paper discusses the major improvements in printer and film required to achieve the image quality needed to meet diagnostic requirements. The concepts of both the printer and the film are explained. The image quality achieved in terms of contrast, resolution, noise, and other parameters is discussed. The results of archivability and shelf-life testing and the physical properties of the material are also presented. To validate the technology for diagnostic purposes, a hospital test was performed for ultrasound, CT, MRI, R and F, and vascular studies. The method and results of this testing are presented in the paper. The hospital tests showed that the images obtained with the dry system materials can be used for diagnostic purposes.
Electrical impedance tomography is a non-invasive imaging technique with widespread applications in medicine and industry. It aims to image the conductivity distribution within a test volume by making electrical measurements on the surface of the volume. Reconstruction of the conductivity distribution in electrical impedance tomography is an under-determined and ill-posed problem, typically requiring simplifying assumptions, regularization, or least-squares solutions with linear inequality constraints to improve the image quality. The ability to solve least-squares problems with linear inequality constraints allows us, in particular, to impose constraints on the solution such as non-negative conductivity values. A new method for multi-frequency electrical impedance tomography has been developed with which the full information contained in the complex tissue conductivity can be obtained. Results using the constrained least-squares solution are presented.
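A minimal sketch of a regularized, non-negativity-constrained least-squares update of the kind described, assuming a linearized sensitivity matrix J relating conductivity changes to boundary voltage data dv; the Tikhonov weight and the use of SciPy's bounded linear least-squares solver are illustrative assumptions, not the authors' formulation.

```python
import numpy as np
from scipy.optimize import lsq_linear

def reconstruct_conductivity(J, dv, alpha=1e-2):
    """Solve min ||J s - dv||^2 + alpha ||s||^2 subject to s >= 0,
    where J is the (linearized) sensitivity matrix, dv the boundary
    voltage data, and the non-negativity bound enforces physically
    meaningful conductivity values."""
    n = J.shape[1]
    A = np.vstack([J, np.sqrt(alpha) * np.eye(n)])   # Tikhonov augmentation
    b = np.concatenate([dv, np.zeros(n)])
    res = lsq_linear(A, b, bounds=(0.0, np.inf))     # linear inequality constraint
    return res.x
```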
Many techniques for image compression exist and are well described in the literature. For digital coronary angiograms, lossless image compression is limited to compression ratios on the order of 3-4. The purpose of this work is to assess the diagnostic image quality of lossy-compressed coronary angiograms by means of quantitative coronary angiography (QCA). In the compressed images we measure the vessel diameter at several places as a function of the compression ratio and compare the results with those from the original image. The set of representative images is compressed at ratios of 4, 8, 12, and 16. The selected compression algorithms are JPEG, the lapped orthogonal transform (LOT), and the modified fast lapped transform (MFLT). The quantitative diameter values start to deviate at image representations below 0.5 bit per pixel, with JPEG giving the greatest differences. LOT and MFLT perform better with respect to the measured size of diagnostically relevant vessels. At the larger compression ratios, some blocking artifacts or ringing start to become visible. Somewhat to our surprise, our comparison study found no great deviations in measured vessel diameter for the compression ratios 4, 8, 12, and 16; at compression ratio 16, JPEG has the largest deviation. According to the changes in the quantitative data, higher compression ratios are certainly feasible.
A tree-structured vector quantization system employing conditional arithmetic coding is introduced to encode the wavelet coefficients of medical images. The proposed scheme efficiently reduces the bit rate by exploiting inter- and intra-band correlation and effectively approximates an embedded scheme by utilizing the sequential bit allocation results of the nested quantizers. The proposed scheme provides good bitrate-PSNR performance and subjective reconstruction quality with lower encoding complexity than wavelet full-search vector quantization systems.
This study assesses the feasibility and application of an algorithm combining noise reduction and lossy compression for electronic portal images in radiation therapy. The basis of the algorithm is universal hard-thresholding in the wavelet domain. The approach for the assessment is two-fold: (1) an observer study using an image of a contrast-resolution phantom at different levels of compression; (2) a quantitative study using a control phantom to determine contrast-to-noise ratios (CNRs) and relative modulation transfer functions. All results are analyzed as a function of a quality factor which is linked to the universal threshold level and expresses the lossiness of the compression. Results show that there exists an optimal quality factor for which the noise reduction is maximal. The contrast-resolution study indicates that the noise reduction is relevant for the observer. Furthermore, the optimal level can be determined from the behavior of the CNR as a function of the quality factor in regions with essentially no contrast.
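As a rough illustration of the wavelet-domain universal hard-thresholding that underlies the algorithm (the paper's exact quality-factor mapping is not reproduced here), the sketch below uses PyWavelets with Donoho's universal threshold and a median-based noise estimate; the wavelet choice and decomposition depth are assumptions.

```python
import numpy as np
import pywt

def universal_hard_threshold(img, wavelet='db4', level=3):
    """Denoise/compress an image by hard-thresholding its wavelet detail
    coefficients at the universal threshold sigma * sqrt(2 * ln(N))."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    # Robust noise estimate from the finest diagonal subband.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(img.size))
    kept = [coeffs[0]] + [
        tuple(pywt.threshold(band, thr, mode='hard') for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(kept, wavelet)
```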
A numerical observer (JPEGNO) is proposed to evaluate the influence of JPEG compression on the diagnostic quality of CT images. JPEGNO is based on the grey-scale histogram of the image and is defined as the inverse of the sum of the differences between successive grey levels in the histogram.
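Reading the definition literally, JPEGNO can be computed in a few lines. The sketch below assumes an 8-bit image and absolute differences between successive histogram bins, which is one plausible interpretation of "the difference between successive grey levels", not necessarily the authors' exact formulation.

```python
import numpy as np

def jpegno(img, levels=256):
    """Numerical observer: inverse of the summed absolute differences between
    successive grey-level counts in the image histogram (8-bit image assumed)."""
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    total = np.sum(np.abs(np.diff(hist.astype(np.int64))))
    return 1.0 / total if total > 0 else np.inf
```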
The present investigation reports the results of entropy coding for lossless compression of the Visible Human (VH) color data set for archival purposes, where any loss of information is undesirable. One of the main objectives of this investigation was to determine the role of a unique feature, i.e., the high correlation between adjacent VH slices, in designing a lossless compression algorithm. This study demonstrates that lossless JPEG provides better compression of individual slices than of the difference images, despite the low entropy content of the difference images. This may be attributed to the abrupt variation in gray levels in the difference images, in contrast to the smoothly varying individual images, which are better suited to the lossless JPEG format. Huffman coding of the difference image frames provides a general idea of the compressibility of 3D predictive coding. However, a combination of binary arithmetic and predictive coding that takes advantage of the similarity between adjacent frames may yield the most efficient lossless compression scheme for the VH slices.
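The entropy comparison at the heart of this argument can be reproduced with a short first-order entropy estimate. The sketch below is generic and assumes two adjacent 8-bit VH slices loaded as NumPy arrays; it is not the authors' analysis code.

```python
import numpy as np

def entropy_bits(a, bins=512):
    """First-order entropy (bits/pixel) estimated from the sample histogram."""
    hist, _ = np.histogram(a.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# slice_a, slice_b: adjacent 8-bit slices (integer arrays of equal shape)
# diff = slice_b.astype(np.int16) - slice_a.astype(np.int16)
# A lower entropy_bits(diff) than entropy_bits(slice_b) suggests 3D predictive
# coding can pay off, even though lossless JPEG may still prefer the smooth
# original slices over the abruptly varying difference image.
```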
This paper presents a lossless image compression method that provides progressive transmission and scalability. The proposed method gives the codec an asymmetric structure by employing a fast adaptive subband decomposition and optimizing the quantization parameter during arithmetic coding. This method provides a higher compression ratio than JPEG lossless mode.
In this paper, we present a wavelet-based medical image compression scheme designed so that images displayed on different devices are perceptually lossless. Since human visual sensitivity varies across subbands, we apply perceptually lossless criteria to quantize the wavelet transform coefficients of each subband such that visual distortions are reduced to an unnoticeable level. We then use a high-compression-ratio hierarchical tree to code these coefficients. Experimental results indicate that our perceptually lossless coder achieves a compression ratio 2-5 times higher than typical lossless compression schemes while producing perceptually identical image content on the target display device.
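A minimal sketch of per-subband quantization against visibility thresholds is shown below; the step sizes in jnd_step are purely hypothetical placeholders for the display-dependent perceptual thresholds the paper derives, and PyWavelets is used only as a convenient transform.

```python
import numpy as np
import pywt

def perceptual_quantize(img, jnd_step, wavelet='bior4.4', level=3):
    """Quantize each detail subband with its own step size so that the
    quantization error stays below an (assumed) visibility threshold."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    out = [coeffs[0]]
    # lvl = 1 is the coarsest detail level, lvl = level the finest.
    for lvl, detail in enumerate(coeffs[1:], start=1):
        q = jnd_step[lvl]                      # hypothetical per-level step
        out.append(tuple(np.round(band / q) * q for band in detail))
    return pywt.waverec2(out, wavelet)

# Example (made-up) steps; finer, higher-frequency subbands get larger steps:
# recon = perceptual_quantize(img, jnd_step={1: 4.0, 2: 6.0, 3: 8.0})
```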
Alternative strategies used for wavelet-based lossy image compression can affect lesion detection differently at higher compression ratios. These effects were studied using three variants of a wavelet-based image compression algorithm: (1) unified quantization, (2) truncation of all coefficients in all subbands, and (3) truncation of coefficients subband by subband. The nonprewhitening-matched-filter-derived da, a detectability index, was used to quantify the changes in detection performance as a function of compression ratio for each strategy. Based on this approach, the optimal compression strategy was determined. Two classes of images were generated to simulate signal-present and signal-absent cases for a liver imaged by CT. For each strategy, the performance in discriminating between the signal-present and signal-absent classes was quantified by da for varying compression ratios. Among the three strategies studied, truncation of all coefficients is the least desirable strategy for preserving small, low-contrast signals; truncation of coefficients subband by subband yields the best result for subtle signals, but distorts high-frequency edges between tissues; unified quantization is the best strategy if both low-contrast objects and high-frequency edges are to be preserved.
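For readers unfamiliar with this figure of merit, the sketch below computes a nonprewhitening-matched-filter detectability index da from sets of signal-present and signal-absent images. The two-class test-statistic formulation is standard, but the template normalization and data layout here are assumptions rather than the authors' exact procedure.

```python
import numpy as np

def npw_da(signal_present, signal_absent, template):
    """Nonprewhitening matched-filter detectability index.
    signal_present / signal_absent: arrays of shape (n_images, H, W);
    template: expected signal (e.g. difference of the two class mean images)."""
    t = template.ravel()
    lam1 = signal_present.reshape(len(signal_present), -1) @ t
    lam0 = signal_absent.reshape(len(signal_absent), -1) @ t
    return (lam1.mean() - lam0.mean()) / np.sqrt(0.5 * (lam1.var() + lam0.var()))
```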
This paper presents an efficient algorithm mapping for real-time MPEG-2 encoding on the KAIST image computing system (KICS), which has a parallel architecture using five multimedia video processors (MVPs). The MVP is a general-purpose digital signal processor (DSP) from Texas Instruments that combines one floating-point processor and four fixed-point DSPs on a single chip. The KICS uses the MVP as its primary processing element (PE). Two PEs form a cluster, and there are two processing clusters in the KICS. The real-time MPEG-2 encoder is implemented through spatial and functional partitioning strategies. The encoding of each spatially partitioned half of the video input frame is assigned to one processing cluster. The two PEs of a cluster perform the functionally partitioned MPEG-2 encoding tasks in pipelined operation: one PE carries out the transform coding part and the other performs the predictive coding part of the MPEG-2 encoding algorithm. The fifth MVP is used for system control and for the interface with the host computer. This paper introduces an implementation of the MPEG-2 algorithm with a parallel processing architecture.
In this paper, we present a novel method for selective compression of medical images. In our scheme, regions of interest (ROIs) are differentiated from the background by detecting relevant features such as edges, texture and clusters. A detection map is created for the ROIs so that the coding system can use it to assign proper bit rates during quantization. The localized, multiresolution representation of the wavelet coefficients makes it easy to allocate different bit rates to complex, adjacent ROIs in the process of successive-approximation quantization of the wavelet coefficients. For maximum flexibility and efficiency in selective bit-rate allocation, we use an intraband coding scheme, the compact quadtree, instead of the conventional interband approaches. By adjusting the amplitudes of the wavelet coefficients in the ROIs, one can smoothly preserve multiple ROIs with virtually no coding overhead. Experimental results on digital mammographic images show that minute microcalcifications can be perfectly preserved while their background is compressed at very high ratios, i.e., over 100 to 1.
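One way to realize the coefficient-amplitude adjustment described above is sketched below: wavelet coefficients whose spatial support falls inside a binary ROI mask are boosted before successive-approximation quantization, so they are coded earlier and more precisely. The gain value, wavelet choice, and mask handling are assumptions, not the authors' exact scheme.

```python
import numpy as np
import pywt

def boost_roi_coefficients(img, roi_mask, gain=4.0, wavelet='db2', level=3):
    """Scale up wavelet detail coefficients whose spatial support lies in the
    ROI, so a successive-approximation coder spends its bits there first."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    boosted = [coeffs[0]]
    for detail in coeffs[1:]:
        bands = []
        for band in detail:
            # Nearest-neighbour sampling of the ROI mask onto this subband grid.
            ys = np.arange(band.shape[0]) * roi_mask.shape[0] // band.shape[0]
            xs = np.arange(band.shape[1]) * roi_mask.shape[1] // band.shape[1]
            m = roi_mask[np.ix_(ys, xs)] > 0
            bands.append(np.where(m, band * gain, band))
        boosted.append(tuple(bands))
    # Feed the boosted coefficients to the quantizer; the decoder divides the
    # ROI coefficients by the same gain before the inverse transform.
    return boosted
```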
As part of ongoing research on hardware and software technologies for telemedicine, we have explored compression algorithms for ultrasound. Efficient compression is essential to PACS and telemedicine systems supporting ultrasound video sequences due to limited system resources such as archival storage and bus and network bandwidths. We have studied MPEG compression of ultrasound sequences and the impact of varying encoding parameters such as quantization scale on B and BC mode sequences. Our results indicate that standard interframe MPEG coding is not optimal for all echocardiographic sequences. Adjustment of MPEG quantization and sequencing parameters can provide improvements in compression ratio and/or image quality. Three areas for improvement in MPEG compression of ultrasound sequences have been identified and methods are suggested by which these issues may be addressed.
The ability to display medical images on electronic medical record (EMR) workstations appears to be an important step toward widespread implementation of both the EMR and picture archiving and communication systems. We describe a system we have implemented for image display on EMR workstations that is integrated with the EMR. Some of the important questions we hope to answer at the end of this pilot project are also outlined.
In telepathology, rendering devices significantly influence the perceived image quality. If the resolution and color depth are reduced beyond a certain point, however, it is not possible to obtain images that can be used in telepathology even in an ideal situation. With this in mind, we evaluated image quality, compression, size and rates of data exchange for several histological cases on several kinds of systems for our International Consortium for Internet Telepathology (ICIT) project. The ICIT network uses widely available nonproprietary hardware and software with the Internet as a means of communication. In this study, we discuss effective image acquisition methods for telepathology. To evaluate microscopic images, various resolution sizes were used. The images were also evaluated at different JPEG compression ratios, including zero compression, and in different formats. To evaluate an entire glass-slide image, a scanner in transparency mode and an NTSC camera were used. Every case showed similar results. For the microscopic image, although high-resolution images, such as 2k X 1.5k or higher, contain more diagnostic information than lower-resolution images, sufficient data was retained in the latter that it does not appear to negatively affect diagnosis. The circumstances and conditions of image acquisition, such as specimen thickness or dust on the glass slide, have the greatest influence at the highest image resolution. Usually, we use 5-10 images per case for a telepathology conference. To see all images of a case at a glance before detailed observation, or to switch to other images immediately, a lower resolution, such as 1k X 0.7k, is useful. For the entire glass slide, the reviewer could select the desired area with the scanner; however, selecting it with the NTSC camera was not easy. On the monitor, the scanned image has almost the same information as the microscopic image captured by the NTSC camera with a 2x objective lens. To get a high enough quality image, the important factors are correct usage of the microscope and the condition of the glass slide, not only high-performance equipment. Since we have been using the Internet as the communication medium, we selected 1024 X 774 and 640 X 480 images compressed at 1/7-1/15 for microscopic images and a 2700 dpi scanned image for the entire glass slide. For static-image telepathology, the most important image is the low-power image, such as the entire specimen. High-resolution images such as 3k X 2k are also useful for other purposes such as publication.
In this paper, we present a multimedia, ATM-network-based approach to generating and transmitting imaging procedure multimedia (MmR) reports in emergency situations. This approach was applied to V/P lung scintigrams in our institution. The architecture of our multimedia reporting system consists of a gamma camera providing V/P lung scintigrams as Interfile-formatted data, a workstation on which MmRs can be generated and from which they can be accessed, a set of low-cost workstations on which MmRs can be displayed, and an ATM network running throughout our hospital and connecting the above stations. The main features of the MmR are detailed in the paper and are assessed from a physician's point of view.
In this paper we present the methods and results of a workflow study of a multi-specialty cardiology conference, preliminary design concepts for a digital cardiac conference room, and a component anticipated for a complete implementation. Workflow studies at the University of California, San Francisco (UCSF) Medical Center were performed to understand its traditional catheterization conference procedures and processes. These studies involved observing and interviewing the people who prepare, present, and attend the conference. The workflow investigation gave insight into current drawbacks. Scenarios were then generated that described potential new designs of the cardiac catheterization conference. Knowledge gained from the workflow studies, and feedback from UCSF physicians who reviewed the digital conference room scenarios, led to the final system design. We have prototyped one of the components of the design: a software tool for improved presentation of dynamic images. This tool has been implemented in Java and is therefore platform independent.
We have developed a software tool for image processing over the Internet. The tool is a general-purpose, easy-to-use, flexible, platform-independent image processing software package with the functions most commonly used in medical image processing. It provides for processing of medical images located either remotely on the Internet or locally. The software was written in Java, the new programming language developed by Sun Microsystems. It was compiled and tested using Microsoft's Visual Java 1.0 and Microsoft's Just in Time Compiler 1.00.6211. The software is simple and easy to use. To use the tool, the user downloads the software from our site and runs it with any Java interpreter, such as those supplied by Sun, Symantec, Borland or Microsoft. Future versions of the operating systems supplied by Sun, Microsoft, Apple, IBM, and others will include Java interpreters. The software is then able to access and process any image on the Internet or on the local computer. Using a 512 X 512 X 8-bit image, a 3 X 3 convolution took 0.88 seconds on an Intel Pentium Pro PC running at 200 MHz with 64 Mbytes of memory. A window/level operation took 0.38 seconds, while a 3 X 3 median filter took 0.71 seconds. These performance numbers demonstrate the feasibility of using this software interactively on desktop computers. Our software tool supports various image processing techniques commonly used in medical image processing and runs without the need for any specialized hardware. It can become an easily accessible resource over the Internet to promote the learning and understanding of image processing algorithms. It could also facilitate the sharing of medical image databases and collaboration among researchers and clinicians, regardless of location.
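The benchmarked operations are standard. As a point of reference only (in Python rather than the authors' Java tool, with timings that will differ by machine and library), the sketch below runs the same three operations on a 512 X 512 8-bit image.

```python
import time
import numpy as np
from scipy import ndimage

img = np.random.randint(0, 256, (512, 512)).astype(np.float32)

t0 = time.perf_counter()
smoothed = ndimage.convolve(img, np.ones((3, 3)) / 9.0)   # 3x3 convolution
t1 = time.perf_counter()
center, width = 128.0, 64.0                                # window/level
windowed = np.clip((img - (center - width / 2)) / width * 255.0, 0, 255)
t2 = time.perf_counter()
median = ndimage.median_filter(img, size=3)                # 3x3 median filter
t3 = time.perf_counter()

print(f"convolution  {t1 - t0:.3f} s")
print(f"window/level {t2 - t1:.3f} s")
print(f"median 3x3   {t3 - t2:.3f} s")
```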
In this paper, a novel image compression scheme is presented which is especially suited for image transmission over the narrow-band networks typically required for telemedicine to remote regions. A wavelet compression algorithm is enhanced with the ability to compress different regions of the image at dynamically chosen rates. This feature is provided while keeping the algorithm's embedding ability, which leads to an 'importance'-based embedding rather than the traditional 'energy'-based embedding. To incorporate regions into a wavelet-based compression algorithm, the region edges were carefully tuned to eliminate the negative influence that the wavelet transform has on the region algorithm. Tests of this new algorithm on standard test images and ultrasound images showed that both the dynamic and region-based features could be incorporated into the wavelet algorithm with only a small overhead.