Implantation of radioactive isotopes within the prostate for the treatment of early-stage localized prostate cancer is becoming a popular treatment option. Postoperative calculation of the dose delivered to the prostate requires accurate verification of the number and location of seeds within the prostate. The current postoperative dosimetry technique requires the dosimetrist to manually count and record the position of each seed from x-ray computed tomography (CT) images. This procedure is operator-dependent and time-consuming, thus limiting the ability of different brachytherapy centers to compare results and create a standard methodology. Seed identification is performed by thresholding the CT images interactively, using a graphical user interface, followed by mathematical morphology to remove noise. Segmented seeds are grouped into regions via connected-component analysis. Regions are then classified into seeds using prior knowledge of the seed dimensions and their relative positions in consecutive CT images. Unresolved regions, which can indicate the presence of more than one seed, are corrected manually. The efficiency of this tool was evaluated by comparing the time required to count the seeds manually with the time required to do the same task using the automated program. For 15 sets of images from 15 patients, the average time for manually counting the seeds was 45 minutes per patient, versus 6.4 minutes per patient on average when the software was used to perform the same task. Using the interactive visualization and segmentation algorithm, the time required to count the seeds during post-implant dosimetry has been reduced by a factor of 7 compared to the existing manual technique.
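The thresholding and connected-component steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the threshold, 4-connectivity, and seed-area limits are assumed values, and a real system would also match regions across consecutive slices.

```python
from collections import deque

import numpy as np

def label_components(mask):
    """Label 4-connected foreground regions in a 2D boolean mask."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1
        labels[start] = current
        queue = deque([start])
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
    return labels, current

def find_seeds(ct_slice, threshold, min_area=2, max_area=20):
    """Threshold a CT slice, then keep regions whose area matches seed size."""
    labels, n = label_components(ct_slice > threshold)
    seeds = []
    for k in range(1, n + 1):
        area = int((labels == k).sum())
        if min_area <= area <= max_area:  # prior knowledge of seed dimensions
            rows, cols = np.nonzero(labels == k)
            seeds.append((rows.mean(), cols.mean()))
    return seeds
```

Regions larger than `max_area` would be the "unresolved" clusters the abstract mentions, flagged for manual correction.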
In many medical fields, volume rendering is beneficial. For effective surgery planning, interior objects as well as the surface must be rendered. However, direct volume rendering is computationally expensive, and surface rendering cannot represent objects inside the volume data. In addition, surface rendering has the disadvantage that the huge number of generated polygons cannot be easily managed even on high-end graphics workstations. This paper presents a way of generating multi-planar images efficiently. Multi-planar rendering consists of two parts: the surface and the cutting plane. To generate the surface efficiently, our algorithm uses image-based rendering, which generates an image in constant time regardless of the complexity of the input scene. To speed up performance, our algorithm uses an intermediate image space instead of the final image space. To reduce space complexity, we use a new data structure, based on the delta-tree, to represent the volume. The algorithm was implemented on a Silicon Graphics Indigo 2 workstation with a single 195 MHz R10000 processor and 192 MB of main memory. For the experiments, we used three volume data sets: UNC head, engine, and brain. Our algorithm takes 5-20 milliseconds to project the reference images to the desired view. Including the warping time, 40 milliseconds are required to generate an image.
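The delta-tree structure itself is not specified in the abstract; as a simpler stand-in, the following sketch shows the same space-saving idea using run-length encoding of the non-empty voxels along each scanline. The names and the occupancy threshold are illustrative assumptions.

```python
import numpy as np

def rle_encode(volume, threshold=0.0):
    """Run-length encode each (z, y) scanline, keeping only occupied runs.

    Returns a dict mapping (z, y) to a list of (x_start, values) runs, so
    empty space costs no storage at all.
    """
    runs = {}
    for z in range(volume.shape[0]):
        for y in range(volume.shape[1]):
            line = volume[z, y]
            occupied = line > threshold
            x, line_runs = 0, []
            while x < len(line):
                if occupied[x]:
                    start = x
                    while x < len(line) and occupied[x]:
                        x += 1
                    line_runs.append((start, line[start:x].copy()))
                else:
                    x += 1
            if line_runs:
                runs[(z, y)] = line_runs
    return runs

def rle_decode(runs, shape):
    """Rebuild the dense volume from its run-length representation."""
    vol = np.zeros(shape)
    for (z, y), line_runs in runs.items():
        for start, values in line_runs:
            vol[z, y, start:start + len(values)] = values
    return vol
```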
Decision making in the treatment of scoliosis is typically based on longitudinal studies that involve imaging and visualizing the progressive degeneration of a patient's spine over a period of years. Some patients will need surgery if their spinal deformation exceeds a certain degree of severity. Currently, surgeons rely on 2D measurements, obtained from x-rays, to quantify spinal deformation. Working only with 2D measurements clearly limits the surgeon's ability to infer 3D spinal pathology. Standard CT scanning is not a practical solution for obtaining 3D spinal measurements of scoliotic patients, because it would expose the patient to a prohibitively high dose of radiation. We have developed two new CT-based methods of 3D spinal visualization that produce 3D models of the spine by integrating a very small number of axial CT slices with data obtained from CT scout scans. In the first method, the scout data are converted to sinogram data and then processed by a tomographic image reconstruction algorithm. In the second method, the vertebral boundaries are detected in the scout data, and these edges are then used as linear constraints to determine 2D convex hulls of the vertebrae.
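The convex hull step in the second method can be illustrated with a standard 2D hull routine. This sketch uses Andrew's monotone chain rather than whatever constrained formulation the authors used, and the boundary points fed to it are hypothetical.

```python
def convex_hull(points):
    """Andrew's monotone chain: 2D convex hull in counterclockwise order.

    points: iterable of (x, y) tuples. Collinear points on hull edges are
    dropped.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:  # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):  # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # endpoints shared, listed once
```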
We describe an isosurface extraction algorithm that can generate low-resolution isosurfaces, with 4 to 25 times fewer triangles than the marching cubes algorithm generates, in comparable running times. The key idea is to partition the volume into variable-sized rectangular boxes and extract an isosurface for each box. The flexibility of forming rectangular boxes instead of cubes improves the triangle reduction ratio. The algorithm generates a low-resolution mesh faster than postprocessing triangle reduction algorithms, though the triangles in the mesh may not be optimally reduced. The generated mesh also preserves the geometric details of the true isosurface. By climbing from vertices to edges to faces, the algorithm constructs boxes that adapt to the geometry of the true isosurface. Unlike previous adaptive cubes algorithms, it does not suffer from the gap-filling problem.
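The box partitioning idea can be shown in a 2D analogue. The sketch below greedily grows variable-sized rectangles over the cells crossed by the contour; the paper's actual climbing scheme (vertices to edges to faces) is different, but the payoff is the same: fewer, larger boxes mean fewer output primitives.

```python
import numpy as np

def partition_rectangles(active):
    """Greedily cover the True cells of a 2D grid with axis-aligned
    rectangles.

    Each rectangle grows rightward, then downward, while every covered
    cell is still active; covered cells are then cleared. Returns a list
    of (row, col, height, width) tuples.
    """
    active = active.copy()
    rows, cols = active.shape
    rects = []
    for r in range(rows):
        for c in range(cols):
            if not active[r, c]:
                continue
            w = 1
            while c + w < cols and active[r, c + w]:
                w += 1
            h = 1
            while r + h < rows and active[r + h, c:c + w].all():
                h += 1
            active[r:r + h, c:c + w] = False
            rects.append((r, c, h, w))
    return rects
```

A full 2 x 4 block of active cells yields one rectangle instead of eight unit cells, which is the source of the triangle reduction.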
In order to shade a volumetric object, a normal vector must be estimated for each point on its surfaces. Normal estimation methods can be classified into object-space and image-space approaches. Image-space methods, which exploit the distance from the viewer to visible surface points, are more advantageous than object-space ones, since they require less computation and shade volumes rapidly even when the volumes are deformed frequently. However, they can misrepresent the topological relationship between visible surfaces and hidden ones. We therefore devise an extended depth buffer for representing the topology of surfaces, which stores the distance to visible surfaces as well as the distance to hidden ones. It improves the performance of the conventional discontinuity detector and computes normals on surface boundaries accurately.
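A minimal image-space normal estimator of the kind discussed can be written from the depth buffer alone. This sketch uses central differences and assumes unit pixel spacing and an orthographic view; it does not yet handle the visible/hidden-surface topology that the extended depth buffer addresses.

```python
import numpy as np

def normals_from_depth(depth):
    """Estimate per-pixel surface normals from a depth image.

    For a depth surface z(x, y), the (unnormalized) normal is
    (-dz/dx, -dz/dy, 1); central differences approximate the gradients.
    Returns an (H, W, 3) array of unit normals.
    """
    dz_dy, dz_dx = np.gradient(depth)
    n = np.stack([-dz_dx, -dz_dy, np.ones_like(depth)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return n
```

Pixels where the depth gradient is very large mark depth discontinuities, which is exactly where a naive image-space method errs and where the extended depth buffer's hidden-surface distances would be consulted.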
Prostate brachytherapy is a treatment procedure for localized prostate cancer. It involves placing needles, and subsequently radioactive seeds, under ultrasound guidance into predetermined targets within the prostate. As the pubic arch can be a barrier to successful placement of the needles, preoperative assessment requires visualization of the pubic arch with respect to the prostate. Current CT-based techniques to assess pubic arch interference (PAI) are expensive and time-consuming. This paper describes a new technique using transrectal ultrasound that enables simultaneous visualization of the pubic arch bone and the prostate gland. The technique involves speckle suppression in the pubic arch ultrasound image and contrast enhancement of the pubic bones using a sticks algorithm. This step is followed by noise filtering using percentile thresholding and curve fitting. The detected arch is superimposed on the transverse cross-sectional image of the prostate at its largest extent. The pubic arch position predicted by the algorithm was compared with the 'true' pubic arch position determined at surgery by placing needles into multiple coordinates corresponding and adjacent to the predicted arch position. The accuracy of the algorithm in detecting the pubic arch was tested on 50 patients. Of 1030 points tested, the algorithm's prediction was correct at 932 points. The mean Type II error rate, i.e., cases where the algorithm predicted soft tissue while bone was encountered during needle insertion, was 2.9 percent, which corresponds to less than 1 out of 22 test points along the predicted pubic arch. The accuracy of our algorithm is good, and the errors are within clinically acceptable limits.
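The sticks-based enhancement and percentile thresholding can be sketched as below. The stick length, the four orientations, and the percentile are assumed values; the published algorithm's exact parameters are not given here.

```python
import numpy as np

def sticks_filter(image, length=5):
    """Max of directional means over four stick orientations (0/45/90/135
    degrees).

    Averaging along short line segments suppresses speckle, while taking
    the maximum over orientations preserves thin bright structures such as
    bone interfaces.
    """
    h = length // 2
    offsets = [
        [(0, d) for d in range(-h, h + 1)],    # 0 degrees
        [(d, -d) for d in range(-h, h + 1)],   # 45 degrees
        [(d, 0) for d in range(-h, h + 1)],    # 90 degrees
        [(d, d) for d in range(-h, h + 1)],    # 135 degrees
    ]
    padded = np.pad(image, h, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for offs in offsets:
        acc = np.zeros_like(out)
        for dr, dc in offs:
            acc += padded[h + dr:h + dr + image.shape[0],
                          h + dc:h + dc + image.shape[1]]
        out = np.maximum(out, acc / length)
    return out

def percentile_mask(response, pct=95):
    """Keep only the brightest pct-percentile responses (candidate bone)."""
    return response >= np.percentile(response, pct)
```

Curve fitting to the surviving mask pixels would then yield the continuous arch contour that is overlaid on the prostate image.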
The common approach for artery-vein separation applies a presaturation pulse to obtain different image intensity representations of arteries and veins in MRA data. However, when arteries and veins do not run in opposite directions, as in the brain, lungs, and heart, this approach fails. This paper presents an image processing approach devised for artery-vein separation. The anatomic separation utilizes fuzzy connected object delineation. The first step of this separation method is the segmentation of the entire vessel structure from the background via absolute connectedness using scale-based affinity. The second step is to separate the arteries from the veins via relative connectedness. After 'seed' points are specified inside an artery and a vein in the vessel-only image, the operation is performed in an iterative fashion. The larger aspects of the arteries and veins are separated in the initial iteration, and further regions are added in subsequent iterations, so that the smaller aspects of the arteries and veins are included in later iterations. Shell rendering is used for 3D display. Combining the strengths of fuzzy connected object definition, object separation, and shell rendering, high-quality volume rendering of vascular information in MRA data has been achieved. MS-325 contrast-enhanced MRA data were used to illustrate this approach. Several examples of 3D displays of arteries and veins are included to show the considerable promise of this new approach.
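The fuzzy connectedness computation behind both steps can be sketched as a best-path (max-min) propagation from a seed. The Gaussian intensity affinity and sigma below are assumptions made for illustration; the real method additionally uses scale-based affinity, and relative connectedness grows competing maps from artery and vein seeds.

```python
import heapq

import numpy as np

def fuzzy_connectedness(image, seed, sigma=10.0):
    """Fuzzy connectedness map: strength of the best path from the seed.

    Affinity between 4-neighbors decays with their intensity difference;
    a path's strength is its weakest link, and each pixel receives the
    strength of its strongest path (computed Dijkstra-style).
    """
    rows, cols = image.shape
    conn = np.zeros((rows, cols))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        neg, (r, c) = heapq.heappop(heap)
        strength = -neg
        if strength < conn[r, c]:
            continue  # stale entry
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                diff = abs(float(image[r, c]) - float(image[nr, nc]))
                s = min(strength, np.exp(-diff / sigma))
                if s > conn[nr, nc]:
                    conn[nr, nc] = s
                    heapq.heappush(heap, (-s, (nr, nc)))
    return conn
```

Relative connectedness would then assign each voxel to artery or vein by comparing the maps grown from the two seed sets.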
Femoral anteversion is the extent of the torsion of the femoral head. Several methods for measuring femoral anteversion from CT images have been described since 1980. In this paper, we introduce a 3D modeling method for measuring femoral anteversion, in which the anteversion is determined by calculating all required parameters from the boundary data of the femur. To evaluate the 3D modeling method, its results were compared with those of the conventional methods, the 2D CT method and the 3D imaging method, using data from the direct method as reference values. The average error of the 3D modeling method, the 2D CT method, and the 3D imaging method was 1.1 degrees, 5.33 degrees, and 0.45 degrees, respectively. In conclusion, the 3D modeling method proved to be a better technique, since its procedures reduce the processing time and eliminate manual errors, thus providing accurate data.
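In any of these methods, once the anatomical axes have been extracted from the boundary data, the anteversion angle itself reduces to a projected-angle computation. The sketch below assumes the femoral neck, condylar, and shaft axes are already available as direction vectors; it is a generic geometric definition, not the paper's specific parameter set.

```python
import numpy as np

def anteversion_angle(neck_axis, condylar_axis, shaft_axis):
    """Angle (degrees) between the femoral neck axis and the condylar axis,
    measured in the plane perpendicular to the femoral shaft axis."""
    shaft = np.asarray(shaft_axis, float)
    shaft = shaft / np.linalg.norm(shaft)

    def project(v):
        # remove the component along the shaft, then normalize
        v = np.asarray(v, float)
        p = v - np.dot(v, shaft) * shaft
        return p / np.linalg.norm(p)

    n, c = project(neck_axis), project(condylar_axis)
    return np.degrees(np.arccos(np.clip(np.dot(n, c), -1.0, 1.0)))
```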
As a step toward understanding the complex spatial distribution patterns of prostate cancers, a pilot 3D master model of the prostate, showing major anatomical structures and probability maps of tumor location, has been developed. A virtual environment supported by the 3D master model and in vivo imaging features will be used to evaluate, simulate, and optimize image-guided needle biopsy and radiation therapy, thus potentially improving the efficacy of prostate cancer diagnosis, staging, and treatment. A deformable graphics algorithm has been developed to reconstruct the graphics models from 200 serially sectioned whole-mount radical prostatectomy specimens and to support computerized needle biopsy simulations. For the construction of the generic model, a principal-axes 3D registration technique has been developed. Simulated evaluations and real-data experiments have shown the satisfactory performance of the method in constructing an initial generic model with localized prostate cancer placement. For the construction of the statistical model, a blended model registration technique is advanced to perform a non-linear warping of the individual model to the generic model so that the prostate cancer probability distribution maps can be accurately positioned. The method uses a spline-surface model and a linear elastic model to dynamically deform both the surface and the volume where object re-slicing is required. For interactive visualization of the 3D master model, four modes of data display have been developed: (1) transparent rendering of the generic model, (2) overlaid rendering of cancer distributions, (3) stereo rendering, and (4) true volumetric display; in addition, a model-to-image registration technique using synthetic image phantoms is under investigation. Preliminary results have shown that use of this master model allows correct understanding of prostate cancer distribution patterns and rational optimization of prostate biopsy and radiation therapy strategies.
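The principal-axes registration step can be sketched as a generic moment-based alignment: match centroids, then rotate the eigenvectors of one point cloud's covariance onto the other's. This is an illustrative sketch only; it ignores the axis sign ambiguity that a full implementation must resolve.

```python
import numpy as np

def principal_axes(points):
    """Centroid and principal axes (covariance eigenvectors) of an N x 3
    point cloud, with axes sorted by decreasing variance."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]
    return centroid, evecs[:, order]

def align_to_generic(points, generic_points):
    """Rigidly map a model onto the generic model by matching centroids
    and principal axes (sign ambiguity ignored in this sketch)."""
    c1, a1 = principal_axes(points)
    c2, a2 = principal_axes(generic_points)
    rotation = a2 @ a1.T  # sends each source axis onto its target axis
    return (points - c1) @ rotation.T + c2
```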
The 3D visualization of intracranial vasculature can facilitate the planning of endovascular therapy and the evaluation of interventional results. To create 3D visualizations, volumetric datasets from x-ray computed tomography angiography (CTA) and magnetic resonance angiography (MRA) are commonly rendered using maximum intensity projection (MIP), volume rendering, or surface rendering techniques. However, small aneurysms and mild stenoses are very difficult to detect using these methods. Furthermore, the instruments used during endovascular embolization or surgical treatment produce artifacts that typically make post-intervention CTA inapplicable, and the presence of magnetic material prohibits the use of MRA. Therefore, standard digital angiography is typically used. In order to address these problems, we developed a visualization and modeling system that displays 2D and 3D angiographic images using a simple Web-based interface. Polygonal models of vasculature were generated from CT and MR data using 3D segmentation of bones and vessels, followed by polygonal surface extraction and simplification. A web-based 3D environment was developed for interactive examination of the reconstructed surface models, creation of oblique cross-sections and maximum intensity projections, and distance measurements and annotations. This environment uses a multi-tier client/server approach employing VRML and Java. The 3D surface model and angiographic images can be aligned and displayed simultaneously to permit better perception of complex vasculature and to determine optimal viewing positions and angles before starting an angiographic session. Polygonal surface reconstruction allows interactive display of complex spatial structures on inexpensive platforms such as personal computers as well as graphics workstations. The aneurysm assessment procedure demonstrated the utility of web-based technology for clinical visualization.
The resulting system facilitated the treatment of serious vascular malformations by enabling better visualization of aneurysms. In addition, the improved understanding of complex vasculature may shorten angiographic sessions and reduce patient exposure to x-rays.
Optical imaging of transmembrane potentials in cardiac tissue is a rapidly growing technique in cardiac electrophysiology. Current studies typically use a monocular imaging setup, thus limiting investigation to a restricted region of tissue. However, studies of large-scale wavefront dynamics, especially during fibrillation and defibrillation, require visualization of the entire epicardial surface. We have developed a panoramic cardiac visualization system using mirrors that performs two tasks: (1) reconstruction of the surface geometry of the heart, and (2) representation of the panoramic fluorescence information as a texture mapped onto the previously created geometry. This system permits measurements of epicardial electrodynamics over a geometrically realistic representation of the actual heart being studied.
As a step toward understanding complex information, structural and discriminative knowledge extracted from data and their relationships can reveal insights useful in data interpretation and exploration. This paper reports the development of an automated and intelligent procedure for generating a hierarchy of minimax entropy models and principal component visualization spaces for improved data explanation. The proposed hierarchical minimax entropy modeling and probabilistic principal component projection are both statistically principled and visually effective at revealing all of the interesting aspects of the data set. The methods involve multiple use of standard finite normal mixture models and probabilistic principal component projections. The strategy is that the top-level model and projection should explain the entire data set, best revealing the presence of clusters and relationships, while lower-level models and projections should display internal structure within individual clusters, such as the presence of subclusters and attribute trends, which might not be apparent in the higher-level models and projections. With many complementary mixture models and visualization projections, each level remains relatively simple while the complete hierarchy maintains overall flexibility and still conveys considerable structural information. In particular, a model identification procedure is developed to select the optimal number and kernel shapes of local clusters from a class of data, resulting in a standard finite normal mixture with minimum conditional bias and variance, and a probabilistic principal component neural network is advanced to generate optimal projections, leading to a hierarchical visualization algorithm that allows the complete data set to be analyzed at the top level, with the best-separated subclusters of data points analyzed at deeper levels.
Hierarchical probabilistic principal component visualization involves (1) evaluation of posterior probabilities for the mixture data set, (2) estimation of multiple principal component axes from the probabilistic data set, and (3) generation of a complete hierarchy of visual projections. With a soft clustering of the data points t_i via the EM algorithm, data points will effectively belong to more than one cluster at any given level, with posterior probabilities denoted by z_ik. Thus, the effective input values are z_ik t_i for an independent visualization space k in the hierarchy. Further projections can again be performed using the effective input values z_ik z_{j|k} t_i for the visualization subspace j. The complete visual explanation hierarchy is generated by performing principal projection and model identification in two iterative steps using information-theoretic criteria, the EM algorithm, and probabilistic principal component analysis.
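A sketch of the posterior-weighted projection step: given EM responsibilities z_ik, each cluster receives its own weighted principal axes, and the effective inputs z_ik t_i are projected onto them. The weighting scheme below is a standard choice assumed for illustration and may differ in detail from the paper's exact formulation.

```python
import numpy as np

def posterior_weighted_projections(data, responsibilities, n_components=2):
    """Per-cluster PCA projections weighted by EM posterior probabilities.

    data: (N, D) points t_i; responsibilities: (N, K) soft assignments
    z_ik, each row summing to 1. Returns one (N, n_components) projection
    of the effective inputs z_ik * t_i per cluster.
    """
    projections = []
    for k in range(responsibilities.shape[1]):
        z = responsibilities[:, k]
        mu = (z[:, None] * data).sum(axis=0) / z.sum()   # weighted mean
        centered = data - mu
        cov = (z[:, None] * centered).T @ centered / z.sum()
        evals, evecs = np.linalg.eigh(cov)
        axes = evecs[:, np.argsort(evals)[::-1][:n_components]]
        projections.append((z[:, None] * data) @ axes)   # z_ik t_i projected
    return projections
```

Repeating the same computation inside each subspace, with responsibilities z_ik z_{j|k}, yields the deeper levels of the hierarchy.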
CTA and MRA have established themselves as important complementary techniques to conventional angiography. Both of these techniques require advanced vascular visualization as part of their clinical protocols. Among the important requirements for vascular visualization are the detection of small pathologies and the perception of the relationships between vascular structures. Depending on the acquisition modality and the location of the pathology, different 3D visualization techniques are used. With any of these techniques, interactivity significantly improves the clinical understanding of the study. In this case study, we show how IAP, a platform product specifically designed for medical imaging applications, addresses these requirements. Specifically, we analyze two studies involving two different types of vascular structures.
In many surgical procedures, ultrasound is used for real-time visualization in order to minimize invasion of healthy tissue. Unfortunately, the exact location of soft tissues and the composition of tissues of interest may be difficult to determine using ultrasound alone. In interactive image-guided surgery (IIGS), the display of the present surgical position on preoperative tomographic images enhances the surgeon's locational awareness and provides knowledge of the surgical anatomy. However, changes in the anatomy during surgery are not captured by current IIGS techniques. This manuscript details initial experiments conducted to merge the strengths of intraoperative ultrasound imaging with IIGS. This includes: (1) developing a technique for accurately tracking an ultrasound probe in physical space, and (2) determining a transformation to map ultrasound image space into physical space.
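Step (2), mapping ultrasound image space into physical space, amounts to a chain of homogeneous transforms. The sketch below assumes a rigid image-to-probe calibration matrix and a tracker-reported probe pose; both matrices and the pixel scale are hypothetical values for illustration.

```python
import numpy as np

def make_transform(rotation, translation):
    """Build a 4 x 4 homogeneous transform from a 3 x 3 rotation and a
    3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def pixel_to_physical(u, v, scale, T_image_to_probe, T_probe_to_world):
    """Map an ultrasound pixel (u, v) into physical (tracker) space.

    scale: mm per pixel; T_image_to_probe is the calibration transform,
    and T_probe_to_world comes from the tracking system at acquisition
    time. The image plane is taken as z = 0 in image coordinates.
    """
    p_image = np.array([u * scale, v * scale, 0.0, 1.0])
    return (T_probe_to_world @ T_image_to_probe @ p_image)[:3]
```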
Case-based reasoning (CBR), which involves the representation of prior experience as cases, provides a natural approach for developing a medical diagnosis support system, because medical practitioners usually solve new problems by comparing them to previously seen cases. We propose a general framework for such a system with the aim of assessing the normality or abnormality of the cervical spine. Two distinct types of visual features are used for indexing the cases: a small set of basic features known to be useful to radiologists for diagnosis, and a minimal set of salient generic visual features. The latter is obtained by using knowledge discovery in databases (KDD) techniques. Standard image analysis techniques are used to automatically extract both types of features from the images. The efficiency and adaptiveness of the system can be further improved by using KDD techniques to reduce the set of cases to the minimum, as well as to extract appropriate adaptation rules.
It was recognized early on that digitizing medical information makes diagnostic technology more advanced and efficient. In order to convert image information, which comprises the majority of all medical information, into digital data, various technologies, including those for input, processing, transmission, storage, and display, need to develop at roughly the same pace. To date, there have been few cases where this has been done. However, recent major advances in high-resolution image input/output, image encoding, super-fast transmission, high-capacity storage, and other technologies have intensified the drive toward digitizing and networking all medical information. This paper will show that the spread of super-high-speed networks capable of transmitting large amounts of data in a short time is indispensable for accurate medical diagnosis, and that this will make it possible to realize an integrated medical information system. A target application for the medical image diagnosis of the super-high-definition images being developed by the authors of this paper is telepathology, which particularly demands high-quality images. In this paper, we study, among other things, the concrete issues crucial to building and networking a digital system and the approach to resolving such issues. We also report on the building of our experimental system that fulfills such demands, and discuss a pathological microscopic image transmission system with image quality that does not lower diagnostic accuracy, and with response fast enough and operability good enough not to make diagnosticians feel impatient. Finally, we discuss a test in which we remotely operated a microscope over an ATM line to prove that it is possible to capture, transmit, and display a still super-high-definition digital image with a resolution of 2,048 x 2,048 pixels in about 5 seconds.
Cardiac patients may undergo a range of diagnostic examinations, including angiography, echocardiography, nuclear medicine, x-ray, ECG, and blood pressure measurement. Cine angiograms are reviewed at cardiac case conferences; other data types are typically not exhibited, due to the incompatibility of display devices. The aim of this study was to evaluate a workstation developed for multimodality reporting in cardiac case conferencing. A PC-based system was developed as part of the EU project AMIE, enabling all patient data to be viewed and manipulated on a large-screen display using a high-resolution video projector. The digital data were acquired using a variety of methods compatible with the systems involved. A technical evaluation of the projected imagery was performed by grading phantom test objects. A limited clinical evaluation was also performed, whereby a panel of 10 consultant radiologists and cardiologists reported on angiography and x-ray images from 50 patients. Several months later, the original data sets were reported on and the results compared. Results of the clinical and technical evaluations indicate that the system is satisfactory for the primary diagnosis of all data types with the exception of x-ray; the projected x-ray imagery is satisfactory for reference and teaching purposes.
Image and video compression standards such as JPEG, MPEG, and H.263 are highly sensitive to errors during transmission. Among the typical error propagation mechanisms in video compression schemes, loss of block synchronization produces the worst image degradation: even a single-bit error in block synchronization may cause data to be placed in the wrong positions through spatial shifts. Our proposed efficient block error concealment code (EBECC) virtually guarantees block synchronization, and it improves coding efficiency by several hundredfold over the error-resilient entropy code (EREC), proposed by N. G. Kingsbury and D. W. Redmill, depending on the image format and size. In addition, the EBECC produces slightly better resolution in the reconstructed images or video frames than the EREC. Another important advantage of the EBECC is that it does not require redundancy, in contrast to the EREC, which requires 2-3 percent redundancy. Our preliminary results show that the EBECC is 240 times faster than the EREC for encoding and 330 times faster for decoding, based on the CIF format of the H.263 video coding standard. The EBECC can be used with most popular image and video compression schemes, such as JPEG, MPEG, and H.263. It is especially useful in wireless networks, in which the percentage of image and video data is high.
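For context, the slot structure of the EREC (the scheme the EBECC is compared against) shows how block synchronization can be protected: every block starts at a known slot boundary, so a bit error cannot shift subsequent blocks. The EBECC itself is not specified in the abstract; the following is a simplified sketch of the EREC bit-reallocation idea, operating on bit strings for clarity.

```python
def erec_pack(blocks, slot_len):
    """Simplified EREC-style packing: N variable-length blocks into N
    fixed-length slots, so every block starts at a known slot boundary.

    blocks: list of bit strings; total bits must fit in N * slot_len.
    First each block fills its own slot; leftover bits are then placed
    into other slots' spare space over successive offset passes.
    """
    n = len(blocks)
    slots = [blocks[i][:slot_len] for i in range(n)]
    leftovers = [blocks[i][slot_len:] for i in range(n)]
    for offset in range(1, n):
        for i in range(n):
            if not leftovers[i]:
                continue
            j = (i + offset) % n
            space = slot_len - len(slots[j])
            if space > 0:
                slots[j] += leftovers[i][:space]
                leftovers[i] = leftovers[i][space:]
    assert not any(leftovers), "total data exceeds slot capacity"
    return slots
```

A decoder always knows that block i begins at offset i * slot_len, so a corrupted bit can garble content but cannot desynchronize the block positions.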
The system presented here enhances documentation and data-secured second-opinion facilities by integrating video sequences into DICOM 3.0. We present an implementation of a medical video server extended by a DICOM interface. Security mechanisms conforming with DICOM are integrated to enable secure Internet access. Digital video documents of diagnostic and therapeutic procedures should be examined with regard to the clip length and size necessary for second opinion and manageable with today's hardware. Image sources relevant for this paper include the 3D laparoscope, 3D surgical microscope, 3D open-surgery camera, synthetic video, and monoscopic endoscopes. The global DICOM video concept and three special workplaces for distinct applications are described. Additionally, an approach is presented to analyze the motion of the endoscopic camera for future automatic video cutting. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery; therefore DSVS are also integrated into the DICOM video concept. Results are presented describing the suitability of stereoscopic display techniques for the operating room.
In the last few years, a great deal of effort has been invested by the scientific community in the discrete wavelet transform (DWT). The DWT associated with vector quantization has proved to be an invaluable tool for image compression. The DWT, however, is a very computationally intensive process, so innovative and computationally efficient architectures must be investigated to achieve image compression in real time. In this paper, we present a novel, robust, and regular architecture to implement the DWT for image compression. Besides performance, the architecture takes into account data format, power, hardware cost, and scalability issues arising from realistic operating conditions.
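As a minimal illustration of the transform such an architecture implements, the sketch below computes one level of a separable 2D DWT in plain Python. The Haar kernel is chosen only for brevity; a real compression design would typically use longer biorthogonal filters.

```python
def haar_1d(row):
    # One level of the 1D Haar transform: pairwise averages (low-pass)
    # followed by pairwise differences (high-pass), each scaled by 1/2.
    avg = [(row[2*i] + row[2*i+1]) / 2 for i in range(len(row) // 2)]
    dif = [(row[2*i] - row[2*i+1]) / 2 for i in range(len(row) // 2)]
    return avg + dif

def haar_2d(img):
    # Separable 2D DWT: transform every row, then every column.
    rows = [haar_1d(r) for r in img]
    cols = [haar_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

img = [
    [10, 10, 20, 20],
    [10, 10, 20, 20],
    [30, 30, 40, 40],
    [30, 30, 40, 40],
]
out = haar_2d(img)
# The top-left quadrant holds the low-low (approximation) subband;
# for this piecewise-constant image all detail subbands are zero.
print(out[0][:2], out[1][:2])  # [10.0, 20.0] [30.0, 40.0]
```

The separable row/column structure is what makes the transform attractive for regular hardware: the same 1D filtering unit can be reused for both passes.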
The purpose of this work is to compare the speed of isosurface rendering in software with that of dedicated hardware. Input data consist of 10 different objects from various parts of the body and various modalities, with a variety of surface sizes and shapes. The software rendering technique is a particular method of voxel-based surface rendering called shell rendering. The hardware method is OpenGL-based and uses surfaces constructed from our implementation of the 'Marching Cubes' algorithm. The hardware environment consists of a variety of platforms, including a Sun Ultra 1 with a Creator3D graphics card and a Silicon Graphics Reality Engine II, both with polygon rendering hardware, and a 300 MHz Pentium PC. The results indicate that the software method was 18 to 31 times faster than either hardware rendering method. This work demonstrates that a software implementation of a particular rendering algorithm can outperform dedicated hardware. We conclude that expensive dedicated hardware engines are not required for medical surface visualization: available software algorithms on a 300 MHz Pentium PC outperform rendering via hardware engines by a factor of 18 to 31.
A 3D image processing algorithm for separating vessels in datasets from magnetic resonance angiography (MRA) and computed tomography angiography (CTA) has been developed and tested on clinical MRA data. Relevant and irrelevant vessels are marked interactively by the user. The algorithm then processes the data, ideally yielding a 3D dataset representing only the vessels of interest while removing other structures. The result is projected to 2D images for visualization. In contrast to traditional segmentation methods, little greyscale information is lost in the process, and the amount of interaction required is relatively small. The classification of voxels utilizes a novel greyscale connectivity measure: a voxel's connectivity values with respect to the marked regions are compared to decide whether it is of interest for visualization. In the projection, voxels are excluded where the connectivity value is smaller for the relevant vascular structure than for the irrelevant ones. In cases of ambiguity, morphological operations applied to unambiguously classified regions may be used as an additional criterion. In the implementation of the connectivity computation, an iterative propagation scheme is used, similar to that used in chamfer algorithms for distance transforms.
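A minimal 2D sketch of the greyscale connectivity idea, assuming the common definition in which a path's strength is the minimum intensity along it and a voxel's connectivity to a seed is the strength of the best path; the paper's actual measure and its 3D propagation order may differ.

```python
def connectivity_map(img, seed):
    # Greyscale connectivity to a seed, computed by iterative propagation
    # to a fixed point (cf. chamfer algorithms for distance transforms).
    # A path's strength is the minimum intensity along it; each pixel
    # stores the strength of the best path found so far.
    h, w = len(img), len(img[0])
    conn = [[0.0] * w for _ in range(h)]
    sy, sx = seed
    conn[sy][sx] = img[sy][sx]
    changed = True
    while changed:
        changed = False
        for y in range(h):
            for x in range(w):
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        cand = min(conn[ny][nx], img[y][x])
                        if cand > conn[y][x]:
                            conn[y][x] = cand
                            changed = True
    return conn

img = [
    [9, 9, 1, 5],   # a bright vessel (9s) separated from another
    [9, 9, 1, 5],   # structure (5s) by a dark gap (1s)
    [9, 9, 1, 5],
]
conn = connectivity_map(img, (0, 0))  # seed inside the bright vessel
print(conn[2][1], conn[0][3])  # 9 inside the vessel, 1 beyond the gap
```

In the vessel-separation setting, a voxel would be kept for projection when its connectivity to a relevant seed exceeds its connectivity to every irrelevant seed.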
High-performance mediaprocessors that aim at high-level language programming without sacrificing much performance would be very desirable in many applications, including medical imaging. The Media Accelerated Processor (MAP) 1000, being developed jointly by Hitachi, Ltd. and Equator Technologies, Inc., is one such next-generation mediaprocessor. We present the two main issues in programming these mediaprocessors, the use of C intrinsics and data flow control, which still demand a high degree of expertise: detailed knowledge of the architectural features (including many low-level instructions), handling of input/output data transfers, and an in-depth understanding of the algorithm. To ease the programming burden and allow flexible and efficient deployment of a MAP-based target system, we have developed the University of Washington Image Computing Library (UWICL) for the MAP1000. Currently, it consists of 105 functions. The UWICL functions effectively decouple data flow control from data processing in a flexible two-layered software structure: the upper layer is responsible for data transfer between on-chip cache and off-chip memory by utilizing the on-chip DMA controller in a double-buffering scheme, and the lower layer performs the data processing. This hierarchy allows the flexibility to use the UWICL modules depending on how the application is implemented and/or the level of the user's experience in programming the MAP1000 mediaprocessor.
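The two-layer double-buffering scheme can be sketched as follows. This is a hypothetical illustration, not the real UWICL API: the "DMA transfer" is just a list copy standing in for the on-chip controller, and the strip/kernel names are ours.

```python
def process_strips(image_strips, kernel):
    # Two-layer double-buffering sketch: while the lower layer processes
    # the current strip, the upper layer stages the *next* strip into the
    # other buffer, so transfer and compute overlap on real hardware.
    buffers = [None, None]
    buffers[0] = list(image_strips[0])            # prefetch strip 0
    results = []
    for i in range(len(image_strips)):
        cur = buffers[i % 2]
        if i + 1 < len(image_strips):             # upper layer: stage next strip
            buffers[(i + 1) % 2] = list(image_strips[i + 1])
        results.append([kernel(v) for v in cur])  # lower layer: compute
    return results

strips = [[1, 2], [3, 4], [5, 6]]
out = process_strips(strips, lambda v: v * 2)
print(out)  # [[2, 4], [6, 8], [10, 12]]
```

The point of the structure is that the processing layer never waits on memory: by the time strip i is finished, strip i+1 is already resident in the alternate buffer.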
Application accuracy is a crucial factor for stereotactic surgical localization systems, and the quality of the medical images can influence it, along with many other factors. In this study, we compared the influence of different section thicknesses of MRI and CT images on application accuracy during image-guided surgery. An implantable frameless marker system was used. CT scans were taken with two thicknesses, 1 and 2 mm, and at two resolutions, 256 × 256 and 512 × 512. T1-weighted MRI images were used with three thicknesses: 1, 3, and 10 mm. An IR tracking system and the Neurosurgery Planning System software were used for image registration and intraoperative digitization. The differences among the mechanical measurements, image digitization from the computer, and measurement through the tracking system were compared, with the mechanical measurement taken as the most accurate. A statistical analysis of the results was performed. In the CT group, there was a significant difference between 1 mm and 2 mm sections; a difference was also found between the 256 × 256 and 512 × 512 image quality groups. In the MRI group, there was a significant difference between 10 mm sections and 1 mm or 3 mm sections, but no difference between 1 mm and 3 mm sections. Comparing CT and MRI at 1 mm thickness, there was no significant difference. For the evaluation of application accuracy during image-guided surgery, the quality of the medical image is an important factor, and section thickness is the factor usually used to analyze its influence. It is commonly accepted that the thinner the section, the better the application accuracy; however, there is a limit. Our results showed that for CT images, reducing the thickness from 2 mm to 1 mm can still significantly improve the application accuracy, but for MRI images, when the thickness was reduced from 3 mm to 1 mm, the application accuracy remained the same. These results may reflect differences between the machines producing these medical images. The quality of medical images does influence application accuracy during image-guided surgery.
While laparoscopes are used for numerous minimally invasive procedures, minimally invasive liver resection and ablation occur infrequently. The paucity of cases is due to the limited field of view and the difficulty of determining tumor location and margins under video guidance. By merging minimally invasive surgery with interactive, image-guided surgery, we hope to make laparoscopic liver procedures feasible. In previous work, we described methods for tracking an endoscope accurately in patient space and for registration between endoscopic image space and physical space using the direct linear transformation (DLT). We have now developed a PC-based software system to display up to four 512 × 512 images indicating current surgical position using an active optical tracking system. We have used this system in several open liver cases and believe that a surface-based registration technique can be used to register physical space to tomographic space after liver mobilization. For preliminary phantom liver studies, our registration error is approximately 2.0 mm. The surface-based registration technique will allow better localization of non-visible liver tumors, more accurate probe placement for ablation procedures, and more accurate margin determination for open surgical liver cases. The surface-based/DLT registration methods, in combination with the video display and tracked endoscope, will hopefully make laparoscopic liver cryoablation and resection procedures feasible.
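The DLT mentioned above is a standard technique for relating image space to physical space. A generic sketch (not the authors' implementation) that recovers a 3x4 projection matrix from six synthetic point correspondences via the SVD null vector:

```python
import numpy as np

def dlt(world_pts, image_pts):
    # Direct linear transformation: solve the 3x4 projection matrix P in
    # x ~ P X from >= 6 correspondences, using the SVD null vector of the
    # standard two-rows-per-point design matrix.
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 4)

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic check: a known camera and six non-coplanar points.
P_true = np.array([[800, 0, 320, 10], [0, 800, 240, 20], [0, 0, 1, 2.0]])
world = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1), (1, 2, 3)]
image = [project(P_true, np.array(w)) for w in world]
P_est = dlt(world, image)
err = max(np.linalg.norm(project(P_est, np.array(w)) - i)
          for w, i in zip(world, image))
print(err < 1e-6)  # True: P is recovered up to scale
```

In an endoscope-tracking setting, the world points would be tracked fiducials and the image points their observed positions in the video frame.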
Assessing the image quality of display devices is becoming an important concern for radiology departments with large numbers of widely distributed image displays. Methods commonly used for laboratory measurements are too costly and cumbersome for routine quality assessment; however, methods that rely on visual assessment of currently available test targets may not have adequate sensitivity. The purpose of this paper is to quantify the sensitivity of commonly used test targets for visual assessment of medical display devices with well-defined changes in sharpness and noise. Two test-target methods were selected from those that have been used for visual assessment of image displays. For each, the assessment is a measure of the size and contrast of the smallest visible pattern in the target. Computer simulation was used to produce images of each of the targets having known sharpness and noise degradation. Viewers were trained in the use of each target, then asked to score a randomly ordered set of simulation-degraded target images. These data were analyzed for each method and the results evaluated with standard statistical methods. Assessments were found to correlate with sharpness and noise. However, the sensitivity of both targets for single-stimulus assessment was found to be inadequate, so the practical utility of these methods must be questioned.
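Simulation-degraded targets of the kind described above can be produced by combining a blur (sharpness loss) with additive noise. The sketch below is a simplified stand-in for the paper's simulation: repeated 3-tap box blurs along rows and seeded Gaussian noise are our illustrative choices.

```python
import random

def degrade(img, blur_passes, noise_sigma, seed=0):
    # Apply known, repeatable degradation to a test-target image:
    # sharpness loss as repeated 3-tap box blurs along each row,
    # then additive Gaussian noise with a fixed seed.
    rng = random.Random(seed)
    out = [row[:] for row in img]
    for _ in range(blur_passes):
        for row in out:
            prev = row[:]
            for x in range(1, len(row) - 1):
                row[x] = (prev[x - 1] + prev[x] + prev[x + 1]) / 3
    return [[v + rng.gauss(0, noise_sigma) for v in row] for row in out]

bar = [[0, 0, 90, 0, 0]] * 3  # a thin bright line, like a resolution bar
soft = degrade(bar, blur_passes=1, noise_sigma=0.0)
print(soft[0])  # the line spreads: [0.0, 30.0, 30.0, 30.0, 0.0]
```

Because the degradation parameters are known exactly, viewer scores on such images can be regressed against ground truth to quantify a target's sensitivity.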
In the near future, we will see the introduction of full-field digital mammography systems replacing conventional film-screen mammography. For image display and diagnosis, these digital mammography systems are likely to interface with high-resolution laser imagers, which can produce high-quality hardcopy film output. We have developed a high-resolution imager based on photothermographic dry media. Input from both modality manufacturers and radiologists determined the design characteristics of the imager. General features of the imager, specific features pertaining to current digital mammography modalities, and user needs are presented. Additionally, we present image quality results such as contrast transfer function, grayscale reproduction, noise in the printed dry media, and image quality control in the imager. Suggestions for quality control of the modality and the imager are described.
When medical images are presented on a general-purpose workstation or PC, the display applied is most commonly a color monitor, although the majority of the displayed images are monochrome. In the present paper, measurement and modeling procedures for the characterization of color monitors for monochrome applications are described. The luminance and spatial resolution behavior for gray-scale presentation is assessed by weighting the contributions of the three color channels (red, green, and blue) according to the spectral sensitivity of the human eye. A series of color CRT and active-matrix LCD monitors with an image matrix of 1280 × 1024 and screen diagonals of 18-20 inches is analyzed. Compared with monochrome CRTs, the color CRT monitors generally have much lower luminance and a reduced intra-scene dynamic range. Although the image quality of LCD monitors has improved significantly in recent years, they still have problems in gray-scale rendition and viewing angle. Owing to the limited luminance and dynamic range of color display systems, the calibration and control of their luminance curve is a very important task if they are to be widely used for medical reviewing.
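The channel weighting described above can be illustrated with standard luma coefficients. The Rec. 709 weights below are a common assumption for the eye's relative sensitivity to the three phosphor primaries, not measured values from this paper.

```python
# Grayscale on a color monitor: all three channels contribute to perceived
# luminance, weighted by the eye's spectral sensitivity. Rec. 709 luma
# weights are used here as an illustrative assumption.
WEIGHTS = {"r": 0.2126, "g": 0.7152, "b": 0.0722}

def luminance(r, g, b):
    # Relative luminance of a (linear-light) RGB triple.
    return WEIGHTS["r"] * r + WEIGHTS["g"] * g + WEIGHTS["b"] * b

# Equal drive on all channels reproduces a neutral gray of the same level:
print(round(luminance(0.5, 0.5, 0.5), 6))   # 0.5
# The green channel dominates: green-only drive keeps ~72% of the luminance.
print(round(luminance(0.0, 1.0, 0.0), 4))   # 0.7152
```

This is why a miscalibrated green channel degrades monochrome rendition far more than an equal error in blue.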
The purpose of this paper is to explain and discuss the technological realizations in direct thermal printing that meet the requirements for performing medical diagnosis on film.
Urologists routinely use the systematic sextant needle biopsy technique to detect prostate cancer. However, recent evidence suggests that this technique has a significant sampling error. We have developed a novel 3D computer-assisted prostate biopsy simulator, based upon 201 whole-mounted step-sectioned radical prostatectomy specimens, to compare the diagnostic accuracy of various prostate needle biopsy protocols. Computerized prostate models were developed to accurately depict the anatomy of the prostate and all individual tumor foci. We obtained 18 biopsies from each prostate model to determine the detection rates of the various biopsy protocols. The 10- and 12-pattern biopsy protocols had a 99.0 percent detection rate, while the traditional sextant biopsy protocol's rate was only 72.6 percent. The 5-region biopsy protocol had a 90.5 percent detection rate, the lateral sextant pattern a rate of 95.5 percent, and the 4-pattern lateral biopsy protocol a 93.5 percent detection rate. Our results suggest that all biopsy protocols that use laterally placed biopsies based upon the five-region anatomical model are superior to the routinely used sextant prostate biopsy pattern. Lateral biopsies in the mid and apical zones of the gland are the most important.
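The protocol-comparison idea can be sketched with a toy Monte Carlo model. Everything below is an illustrative assumption, far simpler than the 201-specimen simulator: each "prostate" is a 10x10 grid with one randomly placed 2x2 tumor focus, and a protocol detects the case if any of its biopsy sites lands inside the focus.

```python
import random

def detection_rate(protocol_sites, n_cases=2000, seed=1):
    # Toy stand-in for the simulator: place a 2x2 tumor focus at a random
    # position in a 10x10 grid and check whether any biopsy site hits it.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_cases):
        ty, tx = rng.randrange(9), rng.randrange(9)  # top-left of the focus
        tumor = {(ty + dy, tx + dx) for dy in (0, 1) for dx in (0, 1)}
        if any(site in tumor for site in protocol_sites):
            hits += 1
    return hits / n_cases

# Hypothetical site layouts: a 6-core pattern vs. a denser 12-core pattern.
sextant = [(2, 3), (5, 3), (8, 3), (2, 6), (5, 6), (8, 6)]
dense12 = [(y, x) for y in (1, 3, 5, 7) for x in (2, 5, 8)]
r6, r12 = detection_rate(sextant), detection_rate(dense12)
print(r6 < r12)  # the denser pattern detects more foci, as in the study
```

The real simulator replaces the random 2x2 focus with the actual tumor geometry of each mounted specimen, which is what makes its detection rates clinically meaningful.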
Realistic visualization is one of the key components of effective computer-assisted surgery simulation. Objects in the simulated environment should always present a correct visual appearance: in addition to properly changing the model geometry or topology in response to externally applied forces, such as cutting and dragging actions, the correct object appearance must be displayed after the geometry or topology is modified. Geometric surface representations of objects provide a direct and intuitive form for use in simulation systems, and they are relatively easy to implement. Their disadvantage is that the interior content of the object is missing: when a cut alters the object topology involving this interior, the correct appearance cannot be provided. Volumetric data representations retain objects in volume element format and have the advantage of preserving volume content; a potential disadvantage of manipulating such volumes is reduced speed and flexibility. This paper describes an approach that combines geometric and volumetric representations to provide a real-time, realistic, interactive surgery simulation system. The system uses the geometric representation for efficiency and the volume content for appearance. Physics-based object deformation and 3D texture mapping provide an effective means of interactive volume visualization for realistic, data-specific surgery simulation.
Most fractures of the long bones are displaced and need to be surgically reduced. External fixation is often used, but the crucial point of this technique is the control of the reduction, which is performed with a brilliance amplifier (x-ray image intensifier). This system, which instantly provides an x-ray image, has many disadvantages: it exposes the patient and the surgical team to frequent irradiation, the visual field is limited, the supplied images are distorted, and it gives only 2D information. Consequently, the reduction is occasionally imperfect although it appears acceptable intraoperatively. Using the pins inserted in each fragment as markers and an optical tracker, it is possible to build a virtual 3D model of each principal fragment and to follow its movement during the reduction. This system supplies a 3D image of the fracture in real time and without irradiation. The brilliance amplifier could then be replaced by such a virtual reality system, providing the surgeon with an accurate tool that facilitates the reduction of the fracture. The purpose of this work is to show how to build the 3D model for each principal bone fragment.
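Following a fragment via its pin markers reduces to estimating a rigid motion from matched point sets. A generic sketch of the standard least-squares solution (Kabsch/Procrustes), not the authors' specific implementation:

```python
import numpy as np

def rigid_fit(src, dst):
    # Least-squares rigid motion: find rotation R and translation t with
    # dst ~ R @ src + t from matched marker positions (Kabsch algorithm).
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of markers
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    return R, cd - R @ cs

# Pins on a fragment, then the same pins after a known 90-degree turn
# about z plus a 5 mm shift -- as an optical tracker might report them.
pins = [(1, 0, 0), (0, 2, 0), (0, 0, 3), (1, 1, 1)]
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
moved = [Rz @ p + np.array([5, 0, 0]) for p in pins]
R, t = rigid_fit(pins, moved)
print(np.allclose(R, Rz), np.allclose(t, [5, 0, 0]))  # True True
```

Applying the recovered (R, t) to the fragment's 3D model keeps the virtual fracture view synchronized with the tracked pins, with no further irradiation.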
Our main research goal is the realization of a completely computer-based maxillofacial surgery planning system. An important step toward this goal is the availability of virtual tools with which the surgeon can interactively define bone segments from skull and jaw bones. The easy-to-handle user interface employs visual and force-feedback devices to define subvolumes of a patient's volume dataset. The defined subvolumes, together with their spatial arrangements, lead to an operation plan. We have evaluated modern low-cost force-feedback devices with regard to their ability to emulate the surgeon's working procedure.
For interactive diagnostics or therapy planning on tomographic images, hardware-, software-, and concept-specific dependencies should be avoided to reduce development costs and to increase portability to other platforms. A set of object-oriented class libraries was designed that works on several operating systems and respects current programming standards. The conceptual link between pixels, slice orientation, and geometry is encapsulated in a topology class, which manages the spatial image orientation and handles both regularly and irregularly arranged image volumes. An image analysis library includes numerous operators applicable to single-channel and multi-channel images. A 3D modeling and visualization library allows anatomical structures to be reconstructed, interactively measured, and virtually cut. Thanks to the topology management, a Cartesian re-arrangement of irregularly oriented images, with its increased storage demands, can be avoided. Because the libraries are implemented using C++ and Open Inventor, applications can be prototyped easily, and identical visualization and planning features can be used independent of the operating system. Through the use of platform-independent concepts, programming languages, and standards, the program libraries can be used in multiple environments, and application development time was drastically reduced.
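The kind of mapping such a topology class manages can be sketched as follows. Keeping a per-slice origin lets irregularly spaced slices be addressed directly in patient coordinates, with no Cartesian resampling. The function and its conventions are our illustration, not the library's API.

```python
import numpy as np

def voxel_to_patient(slice_origins, row_dir, col_dir, pixel_spacing, i, j, k):
    # Map (row i, column j, slice k) to 3D patient coordinates.
    # row_dir / col_dir: patient-space directions of increasing row and
    # column index (our convention); pixel_spacing: (row, column) in mm.
    # Each slice carries its own origin, so uneven slice gaps are exact.
    o = np.asarray(slice_origins[k], float)
    return (o + i * pixel_spacing[0] * np.asarray(row_dir, float)
              + j * pixel_spacing[1] * np.asarray(col_dir, float))

# Axial slices with uneven z-gaps (2 mm, then 5 mm):
origins = [(0, 0, 0), (0, 0, 2), (0, 0, 7)]
p = voxel_to_patient(origins, (1, 0, 0), (0, 1, 0), (0.5, 0.5),
                     i=4, j=10, k=2)
print(p)  # [2. 5. 7.]
```

Measurements and cut planes computed in patient coordinates then stay correct regardless of how the slices were acquired or ordered.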
High-performance image processing is an essential component of diagnostic-quality medical imaging workstations. The technology challenge is to provide acceptable performance for high-resolution imaging at a reasonable cost. Commercial off-the-shelf personal computers can now compete with solutions traditionally based on expensive workstations or specialized DSP equipment. This paper describes the development of a new library for medical imaging applications that makes use of the extended single-instruction, multiple-data (SIMD) instruction set in the Intel Pentium MMX architecture. Image processing is an ideal application for parallel computing. Typically, multi-processor solutions use a large-grain approach to parallelism: the images are divided into large sections that are processed simultaneously, and additional processing is often required to solve boundary problems between adjacent parts of the data. SIMD is another form of processing, which applies parallelism at the pixel level. This method is suitable for imaging operations in which there is no dependency on the result of previous operations. The use of SIMD algorithms provides multiprocessing without the overheads of synchronization and control normally associated with parallel computing.
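Pixel-level data parallelism can be illustrated with a window/level operation, a staple of medical display pipelines. NumPy's whole-array expressions stand in here for MMX-style packed instructions; the point is that each pixel's result is independent of its neighbours, which is exactly what makes the operation SIMD-friendly.

```python
import numpy as np

def window_level_scalar(img, window, level):
    # Pixel-at-a-time reference version of a window/level operation.
    lo, hi = level - window / 2, level + window / 2
    out = np.empty_like(img, dtype=np.uint8)
    flat_in, flat_out = img.ravel(), out.ravel()
    for n in range(flat_in.size):
        v = (flat_in[n] - lo) / (hi - lo)
        flat_out[n] = int(min(max(v, 0.0), 1.0) * 255)
    return out

def window_level_simd_style(img, window, level):
    # The same operation as one whole-array expression: no pixel depends
    # on another, so the work maps directly onto packed SIMD operations.
    lo, hi = level - window / 2, level + window / 2
    v = (img - lo) / (hi - lo)
    return (np.clip(v, 0.0, 1.0) * 255).astype(np.uint8)

img = np.array([[0, 1000, 2000], [3000, 4000, 5000]], dtype=float)
a = window_level_scalar(img, window=2000, level=2000)
b = window_level_simd_style(img, window=2000, level=2000)
print(np.array_equal(a, b))  # True: identical results, no per-pixel loop
```

On MMX-class hardware the vectorized form also avoids the synchronization overhead that a large-grain multi-processor split would incur at section boundaries.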
We present a new formalism for the treatment and understanding of multispectral images and multisensor fusion based on first-order contrast information. Although little attention has been paid to the utility of multispectral contrast, we develop a theory of multispectral contrast that enables us to produce an optimal grayscale visualization of the first-order contrast of an image with an arbitrary number of bands. In particular, we consider registered visualizations of multi-modal medical images. We demonstrate how our methodology can reveal significantly more interpretive information to a radiologist or image analyst, who can use it in a number of image understanding algorithms. Existing grayscale visualization strategies are reviewed, and we discuss why our algorithm performs better. A variety of experimental results from medical imaging and remotely sensed data are presented.
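A common formulation of first-order multispectral contrast, in the spirit of Di Zenzo's structure tensor, sums each band's outer product of gradients and takes the largest eigenvalue per pixel as the combined edge strength. The sketch below uses that formulation as an illustration; the paper's exact operator may differ.

```python
import numpy as np

def multiband_contrast(bands):
    # First-order multispectral contrast: accumulate the 2x2 structure
    # tensor over all bands, then return its largest eigenvalue per pixel
    # (closed form for a symmetric 2x2 matrix).
    J11 = J12 = J22 = 0.0
    for b in bands:
        gy, gx = np.gradient(b.astype(float))
        J11 = J11 + gx * gx
        J12 = J12 + gx * gy
        J22 = J22 + gy * gy
    tr, det = J11 + J22, J11 * J22 - J12 * J12
    return (tr + np.sqrt(np.maximum(tr * tr - 4 * det, 0))) / 2

# Two bands whose edges coincide reinforce each other:
b1 = np.array([[0, 0, 1, 1]] * 4, float)
b2 = np.array([[0, 0, 2, 2]] * 4, float)
c_both = multiband_contrast([b1, b2])
c_one = multiband_contrast([b1])
print(c_both.max() > c_one.max())  # True: combined contrast is stronger
```

Unlike averaging the bands before differentiation, this tensor sum also preserves contrast when two bands have opposing gradients that would cancel in a plain average.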
DICOM is widely accepted today as the standard protocol for medical image interchange. DICOM offers very flexible data structures, making it easy to encode a wide variety of modality-specific image formats. This flexibility, however, causes problems in image display: most DICOM viewers are limited to the 'most usual' subset of the standard's specification, which makes up the vast majority of images but covers only a very small part of DICOM's capabilities. Our approach was to design a toolkit that attempts to overcome the common restrictions of existing DICOM implementations. Apart from completeness, i.e. support for all image formats defined in DICOM, the main objectives were efficiency, extensibility, and portability. The resulting toolkit is built on a well-designed C++ class hierarchy that makes extensive use of template classes and inline methods for best performance. Two applications based on this toolkit have been developed: a conversion and manipulation tool and a small DICOM viewer. In addition to the freely available DICOM test images distributed by vendors of medical equipment, we used artificially created images covering all aspects and options of the DICOM image models for testing. Our implementation demonstrates that it is both possible and practicable to support the entire DICOM image format without sacrificing efficiency. Nevertheless, the question remains whether DICOM's present complexity is really necessary. The introduction of useful restrictions would make it much easier for DICOM implementers to support the standard's image format in its entirety and would thus increase the software's robustness.
This paper examines the presentation of MRI on a computer screen. In order to understand the issues involved in the diagnostic-viewing task performed by the radiologist, field observations were made in the traditional light-screen environment. Requirement issues uncovered included: user control over the grouping, size, and position of images; navigation of images and image groups; and provision of both presentation detail and presentation context. Existing presentation techniques and variations were explored in order to obtain an initial design direction to address these issues.
One of the advantages of digital mammography is the ability to acquire a mammographic image with a larger contrast range. With this advantage comes the tradeoff of how to display this larger contrast range: laser-printed film and video displays both have smaller dynamic ranges than standard mammography film-screen systems. This work examines performance and preference studies for display processing methods for digital mammograms.
We present a wavelet-based video codec consisting of a 3D wavelet transformer, a uniform quantizer/dequantizer and an arithmetic encoder/decoder. The wavelet transformer uses biorthogonal Antonini wavelets in the two spatial dimensions and Haar wavelets in the time dimension. Multiple levels of decomposition are supported. The codec has been applied to pre-scan-converted ultrasound image data and does not produce the type of blocking artifacts that occur in MPEG-compressed video. The PSNR at a given compression rate increases with the number of levels of decomposition: for our data at 50:1 compression, the PSNR increases from 18.4 dB at one level to 24.0 dB at four levels of decomposition. Our 3D wavelet-based video codec provides the high compression rates required to transmit diagnostic ultrasound video over existing low-bandwidth links without introducing the blocking artifacts which have been demonstrated to diminish clinical utility.
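The temporal half of the transform described above can be sketched in a few lines. This is a minimal illustration of one Haar decomposition level along the time axis of a video volume, assuming an even number of frames; the spatial Antonini filtering, the quantizer and the arithmetic coder are omitted.

```python
import numpy as np

def haar_time_transform(video):
    """One decomposition level of the Haar wavelet along the time axis
    of a (frames, rows, cols) volume: orthonormal sums and differences
    of adjacent frame pairs."""
    even = video[0::2].astype(float)
    odd = video[1::2].astype(float)
    approx = (even + odd) / np.sqrt(2.0)  # temporal low-pass band
    detail = (even - odd) / np.sqrt(2.0)  # temporal high-pass band
    return approx, detail

def haar_time_inverse(approx, detail):
    """Exact inverse of haar_time_transform."""
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    video = np.empty((even.shape[0] * 2,) + even.shape[1:])
    video[0::2] = even
    video[1::2] = odd
    return video
```

Because adjacent ultrasound frames are highly correlated, most energy concentrates in the low-pass band, which is what makes the temporal decomposition pay off at high compression ratios.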
This paper proposes an image compression algorithm that can improve compression efficiency for digital projection radiographs over current lossless JPEG by utilizing a quantization companding function and a new lossless image compression standard called JPEG-LS. The companding and compression processes can also be augmented by a preprocessing step that first segments the foreground portions of the image and then substitutes the foreground pixel values with a uniform code value. The quantization companding function approach is based on a theory that relates the onset of distortion to changes in the second-order statistics of an image. By choosing an appropriate companding function, the properties of the second-order statistics can be retained to within an insignificant error, and the companded image can then be losslessly compressed using JPEG-LS; we call the reconstructed image statistically lossless. The approach offers a theoretical basis supporting the integrity of the compressed-reconstructed data relative to the original image, while providing a modest level of compression efficiency. This intermediate level of compression could help to increase the comfort level of radiologists who do not currently utilize lossy compression, and may also have benefits from a medico-legal perspective.
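As a rough sketch of the companding idea, the hypothetical square-root curve below maps a 12-bit radiograph into fewer code values before lossless coding and expands it back on reconstruction. The paper's actual function is derived from second-order image statistics rather than this stand-in, and JPEG-LS itself is not invoked here.

```python
import numpy as np

def compand(pixels, in_max=4095, out_max=255):
    """Square-root companding: allocates finer quantization steps to
    low signal levels, coarser steps to high levels. Illustrative
    stand-in for the paper's statistically derived curve."""
    x = np.asarray(pixels, dtype=float)
    return np.round(out_max * np.sqrt(x / in_max)).astype(np.uint16)

def expand(codes, in_max=4095, out_max=255):
    """Inverse mapping applied after lossless decompression."""
    y = np.asarray(codes, dtype=float) / out_max
    return np.round(in_max * y * y).astype(np.uint16)
```

The round trip is not bit-exact, but the residual error stays within the local quantization step, which is the sense in which the reconstructed image can remain statistically indistinguishable from the original.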
Endoscopic images play an important role in describing many gastrointestinal (GI) disorders. The field of radiology has been on the leading edge of creating, archiving and transmitting digital images. With the advent of digital videoendoscopy, endoscopists now have the ability to generate images for storage and transmission. X-rays can be compressed 30-40X without appreciable decline in quality. We reported results of a pilot study using JPEG compression of 24-bit color endoscopic images. For that study, the results indicated that adequate compression ratios vary according to the lesion and that images could be compressed to between 31- and 99-fold smaller than the original size without an appreciable decline in quality. The purpose of this study was to expand upon the methodology of the previous study with an eye towards application on the WWW, a medium which would serve both the clinical and educational purposes of color medical images. The results indicate that endoscopists are able to tolerate very significant compression of endoscopic images without loss of clinical image quality. This finding suggests that even 1 MB color images can be compressed to well under 30 KB, which is considered a maximal tolerable image size for downloading on the WWW.
We propose a lossless volumetric data compression method using a procedure for decomposition based upon region growing (DBR) in 3D space, combined with conventional lossless data compression techniques. We provide a comparative analysis of lossless volumetric data compression using the 3D DBR and 2D DBR methods with commonly used data compression tools such as gzip and compress, as well as the more recent tool bzip2. The best results were obtained with DBR combined with bzip2, or with bzip2 on its own. The minimum entropy obtained after applying DBR is much smaller than the best compression results, showing that better compression techniques can be explored to obtain higher compression.
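The entropy bound mentioned above can be illustrated with a first-order entropy estimate over a byte stream; this is a generic sketch of the measure the compressed sizes are compared against, not the DBR decomposition itself.

```python
import numpy as np

def first_order_entropy(data):
    """First-order entropy in bits per byte of a byte sequence -- a
    lower bound on what a memoryless symbol coder can achieve, and the
    kind of figure compressed sizes are measured against."""
    counts = np.bincount(np.frombuffer(bytes(data), dtype=np.uint8),
                         minlength=256)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

A gap between this bound and the size achieved by gzip or bzip2 is what signals, as the abstract notes, that better compression techniques remain to be explored.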
This paper investigates the effects of lossless and lossy wavelet image compression methods on digital angiogram images. The requirements of suitable quality assessment methods for decompressed images are considered, and the need for diagnostically relevant quality metrics is discussed. The results of experimental quality investigations, including expert cardiologists' identification of diagnostically acceptable quality boundaries, are presented and discussed.
This study was designed to determine the degree and methods of digital image compression needed to produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed using five different methods. Image quality was then assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression than after JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent of their original size for JPEG and 1.7 percent for wavelet compression before fine detail was lost or image quality became too poor to make a reliable diagnosis.
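The objective measure in (i) is straightforward to state; a minimal sketch:

```python
import numpy as np

def rms_error(original, reconstructed):
    """Root-mean-square pixel difference between the uncompressed image
    and its compressed-then-decompressed counterpart."""
    a = np.asarray(original, dtype=float)
    b = np.asarray(reconstructed, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))
```

Plotting this value against compressed file size is how the wavelet and JPEG codecs are compared objectively above.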
When radiographic images are displayed using cathode-ray tubes (CRTs), the perception of low-contrast, clinically relevant details can be hindered by veiling glare. To reduce the effect of glare, high-performance CRTs typically have an absorptive faceplate. Other approaches that have been implemented in color tubes include filtered and pigmented phosphor grains, and the use of a black absorptive matrix between phosphor dots. In this work, we present results of experimental measurements and computational predictions of the veiling glare for a medical imaging CRT of recent design. Experimental measurements were performed using a collimated conic probe with a high-gain detector, and test patterns having a dark circular spot of varying diameter in a bright circular field. Computational and experimental results were obtained before and after application of an antireflective (AR) coating to the surface of the monitor. The glare ratio for a 1 cm diameter dark spot was measured to be 138 without the AR coating and 241 with the coating. The results reveal a bright ring in the PSF at a distance of about 45 mm. The predictions from the computational model agree well with the measured ring response functions for radii less than 50 mm. We speculate that the veiling glare response at large radii is greatly influenced by electron backscattering processes. While primarily designed to decrease specular reflections, the AR structure also affects the glare characteristics of an emissive display by increasing the probability that a light photon will exit the structure through the outer surface.
The trade-off between contrast and latitude is a well-known and long-standing image quality limitation in projection radiography. With the introduction of electronic acquisition, it is possible to capture a wide range of x-ray exposures in a single image. However, the problem of rendering and displaying images such that information outside the normally expected latitude is visualized, while maintaining adequate contrast for diagnostic details, has remained. The enhanced visualization processing (EVP) algorithm described in this paper addresses this problem. A clinical study is reported utilizing 70 images, including 5 images of each of 14 examination types selected from a large database of diagnostic computed radiography images. For each image, a control rendering was produced by means of a state-of-the-art automated tone scaling algorithm. A test rendering was produced by applying EVP to the control image. Ten radiologists independently rated the 140 resulting images on a 9-point diagnostic quality scale. EVP provided increased exposure latitude without objectionable loss of detail contrast. In many images, EVP substantially reduced the loss of information in normally under-penetrated areas, while reducing the need for 'bright lighting' in over-penetrated areas. The diagnostic quality ratings of both EVP and control images were, on average, high. However, the median rating for EVP images was one full rating category higher than that of the control images. Viewing the images as pairs, 76 percent of the EVP images were rated 1 or more categories superior to the corresponding control image, while only 6 percent of the control images were rated as superior to the corresponding EVP image. Similar results were obtained across all 14 examination types studied.
Display systems relying on computer graphics techniques usually create a 2.5D image display on a 2D screen. To obtain 3D image display, most systems use auxiliary devices or viewing tricks, such as polarized glasses, virtual reality helmets, detection of the observer's location, divergent viewing, etc. We call these systems stereoscopic. A system that can display 3D images in a natural way is called a self-stereoscopic system. Stereoscopic systems do not provide horizontal parallax such as is seen in holograms, which display continuum parallax. In this paper, we introduce a new technique based on shell rendering that discretizes horizontal parallax by coding several views of the object, forming a holographic-like stereogram, and a new self-stereoscopic 3D display system to visualize holographic stereograms on a holographic screen. We also demonstrate the new system using medical image data.
Wavelet compression has been shown to give exceptional subjective image quality at high compression ratios for medical imaging. In an effort to effect real-time wavelet compression of digitized ultrasound video for low-bandwidth networks, Fourier-domain subsampling may offer reduced computational overhead compared to convolution methods. The anticipated benefit depends on the size of the mother wavelet used, the data dimensions along each axis, and the available Fourier processing power. The process of wavelet compression is computationally expensive, requiring multiple convolutions with similar mother wavelets at different resolutions. In contrast, Fourier-domain subsampling exploits the fact that if an image is downsampled by a factor of two, the spatial frequencies of the image all increase by a factor of two. This allows the use of only one forward FFT on the data at run time and only one inverse FFT at the time of each filter application, significantly reducing the computational load. A wavelet transform in the third dimension takes advantage of the high correlation between adjacent frames in ultrasound video. Our presentation will demonstrate a comparison of benchmarks for both wavelet transform methods and analyze the advantage with respect to mother wavelet size.
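The single-forward-FFT idea can be sketched with the convolution theorem: once the data's spectrum is computed, each filter in the bank costs only a pointwise product and one inverse FFT. This is a generic illustration, not the subsampled wavelet filter bank itself; note that the FFT product computes circular convolution, so the data is assumed zero-padded by at least the kernel length.

```python
import numpy as np

def fft_filter(data_fft, kernel, n):
    """Apply one filter in the Fourier domain. The forward FFT of the
    data (data_fft) is computed once and reused for every filter; each
    application is a pointwise product plus one inverse FFT."""
    return np.real(np.fft.ifft(data_fft * np.fft.fft(kernel, n)))
```

For a wavelet bank, the factor-of-two frequency-scaling property quoted above means each coarser-scale filter spectrum can be derived from the previous one rather than recomputed from a dilated kernel.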
Antiscatter grids are commonly used in projection radiography to reduce scattered x-rays and improve image contrast and signal-to-noise ratio. In digital radiography, because of spatial sampling, stationary grids usually cause aliasing and sometimes result in moire patterns. We investigated the impact of stationary grids in computed radiography (CR) for images viewed both on film and on soft-copy display devices. First, the relationship of various grid factors to the problem of aliasing and moire artifacts is presented, along with recommendations for grid usage with CR systems. Next, because ultimately one would like the freedom to use any grid configuration with a digital imaging system, we present an automated image processing method to detect and adaptively suppress the grid pattern to reduce aliasing artifacts.
In the real world, a doctor can use a knife to cut along any path on the body, unveil the skin and investigate the internals. It would be ideal if the same thing could be done on volume data with a virtual knife. With this metaphor in mind, we developed a freehand volume cutting tool that allows the doctor to cut the volume freehand. The cutting path on a volumetric data surface is created with the help of Intelligent Scissors, an interactive technique for 2D image segmentation. Our proposed segmentation tool for volume data tends to place the curve/path along feature lines/curves, hence freeing the doctor from fine-tuning the cutting path. Once a closed cutting path is established on the extracted 3D surface, the volume is cut into sub-volumes along the cutting path. Since the internal cutting surface can be any arbitrary surface, we use a cost minimization technique to make the surface as smooth as possible. Once the volume is partitioned, we can display the cut volumes using a 3D-texture-based volume rendering algorithm.
This paper describes the characterization of a unique thin-film ultrasound phantom. The phantom consists of a film with controllable acoustic properties immersed in an ultrasonically transparent material. The placement of scattering sites on the film creates an image when scanned with a clinical instrument.
In medical imaging, many methods of irreversible compression have been studied. However, none of them has been adopted as a standard, probably because of the lack of agreement regarding what is meant by 'acceptable' for an irreversible compression scheme. Among the numerous evaluation studies which have been performed, only a few involved a psychovisual approach, and it is unsafe to draw definite conclusions from the reported results. The purpose of this work was to determine the importance of intra- and inter-observer variability in the psychovisual evaluation of irreversible compression methods, for a very specific evaluation protocol involving digitized images of breast parenchyma and images simulating breast parenchyma using a fractal model. The compression methods to be evaluated were JPEG and Embedded Zerotree Wavelet. Six radiologists had to determine whether each image had been compressed and decompressed. ROC analysis was performed to characterize performance in compression detection. The large intra- and inter-observer variability observed in the detection of compression underlines the difficulty of determining an 'acceptable' scheme of irreversible compression in medical imaging in general, and brings the approval of irreversible compression schemes into question.
A convenient software tool for performing kinetic modeling in quantitative PET studies has long been desired, as a number of mathematical modeling and parameter estimation methodologies have emerged. In addition, the WWW has become a new computing platform for many kinds of applications. In this work, we have developed a Web-based tool for estimating physiological parameters, e.g., glucose metabolism, in dynamic PET studies using tracer kinetic modeling methodology. The tool was developed using Java technology and tested in the major Web browsers. A set of patient data was used to evaluate the numerical accuracy of the tool.
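As a sketch of the kind of tracer kinetic model such a tool evaluates, the code below computes the tissue time-activity curve of a generic one-tissue compartment model by discrete convolution. This is a simplified illustration: the FDG glucose-metabolism model used in practice adds a second tissue compartment, the rate constants K1 and k2 here carry only their conventional meaning, and Python stands in for the paper's Java implementation.

```python
import numpy as np

def one_tissue_model(t, cp, K1, k2):
    """Tissue time-activity curve C(t) = K1 * exp(-k2 t) convolved with
    the plasma input Cp(t), evaluated by discrete convolution on a
    uniform time grid t (same length as cp)."""
    dt = t[1] - t[0]
    return K1 * np.convolve(cp, np.exp(-k2 * t))[: len(t)] * dt
```

Parameter estimation then amounts to fitting K1 and k2 so that this model curve matches the measured PET time-activity data.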
Visualization of 3D data from ultrasound images is a challenging task due to the noisy and fuzzy nature of ultrasound images and the large amounts of computation time required. This paper presents an efficient volume rendering technique for visualization of 3D ultrasound images using noise-reduction filtering and extraction of the boundary surface with good image quality. A truncated-median filter on 2D ultrasound images is proposed for reducing speckle noise within acceptable processing time. In order to adapt to the fuzzy nature of the boundary surface in ultrasound images, an adaptive thresholding is also proposed. The threshold decision is based on the idea that an effective boundary surface is estimated by the gray level above an adequate noise threshold and the width along the pixel ray. The proposed rendering method was simulated with a 3D fetus ultrasound image of 148 × 194 × 112 voxels. Several preprocessing methods were tested and compared with respect to computation time and subjective image quality. According to this comparison study, the proposed volume rendering method shows good performance for volume visualization of 3D ultrasound images.
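A simplified, single-pass sketch of a truncated-median filter on one pixel neighbourhood is shown below. The intent of this family of filters is to approximate the local mode, which is more robust to speckle than the plain median; the exact truncation rule and iteration count used by the paper may differ.

```python
import numpy as np

def truncated_median(window):
    """One truncation pass: when the local distribution is skewed,
    discard the longer tail symmetrically about the median, then take
    the median of what remains, nudging the estimate toward the mode."""
    w = np.sort(np.asarray(window, dtype=float).ravel())
    med = np.median(w)
    if w.mean() > med:                  # skewed toward high values
        w = w[w <= 2 * med - w[0]]      # drop the long upper tail
    elif w.mean() < med:                # skewed toward low values
        w = w[w >= 2 * med - w[-1]]     # drop the long lower tail
    return float(np.median(w))
```

Applied over a sliding 2D window, the filter suppresses bright speckle outliers while preserving genuine boundary gray levels better than averaging would.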
After the introduction of digital imaging devices in medicine, computerized tissue recognition and classification have become important in research and clinical applications. Segmented data can be applied in numerous research fields, including volumetric analysis of particular tissues and structures, construction of anatomical models, 3D visualization, and multimodal visualization, hence making segmentation essential in modern image analysis. In this research project, several PC-based software tools were developed to segment medical images, to visualize raw and segmented images in 3D, and to produce EEG brain maps in which MR images and EEG signals are integrated. The software package was tested and validated in numerous clinical research projects in a hospital environment.
The availability of contrast agents to enhance ultrasound imaging of blood flow has resulted in applications not only in echocardiography but also in peripheral organs such as the prostate. In particular, the association of tumor development with changes in blood flow patterns has the potential to improve differential diagnosis. Theoretical modeling of the behavior of contrast bubbles after injection can be used to increase understanding of the images of prostatic blood supply that result after enhancement. This knowledge might assist in the development of sophisticated software for objective interpretation of the improved blood flow display.
Despite the many advantages of MR images, they lack a standard image intensity scale. MR image intensity ranges and the meaning of intensity values vary even for the same protocol (P) and the same body region (D). This causes many difficulties in image display and analysis. We propose a two-step method for standardizing the intensity scale in such a way that, for the same P and D, similar intensities will have similar meanings. In the first step, the parameters of the standardizing transformation are 'learned' from an image set. In the second step, for each MR study, these parameters are used to map its histogram into the standardized histogram. The method was tested quantitatively on 90 whole-brain FSE T2, PD and T1 studies of MS patients and qualitatively on several other SE PD, T2 and SPGR studies of the brain and foot. Measurements using mean squared difference showed that the standardized image intensities have a statistically significantly more consistent range and meaning than the originals. Fixed windows can be established for standardized images and used for display without the need for per-case adjustment. Preliminary results also indicate that the method facilitates improving the degree of automation of image segmentation.
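The two-step scheme can be sketched as a piecewise-linear mapping between percentile landmarks. The landmark percentiles (1, 50, 99) below are illustrative assumptions, not necessarily the ones used in the paper's transformation.

```python
import numpy as np

PCTS = (1, 50, 99)  # hypothetical landmark percentiles

def learn_standard_scale(training_images):
    """Step 1: 'learn' a standard scale as the mean of the landmark
    percentiles over a training set of same-protocol studies."""
    lm = np.array([np.percentile(im, PCTS) for im in training_images])
    return lm.mean(axis=0)

def standardize(image, standard_scale):
    """Step 2: piecewise-linearly map this study's own landmark
    percentiles onto the learned standard scale."""
    own = np.percentile(image, PCTS)
    return np.interp(image, own, standard_scale)
```

Two studies of the same anatomy that differ only by an intensity shift and scale are mapped onto the same standardized range, which is what makes fixed display windows possible.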
Stereotactic systems are based on preoperative tomographic images and assist neurosurgeons in accurately guiding surgical tools into the brain. In addition, intraoperative ultrasound (US) images are used to monitor the brain in real time during the surgical procedure. The main disadvantage of stereotactic systems is that preoperative images can become outdated during the course of the surgery. The main disadvantage of intraoperative US is the low signal-to-noise ratio, which prevents the surgeon from appreciating the contents and orientation of the US images. A system that combines preoperative tomographic images with intraoperative US images could overcome the above-mentioned disadvantages. We have successfully developed and implemented a new PC-based system for interactive 3D registration of US and magnetic resonance (MR) images. Our software is written in Microsoft Visual C++ and runs on a Pentium II 450-MHz PC. We have performed an extensive analysis of the errors of our system with a custom-built phantom. The registration error between US and MR space was dependent on the depth of the target within the US image. For a 3.5-MHz phased 1D array transducer and a depth of 6 cm, the mean registration error was 2.00 mm and the standard deviation was 0.75 mm. The registered MR images were reconstructed using either zero-order or first-order interpolation. The interactive nature of our system demonstrates its potential to be used in the operating room.
When a volume structure is embedded in another volume structure, it is difficult to see the inner structure without destroying the view of the outer structure. We present a solution by letting the volume structures scatter light at different wavelengths. The outer volume structure absorbs only light scattered by the structure itself and does not absorb the light scattered by the inner structure, so the inner structure is visible without destroying the view of the outer structure. Examples are shown for vascular imaging, functional MRI, and CT imaging.