The knowledge of thresholding and gradients at different object interfaces is of paramount interest for image segmentation
and other imaging applications. Most thresholding and gradient optimization methods focus primarily on image histograms
and therefore fail to harness the information embedded in image intensity patterns. Here, we investigate the role of a
recently conceived object class uncertainty theory in image thresholding and gradient optimization. The notion of object
class uncertainty, a histogram-based feature, is formulated and a computational solution is presented. An energy function is
designed that captures spatio-temporal correlation between class uncertainty and image gradient which forms objects and
shapes. Optimum thresholds and gradients for different object interfaces are determined from the shape of this energy
function. The underlying theory behind the method is that objects manifest themselves with fuzzy boundaries in an
acquired image and, in a probabilistic sense, intensities with high class uncertainty are associated with high image
gradients generally appearing at object interfaces. The method has been applied to several medical as well as natural
images, and both thresholds and gradients have been determined successfully for different object interfaces, even when
some of the thresholds are almost impossible to locate in the respective histograms.
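To make the idea concrete, the following sketch (not from the paper) estimates a class-uncertainty curve from an image histogram: the histogram is split at a candidate threshold, a Gaussian is fitted to each class, and the entropy of the posterior class membership is computed for every gray level. Intensities near the class overlap, which the abstract associates with object interfaces, receive high uncertainty.

```python
import numpy as np

def class_uncertainty(hist, t):
    """Hypothetical illustration of a histogram-based class-uncertainty
    measure: split the histogram at candidate threshold t, fit a Gaussian
    to each side, and return the Shannon entropy of the posterior class
    membership at every gray level."""
    g = np.arange(hist.size, dtype=float)
    p = hist / hist.sum()
    w0, w1 = p[:t].sum(), p[t:].sum()                     # class priors
    m0 = (g[:t] * p[:t]).sum() / max(w0, 1e-12)           # class means
    m1 = (g[t:] * p[t:]).sum() / max(w1, 1e-12)
    v0 = ((g[:t] - m0) ** 2 * p[:t]).sum() / max(w0, 1e-12) + 1e-6
    v1 = ((g[t:] - m1) ** 2 * p[t:]).sum() / max(w1, 1e-12) + 1e-6
    lik0 = np.exp(-(g - m0) ** 2 / (2 * v0)) / np.sqrt(2 * np.pi * v0)
    lik1 = np.exp(-(g - m1) ** 2 / (2 * v1)) / np.sqrt(2 * np.pi * v1)
    post0 = w0 * lik0 / np.maximum(w0 * lik0 + w1 * lik1, 1e-12)
    eps = 1e-12
    return -(post0 * np.log2(post0 + eps)
             + (1 - post0) * np.log2(1 - post0 + eps))
```

Correlating such a curve with the image's gradient magnitudes, as the abstract describes, is what locates the optimum thresholds.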
The 3-D moment invariants are introduced to solve the problem of recognition of 3-D objects independent of size,
position, and orientation. In this paper, building on our previous systolic array for fast computation of 3-D moments, we
propose a new global systolic array for fast computation of 3-D moment invariants. The systolic array consists mostly
of adders, has area complexity O(n), and is highly regular and structurally very simple, resulting in simple hardware
implementation. The method is suitable for both binary and gray-level images, and also for image
sequence moments.
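For reference, the raw 3-D geometric moments from which the invariants are built are m_pqr = Σ x^p y^q z^r f(x,y,z). A plain software version for small orders is sketched below; the paper's systolic array evaluates the same sums with adder networks in hardware.

```python
import numpy as np

def moments_3d(vol, order=2):
    """Raw 3-D geometric moments m_pqr for all p+q+r <= order, valid for
    binary or gray-level volumes. Software reference only; the systolic
    array performs these accumulations in hardware."""
    x, y, z = np.indices(vol.shape, dtype=float)
    m = {}
    for p in range(order + 1):
        for q in range(order + 1 - p):
            for r in range(order + 1 - p - q):
                m[(p, q, r)] = float((x**p * y**q * z**r * vol).sum())
    return m
```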
Retrieval of images containing a given object from a large image database has attracted much attention from various
researchers. These images may have been taken from different viewpoints under various imaging parameters with
different backgrounds, and the target objects might be occluded. To solve this problem, a new method is proposed in this
paper. First, the preliminary components of our method are introduced. After features are extracted from the images, a
recursive self-organizing mapping (RSOM) tree is used to store the similarities between them, and class-specific
hyper-graphs (CSHGs) are used to index images belonging to the same objects. During retrieval, the RSOM tree is used to
determine how many features the query image has in common with the images in the database, and the response images
are ranked by these similarities. Using the response images from earlier queries as new query inputs, all responses can
be retrieved recursively with a generic query-expansion strategy. Subsequent re-queries are simplified because all
images are indexed by the CSHGs. The proposed method has innate concurrency, which makes it suitable for large-scale
applications such as Internet services.
This paper presents an improved Sparse A-Star Search (SAS) algorithm that provides a fast route planner for Unmanned
Aerial Vehicle (UAV) on-ship applications. Our approach can quickly produce 3-D trajectories composed of a set of
successive navigation points from known initial locations to predetermined target locations. The resulting routes not
only ensure collision avoidance with environmental obstacles but also satisfy specific route constraints and objectives.
The experimental results demonstrate the feasibility of the method, which makes our route planner more useful in real
systems.
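The planner builds on A* search. The sketch below shows only baseline A* on a 2-D occupancy grid; the SAS-specific sparse successor generation and the route constraints mentioned above (e.g., turn radius, leg length, altitude limits) are deliberately omitted.

```python
import heapq

def a_star(grid, start, goal):
    """Baseline A* on a 2-D occupancy grid (0 = free, 1 = obstacle),
    with a Manhattan-distance heuristic. Returns the path as a list of
    (row, col) cells, or None if no path exists."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]   # (f, g, node, parent)
    came, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came:                       # already expanded
            continue
        came[cur] = parent
        if cur == goal:                       # reconstruct the path
            path = [cur]
            while came[path[-1]] is not None:
                path.append(came[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < g_cost.get(nxt, float('inf'))):
                g_cost[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None
```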
Digital change detection (CD) is the computerized process of identifying changes in the state of an object, or other
earth-surface features, between different dates. In recent years, a large number of methods have been proposed for
change detection in multi-temporal remote sensing images. Among these, change vector analysis (CVA) is a very important
and widely used method. The key step in CVA is determining the change detection threshold, which strongly affects
change detection precision. Many techniques for determining this threshold have been proposed in the literature.
However, most of them are not robust and operational, since images are diverse and complex, especially very high
resolution (VHR) data (e.g., images acquired by the QuickBird, IKONOS, SPOT5, and WorldView satellites). The
discrimination is usually performed using empirical strategies or manual trial-and-error procedures, which affect both
the accuracy and the reliability of the change-detection process. In this paper, we analyze an algorithm based on
minimal classification error, an algorithm based on Otsu's method, and an algorithm based on EM. To handle the
complexity of VHR data, an improved EM-based algorithm is proposed. The difference image is assumed to follow a mixed
Gaussian distribution model (MGM). First, the gray-level histogram of the difference image is fitted to the MGM. Then
the change detection threshold is determined from the fitted MGM by combining the Bayesian criterion with the actual
situation. Experiments show that the semi-automatic method is effective and operational.
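A minimal version of the generic EM/Bayes thresholding step is sketched below: a two-component Gaussian mixture is fitted to the difference-image values, and the threshold is taken where the weighted class densities cross. The paper's refinements for VHR data are not reproduced.

```python
import numpy as np

def em_threshold(diff, iters=50):
    """Fit a two-component Gaussian mixture to the difference image with
    plain EM, then return the Bayes threshold where the weighted class
    densities cross (between the two means)."""
    x = diff.ravel().astype(float)
    w = np.array([0.5, 0.5])
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    var = np.array([x.var(), x.var()]) + 1e-6
    for _ in range(iters):
        # E-step: responsibilities of each component for each sample
        lik = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
              / np.sqrt(2 * np.pi * var)
        r = w * lik
        r /= np.maximum(r.sum(axis=1, keepdims=True), 1e-12)
        # M-step: re-estimate weights, means, variances
        nk = np.maximum(r.sum(axis=0), 1e-12)
        w = nk / x.size
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    # Bayes threshold: gray level between the means where the
    # weighted class densities are equal
    lo, hi = np.sort(mu)
    t = np.linspace(lo, hi, 1024)
    d0 = w[0] * np.exp(-(t - mu[0])**2 / (2*var[0])) / np.sqrt(2*np.pi*var[0])
    d1 = w[1] * np.exp(-(t - mu[1])**2 / (2*var[1])) / np.sqrt(2*np.pi*var[1])
    return t[np.argmin(np.abs(d0 - d1))]
```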
The selection of ground control points (GCPs) is an important step in remote sensing image geometric rectification. The
number, distribution, and accuracy of GCPs have a direct impact on the quality of geometric rectification. With the
development of remote sensing technology, GCP auto-matching algorithms can automatically provide many high-precision
GCPs. However, very few studies have addressed the distribution of GCPs, which is also important to geometric
rectification. GCPs should be evenly distributed over the whole image, but judgments of even distribution are highly
subjective. In this paper, a method based on cluster analysis is proposed to optimize the distribution of GCPs: a subset
of appropriate, evenly distributed, high-precision GCPs is filtered from a large number of candidates. Through the
concept of the monopolized circle, a uniform index is put forward to measure the uniformity of a GCP pattern
quantitatively. This paper also studies the relationship between the number and precision of GCPs. Experiments show
that the GCPs remaining after optimization are evenly distributed and achieve good results, while the efficiency and
accuracy of image geometric rectification are improved.
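The monopolized-circle index itself is specific to the paper. As a hypothetical stand-in, the snippet below scores uniformity by comparing the mean nearest-neighbour distance of the GCPs against the Clark-Evans expectation for a random pattern of the same density; ratios above 1 indicate a spread more even than random.

```python
import numpy as np

def uniformity_index(pts, width, height):
    """Hypothetical uniformity proxy (not the paper's monopolized-circle
    index): mean nearest-neighbour distance divided by the Clark-Evans
    expectation 0.5*sqrt(area/n) for a random point pattern."""
    pts = np.asarray(pts, float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # ignore self-distances
    mean_nn = d.min(axis=1).mean()
    expected = 0.5 * np.sqrt(width * height / len(pts))
    return mean_nn / expected
```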
As the demand for higher image quality and greater processing capability grows, obtaining higher data bandwidth for
on-chip processing is becoming an increasingly important issue. The DMA (Direct Memory Access) component, a key element
in a stream-processing SoC (System on Chip) [1], must be thoroughly researched and designed to satisfy the high
data-bandwidth requirements of the processing units. In this paper, we introduce a scalable, high-performance DMA
architecture for complex SoCs that satisfies rigorous sustained-bandwidth and versatile functionality requirements.
Several techniques and structures are proposed. An integrated verification environment is also built to fully verify
the design's functionality, and the performance improvement obtained with our architecture is analyzed. At the end of
the paper, post-simulation and tape-out results are provided; the implementation has been silicon-proven to be
functional and efficient.
Digital photogrammetric systems are moving toward distributed and parallel processing. Much of the research on
distributed and parallel digital photogrammetric systems has been carried out on high-performance computing systems
(e.g., blade servers), but little has been performed in the context of PC clusters. Following the principles of
distributed systems, a middleware-based distributed system for orthorectification of high-resolution satellite imagery
is proposed in the context of PC clusters. The paper emphasizes the description of the components in the system and
discusses the corresponding task scheduling strategies and the performance of each module. The feasibility of the
system is demonstrated in practice.
Super-resolution image restoration is known to be an ill-posed inverse problem of large scale. The regularization
parameter plays a crucial role in the quality of the restored image. Although generalized cross-validation is a popular
tool for computing a regularization parameter, it has rarely been applied to super-resolution image restoration until
recently. A major difficulty lies in the implementation of generalized cross-validation, which requires the costly
evaluation of the trace of an inverse matrix. In this paper, numerical approximation techniques are used to reduce the
computational complexity. We employ Gauss quadrature to compute the cross-validation function approximately, and the
evaluation of the trace of the inverse matrix is replaced by a stochastic trace estimate to alleviate the problem.
Further, the Lanczos algorithm and the Galerkin equation are used to evaluate the stochastic trace estimate. Our
results show that the method is effective and robust.
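The stochastic trace step is the standard Hutchinson estimator: for random sign vectors u, E[uᵀA⁻¹u] = trace(A⁻¹). A minimal sketch, assuming a user-supplied routine that returns A⁻¹u (for instance, a conjugate-gradient or Lanczos/Galerkin solver as in the paper):

```python
import numpy as np

def hutchinson_trace(solve, n, samples=20, seed=0):
    """Stochastic (Hutchinson) estimate of trace(A^{-1}). `solve(u)`
    must return A^{-1} u; Rademacher probe vectors make the estimator
    unbiased in expectation."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(samples):
        u = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        total += u @ solve(u)
    return total / samples
```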
Based on quantum-behaved particle swarm optimization (QPSO), a novel path planner for unmanned aerial vehicles (UAVs)
is employed to generate a safe and flyable path. Standard particle swarm optimization (PSO) and quantum-behaved
particle swarm optimization (QPSO) are presented and compared through a UAV path planning application. Every particle
in the swarm represents a potential path in the search space. To prune the search space, constraints are incorporated
into the pre-specified cost function, which is used to evaluate how good a particle is. As the system iterates, each
particle is pulled toward its local attractor, located between the personal best position (pbest) and the global best
position (gbest), based on the interaction of the particles' individual searches and the group's global search. For the
sake of simplicity, we only consider planning the projection of the path on a plane and assume that threats are static
rather than moving. Simulation results demonstrate the effectiveness and feasibility of the proposed approach.
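The particle update used in the QPSO literature can be sketched as follows; the encoding of a path as a vector of waypoint coordinates and the constraint-penalized cost function are assumed to be supplied by the application.

```python
import numpy as np

def qpso_step(x, pbest, gbest, beta, rng):
    """One QPSO position update in its standard form: each particle is
    pulled toward a local attractor between its pbest and the gbest,
    with a quantum jump scaled by the mean best position. Shapes: x and
    pbest are (n_particles, dim); gbest is (dim,). beta is the
    contraction-expansion coefficient (typically about 0.5 to 1.0)."""
    n, dim = x.shape
    phi = rng.random((n, dim))
    p = phi * pbest + (1 - phi) * gbest          # local attractors
    mbest = pbest.mean(axis=0)                   # mean best position
    u = rng.random((n, dim))
    sign = np.where(rng.random((n, dim)) < 0.5, -1.0, 1.0)
    return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
```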
The need for professional, fast batch processing of remote sensing images is becoming increasingly urgent. Units that
produce and apply remote sensing data require processing software that not only processes images through menu
operations but also lets users rapidly customize different processing flows for different needs. A processing flow must
be reusable, so that either an existing flow or a newly customized flow is available for professional processing, even
to non-professionals; it must also be extensible, so that newly added algorithm modules or a mature flow library can be
used directly for flow customization. However, these needs are not satisfied by existing remote sensing software.
Consequently, this paper puts forward a visual flow-customization approach and its implementation mechanism. The
approach fully considers features of professional image processing such as visibility, variety, complexity, reusability,
extensibility, and high efficiency. With this approach, an efficient and integrated experimental system is established,
which has achieved the expected performance in practical applications of remote sensing image processing (for example,
change detection).
Intra prediction is one of the key algorithms in H.264 and contributes to its high compression ratio; unfortunately, it
considerably increases the complexity of the encoder. A fast intra prediction algorithm is proposed based on existing
algorithms, in which the last prediction mode is given priority when choosing the current prediction mode. The
algorithm has been simulated in Matlab. Experimental results show that, compared with the search algorithm in the JM
reference code, the proposed algorithm reduces the cost of mode search by approximately 41 and 59 percent while
achieving about 93 and 63 percent average precision for the 4×4 and 16×16 prediction modes, respectively.
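A hypothetical sketch of the "last mode first" idea follows; the per-mode cost evaluators and the early-termination threshold are placeholders, not values from the paper.

```python
def fast_intra_mode(block, predictors, last_mode, costs, threshold):
    """Sketch of the 'last mode first' idea: evaluate the previous
    block's mode first and stop if its cost is already below a
    threshold; otherwise scan the remaining modes. `costs` is a
    hypothetical dict mapping mode id -> cost function, e.g. SAD
    against that mode's prediction."""
    best_mode = last_mode
    best_cost = costs[last_mode](block, predictors)
    if best_cost < threshold:            # early termination
        return best_mode, best_cost
    for mode, cost_fn in costs.items():  # fall back to full scan
        if mode == last_mode:
            continue
        c = cost_fn(block, predictors)
        if c < best_cost:
            best_mode, best_cost = mode, c
    return best_mode, best_cost
```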
In this paper, we propose a new way of implementing non-uniformity correction (NUC) techniques using the NVIDIA CUDA
parallel programming model. Non-uniformity and bad-pixel problems generally exist in Infrared Focal Plane Array (IRFPA)
sensors, and real-time correction is needed before further processing. Given the intrinsically parallel nature of most
non-uniformity correction algorithms, the now-popular multi-core, multi-thread processor architecture is a suitable
match. We investigate several non-uniformity correction techniques on CUDA, CPU, and ASIC platforms and compare the
results in terms of processing power, latency, cost, etc.
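As one example of a NUC technique with this per-pixel parallel structure, classic two-point calibration is sketched below in NumPy; on CUDA each output pixel simply maps to one thread. The calibration frames and the bad-pixel handling here are illustrative assumptions.

```python
import numpy as np

def two_point_nuc(raw, low, high, t_low, t_high, bad_mask):
    """Classic two-point non-uniformity correction: per-pixel gain and
    offset derived from two uniform blackbody frames (`low`, `high`,
    recorded at temperatures t_low < t_high), plus crude replacement of
    bad pixels. Every output pixel depends only on its own inputs, which
    is why the kernel parallelizes trivially."""
    gain = (t_high - t_low) / np.maximum(high - low, 1e-6)
    offset = t_low - gain * low
    out = gain * raw + offset
    # bad-pixel replacement: copy the horizontal neighbour's value
    out[bad_mask] = np.roll(out, 1, axis=1)[bad_mask]
    return out
```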
An adaptive parallel genetic algorithm adjusts the genetic parameters and operators dynamically during the iterations
of evolution in order to accelerate convergence and avoid premature convergence. Using coarse-grained parallelization,
the population is divided into a few large subpopulations that evolve independently and concurrently on different
processors; after a predefined period of time, selected individuals are exchanged via a migration process. In this
paper, a parallel multi-population adaptive genetic algorithm is proposed that adjusts the sub-population sizes: each
sub-population's size is varied dynamically based on the fitness of its best individual compared with the mean fitness
of the total population. A migration strategy, including synchronous and asynchronous migration, is also put forward to
avoid workload imbalance in the parallel genetic algorithm. Finally, a convergence analysis based on schema theory is
given to verify the efficiency of the sub-population size adjustment.
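A hypothetical version of the size-adjustment rule might look like the following; the step size and minimum size are illustrative, and the paper's exact update is not reproduced.

```python
def resize_subpopulations(sizes, best_fits, mean_fit, step=4, floor=10):
    """Hypothetical sketch of the adaptive rule described above: grow a
    sub-population whose best individual beats the mean fitness of the
    total population, shrink one that lags, and keep a minimum size.
    Maximization is assumed."""
    new_sizes = []
    for size, best in zip(sizes, best_fits):
        if best > mean_fit:
            new_sizes.append(size + step)
        else:
            new_sizes.append(max(floor, size - step))
    return new_sizes
```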
To address the disadvantages of supercomputer- and cluster-based parallel image processing systems, namely expensive
hardware and complex software development, this paper proposes an image parallel processing scheme based on the GPU
under CUDA. It uses an ordinary PC's GPU to realize fine-grained parallel processing of algorithms; moreover, the
scheme can easily be integrated with a cluster to improve efficiency. Finally, image fusion experiments are performed
to validate the parallel scheme.
Parallel rendering based on a cluster is an effective method for improving the performance and resolution of a graphics
system, and frame synchronization is the core issue in parallel rendering. This paper puts forward a frame
synchronization method applicable to large-scale parallel rendering systems for dynamic, complex scenes. It adopts
dynamic adaptive control of the frame frequency to realize an AIAMD concept of congestion control, namely mixed
additive increase/multiplicative decrease (AIMD) and additive increase/additive decrease (AIAD). It then achieves
synchronized output and display of the cluster's frames according to the conditions of the network between rendering
nodes and display nodes, as well as the scene rendering load on the rendering nodes. The experimental results show that
this frame synchronization method outperforms several other program-level synchronization control methods.
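The congestion-control analogy can be written down directly; the constants below are illustrative rather than the paper's tuning.

```python
def adapt_frame_rate(rate, congested, mode="AIMD",
                     add=1.0, mult=0.5, sub=1.0, lo=1.0, hi=60.0):
    """Frame-frequency control in the AIMD/AIAD spirit described above:
    additively raise the target frame rate while the network keeps up,
    and back off multiplicatively (AIMD) or additively (AIAD) on
    congestion, clamped to a sane range."""
    if congested:
        rate = rate * mult if mode == "AIMD" else rate - sub
    else:
        rate = rate + add
    return min(hi, max(lo, rate))
```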
The restoration of rotational motion blurred images involves many interpolation operations in the rectangular-to-polar
transformation and its inverse polar-to-rectangular transformation. The interpolation technique determines the quality
of the restoration and the computational complexity. In this paper, we incorporate orthogonal Chebyshev polynomial
interpolation into the restoration of rotational motion blurred images: the space-variant blur is decomposed into a
series of space-invariant blurs along the blurring paths, the blurred gray values of the discrete pixels on the
blurring paths are calculated using orthogonal Chebyshev polynomial interpolation, and the space-variant blur is
removed along the blurring paths in the polar system. In the same way, we use orthogonal Chebyshev polynomial
interpolation to perform the polar-to-rectangular transformation that puts the restored image back into its original
rectangular format. To overcome the interference of noise, an optimization-based restoration algorithm with
regularization is presented, in which non-negativity and edge-preserving smoothing are incorporated into the
restoration process. A series of experiments test the proposed interpolation method and show that it is effective at
preserving edges.
This paper describes newly developed hardware for multi-projector displays built from a network of PCs, commodity
graphics accelerators, multiple projectors, cameras, and large-scale screens. To completely remove the photometric
seams and geometric discontinuities at projector boundaries in such a display, the frame-buffer content for each
projector has to be warped and attenuated properly, so the system must include a high-performance, scalable pixel
routing subsystem to create seamless imagery. We propose a flexible pixel router to solve this problem. The router is
capable of performing an arbitrary mapping of pixels from any input frame to any output frame while executing typical
composition operations at the same time; the main challenge in developing the hardware is maintaining the refresh rate.
We analyze hardware pixel routing for multi-projector displays and demonstrate the construction of large-scale
multi-projector display systems using this flexible pixel router, which bridges the image generator and the projectors.
We also present an initial hardware prototype and some preliminary results. Experimental results show that our approach
is both feasible and robust.
A radial sine phase filter is introduced into an optical system to control and alter its focusing properties for an
incident Gaussian beam. It is found that the radial sine phase filter can induce tunable multiple foci in the focal
region, which means that several field distances can be imaged clearly and simultaneously with this kind of filter in
an imaging optical system. The point of absolute maximum intensity does not coincide with the geometrical focus but
shifts along the optical axis; this phenomenon is referred to as focal shift. The focal shift distance and direction
can be altered by a sine parameter in the sine term of the phase distribution function. In addition, focal shift may be
accompanied by an effective permutation of the focal point: the maximum intensity can jump from one position to another
for certain values of the sine parameter. This effect is referred to as focal switch and can also be used to adjust the
clearly imaged field distance discontinuously. The radial sine phase filter can adjust and optimize the focusing and
imaging properties of the optical system considerably; the waist width and cross-sectional shape of the incident
Gaussian beam also affect the multiple foci.
The optimal control problem for switched linear systems with internally forced switching has more constraints than that
with externally forced switching, and heavy computation and slow convergence are major obstacles in solving it. In this
paper we describe a new approach to this problem, called Migrant Particle Swarm Optimization (Migrant PSO). Imitating
the behavior of a flock of migrant birds, the Migrant PSO applies naturally to both continuous and discrete spaces,
combining a deterministic optimization algorithm with a stochastic search method. The efficacy of the proposed
algorithm is illustrated via a numerical example.
In this paper, we present a fast algorithm for computing the two-dimensional (2-D) discrete Hartley transform (DHT).
Using a kernel transform and Taylor expansion, the 2-D DHT is approximated by a linear sum of 2-D geometric moments.
This enables us to use the fast algorithms developed for computing 2-D moments to calculate the 2-D DHT efficiently.
The proposed method achieves a simple computational structure and is suitable for any sequence length.
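The construction can be written out explicitly. With the Hartley kernel, Taylor expansion plus the binomial theorem express a truncated DHT as a linear combination of geometric moments (range reduction and the truncation order K are handled in the paper):

```latex
H(u,v)=\sum_{x,y} f(x,y)\,\operatorname{cas}\!\Big(\tfrac{2\pi}{N}(ux+vy)\Big),
\qquad \operatorname{cas}\theta=\cos\theta+\sin\theta ,
\\[4pt]
H(u,v)\approx\sum_{k=0}^{K}\frac{c_k}{k!}\Big(\frac{2\pi}{N}\Big)^{k}
\sum_{j=0}^{k}\binom{k}{j}\,u^{j}v^{k-j}\,m_{j,\,k-j},
\qquad m_{pq}=\sum_{x,y}x^{p}y^{q}f(x,y).
```

Here c_k cycles through 1, 1, -1, -1 (the derivatives of cas at zero), so each DHT sample becomes a weighted sum of the geometric moments m_{j,k-j}.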
An asynchronous pipeline structure is adopted for this real-time video decoder design because of its better performance
when the stage processing times are irregular. However, the structure requires a large amount of memory, a precious
on-chip resource, to buffer data and parameters between modules. To solve this problem, a specially designed switching
buffer module is used between stages instead of a traditional FIFO; the module can also take over some of the buffering
within each stage, which helps to reduce memory utilization. An H.264 decoder with the proposed structure was
implemented. Compared with a decoder without the improved structure, the experimental decoder saves nearly 50% of the
memory and 31% of the I/O operations between stages.
A novel DSP/FPGA-based parallel architecture for real-time image processing is presented in this paper. DSPs are the
main processing units, while FPGAs serve as logic units for the image interface protocol, image processing, image
display, the synchronization communication protocol between DSPs, and the DSPs' 422/485 reprogramming interface. The
presented architecture is composed of two modules, a preprocessing module and a processing module, the latter being
extendable for better performance. Modules are connected by LINK communication ports, whose LVDS signaling provides
resistance to interference, and DSP programs can be updated easily over 422/485 through a PC's serial port. Analysis
and experimental results show that a prototype with the proposed parallel architecture has many promising
characteristics, such as powerful computing capability and broad data-transfer bandwidth, and is easy to extend and
update.
Motivated by applications in nondestructive testing and evaluation, this paper deals with improving image
reconstruction speed in cone beam computed tomography (CBCT). The FDK algorithm is time-consuming for CBCT image
reconstruction because of the voluminous data and long processing chain. With careful data organization and task
distribution, we improved the SIMD implementation of the Z-line-first reconstruction algorithm, itself an improved
method based on the FDK algorithm, and then parallelized it with multi-core technology and a divide-and-conquer
strategy to obtain fast CBCT reconstruction. Finally, we evaluate the effectiveness of our method with a numerical test
of a blade model on an 8-core computer with four-channel memory. Our method achieves a speedup ratio of 217.22 over the
FDK algorithm and performs the back-projection step of reconstructing the inscribed cylinder of a 512³ reconstruction
space in about 30 seconds. It achieves the same image quality as the Z-line-first method, which retains the
computational precision of the FDK algorithm, and essentially meets the requirement of real-time reconstruction.
In H.264/AVC, there is only one fixed prediction order, raster scan order, in which macroblocks (MBs) perform intra
prediction. In this paper, a novel intra prediction framework for H.264/AVC using macroblock groups with an optimized
prediction order is proposed. A 3×3 MB-group is introduced and coded as a unit in intra prediction, and eight candidate
prediction orders are considered for the MBs in one 3×3 MB-group. To utilize the processing power of many-core hardware
platforms, original frame pixels are adopted as intra predictors, and SATD (Sum of Absolute Transformed Differences) is
used as the criterion when calculating the optimized prediction order for each 3×3 MB-group. The prediction order with
the lowest SATD for the 3×3 MB-group is applied to perform the actual intra prediction. Experimental results show an
average 3.2% BD bit-rate reduction in intra-frame coding, while complexity is only slightly increased for many-core
platforms.
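For reference, SATD for a 4×4 block is the sum of the absolute Hadamard-transformed residuals; the scaling below follows common H.264 reference-code practice.

```python
import numpy as np

# 4x4 Hadamard matrix used for the transform
H4 = np.array([[1,  1,  1,  1],
               [1,  1, -1, -1],
               [1, -1, -1,  1],
               [1, -1,  1, -1]])

def satd4x4(block, pred):
    """Sum of Absolute Transformed Differences for a 4x4 block: apply
    the 2-D Hadamard transform to the residual and sum the absolute
    coefficients."""
    d = block.astype(int) - pred.astype(int)
    return int(np.abs(H4 @ d @ H4.T).sum() // 2)
```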
Histogram equalization is a fundamental technique in image processing. In infrared image processing systems, low
contrast and a shortage of gray levels often make it very difficult to observe and recognize targets. In this
situation, real-time histogram equalization is often used to enhance infrared image contrast, improve infrared image
quality, and improve overall system performance. In embedded systems that require real-time histogram equalization,
typical designs using a general-purpose DSP or MCU have difficulty meeting the real-time requirement, so implementing
histogram equalization in FPGA hardware logic is a very good choice. In this paper, we describe in detail an FPGA-based
histogram equalization implementation that meets the real-time requirement very well.
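The underlying computation is the standard CDF remapping; the reference version below is what an FPGA pipeline implements with histogram counters and a per-frame lookup table.

```python
import numpy as np

def equalize(img, levels=256):
    """Reference histogram equalization for an unsigned-integer image
    (e.g. 8-bit): map each gray level through the normalized cumulative
    histogram."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]              # first non-empty bin
    lut = np.round((cdf - cdf_min) / max(cdf[-1] - cdf_min, 1)
                   * (levels - 1))
    return lut.astype(img.dtype)[img]      # per-pixel table lookup
```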
With the rapid development of remote sensing technology, the means of acquiring remote sensing data have become
increasingly abundant; thus, the same area can yield a large number of multi-temporal image sequences at different
resolutions. At present, the main fusion methods are HPF, the IHS transform, PCA, Brovey, the Mallat algorithm, and the
wavelet transform. The IHS transform suffers from serious spectral distortion, and the Mallat algorithm omits the
low-frequency information of the high-spatial-resolution image, so its fusion results show obvious blocking effects.
Wavelet multi-scale decomposition achieves very good results for different sizes, directions, details, and edges, but
different fusion rules and algorithms yield different effects. Taking the fusion of QuickBird imagery as an example,
this article compares fusion based on the wavelet transform and HVS with fusion based on the wavelet transform and IHS;
the results show that the former is better. The paper uses the correlation coefficient, the relative average spectral
error index, and other common indices to evaluate image quality.
This paper presents real-time MPEG-4 Simple Profile video compression based on a DSP processor. The video compression
framework is built with a TMS320C6416 microprocessor, a TDS510 emulator, and a PC. It uses the embedded real-time
operating system DSP/BIOS and its API functions to build periodic functions, tasks, interrupts, etc., and to realize
real-time video compression. To handle data transfer within the system, and based on the architecture of the C64x DSP,
double buffering and the EDMA data transfer controller are used to move data from external memory to internal memory,
realizing data transfer and processing at the same time; architecture-level optimizations are used to improve software
pipelining. The system uses DSP/BIOS for multi-thread scheduling, and the whole system realizes high-speed transfer of
a large amount of data. Experimental results show the encoder can realize real-time encoding of 768×576, 25 frame/s
video.
This paper studies a high-definition ship-borne radar and video monitoring system that requires multi-channel TV video
and radar video encoding and decoding. Real-time data transfer is based on the RTP/RTCP protocol with QoS guarantees.
In this paper, we propose an effective feedback control for real-time video streams combined with forward error
correction (FEC). In our scheme, the server multicasts the video in parallel with FEC packets and adapts the video
stream via RTCP feedback control. On the server side, we analyze and optimize the number of streams and FEC packets to
meet a given residual loss requirement, and for every round-trip time (RTT) the sender sends a forward RTCP control
packet. On the receiver side, we analyze the optimal combination of FEC and data packets to minimize loss. Upon receipt
of a backward RTCP packet carrying the packet loss ratio from the receiver, the output rate of the source is adjusted.
The additive increase/multiplicative decrease (AIMD) model can prevent congestion efficiently when the available
bandwidth is accurately estimated from the backward RTCP packets.
In this paper, we first analyze the characteristics of large volumes of remote sensing imagery. We then select two
basic algorithms for our experiments: the Harris operator and the FFT. Based on the decomposition of these two
algorithms, we derive an effective framework for parallel processing of massive remote sensing images on a PC cluster.
The experimental results show that our approach is very effective, and the framework is also suitable for other remote
sensing image processing algorithms.
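For reference, the Harris corner response is R = det(M) - k * trace(M)^2, with M the Gaussian-smoothed structure tensor of the image gradients. Because each output pixel depends only on a local window, the image can be split into overlapping tiles across cluster nodes; a compact version:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_response(img, sigma=1.5, k=0.04):
    """Harris corner response map. Gradients come from Sobel filters;
    the structure tensor entries are smoothed with a Gaussian window."""
    ix = sobel(img.astype(float), axis=1)
    iy = sobel(img.astype(float), axis=0)
    ixx = gaussian_filter(ix * ix, sigma)
    iyy = gaussian_filter(iy * iy, sigma)
    ixy = gaussian_filter(ix * iy, sigma)
    det = ixx * iyy - ixy ** 2
    tr = ixx + iyy
    return det - k * tr ** 2
```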
Scene-based adaptive non-uniformity correction (NUC) of Infrared Focal Plane Arrays (IRFPAs) has become a key
technology. However, all scene-based correction methods require the scene to be in motion: objects that do not move
enough tend to be "melted" into the background, causing target fade-out and ghosting artifacts in the corrected image.
On the other hand, although some scene-based algorithms eliminate non-uniformity effectively, their computations are
complicated, making them difficult to implement with VLSI techniques. In this work we propose a new adaptive algorithm
based on motion information and the steepest descent method. Simulations show that the new algorithm inhibits target
fade-out and eliminates ghosting artifacts effectively. As the proposed algorithm is characterized by inherent
parallelism, modularity, and regularity, we also propose a parallel VLSI architecture for it and implement it in
0.18 μm CMOS technology.
Focal shift plays an important role in many optical focusing systems. In this article, the focal shift of a concentric
piecewise cylindrical vector beam is investigated in detail by means of vector diffraction theory. The cross-section of
the beam consists of three concentric zones: the central circular zone and the outer annular zone are radially
polarized, and the inner annular zone is generalized polarized. In addition, the wavefront phase distribution of the
vector beam is a linear function of the radial coordinate. It is found that the parameter in the phase distribution
induces focal shift and can alter it considerably, whereas the radii of the inner annular zone and the polarization
angle affect the focal shift only very slightly. Therefore, the phase parameter can be used to produce large focal
shifts, while the radii and polarization angle may be employed to adjust the intensity distribution. In a focusing
system, the focal shift and the intensity distribution may thus be controlled separately, which increases the
application freedom of this kind of technique. The direction of the focal shift can also be altered by changing the
phase parameter.
Correlation matching is widely used in target recognition for its stability and adaptability, but traditional
correlation matching does not meet real-time performance demands. For this reason, this paper presents an improved
recursive pyramid correlation image matching algorithm and implements it in parallel on multiple DSPs interconnected by
an FPGA, using several effective optimization strategies. Moreover, we design detailed experiments that compare the
matching speeds as the optimization methods are applied step by step. The results show that our system achieves high
efficiency compared with conventional methods.
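A serial sketch of coarse-to-fine correlation matching follows, using zero-mean NCC and simple decimation pyramids; the multi-DSP partitioning and the paper's specific optimizations are not shown.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def pyramid_match(img, tmpl, levels=3):
    """Coarse-to-fine matching: exhaustive NCC search at the coarsest
    pyramid level, then refinement within a small window at each finer
    level. Pyramids are built by simple 2x decimation."""
    imgs, tmps = [np.asarray(img, float)], [np.asarray(tmpl, float)]
    for _ in range(levels - 1):
        imgs.append(imgs[-1][::2, ::2])
        tmps.append(tmps[-1][::2, ::2])
    best = (0, 0)
    for lv in range(levels - 1, -1, -1):
        I, T = imgs[lv], tmps[lv]
        th, tw = T.shape
        if lv == levels - 1:       # full search at the coarsest level
            cand = [(r, c) for r in range(I.shape[0] - th + 1)
                           for c in range(I.shape[1] - tw + 1)]
        else:                      # refine around the projected estimate
            r0 = min(max(best[0] * 2, 0), I.shape[0] - th)
            c0 = min(max(best[1] * 2, 0), I.shape[1] - tw)
            cand = [(r0 + dr, c0 + dc)
                    for dr in range(-2, 3) for dc in range(-2, 3)
                    if 0 <= r0 + dr <= I.shape[0] - th
                    and 0 <= c0 + dc <= I.shape[1] - tw]
        best = max(cand, key=lambda p: ncc(I[p[0]:p[0]+th, p[1]:p[1]+tw], T))
    return best   # (row, col) of the best match at full resolution
```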
In this paper, we design a parallel processing system based on an economical hardware environment to optimize the
photogrammetric processing of Leica ADS40 imagery, drawing on the ideas and methods of parallel computing. We adopt the
PCAM principle of parallel computing to design and implement a test system for parallel processing of ADS40 images. The
test system consists of ordinary personal computers and a local gigabit network, and it makes full use of network
computing and storage resources at an economical and practical cost to process ADS40 images. Experiments show that it
achieves a significant improvement in processing efficiency. Furthermore, the robustness and compatibility of this
system are much higher than those of a stand-alone computer because of the system's network-based redundancy. In
conclusion, a parallel processing system based on a PC network provides a much more efficient solution for ADS40
photogrammetric production.
Traditional path planning methods are too slow to meet real-time requirements in practical applications. To solve this
problem, the idea of a path net is proposed in this paper. The path planning procedure is divided into two steps:
network segment planning and segment assembly. The first step is done with the Fast Marching Method, including port
selection for each segment and planning of the network segments. In the second step, A* search is used to select
segments for assembly. Experiments demonstrate that our method can obtain an optimal route within a few seconds after
the start and goal are given, whereas traditional methods need several minutes.
On the basis of studying the Non-Local Means (NLM) denoising algorithm and its pixel-wise implementation on the
Graphics Processing Unit (GPU), a whole-image accumulation NLM denoising algorithm based on the GPU is proposed. The
number of dynamic instructions in the fragment shader is effectively reduced by redesigning the data structure and
processing flow, which makes the algorithm suitable for graphics cards supporting Shader Model 3.0 and/or Shader Model
4.0 and thus enhances its versatility. A continuous parallel processing method for four gray images based on Multiple
Render Targets (MRT) and double Frame Buffer Objects (FBO) is then proposed, and the whole GPU processing flow is
presented. Experimental results on both simulated and real gray images show that the proposed method achieves a
speedup of 45 times while retaining the same accuracy.
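For reference, the per-pixel NLM computation that the shader parallelizes is the patch-similarity weighted average below, in its direct, unoptimized form; the parameters are illustrative.

```python
import numpy as np

def nlm_pixel(img, r, c, patch=3, search=10, h=10.0):
    """Non-Local Means estimate for one pixel: average of pixels in the
    search window, weighted by exp(-||patch difference||^2 / h^2). The
    GPU version evaluates this independently for every pixel."""
    p = patch // 2
    pad = np.pad(img.astype(float), p, mode='reflect')
    ref = pad[r:r + patch, c:c + patch]          # patch around (r, c)
    num = den = 0.0
    for i in range(max(0, r - search), min(img.shape[0], r + search + 1)):
        for j in range(max(0, c - search), min(img.shape[1], c + search + 1)):
            d2 = ((pad[i:i + patch, j:j + patch] - ref) ** 2).mean()
            w = np.exp(-d2 / h ** 2)
            num += w * img[i, j]
            den += w
    return num / den
```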
A new change detection approach based on non-parametric density estimation and Markov random fields (MRFs) is proposed
in this paper. Because the concrete form of the gray-level statistical distribution of remote sensing images is often
difficult to know, and because non-parametric density estimation requires no specific form in advance and is especially
suitable for small-sample estimation problems, we adopt non-parametric density estimation to obtain a precise estimate
of the probability density of the difference image. Multi-temporal remote sensing image change detection is then
performed in combination with an MRF model for spatial smoothing. The experimental results show that the proposed
method is effective.
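The non-parametric step can be illustrated with a plain Parzen (Gaussian-kernel) density estimate of the difference-image values; the paper's choice of kernel and bandwidth, and the MRF smoothing, are not reproduced.

```python
import numpy as np

def parzen_density(samples, grid, h):
    """Parzen (kernel) density estimate with a Gaussian window:
    p(x) = (1/n) * sum_i N(x; x_i, h^2). For a large image, pass a
    random subsample of the difference values to keep memory bounded."""
    samples = np.asarray(samples, float)
    grid = np.asarray(grid, float)
    z = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) \
        / (len(samples) * h * np.sqrt(2 * np.pi))
```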
The discrete wavelet transform (DWT) is an important tool in digital signal processing. In this paper, a new algorithm
to compute the DWT is proposed. First, based on previous work on performing the discrete Fourier transform (DFT) via
linear sums of discrete moments, we introduce a multiplierless DFT that uses only appropriate bit and shift operations
in the binary system; then, by the convolution theorem, the DWT computation is transformed into the computation of
DFTs. In addition, an efficient systolic array is designed to implement the DWT, demonstrating the locality of dataflow
in the algorithm. The approach is also applicable to multi-dimensional DWTs.
The application of High Performance Computing (HPC) technology to remote sensing data processing is one way to meet the
requirements of real- or near-real-time remote sensing processing. We present a cluster-based parallel processing
system for HJ-1 satellite data, named Cluster Pro. This paper describes the basic architecture and implementation of
the system. We performed an image mosaicking experiment with Cluster Pro using HJ-1 CCD data of Beijing. The
experiments showed that Cluster Pro is a useful system for improving the efficiency of data processing. Further work
will focus on the comprehensive parallel design and implementation of remote sensing data processing.
A novel intelligent model for an image processing (IP) research integrated development environment (IDE) is presented
for rapidly converting a conceptual model of an IP algorithm into a computational model and a program implementation.
Considering the psychology of IP research and computer programming, the model presents a cycle model of the IP research
process and establishes an improved expert-system prototype. Visualization approaches are introduced to visualize the
three phases of IP development, and an intelligent methodology is applied to reuse algorithms, graphical user
interfaces (GUIs), and data-visualization tools, allowing researchers to focus only on the algorithm models of interest
to them. Experimental results show that development based on the new model enables rapid algorithm prototyping with
great efficiency and speed.
Processing massive LiDAR point clouds is time-consuming due to the magnitude of the data involved and the highly
iterative, computation-intensive nature of the algorithms. In particular, many current and future applications of
LiDAR require real- or near-real-time processing capabilities; relevant examples include environmental studies,
military applications, and the tracking and monitoring of hazards. Recent advances in Graphics Processing Units (GPUs)
have opened a new era of General-Purpose computing on Graphics Processing Units (GPGPU). In this paper, we seek to
harness the computing power available on contemporary GPUs to accelerate the processing of massive LiDAR point clouds,
and we propose a CUDA-based method for doing so on CUDA-enabled GPUs. Our experimental results show that our
GPGPU-based parallel implementation significantly reduces the time needed to construct a TIN from a LiDAR point cloud,
in comparison with current state-of-the-art CPU-based algorithms.
A double recursive algorithm based on fuzzy entropy for image thresholding is proposed. The inner recursive step
calculates the threshold within a given gray-level interval, and the outer recursive step calculates the optimal
threshold for images with small objects. Experiments on steel billet images show that the proposed algorithm thresholds
images quickly and accurately, making it effective and applicable for real-time image segmentation.
The field of view of a single camera is limited. To monitor a large area, we often need to use multiple cameras
together in a camera network; in general, networks with more cameras can monitor larger areas, but using more cameras
also means higher cost. In this paper, we study how to improve the coverage of a camera network by adjusting only the
orientations of the cameras, so that adding cameras is unnecessary. A particle swarm optimization algorithm was
developed to solve this problem. In an experiment, the algorithm improved the coverage by about 14%, a result that is
also better than the existing approach.
Computer-aided diagnosis (CAD) has become one of the major research subjects in medical imaging and diagnostic radiology. Many different types of CAD schemes are being developed for detection and/or characterization of various lesions in medical imaging, including conventional projection radiography, CT, MRI, and ultrasound imaging. Commercial systems for detection of breast lesions on mammograms have been developed and have received FDA approval for clinical use. CAD may be defined as a diagnosis made by a physician who takes into account the computer output as a "second opinion". The purpose of CAD is to improve the quality and productivity of physicians in their interpretation of radiologic images. The quality of their work can be improved in terms of the accuracy and consistency of their radiologic diagnoses. In addition, the productivity of radiologists is expected to be improved by a reduction in the time required for their image readings. The computer output is derived from quantitative analysis of radiologic images by use of various methods and techniques in computer vision, artificial intelligence, and artificial neural networks (ANNs). The computer output may indicate a number of important parameters, for example, the locations of potential lesions such as lung cancer and breast cancer, the likelihood of malignancy of detected lesions, and the likelihood of various diseases based on differential diagnosis in a given image and clinical parameters. In this review article, the basic concept of CAD is first defined, and the current status of CAD research is then described. In addition, the potential of CAD in the future is discussed and predicted.
Three dimensional renditions of anatomical structures are commonly used to improve visualization, surgical planning,
and patient education. However, such 3D images also contain information which is not readily apparent, and which can be
mined to elucidate, for example, such parameters as joint kinematics, spatial relationships, and distortions of those
relationships with movement. Here we describe two series of experiments which demonstrate the functional application of
3D imaging. The first concerns the joints of the ankle complex, where the usual description of motions in the talocrural joint
is shown to be incomplete, and where the roles of the anterior talofibular and calcaneofibular ligaments are clarified in ankle
sprains. Also, the biomechanical effects of two common surgical procedures for repairing torn ligaments were examined. The
second series of experiments explores changes in the anatomical relationships between nerve elements and the cervical
vertebrae with changes in neck position. They provide preliminary evidence that morphological differences may exist between
asymptomatic subjects and patients with radiculopathy in certain positions, even when conventional imaging shows no
difference.
Images of the eye ground not only provide insight into important parts of the visual system but also reflect the
general state of health of the entire human body. Automatic retina image analysis is becoming an important
screening tool for early detection of certain risks and diseases. Glaucoma is one of the most common causes of
blindness and is becoming even more important considering the ageing society. Robust mass-screening may help
to extend the symptom-free life of affected patients. Our research is focused on a novel automated classification
system for glaucoma, based on image features from fundus photographs. Our new data-driven approach requires
no manual assistance and does not depend on explicit structure segmentation and measurements. First, disease
independent variations, such as nonuniform illumination, size differences, and blood vessels are eliminated from
the images. Then, the extracted high-dimensional feature vectors are compressed via PCA and combined before
classification with SVMs takes place. The technique achieves an accuracy of detecting glaucomatous retina
fundus images comparable to that of human experts. The "vessel-free" images and intermediate output of the
methods are novel representations of the data for the physicians that may provide new insight into and help to
better understand glaucoma.
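
A minimal sketch of the described pipeline, PCA compression followed by SVM classification, using scikit-learn. The feature dimensionality, the component count, and the random stand-in data are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in data: rows are high-dimensional feature vectors
# extracted from preprocessed (illumination-corrected, vessel-free) images.
X = np.random.rand(100, 5000)
y = np.random.randint(0, 2, 100)     # 0 = healthy, 1 = glaucomatous

clf = make_pipeline(PCA(n_components=30), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())
```
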
Computer Aided Visualization and Analysis Software System (CAVASS) is an open-source software system that is
being developed by the Medical Image Processing Group (MIPG) at the University of Pennsylvania. CAVASS is freely
available, open source, integrated with popular toolkits, and runs on Windows, Unix, Linux, and Mac OS. It includes
extremely efficient implementations of the most commonly used image display, manipulation, and processing operations. Parallel
implementations of computationally demanding tasks such as deformable registration are provided as well using the
inexpensive COW (Cluster of Workstations) model. CAVASS also seamlessly integrates and interfaces with ITK and
provides a graphical user interface for ITK as well. CAVASS can easily interface with a PACS by directly reading and
writing medical images in the industry-standard DICOM format, and can input and output other common image formats as well. We describe the key features, general software architecture, interface with ITK, parallel architecture,
and the CAVASS build and test environment. New stereo rendering capabilities are described as well.
Tensor scale (t-scale) is a parametric representation of local structure morphology that simultaneously describes its
orientation, shape, and isotropic scale. At any image point, t-scale represents the largest ellipse (an ellipsoid in three dimensions) centered at that point and contained in the same homogeneous region defined under a given boundary
criterion. Here, we present an improved algorithm for t-scale computation and study its application to medical image
interpolation. Specifically, the t-scale computation algorithm is improved by: (1) enhancing the accuracy of locating
local structure boundary and (2) combining both algebraic and geometric distance errors in ellipse optimization. In the
context of interpolation of grey level images, a new deterministic approach is presented that directly determines the
interpolation line at each image point using local t-scale information on adjacent slices. At each point on an image slice,
the method determines the normal vector derived from its t-scale, which gives the trans-orientation of the local structure and
points to the closest edge on the local structure interface. Local normal vectors at the matching two-dimensional points
on two adjacent slices are used to estimate the interpolation line using simple vector algebra. The method has been
applied to BrainWeb data sets, and also to several other medical images from different clinical applications, and
preliminary results are presented.
Computer-aided diagnosis (CAD) provides a computer output as a "second opinion" in order to assist radiologists in the
diagnosis of various diseases on medical images. Currently, a hot topic in CAD is the development of computerized
schemes for detection of lung abnormalities, such as lung nodule and interstitial lung disease, in computed tomography
(CT) images. The author describes in this article the current status of the CAD schemes for the detection of lung nodules
and interstitial lung disease in CT developed by the author and his colleagues at the University of Chicago and Duke
University.
Liver cancer represents a major health care problem in the world, especially in China and several countries in Southeast
Asia. The most effective treatment is through tumor resection. To improve the outcome of surgery, a combination of
preoperative planning and an intraoperative image-guided liver surgery (IGLS) system has been developed at Pathfinder
Therapeutics, Inc. The preoperative planning subsystem (Linasys® PlaniSight®) is user-oriented and applies several
novel algorithms on image segmentation and modeling, which allows the user to build various organ and tumor models
with anticipated resection planes in less than 30 minutes. The surgeons can analyze the patient-specific case and set up
surgical protocols. This information in image space can then be transferred into physical space through our intraoperative image-guided liver surgery system (Linasys® SurgSight®), based on modifications of existing surface
registration algorithms, allowing surgeons to perform more accurate resections after preoperative planning. This tool
gives surgeons a better understanding of vessel structure and tumor locations within the liver parenchyma during the
surgery. Our ongoing clinical trial shows that it greatly facilitates liver resection operations, and it is expected to improve surgical outcomes and create more candidates for surgery.
Rough set theory is in essence a mathematical tool for describing imperfection and uncertainty; it can effectively analyze and process imprecise, inconsistent, incomplete, or otherwise imperfect information so as to discover the knowledge implied in it. Synergetic pattern recognition is a new approach to pattern recognition with many excellent features, such as noise resistance, deformity resistance, and good robustness, and the selection of prototype patterns is very important to the synergetic approach. Current research focuses on modifying prototypes from eigenvalues instead of image pixels. The division matrix of rough sets yields the best reduction result; furthermore, a dynamic rough set method is applied, and the optimal nonlinear features obtained are used as prototype patterns. Experimental results on cervical squamous intraepithelial cell images show that the new algorithm can effectively search for the optimal prototype patterns, that the synergetic recognition method proposed in this paper is more practical, and that excellent, correct, and fast recognition results are achieved.
In this paper, a novel region-based active surface model in a level set framework is proposed for subthalamic nucleus
segmentation on MRI. The method is an extension of the region-based active contour, in which joint prior information about the object shape and the image intensity is utilized to drive the surface evolution in a level set formulation for segmentation. The mean surface area calculated from labeled samples is used as a prior constraint on the object to be segmented. This feature and the intensity difference between object and background define a region-based force that drives a set of 3D surfaces towards the optimal segmentation. Specifically, the pre-segmentation of visible structures within the region of interest constitutes an important step of subthalamic nucleus segmentation, and its result confines the evolution domain of the surface, which enhances the validity of the data term in the model.
The kidney is composed of many structurally and functionally different tissues. These functionally distinct tissues
exhibit different magnetic resonance signal characteristics in typical MR Urography. This work exploits the tissue
functional differences to construct a physiological feature space for renal segmentation, which has a more direct meaning for functional evaluation and lower requirements for storage and computation. In this preliminary research, a segmentation method was developed and investigated to demonstrate its feasibility on images obtained using a typical MR urography protocol.
Under the support vector machine framework, the support value analysis-based image fusion has been studied, where the
salient features of the original images are represented by their support values. The support value transform (SVT)-based
image fusion approach has demonstrated advantages over existing methods in multisource image fusion. In this paper, the directional support value transform (DSVT) is applied to the denoising of standard images embedded in white noise and of X-ray images. This directional transform is not norm-preserving; therefore, the variance of the noisy support values depends on the scale. We then apply a scale-dependent hard-thresholding rule to estimate the unknown support values at each scale. The peak signal-to-noise ratio
(PSNR) is used as an "objective" measure of performance, and our own visual capabilities are used to identify artifacts
whose effects may not be well-quantified by the PSNR value. The experimental results demonstrate that simple
thresholding of the support values in the proposed method is very competitive with techniques based on wavelets,
including thresholding of decimated or undecimated wavelet transforms.
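
Since the DSVT is not a standard library transform, the sketch below illustrates only the scale-dependent hard-thresholding rule on a generic list of per-scale coefficient arrays; the MAD-based noise estimate recomputed at each scale is an assumed, commonly used choice.

```python
import numpy as np

def hard_threshold_by_scale(coeffs, k=3.0):
    """Scale-dependent hard thresholding of multiscale detail coefficients.

    coeffs: list of 2-D arrays, one per scale (support values here; the
    same rule applies to wavelet details). Because the transform is not
    norm-preserving, the noise level is re-estimated at every scale with
    the robust MAD estimator instead of using one global threshold.
    """
    out = []
    for c in coeffs:
        sigma = np.median(np.abs(c)) / 0.6745      # robust noise estimate
        t = k * sigma                              # threshold for this scale
        out.append(np.where(np.abs(c) > t, c, 0.0))
    return out
```
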
Three Dimensional (3D) ultrasound images can provide spatial information to help doctors locate the needle position
precisely in ultrasound-guided surgery. In this paper, we present a method called "3D Phase-grouping" to segment the
needle inside the patient. The method extends 2D phase-grouping to the 3D case via a new mathematical model: the outer products of adjacent orientation vectors. First, voxels with the same outer products of the gradient orientation vectors of adjacent points are grouped into line support regions (LSRs). Then, the needle is
extracted with the 3D Least-Squares line fitting method in the maximal LSR. Synthetic and 3D ultrasound phantom data
were tested. The segmentation results were evaluated by the angular deviation, position deviation and computational
time. Compared with the 3D Hough transform, 3D phase-grouping is more accurate and faster without using prior information. The computational complexity and robustness of the algorithm remain topics for our future research.
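
The final fitting step can be written compactly as a total-least-squares line fit: the needle axis is the dominant singular direction of the centered voxel coordinates of the maximal line support region. A minimal sketch:

```python
import numpy as np

def fit_line_3d(points):
    """3-D least-squares line fit to the voxels of the maximal line
    support region: the line passes through the centroid along the
    dominant right-singular vector of the centered coordinates."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]                  # unit vector of the fitted needle axis
    return centroid, direction
```

The angular deviation used in the evaluation can then be computed as the angle between the fitted and reference direction vectors, e.g. np.degrees(np.arccos(abs(d1 @ d2))).
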
Registration of medical images is an essential research topic in medical image processing and applications, and
especially a preliminary and key step for multimodality image fusion. This paper offers a solution to medical image
registration based on normalized multi-dimensional mutual information. Firstly, affine transformation with translational
and rotational parameters is applied to the floating image. Then ordinal features are extracted by ordinal filters with
different orientations to represent spatial information in medical images. Integrating ordinal features with pixel
intensities, the normalized multi-dimensional mutual information is defined as similarity criterion to register
multimodality images. Finally, an immune algorithm is used to search for the registration parameters. The experimental results
demonstrate the effectiveness of the proposed registration scheme.
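
As a sketch of the similarity criterion, the two-variable normalized mutual information can be computed from a joint histogram as below; the paper's multi-dimensional version would add the ordinal feature responses as extra histogram axes, and the bin count here is an assumption.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """Normalized MI, (H(A)+H(B))/H(A,B), from the joint histogram of
    two overlapping images a and b (given as arrays)."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    hxy = -(pxy[nz] * np.log(pxy[nz])).sum()
    hx = -(px[px > 0] * np.log(px[px > 0])).sum()
    hy = -(py[py > 0] * np.log(py[py > 0])).sum()
    return (hx + hy) / hxy
```
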
As a novel optical molecular imaging modality, Bioluminescence Tomography (BLT) aims at quantitative reconstruction
of the bioluminescent source distribution inside the biological tissue from the optical signals measured on the living
animal surface, which is a highly ill-posed inverse problem. In this paper, with the finite element method solving the
diffusion equation, an iterative regularization algorithm, referred to as least square QR-factorization (LSQR), is applied
to the inverse problem in BLT. The effect of the preconditioning strategy on the LSQR method (PLSQR) for BLT is also investigated. Simulations with a heterogeneous mouse chest phantom demonstrate that, by incorporating a priori
knowledge of the permissible source region, the LSQR method can reconstruct the source accurately and stably.
Moreover, by employing the preconditioning strategy, PLSQR outperforms LSQR in terms of source power density and
convergence speed.
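
SciPy exposes this kind of damped LSQR iteration directly; the sketch below applies it to a random sparse stand-in for the FEM system matrix (the matrix, its size, and the damping value are assumptions, not the paper's diffusion model).

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import lsqr

# Hypothetical stand-in for the FEM system: A maps the unknown source
# distribution q to the boundary measurements b.
A = sprandom(200, 500, density=0.05, random_state=0, format="csr")
q_true = np.zeros(500)
q_true[100:110] = 1.0                # source inside the permissible region
b = A @ q_true

# damp > 0 adds Tikhonov-style regularization to the LSQR iteration.
q_est = lsqr(A, b, damp=1e-3, iter_lim=200)[0]
```
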
Segmentation of mammographic masses is a challenging task since masses on mammograms typically have fuzzy and
irregular edges. In the case of tissue adhesion, the region growing algorithm combined with maximum likelihood
analysis will lead to a problem of over-segmentation. For the reason given above, an improved adaptive region growing
algorithm for mass segmentation is proposed in this paper. In this algorithm, a hybrid assessment function combining maximum likelihood analysis and maximum gradient analysis is developed. In order to accommodate different
situations of masses, the likelihood and the edge gradients of segmented masses are weighted adaptively by the use of
information entropy. Forty benign and 37 malignant tumors were tested in this study. Compared with the conventional region growing algorithm, our proposed algorithm is more adaptive and robust, and it obtains segmentation contours more accurately.
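
A minimal sketch of seeded region growing with such a hybrid acceptance score follows; here the likelihood and gradient terms are combined with a fixed weight, whereas the paper sets the weight adaptively from information entropy.

```python
import numpy as np
from collections import deque

def grow_region(img, seed, w_like=0.5, thresh=0.2):
    """Seeded region growing on an 8-bit grayscale image with a hybrid
    score: a likelihood term (closeness to the running region mean) plus
    a normalized gradient term. w_like weights the two; the entropy-based
    adaptive weighting from the paper is omitted here."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    grad = grad / (grad.max() + 1e-9)
    mask = np.zeros(img.shape, dtype=bool)
    mask[seed] = True
    mean, n = float(img[seed]), 1
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            p = (y + dy, x + dx)
            if (0 <= p[0] < img.shape[0] and 0 <= p[1] < img.shape[1]
                    and not mask[p]):
                like = abs(float(img[p]) - mean) / 255.0
                score = w_like * like + (1 - w_like) * grad[p]
                if score < thresh:            # accept pixel into the region
                    mask[p] = True
                    mean = (mean * n + float(img[p])) / (n + 1)
                    n += 1
                    q.append(p)
    return mask
```
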
In this paper, a method called generalized N-dimensional PCA (GND-PCA) is proposed for statistical modeling of medical volumes. Each medical volume is treated as a 3rd-order tensor, and the bases of each mode subspace are calculated to approximate the tensors accurately. GND-PCA is successfully applied to the construction of statistical volume models for 3D CT lung volumes. Experiments show that the constructed models generalize well even though the training samples are quite few.
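
An HOSVD-style approximation of the mode-subspace computation can be sketched as follows: unfold the training tensors along each mode and keep the leading left singular vectors as that mode's basis. The actual GND-PCA iteration may refine these bases further; this is illustrative only.

```python
import numpy as np

def mode_bases(tensors, ranks):
    """Per-mode basis estimation for a set of 3rd-order tensors (all of
    the same shape, an assumption): stack the mode-n unfoldings of all
    training volumes and keep the leading left singular vectors."""
    bases = []
    for mode in range(3):
        unf = np.concatenate(
            [np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)
             for t in tensors],
            axis=1)
        u, _, _ = np.linalg.svd(unf, full_matrices=False)
        bases.append(u[:, :ranks[mode]])     # basis of this mode subspace
    return bases
```
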
Cell tracking has been shown to be an important technique in order to obtain cell motility parameters to consider in
various biological and pharmaceutical applications. In order to get statistically reliable data, a lot of tracking procedures
have to be repeatedly performed, which is a tedious task if performed manually. Thus there is a strong interest in the
automation of the tracking. Automatic cell tracking requires the re-identification of a certain cell image in subsequent
video images. This task is very difficult due to the changes the cell undergoes while moving, i.e. stretching and rotation, but in phase contrast microscopy also intensity changes. Here we evaluate histogram-based cell image identification techniques, specifically histogram distance measures, regarding their applicability in phase contrast microscopy, with a focus on their ability to deal with the difficulties mentioned above, and we propose a procedure that takes these into consideration and can thus be applied to image-based cell re-identification.
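
A minimal sketch of histogram-based re-identification with two common histogram distances; the particular distance functions and the nearest-candidate matching rule are illustrative assumptions.

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya distance between two normalized histograms."""
    return -np.log(np.sum(np.sqrt(p * q)) + 1e-12)

def chi_square(p, q):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((p - q) ** 2 / (p + q + 1e-12))

def match_cell(template_hist, candidate_hists, dist=bhattacharyya):
    """Re-identify a cell as the candidate whose intensity histogram is
    closest to the template from the previous video frame."""
    return int(np.argmin([dist(template_hist, h) for h in candidate_hists]))
```
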
A new speckle reduction method for ultrasonic images is presented. The proposed approach exploits knowledge of the multiplicative speckle model, and a regularization scheme is applied to the diffusion process. The nonlinear diffusion is integrated with the dyadic wavelet transform. Experimental results show that the new algorithm can not only reduce speckle effectively, but also preserve and even enhance edges and details.
A recently proposed approach for compressed sensing, or compressive sampling, with deterministic measurement
matrices made of chirps is applied to images that possess varying degrees of sparsity in their wavelet representations.
The "fast reconstruction" algorithm enabled by this deterministic sampling scheme as developed by
Applebaum et al. [1] produces accurate results, but its speed is hampered when the degree of sparsity is not
sufficiently high. This paper proposes an efficient reconstruction algorithm that utilizes discrete chirp-Fourier
transform (DCFT) and updated linear least squares solutions and is suitable for medical images, which have
good sparsity properties. Several experiments show the proposed algorithm is effective in both reconstruction
fidelity and speed.
With the development of medical science, three-dimensional ultrasound and color power Doppler imaging of the placenta are widely used. Whether a fetus's development is abnormal is determined mainly by analyzing the capillary distribution in the images acquired by the Doppler scanner. For this classification task we adopt a support vector machine (SVM) classifier. SVM achieves substantial improvements over classical statistical learning methods and behaves robustly across a variety of learning tasks. Furthermore, it is fully automatic, eliminating the need for manual parameter tuning, and handles the small-sample problem well. The SVM classifier is therefore valid and reliable for the identification of placentas and is more accurate, with a lower error rate.
Evidence from several previous studies indicated that the apparent diffusion coefficient (ADC) map is likely to reveal brain regions belonging to the ischemic penumbra, that is, areas that may be at risk of infarction in the few hours following stroke onset. The trace map overcomes the diffusion anisotropy of the ADC map, so it is superior for evaluating an infarct involving white matter. The mean shift (MS) approach has been successfully used for image segmentation, particularly in brain MR
images. The aim of the study was to develop a tool for rapid and reliable segmentation of infarct in human acute ischemic
stroke based on the ADC and trace maps using the MS approach. In addition, a novel method of 3-dimensional visualization
was presented to provide useful insights into volume datasets for clinical diagnosis. We applied the presented method to
clinical data. The results showed that segmentation with our tool was consistent, fast (about 8-10 minutes per subject), and indistinguishable from manual segmentation by an expert.
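
A minimal sketch of the mean shift step with scikit-learn, clustering the intensities of an ADC map and taking the lowest-mean cluster as the infarct candidate (the bandwidth settings and the lowest-mean heuristic are assumptions):

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def segment_map(adc_map):
    """Cluster an ADC (or trace) map with mean shift; the infarct is
    then taken as the lowest-mean cluster, since acute ischemia lowers
    the apparent diffusion coefficient (a simplifying assumption)."""
    x = adc_map.reshape(-1, 1).astype(float)
    bw = estimate_bandwidth(x, quantile=0.1, n_samples=2000)
    labels = MeanShift(bandwidth=bw, bin_seeding=True).fit_predict(x)
    means = [x[labels == k].mean() for k in np.unique(labels)]
    infarct_label = int(np.argmin(means))
    return (labels == infarct_label).reshape(adc_map.shape)
```
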
Extracting the coronary artery is one of the vital steps in analyses based on computed tomography angiography (CTA); the aim is to recognize the coronary artery in 3D volume data and then provide evidence for analysis and quantitative measurement information for coronary artery computer-aided detection.
According to the structural features of coronary artery angiography scanned by multi-slice computed tomography (MSCT), an automatic segmentation algorithm is proposed. First, multiple seed points of the coronary artery are detected and recognized automatically in scale space from the complex 3D cardiac image datasets. Second, an improved layer region growing algorithm oriented to 3D tubular structures is proposed to segment the coronary artery.
Experiments show that the algorithm can extract coronary artery vessels effectively, which improves the automation of coronary artery analysis and thus physicians' work efficiency.
A novel method is presented to automatically extract the midsagittal plane (MSP) from volumetric MR images. The MSP
which generally approximates the interhemispheric fissure is the plane separating the two hemispheres. It is meaningful for brain segmentation, registration, quantification, and pathology detection, especially in Talairach space. The algorithm is based on the theory of the symmetry principal axis, a local searching method, and minimization of a local symmetry coefficient. The proposed algorithm is validated on 20 T2-weighted MR data sets, which indicates that a clear MSP image can be extracted even in the presence of relatively large distance errors or angular deviations. This fully automatic algorithm is potentially useful in clinical applications and for research.
In this paper, the formation mechanism of the complex patterns observed on the skin of fishes has been investigated by a
two-coupled reaction diffusion model. The coupling strength between the two layers plays an important role in the pattern-forming process. It is found that only the epidermis layer can produce complicated patterns that have structures on more than one length scale. These complicated patterns, including super-stripe patterns, mixtures of spots and stripes, and white-eye patterns, are similar to the pigmentation patterns on fish skin.
Medical image fusion is a process of obtaining a new composite image from two or more source images which are from
different modalities. In this paper, we propose a novel medical image fusion scheme based on the non-negative matrix factorization (NMF) algorithm, in which the single resulting basis image is the fused image. Since CT and MRI images contain many zero-valued pixels, the NMF algorithm cannot be employed directly. To overcome this difficulty, we first add a positive bias to the original data matrix and then remove the bias from the resulting fused image after the NMF procedure. Experimental results show that the proposed approach outperforms existing wavelet-based and Laplacian pyramid-based methods.
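
A minimal sketch of the bias trick with scikit-learn's NMF, assuming the two registered, non-negative source images are stacked as columns of the data matrix and factorized at rank 1 so that the single basis column is the fused image; the rescaling by the mean mixing weight is an assumption.

```python
import numpy as np
from sklearn.decomposition import NMF

def nmf_fuse(img_a, img_b, bias=1.0):
    """Fuse two registered, non-negative source images with rank-1 NMF.
    A positive bias is added first (CT/MR images contain many zeros)
    and removed from the fused result afterwards."""
    v = np.stack([img_a.ravel(), img_b.ravel()], axis=1).astype(float) + bias
    model = NMF(n_components=1, init="nndsvda", max_iter=500)
    w = model.fit_transform(v)            # (n_pixels, 1): the basis image
    h = model.components_                 # (1, 2): mixing weights
    fused = w[:, 0] * h.mean() - bias     # rescale, then remove the bias
    return fused.reshape(img_a.shape)
```
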
Along with growing demands for 3D reconstruction, quantitative analysis, and visualization, more precise segmentation of medical images is required, especially for MR head images. Segmentation of MRI is complex and difficult because of the indistinct boundaries between brain tissues, which overlap and penetrate each other, and because of the intrinsic uncertainty of MR images induced by magnetic field heterogeneity, partial volume effects, and noise. After studying the kernel function conditions for support vectors, we constructed a wavelet SVM algorithm based on a wavelet kernel function and analyzed its convergence, commonality, and generalization. Comparative experiments were made using different numbers of training samples and different scans. The wavelet SVM can be extended easily, and the experimental results show that the SVM classifier offers lower computational time and better classification precision and has good function approximation ability.
In the low-field medical magnetic resonance imaging (MRI) system, the original digital MR signal is generated with a high sampling rate and a large amount of noise. In this paper, we propose a wavelet transform-based preprocessing algorithm for this MR signal in order to eliminate noise, reduce the sampling rate, and compress the stored data. We select a Daubechies filter as our decomposition filter and perform multi-level wavelet decomposition on the MR signal. The scaling function coefficients obtained at each level of decomposition are taken as the low-frequency signal component, so that fast filtering and multistage decimation without spectrum aliasing are realized. An experiment based on a permanent-magnet resonance imaging system proves the efficiency and practicality of this algorithm.
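
With PyWavelets, keeping only the coarsest approximation coefficients of a multi-level Daubechies decomposition realizes the combined filtering and decimation in one call; the wavelet order and level below are assumptions.

```python
import pywt

def preprocess_mr_signal(signal, wavelet="db4", level=3):
    """Multi-level Daubechies decomposition of the raw 1-D MR signal.
    Keeping only the level-`level` scaling (approximation) coefficients
    filters the high-frequency noise and decimates the sampling rate by
    roughly 2**level in a single step."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return coeffs[0]          # low-frequency component, length ~ N / 2**level
```
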
Delineation of the subcortical nuclei in MR images is a prerequisite for advanced radiotherapy, surgical planning, and morphometric analysis; however, such a complicated task is always difficult to implement. We propose a novel framework for 3D active shape model (ASM) based segmentation of the subcortical nuclei in MR images. Firstly, the most representative of all samples, represented by the segmented MR volumes, is selected as the template and triangulated to generate a triangulated surface mesh. Then, free-form deformation is used to establish dense point correspondences between the template and the other samples, and a set of consistent triangle meshes is obtained to build the model by statistical analysis. To fit the model to an MR volume, the model is initialized with the Talairach transformation and the edge map around the model is extracted using the watershed transform. A robust point matching algorithm is used to find a transformation matrix and model parameters that position the model near the target nucleus and match the model to it, respectively. The proposed framework was tested on 18 brain MR volumes, with the caudate, putamen, globus pallidus, thalamus, and hippocampus selected as the objects. In comparison with manual segmentation, the accuracy (mean±SD) of the proposed framework is 0.90±0.04 over all objects.
In computer-aided diagnosis of medical images, labeled data are commonly lacking in novel domains, while labeled data or prior knowledge often exist in related older domains. In this paper, an instance-transfer approach is introduced into medical image processing. We present a novel transfer learning model based on kernel matching pursuit, called TLKMP, which extends KMP (the kernel matching pursuit learning machine, Vincent & Bengio, 2002). TLKMP uses the greedy approximation residue to transfer instances into target domains that have little labeled data and distributions different from the source domains. Valuable instances in the source domains are thus reused to construct a high-quality classification model for the unlabeled set of the target domains. Experiments were performed on a gastric cancer lymph node database from a hospital. The results show that the proposed algorithm has better classification performance than traditional KMP methods, improves the diagnostic accuracy for medical images effectively, and needs less labeled data to train a good classification model.
Tensor-based morphometry (TBM) is an automated technique for detecting the anatomical differences between populations by examining the gradients of the deformation fields used to nonlinearly warp MR images. The purpose of this study was to investigate the whole-brain volume changes between the patients with unilateral temporal lobe epilepsy (TLE) and the controls using TBM with DARTEL, which could achieve more accurate inter-subject registration of brain images. T1-weighted images were acquired from 21 left-TLE patients, 21 right-TLE patients and 21 healthy controls,
which were matched in age and gender. The determinants of the gradients of the deformation fields were computed at the voxel level to quantify the expansion or contraction of individual images relative to the template, and a logarithmic transformation was then applied. A whole-brain analysis was performed using a general linear model (GLM), and multiple comparisons were corrected by the false discovery rate (FDR) at p<0.05. For left-TLE patients, significant volume reductions were found in the hippocampus, cingulate gyrus, precentral gyrus, right temporal lobe, and cerebellum. These
results potentially support the utility of TBM with DARTEL to study the structural changes between groups.
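
The voxel-wise quantity analyzed here, the log determinant of the deformation gradient, can be sketched directly from a displacement field; the field layout and the finite-difference gradient are assumptions.

```python
import numpy as np

def log_jacobian_determinant(disp):
    """Voxel-wise log determinant of the deformation gradient for a 3-D
    displacement field `disp` of shape (3, X, Y, Z), assuming voxel units
    and a well-behaved (positive-determinant) deformation.
    J = I + grad(u); det(J) > 1 means local expansion, < 1 contraction."""
    grads = [np.gradient(disp[i]) for i in range(3)]   # grads[i][j] = du_i/dx_j
    jac = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            jac[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.log(np.linalg.det(jac))
```
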
In this paper, a quantum edge detection algorithm is proposed for the blurry and complex character of medical images, drawing on the basic concepts and principles of quantum signal processing. First, based on the pixel qubit and the concept of quantum state superposition, an image enhancement operator based on quantum probability statistics is presented, which incorporates the gray-level correlation of the pixels in 3×3 neighborhood windows. Then, to realize edge detection, an edge measurement operator based on fuzzy entropy is applied to the quantum-enhanced image. Experiments showed that this method is more efficient than traditional edge detection methods because it has a better capability for edge detection in medical images, extracting not only strong edges but also weak ones.
In this paper, we propose an ICA-based approach for assessing image quality. Independent component analysis (ICA),
which is a kind of fundamental statistical model for natural images, could model images as linear superpositions of basis
images. The features given by ICA are suitable for image quality assessment because they resemble the representation
given by simple-cells in the mammalian primary visual cortex. The steps of the proposed approach are listed concisely as
follows: estimation of the basis images in the ICA model; extraction of image features from reference images and their corresponding distorted images; and calculation of image quality scores or scales. Our experimental results show that the proposed method achieves performance competitive with two other typical models, Structural SIMilarity (SSIM) and Visual Information Fidelity (VIF), when tested on the LIVE subjective database. Factors that may influence the performance, such as the size of the sliding window and the total number of image patches, are also discussed.
Volumetric region growing is an essential and important step in medical image processing and volume segmentation. Conventional schemes are implemented on the CPU with the 3D volume, but this does not fit current processing units such as the GPU, which excels at texture mapping and parallel processing. In this paper, we present a novel volumetric region growing scheme based on texture mapping. According to the features of texture mapping operations, the proposed scheme designs a texture chunk mapping operation to implement volumetric seeded region growing, so that every growth step is unified into one consistent type of texture mapping operation. Experimental results of image segmentation on volumetric medical CT data illustrate that the scheme can realize volumetric region growing for partitioning the 3D CT dataset.
In this paper, we propose an adaptive watermarking algorithm to embed an invisible digital watermark in the wavelet domain of ultrasonic images. By analyzing the characteristics of the detail sub-band coefficients of the ultrasonic image after the discrete wavelet transform (DWT), we use the mean and variance of the detail sub-bands to modify the wavelet coefficients adaptively, so that the embedded watermark is invisible to the human visual system (HVS) and adapted to the original image. We derive the just noticeable difference (JND), which describes the maximum watermark signal intensity that the various parts of the image can tolerate. With this digital watermarking technique, certification or confidentiality information can be embedded directly into original ultrasonic images, so that the replication and transmission of an ultrasonic image can be tracked efficiently. The copyright and ownership of ultrasonic images can therefore be protected, which is critical for authorized usage of ultrasonic image sources. The experimental results and attack analysis show that the proposed algorithm is effective and robust to ultrasonic image processing operations and geometric attacks.
Current DCT-based image enhancement techniques produce heavy artifacts when the enhancement factors are increased. To address this issue, we develop in this paper a new image enhancement algorithm in the DCT domain for radiologists to screen mammograms. In the proposed algorithm, given a target contrast value and a visual quality requirement, a genetic algorithm is used to search for the optimal parameter setting for image enhancement. The new algorithm can effectively reduce the artifacts introduced by enhancement. Both objective and subjective tests were used to verify the proposed algorithm, and the experimental results show that the enhanced images have reduced artifacts and better visual quality.
In this paper, we introduce a parallel algorithm that implements region growing on the GPU for the purpose of 3D organ segmentation. Extensive experiments have been executed on human CT data, and they show that the algorithm obtains accurate results at a speed about 10-20 times faster than traditional CPU methods.
Several improvements to the traditional region growing algorithms are also introduced in this paper. This method is
integrated in several surgery planning and surgery navigation systems and has achieved good clinical results.
Computer-aided diagnosis has become one of the major research subjects in medical imaging and diagnostic radiology.
Hypoxic-ischemic encephalopathy (HIE) remains a serious condition that causes significant mortality and long-term morbidity in neonates. We adopt self-organizing feature maps to segment tissues, such as white matter and grey
matter in the magnetic resonance images. The borderline between white matter and grey matter can be found and the
doubtful regions along the borderline can be localized; the features in the doubtful regions can then be quantified. The method can assist doctors in easily diagnosing whether a neonate has mild HIE.
The number of projections affects the reconstruction quality in a computed tomography system: a larger number of projections leads to better reconstruction quality, and infinitely many projections could reconstruct the specimen without mathematical error. On the other hand, fewer projections reduce the radiation dose, save time, and keep patients comfortable. The optimal number of projections is thus a compromise between mathematical accuracy and real-life requirements. The purpose
of this paper is to find the relationship between the projection number and the reconstruction quality. A micro-CT system
is developed to validate this relationship. However, the actual precision and signal-to-noise ratio are hard to determine experimentally, because the samples are not standards with known physical characteristics; we can only provide a limited number of projections and obtain relative reconstruction quality in the experiment. The half-scan method can reduce the radiation dose and achieves better results with sufficient sampling.
This paper presents a novel fusion method which is based on background brightness adjustment for multiple medical
microscopic images. In this process, the background of each microscopic image is first separated using the image's intensity histogram in HSI color space. The ratio and the difference between a selected reference intensity and the average background intensity of each image are calculated. Then, according to this ratio and difference, the intensity and the saturation of each image are adjusted, respectively. Finally, the overlap region between adjacent images is fused by linearly variable weights in RGB color space. Experimental results indicate that the method can effectively remove the differences in brightness and color between images and generate a satisfactory mosaicked image.
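
A minimal sketch of the two core operations, brightness normalization of one tile by the computed background ratio and linearly weighted blending of the overlap, for single-channel tiles; the paper additionally adjusts saturation via the difference term, which is omitted here.

```python
import numpy as np

def adjust_background(intensity, bg_mask, ref_intensity):
    """Brightness normalization for one tile: scale the intensity channel
    (values assumed in [0, 1]) so that the mean background level matches
    the chosen reference intensity."""
    ratio = ref_intensity / (intensity[bg_mask].mean() + 1e-9)
    return np.clip(intensity * ratio, 0.0, 1.0)

def blend_overlap(tile_a, tile_b, axis=1):
    """Fuse the overlap strips of two adjacent tiles (equal shapes) with
    linearly variable weights: tile_a dominates on its side, tile_b on
    the other."""
    w = np.linspace(1.0, 0.0, tile_a.shape[axis])
    shape = [1, 1]
    shape[axis] = -1
    w = w.reshape(shape)
    return w * tile_a + (1 - w) * tile_b
```
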
Extraction of the luminal contours from intravascular ultrasound (IVUS) images is very important for the analysis and diagnosis of coronary heart disease, and manual processing of large IVUS data sets is quite tedious and time consuming. This paper presents an algorithm for automatic detection of the luminal contours in intravascular ultrasound images based on fuzzy clustering and snakes. To overcome the difficulty of automatic contour initialization, fuzzy clustering and spline interpolation are used to obtain the initial contour. First, fuzzy clustering is used to detect the luminal contours on multiple longitudinal images. Then, the luminal contour points are transformed into the individual transversal images and spline-interpolated on these transversal images; the spline-interpolated contour is used as the initial contour of the snakes. We evaluated the automatic detection method against average contours obtained from expert manual segmentation as the ground truth, and the results demonstrated that our method is accurate and efficient.
Image edges arise where the gray level is discontinuous; they are a basic feature of image information and one of the hot topics in image processing. This paper analyzes traditional image edge detection operators and their problems, and uses adaptive lifting wavelet analysis, which adaptively adjusts the predict and update filters according to the local characteristics of the signal and thus matches the processed information accurately. At the same time, the wavelet edge detection operator is improved, yielding an edge detection algorithm suited to the adaptive lifting scheme, which is applied to medical image edge detection. Experimental results show that the proposed algorithm performs better than traditional algorithms.
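
A single non-adaptive lifting step makes the predict/update structure concrete: large prediction residuals (the detail signal) mark edge candidates, and the adaptive scheme would switch the two filters according to local signal characteristics. The linear predictor and periodic boundary handling below are assumptions.

```python
import numpy as np

def lifting_step(signal):
    """One (non-adaptive) lifting step on a 1-D signal: split into even
    and odd samples, predict the odds from even neighbors, update the
    evens. Large details d mark edge locations."""
    x = np.asarray(signal, dtype=float)
    x = x[: len(x) - (len(x) % 2)]            # force an even length
    even, odd = x[0::2], x[1::2]
    pred = 0.5 * (even + np.roll(even, -1))   # linear prediction of odds
    d = odd - pred                            # detail: large near edges
    s = even + 0.25 * (d + np.roll(d, 1))     # update: smooth approximation
    return s, d
```
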
In this paper, we propose a new non-homogeneous Markov random field model based on fuzzy membership to resolve
over-segmentation caused by traditional MRF model in the application of Brain MRI segmentation. Herein, we use fuzzy
membership to estimate the parameters in the model. Simulated brain MRIs with noise of different intensities and real brain MRIs are used in the experiments. The results illustrate that our method effectively reduces over-segmentation and
improves final segmentation results and precision, and its performance is more powerful than that of kernel-based fuzzy
c-means clustering algorithm and the traditional MRF model.
A PACS model based on digital watermarking is proposed by analyzing medical image features and PACS requirements from the point of view of information security; its core is a digital watermarking server and the corresponding processing module. Two kinds of digital watermarking algorithms are studied: a non-region-of-interest (NROI) digital watermarking algorithm based on the wavelet domain and block means, and a reversible watermarking algorithm based on extended difference and a pseudo-random matrix. The former is a robust lossy watermarking scheme: embedding in the NROI by wavelets provides a good way to protect the focal area (ROI) of images, and the introduction of the block-mean approach is a good scheme to enhance the anti-attack capability. The latter is a fragile lossless watermarking scheme, which is simple to implement and can localize tampering effectively, while the pseudo-random matrix enhances the correlation and security between pixels. Extensive experimental research has been completed, including the realization of the digital watermarking PACS model, the watermarking processing module and its anti-attack experiments, the digital watermarking server, and network transmission simulation experiments with medical images. Theoretical analysis and experimental results show that the designed PACS model can effectively ensure the confidentiality, authenticity, integrity, and security of medical image information.
This paper focuses on sophisticated, realistic head modeling based on the inhomogeneous and anisotropic conductivity distribution of head tissues. The finite element method (FEM) was used to build five-layer head volume conductor models with hexahedral elements from segmentation and mapping of DT-MRI data. The inhomogeneous conductivities of the scalp, CSF, and gray matter were then assigned according to a normal distribution around the mean value of each tissue. The electrical conductivity of brain tissue is both inhomogeneous and anisotropic at microscopic scales, so including inhomogeneity and anisotropy should improve the accuracy of simulation studies of MREIT, EEG, and MEG problems.
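As a concrete illustration of the conductivity assignment, the sketch below draws a per-element value from a normal distribution around each tissue's mean; the mean values and the relative spread are assumed placeholders, since the abstract gives neither.

    import numpy as np

    # Illustrative placeholder means (S/m), not the paper's values.
    TISSUE_MEAN_S_PER_M = {'scalp': 0.33, 'csf': 1.79, 'gray_matter': 0.33}

    def sample_conductivity(tissue_labels, rel_sigma=0.1, seed=0):
        """Draw one inhomogeneous conductivity per hexahedral element.

        tissue_labels: sequence of tissue-name strings, one per element.
        rel_sigma: assumed relative spread; the abstract only states a
        normal distribution around each tissue's mean.
        """
        rng = np.random.default_rng(seed)
        means = np.array([TISSUE_MEAN_S_PER_M[t] for t in tissue_labels])
        draws = rng.normal(means, rel_sigma * means)
        return np.clip(draws, 1e-4, None)   # keep conductivities physical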
Purpose:
We developed an automatic method for measuring vertebral bone density with QCT that uses internal references (muscle and subcutaneous fat) instead of the traditional external phantom.
Methods:
The automatic multistep approach begins by segmenting the periosteal and endosteal surfaces of the spine to define an elliptical ROI in cancellous bone, then segments muscle and subcutaneous fat in the spine image, and finally computes bone mineral density (BMD) in the elliptical ROI, as well as in trabecular and cortical bone ROIs, using muscle and subcutaneous fat as internal references. Segmentation uses a hybrid region-growing method combining local adaptive thresholding with morphological operations.
Results:
We performed with-phantom and without-phantom measurements on 94 clinical cases. For the with-phantom measurement, a physician manually defined the elliptical ROI; for the without-phantom measurement, our method obtained the BMD automatically. The intraclass correlation coefficient (ICC) between the two was 0.93. After removing cases whose muscle and fat values deviated from the mean by more than two standard deviations, the ICC rose to 0.999.
Conclusion:
The proposed without-phantom method measures spinal BMD automatically. By accurately segmenting cortical and trabecular bone, determining the ROI, and excluding inappropriate data, its BMD measurements prove highly consistent with the with-phantom method. The method is not suitable for patients whose muscle and fat values deviate strongly from the average.
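The internal-reference idea amounts to a two-point linear calibration from CT numbers to density. A minimal sketch, with assumed equivalent densities for muscle and fat; the paper's actual calibration constants and model are not given in the abstract.

    import numpy as np

    def bmd_from_internal_refs(hu_roi, hu_muscle, hu_fat,
                               rho_muscle=1.05, rho_fat=0.92):
        """Two-point linear HU-to-density calibration (sketch).

        The line passes through (hu_fat, rho_fat) and (hu_muscle,
        rho_muscle); both reference densities (g/cm^3) are assumptions.
        """
        slope = (rho_muscle - rho_fat) / (hu_muscle - hu_fat)
        intercept = rho_fat - slope * hu_fat
        return slope * np.asarray(hu_roi, float) + intercept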
Speckle is a multiplicative noise that degrades ultrasound images. In this paper, a statistical, spatially adaptive approach for speckle reduction in medical ultrasound images is proposed, based on posterior conditional mean (PCM) estimation in the nonsubsampled contourlet domain. Within this framework, a new statistical model for nonsubsampled contourlet coefficients is proposed: speckle noise is modeled with a Gaussian distribution, and the statistics of the nonsubsampled contourlet coefficients of the logarithmically transformed ultrasound image are modeled with a normal inverse Gaussian (NIG) distribution. Experiments on synthetically speckled and real ultrasound images demonstrate that the proposed method outperforms several existing methods both quantitatively and in visual quality.
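In one common form (the abstract names the distributions but not the equations, so this is an assumption), the multiplicative model and the log-domain posterior conditional mean read

\[ g(\mathbf{x}) = f(\mathbf{x})\, n(\mathbf{x}) \;\;\Rightarrow\;\; \log g = \log f + \log n, \]
\[ \hat{s}(y) = \mathbb{E}[s \mid y] = \frac{\int s \, p_\varepsilon(y - s)\, p_{\mathrm{NIG}}(s)\, ds}{\int p_\varepsilon(y - s)\, p_{\mathrm{NIG}}(s)\, ds}, \]

where \(y\) is a noisy nonsubsampled contourlet coefficient of the log-transformed image, \(p_\varepsilon\) is the Gaussian noise density, and \(p_{\mathrm{NIG}}\) the normal inverse Gaussian prior on the clean coefficient \(s\).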
During a CT scan, a patient's conscious or unconscious motion results in motion artifacts that degrade image quality and hamper accurate diagnosis and therapy. A precise motion-estimation and artifact-reduction method is therefore desirable for producing high-resolution images. Rigid motion can be decomposed into two components, translation and rotation. Because handling rotation and translation simultaneously is difficult, most previous studies on motion artifact reduction ignore rotation. The extended HLCC-based method, which does consider both simultaneously, relies on a search algorithm with high computational cost, so a method that does not rely on searching is desirable. In this paper, we focus on parallel-beam CT. We first propose a frequency-domain method to estimate rotational motion that is unaffected by translational motion, thus separating rotation estimation from translation estimation. We then combine it with the HLCC-based method into a new method for general rigid motion, called separative estimation and collective correction. Finally, we present numerical simulation results showing the accuracy and robustness of our approach.
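The key separation property can be illustrated independently of the HLCC machinery: the Fourier magnitude is invariant to translation, while a rotation of the image rotates the magnitude spectrum by the same angle. The generic sketch below (not the paper's method) estimates the angle by correlating angular energy profiles of the two spectra.

    import numpy as np

    def estimate_rotation(img_a, img_b, n_theta=360):
        """Translation-invariant rotation estimate via Fourier magnitudes."""
        def angular_profile(img):
            mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
            h, w = img.shape
            yy, xx = np.mgrid[:h, :w]
            # spectrum has point symmetry, so fold angles into [0, pi)
            theta = np.arctan2(yy - h / 2, xx - w / 2) % np.pi
            bins = (theta / np.pi * n_theta).astype(int) % n_theta
            return np.bincount(bins.ravel(), mag.ravel(), minlength=n_theta)
        pa, pb = angular_profile(img_a), angular_profile(img_b)
        # circular cross-correlation over angle via 1-D FFTs; the argmax
        # is the relative angular shift (sign convention aside)
        xc = np.fft.ifft(np.fft.fft(pa) * np.conj(np.fft.fft(pb))).real
        return xc.argmax() * 180.0 / n_theta   # degrees, modulo 180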
Multicore CPUs are becoming ubiquitous and are used in a growing range of applications. This paper proposes a multicore-based parallelized registration algorithm derived from image registration based on differential evolution. The algorithm takes full advantage of the parallelism of a multicore CPU, so a parallel evolutionary algorithm that traditionally required a supercomputer can run on a graphics workstation, realizing fast multicore-parallelized medical image registration. Experimental results show that the proposed algorithm matches the precision and stability of traditional registration based on differential evolution, converges quickly, and achieves a speedup growing almost linearly with the number of cores.
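The parallelization pattern is natural because differential-evolution trial vectors are scored independently; in registration the similarity evaluation dominates the cost, so it is what fans out across cores. A minimal sketch of one generation, assuming a pickleable objective that maps a transform-parameter vector to a cost.

    import numpy as np
    from multiprocessing import Pool

    def de_step(pop, fitness, objective, f=0.5, cr=0.9, pool=None, rng=None):
        """One DE/rand/1/bin generation with parallel trial scoring."""
        rng = rng or np.random.default_rng()
        n, d = pop.shape
        idx = np.array([rng.choice(np.delete(np.arange(n), i), 3, replace=False)
                        for i in range(n)])
        mutant = pop[idx[:, 0]] + f * (pop[idx[:, 1]] - pop[idx[:, 2]])
        cross = rng.random((n, d)) < cr
        cross[np.arange(n), rng.integers(0, d, n)] = True  # force one gene
        trial = np.where(cross, mutant, pop)
        # the expensive similarity evaluations fan out across cores
        scores = np.asarray(pool.map(objective, trial) if pool
                            else [objective(t) for t in trial])
        better = scores < fitness
        pop[better], fitness[better] = trial[better], scores[better]
        return pop, fitness

Used as `with Pool() as pool: de_step(pop, fitness, objective, pool=pool)` in a loop until convergence; with a serial objective, dropping `pool` reproduces the sequential baseline for comparison.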
To generate a digital brain atlas, we need to segment and extract brain structures from images of an individual brain. However, many brain structures can hardly be segmented accurately from their image content alone. The goal of this work is to develop an experimental platform for labeling, extracting, segmenting, and visualizing brain structures in MR images by leveraging the anatomical information and topological structure of a standard atlas. We propose the concept of standardization of brain MR images and introduce a standardization method based on geometry correction. The method first preprocesses the raw MR image, then automatically searches for the control points used to establish the correction equation, and finally completes the standardization of the MR image. Experimental results demonstrate the practicability and efficiency of the method, which can provide a theoretical foundation and experimental means for the diagnosis of brain disease.
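The correction equation itself is not given in the abstract; one minimal stand-in is a least-squares affine fit to the matched control points, sketched below.

    import numpy as np

    def affine_from_control_points(src_pts, dst_pts):
        """Least-squares affine correction from matched control points.

        With >= 3 point pairs, solve for the A, t minimizing
        ||A p + t - q||^2 over all pairs (p, q). A hypothetical
        simplification; the paper's correction equation may differ.
        """
        src = np.asarray(src_pts, float)      # (n, 2) points in raw image
        dst = np.asarray(dst_pts, float)      # (n, 2) points in atlas space
        M = np.hstack([src, np.ones((len(src), 1))])   # (n, 3) design matrix
        params, *_ = np.linalg.lstsq(M, dst, rcond=None)
        return params                          # (3, 2): [A | t] stacked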
This paper focuses on image segmentation, one of the key problems in medical image processing. A new medical image segmentation method is proposed based on the fuzzy c-means algorithm and spatial information. First, we classify the image into a region of interest and background using the fuzzy c-means algorithm. Then we use tissue gradients and the intensity inhomogeneity of regions to improve segmentation quality. The objective function is the sum of the mean intra-region variance and the reciprocal of the mean gradient along the region edge; minimizing this sum yields the optimal result. The results show that the clustering segmentation algorithm is effective.
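The stated objective can be written down directly. One reading, sketched below, sums the variance of both regions and the reciprocal of the mean gradient magnitude on a one-pixel boundary; the boundary extraction is a simple choice, not necessarily the paper's.

    import numpy as np
    from scipy.ndimage import binary_erosion

    def segmentation_score(img, mask, eps=1e-8):
        """Objective from the abstract: intra-region variance plus the
        reciprocal of the mean gradient along the region edge. Lower is
        better: homogeneous regions and strong edges are rewarded."""
        img = img.astype(float)
        gy, gx = np.gradient(img)
        grad = np.hypot(gx, gy)
        var_term = img[mask].var() + img[~mask].var()
        edge = mask & ~binary_erosion(mask)   # one-pixel boundary of the region
        edge_term = 1.0 / (grad[edge].mean() + eps)
        return var_term + edge_term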
Patient motion during scanning introduces artifacts into the reconstructed MRI image. Periodically Rotated Overlapping Parallel Lines with Enhanced Reconstruction (PROPELLER) MRI is an effective technique for correcting motion artifacts. In this paper, an iterative method combining the preconditioned conjugate gradient (PCG) algorithm with nonuniform fast Fourier transform (NUFFT) operations is applied to PROPELLER MRI. The drawback of this method is its long reconstruction time. To make it viable in clinical settings, we propose a parallel optimization of the iterative method on a modern GPU using CUDA. Both simulated data and in vivo PROPELLER MRI data were reconstructed to test the method. The experimental results show that the GPU-based iterative method improves image quality compared with the gridding method, with comparable reconstruction time.
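The PCG/NUFFT iteration reduces to conjugate gradient on the normal equations A^H A x = A^H y, where A is the NUFFT sampling the PROPELLER blades. A minimal, unpreconditioned sketch with the operator left abstract; each apply_AHA call is the step the paper offloads to the GPU.

    import numpy as np

    def cg_solve(apply_AHA, b, x0=None, n_iter=20, tol=1e-6):
        """Conjugate gradient for A^H A x = b, with b = A^H y.

        apply_AHA stands in for the NUFFT forward/adjoint pair;
        absolute tolerance kept for brevity."""
        x = np.zeros_like(b) if x0 is None else x0.copy()
        r = b - apply_AHA(x)
        p = r.copy()
        rs = np.vdot(r, r).real
        for _ in range(n_iter):
            Ap = apply_AHA(p)
            alpha = rs / np.vdot(p, Ap).real
            x += alpha * p
            r -= alpha * Ap
            rs_new = np.vdot(r, r).real
            if rs_new < tol * tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x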
Myocardial electrical excitation propagation is anisotropic, with the most rapid spread of current along the long axis of the fiber. Fiber orientation is also an important determinant of myocardial mechanics, so myocardial fiber orientations are very important for heart modeling and simulation. Accurate construction of myocardial fiber orientations, however, remains a challenge. The purpose of this paper is to construct a geometrical heart model with myocardial fiber orientations based on CT and 3D laser-scanned images. The iterative closest point (ICP) algorithm was used to register the fiber orientations with the heart geometry.
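A minimal point-to-point ICP sketch of the registration step, using closest-point correspondences and the SVD-based Procrustes solution; refinements a real pipeline would need, such as outlier rejection, are omitted.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp(src, dst, n_iter=30):
        """Register (n, 3) point cloud src onto (m, 3) cloud dst.
        Returns R (3x3), t (3,) with dst ~ src @ R.T + t."""
        tree = cKDTree(dst)
        R, t = np.eye(3), np.zeros(3)
        cur = src.copy()
        for _ in range(n_iter):
            _, idx = tree.query(cur)           # closest-point correspondences
            q = dst[idx]
            mu_p, mu_q = cur.mean(0), q.mean(0)
            H = (cur - mu_p).T @ (q - mu_q)    # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R_step = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # no reflections
            t_step = mu_q - R_step @ mu_p
            cur = cur @ R_step.T + t_step
            R, t = R_step @ R, R_step @ t + t_step         # accumulate
        return R, t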
GLUT4 is responsible for insulin-stimulated glucose uptake into fat cells, and describing its dynamic behavior gives insight into the working mechanisms and structures of these cells. Quantitative analysis of this dynamic process requires tracking hundreds of GLUT4 vesicles, which appear as bright spots in noisy image sequences. In this paper, a 3D tracking algorithm built on a Bayesian probabilistic framework is put forward, exploiting the unique features of TIRF microscopy. A brightness-correction procedure is first applied so that a vesicle's intensity is constant over time and affected only by spatial factors. Tracking is then formalized as a state-estimation problem, solved with a particle filter augmented by a sub-optimizer that steers particles toward regions of high likelihood. Once each tracked vesicle is located in the image plane, its depth can be inferred indirectly from the exponential relationship between its intensity and its vertical position. The experimental results indicate that vesicles are tracked well under different motion styles; moreover, the algorithm provides the depth of each tracked vesicle.
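The depth inference follows from the exponential decay of the TIRF evanescent field, I(z) = I0 * exp(-z / d), inverted to z = d * ln(I0 / I). A one-function sketch; the penetration depth d and the coverslip reference intensity I0 are assumed calibration inputs.

    import numpy as np

    def tirf_depth(intensity, i0, d_penetration=100.0):
        """Depth (same units as d_penetration, e.g. nm) of a vesicle from
        its brightness-corrected intensity, assuming evanescent decay
        I(z) = i0 * exp(-z / d)."""
        return d_penetration * np.log(i0 / np.asarray(intensity, float))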
The quantitative analysis system for real-time myocardial contrast echocardiography measures A (microvascular cross-sectional area, or myocardial blood volume), β (myocardial microbubble velocity), A·β (myocardial blood flow), A-EER (endo-epi ratio of A), β-EER, and A·β-EER from the signal intensity of real-time 2D grayscale and power Doppler images. It draws time-intensity curves showing how microbubble scattering intensity varies in the subendocardial and subepicardial layers across myocardial segments, and estimates the hemodynamic parameters by nonlinear regression. The system also conforms to the Digital Imaging and Communications in Medicine (DICOM) standard and can be integrated into a picture archiving and communication system (PACS). Clinical examples indicate the clinical effectiveness of the system and the reliability of its quantitative analysis techniques.
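The abstract does not give its regression model, but the standard destruction-replenishment fit for such time-intensity curves is y(t) = A(1 - exp(-βt)), with A·β estimating flow; assuming that classic form, the fit is a few lines with SciPy.

    import numpy as np
    from scipy.optimize import curve_fit

    def fit_replenishment(t, intensity):
        """Fit y(t) = A * (1 - exp(-beta * t)) to a time-intensity curve
        and return A, beta, and the flow estimate A * beta."""
        t = np.asarray(t, float)
        y = np.asarray(intensity, float)
        model = lambda t, A, beta: A * (1.0 - np.exp(-beta * t))
        (A, beta), _ = curve_fit(model, t, y, p0=(y.max(), 1.0))
        return A, beta, A * beta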
Because X-ray images have low contrast resolution and edge sharpness, this paper presents a novel medical image segmentation method based on level sets. The traditional Chan-Vese model, based on the simplified Mumford-Shah model, cannot segment images with intensity inhomogeneity. We construct an energy functional over each point's local region; the partial differential equations (PDEs) of curve evolution are obtained by minimizing this energy functional, and extending it over the whole image yields a global minimum. Our model can segment images with intensity inhomogeneity. Experimental results verify the effectiveness and robustness of this segmentation method.
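The abstract describes but does not write out its localized functional; one widely used energy of this general type (the local-binary-fitting form, given here as an assumption rather than the paper's exact model) is

\[ E(\phi) = \sum_{i=1}^{2} \lambda_i \int \!\! \left( \int K_\sigma(\mathbf{x}-\mathbf{y})\, \lvert I(\mathbf{y}) - f_i(\mathbf{x}) \rvert^2 \, M_i(\phi(\mathbf{y}))\, d\mathbf{y} \right) d\mathbf{x} \; + \; \nu \int \lvert \nabla H(\phi) \rvert \, d\mathbf{x}, \]

with \(M_1 = H(\phi)\), \(M_2 = 1 - H(\phi)\), \(K_\sigma\) a Gaussian window, and \(f_1(\mathbf{x}), f_2(\mathbf{x})\) the intensities fitted inside the window centered at \(\mathbf{x}\); minimizing over \(\phi\) yields the curve-evolution PDE, and integrating the windows over the whole image gives the global character noted above.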
Based on the main characteristics of X-ray imaging, an X-ray display card was designed and debugged using the basic principle of correlated double sampling (CDS) combined with embedded computer technology. The CCD sensor drive circuit and its corresponding procedures were designed, along with the filtering and sample-and-hold circuits, and data exchange over the PC104 bus was implemented. A complex programmable logic device provides the gating and timing logic, completing the functions of counting, reading CPU control instructions, triggering exposure, and controlling the sample-and-hold. Circuit components were adjusted according to image quality and noise analysis, and high-quality images were obtained.
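Correlated double sampling itself is one subtraction per pixel, which a trivial sketch makes explicit.

    import numpy as np

    def cds(reset_samples, signal_samples):
        """Correlated double sampling: each pixel's output is its signal
        sample minus its own reset sample, cancelling the reset (kTC)
        noise and the low-frequency offsets common to both samples."""
        return np.asarray(signal_samples, float) - np.asarray(reset_samples, float)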
Although a CT device gives doctors a series of 2D medical images, it is difficult for doctors to form a vivid view of the diseased part from them. To help doctors plan surgery, a virtual surgery system was developed based on three-dimensional visualization. After the diseased part of the patient is scanned by the CT device, a whole 3D view is built by the system's 3D reconstruction module. Cutting out a part is the function doctors use most in real surgery: a curve is created in 3D space, and points can be added to the curve automatically or manually. The positions of the points determine the shape of the cutting curve, so the curve can be adjusted by manipulating the points. If the result of the cut is unsatisfactory, all operations can be cancelled and restarted. This flexible virtual surgery brings convenience to real surgery. In contrast to existing medical image processing systems, the virtual surgery module lets a procedure be rehearsed many times, until the doctors have enough confidence to start the real operation. Because the system provides more 3D information about the diseased part, difficult operations can also be discussed by expert doctors in different cities over the Internet. It is a useful tool for understanding the character of the diseased part and thus reducing surgical risk.
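The curve editing described above is naturally served by an interpolating spline through the user-placed points, so moving one point reshapes only its neighbourhood. The paper does not name its curve type; the sketch below uses a Catmull-Rom spline as one standard choice.

    import numpy as np

    def catmull_rom(points, samples_per_seg=20):
        """Sample an interpolating Catmull-Rom spline through the given
        (n, 3) control points; endpoints are clamped by duplication."""
        p = np.asarray(points, float)
        p = np.vstack([p[0], p, p[-1]])        # clamp the ends
        out = []
        for i in range(1, len(p) - 2):
            p0, p1, p2, p3 = p[i - 1], p[i], p[i + 1], p[i + 2]
            for t in np.linspace(0, 1, samples_per_seg, endpoint=False):
                out.append(0.5 * ((2 * p1) + (p2 - p0) * t
                           + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                           + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3))
        out.append(p[-2])                      # close at the last point
        return np.array(out)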