images. We propose to use 2D Fourier filters in different transform domains, including the Fourier, wavelet, and nonsubsampled contourlet domains, to eliminate this kind of noise. Using image entropy and vessel density as the metrics to evaluate noise-elimination performance, we found that filtering after the nonsubsampled contourlet transform (NSCT) was the best choice among these approaches. For vessel preservation, wavelet-domain filtering has the advantage of preserving the signal-to-noise ratio, while NSCT filtering preserves structural similarity to the greatest extent.
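Image entropy, one of the two evaluation metrics above, can be sketched as the Shannon entropy of an image's gray-level histogram. The helper below is a minimal numpy illustration; the function name, bin count, and test images are assumptions, not the paper's code.

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (in bits) of an image's gray-level histogram.

    Lower entropy after filtering suggests structured noise has been
    suppressed. Hypothetical helper, not the paper's implementation.
    """
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0 * log 0 = 0)
    return float(-(p * np.log2(p)).sum())

# A constant image carries zero entropy; uniform noise carries close to 8 bits.
flat = np.full((64, 64), 128, dtype=np.uint8)
rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
```

In practice the entropy would be computed on the filtered image and compared across the Fourier-, wavelet-, and NSCT-domain results.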
The visualization in a single view of abnormal association patterns obtained from mining lengthy marine raster datasets presents a great challenge for traditional visualization techniques. On the basis of the representation model of marine abnormal association patterns, an interactive visualization framework is designed with three complementary components: three-dimensional pie charts, two-dimensional variation maps, and triple-layer mosaics; the details of their implementation steps are given. The combination of the three components allows users to request visualization of the association patterns from global to detailed scales. The three-dimensional pie chart component visualizes the locations where more marine environmental parameters are interrelated and shows the parameters that are involved. The two-dimensional variation map component gives the spatial distribution of interactions between each marine environmental parameter and other parameters. The triple-layer mosaics component displays the detailed association patterns at locations specified by the users. Finally, the effectiveness and the efficiency of the proposed visualization framework are demonstrated using a prototype system with three visualization interfaces based on ArcEngine 10.0, and the abnormal association patterns among marine environmental parameters in the Pacific Ocean are visualized.
This paper deals with automatic grading of nuclear cataract (NC) from slit-lamp images in order to reduce the efforts in traditional manual grading. Existing works on this topic have mostly used brightness and color of the eye lens for the task but not the visibility of lens parts. The main contribution of this paper is in utilizing the visibility cue by proposing gray level image gradient-based features for automatic grading of NC. Gradients are important for the task because in a healthy eye, clear visibility of lens parts leads to distinct edges in the lens region, but these edges fade as severity of cataract increases. Experiments performed on a large dataset of over 5000 slit-lamp images reveal that the proposed features perform better than the state-of-the-art features in terms of both speed and accuracy. Moreover, fusion of the proposed features with the prior ones gives results better than any of the two used alone.
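The visibility cue can be illustrated with a single gradient-magnitude statistic: distinct lens-part edges in a clear lens raise it, and faded edges in a severe cataract lower it. This is a sketch under the assumption of a grayscale lens image stored as a numpy array; the paper's actual feature set is not reproduced here.

```python
import numpy as np

def gradient_visibility_feature(lens_img):
    """Mean gradient magnitude over a (hypothetical) lens region.

    Clear visibility of lens parts yields distinct edges and large
    gradients; as cataract severity increases, edges fade and this
    value drops. Illustrative only.
    """
    gy, gx = np.gradient(lens_img.astype(float))
    return float(np.hypot(gx, gy).mean())

# A sharp step edge (clear lens boundary) vs. a flat patch (faded edges):
sharp = np.zeros((32, 32)); sharp[:, 16:] = 255.0
faded = np.full((32, 32), 128.0)
```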
The optic disk localization plays an important role in developing computer-aided diagnosis (CAD) systems for ocular
diseases such as glaucoma, diabetic retinopathy and age-related macular degeneration. In this paper, we propose an
intelligent fusion of methods for the localization of the optic disk in retinal fundus images. Three different approaches
are developed to detect the location of the optic disk separately. The first is the maximum vessel crossing
method, which finds the region with the greatest number of blood vessel crossing points. The second is the multichannel thresholding method, targeting the area with the highest intensity. The final method searches the vertical and horizontal regions of interest separately on the basis of blood vessel structure and neighborhood entropy profile. Finally, these three methods are combined using an intelligent fusion method to improve the overall accuracy. The proposed algorithm was tested on the STARE database and the ORIGA<sup>light</sup> database, each consisting of images with various pathologies. The preliminary result achieves an accuracy of 81.5% on the STARE database, while a higher accuracy of 99% is obtained on the ORIGA<sup>light</sup> database. The proposed method outperforms each individual approach as well as a state-of-the-art method that utilizes an intensity-based approach. These results demonstrate a high potential for this method to be used in retinal CAD systems.
This paper presents an approach to gallbladder shape comparison by using 3D shape modeling and decomposition. The
gallbladder models can be used for shape anomaly analysis and model comparison and selection in image guided robotic
surgical training, especially for laparoscopic cholecystectomy simulation. The 3D shape of a gallbladder is first
represented as a surface model, reconstructed from the contours segmented in CT data by a scheme of propagation-based
voxel learning and classification. To better extract shape features, the surface mesh is further down-sampled by a
decimation filter and smoothed by the Taubin algorithm, followed by an advancing front algorithm to further
enhance the regularity of the mesh. Multi-scale curvatures are then computed on the regularized mesh for robust
saliency landmark localization on the surface. The shape decomposition is proposed based on the saliency landmarks
and the concavity, measured by the distance from each surface point to the convex hull. With a given tolerance, the 3D
shape can be decomposed into and represented by 3D ellipsoids, which reveal the shape topology and anomalies of a
gallbladder. The features based on the decomposed shape model are proposed for gallbladder shape comparison, which
can be used for new model selection. We have collected 19 sets of abdominal CT scan data with gallbladders, some
with normal and some with abnormal shapes. The experiments show that the decomposed shapes reveal
important topological features.
Untreated glaucoma leads to permanent damage of the optic nerve and resultant visual field loss, which can
progress to blindness. As glaucoma often produces additional pathological cupping of the optic disc (OD), the cup-disc-ratio is one measure that is widely used for glaucoma diagnosis. This paper presents an OD localization
method that automatically segments the OD and so can be applied for the cup-disc-ratio based glaucoma diagnosis.
The proposed OD segmentation method is based on the observations that the OD is normally much
brighter and at the same time has smoother texture characteristics compared with other regions within retinal
images. Given a retinal image, we first capture the OD's smooth texture characteristic by a contrast image that
is constructed based on the local maximum and minimum pixel lightness within a small neighborhood window.
The centre of the OD can then be determined according to the density of the candidate OD pixels, which are
detected as the retinal image pixels of lowest contrast. After that, an OD region is approximately determined by
a pair of morphological operations and the OD boundary is finally determined by an ellipse fitted to the
convex hull of the detected OD region. Experiments over 71 retinal images of different qualities show that the
overlap between the OD boundary ellipse determined by our proposed method and the one manually plotted by an
ophthalmologist reaches up to 90.37%.
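The contrast-image step described above (local maximum minus local minimum pixel lightness within a small neighborhood window) can be sketched as follows; the window size, helper name, and toy image are illustrative assumptions.

```python
import numpy as np

def local_contrast(img, w=3):
    """Contrast image: local maximum minus local minimum lightness in a
    (2w+1) x (2w+1) window. Low values flag the OD's smooth texture.
    Plain-numpy sketch, not the paper's implementation.
    """
    h, ww = img.shape
    out = np.empty((h, ww), dtype=float)
    for i in range(h):
        for j in range(ww):
            win = img[max(0, i - w):i + w + 1, max(0, j - w):j + w + 1]
            out[i, j] = float(win.max()) - float(win.min())
    return out

# A smooth bright blob (disc-like) on a dark background: the blob's
# interior has zero contrast, while its boundary contrast is maximal.
img = np.zeros((16, 16), dtype=np.uint8)
img[4:12, 4:12] = 200
contrast = local_contrast(img)
```

Candidate OD pixels would then be those of lowest contrast, with the OD centre taken where their density peaks.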
Pathological myopia is the seventh leading cause of blindness. We introduce a framework based on PAMELA
(PAthological Myopia dEtection through peripapilLary Atrophy) for the detection of pathological myopia from fundus
images. The framework consists of a pre-processing stage which extracts a region of interest centered on the optic disc.
Subsequently, three analysis modules focus on detecting specific visual indicators. The optic disc tilt ratio module gives
a measure of the axial elongation of the eye through inference from the deformation of the optic disc. In the texture-based ROI assessment module, contextual knowledge is used to demarcate the ROI into four distinct, clinically relevant
zones in which information from an entropy transform of the ROI is analyzed and metrics generated. In particular, the
preferential appearance of peripapillary atrophy (PPA) in the temporal zone compared to the nasal zone is utilized by
calculating ratios of the metrics. The PPA detection module obtains an outer boundary through a level-set method, and
subtracts the optic disc region from it. Temporal and nasal zones are obtained from the remnants to
generate associated hue and color values. The outputs of the three modules are used as inputs to an SVM model to determine the
presence of pathological myopia in a retinal fundus image. Using images from the Singapore Eye Research Institute, the
proposed framework reported an optimized accuracy of 90% and a sensitivity and specificity of 0.85 and 0.95
respectively, indicating promise for the use of the proposed system as a screening tool for pathological myopia.
Glaucoma is a leading cause of blindness. The presence and extent of progression of glaucoma can be determined if the
optic cup can be accurately segmented from retinal images. In this paper, we present a framework which improves the
detection of the optic cup. First, a region of interest is obtained from the retinal fundus image, and a pallor-based
preliminary cup contour estimate is determined. Patches are then extracted from the ROI along this contour. To improve
the usability of the patches, adaptive methods are introduced to ensure the patches are within the optic disc and to
minimize redundant information. The patches are then analyzed for vessels by an edge transform which generates pixel
segments of likely vessel candidates. Wavelet, color and gradient information are used as input features for a SVM
model to classify the candidates as vessel or non-vessel. Subsequently, a rigorous non-parametric method is adopted in
which a bi-stage multi-resolution approach is used to probe and localize kinks along the vessels. Finally,
contextual information is used to fuse pallor and kink information to obtain an enhanced optic cup segmentation. Using
a batch of 21 images obtained from the Singapore Eye Research Institute, the new method yields a 12.64% reduction
in the average overlap error against a pallor-only cup, indicating viable improvements in the segmentation and supporting
the use of kinks for optic cup detection.
Retinal image analysis is used by clinicians to diagnose and identify any pathologies present in a patient's eye. The
developments and applications of computer-aided diagnosis (CAD) systems in medical imaging have been rapidly
increasing over the years. In this paper, we propose a system to classify left and right eye retinal images automatically.
This paper describes our two-pronged approach to classify left and right retinal images by using the position of the
central retinal vessel within the optic disc, and by the location of the macula with respect to the optic nerve head. We
present a framework to automatically identify the locations of the key anatomical structures of the eye: the macula, optic
disc, central retinal vessels within the optic disc, and the ISNT regions. An SVM model for left and right eye retinal image
classification is trained based on the features from the detection and segmentation. An advantage of this is that other
image processing algorithms can be focused on regions where diseases or pathologies are more likely to occur, thereby
increasing the efficiency and accuracy of the retinal CAD system/pathology detection.
We have tested our system on 102 retinal images, consisting of 51 left and 51 right images, and achieved an accuracy
of 94.1176%. The high experimental accuracy and robustness of this system demonstrates that there is potential for this
system to be integrated and applied with other retinal CAD systems, such as ARGALI, for a priori information in
automatic mass screening and diagnosis of retinal diseases.
The accurate localization of the optic cup in retinal images is important to assess the cup to disc ratio (CDR) for
glaucoma screening and management. Glaucoma is physiologically assessed by the increased excavation of the optic cup
within the optic nerve head, also known as the optic disc. The CDR is thus an important indicator of risk and severity of
glaucoma. In this paper, we propose a method of determining the cup boundary using non-stereographic retinal images
by the automatic detection of a morphological feature within the optic disc known as kinks. Kinks are defined as the
bending of small vessels as they traverse from the disc to the cup, providing physiological validation for the cup
boundary. To detect kinks, localized patches are first generated from a preliminary cup boundary obtained via a level-set method.
Features obtained using edge detection and the wavelet transform are combined using a statistical rule to identify
likely vessel edges. The kinks are then obtained automatically by analyzing the detected vessel edges for angular
changes, and these kinks are subsequently used to obtain the cup boundary. A set of retinal images from the Singapore
Eye Research Institute was obtained to assess the performance of the method, with each image being clinically graded
for the CDR. From experiments, when kinks were used, the error on the CDR was reduced to less than 0.1 CDR units
relative to the clinical CDR, which is within the intra-observer variability of 0.2 CDR units.
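The CDR itself reduces to a simple ratio of the segmented cup and disc extents; a minimal sketch with hypothetical helper and variable names:

```python
def cup_to_disc_ratio(cup_diameter, disc_diameter):
    """Cup-to-disc ratio from segmented diameters (e.g. in pixels).

    Hypothetical helper names; the measure itself is the standard CDR.
    """
    if disc_diameter <= 0:
        raise ValueError("disc diameter must be positive")
    return cup_diameter / disc_diameter

# e.g. a 90-pixel cup inside a 200-pixel disc:
cdr = cup_to_disc_ratio(90, 200)
```

An error under 0.1 CDR units relative to the clinical grade, as reported above, sits comfortably within the stated intra-observer variability of 0.2 units.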
Glaucoma is an irreversible ocular disease leading to permanent blindness. However, early detection can be effective in
slowing or halting the progression of the disease. Physiologically, glaucoma progression is quantified by increased
excavation of the optic cup. This progression can be quantified in retinal fundus images via the optic cup to disc ratio
(CDR), since in increased glaucomatous neuropathy, the relative size of the optic cup to the optic disc is increased. The
ARGALI framework consists of various segmentation approaches employing level sets, color intensity thresholds and
ellipse fitting for the extraction of the optic cup and disc from retinal images as preliminary steps. Following this,
different combinations of the obtained results are then utilized to calculate the corresponding CDR values. The
individual results are subsequently fused using a neural network, which is trained
with a set of 100 retinal images. For testing, a separate set of 40 images is then used to compare the obtained CDR against a
clinically graded CDR, and it is shown that the neural network-based result performs better than the individual
components, with 96% of the results within intra-observer variability. The results indicate good promise for the further
development of ARGALI as a tool for the early detection of glaucoma.
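The fusion step can be illustrated on toy numbers, with an ordinary least-squares combiner standing in for ARGALI's neural network (an assumption made for brevity; all numbers below are synthetic):

```python
import numpy as np

# Toy CDR estimates from three hypothetical segmentation pipelines
# (rows: images, columns: pipelines) and the clinically graded CDR.
estimates = np.array([[0.42, 0.50, 0.46],
                      [0.61, 0.66, 0.63],
                      [0.30, 0.38, 0.33],
                      [0.55, 0.60, 0.57]])
clinical = np.array([0.45, 0.63, 0.33, 0.57])

# Learn fusion weights by least squares (a stand-in for the trained
# neural network) and apply them to fuse the individual estimates.
w, *_ = np.linalg.lstsq(estimates, clinical, rcond=None)
fused = estimates @ w
```

The actual system learns a non-linear mapping, but the principle is the same: the fused CDR tracks the clinical grade more closely than any single pipeline.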
Medical image segmentation is a challenging process due to possible image over-segmentation and under-segmentation
(leaking). The CALM medical image segmentation system is constructed with an innovative scheme that cascades
threshold level-set and region-growing segmentation algorithms using Union and Intersection set operators. These set
operators help to balance the over-segmentation rate and under-segmentation rate of the system respectively. While
adjusting the curvature scalar parameter in the threshold level-set algorithm, we observe that the abrupt change in the
size of the segmented areas coincides with the occurrences of possible leaking. Instead of randomly choosing a value or
using the system-default curvature scalar values, this observation prompts us to use the following criterion in CALM to
automatically determine the optimal curvature values γ and prevent the occurrence of leaking: ∂<sup>2</sup>S/∂γ<sup>2</sup> ≥ M, where S is the
size of the segmented area and M is a large positive number.
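The criterion above can be sketched as a discrete second difference of the segmented-area curve S(γ) on a uniform grid of curvature values; the threshold M and all numbers below are hypothetical.

```python
def leaking_curvatures(gammas, areas, M=1000.0):
    """Return curvature values where |d2S/dgamma2| >= M, i.e. where the
    segmented area S jumps abruptly -- a sign of possible leaking.

    Discrete second difference on a uniform gamma grid; the threshold
    M is hypothetical and would be tuned per dataset in practice.
    """
    h = gammas[1] - gammas[0]
    flagged = []
    for k in range(1, len(areas) - 1):
        d2 = (areas[k + 1] - 2 * areas[k] + areas[k - 1]) / (h * h)
        if abs(d2) >= M:
            flagged.append(gammas[k])
    return flagged

# Segmented area grows gently, then explodes once the contour leaks
# into a neighbouring organ (synthetic numbers):
gammas = [0.1, 0.2, 0.3, 0.4, 0.5]
areas = [500, 510, 520, 530, 5000]
```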
Motivated for potential applications in organ transplant and analysis, the CALM system is tested on the segmentation of
the kidney regions from the Magnetic Resonance images taken from the National University Hospital of Singapore. Due
to the nature of MR imaging, low-contrast, weak edges and overlapping regions of adjacent organs at kidney boundaries
are frequently seen in the datasets and hence kidney segmentation is prone to leaking. The kidney segmentation accuracy
rate achieved by CALM is 22% better than those achieved by the component algorithms or by the system without
the leaking detection mechanism. CALM is easy to implement and can be applied to many applications besides kidney segmentation.
Cataract is one of the leading causes of blindness worldwide. A computer-aided approach to assess nuclear cataract
automatically and objectively is proposed in this paper. An enhanced Active Shape Model (ASM) is investigated to
extract a robust lens contour from slit-lamp images. The mean intensity in the lens area, the color information on the
central posterior subcapsular reflex, and the profile on the visual axis are selected as the features for grading. A Support
Vector Machine (SVM) scheme is proposed to grade nuclear cataract automatically. The proposed approach has been
tested using the lens images from Singapore National Eye Centre. The mean error between the automatic grading and
grader's decimal grading is 0.38. Statistical analysis shows that 97.8% of the automatic grades are within one grade
of the human grader's integer grades. Experimental results indicate that the proposed automatic grading approach
is promising in facilitating nuclear cataract diagnosis.
In recent years, more and more computer-aided diagnosis (CAD) systems are being used routinely in hospitals. Image-based
knowledge discovery plays an important role in many CAD applications, which have great potential to be integrated
into next-generation picture archiving and communication systems (PACS). Robust medical image segmentation
tools are essential for such discovery in many CAD applications. In this paper we present a platform with the necessary
tools for performance benchmarking for algorithms of liver segmentation and volume estimation used for liver
transplantation planning. It includes an abdominal computed tomography (CT) image database (DB), annotation tools, a
ground truth DB, and performance measure protocols. The proposed architecture is generic and can be used for other
organs and imaging modalities. In the current study, approximately 70 sets of abdominal CT images with normal livers
have been collected and a user-friendly annotation tool is developed to generate ground truth data for a variety of organs,
including 2D contours of liver, two kidneys, spleen, aorta and spinal canal. Abdominal organ segmentation algorithms
using 2D atlases and 3D probabilistic atlases can be evaluated on the platform. Preliminary benchmark results from the
liver segmentation algorithms which make use of statistical knowledge extracted from the abdominal CT image DB are
also reported. We aim to increase the CT scans to about 300 sets in the near future and plan to make the DBs
available to the medical imaging research community for performance benchmarking of liver segmentation algorithms.
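A typical performance-measure protocol on such a platform compares a segmentation mask against its ground-truth annotation. The Dice overlap below is a common choice for liver segmentation benchmarking, shown here as an illustrative sketch rather than the platform's exact protocol.

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Dice overlap 2|A∩B| / (|A| + |B|) between two binary masks.

    Illustrative benchmark measure, not necessarily the platform's
    exact protocol.
    """
    seg = seg.astype(bool)
    gt = gt.astype(bool)
    denom = seg.sum() + gt.sum()
    if denom == 0:
        return 1.0                       # two empty masks agree perfectly
    return 2.0 * np.logical_and(seg, gt).sum() / denom

gt = np.zeros((10, 10), dtype=bool)
gt[2:8, 2:8] = True                      # 36-pixel ground-truth "liver"
seg = np.zeros((10, 10), dtype=bool)
seg[3:8, 2:8] = True                     # 30-pixel segmentation, all inside gt
```

The same function applies slice-wise to 2D contours rasterized as masks, or voxel-wise to full 3D volumes for volume estimation.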