This paper presents an imaging-genomic pipeline to derive three-dimensional intra-tumor heterogeneity features from contrast-enhanced CT images and correlate them with gene mutation status. The pipeline is demonstrated on CT scans of patients with clear cell renal cell carcinoma (ccRCC) from The Cancer Genome Atlas. About 15% of reported ccRCC cases harbor BRCA1-associated protein 1 (BAP1) gene alterations, which are associated with high tumor grade and poor prognosis. We hypothesized that BAP1 mutation status can be detected using computationally derived image features. The molecular data pertaining to gene mutation status were obtained from cBioPortal. Correlation of the image features with gene mutation status was assessed using the Mann-Whitney-Wilcoxon rank-sum test. We also used the random forests classifier in the Waikato Environment for Knowledge Analysis (WEKA) software to assess the ability of the computationally derived image features to discriminate ccRCC cases with BAP1 mutations. Receiver operating characteristic (ROC) curves were obtained using a leave-one-out cross-validation procedure. Our results show that our model can predict BAP1 mutation status with a high degree of sensitivity and specificity. This framework demonstrates a methodology for noninvasive disease biomarker detection from contrast-enhanced CT images.
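The association testing and cross-validated classification described above can be sketched as follows. This is a minimal illustration on synthetic stand-in data (not the study's actual features), using scipy and scikit-learn in place of WEKA:

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)

# Synthetic stand-in: 4 heterogeneity features per tumor, with the
# BAP1-mutant cases (label 1) shifted relative to wild type (label 0).
n_wt, n_mut = 30, 10
features = np.concatenate([rng.normal(0.0, 1.0, (n_wt, 4)),
                           rng.normal(1.5, 1.0, (n_mut, 4))])
labels = np.array([0] * n_wt + [1] * n_mut)

# Per-feature association with mutation status (rank-sum test).
for j in range(features.shape[1]):
    stat, p = mannwhitneyu(features[labels == 0, j], features[labels == 1, j])
    print(f"feature {j}: U={stat:.1f}, p={p:.4f}")

# Leave-one-out cross-validated random-forest scores, then ROC AUC.
scores = np.empty(len(labels), dtype=float)
for train, test in LeaveOneOut().split(features):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(features[train], labels[train])
    scores[test] = clf.predict_proba(features[test])[:, 1]
print("LOOCV ROC AUC:", round(roc_auc_score(labels, scores), 3))
```

Refitting the classifier inside the leave-one-out loop keeps each held-out case entirely out of training, which is what makes the resulting ROC estimate unbiased.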
This paper describes our preliminary investigations into deriving and characterizing coarse-level textural regions present
in the lung field on chest radiographs using unsupervised grow-cut (UGC), a cellular-automaton-based unsupervised
segmentation technique. The segmentation was performed on a publicly available data set of chest radiographs. The
algorithm is well suited to this application because it automatically converges to a natural segmentation of the image from
random seed points using low-level image features such as pixel intensity values and texture features.
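A minimal, intensity-only sketch of the grow-cut cellular automaton is shown below; the full UGC method also uses texture features, and the image and seeds here are synthetic. Each cell defends its label with a strength, and a labeled neighbor captures the cell when its attack (its own strength, attenuated by the intensity difference) exceeds that strength:

```python
import numpy as np

def grow_cut(image, seeds, n_iter=50):
    """Minimal grow-cut cellular automaton using intensity only.

    image : 2D array with values in [0, 1]
    seeds : 2D int array; 0 = unlabeled, >0 = seed label
    """
    labels = seeds.copy()
    strength = (seeds > 0).astype(float)  # seed cells start fully "defended"
    h, w = image.shape
    for _ in range(n_iter):
        new_labels = labels.copy()
        new_strength = strength.copy()
        for y in range(h):
            for x in range(w):
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w) or labels[ny, nx] == 0:
                        continue
                    # A neighbor's attack weakens with intensity difference.
                    g = 1.0 - abs(image[y, x] - image[ny, nx])
                    attack = g * strength[ny, nx]
                    if attack > new_strength[y, x]:
                        new_strength[y, x] = attack
                        new_labels[y, x] = labels[ny, nx]
        labels, strength = new_labels, new_strength
    return labels

# Demo: two homogeneous halves, one random seed per region.
img = np.zeros((10, 10))
img[:, 5:] = 1.0
seeds = np.zeros((10, 10), dtype=int)
seeds[5, 1] = 1  # seed for the left region
seeds[5, 8] = 2  # seed for the right region
result = grow_cut(img, seeds)
```

Because the attack strength drops to zero across the sharp intensity boundary, each label floods only its own homogeneous region, which is the "natural segmentation" behavior described above.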
Our goal is to develop a portable screening system for early detection of lung diseases for use in remote areas in
developing countries. This involves developing automated algorithms for screening x-rays as normal/abnormal with a
high degree of sensitivity, and identifying lung disease patterns on chest x-rays. Automatically deriving and
quantitatively characterizing abnormal regions present in the lung field is the first step toward this goal. Therefore,
region-based features such as geometrical and pixel-value measurements were derived from the segmented lung fields. In
the future, feature selection and classification will be performed to identify pathological conditions such as pulmonary
tuberculosis on chest radiographs. Shape-based features will also be incorporated to account for occlusions of the lung
field by other anatomical structures such as the heart and diaphragm.
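Region-based geometrical and pixel-value measurements of the kind described above can be computed, for example, with scikit-image's `regionprops`; the mask and intensity values below are hypothetical stand-ins for a segmented lung field:

```python
import numpy as np
from skimage.measure import label, regionprops

# Hypothetical binary mask for one segmented region in a lung field.
mask = np.zeros((64, 64), dtype=int)
mask[16:48, 20:44] = 1

# Stand-in intensity image the mask was derived from.
rng = np.random.default_rng(0)
intensity = rng.uniform(0.3, 0.7, mask.shape)

regions = regionprops(label(mask), intensity_image=intensity)
for region in regions:
    print("area (pixels):", region.area)                       # geometrical
    print("eccentricity:", round(float(region.eccentricity), 3))
    print("mean intensity:", round(float(region.mean_intensity), 3))  # pixel-value
```

Each segmented region yields one such feature vector, which is the input a later feature-selection and classification stage would consume.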
This paper presents a new technique for segmenting thermographic images using a genetic algorithm (GA). The
individuals of the GA, also known as chromosomes, consist of a sequence of parameters of a level set function. Each
chromosome represents a unique segmenting contour. An initial population of segmenting contours is generated based on
the learned variation of the level set parameters from training images. Each segmenting contour (an individual) is
evaluated for its fitness based on the texture of the region it encloses. The fittest individuals are allowed to propagate to
future generations of the GA run using selection, crossover and mutation.
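The GA loop of selection, crossover, and mutation can be sketched as follows. For brevity the fitness function is a toy stand-in that rewards proximity to a target parameter vector, rather than the texture score of the region enclosed by the level-set contour a chromosome encodes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in fitness: the paper scores the texture of the region the
# contour encloses; here we reward proximity to a target vector so the
# GA mechanics (selection, crossover, mutation) are visible end to end.
target = np.array([0.5, -1.0, 2.0])

def fitness(chrom):
    return -np.sum((chrom - target) ** 2)

# Initial population: 20 chromosomes, each a 3-parameter "contour".
pop = rng.normal(0.0, 2.0, (20, 3))

for gen in range(100):
    scores = np.array([fitness(c) for c in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[:10]]          # selection: keep the fittest half
    kids = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        cut = int(rng.integers(1, 3))  # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        # mutation: small Gaussian perturbation on ~30% of genes
        child = child + rng.normal(0.0, 0.1, 3) * (rng.random(3) < 0.3)
        kids.append(child)
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(c) for c in pop])]
print("best chromosome:", np.round(best, 2))
```

Keeping the parents in the next generation (elitism) guarantees the best fitness never decreases, so the population steadily concentrates around high-fitness contours.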
The dataset consists of thermographic images of the hands of patients suffering from upper extremity musculoskeletal
disorders (UEMSD). Thermographic images are acquired to study the skin temperature as a surrogate for the amount of
blood flow in the hands of these patients. Since entire hands are not visible on these images, segmentation of the outline
of the hands on these images is typically performed by a human. In this paper, several methods were tried for
segmenting thermographic images: a Gabor-wavelet-based texture segmentation method, the level set method of
segmentation, and our GA, which we term LSGA because it combines level sets with genetic algorithms. The results
show a comparative evaluation of the segmentation performed by all the methods. We conclude that LSGA successfully
segments entire hands on images in which hands are only partially visible.
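As an illustration of the Gabor-wavelet texture features underlying the first comparison method, the sketch below contrasts the Gabor filter energy of a flat patch and a striped patch using scikit-image; both patches are synthetic:

```python
import numpy as np
from skimage.filters import gabor

# Two synthetic 32x32 patches: a flat patch and vertical stripes whose
# spatial frequency (0.5 cycles/pixel) matches the filter below.
smooth = np.full((32, 32), 0.5)
stripes = np.tile(np.array([0.0, 1.0] * 16), (32, 1))

energies = {}
for name, patch in [("smooth", smooth), ("stripes", stripes)]:
    real, imag = gabor(patch, frequency=0.5)  # quadrature filter pair
    energies[name] = float(np.mean(real ** 2 + imag ** 2))
    print(name, "Gabor energy:", round(energies[name], 4))
```

A bank of such filters at several frequencies and orientations yields per-pixel texture responses that a segmentation method can cluster into regions.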
A genetic algorithm (GA) for automating the segmentation of the prostate on pelvic computed tomography (CT) images
is presented here. The images consist of slices from three-dimensional CT scans. Segmentation is typically performed
manually on these images by an expert physician for treatment planning, using "learned" knowledge of organ
shapes, textures, and locations to draw a contour around the prostate. Using a GA brings the flexibility to incorporate new
"learned" information into the segmentation process without modifying the fitness function that is used to train the GA.
Currently the GA uses prior knowledge in the form of texture and shape of the prostate for segmentation. We compare
and contrast our algorithm with a level-set-based segmentation algorithm, thereby providing justification for using a GA.
Each individual of the GA population represents a segmenting contour. Shape variability of the prostate derived from
manually segmented images is used to form a shape representation from which an individual of the GA population is
randomly generated. The fitness of each individual is evaluated based on the texture of the region it encloses. The
segmenting contour that encloses the prostate region is considered more fit than others and is more likely to be selected
to produce an offspring over successive generations of the GA run. This process of selection, crossover and mutation is
iterated until the desired region is segmented. Results of 2D and 3D segmentation are presented, and future work is also discussed.
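Generating a GA individual from a learned shape representation can be sketched as a point-distribution model: PCA on aligned training contours, then random mode weights within ±3 standard deviations. The training contours below are synthetic stand-ins (noisy unit circles) for manually segmented prostate outlines:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 15 manually segmented contours, each a closed
# curve of 32 (x, y) points flattened to a 64-vector.
angles = np.linspace(0, 2 * np.pi, 32, endpoint=False)
base = np.stack([np.cos(angles), np.sin(angles)], axis=1).ravel()
train = base + rng.normal(0.0, 0.05, (15, 64))

# PCA shape model: mean contour plus principal modes of variation.
mean = train.mean(axis=0)
u, s, vt = np.linalg.svd(train - mean, full_matrices=False)
modes = vt[:5]                          # top 5 modes of variation
stds = s[:5] / np.sqrt(len(train) - 1)  # per-mode standard deviations

def random_individual():
    # Sample mode weights within +/-3 standard deviations, as is common
    # for point-distribution models, so generated shapes stay plausible.
    b = rng.uniform(-3.0, 3.0, 5) * stds
    return mean + b @ modes

contour = random_individual().reshape(32, 2)
print("first point of sampled contour:", np.round(contour[0], 2))
```

Constraining each mode weight by the variance observed in the training set is what keeps randomly generated individuals within the learned shape variability rather than producing arbitrary contours.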