Brain tumor segmentation is a fundamental step in surgical treatment and therapy. Many hand-crafted and learning-based methods have been proposed for automatic brain tumor segmentation from MRI, and studies have shown that these approaches have inherent advantages and limitations. This work proposes a semantic label fusion algorithm that combines two representative state-of-the-art segmentation algorithms, a texture-based hand-crafted method and a deep learning based method, to obtain robust tumor segmentation. We evaluate the proposed method using the publicly available BRATS 2017 brain tumor segmentation challenge dataset. The results show that the proposed method improves segmentation by alleviating the inherent weakness of each approach: the extensive false positives of the texture-based method and the false tumor tissue classification problem of the deep learning method. Furthermore, we investigate the effect of patient gender on segmentation performance using a subset of the validation dataset. Notably, the substantial improvement in brain tumor segmentation performance achieved in this work recently enabled our group to secure first place in the overall patient survival prediction task at the BRATS 2017 challenge.
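The fusion idea described in this abstract can be illustrated with a toy sketch. This is not the paper's actual algorithm; the function name `fuse_labels`, the label convention (0 = background, positive integers = tumor tissue classes), and the simple intersection rule are illustrative assumptions:

```python
import numpy as np

def fuse_labels(texture_mask, dl_labels):
    """Toy semantic label fusion (illustrative, not the paper's method).

    texture_mask: binary tumor mask from a texture-based method
                  (prone to extensive false positives)
    dl_labels:    multi-class label map from a deep network
                  (0 = background, >0 = tumor tissue classes;
                  prone to false tissue classification)

    A voxel is kept as tumor only where both methods agree, which
    suppresses texture false positives; its tissue class is then
    taken from the deep learning map.
    """
    agreed_tumor = np.logical_and(texture_mask.astype(bool), dl_labels > 0)
    return np.where(agreed_tumor, dl_labels, 0)
```

A practical fusion scheme would also resolve disagreements with spatial or confidence information rather than discarding them outright; the intersection here simply shows how each method can veto the other's characteristic failure mode.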
Large-scale feed-forward neural networks have seen intense application in many computer vision problems. However, these networks can become large and computationally intensive as task complexity increases. Our work, for the first time in the literature, introduces a Cellular Simultaneous Recurrent Network (CSRN) based hierarchical neural network for object detection. CSRN has been shown to be more effective than generic feed-forward networks at solving complex tasks such as maze traversal and image processing. While deep neural networks (DNNs) have exhibited excellent performance in object detection and recognition, such hierarchical structure has largely been absent from neural networks with recurrency. Further, our work introduces a deep hierarchy in the SRN for object recognition. The simultaneous recurrency results in an unfolding effect of the SRN through time, potentially enabling the design of an arbitrarily deep network. This paper presents experiments on face, facial expression, and character recognition tasks using the novel deep recurrent model and compares its recognition performance with that of a generic deep feed-forward model. Finally, we demonstrate the flexibility of incorporating our proposed deep SRN based recognition framework into a humanoid robotic platform called NAO.
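The "unfolding through time" behavior mentioned above can be sketched with a minimal single-cell recurrence. This is not the actual CSRN cell from the paper; the weight shapes, the tanh nonlinearity, and the iteration count are assumptions made for illustration:

```python
import numpy as np

def srn_forward(x, w_in, w_rec, n_iters=30):
    """Minimal simultaneous-recurrent update (illustrative sketch).

    The same external input x is re-presented at every iteration while
    the output y is fed back, so unrolling the loop through time acts
    like a deep network whose 'layers' all share one set of weights.
    """
    y = np.zeros(w_rec.shape[0])
    for _ in range(n_iters):
        y = np.tanh(w_in @ x + w_rec @ y)  # same weights at every step
    return y
```

When the recurrent weights are contractive, the iteration settles to a fixed point, which is why running more iterations deepens the effective network without adding parameters.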
Image registration using Artificial Neural Networks (ANNs) remains a challenging learning task. Registration can be posed as a two-step problem: parameter estimation, followed by the actual alignment/transformation using the estimated parameters. To date, ANN based image registration techniques perform only the parameter estimation, while affine equations are used to perform the actual transformation. In this paper, we propose a novel deep ANN based rigid image registration method that combines parameter estimation and transformation as a simultaneous learning task. Our previous work shows that a complex universal approximator known as the Cellular Simultaneous Recurrent Network (CSRN) can successfully approximate affine transformations with known transformation parameters. This study introduces a deep ANN that combines a feed-forward network with a CSRN to perform full rigid registration. Layer-wise training is used to pre-train the feed-forward network for parameter estimation, followed by the CSRN for image transformation; the deep network is then fine-tuned to perform the final registration task. Our results show that the proposed deep ANN architecture achieves registration accuracy comparable to that of image affine transformation using a CSRN with known parameters. We also demonstrate the efficacy of our novel deep architecture by a performance comparison with a deep clustered MLP.
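The transformation step that the CSRN is trained to approximate has a simple closed form. Below is a plain NumPy sketch of a 2-D rigid transform applied to point coordinates; in the paper's framework the CSRN learns this mapping from the estimated parameters rather than computing it directly:

```python
import numpy as np

def rigid_transform(points, theta, tx, ty):
    """Apply a 2-D rigid transform: rotation by theta, then translation.

    points: (N, 2) array of (x, y) coordinates.
    This is the closed-form target mapping that a learned transformer
    (such as a CSRN) would be trained to approximate.
    """
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return points @ rot.T + np.array([tx, ty])
```

Rigid registration restricts the general affine case to rotation plus translation, so only three parameters (theta, tx, ty) need to be estimated by the feed-forward stage.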
The goal of this intelligent transportation systems work is to improve the understanding of the impact of carbon emissions caused by vehicular traffic on highway systems. In order to achieve this goal, this work implements a pipeline for vehicle segmentation, feature extraction, and classification using the existing Virginia Department of Transportation (VDOT) infrastructure of networked traffic cameras. The VDOT traffic video is analyzed for vehicle detection and segmentation using an adaptive Gaussian mixture model algorithm. Morphological properties and histogram of oriented gradients (HOG) features are derived from the detected and segmented vehicles. Finally, vehicle classification is performed using a multiclass support vector machine classifier. The resulting classification scheme achieves an average classification rate of 86% under good-quality segmentation. The segmented vehicle and classification data can then be used to estimate carbon emissions.
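The detection stage of such a pipeline can be sketched with a single-Gaussian-per-pixel simplification of the adaptive Gaussian mixture model. The actual system maintains a full mixture per pixel; the threshold `k` and learning rate `alpha` below are illustrative assumptions:

```python
import numpy as np

def segment_foreground(frame, mean, var, alpha=0.05, k=2.5):
    """Single-Gaussian simplification of adaptive background subtraction.

    Pixels more than k standard deviations from the running background
    model are flagged as foreground (candidate vehicles); background
    statistics are updated only at background pixels, with rate alpha.
    Returns (foreground mask, updated mean, updated variance).
    """
    fg = np.abs(frame - mean) > k * np.sqrt(var)
    new_mean = np.where(fg, mean, (1 - alpha) * mean + alpha * frame)
    new_var = np.where(fg, var, (1 - alpha) * var + alpha * (frame - new_mean) ** 2)
    return fg, new_mean, new_var
```

The resulting mask would then feed the downstream stages: morphological and HOG features computed per connected blob, followed by the multiclass SVM.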