Automatic segmentation of brain tumors is a challenging problem with many inherent difficulties, such as limited training data, large intra-class variance, and the heavy computational cost of processing volumetric images. To overcome these difficulties, we propose Brain Tumor Parser (BTP), a novel convolutional neural network that exploits a refinement module and global 3D information to perform semantic segmentation of brain structures in multi-modal volumetric images. We draw inspiration from recent breakthroughs in edge detection and semantic segmentation of natural images, and we build an accurate and efficient three-dimensional network that segments small structures while refining large instances in multi-modal Magnetic Resonance Imaging (MRI). We evaluate our approach on data from the Brain Tumor Segmentation (BraTS) 2017 challenge, obtaining results comparable to those of the best-performing algorithms while using a single, efficient architecture.
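The BTP architecture itself is not detailed in the abstract; as a minimal, hypothetical illustration of what per-voxel semantic segmentation of multi-modal volumetric data involves (all names, shapes, and the toy classifier below are our assumptions, not the paper's method), a per-voxel linear classifier over the modality axis, equivalent to a 1x1x1 convolution, can be sketched as:

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax over the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def segment_volume(volume, weights, bias):
    """Toy per-voxel classifier for a multi-modal MRI volume.

    volume:  (C, D, H, W) array -- C modalities (e.g. T1, T1c, T2, FLAIR)
    weights: (K, C) array       -- K output classes (tumor sub-regions)
    bias:    (K,) array
    Returns a (D, H, W) label map, the argmax of per-voxel class scores.
    """
    # Equivalent to a 1x1x1 convolution: a linear map over the modality axis.
    logits = np.einsum('kc,cdhw->kdhw', weights, volume)
    logits += bias[:, None, None, None]
    probs = softmax(logits, axis=0)
    return probs.argmax(axis=0)

# Example: 4 modalities, 2 classes, an 8x8x8 volume of random intensities.
rng = np.random.default_rng(0)
vol = rng.standard_normal((4, 8, 8, 8))
W = rng.standard_normal((2, 4))
b = np.zeros(2)
labels = segment_volume(vol, W, b)
print(labels.shape)  # (8, 8, 8)
```

A real network would replace the single linear map with stacked 3D convolutions and a refinement stage, but the input/output contract (multi-modal volume in, per-voxel label map out) is the same.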
Melanoma skin cancer diagnosis can be challenging because its early-stage appearance is similar to that of regular moles. Standardized visual parameters can be determined and characterized to flag a suspected melanoma. Automating this diagnosis could have an impact on the medical field by providing a high-accuracy tool to support specialists. The objective of this study is to develop an algorithm trained to distinguish a highly probable melanoma from a non-dangerous mole through the segmentation and classification of dermoscopic mole images. We evaluate our approach on the dataset provided by the International Skin Imaging Collaboration and used in the International Challenge Skin Lesion Analysis Towards Melanoma Detection. For the segmentation task, we apply a preprocessing algorithm and use Otsu's thresholding in the best-performing color space; the average Jaccard index on the test dataset is 70.05%. For the subsequent classification stage, we use joint histograms in the YCbCr color space, an SVM with a Gaussian RBF kernel trained on five features describing the circularity and irregularity of the segmented lesion, and Gray-Level Co-occurrence Matrix features for texture analysis. Combining these features yields an average classification accuracy of 63.3% on the test dataset.
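As a hedged sketch of the segmentation stage (the study's actual preprocessing and color-space selection are not specified here; the function names and the synthetic image below are our own), Otsu's threshold can be computed from the histogram of a single channel, and the resulting binary mask scored with the Jaccard index against a ground-truth mask:

```python
import numpy as np

def otsu_threshold(channel):
    """Otsu's threshold for an 8-bit single-channel image.

    Picks the gray level that maximizes the between-class variance
    of the foreground/background split.
    """
    hist = np.bincount(channel.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))   # cumulative mean intensity
    mu_t = mu[-1]                        # global mean
    denom = omega * (1.0 - omega)
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / denom
    sigma_b[denom == 0] = 0.0            # guard the degenerate splits
    return int(np.argmax(sigma_b))

def jaccard_index(pred, truth):
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

# Example: a synthetic bimodal image with a dark "lesion" square on light skin.
img = np.full((64, 64), 200, dtype=np.uint8)
img[20:40, 20:40] = 50
t = otsu_threshold(img)
mask = img <= t                          # lesions are darker than the skin
truth = np.zeros((64, 64), dtype=bool)
truth[20:40, 20:40] = True
print(t, jaccard_index(mask, truth))     # prints: 50 1.0
```

On this clean two-level image the threshold separates the classes exactly, so the Jaccard index is 1.0; real dermoscopic images need the preprocessing step precisely because their histograms are far less cleanly bimodal.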