Recently, Convolutional Neural Networks (CNNs) have been successfully used to detect microcalcifications (MCs) in mammograms. An important step in CNN-based detection is image preprocessing, which, for raw mammograms, is usually employed to equalize or remove the intensity-dependent quantum noise. In this work, we show how removing the noise can significantly improve the MC detection performance of a CNN. To this end, we describe the quantum noise with a uniform square-root model. Under this assumption, the generalized Anscombe transformation is applied to the raw mammograms, with the noise characteristics estimated from the image at hand. In the Anscombe domain, noise is filtered through an adaptive Wiener filter. The denoised images are recovered with an appropriate inverse transformation and are then used to train the CNN-based detector. Experiments were performed on 1,066 mammograms acquired with GE Senographe systems. The MC detection performance of the CNN on denoised mammograms was statistically significantly higher than on unprocessed mammograms. Results were also superior to those obtained with a nonparametric noise-equalizing transformation previously proposed for digital mammograms.
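The variance-stabilization step described above can be sketched as follows. This is a minimal illustration of the generalized Anscombe transformation and its simple algebraic inverse, assuming a Poisson-Gaussian noise model with gain `alpha` and Gaussian component mean `mu` and standard deviation `sigma`; in the paper these parameters are estimated from the raw mammogram itself, and the intermediate Wiener filtering is omitted here.

```python
import math

def gat(x, alpha=1.0, sigma=0.0, mu=0.0):
    """Generalized Anscombe transformation: maps a pixel value x with
    Poisson-Gaussian (quantum) noise to a domain where the noise variance
    is approximately constant (unit variance).
    alpha, sigma, mu are illustrative defaults, not the paper's estimates."""
    v = alpha * x + 0.375 * alpha ** 2 + sigma ** 2 - alpha * mu
    return (2.0 / alpha) * math.sqrt(max(v, 0.0))

def inverse_gat(y, alpha=1.0, sigma=0.0, mu=0.0):
    """Simple algebraic inverse of gat(); exact unbiased inverses exist,
    but this closed form suffices for a sketch."""
    return ((y * alpha / 2.0) ** 2
            - 0.375 * alpha ** 2 - sigma ** 2 + alpha * mu) / alpha
```

With `alpha=1, sigma=0, mu=0` this reduces to the classical Anscombe transformation `2*sqrt(x + 3/8)` for pure Poisson noise; a denoising filter would be applied between `gat` and `inverse_gat`.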
Recently, both Deep Cascade classifiers and Convolutional Neural Networks (CNNs) have achieved state-of-the-art microcalcification (MC) detection performance in digital mammography. Deep Cascades consist of long sequences of weak classifiers designed to learn effectively from heavily unbalanced data, as in the case of MCs (∼1 MC every 10,000 non-MC samples). CNNs are powerful models that achieve impressive image classification results thanks to their ability to automatically extract general-purpose features from the data, but they require balanced classes. In this work, we introduce a two-stage classification scheme that combines the benefits of both systems. First, Deep Cascades are trained by requiring a very high sensitivity (99.5%) throughout the sequence of classifiers. As a result, while the number of MC samples remains practically unchanged, the number of non-MC samples is greatly reduced. The remaining, approximately balanced data are used to train an additional classification stage with a CNN. We evaluated the proposed approach on a database of 1,066 digital mammograms. MC detection results of the combined classification were statistically significantly higher than those of the Deep Cascade and the CNN alone, yielding average improvements in mean sensitivity of 3.19% and 2.45%, respectively. Remarkably, the proposed system also yielded a faster per-mammogram processing time (2.0 s) compared to the Deep Cascade (2.5 s) and the CNN (5.7 s).
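The key mechanism of the first stage, fixing a 99.5% sensitivity and letting the negative class shrink, can be illustrated with a hypothetical threshold-selection helper. This is only a sketch of the thresholding logic: the paper's Deep Cascade stages are full classifiers, and `stage_threshold` and its arguments are names introduced here for illustration.

```python
import math

def stage_threshold(scores, labels, target_sensitivity=0.995):
    """Pick the largest decision threshold that still passes at least
    `target_sensitivity` of the positive (MC) samples. Samples scoring
    below the threshold are discarded before the next stage, so negatives
    are pruned while positives are almost fully retained."""
    pos = sorted((s for s, l in zip(scores, labels) if l == 1), reverse=True)
    keep = max(1, math.ceil(target_sensitivity * len(pos)))  # positives to keep
    return pos[keep - 1]
```

Applying such a threshold stage after stage is what leaves an approximately balanced sample set for the final CNN.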
For more than a decade, radiologists have used traditional computer-aided detection systems when reading mammograms but, mainly because of low computer specificity, these systems may not improve radiologists' screening performance, according to several studies. The breakthrough in deep learning techniques has boosted the performance of machine learning algorithms, including for breast cancer detection in mammography. The objective of this study was to determine whether radiologists improve their breast cancer detection performance when they concurrently use a deep learning-based computer system for decision support, compared to when they read mammograms unaided. A retrospective, fully-crossed, multi-reader multi-case (MRMC) study was designed for this comparison. The decision support system employed was Transpara™ (ScreenPoint Medical, Nijmegen, the Netherlands). Radiologists interact with it by clicking an area on the mammogram, for which the computer system displays its cancer likelihood score (1-100). In total, 240 cases (100 cancers, 40 false-positive recalls, 100 normals) acquired with two different mammography systems were retrospectively collected. Seven radiologists scored each case once with and once without the use of decision support, providing a forced BI-RADS® score and a level of suspiciousness (1-100). An MRMC analysis of variance of the areas under the receiver operating characteristic curves (AUC) was performed, and specificity and sensitivity were computed. When using decision support, the AUC increased from 0.87 to 0.89 (P=0.043) and specificity increased from 73% to 78% (P=0.030), while sensitivity did not increase significantly (84% to 87%, P=0.180). In conclusion, radiologists significantly improved their performance when using a deep learning-based computer system for decision support.
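The per-reader AUC underlying the analysis above can be computed directly from the suspiciousness scores (1-100). The sketch below uses the Mann-Whitney formulation of the empirical AUC, which is standard; it does not reproduce the paper's full MRMC variance analysis, only the per-reader statistic it operates on.

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC via the Mann-Whitney U statistic: the probability
    that a randomly chosen cancer case receives a higher suspiciousness
    score than a randomly chosen non-cancer case (ties count 0.5)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

Each of the seven readers contributes one such AUC per reading condition (with and without decision support), and the MRMC analysis of variance then tests the difference across readers and cases.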
Recent breakthroughs in training deep neural network architectures, in particular deep Convolutional Neural Networks (CNNs), have made a big impact on vision research and are increasingly responsible for advances in Computer Aided Diagnosis (CAD). Since many natural scenes and medical images vary in size and are too large to feed to the networks as a whole, two-stage systems are typically employed, where in the first stage small regions of interest in the image are located and presented to the network as training and test data. These systems allow us to harness accurate region-based annotations and make the problem easier to learn. However, information is processed purely locally and context is not taken into account. In this paper, we present preliminary work on the use of a Conditional Random Field (CRF), trained on top of the CNN, to model contextual interactions such as the presence of other suspicious regions, for mammography CAD. The model can easily be extended to incorporate other sources of information, such as symmetry, temporal change and various patient covariates, and is general in the sense that it can be applied to other CAD problems.
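The idea of a pairwise CRF over CNN-scored regions can be made concrete with a toy energy function. This is a sketch under stated assumptions: `unary` costs stand in for negative log CNN scores, `pairwise` for learned interaction potentials between linked regions, and the exhaustive MAP search is only viable for a handful of regions (real systems use message passing or graph cuts). All names are illustrative, not the paper's implementation.

```python
import itertools

def crf_energy(labels, unary, pairwise, edges):
    """Energy of a labelling under a tiny pairwise CRF.
    unary[i][l]: cost of assigning label l to region i (e.g. -log CNN score);
    pairwise[li][lj]: cost of the label pair (li, lj) on a linked region pair."""
    e = sum(unary[i][l] for i, l in enumerate(labels))
    e += sum(pairwise[labels[i]][labels[j]] for i, j in edges)
    return e

def map_labels(n, unary, pairwise, edges, num_labels=2):
    """Exhaustive MAP inference: the labelling of n regions with minimum energy."""
    return min(itertools.product(range(num_labels), repeat=n),
               key=lambda ls: crf_energy(ls, unary, pairwise, edges))
```

A pairwise potential that penalizes disagreeing labels on neighbouring regions is what lets the presence of one suspicious region raise the suspicion of another.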
Blood vessels are a major cause of false positives in computer-aided detection systems for breast cancer. Therefore, the purpose of this study is to construct a framework for the segmentation of blood vessels in screening mammograms. The proposed framework is based on supervised learning using a cascade classifier consisting of several stages, where in each stage a GentleBoost classifier is trained on Haar-like features. A total of 30 cases were included in this study. In each image, vessel pixels were annotated by selecting pixels on the centerline of the vessel; control samples were taken by annotating a region without any visible vascular structures. This resulted in a total of 31,000 pixels marked as vascular and over 4 million control pixels. After training, the classifier assigns a vesselness likelihood to each pixel. The proposed framework was compared to three other vessel enhancing methods: i) a vesselness filter, ii) a Gaussian derivative filter, and iii) a tubeness filter. The methods were compared in terms of the area under the receiver operating characteristic curve, the Az value. The Az value of the cascade approach is 0.85, superior to the vesselness, Gaussian, and tubeness methods, with Az values of 0.77, 0.81, and 0.78, respectively. From these results, it can be concluded that the proposed framework is a promising method for the detection of vessels in screening mammograms.
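Haar-like features are cheap because they reduce to rectangle sums over an integral image (summed-area table), which is what makes a multi-stage cascade tractable at the pixel level. The sketch below shows the integral image and one two-rectangle contrast feature; the specific feature set and the GentleBoost training are not reproduced here.

```python
def integral_image(img):
    """Summed-area table with a zero border row/column, so any rectangle
    sum costs only four lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle with top-left corner (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle Haar-like feature (left half minus right half), the
    kind of contrast value a GentleBoost weak learner can threshold on."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

A vessel crossing the window produces a strong signed contrast between the two halves, which the boosted stages combine into a vesselness likelihood.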
In this study, a pattern recognition-based framework is presented to automatically segment the complete cerebral vasculature from 4D Computed Tomography (CT) patient data. Ten consecutive patients who were admitted to our hospital on a suspicion of ischemic stroke were included in this study. A background mask and a bone mask were calculated based on intensity thresholding and morphological operations, and the following six image features were proposed: 1) a subtraction image of timing-invariant CTA and non-contrast CT, 2) the area under the curve of a gamma variate function fitted to the tissue curves, 3-5) three optimized parameter values of this gamma variate function, and 6) a vessel likeliness function. After masking bone and background, these features were used to train a linear discriminant voxel classifier (LDC) on regions of interest (ROIs), which were annotated in soft tissue (white matter and gray matter) and vessels by an expert observer. The LDC was trained in a leave-one-out manner, in which the tissue ROIs of 9 patients were used for training and the tissue ROIs of the remaining patient were used for testing the classifier. To evaluate the framework, for each training cycle the accuracy was calculated as the number of true positives and true negatives divided by the total number of true positives, true negatives, false positives, and false negatives. The resulting average accuracy was 0.985 ± 0.014, with a range of 0.957 to 0.999.
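Features 2-5 above derive from a gamma variate fit to each voxel's time-attenuation curve. The sketch below shows the standard gamma variate bolus model and a trapezoidal area-under-the-curve computation; the parameter names (`k`, `t0`, `alpha`, `beta`) follow the common convention and are assumptions here, as is the fitting procedure, which is omitted.

```python
import math

def gamma_variate(t, k, t0, alpha, beta):
    """Gamma variate bolus model C(t) = k * (t - t0)^alpha * exp(-(t - t0)/beta),
    zero before the bolus arrival time t0. The three shape parameters
    correspond to features 3-5 in the feature list."""
    if t <= t0:
        return 0.0
    dt = t - t0
    return k * dt ** alpha * math.exp(-dt / beta)

def area_under_curve(ts, cs):
    """Trapezoidal area under a sampled time-attenuation curve (feature 2)."""
    return sum((t1 - t0) * (c0 + c1) / 2.0
               for (t0, c0), (t1, c1) in zip(zip(ts, cs), zip(ts[1:], cs[1:])))
```

Vessel voxels show a high, early-peaking curve and hence a large fitted area, while soft-tissue curves stay flat, which is why these features separate the two classes well for the linear discriminant classifier.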