Terminal duct lobular units (TDLUs) are structures in the breast that involute with the completion of childbearing and physiological ageing. Women with less TDLU involution are more likely to develop breast cancer than those with more involution. Thus, TDLU involution may be utilized as a biomarker to predict invasive cancer risk. Manual assessment of TDLU involution is a cumbersome and subjective process, which makes it amenable to automated assessment by image analysis. In this study, we developed and evaluated an acini detection method as a first step towards automated assessment of TDLU involution, using a dataset of histopathological whole-slide images (WSIs) from the Nurses’ Health Study (NHS) and NHSII. The NHS/NHSII is among the world's largest investigations of epidemiological risk factors for major chronic diseases in women. We compared three approaches to detecting acini in WSIs, all based on the U-Net convolutional neural network architecture. The approaches differ in the target that the network predicts: circular mask labels, soft labels and distance maps. Our results showed that soft label targets lead to better detection performance than the other two approaches: F<sub>1</sub> scores of 0.65, 0.73 and 0.66 were obtained with circular mask labels, soft labels and distance maps, respectively. Our acini detection method was further validated by applying it to measure acini count per mm<sup>2</sup> of tissue area on an independent set of WSIs; this measure was found to be significantly negatively correlated with age.
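The three prediction targets compared above can be sketched from point annotations of acinus centres. The following NumPy snippet is an illustrative reconstruction, not the paper's implementation; the function name, radius and sigma values are assumptions:

```python
import numpy as np

def make_targets(points, shape, radius=5, sigma=3.0):
    """Build the three per-pixel targets compared in the study from a
    list of (row, col) acinus centre annotations. The radius and sigma
    defaults are illustrative assumptions, not the paper's settings."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    circular = np.zeros(shape, dtype=np.float32)     # binary disc per acinus
    soft = np.zeros(shape, dtype=np.float32)         # Gaussian "soft" label
    dist = np.full(shape, np.inf, dtype=np.float32)  # distance to nearest centre
    for r, c in points:
        d2 = (rows - r) ** 2 + (cols - c) ** 2
        circular = np.maximum(circular, (d2 <= radius ** 2).astype(np.float32))
        soft = np.maximum(soft, np.exp(-d2 / (2 * sigma ** 2)))
        dist = np.minimum(dist, np.sqrt(d2))
    return circular, soft, dist
```

The soft label decays smoothly from each centre, which plausibly gives the network a more forgiving regression target near acinus boundaries than a hard binary disc.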
Localization of cardiac anatomical landmarks is an important step towards a more robust and accurate analysis of the heart. A fully automatic hybrid framework is proposed that detects key landmark locations in cardiac magnetic resonance (MR) images. Our method is trained and evaluated for the detection of mitral valve points in long-axis MR images and right ventricular (RV) insert points in short-axis MR images. The framework incorporates four key modules for the localization of the landmark points. The first module crops the MR image around the heart using a convolutional neural network (CNN). The second module employs a U-Net to obtain an efficient feature representation of the cardiac image, as well as detect a preliminary location of the landmark points. In the third module, the feature representation of a cardiac image is processed with a recurrent neural network (RNN). The RNN leverages either temporal or spatial dynamics from neighboring slices in time or space and obtains a second prediction for the landmark locations. In the last module, the two predictions from the U-Net and the RNN are combined and the final landmark locations are extracted. The framework is trained and evaluated separately for each landmark; it achieves a final average error of 2.87 mm for the mitral valve points and an average error of 3.64 mm for the RV insert points. Our method shows that the use of a recurrent neural network to model additional temporal or spatial dependencies improves localization accuracy and achieves promising results.
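The final module, which fuses the U-Net and RNN predictions, could be as simple as averaging two landmark heatmaps and taking the maximum. The sketch below is a hypothetical illustration of that idea; the abstract does not specify the fusion rule, so the equal weighting and function names are assumptions:

```python
import numpy as np

def extract_landmark(heatmap):
    """Return the (row, col) position of the heatmap maximum."""
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)

def fuse_predictions(unet_heatmap, rnn_heatmap, w=0.5):
    """Average the two per-landmark heatmaps and extract the final
    location. The equal weighting w=0.5 is an illustrative assumption."""
    fused = w * unet_heatmap + (1 - w) * rnn_heatmap
    return extract_landmark(fused)
```

Averaging before the argmax lets a confident U-Net response override a spurious RNN peak, and vice versa, rather than committing to either branch's coordinate alone.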
A pipeline of unsupervised image analysis methods for extraction of geometrical features from retinal fundus images has previously been developed. Features related to vessel caliber, tortuosity and bifurcations have been identified as potential biomarkers for a variety of diseases, including diabetes and Alzheimer’s disease. The current pipeline is computationally expensive, taking 24 minutes to process a single image, which impedes implementation in a screening setting. In this work, we approximate the pipeline with a convolutional neural network (CNN) that processes a single image in a few seconds. As an additional benefit, the trained CNN is sensitive to key structures in the retina and can be used as a pretrained network for related disease classification tasks. Our model is based on the ResNet-50 architecture and outputs four biomarkers that describe global properties of the vascular tree in retinal fundus images. Intraclass correlation coefficients between the predictions of the CNN and the results of the pipeline showed strong agreement (0.86–0.91) for three of the four biomarkers and moderate agreement (0.42) for the remaining one. Class activation maps were created to illustrate the attention of the network. The maps show qualitatively that the activations of the network overlap with the biomarkers of interest, and that the network is able to distinguish venules from arterioles. Moreover, local regions of high and low tortuosity are clearly identified, confirming that the CNN is sensitive to key structures in the retina.
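The agreement measure used here, the intraclass correlation coefficient, can be computed directly from the paired outputs. A minimal sketch of the two-way random, single-measure form ICC(2,1), with the CNN and the original pipeline playing the role of the two "raters" (the abstract does not state which ICC variant was used, so this choice is an assumption):

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measure. `ratings` is an (n_subjects, k_raters) array; here the two
    raters would be the CNN prediction and the pipeline output."""
    n, k = ratings.shape
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    rater_means = ratings.mean(axis=0)
    ss_total = ((ratings - grand) ** 2).sum()
    ss_subj = k * ((subj_means - grand) ** 2).sum()
    ss_rater = n * ((rater_means - grand) ** 2).sum()
    ss_err = ss_total - ss_subj - ss_rater
    msr = ss_subj / (n - 1)              # between-subjects mean square
    msc = ss_rater / (k - 1)             # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because ICC(2,1) measures absolute agreement, a constant offset between the CNN and the pipeline lowers the coefficient even when the two rank images identically.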
Deformable image registration can be time-consuming and often needs extensive parameterization to perform well on a specific application. We present a step towards a registration framework based on a three-dimensional convolutional neural network. The network directly learns transformations between pairs of three-dimensional images. The outputs of the network are three maps for the x, y, and z components of a thin plate spline transformation grid. The network is trained on synthetic random transformations, which are applied to a small set of representative images for the desired application. Training therefore does not require manually annotated ground truth deformation information. The methodology is demonstrated on public data sets of inspiration-expiration lung CT image pairs, which come with annotated corresponding landmarks for evaluation of the registration accuracy. Advantages of this methodology are its fast registration times and its minimal parameterization.
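The training scheme described above, sampling random smooth transformations and applying them to representative images so that the ground-truth deformation is known by construction, can be sketched in 2-D with NumPy. This is a toy analogue, not the paper's 3-D thin plate spline implementation; all function names and default parameters are assumptions:

```python
import numpy as np

def upsample2d(c, out_shape):
    """Separable bilinear upsampling of a 2-D array to out_shape."""
    def interp_axis(a, n, axis):
        src = np.linspace(0, a.shape[axis] - 1, n)
        i0 = np.floor(src).astype(int)
        i1 = np.minimum(i0 + 1, a.shape[axis] - 1)
        t = src - i0
        a0 = np.take(a, i0, axis=axis)
        a1 = np.take(a, i1, axis=axis)
        shape = [1] * a.ndim
        shape[axis] = n
        return a0 * (1 - t.reshape(shape)) + a1 * t.reshape(shape)
    return interp_axis(interp_axis(c, out_shape[0], 0), out_shape[1], 1)

def random_displacement(shape, grid=4, max_disp=3.0, rng=None):
    """Sample a smooth random displacement field by upsampling coarse
    control-point offsets -- a 2-D stand-in for the thin plate spline
    transformation grids in the abstract (grid size and maximum
    displacement are illustrative assumptions)."""
    rng = np.random.default_rng(rng)
    coarse = rng.uniform(-max_disp, max_disp, size=(2, grid, grid))
    return np.stack([upsample2d(coarse[i], shape) for i in range(2)])

def warp_nearest(img, dfield):
    """Warp an image with a (2, H, W) displacement field using
    nearest-neighbour sampling with edge clamping."""
    rows, cols = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    r = np.clip(np.rint(rows + dfield[0]).astype(int), 0, img.shape[0] - 1)
    c = np.clip(np.rint(cols + dfield[1]).astype(int), 0, img.shape[1] - 1)
    return img[r, c]
```

A network would then be trained to regress the sampled control-point offsets from the pair of the warped image and the original, so no manually annotated deformations are needed.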