Evaluating the severity of knee osteoarthritis (OA) accounts for a significant portion of plain film workload and is a crucial component of knee radiograph interpretation, informing surgical decision-making for costly and invasive procedures such as knee replacement. The Kellgren-Lawrence (KL) grading scale systematically and quantitatively assesses the severity of knee OA but is associated with notable inter-reader variability. In this study, we propose a deep learning method for assessing joint space narrowing (JSN) in the knee, an essential component of determining the KL grade. To determine the extent of JSN, we analyzed 99 knee radiographs and calculated the distance between the femur and tibia. Our algorithm's JSN measurements correlated well with radiologists' KL grade assessments: the average distance (in pixels) between the femur and tibia as measured by our algorithm was 9.60 for KL=0, 7.60 for KL=1, 6.89 for KL=2, 3.75 for KL=3, and 1.25 for KL=4. Additionally, we used 100 manually annotated knee radiographs to train the algorithm to segment the femur and tibia. When evaluated on an independent set of 20 knee radiographs, the segmentation achieved a Dice coefficient of 96.59%. An algorithm for measuring JSN and KL grade may play a significant role in automatically, reliably, and passively evaluating knee OA severity, and may influence surgical decision-making and treatment pathways.
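As an illustration only (the abstract does not specify how the inter-bone distance was computed), joint space width could be estimated from the two segmentation masks as the per-column gap between the inferior femur boundary and the superior tibia boundary; the sketch below, with hypothetical function names, also shows the Dice coefficient used for the segmentation evaluation:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice overlap between two binary masks (values in {0, 1})."""
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum())

def joint_space_width(femur_mask, tibia_mask):
    """Mean per-column gap (in pixels) between the lowest femur pixel and the
    highest tibia pixel, over columns where both bones are present.
    Assumes 2D binary masks with rows increasing downward."""
    h, w = femur_mask.shape
    gaps = []
    for col in range(w):
        femur_rows = np.flatnonzero(femur_mask[:, col])
        tibia_rows = np.flatnonzero(tibia_mask[:, col])
        if femur_rows.size and tibia_rows.size:
            gap = tibia_rows.min() - femur_rows.max()
            if gap > 0:  # ignore columns where the masks touch or overlap
                gaps.append(gap)
    return float(np.mean(gaps)) if gaps else float("nan")
```

A smaller mean gap would then map to a higher KL grade, consistent with the per-grade averages reported above.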
Deep learning has achieved great success in image analysis and decision making in radiology. However, a large amount of annotated imaging data is needed to construct well-performing deep learning models. A particular challenge in the context of breast cancer is the limited number of available cases that contain cancer, given the very low prevalence of the disease in the screening population. The question arises whether normal cases, which in breast cancer screening are available in abundance, can be used to train a deep learning model that identifies abnormal locations. In this study, we propose to achieve this goal through generative adversarial network (GAN)-based image completion. Our hypothesis is that if a generative network has difficulty correctly completing a part of an image at a certain location, then that location is likely to represent an abnormality. We test this hypothesis using a dataset of 4348 patients with digital breast tomosynthesis (DBT) imaging from our institution. We trained our model on normal images only, so that it learns to fill in parts of images that were artificially removed. Then, using an independent test set, we measured how difficult it was for the network to reconstruct an artificially removed patch at different locations in the images. Difficulty was measured by the mean squared error (MSE) between the original removed patch and the reconstructed patch. On average, the MSE was 2.11 times higher (standard deviation 1.01) at locations containing expert-annotated cancerous lesions than at locations outside those lesions. Our generative approach demonstrates great potential to aid breast cancer detection.
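A minimal sketch of the scoring step described above, assuming a trained completion network (here called completion_net, a hypothetical name) and a zero-filled mask as the patch-removal convention:

```python
import torch

def patch_anomaly_score(completion_net, image, y, x, patch=64):
    """Remove a square patch at (y, x), let the trained completion network
    fill it in, and score the location by the MSE between the original and
    the reconstructed patch. Higher MSE suggests a more abnormal location."""
    masked = image.clone()
    masked[..., y:y + patch, x:x + patch] = 0.0  # blank out the patch
    with torch.no_grad():
        completed = completion_net(masked)       # network inpaints the hole
    original = image[..., y:y + patch, x:x + patch]
    recon = completed[..., y:y + patch, x:x + patch]
    return torch.mean((original - recon) ** 2).item()
```

Sliding this score over a grid of locations would produce an abnormality map, with lesion-containing patches expected to score roughly twice as high as normal tissue, per the reported results.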
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a valuable modality for evaluating breast abnormalities found on mammography and for early disease detection in high-risk patients. However, images produced by different MRI scanners (e.g., GE Healthcare and Siemens) differ in intensity and in other image characteristics such as noise distribution. This is a challenge both for the evaluation of images by radiologists and for the computational analysis of images using radiomics or deep learning: for example, an algorithm trained on images acquired by one MRI scanner may perform poorly on a dataset produced by a different scanner. Therefore, there is an urgent need for image harmonization. Traditional image-to-image translation algorithms can address this problem, but they require paired data (i.e., the same object imaged using different scanners). In this study, we use a deep learning algorithm that learns from unpaired data to perform a bi-directional translation between MRI images. The proposed method is based on a cycle-consistent adversarial network (CycleGAN) with two generator-discriminator pairs. The original CycleGAN struggles to preserve structure (i.e., breast tissue characteristics and shape) during translation. To overcome this, we modified the discriminator architecture so that the adversarial penalty is applied at the scale of smaller patches, which encourages the network to focus on features pertaining to breast tissue. The results demonstrate that the transformed images are visually realistic, preserve structure, and harmonize intensity across images from different scanners.
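The abstract does not detail the exact discriminator modification; as a hedged illustration, a standard PatchGAN-style discriminator realizes the idea of penalizing at the scale of smaller patches by emitting a grid of real/fake scores, each tied to a local receptive field, rather than one score per image:

```python
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator: outputs a grid of real/fake scores, each
    covering a local receptive field, so the adversarial penalty is applied at
    the scale of smaller patches rather than the whole image."""
    def __init__(self, in_channels=1, base=64):
        super().__init__()

        def block(c_in, c_out, stride):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=4, stride=stride, padding=1),
                nn.InstanceNorm2d(c_out),
                nn.LeakyReLU(0.2, inplace=True),
            )

        self.net = nn.Sequential(
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1),  # no norm on first layer
            nn.LeakyReLU(0.2, inplace=True),
            block(base, base * 2, stride=2),
            block(base * 2, base * 4, stride=2),
            block(base * 4, base * 8, stride=1),
            nn.Conv2d(base * 8, 1, 4, stride=1, padding=1),  # per-patch scores
        )

    def forward(self, x):
        return self.net(x)
```

Because each output score depends only on a small patch of the input, the generators are pressed to get local tissue texture and shape right everywhere, which is the structure-preservation behavior the abstract describes.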