Devices enabled by artificial intelligence (AI) and machine learning (ML) are being introduced for clinical use at an accelerating pace. In a dynamic clinical environment, these devices may encounter conditions different from those they were developed for. The statistical mismatch between the data seen during training/initial testing and in production is often referred to as data drift. Detecting and quantifying data drift is essential for ensuring that an AI model performs as expected in clinical environments. A drift detector signals when corrective action is needed if performance changes. In this study, we investigate how a change in the performance of an AI model due to data drift can be detected and quantified using a cumulative sum (CUSUM) control chart. To study the properties of CUSUM, we first simulate different scenarios that change the performance of an AI model. We simulate a sudden change in the mean of the performance metric at a change-point (change day) in time. The task is to detect the change quickly while raising few false alarms before the change-point, which may be caused by the statistical variation of the performance metric over time. Subsequently, we simulate data drift by denoising the Emory Breast Imaging Dataset (EMBED) after a pre-defined change-point. We detect the change-point by studying the pre- and post-change specificity of a mammographic CAD algorithm. Our results indicate that, with an appropriate choice of parameters, CUSUM can quickly detect relatively small drifts with a small number of false-positive alarms.
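The tabular one-sided CUSUM behind this kind of monitoring can be sketched in a few lines. This is a minimal illustration under assumed values, not the study's implementation: the allowance k, threshold h, in-control mean/SD, and the toy specificity series are all made up for demonstration.

```python
def cusum_detect(metric, mu0, sigma, k=0.5, h=5.0):
    """One-sided tabular CUSUM for a downward shift in a daily
    performance metric (e.g., specificity).
    mu0, sigma: in-control mean and standard deviation (assumed known).
    k: allowance; h: decision threshold (both in SD units)."""
    s = 0.0
    for day, x in enumerate(metric):
        z = (mu0 - x) / sigma        # drop below target, in SD units
        s = max(0.0, s + z - k)      # accumulate evidence of a shift
        if s > h:
            return day               # alarm raised on this day
    return None                      # no alarm

# Toy series: specificity holds at 0.90 for 50 days, then drops to 0.85
series = [0.90] * 50 + [0.85] * 50
alarm_day = cusum_detect(series, mu0=0.90, sigma=0.01)
```

With these toy values the statistic stays at zero before the change-point and crosses h on the second post-change day; the trade-off between detection delay and false-alarm rate is governed entirely by k and h.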
The reconstruction kernel used in computed tomography (CT) image generation determines the texture of the image. Consistency in reconstruction kernels is important because the underlying CT texture can affect measurements during quantitative image analysis. Harmonization (i.e., kernel conversion) minimizes differences in measurements due to inconsistent reconstruction kernels. Existing methods investigate harmonization of CT scans within a single manufacturer or across multiple manufacturers. However, these methods require paired scans of hard and soft reconstruction kernels that are spatially and anatomically aligned. Additionally, a large number of models must be trained across the different kernel pairs within each manufacturer. In this study, we adopt an unpaired image translation approach to investigate harmonization between and across reconstruction kernels from different manufacturers by constructing a multipath cycle generative adversarial network (GAN). We use hard and soft reconstruction kernels from the Siemens and GE vendors in the National Lung Screening Trial dataset. We use 50 scans from each reconstruction kernel to train a multipath cycle GAN. To evaluate the effect of harmonization on the reconstruction kernels, we harmonize 50 scans each from the Siemens hard kernel, GE soft kernel, and GE hard kernel to a reference Siemens soft kernel (B30f) and evaluate percent emphysema. We fit a linear model that accounts for age, smoking status, sex, and vendor and perform an analysis of variance (ANOVA) on the emphysema scores. Our approach minimizes differences in emphysema measurement and highlights the impact of age, sex, smoking status, and vendor on emphysema quantification.
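The abstract does not spell out its percent emphysema definition; a common CT convention is the fraction of lung voxels below a fixed attenuation threshold (e.g., %LAA at -950 HU). A minimal sketch under that assumption, with toy voxel values:

```python
import numpy as np

def percent_emphysema(lung_hu, threshold_hu=-950):
    """Percent of lung-mask voxels below threshold_hu (low-attenuation
    area), a common CT emphysema score. lung_hu: Hounsfield-unit values
    of voxels inside the lung mask."""
    lung_hu = np.asarray(lung_hu, dtype=float)
    return 100.0 * np.mean(lung_hu < threshold_hu)

# Toy lung: 9 parenchyma voxels (~-850 HU), 1 emphysematous voxel (-970 HU)
voxels = [-850.0] * 9 + [-970.0]
score = percent_emphysema(voxels)  # 10% of voxels fall below -950 HU
```

Because the score is a simple threshold count, any kernel-dependent texture shifts the voxel histogram and hence the score, which is why harmonization to a common reference kernel matters.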
Purpose: Anatomy-based quantification of emphysema in a lung screening cohort has the potential to improve lung cancer risk stratification and risk communication. Segmenting lung lobes is an essential step in this analysis, but leading lobe segmentation algorithms have not been validated for lung screening computed tomography (CT). Approach: In this work, we develop an automated approach to lobar emphysema quantification and study its association with lung cancer incidence. We combine self-supervised training with level set regularization and fine-tuning with radiologist annotations on three datasets to develop a lobe segmentation algorithm that is robust for lung screening CT. Using this algorithm, we extract quantitative CT measures for a cohort (n = 1189) from the National Lung Screening Trial and analyze the multivariate association with lung cancer incidence. Results: Our lobe segmentation approach achieved an external validation Dice of 0.93, significantly outperforming a leading algorithm at 0.90 (p < 0.01). The percentage of low attenuation volume in the right upper lobe was associated with increased lung cancer incidence (odds ratio: 1.97; 95% CI: [1.06, 3.66]) independent of PLCOm2012 risk factors and diagnosis of whole lung emphysema. Quantitative lobar emphysema improved the goodness-of-fit to lung cancer incidence (χ2 = 7.48, p = 0.02). Conclusions: We are the first to develop and validate an automated lobe segmentation algorithm that is robust to smoking-related pathology. We discover a quantitative risk factor, lending further evidence that regional emphysema is independently associated with increased lung cancer incidence. The algorithm is provided at https://github.com/MASILab/EmphysemaSeg.
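The Dice overlap used to compare segmentations (0.93 vs. 0.90 above) is a simple set-overlap statistic; a generic sketch with toy masks, not the paper's evaluation code:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |pred ∩ truth| / (|pred| + |truth|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 1-D "masks": 2 overlapping voxels, sizes 3 and 2 -> 2*2/(3+2) = 0.8
d = dice([1, 1, 1, 0], [1, 1, 0, 0])
```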
In lung cancer screening, estimation of future lung cancer risk is usually guided by demographics and smoking status. The role of the constitutional profile of the human body, also known as body habitus, is increasingly understood to be important but has not been integrated into risk models. Chest low dose computed tomography (LDCT) is the standard imaging study in lung cancer screening, with the capability to discriminate differences in body composition and organ arrangement in the thorax. We hypothesize that the primary phenotypes identified from lung screening chest LDCT can form a representation of body habitus and add predictive power for lung cancer risk stratification. In this pilot study, we evaluated the feasibility of image-based body habitus phenotyping on a large lung screening LDCT dataset. A thoracic imaging manifold was estimated based on an intensity-based pairwise (dis)similarity metric for pairs of spatially normalized chest LDCT images. We applied hierarchical clustering on this manifold to identify the primary phenotypes. Body habitus features of each identified phenotype were evaluated and associated with future lung cancer risk using time-to-event analysis. We evaluated the method on the baseline LDCT scans of 1,200 male subjects sampled from the National Lung Screening Trial. Five primary phenotypes were identified, which were associated with highly distinguishable clinical and body habitus features. Time-to-event analysis against future lung cancer incidence showed that two of the five identified phenotypes were associated with elevated future lung cancer risk (HR = 1.61, 95% CI = [1.08, 2.38], p = 0.019; HR = 1.67, 95% CI = [0.98, 2.86], p = 0.057). These results indicate that it is feasible to capture body habitus by image-based phenotyping using lung screening LDCT and that the learned body habitus representation can potentially add value for future lung cancer risk stratification.
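The phenotype discovery step, hierarchical clustering on a precomputed pairwise dissimilarity matrix, can be sketched with a naive average-linkage implementation. The 1-D toy data and cluster count stand in for the study's image (dis)similarity metric and its five phenotypes; nothing here reflects the actual method details.

```python
import numpy as np

def average_linkage_clusters(d, n_clusters):
    """Naive agglomerative clustering with average linkage on a
    precomputed symmetric dissimilarity matrix d. Returns an integer
    cluster label per item. O(n^3); for illustration only."""
    n = d.shape[0]
    clusters = [[i] for i in range(n)]
    while len(clusters) > n_clusters:
        best, pair = np.inf, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # mean pairwise dissimilarity between the two clusters
                link = d[np.ix_(clusters[i], clusters[j])].mean()
                if link < best:
                    best, pair = link, (i, j)
        i, j = pair
        clusters[i] += clusters.pop(j)  # merge the closest pair
    labels = np.empty(n, dtype=int)
    for k, members in enumerate(clusters):
        labels[members] = k
    return labels

# Toy data: two well-separated groups on a line
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1])
d = np.abs(x[:, None] - x[None, :])   # pairwise dissimilarity matrix
labels = average_linkage_clusters(d, n_clusters=2)
```

The key property exploited by the manifold approach is that clustering needs only the pairwise dissimilarities, never an explicit feature vector per image.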
KEYWORDS: Artificial intelligence, Medical imaging, Computer security, 3D modeling, Computed tomography, Systems modeling, Data modeling, Clouds, Visualization, Tumor growth modeling
The deployment of deep learning algorithms in clinical practice faces challenges in data privacy and local hardware constraints. This work presents the tools and design choices of a browser-based edge computing framework to address these challenges. We leverage this framework for 3D medical image segmentation from computed tomography and characterize its speed, memory, and limitations across various operating systems and browsers. Our platform deploys deep learning-based segmentation of a 256×256×256 volume with an average runtime of 80 seconds and average memory usage of 1.5 GB on Firefox, Chrome, and Microsoft Edge using consumer-level laptops.
Features learned from single radiologic images are unable to provide information about whether and how much a lesion may be changing over time. Time-dependent features computed from repeated images can capture those changes and help identify malignant lesions by their temporal behavior. However, longitudinal medical imaging presents the unique challenge of sparse, irregular time intervals in data acquisition. While self-attention has been shown to be a versatile and efficient learning mechanism for time series and natural images, its potential for interpreting temporal distance between sparse, irregularly sampled spatial features has not been explored. In this work, we propose two interpretations of a time-distance vision transformer (ViT) by using (1) vector embeddings of continuous time and (2) a temporal emphasis model to scale self-attention weights. The two algorithms are evaluated based on benign versus malignant lung cancer discrimination of synthetic pulmonary nodules and lung screening computed tomography studies from the National Lung Screening Trial (NLST). Experiments evaluating the time-distance ViTs on synthetic nodules show a fundamental improvement in classifying irregularly sampled longitudinal images when compared to standard ViTs. In cross-validation on screening chest CTs from the NLST, our methods (0.785 and 0.786 AUC, respectively) significantly outperform a cross-sectional approach (0.734 AUC) and match the discriminative performance of the leading longitudinal medical imaging algorithm (0.779 AUC) on benign versus malignant classification. This work represents the first self-attention-based framework for classifying longitudinal medical images. Our code is available at https://github.com/tom1193/time-distance-transformer.
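Option (1), vector embeddings of continuous time, can be illustrated with a sinusoidal encoding indexed by real acquisition time rather than sequence position, so irregular scan intervals are preserved. The embedding dimension, frequency schedule, and sample dates below are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def continuous_time_embedding(t_days, dim=16, max_period=10000.0):
    """Sinusoidal embedding of continuous scan time (in days), in the
    spirit of transformer positional encodings but indexed by real
    elapsed time so that irregular sampling intervals are preserved."""
    half = dim // 2
    # geometrically spaced frequencies from 1 down to 1/max_period
    freqs = np.exp(-np.log(max_period) * np.arange(half) / half)
    args = np.asarray(t_days, dtype=float)[..., None] * freqs
    return np.concatenate([np.sin(args), np.cos(args)], axis=-1)

# Three scans at irregular intervals: baseline, ~1 year, ~6 weeks later
emb = continuous_time_embedding([0.0, 370.0, 412.0])
```

Each scan's embedding can then be added to (or concatenated with) its spatial feature token before self-attention, so the model sees actual elapsed time instead of just scan order.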
Certain body composition phenotypes, like sarcopenia, are well established as predictive markers for post-surgery complications and overall survival of lung cancer patients. However, their association with incidental lung cancer risk in the screening population is still unclear. We study the feasibility of body composition analysis using chest low dose computed tomography (LDCT). A two-stage, fully automatic pipeline is developed to assess the cross-sectional area of body composition components, including subcutaneous adipose tissue (SAT), muscle, visceral adipose tissue (VAT), and bone at the T5, T8, and T10 vertebral levels. The pipeline is developed using 61 cases of the VerSe'20 dataset, 40 annotated cases of NLST, and 851 in-house screening cases. On a test cohort consisting of 30 cases from the in-house screening cohort (age 55-73, 50% female) and 42 cases of NLST (age 55-75, 59.5% female), the pipeline achieves a root mean square error (RMSE) of 7.25 mm (95% CI: [6.61, 7.85]) for vertebral level identification and mean Dice similarity scores (DSC) of 0.99 ± 0.02, 0.96 ± 0.03, and 0.95 ± 0.04 for SAT, muscle, and VAT, respectively, for body composition segmentation. The pipeline is then applied to the CT arm of the NLST dataset (25,205 subjects, 40.8% female, 1,056 lung cancer incidences). Time-to-event analysis for lung cancer incidence indicates an inverse association between measured muscle cross-sectional area and incidental lung cancer risk (p < 0.001 female, p < 0.001 male). In conclusion, automatic body composition analysis using routine lung screening LDCT is feasible.
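The cross-sectional areas analyzed above follow directly from a 2-D segmentation mask and the scan's in-plane pixel spacing; a minimal sketch with made-up spacing and mask values:

```python
import numpy as np

def cross_sectional_area_cm2(mask, spacing_mm):
    """Area (cm^2) of a 2-D binary segmentation mask (e.g., muscle at a
    vertebral level), given in-plane pixel spacing (row_mm, col_mm)
    from the CT header."""
    mask = np.asarray(mask, dtype=bool)
    return mask.sum() * spacing_mm[0] * spacing_mm[1] / 100.0  # mm^2 -> cm^2

# Toy mask: 4 labeled pixels at 1 mm x 1 mm spacing -> 0.04 cm^2
area = cross_sectional_area_cm2([[1, 1], [1, 1]], (1.0, 1.0))
```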
Concentric tube and steerable needle robots provide minimally invasive access to confined or remote spaces in the human body. While the modeling and control of these devices have received a great deal of attention in the robotics literature, comparatively little attention has been paid to date to the design of the mechatronic system that grasps the tubes/needles at their bases and applies axial twists and telescopic motions to the component tubes, which we refer to as an actuation unit. Toward moving these systems to clinical use, this paper explores the design of a new, compact, modular robotic actuation unit that incorporates new approaches to homing and tool changes. In particular, we accomplish homing using sensors that require no moving wires, eliminating potential failure points on the robot. We also present a new quick-connect mechanism that enables a collection of tubes to be rapidly coupled to or decoupled from the robot. This paper describes our new actuation unit design, illustrating our new tube coupling and homing concepts.