Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 978901 (2016) https://doi.org/10.1117/12.2230174
This PDF file contains the front matter associated with SPIE Proceedings Volume 9789, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Sheng-Yang M. Goh, Andrei Irimia, Paul M. Vespa, John D. Van Horn
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 978903 (2016) https://doi.org/10.1117/12.2216150
In traumatic brain injury (TBI) and intracerebral hemorrhage (ICH), the heterogeneity of lesion sizes and types
necessitates a variety of imaging modalities to acquire a comprehensive perspective on injury extent. Although it is
advantageous to combine imaging modalities and to leverage their complementary benefits, there are difficulties in
integrating information across imaging types. Thus, it is important that efforts be dedicated to the creation and sustained
refinement of resources for multimodal data integration. Here, we propose a novel approach to the integration of
neuroimaging data acquired from human patients with TBI/ICH using various modalities; we also demonstrate the
integrated use of multimodal magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI) data for TBI
analysis based on both visual observations and quantitative metrics. 3D models of healthy-appearing tissues and TBI-related
pathology are generated, both of which are derived from multimodal imaging data. MRI volumes acquired using
FLAIR, SWI, and T2 GRE are used to segment pathology. Healthy tissues are segmented using user-supervised tools,
and results are visualized using a novel graphical approach called a ‘connectogram’, where brain connectivity
information is depicted within a circle of radially aligned elements. Inter-region connectivity and its strength are
represented by links of variable opacities drawn between regions, where opacity reflects the percentage longitudinal
change in brain connectivity density. Our method for integrating, analyzing and visualizing structural brain changes due
to TBI and ICH can promote knowledge extraction and enhance the understanding of mechanisms underlying recovery.
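The opacity mapping described above can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' code; the scaling function and variable names are assumptions.

```python
# Sketch: map the percentage longitudinal change in connectivity density
# between two time points to a link opacity in [0, 1] for a
# connectogram-style plot. The linear scaling is an illustrative assumption.

def link_opacity(density_before, density_after, max_change=100.0):
    """Opacity proportional to absolute percent change, capped at 1.0."""
    if density_before == 0:
        return 1.0  # newly appeared connection: draw fully opaque
    pct_change = 100.0 * abs(density_after - density_before) / density_before
    return min(pct_change / max_change, 1.0)

# A link whose density dropped from 0.8 to 0.6 changed by 25%:
print(link_opacity(0.8, 0.6))  # 0.25
```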
Shijian Wang, Ming Fan, Juan Zhang, Bin Zheng, Xiaojia Wang, Lihua Li
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 978904 (2016) https://doi.org/10.1117/12.2217658
Breast cancer is one of the most common malignant tumors in females, and its incidence is increasing. The key to
decreasing mortality is early diagnosis and appropriate treatment. Molecular classification could provide better
insights into patient-directed therapy and prognosis prediction of breast cancer. It is known that different molecular
subtypes have different characteristics in magnetic resonance imaging (MRI) examination. Therefore, we assumed
that imaging features can reflect molecular information in breast cancer. In this study, we investigated associations
between dynamic contrast-enhanced MRI (DCE-MRI) features and molecular subtypes in breast cancer. Sixty
patients with breast cancer were enrolled and the MR images were pre-processed for noise reduction, registration
and segmentation. Sixty-five dimensional imaging features including statistical characteristics, morphology, texture
and dynamic enhancement in breast lesion and background regions were semiautomatically extracted. The
associations between imaging features and molecular subtypes were assessed by using statistical analyses,
including univariate logistic regression and multivariate logistic regression. The results of multivariate regression
showed that imaging features are significantly associated with molecular subtypes of Luminal A (p=0.00473),
HER2-enriched (p=0.00277), and basal-like (p=0.0117). These results indicate that the three molecular
subtypes are correlated with DCE-MRI features in breast cancer. Specifically, patients with higher lesion
compactness or lower lesion skewness are more likely to have the Luminal A subtype. In addition, a higher value of
dynamic enhancement at time T1 in the contralateral (normal) breast reflects a higher likelihood of the HER2-enriched
subtype.
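The univariate logistic-regression step can be sketched as below. The data, the feature name, and the gradient-descent fit are illustrative assumptions, not the study's pipeline.

```python
import numpy as np

# Toy sketch: regress a binary subtype label (e.g. Luminal A vs. not) on a
# single imaging feature such as lesion compactness. Synthetic data only.

def fit_logistic(x, y, lr=0.1, n_iter=5000):
    """Fit y ~ sigmoid(w*x + b) by gradient descent; returns (w, b)."""
    w, b = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        w -= lr * np.mean((p - y) * x)
        b -= lr * np.mean(p - y)
    return w, b

rng = np.random.default_rng(0)
compactness = rng.normal(0.5, 0.1, 60)                        # hypothetical feature
luminal_a = (compactness + rng.normal(0, 0.05, 60) > 0.5).astype(float)
w, b = fit_logistic(compactness - compactness.mean(), luminal_a)
print(w > 0)  # positive coefficient: higher compactness -> Luminal A
```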
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 978905 (2016) https://doi.org/10.1117/12.2217660
Breast cancer is the second leading cause of cancer death among women in the United States. Currently,
neoadjuvant chemotherapy (NAC) has become a standard treatment paradigm for breast cancer
patients. Therefore, it is important to find a reliable non-invasive assessment and prediction
method which can evaluate and predict the response of NAC on breast cancer. The Dynamic
Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) approach can reflect dynamic
distribution of contrast agent in tumor vessels, providing important basis for clinical diagnosis.
In this study, the efficacy of DCE-MRI on evaluation and prediction of response to NAC in
breast cancer was investigated. To this end, fifty-seven cases of malignant breast cancers with
MRI examinations both before and after two cycles of NAC were analyzed. After pre-processing
to segment breast lesions and background regions, 126-dimensional imaging
features were extracted from DCE-MRI. Statistical analyses were then performed to evaluate
the associations between the extracted DCE-MRI features and the response to NAC.
Specifically, paired t-tests were used to assess differences in imaging features between the MRI
examinations before and after NAC. Moreover, the associations of these image features with
response to NAC were assessed using logistic regression. Significant associations were found
between response to NAC and the features of lesion morphology and background parenchymal
enhancement, especially the feature of background enhancement in normal side of breast
(p=0.011). Our study indicates that DCE-MRI features can provide candidate imaging markers
for predicting the response to NAC in breast cancer.
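The before-and-after comparison described above rests on a paired t statistic, which can be computed as follows. The feature values are synthetic; for real analyses one would use `scipy.stats.ttest_rel`.

```python
import math

# Sketch of the paired t-test comparing one imaging feature before and
# after two NAC cycles. Enhancement values below are made up.

def paired_t(before, after):
    """Paired t statistic for two matched samples."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

before = [1.2, 1.5, 1.1, 1.4, 1.3]   # hypothetical enhancement values
after  = [0.9, 1.1, 0.8, 1.0, 1.1]
t = paired_t(before, after)
print(round(t, 2))                    # strongly negative: enhancement fell
```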
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 978906 (2016) https://doi.org/10.1117/12.2220768
Lung cancer is the leading cause of cancer death. Malignant lung nodules carry extremely high mortality, while
many benign nodules require no treatment. Thus, accurate discrimination between benign and malignant
nodules is necessary. Although an additional invasive biopsy or a follow-up CT scan three months
later can currently help radiologists make a judgment, simpler diagnostic approaches are urgently needed. In this paper, we
propose a novel CAD method to distinguish benign from malignant lung nodules directly from CT images, which can
not only improve the efficiency of tumor diagnosis but also greatly reduce the pain and risk patients face during biopsy
collection. Briefly, following the state-of-the-art radiomics approach, 583 features were first extracted to
measure nodule intensity, shape, heterogeneity, and multi-frequency information. A Random
Forest classifier was then applied to these features to distinguish benign from malignant nodules. Notably, our
proposed scheme was tested on all 79 CT scans with diagnosis data available in The Cancer Imaging Archive (TCIA)
which contain 127 nodules and each nodule is annotated by at least one of four radiologists participating in the project.
This method achieved 82.7% accuracy in classifying malignant primary lung nodules versus benign
nodules. We believe it could add considerable value to routine CT-based lung cancer diagnosis and provide
improvement in decision-support with much lower cost.
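The feature-extraction step can be sketched with a few representative radiomics-style descriptors. The synthetic "scan" and the three features below are illustrative assumptions, not the paper's 583-feature set.

```python
import numpy as np

# Minimal sketch of radiomics-style feature extraction for a nodule:
# intensity statistics plus a simple size descriptor from a binary mask.

def nodule_features(volume, mask, voxel_mm3=1.0):
    vals = volume[mask]
    return {
        "mean_hu": float(vals.mean()),              # intensity
        "std_hu": float(vals.std()),                # heterogeneity proxy
        "volume_mm3": float(mask.sum() * voxel_mm3) # size/shape
    }

scan = np.full((10, 10, 10), -800.0)                # air-like background
scan[3:7, 3:7, 3:7] = 40.0                          # soft-tissue nodule
mask = scan > -400                                  # crude nodule mask
feats = nodule_features(scan, mask)
print(feats["volume_mm3"])  # 64 voxels * 1 mm^3 each
```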
Surgical PACS, 3D Printing, and Imaging Informatics for Non-radiological Applications
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 978908 (2016) https://doi.org/10.1117/12.2216952
An anatomically accurate, functional 3D-printed flow loop phantom of a patient’s cardiac vasculature was used to assist
in the surgical planning of one of the first native transcatheter mitral valve replacement (TMVR) procedures. CTA scans
were acquired from a patient about to undergo the first minimally-invasive native TMVR procedure at the Gates Vascular
Institute in Buffalo, NY. A python scripting library, the Vascular Modeling Toolkit (VMTK), was used to segment the 3D
geometry of the patient’s cardiac chambers and mitral valve with severe calcific stenosis. A stereolithographic
(STL) mesh was generated and AutoDesk Meshmixer was used to transform the vascular surface into a functioning closed
flow loop. A Stratasys Objet 500 Connex3 multi-material printer was used to fabricate the phantom with distinguishable
material features of the vasculature and calcified valve. The interventional team performed a mock procedure on the
phantom, embedding valve cages in the model and imaging the phantom with a Toshiba Infinix INFX-8000V 5-axis C-arm
biplane angiography system.
Results: After performing the mock-procedure on the cardiac phantom, the cardiologists optimized their transapical
surgical approach. The mitral valve stenosis and calcification were clearly visible. The phantom was used to inform the
sizing of the valve to be implanted.
Conclusion: With advances in image processing and 3D printing technology, it is possible to create realistic patient-specific
phantoms which can act as a guide for the interventional team. Using 3D printed phantoms as a valve sizing
method shows potential as a more informative technique than typical CTA reconstruction alone.
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 978909 (2016) https://doi.org/10.1117/12.2217036
Complex vascular anatomies can cause the failure of image-guided endovascular procedures. 3D printed patient-specific
vascular phantoms provide clinicians and medical device companies the ability to preemptively plan surgical treatments,
test the likelihood of device success, and determine potential operative setbacks. This research aims to present advanced
mesh manipulation techniques of stereolithographic (STL) files segmented from medical imaging and post-print surface
optimization to match physiological vascular flow resistance. For phantom design, we developed three mesh
manipulation techniques. The first merges superfluous outlet vessels into a
single junction, decreasing the number of flow outlets and making it feasible to include smaller vessels. Next, we
introduced Boolean operations to eliminate manual merging of mesh layers and the mesh self-intersection errors
that previously occurred. Finally, we optimized support addition to preserve the patient's anatomical geometry.
For post-print surface optimization, we investigated various solutions and methods to remove support material and
smooth the inner vessel surface. Solutions of chloroform, alcohol and sodium hydroxide were used to process various
phantoms, and hydraulic resistance was measured and compared with values reported in the literature. The new mesh
manipulation methods decrease phantom design time by 30-80% and allow for rapid development of accurate
vascular models. We have created 3D printed vascular models with vessel diameters less than 0.5 mm. The methods
presented in this work could lead to shorter design times for patient-specific phantoms and better physiological
simulations.
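When comparing measured phantom resistance with literature values, a common reference point is the ideal Poiseuille resistance of a straight vessel segment. The sketch below is a generic textbook formula with made-up numbers, not the authors' measurement protocol.

```python
import math

# Ideal hydraulic resistance of a cylindrical vessel segment for laminar
# flow of a blood-like fluid: R = 8*mu*L / (pi * r^4). Values illustrative.

def poiseuille_resistance(radius_m, length_m, viscosity_pa_s=3.5e-3):
    """Resistance in Pa*s/m^3."""
    return 8.0 * viscosity_pa_s * length_m / (math.pi * radius_m ** 4)

# The r^4 dependence is why sub-0.5 mm vessels dominate flow resistance:
r1 = poiseuille_resistance(0.5e-3, 0.02)    # 1 mm diameter, 2 cm long
r2 = poiseuille_resistance(0.25e-3, 0.02)   # halved radius
print(round(r2 / r1))  # 16
```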
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890A (2016) https://doi.org/10.1117/12.2216468
Cytopathology is the study of disease at the cellular level and is often used as a screening tool for cancer. Thyroid
cytopathology is a branch of pathology that studies the diagnosis of thyroid lesions and diseases. A pathologist
views cell images that may have high visual variance due to different anatomical structures and pathological
characteristics. To assist the physician with identifying and searching through images, we propose a deep semantic
mobile application. Our work augments recent advances in the digitization of pathology and machine learning
techniques, where there are transformative opportunities for computers to assist pathologists. Our system uses
a custom thyroid ontology that can be augmented with multimedia metadata extracted from images using deep
machine learning techniques. We describe the utilization of a particular methodology, deep convolutional neural
networks, to the application of cytopathology classification. Our method is able to leverage networks that have
been trained on millions of generic images, to medical scenarios where only hundreds or thousands of images
exist. We demonstrate the benefits of our framework through both quantitative and qualitative results.
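The transfer-learning idea sketched above, keeping a pretrained feature extractor frozen and training only a small classifier head on the few hundred medical images available, can be illustrated as follows. The "pretrained features" here are simulated with random vectors; a real system would use an actual CNN backbone.

```python
import numpy as np

# Sketch of transfer learning: only the final linear classifier is trained
# on top of frozen, network-derived feature vectors. All data is synthetic.

rng = np.random.default_rng(1)
n, d = 200, 64                       # few images, fixed-size feature vectors
feats = rng.normal(size=(n, d))      # stand-in for frozen CNN features
w_true = rng.normal(size=d)
labels = (feats @ w_true > 0).astype(float)

# Train only the classifier head (logistic regression by gradient descent)
w = np.zeros(d)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(feats @ w)))
    w -= 0.1 * feats.T @ (p - labels) / n

acc = ((feats @ w > 0) == (labels > 0.5)).mean()
print(acc > 0.9)  # head alone fits the small dataset well
```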
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890B (2016) https://doi.org/10.1117/12.2216657
Today, subjects' medical data in controlled clinical trials are captured digitally in electronic case report forms (eCRFs).
However, eCRFs support integration of subjects' image data only insufficiently, although medical imaging looms
large in studies today. For bedside image integration, we present a mobile application (App) that utilizes the
smartphone-integrated camera. To ensure high image quality with this inexpensive consumer hardware, color reference
cards are placed in the camera’s field of view next to the lesion. The cards are used for automatic calibration of
geometry, color, and contrast. In addition, a personalized code is read from the cards that allows subject identification.
For data integration, the App is connected to a communication and image analysis server that also holds the code-study-subject
relation. In a second system interconnection, web services are used to connect the smartphone with OpenClinica,
an open-source, Food and Drug Administration (FDA)-approved electronic data capture (EDC) system in clinical trials.
Once the photographs have been securely stored on the server, they are released automatically from the mobile device.
The workflow of the system is demonstrated by an ongoing clinical trial, in which photographic documentation is
frequently performed to measure the effect of wound incision management systems. All 205 images, which have been
collected in the study so far, have been correctly identified and successfully integrated into the corresponding subject’s
eCRF. Using this system, manual steps for the study personnel are reduced, and errors, latency, and costs
decrease accordingly. Our approach also increases data security and privacy.
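The color-calibration step against the reference card can be sketched as a least-squares fit of a linear transform mapping observed patch colors to their known values. The patch values below are invented; the App's actual calibration procedure is not published here.

```python
import numpy as np

# Sketch: fit an affine RGB transform from card patches as photographed
# (observed) to their known true colors (reference), then apply it.

observed = np.array([[200., 60., 50.],    # hypothetical photographed patches
                     [60., 190., 70.],
                     [50., 60., 210.],
                     [230., 230., 220.]])
reference = np.array([[255., 0., 0.],     # known true patch colors
                      [0., 255., 0.],
                      [0., 0., 255.],
                      [255., 255., 255.]])

# Solve [observed | 1] @ M ~= reference by least squares
A = np.hstack([observed, np.ones((4, 1))])
M, *_ = np.linalg.lstsq(A, reference, rcond=None)

calibrated = A @ M     # with 4 patches the affine fit is exact
print(np.allclose(calibrated, reference))
```

In practice one would use more patches than unknowns, so the fit averages out sensor noise rather than interpolating it exactly.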
Imaging Informatics for Diagnostics and Therapeutic Applications
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890C (2016) https://doi.org/10.1117/12.2216574
In radiology, diagnostic errors occur either through the failure of detection or incorrect interpretation. Errors are
estimated to occur in 30-35% of all exams and contribute to 40-54% of medical malpractice litigations. In this
work, we focus on reducing incorrect interpretation of known imaging features.
Existing literature categorizes the cognitive biases that lead a radiologist to an incorrect diagnosis despite having correctly
recognized the abnormal imaging features: anchoring bias, framing effect, availability bias, and premature closure.
Computational methods make a unique contribution, as they do not exhibit the same cognitive biases as a human.
Bayesian networks formalize the diagnostic process. They modify pre-test diagnostic probabilities using clinical
and imaging features, arriving at a post-test probability for each possible diagnosis.
To translate Bayesian networks to clinical practice, we implemented an entirely web-based open-source software
tool. In this tool, the radiologist first selects a network of choice (e.g., basal ganglia). Then large, clearly labeled
buttons for salient imaging features appear on the screen, serving both as a checklist and as input. As
the radiologist inputs the value of an extracted imaging feature, the conditional probabilities of each possible
diagnosis are updated. The software presents its level of diagnostic discrimination using a Pareto distribution chart,
updated with each additional imaging feature.
Active collaboration with the clinical radiologist is a feasible approach to software design and leads to design
decisions closely coupling the complex mathematics of conditional probability in Bayesian networks with practice.
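The pre-test to post-test update underlying such a tool can be sketched as a naive Bayes calculation over independent imaging features. The diagnoses, features, and probabilities below are invented for illustration and are not taken from the authors' networks.

```python
# Sketch: update prior diagnostic probabilities with per-feature
# likelihoods, then normalize to post-test probabilities.

priors = {"diagnosis_a": 0.7, "diagnosis_b": 0.3}          # pre-test
likelihoods = {                                            # P(feature | dx)
    "calcification":   {"diagnosis_a": 0.2, "diagnosis_b": 0.8},
    "t2_hyperintense": {"diagnosis_a": 0.6, "diagnosis_b": 0.9},
}

def post_test(observed_features):
    """Normalized posterior over diagnoses given observed features."""
    post = dict(priors)
    for f in observed_features:
        for dx in post:
            post[dx] *= likelihoods[f][dx]
    z = sum(post.values())
    return {dx: p / z for dx, p in post.items()}

p = post_test(["calcification", "t2_hyperintense"])
print(p["diagnosis_b"] > p["diagnosis_a"])  # evidence outweighs the prior
```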
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890D (2016) https://doi.org/10.1117/12.2216959
Minimum intensity projection is a technique commonly used to display magnetic resonance susceptibility weighted
images, allowing the observer to better visualize hemorrhages and vasculature. The technique displays the minimum
intensity in a given projection within a thick slab, allowing different connectivity patterns to be easily revealed.
Unfortunately, the low signal intensity of the skull within the thick slab can mask superficial tissues near the skull base
and other regions. Because superficial microhemorrhages are a common feature of traumatic brain injury, this effect
limits the ability to properly diagnose and follow up patients. To overcome this limitation, we developed a method
to allow minimum intensity projection to properly display superficial tissues adjacent to the skull. Our approach is based
on two brain masks, the largest of which includes extracerebral voxels. The analysis of the rind within both masks
containing the actual brain boundary allows reclassification of those voxels initially missed in the smaller mask.
Morphological operations are applied to guarantee accuracy and topological correctness, and the mean intensity within
the mask is assigned to all outer voxels. This prevents bone from dominating superficial regions in the projection,
enabling superior visualization of cortical hemorrhages and vessels.
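The masking idea, replacing voxels outside the brain mask with the mask's mean intensity so that dark skull no longer wins the minimum, can be sketched as below. Array sizes and intensities are illustrative, and the two-mask rind analysis is not reproduced.

```python
import numpy as np

# Sketch: masked minimum intensity projection (minIP). Voxels outside the
# brain mask are set to the in-mask mean before taking the minimum.

def masked_minip(volume, brain_mask, axis=0):
    mean_in_mask = volume[brain_mask].mean()
    filled = np.where(brain_mask, volume, mean_in_mask)
    return filled.min(axis=axis)

vol = np.full((4, 5, 5), 100.0)      # brain-like tissue
vol[:, 0, :] = 5.0                   # dark "skull" layer
mask = np.ones_like(vol, bool)
mask[:, 0, :] = False                # skull excluded from brain mask
vol[2, 3, 3] = 20.0                  # superficial microhemorrhage

proj = masked_minip(vol, mask)
print(proj[0, 0], proj[3, 3])        # skull replaced by mean; bleed kept
```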
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890E (2016) https://doi.org/10.1117/12.2217733
We have developed an imaging informatics based decision support system that learns from retrospective treatment plans
to provide recommendations for healthy tissue sparing to prospective incoming patients. This system incorporates a
model of best practices from previous cases, specific to tumor anatomy. Ultimately, our hope is to improve clinical
workflow efficiency, patient outcomes and to increase clinician confidence in decision-making. The success of such a
system depends greatly on the training dataset, which in this case, is the knowledge base that the data-mining algorithm
employs. The size and heterogeneity of the database is essential for good performance. Since most institutions employ
standard protocols and practices for treatment planning, the diversity of this database can be greatly increased by
including data from different institutions. This work presents the results of incorporating cross-country, multi-institutional
data into our decision support system for evaluation and testing.
Kevin Ma, Joseph Liu, Xuejun Zhang, Alex Lerner, Mark Shiroishi, Lilyana Amezcua, Brent Liu
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890F (2016) https://doi.org/10.1117/12.2217903
We have designed and developed a multiple sclerosis eFolder system for patient data storage, image
viewing, and storage of automatic lesion quantification results in DICOM-SR format. The web-based system
aims to be integrated in DICOM-compliant clinical and research environments to aid clinicians in patient
treatments and data analysis. The system quantifies lesion volumes and identifies and registers lesion
locations to track shifts in lesion volume and count over a longitudinal study. To perform lesion
registration, we have developed a brain warping and normalizing methodology using Statistical Parametric
Mapping (SPM) MATLAB toolkit for brain MRI. Patients’ brain MR images are processed via SPM’s
normalization processes, and the brain images are analyzed and warped according to the tissue probability
map. Lesion identification and contouring are completed by neuroradiologists, and lesion volume
quantification is completed by the eFolder’s CAD program. Lesion comparison results in longitudinal
studies show key growth and active regions. The results display successful lesion registration and tracking
over a longitudinal study. Lesion change results are graphically represented in the web-based user interface,
and users are able to correlate patient progress and changes in the MRI images. The completed lesion and
disease tracking tool would enable the eFolder to provide complete patient profiles, improve the efficiency
of patient care, and perform comprehensive data analysis through an integrated imaging informatics
system.
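The lesion-volume quantification step can be sketched from a binary contour mask and the voxel spacing. The spacing values below are hypothetical; in practice they come from the DICOM PixelSpacing and SliceThickness tags.

```python
import numpy as np

# Sketch: lesion volume from a binary mask plus a longitudinal percent
# change, as a CAD module might report it. All values are synthetic.

def lesion_volume_ml(mask, spacing_mm=(1.0, 1.0, 3.0)):
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return mask.sum() * voxel_mm3 / 1000.0     # mm^3 -> mL

baseline = np.zeros((10, 10, 10), bool)
baseline[2:6, 2:6, 2:6] = True                 # 64-voxel lesion
followup = np.zeros_like(baseline)
followup[2:7, 2:7, 2:6] = True                 # 100 voxels: lesion grew

v0, v1 = lesion_volume_ml(baseline), lesion_volume_ml(followup)
print(round(100.0 * (v1 - v0) / v0))  # percent growth between studies
```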
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890G (2016) https://doi.org/10.1117/12.2218363
Understanding the neural basis of Major Depressive Disorder (MDD) is important for the diagnosis and treatment of this
mental disorder. The default mode network (DMN) is considered to be highly involved in MDD. To find directed
interaction between DMN regions associated with the development of MDD, the effective connectivity within the DMN
of the MDD patients and matched healthy controls was estimated by using a recently developed spectral dynamic causal
modeling. Sixteen patients with MDD and sixteen matched healthy control subjects were included in this study. While
the control group underwent the resting state fMRI scan just once, all patients underwent resting state fMRI scans before
and after two months’ treatment. The spectral dynamic causal modeling was used to estimate directed connections
between four DMN nodes. Statistical analysis on connection strengths indicated that efferent connections from the
medial frontal cortex (MFC) to the posterior cingulate cortex (PCC) and right parietal cortex (RPC) were significantly
higher in pretreatment MDD patients than in the control group. After two months of treatment, the efferent
connections from the MFC decreased significantly, while those from the left parietal cortex (LPC) to MFC, PCC and
RPC showed a significant increase. These findings suggest that the MFC may play an important role in inhibitory
conditioning of the DMN, which is disrupted in MDD patients. They also indicate that the disrupted suppressive function of
the MFC can be effectively restored after two months of treatment.
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890H (2016) https://doi.org/10.1117/12.2225146
Human bone age can be estimated from the ossification of the carpal and epiphyseal bones, an approach limited to the
teenage years. Accurate age estimation depends on good separation of bone pixels from soft-tissue pixels in the ROI image.
Traditional approaches such as Canny, Sobel, clustering, region growing, and watershed can be applied, but these
methods require proper pre-processing and accurate initial seed-point estimation to provide accurate results.
This paper therefore proposes a new approach to segment bone from soft-tissue and background pixels. First, pixels
are enhanced using BPE, and edges are identified by HIPI. K-means clustering is then applied for
segmentation. The performance of the proposed approach has been evaluated and compared with the existing
methods.
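The final K-means step can be sketched on 1-D pixel intensities. The intensities, the choice of k=3 (background, soft tissue, bone), and the quantile initialization are illustrative assumptions; the paper's BPE/HIPI pre-processing is not reproduced here.

```python
import numpy as np

# Sketch: cluster pixel intensities into k groups with plain K-means.

def kmeans_1d(values, k=3, n_iter=20):
    # deterministic init: centers at evenly spaced quantiles
    centers = np.quantile(values, np.linspace(0.05, 0.95, k))
    for _ in range(n_iter):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return np.sort(centers)

pixels = np.concatenate([
    np.full(100, 10.0),    # background
    np.full(100, 90.0),    # soft tissue
    np.full(100, 200.0),   # bone
])
print(kmeans_1d(pixels))   # recovers the three intensity levels
```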
Big Data Technologies and Image Sharing in Medical Imaging and Informatics
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890I (2016) https://doi.org/10.1117/12.2204970
The paper presents a Case-Based Reasoning Tool for Breast Cancer Knowledge Management to improve breast cancer
screening. To develop this tool, we combine both concepts and techniques of Case-Based Reasoning (CBR) and Data
Mining (DM). Physicians and radiologists ground their diagnoses in their expertise, i.e., past experience with clinical
cases. Case-based reasoning is the process of solving new problems based on the solutions of similar past problems
structured as cases, and it is well suited to medical use. On the other hand, existing traditional hospital information systems
(HIS), radiological information systems (RIS), and picture archiving and communication systems (PACS) do not allow
efficient management of medical information because of its complexity and heterogeneity. Data mining is the process of
extracting information from a data set and transforming it into an understandable structure for further use. Combining CBR with
data mining techniques will facilitate diagnosis and decision-making for medical experts.
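The CBR "retrieve" step, finding the stored case most similar to a new problem, can be sketched with a nearest-neighbor search over normalized features. The case fields and values below are invented for illustration.

```python
import math

# Sketch: retrieve the most similar past case by Euclidean distance over
# normalized features, then reuse its solution (outcome). Synthetic cases.

cases = [
    {"age": 0.45, "mass_size": 0.30, "density": 0.70, "outcome": "benign"},
    {"age": 0.60, "mass_size": 0.80, "density": 0.90, "outcome": "malignant"},
    {"age": 0.50, "mass_size": 0.35, "density": 0.65, "outcome": "benign"},
]
FEATURES = ("age", "mass_size", "density")

def most_similar(query, case_base):
    def dist(c):
        return math.sqrt(sum((query[f] - c[f]) ** 2 for f in FEATURES))
    return min(case_base, key=dist)

query = {"age": 0.48, "mass_size": 0.33, "density": 0.68}
print(most_similar(query, cases)["outcome"])  # reuse nearest case's solution
```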
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890J (2016) https://doi.org/10.1117/12.2216278
The National Library of Medicine (NLM) has made a collection of over 1.2 million research articles containing 3.2
million figure images searchable using the Open-i multimodal (text + image) search engine. Many images are visible
light photographs, some of which contain faces (“face images”). Some of these face images were acquired
in unconstrained settings, while others are studio photos. To extract the face regions in the images, we first applied one
of the most widely-used face detectors, a pre-trained Viola-Jones detector implemented in Matlab and OpenCV. The
Viola-Jones detector was trained for unconstrained face image detection, but the results for the NLM database included
many false positives, which resulted in a very low precision. To improve this performance, we applied a deep learning
technique, which reduced the number of false positives and as a result, the detection precision was improved
significantly. (For example, the classification accuracy for identifying whether the face regions output by this Viola-
Jones detector are true positives or not in a test set is about 96%.) By combining these two techniques (Viola-Jones and
deep learning) we were able to increase the system precision considerably, while avoiding the need to manually construct
a large training set by manual delineation of the face regions.
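A two-stage pipeline of this kind can be sketched generically (this is our own illustration, not the paper's code: `propose` and `classify` are hypothetical stand-ins for the Viola-Jones detector and the deep-learning false-positive filter):

```python
def two_stage_detect(image, propose, classify):
    """propose(image) -> candidate regions; classify(region) -> bool."""
    return [region for region in propose(image) if classify(region)]

def precision(detections, truth):
    """Fraction of detected regions that are true faces."""
    hits = sum(1 for d in detections if d in truth)
    return hits / len(detections) if detections else 0.0

# Toy data: 2 true faces among 5 first-stage proposals.
truth = {"r1", "r2"}
proposals = ["r1", "r2", "r3", "r4", "r5"]
stage1 = proposals                                   # precision 0.4
# A second-stage classifier that rejects two of the false positives:
stage2 = two_stage_detect("page.png",
                          lambda img: proposals,
                          lambda region: region in {"r1", "r2", "r3"})
print(precision(stage1, truth))                      # 0.4
print(precision(stage2, truth))                      # ~0.67
```

Filtering the proposals raises precision because only detections the second stage accepts are kept; the deep-learning classifier in the paper plays the role of `classify`.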
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890K (2016) https://doi.org/10.1117/12.2216648
Due to the huge amount of research involving medical images, there is a widely accepted need for
comprehensive collections of medical images to be made available for research. This demand led to the
design and implementation of a flexible image repository, which retrospectively collects images and data
from multiple sites throughout the UK. The OPTIMAM Medical Image Database (OMI-DB) was created to
provide a centralized, fully annotated dataset for research. The database contains both processed and
unprocessed images, associated data, annotations and expert-determined ground truths. Collection has been
ongoing for over three years, providing the opportunity to collect sequential imaging events. Extensive
alterations to the identification, collection, processing and storage arms of the system have been undertaken to
support the introduction of sequential events, including interval cancers.
These updates to the collection systems allow the acquisition of many more images but, more importantly, allow one to build on the existing high-dimensional data stored in the OMI-DB. A research dataset of this scale, which includes original normal and subsequent malignant cases along with expert-derived and clinical annotations, is currently unique. These data provide a powerful resource for future research and have initiated new research projects, among which is the quantification of normal cases by applying a large number of quantitative imaging features, with a priori knowledge that these cases eventually develop a malignancy. This paper describes extensions to the OMI-DB collection systems and tools and discusses the prospective applications of such a rich dataset for future research.
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890L (2016) https://doi.org/10.1117/12.2216746
Background: The extraction and analysis of image features (radiomics) is a promising field in the precision medicine era, with applications to prognosis, prediction, and quantification of response to treatment. In this work, we present a mutual information-based method for quantifying the reproducibility of features, a necessary qualification step before their inclusion in big data systems.
Materials and Methods: Ten patients with Non-Small Cell Lung Cancer (NSCLC) lesions were followed over time (7 time points on average) with Computed Tomography (CT). Five observers segmented the lesions using a semi-automatic method, and 27 features describing shape and intensity distribution were extracted. Inter-observer reproducibility was assessed by computing the multi-information (MI) of feature changes over time and the variability of global extrema.
Results: The highest MI values were obtained for volume-based features (VBF). The lesion mass (M), surface-to-volume ratio (SVR) and volume (V) presented statistically significantly higher values of MI than the rest of the features. Within the same VBF group, SVR also showed the lowest variability of extrema. The correlation coefficient (CC) of feature values was unable to differentiate between features.
Conclusions: MI allowed three features (M, SVR, and V) to be discriminated from the rest in a statistically significant manner. This result is consistent with the order obtained when sorting features by increasing extrema variability. MI is a promising alternative for selecting features to be considered as surrogate biomarkers in a precision medicine context.
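As a rough sketch of the reproducibility measure (our own illustration, not the authors' implementation), multi-information over discretized feature-change sequences can be computed as the sum of the marginal entropies minus the joint entropy:

```python
from collections import Counter
from math import log2

def entropy(symbols):
    """Shannon entropy (bits) of a sequence of discrete symbols."""
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in Counter(symbols).values())

def multi_information(observers):
    """Multi-information of equal-length discrete sequences, one per observer."""
    joint = list(zip(*observers))          # joint symbol at each time point
    return sum(entropy(o) for o in observers) - entropy(joint)

# Feature changes discretized to "up"/"down" at four time points (toy data).
a = ["up", "down", "up", "down"]
b = ["up", "up", "down", "down"]
print(multi_information([a, a]))   # 1.0  (perfect agreement shares 1 bit)
print(multi_information([a, b]))   # 0.0  (no shared information)
```

For two observers this reduces to ordinary mutual information; high values indicate that observers' feature changes track each other, which is the sense in which MI measures inter-observer reproducibility here.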
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890M (2016) https://doi.org/10.1117/12.2218257
Providing eligibility, efficacy and security evaluation by quantitative and qualitative disease findings, medical imaging
has become increasingly important in clinical trials. Here, subject’s data is today captured in electronic case reports
forms (eCRFs), which are offered by electronic data capture (EDC) systems. However, integration of subject’s medical
image data into eCRFs is insufficiently supported. Neither integration of subject’s digital imaging and communications
in medicine (DICOM) data, nor communication with picture archiving and communication systems (PACS), is possible.
This aggravates the workflow of the study personnel, in special regarding studies with distributed data capture in
multiple sites. Hence, in this work, a system architecture is presented, which connects an EDC system, a PACS and a
DICOM viewer via the web access to DICOM objects (WADO) protocol. The architecture is implemented using the
open source tools OpenClinica, DCM4CHEE and Weasis. The eCRF forms the primary endpoint for the study personnel,
where subject’s image data is stored and retrieved. Background communication with the PACS is completely hidden for
the users. Data privacy and consistency is ensured by automatic de-identification and re-labelling of DICOM data with
context information (e.g. study and subject identifiers), respectively. The system is exemplarily demonstrated in a
clinical trial, where computer tomography (CT) data is de-centrally captured from the subjects and centrally read by a
chief radiologists to decide on inclusion of the subjects in the trial. Errors, latency and costs in the EDC workflow are
reduced, while, a research database is implicitly built up in the background.
Cloud Computing and Collaborating for Medical Imaging Services and Applications
Andreas Fetzer, Jasmin Metzger, Darko Katic, Keno März, Martin Wagner, Patrick Philipp, Sandy Engelhardt, Tobias Weller, Sascha Zelzer, et al.
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890O (2016) https://doi.org/10.1117/12.2217163
In the surgical domain, individual clinical experience, which is derived in large part from past clinical cases, plays an important role in the treatment decision process. Simultaneously, the surgeon has to keep track of a large amount of clinical data emerging from a number of heterogeneous systems during all phases of surgical treatment. This is complemented by the constantly growing knowledge derived from clinical studies and the literature. Recalling this vast amount of information at the right moment poses a growing challenge that should be supported by adequate technology.
While many tools and projects aim at sharing or integrating data from various sources, or even provide knowledge-based decision support, to our knowledge no concept has been proposed that addresses the entire surgical pathway by drawing on all available information in order to provide context-aware cognitive assistance. A semantic representation and central storage of data and knowledge is therefore a fundamental requirement.
We present a semantic data infrastructure for integrating heterogeneous surgical data sources based on a common
knowledge representation. A combination of the Extensible Neuroimaging Archive Toolkit (XNAT) with semantic
web technologies, standardized interfaces and a common application platform enables applications to access and
semantically annotate data, perform semantic reasoning and eventually create individual context-aware surgical
assistance.
The infrastructure meets the requirements of a cognitive surgical assistant system and has been successfully
applied in various use cases. The system is based completely on free technologies and is available to the community
as an open-source package.
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890P (2016) https://doi.org/10.1117/12.2216421
This paper describes a Computer-Aided Diagnosis (CAD) system based on a cellphone client and a distributed cluster. One of the bottlenecks in building a CAD system for clinical practice is storing and processing large numbers of pathology samples freely among different devices, and conventional pattern-matching algorithms on large-scale image sets are very time-consuming. Distributed computation on a cluster has demonstrated the ability to relieve this bottleneck. We developed a system that enables the user to compare a query image to a dataset with a feature table by sending datasets to a Generic Data Handler Module in Hadoop, where pattern recognition is undertaken for the detection of skin diseases. Single and combined retrieval algorithms, applied to the data pipeline on the MapReduce framework, are used in our system to make an optimal trade-off between recognition accuracy and system cost. The profile of the lesion area is drawn manually by doctors on the screen and then uploaded to the server. In our evaluation experiment, a diagnostic hit rate of 75% was obtained by testing 100 patients with skin illness. Our system has the potential to help build a novel medical image dataset by collecting large amounts of gold-standard annotations during medical diagnosis. Once the project is online, participants are free to join, and a sample dataset abundant enough for learning will soon be gathered. These results demonstrate that our technology is very promising and is expected to be used in clinical practice.
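The retrieval step can be pictured as a MapReduce-style job (our own sketch in plain Python, not the system's Hadoop code; the similarity measure and dataset are toy assumptions):

```python
from functools import reduce

def mapper(pattern, item):
    """Map one database image to (id, similarity score vs. the lesion pattern)."""
    image_id, features = item
    # Toy similarity: negative sum of absolute feature differences.
    score = -sum(abs(a - b) for a, b in zip(pattern, features))
    return image_id, score

def reducer(best, scored):
    """Reduce step: keep the highest-scoring match seen so far."""
    return scored if scored[1] > best[1] else best

dataset = {"img1": [1, 2, 3], "img2": [1, 2, 2], "img3": [9, 9, 9]}
pattern = [1, 2, 2]                      # lesion profile uploaded by the doctor
scored = [mapper(pattern, item) for item in dataset.items()]
print(reduce(reducer, scored))           # ('img2', 0)
```

In the real system the map phase is distributed across the Hadoop cluster, so the scan over the image set runs in parallel instead of as a single loop.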
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890Q (2016) https://doi.org/10.1117/12.2217396
Adopting high-performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provides reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant, allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck not only for lab computers but also for some local grids. Integrating JIST with the AWS cloud alleviates these restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configuration are two key challenges in this research. Using a simple unified control panel, users can set the number of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and amounts of memory. We configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances, so S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances come with all packages necessary to run JIST pre-installed. This work presents an implementation that facilitates the integration of JIST with AWS. We describe theoretical cost/benefit formulae for deciding between local serial execution and cloud computing, and apply this analysis to an empirical diffusion tensor imaging pipeline.
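The kind of trade-off the cost/benefit analysis captures can be sketched as follows (our own formulation with illustrative numbers, not the paper's actual formulae):

```python
import math

def local_time(n_jobs, t_job):
    """Wall-clock seconds for serial execution on one local machine."""
    return n_jobs * t_job

def cloud_time(n_jobs, t_job, k_nodes):
    """Wall-clock seconds with k-way parallelism across cloud nodes."""
    return math.ceil(n_jobs / k_nodes) * t_job

def cloud_cost(n_jobs, t_job, k_nodes, rate_per_node_hour):
    """Dollar cost of keeping k nodes up for the cloud wall-clock time."""
    hours = cloud_time(n_jobs, t_job, k_nodes) / 3600
    return k_nodes * hours * rate_per_node_hour

# 100 one-hour jobs on 20 nodes: 5 hours wall clock instead of 100.
print(local_time(100, 3600) / 3600)     # 100.0
print(cloud_time(100, 3600, 20) / 3600) # 5.0
print(cloud_cost(100, 3600, 20, 0.10))  # 10.0
```

The decision rule is then a comparison of wall-clock savings against the monetary cost; the hourly rate and per-job time here are placeholders, not AWS pricing.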
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890R (2016) https://doi.org/10.1117/12.2225139
Although computer-aided diagnosis (CAD) systems can be applied to classify breast masses, the effect of this method on improving radiologists' accuracy in distinguishing malignant from benign lesions remains unclear. This study provides a novel method for classifying breast masses by integrating the intelligence of human and machine. In this research, 224 breast masses with Breast Imaging Reporting and Data System (BI-RADS) categories were selected from mammograms in the DDSM database. Three observers (a senior and a junior radiologist, as well as a radiology resident) independently read and classified these masses using the Positive Predictive Value (PPV) for each BI-RADS category. Meanwhile, a CAD system was also implemented to classify these breast masses as malignant or benign. To combine the decisions from the radiologists and CAD, a Multi-Agent fusion method was developed. Significant improvements are observed for the fusion system over either the radiologists or CAD alone. The area under the receiver operating characteristic curve (AUC) of the fusion system increased by 9.6%, 10.3% and 21% compared to that of the senior, junior and resident radiologists, respectively. In addition, the AUC values based on the fusion of each individual radiologist with CAD are 3.5%, 3.6% and 3.3% higher than that of CAD alone. Finally, the fusion of all three radiologists with CAD achieved an AUC of 0.957, which was 5.6% larger than that of CAD. Our results indicate that the proposed fusion method performs better than either radiologists or CAD alone.
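The AUC figures above can be computed directly from classifier scores, since the AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney U interpretation). A sketch with toy scores, not the study's data:

```python
def auc(pos_scores, neg_scores):
    """AUC as the win rate of positive scores over negative scores."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

malignant = [0.9, 0.8, 0.6]   # CAD scores for malignant masses (toy data)
benign = [0.7, 0.3, 0.2]      # CAD scores for benign masses
print(auc(malignant, benign)) # 0.888...
```

Ties count as half a win, which matches the trapezoidal area under the ROC curve.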
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890S (2016) https://doi.org/10.1117/12.2218211
Mammography is the gold standard for breast cancer screening, reducing mortality by about 30%. Applying a computer-aided detection (CAD) system to assist a single radiologist is important for further improving mammographic sensitivity for breast cancer detection. In this study, the design and realization of a prototype remote diagnosis system for mammography based on a cloud platform is proposed. To build this system, technologies including medical image information management, cloud infrastructure and a human-machine diagnosis model were utilized. Specifically, on one hand, the web platform for remote diagnosis was established with J2EE web technology, and the back end was realized through the open-source Hadoop framework. On the other hand, the storage system was built on the Hadoop Distributed File System (HDFS), which enables users to easily develop and run applications on massive data and exploits the advantages of cloud computing: high efficiency, scalability and low cost. In addition, the CAD system was realized through the MapReduce framework. The diagnosis module in this system implements algorithms that fuse machine and human intelligence; specifically, we combined diagnoses from doctors' experience with traditional CAD using a man-machine intelligent fusion model based on Alpha-Integration and a multi-agent algorithm. Finally, applications of this system at different levels of the platform are also discussed. This diagnosis system will be of great importance for balancing health resources, lowering medical expenses and improving diagnostic accuracy in basic medical institutions.
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890T (2016) https://doi.org/10.1117/12.2216183
Radiologists currently use a variety of terminologies and standards in most hospitals in China; there are even multiple terminologies in use for different sections within a single department. In this presentation, we introduce a medical semantic comprehension system (MedSCS) that extracts semantic information about clinical findings and conclusions from free-text radiology reports, so that the reports can be classified correctly against medical term indexing standards such as RadLex or SNOMED CT. Our system (MedSCS) is based on both rule-based and statistics-based methods, which improves the performance and scalability of MedSCS. To evaluate the overall system and measure the accuracy of the outcomes, we developed computational methods to calculate precision, recall, F-score and the exact confidence interval.
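The evaluation metrics mentioned can be sketched as follows (our own illustration with toy counts; the exact interval is computed by inverting the binomial CDF with bisection, one standard way to obtain a Clopper-Pearson interval):

```python
from math import comb

def prf(tp, fp, fn):
    """Precision, recall and F-score from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(successes, n, alpha=0.05):
    """Exact two-sided binomial confidence interval, found by bisection."""
    def solve(f):
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if f(mid) else (lo, mid)
        return lo
    lower = 0.0 if successes == 0 else solve(
        lambda p: binom_cdf(successes - 1, n, p) > 1 - alpha / 2)
    upper = 1.0 if successes == n else solve(
        lambda p: binom_cdf(successes, n, p) > alpha / 2)
    return lower, upper

p, r, f = prf(tp=45, fp=5, fn=5)
print(round(p, 2), round(r, 2), round(f, 2))   # 0.9 0.9 0.9
print(clopper_pearson(45, 50))                 # roughly (0.78, 0.96)
```

The exact interval matters at the modest sample sizes typical of report-classification test sets, where normal-approximation intervals can be misleading.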
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890U (2016) https://doi.org/10.1117/12.2216367
The increasing adoption of electronic medical records has made information more accessible to clinicians and researchers through dedicated systems such as HIS, RIS and PACS. The speed and volume at which information is generated in a multi-institutional clinical study make the problem more complicated than day-to-day hospital workflow, and increased access to information often does not translate into efficient use of that information. Therefore, it becomes crucial to establish models which can be used to organize and visualize multi-disciplinary data; good visualization in turn makes it easy for clinical decision-makers to reach a conclusion within a short span of time. In a clinical study involving multi-disciplinary data and multiple user groups who need access to the same data, presentation states based on the stage of the clinical trial or the task at hand are crucial within the workflow. Therefore, to demonstrate the conceptual system design and workflow, we present a clinical trial on the application of proton beam radiosurgery that utilizes our proposed system. To demonstrate user roles and visualization design, we focus on three user groups: researchers involved in patient enrollment and recruitment, clinicians involved in treatment and imaging review, and the principal investigators involved in monitoring the progress of the clinical study. We also describe datasets for each phase of the clinical study, including preclinical and clinical data related to subject enrollment, subject recruitment (classifier), treatment (DICOM), imaging, and pathological analysis (protein staining) of outcomes.
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890V (2016) https://doi.org/10.1117/12.2216451
In medical imaging informatics, content-based image retrieval (CBIR) techniques are employed to aid radiologists in retrieving images with similar content. CBIR uses visual contents, normally called image features, to search images from large-scale image databases according to users' requests in the form of a query image. However, most current CBIR systems perform a query by computing distances between image feature vectors, and these distance computations can be time-consuming when the number of image features grows large, which limits the usability of such systems. In this presentation, we propose a novel framework that uses a high-dimensional database to index the image features, improving the accuracy and retrieval speed of CBIR in an integrated RIS/PACS.
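The query step of a CBIR system reduces to nearest-neighbor search over feature vectors; a minimal brute-force sketch (our own illustration with toy two-dimensional features) is shown below. A high-dimensional index, as the framework proposes, would replace the linear scan:

```python
import numpy as np

def query(db_features, query_vec, k=3):
    """db_features: (n, d) array; returns indices of the k nearest images."""
    dists = np.linalg.norm(db_features - query_vec, axis=1)
    return np.argsort(dists)[:k]

# Toy database of four images, each described by a 2-D feature vector.
db = np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.0], [5.0, 5.0]])
print(query(db, np.array([0.0, 0.1]), k=2))    # [0 2]
```

The brute-force scan is O(n·d) per query, which is exactly the cost the abstract identifies as the bottleneck once the feature dimension d grows large.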
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890W (2016) https://doi.org/10.1117/12.2216560
The purpose of this study is to investigate the roles of shape and texture in the classification of hepatic fibrosis by selecting the optimal parameters for a better computer-aided diagnosis (CAD) system. Ten surface shape features are extracted from a standardized profile of the liver, while 15 texture features calculated from the gray level co-occurrence matrix (GLCM) are extracted within an ROI in the liver. Each combination of these input subsets is checked using a support vector machine (SVM) with the leave-one-case-out method to differentiate fibrosis into two groups: normal or abnormal. The accuracy using all 15 texture features is 66.83%, while that using all 10 shape features is 85.74%. The irregularity of liver shape can indicate fibrotic grade efficiently, and the texture features of CT images are not recommended for use with shape features in the interpretation of cirrhosis.
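A GLCM for a single pixel offset, with two of the classic texture features derived from it, can be sketched as follows (our own illustration on a toy 4x4 ROI, not the study's implementation):

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Normalized co-occurrence matrix of gray-level pairs at one offset."""
    m = np.zeros((levels, levels))
    dr, dc = offset
    rows, cols = image.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            m[image[r, c], image[r + dr, c + dc]] += 1
    return m / m.sum()

def contrast(p):
    """Weights each pair probability by the squared gray-level difference."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

def homogeneity(p):
    """Rewards pairs whose gray levels are close to each other."""
    i, j = np.indices(p.shape)
    return float((p / (1.0 + np.abs(i - j))).sum())

roi = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(roi, levels=4)
print(round(contrast(p), 3), round(homogeneity(p), 3))   # 0.583 0.819
```

In practice such features are computed per offset and direction and then fed, as here, to an SVM or similar classifier.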
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890X (2016) https://doi.org/10.1117/12.2217314
To improve healthcare service quality by balancing healthcare resources between large and small hospitals, as well as by reducing costs, each district health administration in Shanghai, a city of more than 24 million citizens, has built an image-enabled electronic healthcare record (iEHR) system to share patient medical records. Patients are encouraged to visit small hospitals first for initial evaluations and preliminary diagnoses, then go to large hospitals for specialist services. We implemented a solution for iEHR systems based on the IHE XDS-I integration profile and, over the last few years, installed the systems in more than 100 hospitals across three districts in Shanghai and one city in Jiangsu Province. Here, we report operational results of these systems in these four districts and evaluate their performance in serving regional collaborative imaging diagnosis.
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890Y (2016) https://doi.org/10.1117/12.2218093
Previously, we presented an ePR system to support imaging-based stroke rehabilitation clinical trials. To facilitate data analysis, we developed a generalized linear mixed-effects model (GLMM) module to investigate correlations based on features extracted from the textual database and imaging biomarkers. With the proposed module, the system can evaluate a variety of measurements, including quantitative imaging features. Moreover, once an accurate GLMM is identified from the clinical trial, the module can be used to predict outcomes for new patients based on their conditions and serve as a decision support tool for optimizing treatment plans.
Proceedings Volume Medical Imaging 2016: PACS and Imaging Informatics: Next Generation and Innovations, 97890Z (2016) https://doi.org/10.1117/12.2225142
Neoadjuvant chemotherapy (NACT) is being used increasingly in the management of patients with breast cancer to systemically reduce the size of the primary tumor before surgery in order to improve survival. The clinical response of patients to NACT is correlated with the reduction or disappearance of their primary tumor, which is important for the next stage of treatment. Recently, dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has been used to evaluate the response of patients to NACT. To measure this correlation, we extracted dynamic features from DCE-MRI and performed association analysis between these features and the clinical response to NACT. In this study, 59 patients were screened before NACT, of whom 47 showed complete or partial response and 12 showed no response. We segmented the breast areas depicted on each MR image with a computer-aided diagnosis (CAD) scheme, registered images acquired from the sequential MR scan series, and calculated eighteen features from the DCE-MRI data. We applied an SVM to the 18 features to classify patients as responders or non-responders. Furthermore, 6 of the 18 features were selected to refine the classification using a Genetic Algorithm. The accuracy, sensitivity and specificity are 87%, 95.74% and 50%, respectively, and the area under the receiver operating characteristic (ROC) curve is 0.79±0.04. This study indicates that DCE-MRI features of breast cancer are associated with the response to NACT; therefore, our method could be helpful for evaluating NACT in the treatment of breast cancer.
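The reported sensitivity and specificity correspond to a simple confusion-matrix computation (our own sketch; the counts 45/47 and 6/12 are inferred from the abstract's percentages and cohort sizes):

```python
def metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy from confusion counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# 47 responders and 12 non-responders; 45 and 6 classified correctly.
sens, spec, acc = metrics(tp=45, fn=2, tn=6, fp=6)
print(f"{sens:.2%} {spec:.2%}")      # 95.74% 50.00%
```

With only 12 non-responders, specificity is estimated from very few cases, which is worth keeping in mind when interpreting the 50% figure.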