Cochlear implants (CIs) are surgically implanted neural prosthetic devices used to treat severe-to-profound hearing loss. Our group has developed Image-Guided Cochlear Implant Programming (IGCIP) techniques to assist audiologists with the configuration of the implanted CI electrodes. CI programming is sensitive to the spatial relationship between the electrodes and the intra-cochlear anatomy (ICA) structures. We have developed algorithms that determine the position of the electrodes relative to the ICA structures using pre- and post-implantation CT image pairs. However, these do not extend to CI recipients for whom pre-implantation CT (Pre-CT) images are not available, because post-implantation CT (Post-CT) images are affected by strong artifacts introduced by the metallic implant. Recently, we proposed an approach that uses conditional generative adversarial networks (cGANs) to synthesize Pre-CT images from Post-CT images. This makes it possible to use algorithms designed to segment Pre-CT images even when such images are not available. We have shown that it substantially and significantly improves on the results obtained with our previously published methods that segment Post-CT images directly. Here we evaluate the effect of this new approach on the final output of our IGCIP techniques, which is the configuration of the CI electrodes, by comparing configurations of the CI electrodes obtained using the real and the synthetic Pre-CT images. In 22 of 87 cases the synthetic images lead to the same results as the real images. Because more than one configuration may lead to equivalent neural stimulation patterns, visual assessment of the solutions is required to compare those that differ. This study is ongoing.
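The cGAN-based synthesis described above is typically trained with a combined adversarial and reconstruction objective, in the style of pix2pix. A minimal numpy sketch of such an objective is shown below; the function names, the non-saturating adversarial term, and the weight `lam` are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

# Hypothetical sketch of a pix2pix-style objective for Pre-CT synthesis:
# the generator receives an adversarial term plus an L1 term that keeps
# the synthetic Pre-CT close to the real Pre-CT. All names are illustrative.

def generator_loss(d_fake, fake_pre_ct, real_pre_ct, lam=100.0):
    """Non-saturating adversarial loss + weighted L1 reconstruction loss."""
    eps = 1e-8
    adv = -np.mean(np.log(d_fake + eps))              # push D(fake) toward 1
    l1 = np.mean(np.abs(fake_pre_ct - real_pre_ct))   # stay close to ground truth
    return adv + lam * l1

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy on real vs. synthesized (Post-CT, Pre-CT) pairs."""
    eps = 1e-8
    return -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))
```

In practice both networks would be convolutional and trained alternately; the sketch only makes the loss structure concrete.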
Cochlear implants (CIs) are neuroprosthetic devices that can improve hearing in patients with severe-to-profound hearing loss. Postoperatively, a CI device needs to be programmed by an audiologist to determine parameter settings that lead to the best outcomes. Recently, our group has developed an image-guided cochlear implant programming (IGCIP) system to simplify the traditionally tedious post-programming procedure and improve hearing outcomes. IGCIP requires image processing techniques to analyze the location of inserted electrode arrays (EAs) with respect to the intra-cochlear anatomy (ICA), and robust and accurate segmentation methods for the ICA are a critical step in the process. We have proposed an active shape model (ASM)-based method and a deep learning (DL)-based method for this task, and we have observed that DL methods tend to be more accurate than ASM methods, while ASM methods tend to be more robust. In this work, we propose a U-Net-like architecture that incorporates the ASM segmentation into the network so that it can refine the provided ASM segmentation based on the CT intensity image. Results we have obtained show that the proposed method can achieve the same segmentation accuracy as the DL-based method and the same robustness as the ASM-based method.
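One simple way to let a U-Net-like network refine an ASM result is to feed the ASM segmentation to the network as an extra input channel alongside the CT image. The sketch below shows only this input assembly step; the function name, normalization, and channel layout are assumptions for illustration, not the authors' actual architecture.

```python
import numpy as np

# Illustrative sketch (not the authors' actual network): stack the CT volume
# and its ASM segmentation as two input channels, so the network can refine
# the shape-model result using local intensity information.

def assemble_network_input(ct, asm_seg):
    """Stack a CT volume and its ASM segmentation into a (2, D, H, W) tensor."""
    if ct.shape != asm_seg.shape:
        raise ValueError("CT and ASM segmentation must be on the same grid")
    ct = (ct - ct.mean()) / (ct.std() + 1e-8)       # simple intensity normalization
    return np.stack([ct, asm_seg.astype(np.float32)], axis=0)

x = assemble_network_input(np.random.rand(8, 16, 16), np.zeros((8, 16, 16)))
# x has shape (2, 8, 16, 16): channel 0 is the CT image, channel 1 the ASM mask
```

A design like this anchors the network's output to the ASM shape prior while still letting the convolutional layers correct it where the intensities disagree.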
Cochlear implants (CIs) use electrode arrays that are surgically inserted into the cochlea to treat patients with hearing loss. For CI recipients, sound bypasses the natural transduction mechanism and directly stimulates the neural regions, thus creating a sense of hearing. Post-operatively, CIs need to be programmed. Traditionally, this is done by an audiologist who is blind to the positions of the electrodes relative to the cochlea and relies only on the subjective response of the patient. Multiple programming sessions are usually needed, which can take a frustratingly long time. We have developed an image-guided cochlear implant programming (IGCIP) system to facilitate the process. In IGCIP, we segment the intra-cochlear anatomy and localize the electrode arrays in the patient’s head CT image. By utilizing their spatial relationship, we can suggest programming settings that can significantly improve hearing outcomes. To segment the intra-cochlear anatomy, we use an active shape model (ASM)-based method. Though it produces satisfactory results in most cases, sub-optimal segmentations still occur. As an alternative, herein we explore using a deep learning method to perform the segmentation task. Large image sets with accurate ground truth (in our case manual delineations) are typically needed to train a deep learning model for segmentation, but such a dataset does not exist for our application. To tackle this problem, we use segmentations generated by the ASM-based method to pre-train the model and fine-tune it on a small image set for which accurate manual delineations are available. Using this method, we achieve better results than the ASM-based method.
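The pre-train/fine-tune strategy above can be written as a simple two-stage schedule: many updates on the large set of ASM-generated labels, then fewer updates, at a reduced learning rate, on the small manually delineated set. The skeleton below is a hedged sketch; the `step` callback, epoch counts, and learning-rate scale are illustrative assumptions.

```python
# Hypothetical two-stage training schedule (not the authors' exact recipe):
# stage 1 pre-trains on ASM-generated segmentations, stage 2 fine-tunes on
# the small manually delineated set with a smaller learning rate.

def two_stage_training(step, asm_labeled, manually_labeled,
                       pre_epochs=2, ft_epochs=1, base_lr=1e-3, ft_lr_scale=0.1):
    """`step(image, label, lr)` performs one model update; datasets are
    sequences of (image, label) pairs."""
    for _ in range(pre_epochs):            # stage 1: ASM labels as weak ground truth
        for img, lab in asm_labeled:
            step(img, lab, base_lr)
    for _ in range(ft_epochs):             # stage 2: fine-tune on manual delineations
        for img, lab in manually_labeled:
            step(img, lab, base_lr * ft_lr_scale)
```

The reduced fine-tuning rate is a common choice to keep the model close to the pre-trained solution while adapting it to the accurate labels.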
Chronic graft-versus-host disease (cGVHD) is a frequent and potentially life-threatening complication of allogeneic hematopoietic stem cell transplantation (HCT) and commonly affects the skin, resulting in distressing patient morbidity. The percentage of involved body surface area (BSA) is commonly used for diagnosing and scoring the severity of cGVHD. However, the segmentation of the involved BSA from patient whole-body serial photography is challenging because (1) it is difficult to design traditional segmentation methods that rely on hand-crafted features, as the appearance of cGVHD lesions can be drastically different from patient to patient; and (2) to the best of our knowledge, there is currently no publicly available labelled image set of cGVHD skin for training deep networks to segment the involved BSA. In this preliminary study we create a small labelled image set of skin cGVHD, and we explore the possibility of using a fully convolutional neural network (FCN) to segment the skin lesions in the images. We use a commercial stereoscopic Vectra H1 camera (Canfield Scientific) to acquire ~400 3D photographs of 17 cGVHD patients aged between 22 and 72. A rotational data augmentation process is then applied, which rotates the 3D photos through 10 predefined angles, producing one 2D projection image at each position. This results in ~4000 2D images that constitute our cGVHD image set. An FCN model is trained and tested using our images. We show that our method achieves encouraging results for segmenting cGVHD skin lesions in photographic images.
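The rotational augmentation step, rotating each 3D photograph through a fixed set of angles and taking a 2D projection at each position, can be sketched for a 3D point cloud as below. The choice of rotation axis and the orthographic projection are illustrative assumptions; the actual pipeline renders the textured 3D surface.

```python
import numpy as np

# Illustrative sketch of rotational data augmentation: rotate a 3D point
# cloud about the vertical (y) axis at n_views evenly spaced angles and
# orthographically project each view onto the x-y plane.

def rotate_and_project(points, n_views=10):
    """points: (N, 3) array. Returns a list of n_views (N, 2) projections."""
    views = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, 0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s, 0.0, c]])       # rotation about the y axis
        rotated = points @ R.T
        views.append(rotated[:, :2])       # drop the depth axis -> 2D projection
    return views
```

With 10 views per photograph, ~400 3D photos yield the ~4000 2D images mentioned above.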
KEYWORDS: Image registration, Head, Magnetic resonance imaging, Brain, 3D image processing, Medical imaging, 3D modeling, Data modeling, Image segmentation, Neuroimaging
Medical image registration establishes a correspondence between images of biological structures, and it is at the core of many applications. Commonly used deformable image registration methods depend on a good preregistration initialization. We develop a learning-based method to automatically find a set of robust landmarks in three-dimensional MR image volumes of the head. These landmarks are then used to compute a thin plate spline-based initialization transformation. The process involves two steps: (1) identifying a set of landmarks that can be reliably localized in the images and (2) selecting among them the subset that leads to a good initial transformation. To validate our method, we use it to initialize five well-established deformable registration algorithms that are subsequently used to register an atlas to MR images of the head. We compare our proposed initialization method with a standard approach that involves estimating an affine transformation with an intensity-based approach. We show that for all five registration algorithms the final registration results are statistically better when they are initialized with the method that we propose than when a standard approach is used. The technique that we propose is generic and could be used to initialize nonrigid registration algorithms for other applications.
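A landmark-based thin-plate spline (TPS) transformation of the kind used for initialization can be fitted in closed form from corresponding landmark pairs. The sketch below is a minimal numpy implementation for 3D points using the φ(r) = r kernel (the standard choice in 3D); the function names are assumptions, and the smoothing variant mentioned in the abstract would add a regularization term to the kernel matrix.

```python
import numpy as np

# Minimal 3D thin-plate-spline fit (interpolating, no smoothing term):
# solve the standard bordered linear system so that f(src_i) = dst_i.

def fit_tps_3d(src, dst):
    """src, dst: (n, 3) corresponding landmarks. Returns (weights, affine)."""
    n = src.shape[0]
    K = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)  # phi(r) = r
    P = np.hstack([np.ones((n, 1)), src])                           # affine basis
    A = np.zeros((n + 4, n + 4))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 4, 3))
    b[:n] = dst
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]          # non-affine weights, affine coefficients

def apply_tps_3d(pts, src, w, a):
    """Evaluate the fitted TPS at arbitrary (m, 3) points."""
    U = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1)
    return U @ w + np.hstack([np.ones((len(pts), 1)), pts]) @ a
```

Once fitted on the automatically localized landmarks, applying the transform to the atlas grid gives the preregistration initialization that the deformable algorithms then refine.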
Proc. SPIE. 10133, Medical Imaging 2017: Image Processing
KEYWORDS: Data modeling, Magnetic resonance imaging, Image segmentation, 3D modeling, Image registration, Medical imaging, Head, Machine learning, Neuroimaging, 3D image processing, Brain
Medical image registration establishes a correspondence between images of biological structures, and it is at the core of many applications. Commonly used deformable image registration methods depend on a good preregistration initialization. The initialization can be performed by localizing homologous landmarks and calculating a point-based transformation between the images. The selection of landmarks is, however, important. In this work, we present a learning-based method to automatically find a set of robust landmarks in 3D MR image volumes of the head to initialize non-rigid transformations. To validate our method, these selected landmarks are localized in unknown image volumes and used to compute a smoothing thin-plate spline transformation that registers the atlas to the volumes. The transformed atlas image is then used as the preregistration initialization of an intensity-based non-rigid registration algorithm. We show that the registration accuracy of this algorithm is statistically significantly improved when using the presented registration initialization rather than a standard intensity-based affine registration.