Magnetic resonance imaging (MRI) for mouse phenotype studies is an important tool for understanding human
diseases. In this paper, we present a fully automatic pipeline for morphometric mouse brain
analysis. The method is based on atlas-based tissue and regional segmentation, which was originally developed
for the human brain. To evaluate our method, we conduct a qualitative and quantitative validation study and
compare b-spline and fluid registration methods as components of the pipeline. The validation study
includes visual inspection, shape and volumetric measurements, and an assessment of the stability of the registration methods against
various parameter settings in the processing pipeline. The results show that both the fluid and b-spline registration
methods work well in the murine setting, but the fluid registration is more stable. Additionally, we evaluated our
segmentation methods by comparing volume differences between Fmr1 FXS mice in an FVB background and C57BL/6J mice.
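Volumetric and overlap comparisons of this kind are commonly quantified with measures such as the Dice coefficient. A minimal sketch of that computation on two binary label masks (an illustration of the standard measure, not the paper's actual implementation):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Dice overlap between two binary label masks of equal size:
// 2*|A intersect B| / (|A| + |B|). Two empty masks overlap perfectly.
double diceOverlap(const std::vector<bool>& a, const std::vector<bool>& b) {
    std::size_t inter = 0, sumA = 0, sumB = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        sumA += a[i];
        sumB += b[i];
        inter += (a[i] && b[i]);
    }
    if (sumA + sumB == 0) return 1.0;
    return 2.0 * static_cast<double>(inter) / static_cast<double>(sumA + sumB);
}
```

A score of 1.0 indicates identical segmentations; values near 0 indicate little spatial agreement, which makes the measure useful for comparing registration stability across parameter settings.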
In this paper, we present MIDAS, a web-based digital archiving system that processes large collections of data. Medical imaging research often involves interdisciplinary teams, each performing a separate task, from acquiring datasets to analyzing the processing results. Moreover, the number and size of the datasets continue to increase every year due to recent advancements in acquisition technology. As a result, many research laboratories centralize their data and rely on distributed computing power. We created a web-based digital archiving repository based on open standards. The MIDAS repository is specifically tuned for medical and scientific datasets and provides a flexible data management facility, a search engine, and an online image viewer. MIDAS enables users to run a set of extensible image processing algorithms from the web on selected datasets and to add new algorithms to the MIDAS system, facilitating the dissemination of users' work to different research partners. The MIDAS system is currently running in several research laboratories and has demonstrated its ability to streamline the full image processing workflow from data acquisition to image analysis and reports.
In this paper, we present an open-source framework for testing tracking devices in surgical
navigation applications. At the core of image-guided intervention systems is the tracking interface
that handles communication with the tracking device and gathers tracking information. Given that
the correctness of tracking information is critical for protecting patient safety and for ensuring the
successful execution of an intervention, the tracking software component needs to be thoroughly
tested on a regular basis. Furthermore, with the widespread use of extreme programming methodology,
which emphasizes continuous and incremental testing of application components, testing design
becomes critical. While it is easy to automate most of the testing process, it is often more difficult to
test components that require manual intervention, such as a tracking device.
Our framework consists of a robotic arm built from a set of Lego Mindstorms and an open-source
toolkit written in C++ to control the robot movements and assess the accuracy of the tracking
devices. The application program interface (API) is cross-platform and runs on Windows, Linux, and MacOS.
We applied this framework to the continuous testing of the Image-Guided Surgery Toolkit
(IGSTK), an open-source toolkit for image-guided surgery, and show that regression testing of
tracking devices can be performed at low cost and can significantly improve the quality of the software.
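The accuracy assessment described above amounts to comparing positions reported by the tracking device against the positions the robot was commanded to move to. A hedged sketch of that core error computation (the names and data here are hypothetical, not the framework's actual API; the real toolkit gathers reported positions through its tracker interface):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Point3 { double x, y, z; };

// Root-mean-square distance between the commanded robot positions and
// the positions reported by the tracking device. A regression test can
// assert that this error stays below a fixed tolerance across builds.
double rmsTrackingError(const std::vector<Point3>& commanded,
                        const std::vector<Point3>& reported) {
    double sumSq = 0.0;
    for (std::size_t i = 0; i < commanded.size(); ++i) {
        const double dx = commanded[i].x - reported[i].x;
        const double dy = commanded[i].y - reported[i].y;
        const double dz = commanded[i].z - reported[i].z;
        sumSq += dx * dx + dy * dy + dz * dz;
    }
    return std::sqrt(sumSq / static_cast<double>(commanded.size()));
}
```

Running such a check automatically on every build is what turns a manual device test into the low-cost regression test the paper describes.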
The Image-Guided Surgery Toolkit (IGSTK) is an open source C++ software library that provides the basic components
needed to develop image-guided surgery applications. The focus of the toolkit is on robustness, achieved through a state machine
architecture. This paper presents an overview of the project based on a recent book, which can be downloaded from
igstk.org. The paper includes an introduction to open source projects, a discussion of our software development process
and the best practices that were developed, and an overview of requirements. The paper also presents the architecture
framework and main components. This presentation is followed by a discussion of the state machine model that was
incorporated and the associated rationale. The paper concludes with an example application.
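The state machine model mentioned above constrains each component to a fixed set of states and transitions, so unexpected inputs cannot drive a component into an undefined configuration. A minimal, self-contained illustration of the pattern (a toy sketch, not IGSTK's actual state machine API):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// A toy state machine for a tracker-like component. Inputs that have no
// transition defined for the current state are ignored rather than acted
// upon -- the safety property this architectural style is built around.
class TrackerStateMachine {
public:
    TrackerStateMachine() : state_("Idle") {
        addTransition("Idle",        "InitializeRequest", "Initialized");
        addTransition("Initialized", "StartRequest",      "Tracking");
        addTransition("Tracking",    "StopRequest",       "Initialized");
    }

    void processInput(const std::string& input) {
        auto it = table_.find({state_, input});
        if (it != table_.end()) {
            state_ = it->second;   // valid transition: advance
        }                          // invalid input: safely ignored
    }

    const std::string& state() const { return state_; }

private:
    void addTransition(std::string from, std::string input, std::string to) {
        table_[{std::move(from), std::move(input)}] = std::move(to);
    }

    std::string state_;
    std::map<std::pair<std::string, std::string>, std::string> table_;
};
```

Because every behavior is an explicit transition, the component's response to any input in any state is fully specified, which is what makes this style attractive for safety-critical surgical applications.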
Visualization and image processing of medical datasets have become essential tasks for clinical diagnosis support as well as for treatment planning. In order to enable a physician to use and evaluate algorithms within a clinical setting, easily applicable software prototypes with a dedicated user interface are essential. However, substantial programming knowledge is still required today when using powerful open source libraries such as the Visualization Toolkit (VTK) or the Insight Toolkit (ITK). Moreover, these toolkits provide only limited graphical user interface functionality. In this paper, we present the visual programming and rapid prototyping platform MeVisLab, which provides flexible and simple handling of visualization and image processing algorithms of VTK/ITK, Open Inventor, and the MeVis Image Library by modular visual programming. No programming knowledge is required to set up image processing and visualization pipelines. Complete applications including user interfaces can be easily built within a general framework. In addition to the VTK/ITK features, MeVisLab provides a full integration of the Open Inventor library and offers a state-of-the-art integrated volume renderer. The integration of VTK/ITK algorithms is performed automatically: an XML structure is created from the toolkits' source code, followed by an automatic module generation from this XML description. Thus, MeVisLab offers a one-stop solution integrating VTK/ITK as modules and is suited for rapid prototyping as well as for teaching medical visualization and image analysis. The VTK/ITK integration is available as a package of the free version of MeVisLab.
Open source software has tremendous potential for improving the productivity of research labs and enabling the development of new medical applications. The Image-Guided Surgery Toolkit (IGSTK) is an open source software toolkit based on ITK, VTK, and FLTK, and uses the cross-platform tools CMake and DART to support common operating systems such as Linux, Windows, and MacOS. IGSTK integrates the basic components needed in surgical guidance applications and provides a common platform for fast prototyping and development of robust image-guided applications. This paper gives an overview of the IGSTK framework and the current status of development, followed by an example needle biopsy application to demonstrate how to develop an image-guided application using this toolkit.
SC538: Medical Image Analysis with ITK and Related Open-Source Software
This course introduces attendees to select open-source efforts in the field of medical image analysis. Opportunities for users and developers are presented. The course particularly focuses on the open-source Insight Toolkit (ITK) for medical image segmentation and registration. The course describes the procedure for downloading and installing the toolkit and covers the use of its data representation and filtering classes. Attendees are shown how ITK can be used in their research, rapid prototyping, and application development.
Stephen Aylward, Kitware, Inc. – Chapel Hill Office
• The Insight Software Consortium: contributing and using open-source
• The architecture and installation of the Insight Toolkit
Josh Cates, Univ. of Utah
• Segmentation methods of the Insight Toolkit
Lydia Ng, Allen Brain Institute
• Registration methods of the Insight Toolkit
Julien Jomier, Kitware, Inc. – Chapel Hill Office
• Image IO using the Insight Toolkit
• The Image-Guided Surgery Toolkit (http://www.igstk.org)
Stephen Aylward, Kitware, Inc. – Chapel Hill Office
• Applications of the Insight Toolkit