Tomographic acquisition uses projection angles evenly distributed around 2π. The Mojette transform and the discrete
Finite Radon Transform (FRT) both use discrete geometry to overcome the ill-posedness of the inverse Radon
transform. This paper focuses on the transformation of acquired tomographic projections into suitable discrete projection
forms. Discrete Mojette and FRT algorithms can then be used for image reconstruction. The impact of physical
acquisition parameters (which produce uncertainties in the detected projection data) is also analysed to determine the
possible useful interpolations according to the chosen acquisition angles and the null space of the transform. The mean
square error (MSE) reconstruction results obtained for data from analytical phantoms consistently show the superiority
of these discrete approaches when compared to the classical "continuous space" FBP reconstruction.
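As a concrete illustration of the discrete geometry involved, the Dirac Mojette projection of an image along a coprime direction (p, q) can be sketched as below. The bin convention b = -q·k + p·l is one common choice from the Mojette literature, not necessarily the exact convention of this paper:

```python
import math
import numpy as np

def mojette_projection(image, p, q):
    """Dirac Mojette projection of a 2-D image along the direction (p, q).

    Each pixel (k, l) contributes its whole value to the single bin
    b = -q*k + p*l, so the projection is an exact linear sum over
    discrete lines; (p, q) must be coprime.
    """
    assert math.gcd(p, q) == 1, "(p, q) must be coprime"
    rows, cols = image.shape
    ks, ls = np.meshgrid(np.arange(cols), np.arange(rows))
    b = -q * ks + p * ls
    b = b - b.min()                      # shift bin indices to start at 0
    proj = np.zeros(b.max() + 1)
    np.add.at(proj, b, image)            # accumulate pixel values per bin
    return proj
```

Note that the number of bins, (cols-1)|q| + (rows-1)|p| + 1, depends on the direction, which is precisely why the projection count and size vary from angle to angle in the Mojette geometry.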
The goal of this paper is to characterize the noise properties of a spline Filtered BackProjection (FBP) reconstruction scheme. More specifically, the paper focuses on the angular and radial sampling of projection data and on the assumed local properties of the function to be reconstructed. This new method is visually and quantitatively compared to the standard sampling used in the FBP scheme. In the second section, we recall the sampling geometry adapted to the discrete geometry of the reconstructed image. Properties of the discrete zero-order spline ramp filter, for classic angles and for discrete angles generated from the Farey series, are used to derive the equivalent representations for first-order spline filters. Digital phantoms are used to assess the results and the correctness of the linearity and shift-invariance assumptions for the discrete reconstructions. The filter gain has been studied in the Mojette case, since the number of bins can be very different from one projection angle to another. In the third section, we describe the spline filter implementation and the continuous/discrete correspondence. In section 4, Poisson noise is added to the noise-free projections. The reconstructions obtained with the classic angle distribution and with the Mojette acquisition geometry are then compared. Even though the number of bins per projection is fixed for classic FBP while it varies for the Mojette geometry (leading to very different noise behavior per bin), the results of both algorithms are very close. The discussion allows for a general comparison between classic FBP reconstruction and Mojette FBP. The very encouraging results obtained in the Mojette case argue for the development of future acquisition devices modeled with the Mojette geometry.
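For readers unfamiliar with the ramp filter at the heart of any FBP scheme, a minimal frequency-domain implementation might look like the sketch below. The zero-padding length and the ideal |f| response are illustrative choices, not the spline filters studied in the paper:

```python
import numpy as np

def ramp_filter(projection):
    """Filter one projection with the ideal ramp |f| via the FFT.

    Zero-padding to at least twice the projection length reduces
    interperiod (circular convolution) artefacts.
    """
    n = len(projection)
    pad = 2 ** int(np.ceil(np.log2(2 * n)))
    filt = np.abs(np.fft.fftfreq(pad))           # ideal ramp response |f|
    filtered = np.fft.ifft(np.fft.fft(projection, pad) * filt).real
    return filtered[:n]
```

A spline FBP scheme replaces this ideal response with one matched to the spline degree of the pixel model; the FFT plumbing stays the same.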
Image blurring as a result of patient motion, including organ movement, can cause a loss of sensitivity in the detection of disease. The use of gated protocols relying on external signals to synchronize the acquisition with the motion of the organ of interest may provide a solution. Although such a solution has been implemented in cardiac imaging, the implementation of respiratory gating is more challenging considering the irregular nature of respiratory motion. In this work we investigated the use of two different physiological signals, namely respiratory flow and impedance plethysmography, for the synchronization of pulmonary scintigraphy with respiratory motion. An acquisition and post-processing signal interface was developed using LabVIEW in order to allow detection and comparison of the two signals for the same patient. A methodology was also developed for the rejection of irregular respiratory cycles based on mean amplitude, overall cycle duration and the cycle inspiration-to-expiration duration ratio (I/E). Rejection criteria based on tidal volume were also examined using the respiratory flow signal. Our data demonstrate that the two respiratory signals investigated are equivalent, with only a phase shift difference present. In the case of respiratory flow, irregular cycles were rejected by setting acceptance limits at 40% and 30% around the mean for the I/E ratio and for the amplitude or duration of the cycle, respectively. In the case of impedance plethysmography, a limit of 50% for all rejection criteria was found to be optimal. Finally, a dynamic acquisition protocol was developed and tested, providing synchronized scintigraphic images using both types of recorded respiratory signals.
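The cycle rejection logic described above can be sketched as follows. The cycle representation (a dict of amplitude, duration and I/E ratio) and the helper names are assumptions made for illustration, with the 40%/30% flow-signal tolerances from the text as defaults:

```python
def accept_cycle(cycle, mean_amp, mean_dur, mean_ie,
                 ie_tol=0.40, amp_dur_tol=0.30):
    """Accept a respiratory cycle if its amplitude, duration and I/E ratio
    all fall within tolerance bands around the running means.

    `cycle` is assumed to be a dict with 'amplitude', 'duration' and 'ie'
    keys; tolerances default to the flow-signal limits (40% for I/E,
    30% for amplitude and duration).
    """
    def within(value, mean, tol):
        return abs(value - mean) <= tol * mean

    return (within(cycle['ie'], mean_ie, ie_tol)
            and within(cycle['amplitude'], mean_amp, amp_dur_tol)
            and within(cycle['duration'], mean_dur, amp_dur_tol))
```

For impedance plethysmography, the text suggests a single 50% tolerance for all three criteria, i.e. `ie_tol=amp_dur_tol=0.50`.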
GATE (Geant4 Application for Tomographic Emission) was used to perform a Monte Carlo simulation of a fully 3D clinical PET scanner. The Philips Allegro PET system was simulated in order (a) to allow a detailed study of the parameters affecting the system's performance under various imaging conditions, and (b) to further validate the use of GATE for the simulation of clinical PET systems. A model of the detection system and its geometry was developed. The simulation of count-rate-related performance characteristics of the scanner was facilitated through the development of a dead time model describing data flow and data loss at the level of detected single events and coincidences. The developed system design and associated dead time model were validated by comparing simulated with experimental measurements obtained with the Allegro PET system. These measurements included the use of point as well as distributed sources, allowing us to determine spatial resolution, scatter fraction, sensitivity, and count rate performance based on the NEMA NU2-1994 protocols. Using the NEMA phantom, simulated single and coincidence count rates were within 5% of the corresponding measured rates throughout a wide range of activity concentrations. Scatter fraction and random coincidences were also measured and combined with total recorded coincidence rates in order to validate simulated NEC rate curves. Differences between simulated and measured NEC curves were found to be within 7% and can be attributed to approximations in the simulation, which includes no photomultiplier tube response, no gantry surroundings, and no pulse pile-up effects in the modelling of the electronics. These results support an accurate modelling of the Philips Allegro PET system using GATE in combination with an appropriate dead time model.
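The abstract does not give the analytical form of the dead time model; the two textbook idealizations that such models are usually built from can be sketched as:

```python
import math

def nonparalyzable(n, tau):
    """Observed count rate for a non-paralyzable dead time tau (s)
    at true event rate n (counts/s): events arriving during the dead
    interval are simply lost."""
    return n / (1.0 + n * tau)

def paralyzable(n, tau):
    """Observed count rate for a paralyzable dead time tau (s): each
    arriving event restarts the dead interval, so the observed rate
    rolls over at high activity."""
    return n * math.exp(-n * tau)
```

A realistic scanner model typically cascades components of both kinds at the singles and coincidence levels; the cascade structure here is not specified by the source.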
Presently, most Nuclear Medicine physicians are well trained to report PET FDG studies. However, only a very limited number of them are able to diagnose difficult, unusual cases. For this reason, we developed an electronic lightbox called POSITOSCOPE onto which PET studies can be downloaded, displayed, reported and sent to remote sites for expert advice. To promote its use, we emphasized user-friendliness, which is a key point of the prototype: the POSITOSCOPE looks like a classical lightbox equipped with a small touchscreen and a digital sound recorder. It is connected to local PET scanners and to long-distance high-speed networks. Difficult studies can thus be sent to remote experts. The request consists of the whole image data set and a soundtrack explaining its nature. It may be sent to one or more experts. At this stage, only the local physician is responsible for reporting even though (s)he makes use of remote expertise. The prototype is being tested in two hospitals, and the clinical evaluation, involving four University hospitals and one private-practice Nuclear Medicine center, started last September. Our goal is not to have PET studies acquired in a local center and reported in a remote reference center, but to provide remote expertise when necessary, to improve the daily reporting of PET studies and to improve the expertise of local Nuclear Medicine physicians. The concept may easily be extended to unusual single photon studies for which local expertise is not always available, and to multimodality studies.
This paper describes a new kind of use for image watermarking. A stream watermarking method is presented, in which a key allows authorized users to recover the original image. Our algorithm exploits the redundancy properties of the Mojette Transform. This transform is based on a specific discrete version of the Radon transform with an exact inversion. Anyone who knows the watermark key will be able to decode the original image, whereas only a marked image can be decoded without this key. The presented algorithm is suitable for different applications where fragile and reversible watermarks are mandatory, such as medical image watermarking, and it could also be used for a data access scheme (cryptography). A multiscale watermark variation is also presented, which can be used when different user profile levels are encountered.
This paper reports the validation work concerning the Monte Carlo simulator we developed for Nuclear Medicine imaging. First we compared the simulated and acquired data of a Data Spectrum Thorax phantom. The two data sets agree fairly well but significant differences can be found at the pixel level. They are probably due to slightly different experimental conditions which are very difficult to control. Second we compared the simulated data obtained using three different interaction models. No difference could be found at the pixel level but image-wide energy spectra slightly differ.
We address the issue of using deformable models to reconstruct the shape of unknown objects in the context of 3D tomography. We focus on the reconstruction of piecewise-uniform radioactive distributions such as in blood pool or lung imaging. We represent the unknown distribution by a set of closed surfaces defining uniformly emitting regions in space. The methods implemented so far tend to directly deform the surfaces. Rather than deforming the surface models themselves, we explore the deformation of the space in which the surfaces are contained to match a set of scintigraphic measurements. We focus on the use of free-form deformations to describe the continuous transformation of space. We illustrate this approach by reconstructing simulated scintigraphic data of the lungs.
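A minimal free-form deformation in 2-D, in the classic Sederberg-Parry style with a Bernstein-weighted control lattice, can be sketched as below; the lattice layout and function names are illustrative, not the paper's implementation:

```python
from math import comb

def bernstein(i, n, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def ffd(point, lattice_disp):
    """Displace a 2-D point in [0,1]^2 by a free-form deformation driven
    by an (n+1) x (m+1) lattice of control-point displacements (du, dv).

    The deformation of space is the Bernstein-weighted sum of the
    control displacements; surfaces embedded in the space move with it.
    """
    u, v = point
    n = len(lattice_disp) - 1
    m = len(lattice_disp[0]) - 1
    dx = dy = 0.0
    for i in range(n + 1):
        bu = bernstein(i, n, u)
        for j in range(m + 1):
            w = bu * bernstein(j, m, v)
            dx += w * lattice_disp[i][j][0]
            dy += w * lattice_disp[i][j][1]
    return (u + dx, v + dy)
```

Because the Bernstein weights sum to one, a uniform lattice displacement translates every point rigidly; reconstruction then amounts to optimizing the lattice so deformed surfaces match the measurements.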
In this paper, we present a multimedia, ATM-network-based approach to generating and transmitting imaging procedure multimedia (MmR) reports in emergency situations. This approach was applied to V/P lung scintigrams in our institution. The architecture of our multimedia reporting system consists of a gamma-camera providing V/P lung scintigrams as Interfile-formatted data, a workstation in which MmRs can be generated and from which they can be accessed, a set of low-cost workstations where MmRs can be displayed, and an ATM network running throughout our hospital and connecting the above stations. The main features of the MmR are detailed in the paper and are assessed from a physician's point of view.
Quantitation of nuclear medicine data is a major goal in medical imaging. It implies that photon attenuation, scatter and depth-dependent spatial resolution be corrected for. Realistic, anthropomorphic numerical phantoms are needed to understand how these phenomena degrade nuclear medicine images, and to validate correction methods. We developed a Monte Carlo simulator which simulates photon transport in an anthropomorphic phantom. The main feature of our phantom consists in estimating the attenuation coefficient for the three main types of physical interaction from CT data and the tissue nature in each voxel. The simulated data obtained with this approach show how accurately, in terms of geometry and attenuation coefficients, a phantom must be defined to properly simulate scintigraphic acquisitions. It highlights the importance of bone tissues in the formation of scatter as well as the influence of the patient's morphology on attenuation phenomena.
This paper addresses the issue of writing image processing algorithms and programs that are independent of the dimension of the dataset. Such an approach aims at writing libraries and toolboxes that are smaller as well as easier to debug. The data to be processed are stored in a multi-dimensional, self-documented format describing not only the content of the image, but also its context and the conditions of its acquisition. The work presented in this paper is based on the image kernel of the MIMOSA standard. We propose a recursive programming scheme that allows one to write general algorithms for such multi-dimensional images. Oddly enough, the design of such algorithms is easy and intuitive, thanks to the recursion. Moreover, the computational cost remains comparable to that of dimension-specific algorithms. The cost of the recursion is indeed negligible compared to the cost of non-trivial processing. We present an implementation of a reduced version of the MIMOSA image kernel and show how elementary processing operations such as convolution and filtering can be easily implemented. Finally, we propose an algorithm for the nD fast Fourier transform operating on real data.
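The recursive scheme can be illustrated on nested lists: a 1-D operation is mapped over the innermost axis, and the recursion handles any outer dimensionality. This is a deliberately simplified sketch; the MIMOSA kernel itself is richer:

```python
def map_lastaxis(data, op):
    """Apply a 1-D operation `op` along the innermost axis of a
    nested-list image of any dimension, recursing through outer axes."""
    if not isinstance(data[0], list):      # reached a 1-D signal
        return op(data)
    return [map_lastaxis(sub, op) for sub in data]

def smooth3(signal):
    """Simple 1-D moving average of width 3 (edge samples kept as-is)."""
    out = list(signal)
    for i in range(1, len(signal) - 1):
        out[i] = (signal[i - 1] + signal[i] + signal[i + 1]) / 3.0
    return out
```

The same `map_lastaxis` call works unchanged on 1-D signals, 2-D images or 3-D volumes; processing all axes (as separable convolution requires) additionally needs an axis permutation between passes, which is omitted here.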
We propose a method for the segmentation of cerebral sulci, representing them by surfaces. This method is based on the computation of the differential characteristics of MRI data. The computation of curvature information, using the Lvv operator, allows one to differentiate sulcal and gyral regions, resulting in a global detection of the cortical scheme. The analytical description of a particular sulcus is obtained by initializing an active model on its trace upon the brain surface. The result is a surface representing the buried part of the sulcus. The 'snake-spline' model allows one to define an algorithm which is simpler and more robust than the classical snake. This method of segmentation yields good results for the 3D segmentation and visualization of cortical sulci.
The conventional approach to tomographic reconstruction in the presence of noise consists in finding some compromise between the likelihood of the noisy projections and the expected smoothness of the solution, given the ill-posed nature of the reconstruction problem. Modelling noise properties is usually performed in iterative reconstruction schemes. In this paper, an analytical approach to the reconstruction from noisy projections is proposed. A statistical model is used to separate the relevant part of the projections from noise before the reconstruction. As reconstruction of sampled noise-free projections is still an ill-posed problem, a continuity assumption regarding the object to be reconstructed is also formulated. This assumption allows us to derive a spline filtered backprojection in order to invert the Radon operator. Preliminary results show the interest of combining continuity assumptions with noise modelling into an analytical reconstruction procedure.
The purpose of this paper is i) to explain the need for a generic image model in medical imaging, ii) to describe under which conditions such a model can be built, and iii) to present the image model we have been developing during the last two years in the framework of the EurIPACS / Mimosa project of the AIM programme of the European Communities. Several organisations are in the process of defining communication standards (in particular DICOM) for medical imaging, as successfully demonstrated during the last RSNA meeting. Such a standard is an absolute necessity for implementing PACS, since it provides a framework to exchange image information produced by multi-vendor acquisition devices. Unfortunately, such a standard is not sufficient to build a clinically useful PACS. One must also describe how data are organised in medical imaging, to allow end users (clinicians) to understand image information. This is the aim of the EurIPACS / Mimosa project. The basic assumption of this work is that there is a common denominator in the way clinicians "understand" medical images even though local particularisms may hide it. Consequently, our model aims at describing medical images in a way general enough to allow for a generic description, while providing facilities to describe local characteristics. Our approach makes use of fairly standard modelling techniques: data modelling using NIAM, functional modelling and organisational modelling. It turns out that local particularisms can be described at the dynamic level or even at the implementation level, which is not considered in the formal model, such that a generic model can be defined. Moreover, communication standards such as DICOM can be used within our model to describe how image data are actually organised as files to be transferred between PACS nodes. In this regard there is no overlap between the Mimosa model and communication standards.
We consider three levels for the data model: an examination context, which describes high-level objects such as patient folder, request and report; a PACS model, which describes the resources (network, acquisition devices, archives, image workstations) involved in image manipulation; and an image kernel, which describes images. The examination context essentially contains attributes allowing the HIS/RIS to monitor and control medical image information. They constitute most of the information exchanged between PACS and HIS/RIS. The PACS model addresses issues such as network performance and local storage capacity, to provide image information in the right place at the right time. The image kernel specifies image attributes able to accurately define how images are acquired, processed, interpreted and used during diagnostic and/or therapeutic processes. It is clear that this model must be generic and modality-independent to encompass any and every use of medical images, and precise enough to allow for their efficient use (in particular for multidimensional and multimodality data). Consequently this model may seem complex and significantly differs from commonly used image models. However, it proved able to describe all examples against which it was tested, contrary to other models. Because of its apparent complexity and because of its potential power, we think it is worth devoting a paper to its description. In section 2 we explain why such a model is required. In section 3 we describe the core of the model, the "image object", and its various components: Formal aspects, Version, Representation, Logical Files and Copies. In the same section we present two important related concepts: Image generator and Reference position. In section 4 we show how image objects can be grouped to become meaningful at the examination context level.
Proc. SPIE 2166, Medical Imaging 1994: Image Perception
KEYWORDS: Target detection, Signal to noise ratio, Digital image processing, Sensors, Image processing, Interference (communication), Medical imaging, Digital imaging, Signal processing, Signal detection
Medical image quality can be defined in terms of observer performance in detecting image abnormalities, since diagnosis is essentially based on the visual inspection of medical images. There exists a large body of theoretical and experimental work specifying it in terms of signal-to-noise ratio, area under the ROC curve, and detectability index. However, the comparison between theoretical and experimental figures of merit (FOMs) is made difficult because the FOMs operate on different signals and observers. In this paper we investigate the relationships between such signals, observers, and FOMs; the soundness of the underlying assumptions; and the possibility of optimizing image display. In section 2 we define three signal-observer pairs for which the main theoretical and experimental results are recalled. We also present the results obtained in our lab to show their consistency with results found in the literature. In section 3 we describe an experiment designed to evaluate the relationships between the three types of signal-observer pairs, and to assess the robustness of the model with respect to the assumptions. We also present in this section the results of this experiment. In section 4 these results and the relevance of the FOMs are discussed.
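For the signal-known-exactly case in white Gaussian noise, the ideal-observer detectability index and its link to the area under the ROC curve (under the equal-variance Gaussian model) can be sketched as:

```python
import math

def detectability_index(signal, sigma):
    """Ideal-observer d' for a known signal added to white Gaussian
    noise of standard deviation sigma: d' = ||s|| / sigma."""
    energy = sum(v * v for v in signal)
    return math.sqrt(energy) / sigma

def auc_from_dprime(dprime):
    """Area under the ROC curve for the equal-variance Gaussian model:
    AUC = Phi(d'/sqrt(2)) = 0.5 * (1 + erf(d'/2))."""
    return 0.5 * (1.0 + math.erf(dprime / 2.0))
```

These are the textbook relations the three signal-observer pairs are usually compared against; the paper's experimental FOMs are measured, not computed this way.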
For the last two years, we have been developing a medical image processing system driven by a knowledge-based system, which was partially presented at the last SPIE Medical Imaging conference. In short, it consists of three modules: (1) an expert system (ES) which handles generic knowledge about image processing, image sources and medicine, and specific knowledge for every developed application. Consequently, it knows why, under which circumstances and in which environment an image processing tool must be applied. (2) a relational database (rDB) on which the ES may perform requests to select image data for an application. (3) an image processing (IP) toolbox which is able to run procedures according to the ES specifications on data pointed to by the rDB. In other words, the IP toolbox knows how to run a procedure but not why.
In practical situations, images are discrete and only discrete filtering can be performed, so that the above theory must be adapted accordingly. In this paper, we derive the filter family which must replace the Gaussian kernel in this case. The result can be understood by noting that the Fourier transform of the second derivative corresponds to multiplication by the square of the frequency, such that our filter is the discrete version of a Gaussian. In other words, our approach consistently generalizes the continuous theory to the discrete case. When the discrete equivalent of the Laplacian is defined on the basis of n-order B-spline interpolating functions, the image stack exactly verifies the continuous diffusion equation at the spatially sampled points. These results generalize to any linear partial differential operator corresponding to another requirement on the image stack, simply by defining the discrete equivalent operator.
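The frequency-domain argument can be checked numerically with the simplest discrete second derivative, the kernel [1, -2, 1]: its frequency response is -4 sin²(πf), which matches the continuous operator's -(2πf)² at low frequency. This is a toy check, not the B-spline construction of the paper:

```python
import math

def second_diff_response(f):
    """Frequency response of the discrete second derivative [1, -2, 1]
    at normalized frequency f (cycles/sample):
    2*cos(2*pi*f) - 2 = -4*sin(pi*f)**2."""
    return 2.0 * math.cos(2.0 * math.pi * f) - 2.0
```

Near f = 0 this agrees with -(2πf)² to fourth order in f; the discrepancy at high frequencies is exactly what motivates replacing the sampled Gaussian by a properly defined discrete kernel.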
The goal of this paper is to describe a consistent method for defining discrete image processing operators in the same way as discrete image formation operators. This is done via the generalized sampling theorem, which establishes the relationship between continuous and discrete functions according to the mean-square error in a spline or bandlimited subspace. A discrete operator is defined according to its continuous counterpart operating on continuous functions in the same subspace. Classical medical image acquisition bases are often radial, whereas classical image processing operators are deduced from separable bases. The paper shows the tension between these two imperatives for medical image processing, explains where the risks of information loss lie when implementing discrete linear operators, and presents two methods to partially or totally preserve the initially stored information.
An intelligent image interpretation system should be able to help the physician during diagnosis by taking into account medical imaging specificities: (1) various domains of knowledge -- medical scene, acquisition, image processing (IP), interpretation; (2) complex IP procedures which provide relevant diagnostic information, particularly multimodality medical imaging procedures. In this context, we are designing a multimodality medical image interpretation system (MMIIS) involving different expert systems and procedural actors: (1) the medical image database, with an image object formalism: proper image data are managed as files, image data on which requests can be made are stored in relational DBs, and inferred image information is kept in knowledge bases. (2) The user interface, which we chose to be as standard as possible to build a portable system (OSF/Motif). (3) Expert systems, and particularly the IP expert system, with specific characteristics, describing knowledge about IP procedures (comparing an object-oriented and a Prolog-based implementation). (4) The IP actor, structured thanks to an IP classification. (5) Communication interfaces, realizing the integration of the above components. They are necessary to achieve homogeneity throughout the system. The complexity of the whole system is due to the complexity of implementing each module as well as of their integration. The architecture of the integrated MMIIS is presented, as well as the functionality, implementation, and interaction of its various components.
A densitometric model was developed to estimate absolute blood flows in vessels from a DSA sequence. It is derived from the image intensity to contrast agent (CA) relationship and from the mass conservation law. We showed that the flow rate through a vascular cross section is determined from the time summation Φ of densitometric areas within a single ROI. It also depends on the mass and the attenuation coefficient μ of the CA and on the acquisition conditions. After estimating the apparent value of μ, experiments with vessel phantoms were performed on DSA systems to validate this model. The effects of the distance between the injection site and the region of measurement, the magnification factor, the tubing cross-section area, the injected mass of iodine, and the flow rate of the injected CA were tested and analyzed. The accuracy and the reproducibility of water flow rate measurements by this method were estimated and the deviations explained. Finally, we show how such experiments can be used to quantify a stenosis from a whole DSA image sequence. Area narrowing is equal to the ratio of the integrated terms Φ for the reference and stenotic segments. Relative flows at a vessel bifurcation can also be estimated by applying the model to each segment.
The main components of a multimodality medical image interpretation system (MMIIS) are: (1) a user interface, (2) an image database storing image objects along with their description, (3) expert systems (ES) in various medical imaging domains, particularly in image processing (IP), and (4) an IP actor, a toolbox of standard IP procedures. To implement such a system, we are building two prototypes: one with an object-oriented (OO) expert system and one with a classical logical expert system. In these two different approaches, we have to model the medical imaging objects and represent them. Both approaches use an OO data model even if its implementation differs in: (1) the characteristics of each ES in managing knowledge and inferences (uncertainty, non-monotonicity, backward and forward chaining, meta-knowledge), (2) the environment used to implement the different experts and to activate IP procedures, and (3) the communication means between the experts and the other components. In the OO approach, an ES based on Smalltalk is used, and in the conventional one an ad hoc Prolog ES was built. Our goal is to compare their advantages and disadvantages in implementing a MMIIS.
This paper is concerned with the main underlying concepts of a comprehensive data model for the medical image database (MIDB), which was developed by the NRV-PACS group. This model is based on semantic and object-oriented model theory and describes not only an image and its environment, as is the case for standard models, but all meaningful information of the medical image environment, such as acquired or processed data set structures. This model can also take into account changes in image production processes (new kinds of acquisition or new image processing techniques) because it does not describe the entities themselves but how they are created.
Proc. SPIE 1446, Medical Imaging V: PACS Design and Evaluation
KEYWORDS: Data modeling, Surgery, Data storage, Image processing, Medical imaging, Data acquisition, Data archive systems, Data processing, Computed tomography, Picture Archiving and Communication System
The development of PACS image databases has long been thought of as a major technological challenge, due to the amount of data to be managed. On the contrary, the authors think that despite major improvements in storage technology, new data management techniques must be proposed to make image databases medically useful in PACS environments. More precisely, image databases must contain not only images per se, but also the description of all objects used in medical imaging, in order to permit the remote processing, analysis, and interpretation of images. In several other papers, the authors explain why they adopted an object-oriented approach to model information in medical imaging. In this paper, the focus is on the inventory of objects manipulated in medical imaging from a qualitative viewpoint. For this purpose, a large number of representative imaging procedures were selected. The authors characterized how they are asked for by clinicians, realized in imaging departments, and consumed by requiring physicians and surgeons, in three French university hospitals. On the basis of this inventory, a set of image data -- i.e. of objects used in medical imaging -- was defined to show that this set must evolve with advances in medical imaging, and to point out that relational DBMS concepts cannot represent all image data.
In this paper, a new approach to the Filtered BackProjection (FBP) algorithm is presented. The method is based on the reconstruction stability in Sobolev spaces and B-spline functions which define a Pixel Intensity Distribution Model (PIDM-n) according to the spline degree n of the desired reconstruction. It is shown that PIDM-n reconstructions can be efficiently obtained. Angular sampling is studied and comparison with standard FBP shows the superiority of the algorithm presented. Moreover, simulation studies of noise degradation and blur in the projections show the algorithm to be superior to FBP in this more realistic case.
Image analysis (IA) knowledge and scene analysis (SA) knowledge are jointly used in Medical Image Interpretation (MII). Usually, knowledge is implicitly incorporated into procedures, making the latter very application-dependent. On the contrary, we want to clearly separate them in the MII system we are designing. For this purpose, we selected the evaluation of the functional pulmonary fraction on SPECT images as a case example from which we characterized: (1) image analysis knowledge: the manipulated IA objects are identified and organized in a stable and minimal set of generalized IA objects, and the IA procedures used are made generic to ensure the separation between SA and IA knowledge; we classify them according to their IA object parameters. (2) domain-specific knowledge: the control and ordering of such procedures according to scene information are identified and represented in an adequate form to be integrated in the system. The choices made above (theoretical definitions and organization of IA objects and procedures, and the separation between IA and domain-specific knowledge) that ensure generality and application-independence in MII are explained, and their validity is shown on the selected example. The approach is to be generalized to multimodality medical image interpretation.
In this paper, a continuous / discrete projection / backprojection model is presented, from which the validity of
the discrete projection / reconstruction algorithm can be assessed. We mainly focus on projection sampling since angular
sampling has been extensively studied previously. For this purpose a pixel intensity distribution model relating continuous
and discrete original functions is proposed. Sampling of model projections is then studied, and projection filtering analyzed.
Proper implementation of the discrete backprojection operator is derived, such that the resulting reconstructed function can
be compared with the original one, and the overall consistency of the approach proved. Experimental results are presented to
demonstrate the validity of the theoretical approach. The consequences of properly sampling projections in practical
conditions are finally discussed.