## 1. Introduction

Medical image registration is core to many image analysis pipelines. It consists of bringing two or more images into spatial alignment, often by mapping one image into the space of another. However, recent studies have highlighted that this directionality in the registration can create a bias in subsequent analyses.

Yushkevich et al.^{1} and Leung et al.^{2} highlighted the effect of asymmetric processing when estimating the brain atrophy occurring during a neurodegenerative process such as Alzheimer’s disease. Thompson et al.^{3} pointed out that bias in registration directionality has a large impact on the extraction of imaging biomarkers derived from tensor-based morphometry. Fox et al.^{4} enumerated several assessment methods, including symmetry, against which new methodologies for atrophy estimation should be tested.

Several approaches have been proposed to address this issue in the context of nonlinear registration. Christensen and Johnson^{5} jointly optimize forward and backward transformations while minimizing an inverse-consistency error criterion. Tao et al.^{6} and Modat et al.^{7} used similar schemes using the demons approach^{8} and a cubic $b$-spline parametrization,^{9} respectively. Vercauteren et al.^{10} took advantage of a nonstationary velocity field parametrization to concurrently optimize forward and backward transformations through the exponentiation of a velocity field and its negated version. Avants et al.^{11} presented a symmetric approach based on the optimization of a nonstationary velocity field discretized into intermediate space images.

Little work has, however, addressed directionality bias in the case of global registration. Leung et al.^{2} proposed to run several asymmetric registrations, for example forward and backward in the case of pairwise registration, and to combine the obtained transformations into a consensus registration through averaging in the log-Euclidean space. Reuter et al.^{12} developed a global registration technique based on robust statistical optimization performed in a midpoint space. This approach minimizes the residual differences between the input images while rejecting outliers. A linear intensity scaling is used to increase the robustness of the algorithm and to deal with different intensity ranges within the same modality or pulse sequence. This method was later extended by Wachinger et al.^{13} to accommodate multimodal images by using local entropy.

In this paper, we propose a robust and symmetric registration method that differs from that of Reuter et al. in two main aspects. First, we use a block-matching approach^{14} to establish spatial correspondences, where the normalized cross correlation is used as the measure of similarity. Due to the small size of the blocks under consideration, the proposed approach is suitable for multimodal registration applications.^{15} Second, the forward and backward transformation parameters are calculated jointly rather than via a midpoint space. This removes the need to discretize the transformed input images into an average space, which can be problematic when the input images have different fields-of-view and resolutions, as often seen when registering different modalities. In these scenarios, the user must make an algorithmic decision on how to discretize the midspace, which can bias the result toward one of the images. With the proposed approach, the transformation is optimized within the native spaces of the input images. Similarly to Reuter et al. and Wachinger et al., the proposed approach is robust to the presence of outliers thanks to the use of a least-trimmed squares (LTS) optimization.

We evaluated the accuracy, precision, and robustness of our symmetric block-matching–based approach using several databases: synthetic magnetic resonance (MR) images, MR longitudinal studies, multimodal studies combining magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT) scans, and pairs of preoperative and intraoperative MR images.

## 2. Method

### 2.1. Block Matching for Global Registration: the Classical Approach

In the original block-matching method, the registration is performed using a two-step approach. The first step is to establish point correspondences between the current transformed floating image and the reference image using the block-matching strategy. This is done by dividing the reference and transformed floating images into blocks of uniform size. Each block in the reference image is compared with the image blocks in the corresponding neighborhood of the transformed floating image. The matching block is the one with the maximum normalized cross correlation (NCC) with the reference block.
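The NCC criterion and the exhaustive neighborhood search can be sketched as follows. This is a simplified illustration, not the NiftyReg implementation; the function names and the search radius are hypothetical choices for the example.

```python
import numpy as np
from itertools import product

def ncc(block_ref, block_flo):
    # Normalized cross correlation between two equally sized blocks:
    # demean both blocks, then compute the cosine of the angle between them.
    a = block_ref.astype(float).ravel()
    b = block_flo.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def best_match(ref_block, flo_image, origin, search=4):
    # Exhaustively compare the reference block against every block of the
    # same size in a (2*search+1)^3 neighborhood of `origin` in the
    # floating image, and return the position maximizing the NCC.
    shape = ref_block.shape
    best_score, best_pos = -np.inf, origin
    for offset in product(range(-search, search + 1), repeat=3):
        pos = tuple(o + d for o, d in zip(origin, offset))
        if any(p < 0 for p in pos):
            continue
        cand = flo_image[pos[0]:pos[0] + shape[0],
                         pos[1]:pos[1] + shape[1],
                         pos[2]:pos[2] + shape[2]]
        if cand.shape != shape:
            continue
        score = ncc(ref_block, cand)
        if score > best_score:
            best_score, best_pos = score, pos
    return best_pos, best_score
```

Because each block is demeaned and normalized, the criterion is invariant to local affine intensity changes, which is what makes small blocks usable across modalities.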

### 2.2. Directionality Bias in Image Registration

The classic block-matching image registration maps one image into the space of another, producing a transformation ${T}_{I\to J}$ that maps image $I$ to image $J$ and enables image $J$ to be warped into the space of image $I$. If the two images were reversed so that $J$ is mapped to $I$, a new transformation ${T}_{J\to I}$ is produced. The directionality that is prevalent in many image registration algorithms would result in ${T}_{I\to J}\ne {T}_{J\to I}^{-1}$. If an algorithm is symmetric, then these two transformations would be the inverse of each other, and the registration result would not depend on the order of images.
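This property can be checked numerically: composing the two estimated transformations and measuring the deviation from identity gives a simple inverse-consistency error. The following is a minimal sketch using homogeneous affine matrices; the function name is a hypothetical helper.

```python
import numpy as np

def inverse_consistency_error(T_ij, T_ji):
    # Frobenius norm of the deviation of the composed transformation
    # from the identity; zero when T_ij and T_ji are exact inverses.
    n = T_ij.shape[0]
    return float(np.linalg.norm(T_ij @ T_ji - np.eye(n)))
```

A symmetric algorithm drives this error to zero by construction, whereas an asymmetric one generally leaves a nonzero residual that depends on the image order.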

Figure 1 illustrates the impact of the directionality bias on a subsequent analysis: the full brain atrophy estimation. In this example, two $\mathrm{T}1$-weighted MRIs from the same subject, acquired with a 1-year interval, are spatially normalized to a midspace based on an affine transformation. This step ensures that no resampling bias affects the brain atrophy estimation. Three registration approaches are used to estimate the affine parametrization: (i) an asymmetric registration where the baseline scan is considered as the reference and the follow-up scan as the floating, ${T}_{\mathrm{asym},b\to f}$, (ii) an asymmetric registration where the follow-up scan is considered as the reference and the baseline scan as the floating, ${T}_{\mathrm{asym},f\to b}$, and (iii) a symmetric registration scheme, ${T}_{\mathrm{sym}}$. After the resampling of the input images to a midspace, they are corrected for differential bias fields and the atrophy is estimated using the boundary shift integral method as described by Leung et al.^{2} The obtained full brain volume changes over one year are 14.49, 20.55, and 16.51 mL using the forward, backward, and symmetric registration schemes, respectively. Note that the atrophy measurements obtained with the two asymmetric registration approaches differ even though, for this specific case, the composition of the obtained matrices is close to identity, i.e., the transformations are close to being inverse consistent.
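One common way to build such an unbiased midspace is to resample both images through half transformations obtained via the matrix logarithm. This is a sketch under the assumption that the affine matrix has a real logarithm (which holds for the small rotations typical of intrasubject registration); `midspace_halves` is a hypothetical helper, not the pipeline's actual code.

```python
import numpy as np
from scipy.linalg import expm, logm

def midspace_halves(T):
    # Split a full affine T (mapping image A into image B) into two half
    # transformations, so that both images are resampled exactly once
    # into a common midspace: A -> mid via `half`, and
    # B -> mid via `half @ inv(T)`.
    half = expm(0.5 * logm(T)).real
    return half, half @ np.linalg.inv(T)
```

Since each image undergoes exactly one resampling with a transformation of comparable magnitude, neither image is favored by interpolation blur.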

### 2.3. Symmetric Extension to Block-Matching Global Registration

Since the original block-matching-based registration introduces a directionality, switching the order of the input images would yield transformations that are not the inverse of each other. The inverse-consistent extension to the block-matching algorithm ensures that the registration is symmetric and that no bias arises from the registration direction.

Similar to the classical approach, the optimal transformation is estimated in an iterative fashion where the current estimated transformation is updated at each iteration. Each iteration follows a two-step scheme where the first step is to establish point correspondences between the two images. Using block matching, two sets of correspondences are estimated: $\{{\overrightarrow{C}}_{I\to J}\}$ mapping points from image $I$ to image $J$ and $\{{\overrightarrow{C}}_{J\to I}\}$ mapping points from image $J$ to image $I$. These two sets of correspondences are estimated from blocks defined in images $I$ and $J$, respectively.

The second step is to update the transformation parameters estimated through LTS regression. In the original block-matching-based approach, the updated parameters are incremental and composed with the current estimate of the transformation. For example, if the current transformation at iteration $t$ is ${F}_{t}$ and an update ${f}_{t}$ is computed, then the updated transformation would be ${F}_{t+1}={f}_{t}\circ {F}_{t}$. If the same scheme were applied to the inverse of the transformation ${B}_{t}={F}_{t}^{-1}$, with an incremental update ${b}_{t}$ such that ${b}_{t}={f}_{t}^{-1}$, then the properties of matrix multiplication give ${({f}_{t}\circ {F}_{t})}^{-1}={F}_{t}^{-1}\circ {f}_{t}^{-1}={B}_{t}\circ {b}_{t}$. The incremental updates are thus premultiplied in one direction but postmultiplied in the other. Since matrix multiplication is not commutative, this approach would break symmetry, as the resulting transformations would depend on how the images were ordered. To address this issue, the block-matching correspondences for the symmetric approach are always established using the original image positions at every iteration. This is achieved by combining, at every iteration, the obtained block-matching correspondences with the previous estimate of the transformation. As a result, no composition of incremental matrices has to be performed, since the update is integrated directly into the LTS regression

instead of the previous update approach. To ensure inverse consistency between the transformation parameters, the transformation matrices obtained through the LTS fitting are averaged^{16} and the current estimates are updated accordingly.
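A minimal sketch of this averaging step, assuming log-Euclidean averaging of the forward estimate and the inverse of the backward estimate in the spirit of Ref. 16; `symmetrize` is a hypothetical helper, not the NiftyReg code.

```python
import numpy as np
from scipy.linalg import expm, logm

def symmetrize(T_fwd_est, T_bwd_est):
    # Average the forward estimate with the inverse of the backward
    # estimate in the log-Euclidean space, then define the backward
    # transformation as the exact inverse of the result so that the
    # returned pair is inverse consistent by construction.
    L = 0.5 * (logm(T_fwd_est) + logm(np.linalg.inv(T_bwd_est)))
    T_fwd = expm(L).real
    return T_fwd, np.linalg.inv(T_fwd)
```

Averaging in the log domain rather than element-wise keeps the result a valid affine transformation (e.g., the mean of two rotations remains a rotation-like matrix rather than a shrunken one).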

Using such an approach, the resulting transformations, forward and backward, are inverse consistent (${T}_{I\to J}={T}_{J\to I}^{-1}$) and the directionality of the registration does not affect the recovered transformation parameters.

## 3. Validation

The proposed block-matching method has been implemented as part of the NiftyReg open-source software.^{17} All registrations were run using the same parameters to allow for a fair comparison of the results. NCC was used as the local measure of similarity in the block-matching procedure. The block dimensions were set to $4\times 4\times 4\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{voxels}$. The blocks were sorted according to decreasing intensity variance, and the top 50% of the blocks in the reference images were considered for the LTS optimization step. During the LTS optimization, the 50% of the blocks with the largest squared Euclidean residuals (according to the current estimated transformation) were considered as outliers. A multiscale pyramidal approach with three resolution levels was used to avoid local minima and increase the capture range. The registrations produced a full affine transformation (12 degrees-of-freedom). For the retrospective image registration evaluation (RIRE) database, four resolution levels and a rigid transformation (6 degrees-of-freedom) were used.
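The LTS step described above can be sketched as follows. This is an illustrative re-implementation with hypothetical parameter names, assuming the correspondences are given as matched source/destination point arrays; it is not the NiftyReg code.

```python
import numpy as np

def lts_affine(src, dst, keep=0.5, iters=10):
    # Least-trimmed-squares fit of an affine map dst ~ A @ src + t:
    # alternate a least-squares fit with rejection of the correspondences
    # having the largest squared residuals under the current estimate.
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])      # homogeneous coordinates
    idx = np.arange(n)                         # start from all points
    for _ in range(iters):
        M, *_ = np.linalg.lstsq(X[idx], dst[idx], rcond=None)
        residuals = np.linalg.norm(X @ M - dst, axis=1)
        idx = np.argsort(residuals)[: max(4, int(keep * n))]
    return M.T  # 3x4 matrix [A | t]
```

Keeping only the best-fitting half of the correspondences at each pass is what makes the estimate robust to mismatched blocks, e.g., those falling on resected tissue or outside the overlapping field-of-view.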

### 3.1. Validation on Synthetic Data

In order to assess the capture range and robustness of the proposed symmetric approach, two synthetic images^{18} were used: a $\mathrm{T}1$-weighted ($\mathrm{T}1\mathrm{w}$) and a $\mathrm{T}2$-weighted ($\mathrm{T}2\mathrm{w}$) MRI. Both images were simulated with a 9% noise level and 20% intensity nonuniformity. Orthogonal views of these images are shown in Fig. 2. The images were *a priori* in alignment, and known affine transformations were applied to one image. Registrations were then performed using the symmetric and the asymmetric approaches in both forward and backward directions. The registration error was computed at the eight image corners, and the average Euclidean distance between the ground-truth and the recovered positions for these corner points is reported.
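The corner-based error can be computed as in the following sketch, where `shape` holds the image dimensions in voxels and the transformations are homogeneous $4\times 4$ matrices; the function name is a hypothetical helper.

```python
import numpy as np
from itertools import product

def corner_error(T_true, T_rec, shape):
    # Mean Euclidean distance between the positions of the eight image
    # corners under the ground-truth and the recovered transformations.
    corners = np.array([(i, j, k, 1.0)
                        for i, j, k in product(*[(0.0, s - 1.0) for s in shape])])
    diff = corners @ (T_true - T_rec).T
    return float(np.mean(np.linalg.norm(diff[:, :3], axis=1)))
```

Evaluating at the corners rather than the center is deliberate: rotation and scaling errors grow with distance from the origin, so corner errors bound the misalignment anywhere inside the volume.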

The known transformations were generated by applying rotations from $-45$ to 45 deg in 15 deg increments, translations from $-45$ to 45 voxels in 15-voxel increments, scalings from 50% to 150% in 10% increments, and shearings from $-0.2$ to 0.2 in 0.05 increments. This resulted in 4851 transformations (varying each of the 12 components independently) and a total of 14,553 individual registrations (symmetric, asymmetric forward, and asymmetric backward). The two input images were also registered using the FMRIB’s linear image registration tool (FLIRT) algorithm,^{19} part of the FSL software package.^{20} The normalized mutual information was used as a similarity measure. Note that, due to the high level of noise and intensity nonuniformity, FLIRT led to errors of 1.30 and 1.16 voxels for the forward and backward approaches, respectively, even when the input images were already aligned. Based on these error values, any registration result with an average error larger than 2 voxels was classified as a failure.

The success rates for the symmetric, forward, and backward implementations were 91.12%, 78.96%, and 86.02%, respectively. For 1742 transformations, the symmetric approach successfully recovered the transformation when one of the asymmetric schemes failed. In 410 cases, the symmetric approach recovered the known transformation when both the forward and backward schemes failed. In 19 occurrences, the symmetric approach failed whereas the backward method successfully recovered the transformation and, in one case, the symmetric registration failed while both asymmetric approaches recovered the transformation.

### 3.2. Validation on the RIRE Database

The RIRE project, based on the Vanderbilt database,^{21} consists of CT, PET, and MRI scans with an associated marker-based gold standard transformation.^{22} We assessed the accuracy of the proposed symmetric approach using this database and compared its result with the asymmetric approach. For all registrations, the CT and PET images were used as references and the landmark errors were assessed in their space. The voxel dimensions of the CT and PET images were $0.65\times 0.65\times 4.00$ and $2.59\times 2.59\times 8.00\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{mm}$, respectively. The proton density, $\mathrm{T}1\mathrm{w}$, and $\mathrm{T}2\mathrm{w}$ MRI were used as floating images. Figure 3 shows coronal views of multiple modality scans from a subject.

Intrasubject registration was performed on 9 subjects and the registration error, defined as the Euclidean distance in millimeters between the ground-truth and the recovered transformed positions, was computed at 704 landmark positions. Table 1 shows, for all registration techniques, the mean and maximum errors per modality and the overall error. Figure 4(a) shows the cumulative registration error over 364 landmarks when the CT images are used as reference, and Fig. 4(b) presents the same result for 340 landmarks defined on the PET images.

## Table 1

Registration errors from multimodal intrasubject registrations using the RIRE database. Sym. BM: symmetric block-matching registration; Asym. BM: original, asymmetric block matching. The columns show the mean and maximum errors of the various multimodal combinations registered, with the overall error in the last column.

| Reference image | CT | CT | CT | PET | PET | PET | All |
|---|---|---|---|---|---|---|---|
| Floating image | Proton density (PD) | T1w | T2w | PD | T1w | T2w | |
| Sym. BM (mean error) | 1.71 | 1.60 | 1.64 | 3.11 | 3.54 | 2.49 | 2.33 |
| Sym. BM (max error) | 3.07 | 3.62 | 3.33 | 9.28 | 9.12 | 7.55 | 9.28 |
| Asym. BM (mean error) | 2.13 | 1.95 | 3.39 | 4.39 | 9.94 | 2.68 | 3.94 |
| Asym. BM (max error) | 12.45 | 4.85 | 25.24 | 21.48 | 68.98 | 7.38 | 68.98 |

### 3.3. Validation on the MIRIAD Database

We used the MIRIAD database^{23}^{,}^{24} to assess the transitivity properties of the symmetric and asymmetric approaches. The MIRIAD database consists of brain MRI scans from 46 subjects diagnosed with Alzheimer’s disease and 23 age-matched healthy controls. For all subjects, we used the baseline, 6-month, and 12-month follow-up scans. The 6- and 12-month follow-up scans were registered to their corresponding baseline (${T}_{\mathrm{BL}-6}$ and ${T}_{\mathrm{BL}-12}$, respectively) and the 12-month scans were registered to the 6-month scans (${T}_{6-12}$). The transitivity error was defined as the average Euclidean distance at the transformed corners of the baseline image between the direct transformation (${T}_{\mathrm{BL}-12}$) and the composed transformation (${T}_{6-12}\circ {T}_{\mathrm{BL}-6}$). The mean transitivity error over the 69 subjects was 0.54 mm (SD 0.27) and 0.66 mm (SD 0.34) for the symmetric and asymmetric approaches, respectively. The symmetric approach leads to a significantly lower transitivity error than the asymmetric approach (two-tailed, paired *t*-test, $p<0.005$). The minimum and maximum average errors were [0.15, 1.35] mm and [0.20, 1.60] mm for the symmetric and the asymmetric block matching, respectively. For comparison, using FLIRT with the default parameters, we obtained a mean error of 1.00 mm (SD 0.71) and minimum and maximum average errors of [0.27, 4.74] mm.
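The transitivity error above can be sketched as follows, with homogeneous $4\times 4$ affine matrices and `shape` giving the baseline image dimensions in voxels; the function name is a hypothetical helper.

```python
import numpy as np
from itertools import product

def transitivity_error(T_bl_12, T_bl_6, T_6_12, shape):
    # Mean Euclidean distance, at the transformed corners of the baseline
    # image, between the direct transformation and the composition of the
    # two-step transformations through the 6-month scan.
    corners = np.array([(i, j, k, 1.0)
                        for i, j, k in product(*[(0.0, s - 1.0) for s in shape])])
    direct = corners @ T_bl_12.T
    composed = corners @ (T_6_12 @ T_bl_6).T
    return float(np.mean(np.linalg.norm((direct - composed)[:, :3], axis=1)))
```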

### 3.4. Intraoperative Images

A dataset comprising 15 pairs of pre- and intraoperative $\mathrm{T}1$-weighted MR images was used to assess the robustness of the proposed algorithm to missing tissues. The preoperative MRI were acquired on a 3T GE Signa Excite HD scanner (General Electric, Waukesha, Wisconsin) with a spatial resolution of $0.9\times 0.9\times 1.1\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{mm}$. The intraoperative scans, acquired on a 1.5T Siemens Espree (Erlangen, Germany), have a spatial resolution of $1.1\times 1.1\times 1.3\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{mm}$ and were all acquired after some tissue resection had been performed. Note that the intraoperative acquisitions do not have the brain center in the field-of-view. Figure 5 shows the pre- and intraoperative scans from one subject.

We performed the 15 intrasubject registrations using the proposed symmetric block-matching approach and twice with the asymmetric block-matching approach (once with the preoperative image as reference and once with the intraoperative image as reference). It is difficult to localize anatomical landmarks in the intraoperative MRI due to severe nonlinear geometric distortions and degraded image quality; hence, we only performed a qualitative analysis of the registration results. Two reviewers were asked to classify the registration results as successful, acceptable, or failure based on visual assessment while being blinded to the registration scheme. The results are reported in Table 2. The symmetric approach successfully registered all 15 datasets, while multiple cases using the asymmetric registrations were classified as failures.

## Table 2

Qualitative evaluation of pre- and intraoperative registration performance.

| | Successful | Acceptable | Failure |
|---|---|---|---|
| Symmetric | 15 | 0 | 0 |
| Forward | 12 | 1 | 2 |
| Backward | 5 | 3 | 7 |

## 4. Discussion

We presented a symmetric extension of the block-matching algorithm for global registration. To the best of our knowledge, the proposed scheme is one of the two available implementations for robust multimodal symmetric registration, the other being the implementation of the work by Wachinger et al.^{13} The algorithm has been compared with the original asymmetric framework in both mono- and multimodal registration. Using simulated transformations, we found that the symmetric formulation led to an increase in capture range. The method also yields a lower transitivity error over multiple time-points and multiple subjects, a small but acknowledged source of bias in subsequent analyses. Using the RIRE database for evaluation of intermodality registration, we found that the symmetric approach outperformed its asymmetric counterpart both in terms of accuracy and in terms of robustness, as shown by the lower cumulative errors over multiple landmarks. Finally, using pre- and intraoperative MRI data, the proposed algorithm was shown to be robust to missing tissue. Note that all validations were performed without using any masking or initial alignment of the brain center of mass, as the overall aim was to compare the methods in the same context. As a result, we do not claim that the proposed implementation is suitable for all applications or that it outperforms all other implementations. Our aim was to highlight the advantages of using a symmetric registration approach as opposed to an asymmetric one.

## Acknowledgments

M. Modat is supported by the UCL Leonard Wolfson Experimental Neurology Centre. The Dementia Research Centre is supported by Alzheimer’s Research United Kingdom, the Brain Research Trust, and The Wolfson Foundation. P. Daga is supported by the EPSRC (EP/J020990/01). G. Winston was supported by an MRC Clinical Research Training Fellowship (G0802012). S. Ourselin receives funding from the EPSRC (EP/H046410/1, EP/J020990/1, EP/K005278), the MRC (MR/J01107X/1), the EU-FP7 project VPH-DARE@IT (FP7-ICT-2011-9-601055), the NIHR Biomedical Research Unit (Dementia) at UCL, and the National Institute for Health Research University College London Hospitals Biomedical Research Centre (NIHR BRC UCLH/UCL High Impact Initiative). Acquisition of a subset of the patient scans used in this paper was funded by a Wellcome Trust Programme Grant (WT083148).

## References

P. A. Yushkevich et al., “Bias in estimation of hippocampal atrophy using deformation-based morphometry arises from asymmetric global normalization: an illustration in ADNI 3T MRI data,” NeuroImage 50(2), 434–445 (2010). http://dx.doi.org/10.1016/j.neuroimage.2009.12.007

K. K. Leung et al., “Consistent multi-time-point brain atrophy estimation from the boundary shift integral,” NeuroImage 59(4), 3995–4005 (2012). http://dx.doi.org/10.1016/j.neuroimage.2011.10.068

W. K. Thompson, D. Holland, and the Alzheimer’s Disease Neuroimaging Initiative, “Bias in tensor based morphometry stat-ROI measures may result in unrealistic power estimates,” NeuroImage 57(1), 1–4 (2011). http://dx.doi.org/10.1016/j.neuroimage.2010.11.092

N. C. Fox, G. R. Ridgway, and J. M. Schott, “Algorithms, atrophy and Alzheimer’s disease: cautionary tales for clinical trials,” NeuroImage 57(1), 15–18 (2011). http://dx.doi.org/10.1016/j.neuroimage.2011.01.077

G. E. Christensen and H. J. Johnson, “Consistent image registration,” IEEE Trans. Med. Imaging 20(7), 568–582 (2001). http://dx.doi.org/10.1109/42.932742

G. Tao et al., “Symmetric inverse consistent nonlinear registration driven by mutual information,” Comput. Methods Programs Biomed. 95(2), 105–115 (2009). http://dx.doi.org/10.1016/j.cmpb.2009.01.011

M. Modat et al., “Inverse-consistent symmetric free form deformation,” in WBIR’12: Proc. of the 5th Int. Conf. on Biomedical Image Registration, Nashville, TN, B. M. Dawant et al., Eds. (2012). http://dx.doi.org/10.1007/978-3-642-31340-0_9

J. P. Thirion, “Image matching as a diffusion process: an analogy with Maxwell’s demons,” Med. Image Anal. 2(3), 243–260 (1998). http://dx.doi.org/10.1016/S1361-8415(98)80022-4

D. Rueckert et al., “Nonrigid registration using free-form deformations: application to breast MR images,” IEEE Trans. Med. Imaging 18(8), 712–721 (1999). http://dx.doi.org/10.1109/42.796284

T. Vercauteren et al., “Symmetric log-domain diffeomorphic registration: a demons-based approach,” in Int. Conf. on Medical Image Computing and Computer-Assisted Intervention, New York, NY, D. Metaxas et al., Eds., pp. 754–761 (2008). http://dx.doi.org/10.1007/978-3-540-85988-8_90

B. B. Avants et al., “Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain,” Med. Image Anal. 12(1), 26–41 (2008). http://dx.doi.org/10.1016/j.media.2007.06.004

M. Reuter, H. D. Rosas, and B. Fischl, “Highly accurate inverse consistent registration: a robust approach,” NeuroImage 53(4), 1181–1196 (2010). http://dx.doi.org/10.1016/j.neuroimage.2010.07.020

C. Wachinger et al., “Robust registration of multi-modal images,” in Human Brain Mapping, Seattle (2013).

S. Ourselin et al., “Reconstructing a 3D structure from serial histological sections,” Image Vis. Comput. 19(1–2), 25–31 (2001). http://dx.doi.org/10.1016/S0262-8856(00)00052-4

O. Commowick, N. Wiest-Daessle, and S. Prima, “Block-matching strategies for rigid registration of multimodal medical images,” in 2012 9th IEEE Int. Symp. on Biomedical Imaging (ISBI), Barcelona, Spain, pp. 700–703 (2012). http://dx.doi.org/10.1109/ISBI.2012.6235644

M. Alexa, “Linear combination of transformations,” ACM Trans. Graphics 21, 380–387 (2002). http://dx.doi.org/10.1145/566654.566592

M. Jenkinson and S. M. Smith, “A global optimisation method for robust affine registration of brain images,” Med. Image Anal. 5(2), 143–156 (2001). http://dx.doi.org/10.1016/S1361-8415(01)00036-6

J. West et al., “Comparison and evaluation of retrospective intermodality brain image registration techniques,” J. Comput. Assisted Tomography 21(4), 554–566 (1997). http://dx.doi.org/10.1097/00004728-199707000-00007

I. B. Malone et al., “MIRIAD-public release of a multiple time point Alzheimer’s MR imaging dataset,” NeuroImage 70, 33–36 (2013). http://dx.doi.org/10.1016/j.neuroimage.2012.12.044

## Biography

**Marc Modat** received his PhD at the Centre for Medical Image Computing at University College London in 2012 under the supervision of professors Sebastien Ourselin and Nick Fox. He is now a senior research associate affiliated with the Centre for Medical Image Computing and the Dementia Research Centre, University College London, where he focuses on the translation of methodological development to the clinic.

**David M. Cash** is a senior research associate at the Dementia Research Centre in University College London Institute of Neurology and the Centre for Medical Image Computing where he works on translation of novel imaging biomarkers for dementia, with a particular interest on endpoints for clinical trials. He received his PhD from Vanderbilt University in 2004 under the supervision of professor Robert L. Galloway.

**Pankaj Daga** received his PhD from University College London in the field of medical image computing in 2014. During his PhD, he worked on novel and computationally efficient image analysis algorithms that have been successfully implemented in the clinic. His research interests lie in the field of medical image registration, with a special focus on Bayesian formalism of medical image analysis problems.

**Gavin P. Winston** is a clinical research fellow in the Department of Clinical and Experimental Epilepsy in University College London Institute of Neurology. His research interest is the use of novel imaging techniques in the understanding and treatment of patients with refractory epilepsy. He received his PhD in neuroimaging of epilepsy in 2014 under the supervision of professor John Duncan.

**John S. Duncan** is a consultant neurologist specializing in epilepsy, practicing at the National Hospital for Neurology and Neurosurgery, Queen Square, London, where he is a clinical director, and at Epilepsy Society in Chalfont St. Peter, United Kingdom. He was appointed as a professor of Neurology at the UCL Institute of Neurology in 1998. In 2004, he received the annual Clinical Research recognition award of the American Epilepsy Society. He is a past-president of the United Kingdom chapter of the ILAE. In 2005, he was elected an Ambassador for Epilepsy and a fellow of the Academy of Medical Sciences.

**Sébastien Ourselin** is head of the Translational Imaging Group within the Centre for Medical Image Computing at University College London, and a professor of Medical Image Computing. He obtained his PhD in computer science from INRIA (France) in the field of medical image analysis under the supervision of professor Nicholas Ayache.