Computer Aided Diagnostic (CAD) systems are already of proven value in healthcare, especially for surgical planning; nevertheless, much remains to be done. Gliomas are the most common brain tumours in adults (70%), with a survival time of just 2-3 months if detected at WHO grade III or higher. Such tumours are extremely variable, necessitating multi-modal Magnetic Resonance Imaging (MRI). Gadolinium-based contrast agents are only relevant at later stages of the disease, where they highlight the enhancing rim of the tumour. Currently, there is no single accepted method that can be used as a reference. There are three main challenges with such images: to decide whether a tumour is present and, if so, to localize it; to construct a mask that separates healthy and diseased tissue; and to differentiate between the tumour core and the surrounding oedema. This paper presents two contributions. First, we develop tumour seed selection based on multiscale multi-modal texture feature vectors. Second, we develop a level-set segmentation method driven by a local-phase-congruency feature map. The segmentations achieved with our method are more accurate than those of previously published methods, particularly for challenging low-grade tumours.
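The seed-selection idea above can be illustrated with a minimal sketch. This is not the paper's actual feature set: it approximates multiscale multi-modal texture descriptors with per-modality local mean and standard deviation at a few Gaussian scales, and picks as a candidate seed the pixel whose feature vector deviates most from the global mean. All function names, scales, and the deviation criterion are illustrative assumptions.

```python
# Illustrative sketch only -- NOT the paper's method. Texture features are
# approximated by local mean/std at several Gaussian scales per modality.
import numpy as np
from scipy.ndimage import gaussian_filter

def texture_features(modalities, scales=(1, 2, 4)):
    """Stack local mean and local std per modality and scale -> (H, W, F)."""
    feats = []
    for img in modalities:                       # e.g. T1, T2, FLAIR slices
        for s in scales:
            mu = gaussian_filter(img, s)         # local mean at scale s
            var = gaussian_filter(img**2, s) - mu**2
            feats.extend([mu, np.sqrt(np.maximum(var, 0.0))])
    return np.stack(feats, axis=-1)

def select_seed(modalities):
    """Hypothetical seed rule: pixel most dissimilar from the mean feature."""
    F = texture_features(modalities)
    d = np.linalg.norm(F - F.mean(axis=(0, 1)), axis=-1)
    return np.unravel_index(np.argmax(d), d.shape)
```

In practice the multiscale analysis would come from a wavelet decomposition rather than Gaussian smoothing, but the structure (feature vector per voxel, outlier-style seed choice) is the same.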
Image-based medical diagnosis typically relies on the (poorly reproducible) subjective classification of textures
in order to differentiate between diseased and healthy tissue. Clinicians claim that significant benefits would
arise from quantitative measures to inform clinical decision making. The first step in generating such measures
is to extract local image descriptors - from noise-corrupted and often spatially and temporally coarse-resolution
medical signals - that are invariant to illumination, translation, scale and rotation of the features. The Dual-Tree Complex Wavelet Transform (DT-CWT) provides a wavelet multiresolution analysis (WMRA) tool
with good properties in 2D, but it has limited rotational selectivity and requires computationally
intensive steering due to the inherently 1D operations performed. The monogenic signal, defined in n >= 2 dimensions
via the Riesz transform, gives excellent orientation information without the need for steering. Recent work has
suggested the Monogenic Riesz-Laplace wavelet transform as a possible tool for integrating these two concepts
into a coherent mathematical framework. We have found that the proposed construction suffers from a lack of
rotational invariance and is not optimal for retrieving local image descriptors. In this paper we show:
1. Local frequency and local phase from the monogenic signal are not equivalent, especially in the phase
congruency model of a "feature", and so the two are not interchangeable in medical imaging applications.
2. The accuracy of local phase computation may be improved by estimating the denoising parameters while
maximizing a new measure of "featureness".
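To make the monogenic-signal concept concrete, the following sketch computes the 2D monogenic signal via the frequency-domain Riesz transform and derives local amplitude, phase and orientation. It is an illustrative construction under simplifying assumptions (in particular, the band-pass filtering that normally precedes the Riesz transform is omitted), not the paper's exact pipeline or its "featureness" measure.

```python
# Hedged sketch: 2D monogenic signal via the Riesz transform (FFT domain).
# Band-pass prefiltering is omitted for brevity; a real pipeline would apply
# it before forming the even/odd components.
import numpy as np

def monogenic(img):
    """Return local amplitude, phase and orientation of a 2D image."""
    rows, cols = img.shape
    u = np.fft.fftfreq(cols)[None, :]   # horizontal frequency axis
    v = np.fft.fftfreq(rows)[:, None]   # vertical frequency axis
    radius = np.sqrt(u**2 + v**2)
    radius[0, 0] = 1.0                  # avoid division by zero at DC
    F = np.fft.fft2(img)
    # Riesz transform kernels: (i*u/|w|, i*v/|w|)
    r1 = np.real(np.fft.ifft2(F * (1j * u / radius)))
    r2 = np.real(np.fft.ifft2(F * (1j * v / radius)))
    even = img                          # even (in-phase) component
    odd = np.sqrt(r1**2 + r2**2)        # odd (quadrature) magnitude
    amplitude = np.sqrt(even**2 + odd**2)
    phase = np.arctan2(odd, even)       # local phase in [0, pi]
    orientation = np.arctan2(r2, r1)    # local orientation
    return amplitude, phase, orientation
```

Because the Riesz kernels are defined directly in n dimensions, no steering step is needed, which is the contrast with the DT-CWT drawn above.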