The ability to detect and match features across multiple views of a scene is a crucial first step in many computer vision
algorithms for dynamic scene analysis. State-of-the-art methods such as SIFT and SURF perform successfully when
applied to typical images taken by a digital camera or camcorder. However, these methods often fail to generate an
acceptable number of features when applied to medical images, because such images usually contain large homogeneous
regions with little color and intensity variation. As a result, tasks like image registration and 3D structure recovery
become difficult or impossible in the medical domain.
This paper presents a scale-, rotation-, and color/illumination-invariant feature detector and descriptor for medical
applications. The method incorporates elements of SIFT and SURF while optimizing their performance on medical data.
Based on experiments with various types of medical images, we combined, adjusted, and built on methods and
parameter settings employed in both algorithms. An approximate Hessian-based detector is used to locate scale-invariant
keypoints, and a dominant orientation is assigned to each keypoint using a gradient orientation histogram, providing
rotation invariance. Finally, keypoints are described with an orientation-normalized distribution of gradient responses at
the assigned scale, and the feature vector is normalized for contrast invariance. Experiments show that the algorithm
detects and matches far more features than SIFT and SURF on medical images, with similar error levels.
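Two of the steps above lend themselves to a short illustration: the SIFT-style assignment of a dominant orientation from a magnitude-weighted gradient-orientation histogram, and the L2 normalization of the descriptor vector that yields contrast invariance. The sketch below is not the paper's implementation; the bin count, patch handling, and helper names are illustrative assumptions.

```python
import numpy as np

def dominant_orientation(patch, num_bins=36):
    """Estimate a keypoint's dominant orientation from a grayscale patch.

    Builds a gradient-orientation histogram weighted by gradient magnitude
    (the SIFT-style scheme described above) and returns the angle of the
    strongest bin's centre, in radians. The 36-bin choice mirrors SIFT's
    10-degree bins; the paper may use different settings.
    """
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)                # gradient magnitude (weights)
    ang = np.arctan2(gy, gx)              # gradient orientation in [-pi, pi]
    hist, edges = np.histogram(ang, bins=num_bins,
                               range=(-np.pi, np.pi), weights=mag)
    peak = int(np.argmax(hist))
    return 0.5 * (edges[peak] + edges[peak + 1])

def normalize_descriptor(vec, eps=1e-7):
    """L2-normalize a descriptor vector.

    Scaling the image intensities by a constant scales every gradient
    response by the same factor, so dividing by the vector's norm removes
    that factor -- the contrast-invariance step mentioned above.
    """
    v = np.asarray(vec, dtype=float)
    return v / (np.linalg.norm(v) + eps)

# A patch whose intensity increases left-to-right has gradients along +x,
# so the dominant orientation should come out near 0 radians.
patch = np.tile(np.arange(16, dtype=float), (16, 1))
theta = dominant_orientation(patch)
```

Rotation invariance then follows by rotating the descriptor sampling grid by `theta` before accumulating gradient responses, so that two views of the same point produce comparable vectors regardless of in-plane rotation.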