Breast density has become an established risk indicator for developing breast cancer. Current clinical practice
reflects this by grading mammograms patient-wise as entirely fatty, scattered fibroglandular, heterogeneously dense,
or extremely dense based on visual perception. Existing (semi-) automated methods work on a per-image basis
and mimic clinical practice by calculating an area fraction of fibroglandular tissue (mammographic percent density).
We suggest a method that follows clinical practice more strictly by segmenting the fibroglandular tissue portion
directly from the joint data of all four available mammographic views (cranio-caudal and medio-lateral oblique,
left and right), and by subsequently calculating a consistently patient-based mammographic percent density estimate.
In particular, each mammographic view is first processed separately to determine a region of interest (ROI) for
segmentation into fibroglandular and adipose tissue. ROI determination includes breast outline detection via
edge-based methods, peripheral tissue suppression via geometric breast height modeling, and - for medio-lateral
oblique views only - pectoral muscle outline detection based on optimizing a three-parameter analytic curve with
respect to local appearance. Intensity harmonization based on separately acquired calibration data is performed
with respect to compression height and tube voltage to facilitate joint segmentation of available mammographic
views. A Gaussian mixture model (GMM) on the joint histogram data with a posteriori calibration guided
plausibility correction is finally employed for tissue separation.
The proposed method was tested on patient data from 82 subjects. Results show excellent correlation (r = 0.86)
with the radiologists' grading, with deviations ranging between -28% (q = 0.025) and +16% (q = 0.975).
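The final segmentation step of the pipeline above can be sketched as follows. This is a minimal illustration, not the authors' implementation: a two-component 1D Gaussian mixture fitted via EM to pixel intensities pooled from all harmonized views, with the brighter component taken as fibroglandular tissue; the function names and initialization are illustrative assumptions, and the calibration-guided plausibility correction is omitted.

```python
import numpy as np

def fit_gmm_1d(samples, n_iter=50):
    """Fit a two-component 1D Gaussian mixture to intensity samples via EM.
    Returns (weights, means, variances) of the two components."""
    x = np.asarray(samples, dtype=float)
    # Illustrative initialization: split the intensity range at the quartiles.
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    var = np.array([x.var(), x.var()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: per-sample component responsibilities.
        pdf = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return w, mu, var

def percent_density(pixels, w, mu, var):
    """Label each pixel with its most likely component; the brighter
    component is taken as fibroglandular. Returns the area fraction
    (a patient-based mammographic percent density estimate)."""
    x = np.asarray(pixels, dtype=float)
    pdf = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    dense = pdf.argmax(axis=1) == mu.argmax()
    return dense.mean()
```

In the patient-based setting described above, `pixels` would be the ROI intensities of all four views pooled after intensity harmonization, so a single mixture fit yields one density estimate per patient.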
Unconstrained environments with variable ambient illumination and changes of head pose are still challenging for many face recognition systems. To recognize a person independent of pose, we first fit an active appearance model to a given facial image. Shape information is used to transform the face into a pose-normalized representation. We decompose the transformed face into local regions and extract texture features from these not necessarily rectangular regions using a shape-adapted discrete cosine transform. We show that these features contain sufficient discriminative information to recognize persons across changes in pose. Furthermore, our experimental results show a significant improvement in face recognition performance on faces with pose variations when compared with a block-DCT based feature extraction technique in an access control scenario.
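The texture-feature step can be illustrated with a plain block DCT. Note this sketch does not reproduce the paper's shape-adapted DCT for non-rectangular regions; it only shows the underlying idea of keeping low-frequency 2D DCT coefficients of a (pose-normalized) region as a compact texture descriptor. Function names and the coefficient count are illustrative.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2D DCT-II of a square block via the separable DCT matrix."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)  # DC row of the orthonormal basis
    return C @ block @ C.T

def dct_features(region, n_coeffs=10):
    """Keep the first low-frequency coefficients as a texture descriptor,
    ordered by increasing frequency index i + j (an approximation of the
    usual zig-zag scan)."""
    coeffs = dct2(region)
    n = region.shape[0]
    order = np.add.outer(np.arange(n), np.arange(n)).ravel().argsort(kind="stable")
    return coeffs.ravel()[order][:n_coeffs]
```

A recognizer would compute such descriptors per local region of the pose-normalized face and compare the concatenated feature vectors, e.g. by nearest-neighbor matching.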
In medical X-ray examinations, images suffer considerably from severe, signal-dependent noise as a result of the
effort to keep applied doses as low as possible. This noise can be seen as an additive signal that degrades image
quality and might disguise valuable content. Lost information has to be restored in a post-processing step. The
crucial aspect of filtering medical images is preserving edges and texture on the one hand while removing
noise on the other. Classical smoothing filters, such as Gaussian or box filters, are data-independent
and blur all image content uniformly. State-of-the-art methods currently make use of local neighborhoods or
global image statistics. However, exploiting global self-similarity within an image and inter-image similarity for
subsequent frames of a sequence bears an unused potential for image restoration. We introduce a non-local
filter with data-dependent response that closes the gap between local filtering and stochastic methods. The
filter is based on the non-local means approach proposed by Buades et al.<sup>1</sup> and is similar to bilateral filtering.
In order to apply this approach to medical data, we heavily reduce the computational costs incurred by the
original approach. Thus it is possible to interactively enhance single frames or selected regions of interest within
a sequence. The proposed filter is applicable for time-domain filtering without the need for accurate motion
estimation. Hence it can be seen as a general solution for filtering 2D as well as 2D+t X-ray image data.
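The core non-local means idea can be sketched as below. This is a minimal, unoptimized single-frame illustration, not the proposed filter: the cost reductions and the 2D+t temporal extension described above are not reproduced, and the search window is simply restricted to keep the computation small; all parameter names are illustrative.

```python
import numpy as np

def nl_means(img, patch=3, search=5, h=15.0):
    """Simplified non-local means: each pixel becomes a weighted average of
    pixels in a restricted search window, where weights decay with the
    squared difference between the surrounding patches (filter strength h)."""
    pad, spad = patch // 2, search // 2
    padded = np.pad(img.astype(float), pad + spad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad + spad, j + pad + spad
            ref = padded[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
            weights, values = [], []
            for di in range(-spad, spad + 1):
                for dj in range(-spad, spad + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pad:ni + pad + 1, nj - pad:nj + pad + 1]
                    d2 = np.mean((ref - cand) ** 2)  # patch dissimilarity
                    weights.append(np.exp(-d2 / (h * h)))
                    values.append(padded[ni, nj])
            weights = np.array(weights)
            out[i, j] = np.dot(weights, values) / weights.sum()
    return out
```

For time-domain filtering as described above, the search window would additionally extend into neighboring frames of the sequence, which is what makes explicit motion estimation unnecessary.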