Innovation in multimedia systems has an impact on our society. For example, surveillance camera systems combine video and
audio information. Currently, a new sensor for capturing fingerprint traces is being researched. It combines greyscale
images, which capture the intensity of the image signal, with topographic information, which captures fingerprint
texture on a variety of surface materials. This research proposes new application areas, which will be
analyzed from a technical-legal viewpoint. It assesses how technology design can promote the legal criteria of German and
European privacy and data protection. To this end, we focus on one technology goal as an example.
The analysis of latent fingerprint patterns generally requires clearly recognizable friction ridge patterns. Currently,
overlapping latent fingerprints pose a major problem for traditional crime scene investigation. This is because such
fingerprints usually have very similar optical properties, so distinguishing two or more overlapping
fingerprints from each other is not trivial. While chemical imaging can be employed to separate
overlapping fingerprints, the corresponding methods require sophisticated acquisition procedures and are not
compatible with conventional forensic fingerprint data.
A separation technique based purely on the local orientation of the ridge patterns of overlapping fingerprints was
proposed by Chen et al. and quantitatively evaluated using off-the-shelf fingerprint matching software on mostly
artificially composed overlapping fingerprint samples, a choice motivated by the scarce availability of authentic test data.
The work described in this paper adapts the approach presented by Chen et al. for application to authentic high-resolution
fingerprint samples acquired by a contactless measurement device based on a Chromatic White Light (CWL)
sensor. An evaluation of the work is also given, including an analysis of all adapted parameters. Additionally, the
separability requirement proposed by Chen et al. is evaluated for practical feasibility. Our results show promising
tendencies for the application of this approach to high-resolution data, yet the separability requirement still poses a challenge.
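
As background for the orientation-based separation idea, the following minimal sketch estimates the local ridge orientation per block from averaged squared gradients (the structure tensor method). Function and parameter names are illustrative; this is a generic textbook estimator, not the exact procedure of Chen et al. or of our adaptation.

    import numpy as np
    from scipy import ndimage

    def orientation_field(img, block=16):
        """Estimate the dominant ridge orientation per block from the
        averaged squared gradients (structure tensor) of the image."""
        img = img.astype(np.float64)
        gx = ndimage.sobel(img, axis=1)  # horizontal gradient
        gy = ndimage.sobel(img, axis=0)  # vertical gradient
        h, w = img.shape
        theta = np.zeros((h // block, w // block))
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                bx = gx[i:i + block, j:j + block]
                by = gy[i:i + block, j:j + block]
                # The doubled-angle representation avoids the 180-degree
                # ambiguity of orientations.
                vx = np.sum(2.0 * bx * by)
                vy = np.sum(bx ** 2 - by ** 2)
                ang = 0.5 * np.arctan2(vx, vy)  # dominant gradient direction
                # Ridges run perpendicular to the gradient direction.
                theta[i // block, j // block] = ang + np.pi / 2
        return theta  # one orientation per block, in radians

Two overlapping prints would then manifest as blocks whose local orientations cluster around two distinct directions, which is what an orientation-based separation can exploit.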
In this paper we first design a suitable context model for microphone recordings, formalising and describing the
involved signal processing pipeline and the corresponding influence factors. As a second contribution we apply the
context model to conduct empirical investigations of: a) the identification of suitable classification algorithms for
statistical pattern recognition based microphone forensics, evaluating 74 supervised classification techniques and 8
clusterers; b) the determination of suitable features for the pattern recognition (with very good results for second-order
derivative MFCC based features), showing that a reduction to the 20 best features has no negative influence on the
classification accuracy but increases the processing speed by a factor of 30; c) the determination of the influence of changes
in the microphone orientation and mounting on the classification performance, showing that the former has no detectable
influence, while the latter has a strong impact under certain circumstances; d) the performance achieved in using the
statistical pattern recognition based microphone forensics approach for the detection of audio signal compositions.
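
As an illustration of the feature type highlighted in b), the following sketch extracts second-order derivative (delta-delta) MFCC features and pools them into one vector per recording. The use of librosa and the parameter values are our own assumptions for this sketch, not the toolchain of the evaluation itself.

    import numpy as np
    import librosa

    def delta2_mfcc_features(path, n_mfcc=20):
        """Extract delta-delta MFCCs and pool them over time."""
        y, sr = librosa.load(path, sr=None)        # keep the native sample rate
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        d2 = librosa.feature.delta(mfcc, order=2)  # second-order derivative
        # One fixed-length vector per recording, e.g. as classifier input.
        return np.concatenate([d2.mean(axis=1), d2.std(axis=1)])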
Digital long-term preservation has become an important topic, not only within the preservation domain itself but also through
several national and international projects like the US National Digital Information Infrastructure and
Preservation Program, the German NESTOR project and the EU FP7 SHAMAN Integrated Project. The
reason for this is that a large part of the documents and other goods produced nowadays are digital in nature, and some
- called "born-digital" - have no analog master at all. Thus a great part of our cultural and scientific heritage for the coming
generations is digital and needs to be preserved as reliably as is the case for physical objects, even surviving hundreds of years.
However, the continuous succession of new hardware and software generations, which arrive at very short intervals
compared to the time spans mentioned above, renders digital objects from just a few generations ago inaccessible. They therefore need
to be migrated to new hardware and into newer formats. At the same time, the integrity and authenticity of the preserved
information are of great importance and need to be ensured. This becomes a challenging task considering the
long time spans and the necessary migrations, which alter the digital object.
Therefore, in a previous work we introduced a syntactic and semantic verification approach in combination with the
Clark-Wilson security model. In this paper we present a framework to ensure the security aspects of integrity and
authenticity of digital objects, especially images, from the time of their submission to a digital long-term preservation
system (ingest) up to their later access and even beyond. In particular, the framework describes how to detect whether a digital
object has retained both of its security aspects while at the same time allowing changes made to it by migration.
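
To illustrate the general idea of tracking integrity across intentional changes, the following minimal sketch records a cryptographic hash at ingest and binds every migration to the verified hash of its predecessor. This is only an illustration of the principle under our own assumptions, not the framework's actual mechanism; a real deployment would add digital signatures to also cover authenticity.

    import hashlib
    import time

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def ingest(obj: bytes) -> list:
        """Start a provenance chain with the hash of the submitted object."""
        return [{"event": "ingest", "hash": sha256(obj), "time": time.time()}]

    def migrate(chain: list, old_obj: bytes, new_obj: bytes) -> list:
        """Accept a migration only if the current object still matches the
        last recorded state, then bind the new hash to its predecessor."""
        if chain[-1]["hash"] != sha256(old_obj):
            raise ValueError("integrity violation before migration")
        chain.append({
            "event": "migration",
            "hash": sha256(new_obj),
            "prev": chain[-1]["hash"],
            "time": time.time(),
        })
        return chain

A later audit can then replay the chain: each recorded hash must match its object version, and each "prev" entry must match the preceding hash, so unauthorized alterations become detectable even though authorized migrations change the bitstream.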
A continuously growing amount of today's information not only exists in digital form but was actually born-digital.
This information needs to be preserved because it is part of our cultural and scientific heritage or because of legal
requirements. As much of this information is born-digital, it has no analog origin and cannot be preserved by
traditional means without losing its original representation. Thus digital long-term preservation becomes ever more
important and is tackled by several international and national projects like the US National Digital Information
Infrastructure and Preservation Program, the German NESTOR project and the EU FP7 SHAMAN Integrated Project.
In digital long-term preservation, the integrity and authenticity of the preserved information are of great importance and a
challenging task, considering the requirement to enforce both security aspects over a long time, often assumed to be at
least 100 years. Therefore, in a previous work we showed the general feasibility of the Clark-Wilson security model
for digital long-term preservation in combination with a syntactic and semantic verification approach to tackle
these issues. In this work we conduct a more detailed investigation and show, by example, the influence of the application of
such a security model on the use cases and roles of a digital long-term preservation environment. Our goal is a scalable
security model - i.e. without fixed limitations on the number of usable operations, users and objects - mainly for preserving the integrity of
objects, but also for ensuring their authenticity.
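
For illustration, Clark-Wilson style access control can be reduced to certified triples of user, transformation procedure and data item: an operation is permitted only if a matching triple exists. The sketch below uses hypothetical role and operation names; scalability in the sense above simply means that triples for new users, operations and objects can be added without a fixed limit.

    # Certified (user, transformation procedure, data item) triples; the
    # names are hypothetical and only serve as an illustration.
    ALLOWED_TRIPLES = {
        ("archivist", "migrate_format", "image_collection"),
        ("auditor", "verify_integrity", "image_collection"),
    }

    def permitted(user: str, tp: str, cdi: str) -> bool:
        """Clark-Wilson style check: only certified triples may run."""
        return (user, tp, cdi) in ALLOWED_TRIPLES

    def execute(user: str, tp: str, cdi: str, action):
        if not permitted(user, tp, cdi):
            raise PermissionError(f"{user} may not apply {tp} to {cdi}")
        return action()  # the certified transformation procedure itself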
Annotation watermarking is an application of watermarking where information about a cover, or a part thereof, is
embedded in the cover itself to link both directly together. In earlier work we introduced Nested Object Annotation
Watermarking as a special case in which the semantic, shape and hierarchical relations between the depicted nested objects
are embedded in the area of each object only. As these regions can be located anywhere and may have any shape,
there is very limited a priori knowledge for synchronization, which results in higher complexity and ultimately in
higher error-proneness. In general, an exhaustive search strategy for the proper blocks to reconstruct the shape suffers
from the intrinsic combinatorial explosion of this process. Therefore, in earlier work we first focused on rectangular
embedding schemes with a block luminance algorithm and a steganographic WetPaperCode algorithm, followed by a rectangular and
finally a polygonal Dual-Domain DFT algorithm.
In this paper we review and compare these algorithms in terms of their transparency, capacity, shape fidelity and
robustness against the selected aspects of JPEG compression and cropping. For the DFT algorithm we also show the
influence of several parameters, present our idea for a method to reduce the combinatorial explosion by collating
paths in the search tree, and show that our new synchronization approach surpasses our former rectangular method in
terms of correct retrievals, despite its higher complexity.
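
For readers unfamiliar with block-luminance embedding, the following sketch shows a generic variant: one payload bit per block, encoded by quantizing the block's mean luminance onto one of two interleaved lattices. It illustrates the principle only and is not the exact algorithm compared above.

    import numpy as np

    def embed_bits(img, bits, block=8, step=8.0):
        """Embed one bit per block by quantizing the block's mean luminance
        onto one of two interleaved lattices (a simple QIM-style scheme)."""
        out = img.astype(np.float64)
        k = 0
        for i in range(0, out.shape[0] - block + 1, block):
            for j in range(0, out.shape[1] - block + 1, block):
                if k >= len(bits):
                    break
                blk = out[i:i + block, j:j + block]
                m = blk.mean()
                if bits[k]:
                    target = (np.floor(m / step) + 0.5) * step  # offset lattice
                else:
                    target = np.round(m / step) * step          # base lattice
                blk += target - m  # shift all pixels by the same amount
                k += 1
        return np.clip(np.rint(out), 0, 255).astype(np.uint8)

    def extract_bits(img, n, block=8, step=8.0):
        """Recover n bits by checking which lattice each block mean is on."""
        f = img.astype(np.float64)
        bits = []
        for i in range(0, f.shape[0] - block + 1, block):
            for j in range(0, f.shape[1] - block + 1, block):
                if len(bits) >= n:
                    return bits
                r = (f[i:i + block, j:j + block].mean() % step) / step
                bits.append(1 if 0.25 <= r < 0.75 else 0)
        return bits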
Annotation watermarking (also called caption or illustration watermarking) is a specific application of image watermarking in which supplementary information is embedded directly in the medium, linking it to the media content so that it does not get separated from the medium by non-malicious processing steps like image cropping or lossless compression. Nested object annotation watermarking (NOAWM) was recently introduced as a specialization of annotation watermarking for embedding hierarchical object relations in photographic images. In earlier work, several techniques for NOAWM have been suggested; they exhibit some domain-specific problems with respect to transparency (i.e. the preciseness of annotation regions) and robustness (i.e. synchronization problems due to high-density, multiple watermarking), which are addressed in this paper. The first contribution of this paper is therefore a theoretical framework that characterizes the requirements and properties of previous art and suggests a classification of known NOAWM schemes. The second aspect is the study of one specific transparency aspect, the preciseness of the spatial annotations preserved by NOAWM schemes, based on a new area-based quality measurement. Finally, the synchronization problems reported in earlier works are addressed. One possible solution is to use content-specific features of the image to support synchronization. We discuss various theoretical approaches based on, for example, visual hashes and image contouring, and present experimental results.
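
As an illustration of a content-specific synchronization feature, the following sketch computes a simple visual hash (an average hash) whose Hamming distance is tolerant to mild processing. The concrete hash functions considered in the paper are not reproduced here; this generic variant using Pillow is our own assumption.

    from PIL import Image

    def average_hash(path, hash_size=8):
        """Reduce the image to hash_size x hash_size grey values and emit
        one bit per pixel: brighter than the mean or not."""
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        return sum(1 << i for i, p in enumerate(pixels) if p > avg)

    def hamming(h1: int, h2: int) -> int:
        """Number of differing bits; a small distance indicates that two
        image regions likely show the same content."""
        return bin(h1 ^ h2).count("1")

Matching such hashes between the marked regions and the received image could re-anchor the decoder without relying on fixed block positions, which is the kind of synchronization support discussed above.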