A problem that has concerned forensic scientists for many years is the need for accurate, reliable,
and objective methods for performing fracture matching examinations. The aim of these fracture matching
methods is to determine if two broken object halves can be matched together, e.g., when one half is recovered
at a crime scene, while the other half is found in the possession of a suspect. In this paper we discuss the use
of a commercial white-light profilometer system for obtaining 2D/3D image surface scans of multiple fractured
objects. More specifically, we explain the use of this system for digitizing the fracture surface of multiple facing
halves of several snap-off blade knives. Next, we discuss the realization and evaluation of several image processing
methods for matching the obtained image scans corresponding to each of the broken-off blade elements
used in our experiments. The algorithms that were tested and evaluated include: global template matching
based on image correlation and multiple template matching based on local image correlation, using so-called
"vote-map" computation. Although many avenues for further research still remain possible, we show that the
second method yields very good results for allowing automated searching and matching of the imaged fracture
surfaces for each of the examined blade elements.
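The vote-map idea can be sketched in a few lines: many small patches of one scan each vote for the displacement at which they correlate best inside the other scan, and a genuine fracture pair produces one dominant displacement peak. This is a minimal illustration only, not the system evaluated in the paper; the function name, patch size, and search strategy are our own assumptions.

```python
import numpy as np

def vote_map_match(scan_a, scan_b, patch=8, step=8):
    """Multiple template matching with a vote map: every local patch of
    scan_a votes for the displacement where it correlates best inside
    scan_b.  Matching fracture surfaces concentrate the votes on a
    single displacement; non-matching pairs scatter them."""
    ha, wa = scan_a.shape
    hb, wb = scan_b.shape
    votes = {}
    for y in range(0, ha - patch + 1, step):
        for x in range(0, wa - patch + 1, step):
            t = scan_a[y:y + patch, x:x + patch]
            t = t - t.mean()                       # zero-mean template
            best_score, best_disp = -np.inf, None
            for v in range(0, hb - patch + 1, step):
                for u in range(0, wb - patch + 1, step):
                    w = scan_b[v:v + patch, u:u + patch]
                    score = np.sum(t * (w - w.mean()))
                    if score > best_score:
                        best_score, best_disp = score, (v - y, u - x)
            votes[best_disp] = votes.get(best_disp, 0) + 1
    peak = max(votes, key=votes.get)
    # return the winning displacement and the fraction of votes it got
    return peak, votes[peak] / sum(votes.values())
```

When one scan is a shifted crop of the other, every patch votes for the same displacement and the vote fraction approaches 1.0, which is the behaviour a matching pair of fracture surfaces should approximate.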
Proc. SPIE 6696, Applications of Digital Image Processing XXX
KEYWORDS: Image processing algorithms and systems, Semantic video, Detection and tracking algorithms, Image segmentation, Video, Data processing, Image filtering, Video processing, Motion estimation, Video coding
Object segmentation is considered an important step in video analysis and has a wide range of practical
applications. In this paper we propose a novel video segmentation method, based on a combination of watershed
segmentation and mean-shift clustering. The proposed method segments video by clustering spatio-temporal data
in a six-dimensional feature space, where the features are spatio-temporal coordinates and spectral attributes.
The main novelty is an efficient data aggregation method employing watershed segmentation and local feature
averaging. The experimental results show that the proposed algorithm significantly reduces the processing time
of the mean-shift algorithm and results in superior video segmentation, in which video objects are well defined and tracked over time.
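The clustering half of such a pipeline can be sketched as follows. This is a generic flat-kernel mean shift over 6-D feature vectors (spatio-temporal coordinates plus three spectral attributes), not the paper's implementation; in the paper's scheme each input vector would be a watershed-region average rather than a raw pixel, which is what cuts the running time.

```python
import numpy as np

def mean_shift(points, bandwidth, iters=30):
    """Flat-kernel mean shift over 6-D feature vectors (x, y, t, plus
    three spectral attributes).  Each mode is repeatedly moved to the
    mean of the original points inside its bandwidth ball; points whose
    modes converge to the same location form one segment."""
    modes = points.astype(float).copy()
    for _ in range(iters):
        for i in range(len(modes)):
            near = points[np.linalg.norm(points - modes[i], axis=1) < bandwidth]
            if len(near):
                modes[i] = near.mean(axis=0)
    # merge modes that converged to (almost) the same location
    labels = np.full(len(points), -1, dtype=int)
    centers = []
    for i, m in enumerate(modes):
        for j, c in enumerate(centers):
            if np.linalg.norm(m - c) < bandwidth / 2:
                labels[i] = j
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return labels, np.array(centers)
```

Feeding it region-averaged feature vectors instead of per-pixel vectors reduces the number of points from millions to thousands while the modes, and hence the segmentation, stay essentially the same.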
Until recently, the forensic or investigative reconstruction of
shredded documents has always been dismissed as an important but
unsolvable problem. Manual reassembly of the physical remnants can always be considered, but for large numbers of shreds this can quickly become an intractable task that requires vast amounts of time and/or personnel. In this paper we propose and discuss several image processing techniques that can be used to enable the reconstruction of strip-shredded documents stored within a database of digital images. The technical content of this paper mainly revolves around the use of feature-based matching and grouping methods for classifying the initial database of shreds, and the subsequent procedure for computing more accurate pairing results for the obtained classes of shreds. Additionally, we discuss the actual reassembly of the different shreds on top of a common image canvas.
We illustrate our algorithms with example matching and reconstruction
results obtained for a real shred database containing various types of shredded document pages. Finally, we briefly discuss the impact of our findings on secure document management strategies and the possibilities for applying the proposed techniques within the context of forensic investigation.
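A deliberately simplified stand-in for this kind of edge-based pairing can be written in a few lines: represent each vertical strip by its outermost pixel columns, score candidate neighbours by how smoothly those columns continue, and chain the best matches greedily. The cost function and ordering strategy below are our own assumptions, not the paper's method.

```python
import numpy as np

def order_strips(strips):
    """Greedy re-ordering of vertical strip shreds.  The cost of placing
    strip j directly to the right of strip i is the sum of squared
    differences between i's rightmost pixel column and j's leftmost
    pixel column; a low cost means the content continues smoothly."""
    n = len(strips)
    cost = np.full((n, n), np.inf)
    for i in range(n):
        for j in range(n):
            if i != j:
                cost[i, j] = np.sum((strips[i][:, -1] - strips[j][:, 0]) ** 2)
    # the leftmost strip is the one that is the worst successor of all others
    order = [int(np.argmax(cost.min(axis=0)))]
    used = {order[0]}
    while len(order) < n:
        cand = [(cost[order[-1], j], j) for j in range(n) if j not in used]
        _, nxt = min(cand)
        order.append(nxt)
        used.add(nxt)
    return order
```

On a page with smoothly varying content this greedy chain recovers the original strip order exactly; real shreds need richer features and a more robust pairing procedure, which is what the classification-then-pairing approach in the paper addresses.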
Traditional watershed and marker-based image segmentation algorithms are very sensitive to noise. The main reason for this is that these
segmentation algorithms are locally dependent on some type of edge indicator input image that is traditionally computed on a pixel-by-pixel basis. Additionally, as a result of raw watershed segmentation, the original image can be seriously oversegmented, and it may be difficult to reduce the oversegmentation and the impact of noise without also inducing several undesired region merges. This last problem is a typical result of local "edge gaps" that may appear along the topographic watershed mountain rims. Through these gaps the marker or watershed labels can easily leak into neighboring segments. We propose a novel pair of algorithms that use "thick fluid" label propagation to address these problems. The thick fluid technique considers information from multiple adjacent pixels along the topographic watershed mountain rims that separate the different objects in an initial pre-segmented image.
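The leakage problem, and the thick-fluid remedy, can be illustrated with a small sketch. The propagation rule below is our own simplification of the idea: a label may enter a pixel only when the mean edge strength over a neighbourhood around it is low, so a one-pixel gap in an otherwise strong rim no longer lets the label through. All names and thresholds are illustrative.

```python
import numpy as np
from collections import deque

def flood(edge, seeds, thresh, thick=0):
    """Propagate integer labels from seed points over an edge-strength
    map.  With thick == 0 a pixel is enterable whenever its own edge
    value is below thresh (a "thin fluid" that slips through one-pixel
    gaps).  With thick > 0 the mean edge strength over the pixel's
    (2*thick+1)^2 neighbourhood must be below thresh, so the label
    behaves like a thick fluid and cannot pass narrow edge gaps."""
    h, w = edge.shape
    labels = np.zeros((h, w), dtype=int)
    q = deque()
    for (y, x), lab in seeds.items():
        labels[y, x] = lab
        q.append((y, x))

    def passable(y, x):
        y0, y1 = max(0, y - thick), min(h, y + thick + 1)
        x0, x1 = max(0, x - thick), min(w, x + thick + 1)
        return edge[y0:y1, x0:x1].mean() < thresh

    while q:                                   # breadth-first propagation
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0 \
                    and passable(ny, nx):
                labels[ny, nx] = labels[y, x]
                q.append((ny, nx))
    return labels
```

With a vertical edge wall that has a single-pixel gap, the thin fluid leaks a seed label into the region on the far side, whereas the thick variant stops at the gap.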
The normalized cut algorithm is a graph partitioning algorithm that has previously been used successfully for image segmentation. It was originally applied to pixels, with each pixel in the image treated as a node in the graph. In this paper we investigate the feasibility of applying the normalized cut algorithm to micro segments, with each micro segment treated as a node in the graph. This greatly reduces the computational demand of the normalized cut algorithm, owing to the reduced number of nodes in the graph. The translation to micro segments is founded on the region adjacency graph, and a floating-point-based rainfalling watershed algorithm creates the initial micro segmentation. We first explain the rainfalling watershed algorithm, then review the original normalized cut algorithm for image segmentation and describe the changes needed to apply it to micro segments. We investigate the noise robustness of the complete segmentation algorithm on an artificial image and show the results obtained on photographic images. We also illustrate the reduction in computational demand by comparing running times.
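The two-way cut at the heart of the method can be sketched on a tiny affinity matrix with one node per micro segment. The reduction of the generalized eigenproblem (D - W)x = λDx to the symmetric normalized Laplacian is the standard spectral shortcut; the concrete code is our illustration, not the paper's implementation.

```python
import numpy as np

def normalized_cut(W):
    """Two-way normalized cut of a weighted graph given its symmetric
    affinity matrix W (one node per micro segment, weights taken from
    the region adjacency graph).  The generalized eigenproblem
    (D - W) x = lambda * D * x is solved via the symmetric normalized
    Laplacian; the sign of the second-smallest eigenvector bipartitions
    the graph."""
    d = W.sum(axis=1)
    d_isqrt = np.diag(1.0 / np.sqrt(d))
    L = d_isqrt @ (np.diag(d) - W) @ d_isqrt   # symmetric, same spectrum
    _, vecs = np.linalg.eigh(L)                 # eigenvalues ascending
    fiedler = d_isqrt @ vecs[:, 1]              # map back: x = D^{-1/2} z
    return fiedler < 0                          # boolean bipartition
```

Because the matrix has one row per micro segment rather than one row per pixel, the eigen-decomposition operates on a graph that is orders of magnitude smaller, which is exactly where the running-time reduction comes from.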
This paper investigates the use of computer vision techniques to aid in the semi-automatic reconstruction of torn or ripped-up documents.
First, we discuss a procedure for obtaining a digital database of a given set of paper fragments using a flatbed image scanner, a brightly coloured scanner background, and a region growing algorithm.
The contour of each segmented piece of paper is then traced using a chain-code algorithm, and the contours are annotated by calculating a set of feature vectors. Next, the contours of the fragments are matched against each other using the annotated feature information and a string matching algorithm. Finally, the matching results are used to reposition the paper fragments so that a jigsaw
puzzle reconstruction of the document can be obtained. For each of the three major components, i.e., segmentation, matching, and global document reconstruction, we briefly discuss a set of prototype GUI
tools for guiding the process and presenting the obtained results. We discuss the performance and reconstruction results that can be obtained, and show that the proposed framework can offer an interesting set of tools to forensic investigators.
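The contour-matching step can be illustrated with a small sketch. A tear edge traced on the facing fragment runs in the opposite direction, so its 8-connectivity chain code must be reversed and each direction replaced by its opposite before the two codes are compared; scoring by longest common substring is a simplification of the string matching used in the paper, and all names are illustrative.

```python
def reverse_complement(code):
    """A tear edge traced on the facing fragment runs in the opposite
    direction: reverse the 8-connectivity chain code and replace every
    direction d by its opposite, (d + 4) % 8."""
    return [(d + 4) % 8 for d in reversed(code)]

def longest_common_run(a, b):
    """Dynamic-programming longest common substring of two chain codes;
    the length of the run serves as the pairing score between two
    fragment contours."""
    best, end_a = 0, 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best:
                    best, end_a = cur[j], i
        prev = cur
    return best, a[end_a - best:end_a]
```

Two fragments that share a tear edge thus produce a long common run once one code is reversed and complemented, while unrelated contours only share short accidental runs.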