We present GRAPHJ, a forensic tool for handwriting analysis. The proposed tool has been designed to implement the real forensic protocol adopted by the “Reparto Investigazioni Scientifiche” of Carabinieri, Italy. GRAPHJ allows the examiner to (1) automatically detect text lines and words in the document, (2) search for a specific character and detect its occurrences in the handwritten text, (3) measure different quantities related to the detected elements (e.g., heights and widths of characters), and (4) generate a report containing measurements, statistics, and the values of all parameters used during the analysis. The generation of the report helps to improve the repeatability of the whole process. The experiments performed on a set of handwritten documents show that GRAPHJ allows one to extract quantitative measures comparable to those acquired manually by an expert examiner. We also report a study on the use of the relative position of the superscript dot of the “i” characters as a parameter to infer the identity of the writer. The study has been performed using GRAPHJ and illustrates its value as a forensic tool.
When a picture is shot, all the information about the distance between the objects and the camera is lost. Depth estimation from a single image is a notable issue in computer vision. In this work we present a hardware and software framework to accomplish the task of 3D measurement through structured light. This technique makes it possible to estimate the depth of the objects by projecting specific light patterns onto the scene to be measured. The potential of structured light is well known in both scientific and industrial contexts. Our framework uses a picoprojector module provided by STMicroelectronics, driven by the designed software, which projects time-multiplexed Gray code light patterns. The Gray code is an alternative way to represent binary numbers which ensures that the Hamming distance between two consecutive numbers is always one. Because of this property, this binary coding gives better results for the depth estimation task. Many patterns are projected at different time instants, yielding a dense coding for each pixel. This information is then used to compute the depth of each point in the image. To achieve better results, we also integrate the depth estimation with the inverted Gray code patterns, to compensate for projector-camera synchronization problems as well as noise in the scene. Even though our framework is designed for laser picoprojectors, it can be used with conventional image projectors, and we present the results for this case too.
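The Gray code property the abstract relies on (consecutive codewords differ in exactly one bit) can be illustrated with a short sketch; the functions below are illustrative and not part of the framework itself.

```python
def binary_to_gray(n: int) -> int:
    """Convert a binary number to its reflected Gray code."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the Gray coding by cascading XORs from the top bit down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Consecutive Gray codewords have Hamming distance one, so a one-pixel
# decoding error shifts the recovered stripe index by at most one.
for i in range(255):
    diff = binary_to_gray(i) ^ binary_to_gray(i + 1)
    assert bin(diff).count("1") == 1
```

In a time-multiplexed setup, each projected pattern contributes one bit of the per-pixel codeword, and the decoded Gray value identifies the projector stripe used for triangulation.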
In this paper we present a technique to classify five common classes of shapes acquired with a capacitive touch display: finger, ear, cheek, hand hold, and half ear-half cheek. The need for algorithms able to discriminate among these shapes comes from the growing diffusion of touch-screen-based consumer devices (e.g., smartphones, tablets, etc.). In this context, the detection and recognition of fingers are fundamental tasks in many touch-based user applications (e.g., mobile games). Shape recognition algorithms are also extremely useful for identifying accidental touches in order to avoid involuntary activation of device functionalities (e.g., accidental calls). Our solution makes use of simple descriptors designed to capture discriminative information about the considered classes of shapes. The recognition is performed through a decision-tree-based approach whose parameters are learned on a set of labeled samples. Experimental results demonstrate that the proposed solution achieves good recognition accuracy.
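A minimal sketch of the general idea of simple descriptors feeding a decision tree. The descriptors (blob area and bounding-box elongation), thresholds, and tree structure below are hypothetical illustrations, not those of the paper, where the tree parameters are learned from labeled samples.

```python
def descriptors(points):
    """Summarize a touch blob (list of (x, y) cells) by area and elongation."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    w = max(xs) - min(xs) + 1
    h = max(ys) - min(ys) + 1
    area = len(points)
    elongation = max(w, h) / min(w, h)  # 1.0 for a square blob
    return area, elongation

def classify(points, small=30, large=200, elongated=2.5):
    """Hand-written decision tree over the two descriptors (hypothetical thresholds)."""
    area, elong = descriptors(points)
    if area < small:
        return "finger"
    if area > large:
        return "cheek" if elong < elongated else "hand hold"
    return "ear" if elong >= elongated else "half ear-half cheek"
```

For example, a tiny compact blob is classified as a finger, while a large round blob (as produced by a cheek against the screen during a call) falls into the cheek branch.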
Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard camera sensors. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we consider different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been performed to compare the performance of the involved mobile platforms: Nokia N900, LG Optimus One, and Samsung Galaxy SII.
Holistic representations of natural scenes are an effective and powerful source of information for semantic classification and analysis of arbitrary images. Recently, the frequency domain has been successfully exploited to
holistically encode the content of natural scenes in order to obtain a robust representation for scene classification.
Despite hardware and software advances, consumer single-sensor imaging device technology is still quite far from being able to recognize scenes and/or to exploit the visual content during (or after) acquisition time. In this paper we consider the properties of scenes with respect to their naturalness. The proposed method
exploits a holistic representation of the scene obtained directly in the DCT domain and fully compatible with
the JPEG format. Experimental results confirm the effectiveness of the proposed method.
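Since the representation is built directly in the DCT domain, the basic ingredient is the 8x8 DCT-II used by JPEG, whose coefficients are available without fully decoding the image. The sketch below computes it from its definition; it illustrates only the transform, not the paper's descriptor.

```python
import math

def dct2_8x8(block):
    """2D DCT-II of an 8x8 block with orthonormal scaling, as in JPEG."""
    N = 8
    def c(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

# Sanity check: a constant block has DC coefficient 8 * value and no AC energy.
coeffs = dct2_8x8([[10.0] * 8 for _ in range(8)])
```

Pooling statistics of such low- and mid-frequency coefficients over the image gives a holistic frequency-domain signature of the scene, which is what makes the approach fully compatible with the JPEG format.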
We propose novel techniques for microarray image analysis. In particular, we describe an overall pipeline able to solve the most common problems of microarray image analysis. We propose the microarray image rotation algorithm (MIRA) and the statistical gridding pipeline (SGRIP) as two advanced modules devoted to restoring the original microarray grid orientation and to detecting the correct geometrical information about each spot of the input microarray, respectively. Both solutions work by making use of statistical observations, obtaining adaptive and reliable information about each spot property. They improve the performance of the microarray image segmentation pipeline (MISP) we recently developed. The MIRA, MISP, and SGRIP modules have been developed as plug-ins for an advanced framework for microarray image analysis. A new quality measure able to effectively evaluate the adaptive segmentation with respect to the fixed (i.e., nonadaptive) circle segmentation of each spot is proposed. Experiments confirm the effectiveness of the proposed techniques in terms of visual and numerical data.
The rapid increase of technological innovations in the mobile phone industry is pushing the research community to develop new and advanced systems to optimize the services offered by mobile phone operators (telcos), to maximize their effectiveness and improve their business. Data mining algorithms can run over data produced by mobile phone usage (e.g., image, video, text, and log files) to discover users' preferences and predict the most likely (to be purchased) offer
for each individual customer. One of the main challenges is the reduction of the learning time and cost of these
automatic tasks. In this paper we discuss an experiment where a commercial offer is composed of a small picture
augmented with a short text describing the offer itself. Each customer's purchase is properly logged with all relevant
information. Upon arrival of new items we need to learn who the best customers (prospects) for each item are, that is,
the ones most likely to be interested in purchasing that specific item. Such learning activity is time consuming and, in
our specific case, is not applicable given the large number of new items arriving every day. Basically, given the current
customer base we are not able to learn on all new items. Thus, we need somehow to select among those new items to
identify the best candidates. We do so by using a joint analysis between visual features and text to estimate how good
each new item could be, that is, whether or not it is worth learning on it. Preliminary results show the effectiveness of the
proposed approach to improve classical data mining techniques.
Microarrays are a new class of biotechnologies that help biologists extract new knowledge from biological experiments. Image analysis is devoted to extracting, processing, and visualizing image information. For this reason it has also found application in microarrays, where it is a crucial step of the technology (e.g., segmentation). In this paper we describe MISP (Microarray Image Segmentation Pipeline), a new segmentation pipeline for microarray image analysis. The pipeline uses a recent segmentation algorithm based on statistical analysis coupled with the K-Means algorithm. The spot masks produced by MISP are used to determine spot information and quality measures. A software prototype system has been developed; it includes visualization, segmentation, and information and quality measure extraction. Experiments show the effectiveness of the proposed pipeline both in terms of visual accuracy and measured quality values. Comparisons with existing solutions (e.g., Scanalyze) confirm the improvement with respect to previously published works.
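To give a flavor of the K-Means step, the sketch below runs a two-cluster K-Means on raw pixel intensities, the kind of operation used to separate a spot (foreground) from the slide background when building a spot mask. This is a minimal illustration, not the MISP implementation, which couples K-Means with statistical analysis.

```python
def kmeans_1d(values, iters=50):
    """Two-cluster 1D K-Means; returns (low_center, high_center, labels)."""
    lo, hi = min(values), max(values)  # initialize centers at the extremes
    labels = [0] * len(values)
    for _ in range(iters):
        # Assign each pixel to the nearest center (0 = background, 1 = spot).
        labels = [0 if abs(v - lo) <= abs(v - hi) else 1 for v in values]
        lo_vals = [v for v, l in zip(values, labels) if l == 0]
        hi_vals = [v for v, l in zip(values, labels) if l == 1]
        # Recompute centers as the cluster means.
        if lo_vals:
            lo = sum(lo_vals) / len(lo_vals)
        if hi_vals:
            hi = sum(hi_vals) / len(hi_vals)
    return lo, hi, labels

# Dim background pixels near 10, bright spot pixels near 200:
pixels = [8, 12, 9, 11, 198, 205, 201, 199]
lo, hi, mask = kmeans_1d(pixels)
```

The binary `mask` plays the role of a spot mask: pixels labeled 1 belong to the spot and are the ones over which spot intensity and quality measures would be computed.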
In the last decade, consumer imaging devices such as camcorders, digital cameras, smartphones, and tablets have become widespread. The increase in their computational performance, combined with higher storage capacity, has made it possible to design and implement advanced imaging systems that can automatically process visual data with the purpose of understanding the content of the observed scenes.
In the coming years, we will be surrounded by wearable visual devices acquiring, streaming, and logging video of our daily life. This new and exciting imaging domain, in which the scene is observed from a first-person point of view, poses new challenges to the research community and offers the opportunity to build new applications. Many results in image processing and computer vision related to motion analysis, tracking, scene and object recognition, and video summarization have to be re-defined and re-designed by considering the emerging wearable imaging domain.
In the first part of this course we will review the main algorithms involved in the single-sensor imaging device pipeline, also describing some advanced applications. In the second part of the course we will give an overview of recent trends in imaging devices, considering the wearable domain. Challenges and applications will be discussed with reference to the state-of-the-art literature.