This paper introduces a homogeneity assessment method for printed versions of uniform color images. This attribute has been specifically selected as one of the relevant attributes of printing quality. The method relies on image processing algorithms applied to a scanned image of the printed surface, in particular the computation of gray-level co-occurrence matrices and of an objective homogeneity attribute inspired by Haralick's parameters. The viewing distance is also taken into account when computing the homogeneity index. Resizing and filtering of the scanned image are performed in order to keep only the level of detail visible to a standard human observer at short and long distances. The combination of the homogeneity scores obtained on both the high- and low-resolution images provides a homogeneity index, which can be computed for any printed version of a uniform digital image. We tested the method on several hardcopies of the same image and compared the scores to the empirical evaluations carried out by non-expert observers who were asked to sort the samples and place them on a metric scale. Our experiments show good agreement between the observers' sorting and the score computed by our algorithm.
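The abstract does not spell out the computation; as an illustration of the kind of measure involved, here is a minimal sketch of a gray-level co-occurrence matrix and a Haralick-style homogeneity score (the quantization level, pixel offset, and normalization are illustrative assumptions, not the paper's exact settings):

```python
import numpy as np

def glcm(img, levels=8, offset=(0, 1)):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    lo, hi = img.min(), img.max()
    q = np.zeros_like(img, dtype=int) if hi == lo else \
        ((img - lo) / (hi - lo) * (levels - 1)).astype(int)  # quantize
    dy, dx = offset
    a = q[:q.shape[0] - dy, :q.shape[1] - dx].ravel()  # reference pixels
    b = q[dy:, dx:].ravel()                            # offset neighbors
    m = np.zeros((levels, levels))
    np.add.at(m, (a, b), 1.0)                          # count co-occurrences
    return m / m.sum()

def homogeneity(p):
    """Haralick-style homogeneity (inverse difference moment) of a GLCM."""
    i, j = np.indices(p.shape)
    return float((p / (1.0 + np.abs(i - j))).sum())
```

A perfectly uniform patch scores 1, while visible graininess moves co-occurrence mass off the diagonal and lowers the score.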
Thanks to the localized surface plasmon resonance of silver nanoparticles, mesoporous titania films loaded with
silver salts exhibit a photochromic behavior that can be used to produce updatable laser microinscriptions. Under UV
illumination, the silver salts are reduced into silver nanoparticles and the illuminated areas become grey-brown. This
coloration can be completely erased by oxidizing the silver nanoparticles with a polychromatic or monochromatic visible
light whose spectrum lies in the resonance band of the silver nanoparticles. The paper investigates the use of such
photochromic Ag/TiO<sub>2</sub> films for creating updatable random textures. Random textures are produced on coated glass
samples, initially homogeneous, by exposing them to speckle patterns resulting from the scattering of a UV laser beam
from an optically rough surface. The stability of such textures under homogeneous UV post-exposures is investigated as
a function of the speckle exposure time. Under optimized exposure conditions, the textures remain sufficiently stable
over a long period, and the differences between textures are discriminative enough to use the texturing process for goods
authentication. This is demonstrated by calculating the correlation coefficient of thousands of pairs of texture images.
The numerical processing of the images has the advantage of being robust to changes in sample repositioning between
different image acquisitions. The rewritability of the samples is characterized by comparing different textures
successively erased and written at the same place on multiple samples.
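The authentication decision described above rests on comparing pairs of texture images. A minimal sketch of the Pearson correlation coefficient between two images (the paper's exact preprocessing and decision thresholds are not specified in this abstract):

```python
import numpy as np

def correlation_coefficient(img1, img2):
    """Zero-mean normalized (Pearson) correlation between two texture images."""
    a = img1 - img1.mean()
    b = img2 - img2.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

Re-acquisitions of the same speckle-written texture give coefficients close to 1, while two independent textures are nearly uncorrelated, which makes the two cases easy to separate with a threshold.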
In this work, we propose to extend the secure information display introduced by Yamamoto et al. to full-color
images. Yamamoto's technique makes use of a black-and-transparent mask as the decoding shadow image of a visual
cryptography scheme sharing 3-bit multi-color messages. By combining a perspective setup with a color visual
cryptography (VC) scheme that does not use any mask, we can securely display color images. A suitable color VC
scheme is used that can be printed on a transparency film. When printed, the colors act as filters and allow a wider
color gamut for the message, which is not limited to saturated colors as in Yamamoto's scheme, where the limitation
comes from the black-and-transparent mask used as the decoding shadow image. In our implementation of the
two-out-of-two visual cryptography scheme, which shares a secret message into two color shadow images, the first one
is projected onto a glass diffuser and the second one is printed on a transparency. A registration method is used to
overcome the difficulty of shadow image alignment. As the two shadow images are superposed with an air layer between
them, the message disappears when the angular position is not close to the ideal one. Examples with binary colored
messages and with color images are provided to demonstrate the extension. By moving the detector (or the eyes)
angularly around the correct position, perspective effects can be perceived.
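The color VC scheme used in the paper is more elaborate, but the underlying sharing principle can be illustrated with the classic binary (2,2) subpixel scheme, sketched here (the 2x2 patterns and OR superposition model are the textbook construction, not the paper's color variant):

```python
import random

# The two complementary 2x2 subpixel patterns of the classic (2,2) scheme.
PATTERNS = [
    [[1, 0], [0, 1]],
    [[0, 1], [1, 0]],
]

def share_pixel(secret_bit, rng=random):
    """Split one secret pixel (0 = white, 1 = black) into two subpixel blocks."""
    k = rng.randrange(2)
    share1 = PATTERNS[k]
    # Same pattern for a white pixel, complementary pattern for a black one.
    share2 = PATTERNS[k] if secret_bit == 0 else PATTERNS[1 - k]
    return share1, share2

def overlay(s1, s2):
    """Stacking transparencies acts as a pixel-wise OR (ink is opaque)."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]
```

Each share taken alone always contains exactly two black subpixels, so it reveals nothing about the secret; only the superposition distinguishes half-black (white pixel) from fully black (black pixel) blocks.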
Mobile readers used for optical identification of manufactured products can be tampered with in different ways: with a
hardware Trojan or by powering up the device with fake configuration data. How can a human verifier authenticate the
reader before handling it for goods verification?
In this paper, two cryptographic protocols are proposed to achieve the verification of a RAM-based system through a
trusted auxiliary machine. Such a system is assumed to be composed of a RAM memory and a secure block (in practice,
an FPGA or a configurable microcontroller). The system is connected to an input/output interface and contains a Non-
Volatile Memory where the configuration data are stored. Here, all the blocks except the secure block are exposed to attacks.
At the registration stage of the first protocol, the MAC of both the secret and the configuration data, denoted <i>M</i><sub>0</sub>, is
computed by the mobile device without saving it, then transmitted to the user in a secure environment. At the verification
stage, the reader, challenged with nonces, sends MACs/HMACs of both the nonces and the MAC <i>M</i><sub>0</sub> (to be recomputed),
keyed with the secret. These responses are verified by the user through a trusted auxiliary MAC computation unit. Here the
verifier does not need to track a (long) list of challenge/response pairs. This makes the protocol tractable for a human
verifier, whose participation in the authentication process is increased. In return, the secret has to be shared with the
auxiliary unit. This constraint is relaxed in a second protocol directly derived from Fiat-Shamir's scheme.
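The first protocol can be sketched as follows (the hash choice, message concatenation, and key handling are illustrative assumptions; the paper's exact encodings may differ):

```python
import hashlib
import hmac
import os

def mac(key, data):
    """Keyed MAC, here instantiated as HMAC-SHA256 for illustration."""
    return hmac.new(key, data, hashlib.sha256).digest()

# --- Registration (secure environment) ---------------------------------
secret = os.urandom(32)      # shared between the reader and the trusted unit
config = b"reader firmware image + configuration data"
m0 = mac(secret, config)     # computed by the reader, handed to the user,
                             # and not stored on the reader itself

# --- Verification -------------------------------------------------------
def reader_response(nonce):
    """The challenged reader recomputes M0 over its current configuration."""
    return mac(secret, nonce + mac(secret, config))

def trusted_unit_check(nonce, response):
    """The auxiliary MAC unit verifies the response against the saved M0."""
    return hmac.compare_digest(response, mac(secret, nonce + m0))
```

A reader whose configuration data have been replaced recomputes a different <i>M</i><sub>0</sub> and fails the check, while fresh nonces prevent replay of old responses.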
This paper aims to provide a remote fingerprint object authentication protocol dedicated to anti-counterfeiting
applications. The corresponding security model is given. The suggested scheme is based on Elliptic Curve
Cryptography (ECC) encryption, with an added mechanism to control integrity at the verification stage. The
privacy constraint useful in many applications leads us to embed a Private Information Retrieval scheme in
the protocol. As in a previous SPIE presentation, we begin with an optical reader. We drastically lower the amount of
computation performed at this stage.
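The PIR construction used in the protocol is not detailed in this abstract; to illustrate the primitive itself, here is a minimal two-server XOR-based PIR sketch (the record size, the bit-vector query encoding, and the two-non-colluding-server trust model are illustrative assumptions):

```python
import secrets

def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_queries(n, index):
    """Client: two queries that individually reveal nothing about `index`."""
    q1 = [secrets.randbits(1) for _ in range(n)]  # uniformly random bits
    q2 = list(q1)
    q2[index] ^= 1     # the two queries differ only at the wanted index
    return q1, q2

def server_answer(db, query):
    """Server: XOR of the records selected by the query bits."""
    out = bytes(len(db[0]))
    for record, bit in zip(db, query):
        if bit:
            out = xor_bytes(out, record)
    return out
```

The client XORs the two answers: every record selected by both queries cancels out, leaving only the wanted record, while each server sees a uniformly random query.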
Digital holography (DH) is being increasingly used for its time-resolved three-dimensional (3-D) imaging capabilities.
A 3-D volume can be numerically reconstructed from a single 2-D hologram. Applications of DH range from
experimental mechanics and biology to fluid dynamics. Improvement and characterization of the 3-D reconstruction
algorithms are a current issue. Over the past decade, numerous algorithms for the analysis of holograms have
been proposed. They are mostly based on a common approach to hologram processing: digital reconstruction
based on the simulation of hologram diffraction. They suffer from artifacts intrinsic to holography: twin-image
contamination of the reconstructed images and image distortions for objects located close to the hologram borders.
The analysis of the reconstructed planes is therefore limited by these defects. In contrast to this approach, the
inverse-problems perspective does not transform the hologram but performs object detection and localization by
matching a model of the hologram. Information is thus extracted from the hologram in an optimal way, leading
to two essential results: an improvement of the axial accuracy and the capability to extend the reconstructed
field beyond the physical limit of the sensor size (out-of-field reconstruction). These improvements come at the
cost of an increased computational load compared to classical (typically non-iterative) approaches.
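The classical reconstruction approach mentioned above simulates hologram diffraction numerically. A minimal angular-spectrum propagation sketch (wavelength, pixel pitch, and distance are placeholder values; practical codes add filtering and border handling):

```python
import numpy as np

def angular_spectrum_propagate(hologram, z, wavelength, pixel_pitch):
    """Reconstruct the field at distance z from a recorded hologram by
    multiplying its spectrum with the angular-spectrum transfer function."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)   # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # evanescent waves dropped
    H = np.exp(1j * kz * z)                          # transfer function
    return np.fft.ifft2(np.fft.fft2(hologram) * H)
```

Scanning z then yields the stack of reconstructed planes whose analysis the paper discusses; the twin image appears because the recorded intensity fixes only the amplitude, not the phase, of the propagated field.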
A stack of Fresnel patterns is inserted in an image for data encoding and image-hashing synchronization,
dedicated to local authentication. The problem is then to preserve both the decoding and the perception of
the image content. In addition, the insertion must not excessively alter the perceptual image hash.
A new symbology designed around morphometrics and well adapted to object authentication is introduced.
Morphometrics are defined formally as the output of a new channel (the scan-only channel) working
in parallel with the classical print/scan channel. Two protocols based on cryptographic schemes are presented
for visual verification. A variant of the secret sharing scheme is suggested for one of them.
This paper follows a paper by Bringer et al.<sup>3</sup> to adapt a security model and protocol used for remote biometric
authentication to the case of remote morphometric object authentication. We use a different type of encryption
technique that requires smaller key sizes and has a built-in mechanism to help control the integrity of the messages
received by the server. We also describe the optical technology used to extract the morphometric templates.
In-line digital holography reconciles the applicative interest of a simple optical setup with the speed, low cost, and potential of digital reconstruction. We address the twin-image problem that arises in holography due to the lack of phase information in intensity measurements. This problem is of great importance in in-line holography, where spatial elimination of the twin image cannot be carried out as in off-axis holography. Applications of digital holography to particle fields greatly depend on its suppression to reach higher particle concentrations while keeping a sufficient signal-to-noise ratio in the reconstructed images. In this paper we describe methods to numerically improve the reconstructed images through twin-image reduction.
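The reduction methods themselves are developed in the body of the paper; as a generic illustration of this family of approaches, here is a support-constraint iteration in the spirit of phase retrieval (the propagation operator and the known object support are assumed inputs, and this is not the paper's exact algorithm):

```python
import numpy as np

def reduce_twin_image(hologram, propagate, support, n_iter=20):
    """Generic twin-image reduction sketch: alternately enforce the known
    object support in the object plane and the measured amplitude in the
    hologram plane, progressively recovering the missing phase."""
    field = hologram.astype(complex)
    for _ in range(n_iter):
        obj = propagate(field, +1)          # back-propagate to object plane
        obj = np.where(support, obj, 0.0)   # keep energy inside the support
        field = propagate(obj, -1)          # forward to the hologram plane
        # Re-impose the measured amplitude, keep the estimated phase.
        field = np.abs(hologram) * np.exp(1j * np.angle(field))
    return propagate(field, +1)
```

Here `propagate(field, direction)` is any user-supplied propagation operator (for instance an angular-spectrum kernel); as the phase estimate improves, the out-of-focus twin image, which violates the support constraint, is progressively suppressed.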