This paper presents a novel scheme for automatic and intelligent 3D digitization using robotic cells. Our procedure is generic: it is not tied to a specific scanning technology, nor does it depend on the methods used to perform the tasks associated with each elementary process. A comparison between manual and automatic scanning of complex objects shows that our digitization strategy is efficient and faster than trained experts. The 3D models of the different objects are obtained with a strongly reduced number of acquisitions while moving the ranging device efficiently.
We propose a progressive transmission approach for an image authenticated by an overlapping subimage that can later be removed to restore the original image. Unlike most removable visible watermarking approaches, the mark is not introduced directly in the two-dimensional image space; it is instead applied to an equivalent monovariate representation of the image. Specifically, the method builds on our progressive transmission scheme based on a modified Kolmogorov spline network, and therefore inherits its advantages: resilience to packet losses during transmission and support for heterogeneous display environments. The marked image can be accessed at any intermediate resolution, and a key is needed to remove the mark and fully recover the original image without loss. Moreover, the key can differ for every resolution, and the image can be globally restored in case of packet losses during transmission. Our contributions are the decomposition of a mark (an overlapping image) and an image into monovariate functions following the Kolmogorov superposition theorem, and the combination of these monovariate functions to provide a removable visible “watermarking” of images with the ability to restore the original image using a key.
The goal of this work is to develop a complete and automatic scanning system requiring minimal prior information. We aim to establish a methodology for the automation of the 3D digitization process. The paper presents a method based on the evolution of the bounding box of the object during acquisition. Data registration is improved through the modeling of the positioning system. The resulting models are analyzed and inspected to evaluate the robustness of our method. Tests with real objects have been performed, and digitization results are provided.
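The bounding-box evolution idea above lends itself to a compact sketch. The following is a toy illustration, not the paper's algorithm: the relative-growth threshold and the stopping rule are our assumptions.

```python
import numpy as np

def bbox_volume(points):
    """Volume of the axis-aligned bounding box of an Nx3 point cloud."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    return float(np.prod(hi - lo))

def should_continue(all_points, new_points, rel_growth_threshold=0.01):
    """Decide whether another acquisition is needed: if merging the new scan
    grows the bounding-box volume by more than the (assumed) threshold, the
    object's extent is still being discovered and scanning continues."""
    v_before = bbox_volume(all_points)
    merged = np.vstack([all_points, new_points])
    v_after = bbox_volume(merged)
    growth = (v_after - v_before) / v_before
    return growth > rel_growth_threshold, merged
```

In such a loop, scanning would stop once a new acquisition no longer enlarges the object's bounding box appreciably.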
We present a novel 3-D recovery method based on structured light. This method unifies depth from focus (DFF) and depth from defocus (DFD) techniques through a dynamically (de)focused projection. The image acquisition system is specifically constructed to keep the whole object sharp in all captured images, so only the projected patterns undergo defocus deformations that vary with the object's depth. When the projected patterns are out of focus, their point-spread function (PSF) is assumed to follow a Gaussian distribution. The final depth is computed by analyzing the relationship between the sets of PSFs obtained from different blurs and the variation of the object's depth. Our depth estimation can be employed as a stand-alone strategy; it avoids occlusion and correspondence issues, and it handles textureless and partially reflective surfaces. Experimental results on real objects demonstrate the effective performance of our approach, providing reliable depth estimation with competitive computation time. It uses fewer input images than DFF and, unlike DFD, it ensures that the PSF is locally unique.
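As a rough illustration of the Gaussian-PSF assumption, the blur level of an imaged pattern element can be estimated from its second moment and converted to depth through a calibration table. This is a hedged sketch: the dot-shaped pattern element, the moment-based estimator, and the interpolated lookup are our simplifications, not the paper's actual estimator.

```python
import numpy as np

def gaussian_psf(sigma, size=21):
    """Sample a 2D Gaussian point-spread function (the assumed blur model)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def estimate_sigma(blurred_dot, pixel_pitch=1.0):
    """Estimate the blur sigma from the second moment of an imaged dot,
    assuming the Gaussian PSF model holds."""
    h, w = blurred_dot.shape
    ys, xs = np.mgrid[0:h, 0:w]
    m = blurred_dot / blurred_dot.sum()
    cx, cy = (m * xs).sum(), (m * ys).sum()
    var = (m * ((xs - cx) ** 2 + (ys - cy) ** 2)).sum() / 2.0  # avg of x/y variances
    return np.sqrt(var) * pixel_pitch

def depth_from_sigma(sigma, calib_sigmas, calib_depths):
    """Look up depth from blur level via a monotonic calibration table."""
    return np.interp(sigma, calib_sigmas, calib_depths)
```

A real system would calibrate the sigma-to-depth table by imaging the projected pattern on a plane at known distances.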
In this paper, we present a novel approach for adaptive and progressive image acquisition, based on the progressive
transmission of an image decomposed into compositions and superpositions of monovariate functions. The
monovariate functions are iteratively constructed from the acquired data, to progressively reconstruct the final
image: the transmission is performed directly in the 1D space of the monovariate functions, independently of any
statistical properties of the image. Each monovariate function contains only a fraction of the pixels of the image.
Each new transmitted monovariate function adds data to the previously transmitted monovariate functions.
After each partial acquisition, by using the updated monovariate functions, the image is reconstructed with an
increased resolution. Finally, once all the monovariate functions have been transmitted, the original image is
reconstructed exactly at the maximum resolution of the sensor. This approach is characterized by its flexibility:
any number of intermediate transmissions and reconstructions is possible. Moreover, the intermediate images
can be reconstructed at any resolution, and for any number of intermediate reconstructions, the original image
will be exactly reconstructed. Finally, the quantity of data is independent of the number and resolutions of
the intermediate reconstructions. Our contributions include the application of a flexible progressive transmission scheme to provide progressive
and flexible acquisition at various resolutions. Moreover, the accuracy of the full-resolution image is preserved,
and the acquired data are encrypted and resilient to packet loss.
We present a novel approach to adaptive and progressive image transmission, based on the decomposition of an image into compositions and superpositions of monovariate functions. The monovariate functions are iteratively constructed and transmitted, one after the other, to progressively reconstruct the original image: the progressive transmission is performed directly in the 1D space of the monovariate functions and independently of any statistical properties of the image. Each monovariate function contains only a fraction of the pixels of the image. Each new transmitted monovariate function adds data to the previously transmitted monovariate functions. After each transmission step, by using the updated monovariate functions, the image is reconstructed with an increased resolution. Finally, once all the monovariate functions have been transmitted, the original image is reconstructed exactly. This approach is characterized by its flexibility and robustness to packet loss: any number of intermediate transmissions and reconstructions is possible, and in case of packet loss, the global appearance of the transmitted image is preserved. Moreover, the intermediate images can be reconstructed at any resolution, and for any number of intermediate reconstructions, the original image will be exactly reconstructed. Finally, the quantity of data to be transmitted depends only on the image size and is independent of the number of intermediate reconstructions. Our main contributions are the modification of the decomposition scheme defined by the Kolmogorov superposition theorem to enable multiresolution image reconstructions, and its application to progressive image transmission using successively increasing resolutions. We illustrate this approach on several images and evaluate the reconstruction quality, decomposition flexibility, and error resilience during transmission.
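The structure of the scheme (each transmitted function carries only a fraction of the pixels; every reception refines the reconstruction; receiving everything restores the image exactly) can be mimicked with a deliberately simple analogue. The sketch below interleaves pixel subsets instead of building actual monovariate functions, so it illustrates only the progressive property, not the Kolmogorov-based decomposition itself.

```python
import numpy as np

def split_streams(img, k=4):
    """Split an image into k interleaved pixel streams. Each stream holds
    only a fraction of the pixels, loosely mirroring how each monovariate
    function carries a fraction of the image in the paper's scheme."""
    h, w = img.shape
    streams = []
    for i in range(k):
        mask = (np.arange(h)[:, None] * w + np.arange(w)[None, :]) % k == i
        streams.append((mask, img[mask]))
    return streams

def reconstruct(shape, received):
    """Reconstruct from the streams received so far; pixels not yet received
    are filled with the mean of received pixels (a crude stand-in for the
    coarse intermediate view)."""
    out = np.full(shape, np.nan)
    for mask, vals in received:
        out[mask] = vals
    out[np.isnan(out)] = np.nanmean(out)
    return out
```

With all k streams received, the reconstruction is pixel-exact; with fewer, a full-size approximation is still available, echoing the flexibility claimed above.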
This paper presents a complete system for 3D digitization of objects, assuming no prior knowledge of their shape. The proposed methodology is applied to a digitization cell composed of a fringe-projection scanner head, a robotic arm with 6 degrees of freedom (DoF), and a turntable. A two-step approach is used to automatically guide the scanning process. The first step uses the concept of Mass Vector Chains (MVC) to perform an initial scanning. The second step directs the scanner to the remaining holes of the model. Post-processing of the data is also addressed. Tests with real objects were performed, and digitization results (acquisition time and number of views) are provided along with estimated surface coverage.
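The Mass Vector Chain idea admits a compact sketch: for a closed, consistently oriented surface, the sum of area-weighted face normals vanishes, so a non-zero residual on a partial mesh indicates where surface is still missing. The closure tolerance and the "place the scanner along the residual" rule below are our assumptions, not the paper's exact view-planning policy.

```python
import numpy as np

def mass_vector_chain(vertices, faces):
    """Sum of area-weighted face normals of a triangle mesh. For a closed,
    consistently oriented surface this sum is (numerically) zero; a non-zero
    residual reflects the unscanned part of the surface."""
    v = vertices[faces]                                        # (F, 3, 3)
    n = np.cross(v[:, 1] - v[:, 0], v[:, 2] - v[:, 0]) / 2.0   # area-weighted normals
    return n.sum(axis=0)

def next_view_direction(vertices, faces, tol=1e-12):
    """Direction (from the object) in which to place the scanner next,
    or None when the surface already looks closed."""
    mvc = mass_vector_chain(vertices, faces)
    norm = np.linalg.norm(mvc)
    if norm < tol:
        return None          # closed surface: stop scanning
    return -mvc / norm       # the missing patch's outward normals sum to -mvc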
In this paper, we propose a novel active 3D recovery method based on dynamically (de)focused light. The method combines depth from focus (DFF) and depth from defocus (DFD) techniques. An optimized illumination pattern is projected onto the object to enforce a strong dominant texture on the surface. The imaging system is specifically constructed to keep the whole object sharp in all captured images. Consequently, only the projected patterns undergo defocus deformation according to the object depth. Projected light-pattern images are acquired within certain focus ranges, as in the DFF approach, while focus measures across these images are computed for depth estimation in a DFD manner. This guarantees that at least one focused or near-focused image within the depth of field exists in the computation. Therefore, the final reconstruction is expected to be superior to the one obtained from DFD alone and also less computationally expensive than DFF.
We propose a new compression approach based on the decomposition of images into continuous monovariate functions, which provides adaptability over the quantity of information taken into account to define the monovariate functions: only a fraction of the pixels of the original image has to be contained in the network used to build the correspondence between monovariate functions. The Kolmogorov superposition theorem (KST) states that any multivariate function can be decomposed into sums and compositions of monovariate functions. The implementation of the decomposition proposed by Igelnik, modified for image processing, is combined with a wavelet decomposition: the low frequencies are represented with the highest accuracy, while the high-frequency representation benefits from the adaptive aspect of our method to achieve image compression. Our main contribution is a new compression scheme that combines the KST with a multiresolution approach. Taking advantage of the KST decomposition scheme, we use a decomposition into simplified monovariate functions to compress the high frequencies. We detail our approach and the different methods used to simplify the monovariate functions. We present the reconstruction quality as a function of the quantity of pixels contained in the monovariate functions, as well as the image reconstructions obtained with each simplification method.
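To make the wavelet side of such a scheme concrete, here is a self-contained one-level Haar decomposition in which the low-frequency band is kept exactly while the detail bands are simplified by magnitude thresholding. The thresholding is only a stand-in for the simplified monovariate functions described above, which we do not reproduce; the Haar filter is our choice for concreteness.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2D Haar transform: the low-frequency band (kept exactly)
    and three detail bands (candidates for adaptive, lossy representation)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def haar_idwt2(ll, bands):
    """Exact inverse of haar_dwt2."""
    lh, hl, hh = bands
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def compress_details(bands, keep_fraction=0.1):
    """Keep only the largest-magnitude fraction of detail coefficients and
    zero the rest -- a crude stand-in for simplified monovariate functions."""
    out = []
    for b in bands:
        flat = np.abs(b).ravel()
        k = max(1, int(keep_fraction * flat.size))
        thresh = np.partition(flat, -k)[-k]
        out.append(np.where(np.abs(b) >= thresh, b, 0.0))
    return tuple(out)
```

Reconstructing from the exact low band and the thresholded detail bands trades reconstruction quality against the number of retained coefficients, mirroring the quality-versus-pixel-count trade-off reported above.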
This paper deals with the decomposition of multivariate functions into sums and compositions of monovariate functions. The global purpose of this work is to find a suitable strategy to express complex multivariate functions using simpler functions that can be analyzed with well-known techniques, instead of developing complex N-dimensional tools. More precisely, most signal processing techniques are applied in 1D or 2D and cannot easily be extended to higher dimensions. We recall that such a decomposition exists by Kolmogorov's superposition theorem. According to this theorem, any multivariate function can be decomposed into two types of univariate functions, called inner and external functions. Inner functions are associated with each dimension and linearly combined to construct a hash function that maps every point of the multidimensional space to a value in the real interval [0, 1]. Each inner sum is the argument of one external function. The external functions map these values in [0, 1] to the value taken by the multivariate function at the corresponding point of the multidimensional space.
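For reference, the decomposition described above can be written explicitly. In Sprecher's variant of the theorem, a single inner function $\psi$ is shifted and linearly combined per dimension:

```latex
f(x_1,\dots,x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left(\sum_{p=1}^{n} \lambda_p\, \psi\bigl(x_p + q a\bigr)\right)
```

where the $\lambda_p$ are dimension weights and $a$ is a shift constant; Kolmogorov's original statement uses distinct inner functions $\psi_{q,p}$ in place of the shifted single $\psi$.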
Sprecher (Ref. 1) proved that inner functions can be used to construct space-filling curves, i.e., there exists a curve that sweeps the multidimensional space and uniquely matches corresponding values in [0, 1]. Our goal is to obtain both a new decomposition algorithm for multivariate functions (at least bi-dimensional) and adaptive space-filling curves. Two strategies can be applied: either we construct fixed inner functions to obtain space-filling curves, which allows us to construct external functions such that their sums and compositions exactly correspond to the multivariate function; or the inner function is constructed by the algorithm and adapted to the multivariate function, providing different space-filling curves for different multivariate functions.
We present two of the most recent constructive algorithms for the monovariate functions. The first method is due to Sprecher (Refs. 2 and 3). We provide additional explanations of the existing algorithm and present several decomposition results for gray-level images. We point out the main drawback of this method: all the function parameters are fixed, so the univariate functions cannot be modified; in particular, the inner function, and therefore the space-filling curve, cannot be changed. Moreover, the number of layers depends on the dimension of the decomposed function. The second algorithm, proposed by Igelnik (Ref. 4), increases parameter flexibility, but only approximates the monovariate functions: the number of layers is variable, and a neural network optimizes the monovariate functions and the weights associated with each layer to ensure convergence to the decomposed function. We have implemented both Sprecher's and Igelnik's algorithms and present the results of the decompositions of gray-level images. Artifacts appear in the reconstructed images, which leads us to apply the algorithm to wavelet-decomposed images. We detail the reconstruction quality and the quantity of information contained in Igelnik's network.
In this paper, we are interested in the accurate acquisition and modeling of flint artefacts. Archaeologists need accurate geometry measurements to refine their understanding of the flint artefact manufacturing process. Current techniques require several operations: first, a copy of the flint artefact is reproduced; the copy is then sliced; a picture is taken of each slice; finally, geometric information is manually determined from the pictures. Such a technique is very time consuming, and the processing applied to the original, as well as to the reproduced object, induces several measurement errors (prototyping approximations, slicing, image acquisition, and measurement). By using 3D scanners, we significantly reduce the number of operations related to data acquisition and completely suppress the prototyping step to obtain an accurate 3D model. The 3D models are segmented into slices that are then analyzed. Each slice is automatically fitted with a mathematical representation. Such a representation offers several interesting properties: geometric features can be characterized (e.g., shape, curvature, sharp edges), and the shape of the original piece of stone can be extrapolated. The contributions of this paper are an acquisition technique using 3D scanners that strongly reduces human intervention, acquisition time, and measurement errors, and the representation of flint artefacts as mathematical 2D sections that enable accurate analysis.
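As one example of fitting a slice with a mathematical representation, a closed 2D contour can be approximated by a truncated Fourier series, which is smooth and therefore supports curvature analysis. This is an illustrative choice on our part; the representation actually used in the paper may differ.

```python
import numpy as np

def fit_fourier_contour(points, n_harmonics=8):
    """Fit a closed 2D contour with a truncated Fourier series.
    points: (N, 2) samples taken uniformly along the contour.
    Returns per-coordinate coefficients [mean, a1, b1, a2, b2, ...]."""
    n = len(points)
    t = np.arange(n) * 2 * np.pi / n
    coeffs = []
    for dim in range(2):
        z = points[:, dim]
        c = [z.mean()]
        for k in range(1, n_harmonics + 1):
            c.append(2 * np.mean(z * np.cos(k * t)))  # cosine coefficient a_k
            c.append(2 * np.mean(z * np.sin(k * t)))  # sine coefficient b_k
        coeffs.append(np.array(c))
    return coeffs

def eval_fourier_contour(coeffs, t):
    """Evaluate the fitted contour at parameter values t; returns (len(t), 2)."""
    out = []
    for c in coeffs:
        z = np.full_like(t, c[0])
        n_harmonics = (len(c) - 1) // 2
        for k in range(1, n_harmonics + 1):
            z += c[2 * k - 1] * np.cos(k * t) + c[2 * k] * np.sin(k * t)
        out.append(z)
    return np.stack(out, axis=-1)
```

Because the representation is analytic, curvature and sharp-edge indicators can be computed from its derivatives rather than estimated from noisy pixel data.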
Simple representation of complex 3D data sets is a fundamental problem in computer vision. From a quality-control perspective, it is crucial to use efficient and simple techniques to define a reference model for subsequent recognition or comparison tasks. In this paper, we focus on reverse engineering 3D data sets by recovering rational supershapes to build an implicit function representing mechanical parts. We extend existing techniques for superquadric recovery to supershapes, and we adapt the concepts introduced for the ratioquadrics to introduce the rational supershapes. The main advantage of rational supershapes over standard supershapes is that the radius is expressed as a rational fraction instead of sums and compositions of powers of sines and cosines, which allows simpler and faster computations during the optimization process. We present reconstruction results for complex 3D data sets represented by an implicit equation with a small number of parameters that can be used to build an error measure.
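For context, the standard (non-rational) supershape radius is given by Gielis's superformula, which is indeed built from powers of sines and cosines; the rational variant discussed above replaces it with a rational fraction, which we do not reproduce here.

```python
import numpy as np

def superformula(theta, m, n1, n2, n3, a=1.0, b=1.0):
    """Gielis superformula: radius of a standard supershape as a function of
    the polar angle theta and the shape parameters m, n1, n2, n3, a, b."""
    t1 = np.abs(np.cos(m * theta / 4.0) / a) ** n2
    t2 = np.abs(np.sin(m * theta / 4.0) / b) ** n3
    return (t1 + t2) ** (-1.0 / n1)
```

With m = 4 and n1 = n2 = n3 = 2 the formula reduces to the unit circle, a convenient sanity check; each radius evaluation involves transcendental functions and non-integer powers, which is the cost the rational formulation is designed to avoid.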
The current threats to U.S. security, both military and civilian, have led to an increased interest in the development of technologies to safeguard national facilities such as military bases, federal buildings, nuclear power plants, and national laboratories. As a result, the Imaging, Robotics, and Intelligent Systems (IRIS) Laboratory at The University of Tennessee (UT) has established a research consortium, known as SAFER (Security Automation and Future Electromotive Robotics), to develop, test, and deploy sensing and imaging systems for unmanned ground vehicles (UGV). The targeted missions for these UGV systems include -- but are not limited to -- under-vehicle threat assessment, stand-off checkpoint inspections, scout surveillance, intruder detection, obstacle-breach situations, and render-safe scenarios. This paper presents a general overview of the SAFER project. Beyond this overview, we focus on a specific problem in which we collect 3D range scans of under-vehicle carriages. These scans require appropriate segmentation and representation algorithms to facilitate the vehicle inspection process. We discuss the theory behind these algorithms and present results from applying them to actual vehicle scans.