A digital naive plane can be represented by the repetition of specific voxel sets. These pieces can be (n, m)-cubes, i.e. sets composed of n × m adjacent voxels. We can also encounter 18-connected voxel sets or, more generally, sets of p voxels. The aim of this paper is to establish a link between the parameters of a naive plane and its topology. We look for the class of naive planes generated by the same list of voxel sets. Planes are ordered using a Farey series coding, and we prove the relationship between the segmentation induced by the Farey net and the configurations of the voxel sets. This is an original contribution offering a new method for digital plane characterization and recognition.
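As a concrete illustration of the objects involved, the following sketch enumerates the voxels of a naive plane using the standard arithmetic definition 0 ≤ ax + by + cz + μ < max(|a|, |b|, |c|); the (n, m)-cube and Farey-net analysis of the paper is not reproduced here.

```python
# Sketch (not from the paper): enumerate voxels of a naive digital plane using
# the standard arithmetic definition 0 <= a*x + b*y + c*z + mu < omega,
# with omega = max(|a|, |b|, |c|) for the naive case.

def naive_plane_voxels(a, b, c, mu, size):
    omega = max(abs(a), abs(b), abs(c))
    voxels = []
    for x in range(size):
        for y in range(size):
            for z in range(size):
                if 0 <= a * x + b * y + c * z + mu < omega:
                    voxels.append((x, y, z))
    return voxels

# Example: the plane 2x + 3y + 7z = 0 digitized over a 7x7x7 block.
print(len(naive_plane_voxels(2, 3, 7, 0, 7)))
```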
In this paper we study point location in a regular 3D hexahedral grid. This is useful for applications that model wave propagation over a spatial 3D subdivision by the finite difference method. Current numerical solvers, like those employed in seismic wave propagation, can handle a billion points, so fast localization methods are necessary. We propose a new, particularly fast method based on results from discrete geometry. The method rests on a discretization of the faces of the 3D subdivision.
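For intuition, the sketch below shows only the trivial special case of an axis-aligned, uniformly spaced grid, where point location reduces to a constant-time floor division; the paper's contribution concerns the general case, handled through a discretization of the cell faces.

```python
import math

# Minimal illustration (assumption: axis-aligned, uniformly spaced grid).
# For such a grid, point location is a constant-time floor division; the paper
# addresses more general hexahedral subdivisions by discretizing the cell faces.

def locate(point, origin, spacing):
    """Return the (i, j, k) index of the grid cell containing `point`."""
    return tuple(math.floor((p - o) / s)
                 for p, o, s in zip(point, origin, spacing))

print(locate((1.7, 0.2, 3.9), origin=(0.0, 0.0, 0.0), spacing=(0.5, 0.5, 0.5)))
# -> (3, 0, 7)
```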
One of the authors has proposed a study of homotopy and simplicity in partially ordered sets (or posets). The notion of unipolar point was introduced: a unipolar point can be seen as an 'inessential' element for the topology. Thus, the iterative deletion of unipolar points constitutes a first thinning algorithm. We show in this paper that such an algorithm does not 'thin enough' certain images. This is the reason why we use the notion of α-simple point, introduced in the framework of posets in Ref. 1. The definition of such a point is recursive. As we can locally decide whether a point is α-simple, we can use classical techniques (such as a binary decision diagram) to characterize such points more quickly. Furthermore, it is possible to remove α-simple points in parallel in a poset while preserving the topology of the image. We then discuss the characterization of end points in order to produce various skeletons. In particular, we propose a new approach to characterize surface end points, which allows us to keep certain junctions of surfaces. Finally, we propose very simple parallel algorithms based on the deletion of α-simple points, consisting of the repetition of two subiterations.
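The following structural sketch shows only the shape of such a parallel algorithm (repeated passes made of two subiterations that delete points simultaneously); the poset-based tests themselves are placeholders and are not implemented here.

```python
# Structural sketch only (no poset machinery implemented): repeated passes made
# of two subiterations that delete points in parallel. `is_alpha_simple`,
# `is_end_point`, and `subfield_of` are placeholders for the paper's poset-based
# tests and for the partition of points into the two subiterations.

def thin(points, is_alpha_simple, is_end_point, subfield_of):
    points = set(points)
    changed = True
    while changed:
        changed = False
        for label in (0, 1):
            # All deletable points of one subiteration are found first,
            # then removed simultaneously (parallel deletion).
            deletable = {p for p in points
                         if subfield_of(p) == label
                         and is_alpha_simple(p, points)
                         and not is_end_point(p, points)}
            if deletable:
                points -= deletable
                changed = True
    return points
```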
We present in this paper a generic algorithm to compute the skeleton of an n-dimensional binary object. Considering the Cartesian hypercubic grid, we provide a mathematical framework in which we give the explicit Boolean conditions under which the iterative thinning procedure removes a point. This algorithm preserves topology in a sense that matches the properties usually used in 2D and 3D. Furthermore, it is based on an original kind of median hypersurface that gives the skeleton good behavior with respect to both shape preservation and noise sensitivity. The algorithm is fully parallel, as no spatial subiterations are needed. The latter property, together with the symmetry of the Boolean n-dimensional patterns, leads to a perfectly isotropic skeleton. The logical expression of the algorithm is extremely concise, and in 2D, a large comparative study shows that the overall number of elementary Boolean operations needed to obtain the skeleton is smaller than for the other iterative algorithms reported in the literature.
Several geometric concepts from affine geometry have their counterparts in digital geometry. We define and discuss the digitization of three important concepts: parallelism, collinearity, and concurrency of digital straight lines. Their main characteristic is that in the digital plane these properties become Helly-type theorems, which means that they express a geometric relation holding for an entire collection of geometric objects in terms of simpler geometric relations that must hold for subcollections. For example, in the digital plane we can show that a collection of digital lines is parallel if and only if each of its 2-membered subcollections consists of two digital lines that are parallel. Thus parallelism in the digital plane is more complicated than it is in ordinary affine geometry. Appropriate definitions of digital parallelism and concurrency have many applications in digital image processing. For example, they provide an appropriate setting for verifying whether lines detected in a digital image satisfy the constraints imposed by a perspective projection. Furthermore, the existence of Helly-type properties has important implications from a computational viewpoint. In fact, these theorems ensure that in the digital plane parallelism, collinearity, and concurrency can be detected in polynomial time by standard algorithms developed within the field of computational geometry. We illustrate this with several algorithms, each of which solves a particular geometric problem.
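The Helly-type statement translates directly into a pairwise checking procedure, as in the hedged sketch below; the pairwise test `digitally_parallel` is a placeholder for the paper's definition of digital parallelism.

```python
from itertools import combinations

# Sketch of the Helly-type reduction stated in the abstract: a collection of
# digital lines is parallel iff every 2-membered subcollection is. The pairwise
# test `digitally_parallel` is a placeholder for the paper's definition, so the
# whole check runs in polynomial (quadratic) time in the number of lines.

def collection_is_parallel(lines, digitally_parallel):
    return all(digitally_parallel(a, b) for a, b in combinations(lines, 2))
```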
Building on a general definition of discrete curves, surfaces, and manifolds (Li Chen, 'Generalized discrete object tracking algorithms and implementations,' in Melter, Wu, and Latecki, eds., Vision Geometry VI, SPIE Vol. 3168, pp. 184-195, 1997), this paper focuses on the Jordan curve theorem in 2D discrete spaces. The Jordan curve theorem says that a (simply) closed curve separates a simply connected surface into two components. Based on the definition of discrete surfaces, we give three reasonable definitions of simply connected spaces. Theoretically, these three definitions should be equivalent. We have proved the Jordan curve theorem under the third definition of simply connected spaces. The Jordan theorem shows the relationship among an object, its boundary, and its outside area. In continuous space, the boundary of an mD manifold is an (m-1)D manifold. A similar result applies to regular discrete manifolds. The concept of a new regular nD-cell is developed based on the regular surface point in 2D and the well-composed objects in 2D and 3D given by Latecki (L. Latecki, '3D well-composed pictures,' in Melter, Wu, and Latecki, eds., Vision Geometry IV, SPIE Vol. 2573, pp. 196-203, 1995).
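As a simple illustration of the separation property in the familiar pixel-grid setting (using the classical 8/4-connectivity pairing rather than the paper's surface framework), the complement of a simple closed digital curve should have exactly two connected components:

```python
import numpy as np
from scipy import ndimage

# Illustration only (classical pixel-grid version, not the paper's framework):
# the complement of a simple closed 8-connected digital curve splits into
# exactly two 4-connected components (inside and outside).

curve = np.zeros((9, 9), dtype=bool)
curve[2, 2:7] = curve[6, 2:7] = True        # top and bottom edges of a square
curve[2:7, 2] = curve[2:7, 6] = True        # left and right edges

complement = ~curve
four_conn = ndimage.generate_binary_structure(2, 1)   # 4-connectivity
labels, n_components = ndimage.label(complement, structure=four_conn)
print(n_components)   # -> 2
```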
In this paper, we propose a new descriptor for two- and three-dimensional digital curves using the theory of free groups. A spatial digital curve is expressed as a word, an element of the free group generated by three elements. These three symbols correspond to the directions of the orthogonal coordinate axes, respectively. Since a digital curve is treated as a word, i.e. a sequence of alphabetical symbols, this expression permits us to describe any geometric operation as rewriting rules for words. Furthermore, the symbolic derivative of words yields geometric invariants of digital curves under digital Euclidean motion. These invariants enable us to design algorithms for matching and searching partial structures of digital curves. Moreover, these symbolic descriptors define global and local distances for digital curves as an edit distance.
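A minimal sketch of the idea, assuming the natural encoding in which unit moves along the +x, +y, +z axes map to generators x, y, z and the opposite moves to their inverses, with free reduction cancelling adjacent inverse pairs:

```python
# Sketch of the word representation (assumed encoding): unit moves along the
# +x, +y, +z axes map to the generators 'x', 'y', 'z'; the opposite moves map
# to their inverses 'X', 'Y', 'Z'. Free reduction cancels adjacent inverse pairs.

STEP_TO_LETTER = {(1, 0, 0): 'x', (-1, 0, 0): 'X',
                  (0, 1, 0): 'y', (0, -1, 0): 'Y',
                  (0, 0, 1): 'z', (0, 0, -1): 'Z'}

def curve_to_word(points):
    steps = [tuple(b - a for a, b in zip(p, q)) for p, q in zip(points, points[1:])]
    return ''.join(STEP_TO_LETTER[s] for s in steps)

def free_reduce(word):
    stack = []
    for letter in word:
        if stack and stack[-1] == letter.swapcase():
            stack.pop()          # cancel a generator followed by its inverse
        else:
            stack.append(letter)
    return ''.join(stack)

w = curve_to_word([(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (0, 1, 1)])
print(w, '->', free_reduce(w + 'Zz'))   # 'xyXz' -> 'xyXz'
```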
Moments have been widely used in shape recognition and identification. In general, the (k,l)-moment of a planar measurable set S, denoted m_{k,l}(S), is defined by m_{k,l}(S) = ∫∫_S x^k y^l dx dy. We assume situations in image analysis and pattern recognition where real objects are acquired (by thresholding, segmentation, etc.) as binary images D(S), i.e. as digital sets or digital regions. For a set S, its digitization is defined in this paper to be the set of all grid points with integer coordinates which belong to the region occupied by S. Since in image processing applications the exact values of the moments m_{k,l}(S) remain unknown, they are usually approximated by the discrete moments μ_{k,l}(S), where μ_{k,l}(S) = Σ_{(i,j)∈D(S)} i^k · j^l = Σ_{(i,j)∈S, i,j integers} i^k · j^l. Moments of order up to two (i.e. k + l ≤ 2) are frequently used, and our attention is focused on them, i.e. on the limitation in their estimation from the corresponding digital picture. In this paper it is proved that m_{k,l}(S) − (1/r^{k+l+2}) · μ_{k,l}(r·S) = O(1/r^{15/11+ε}) ≈ O(1/r^{1.363636...}) for k + l ≤ 2, where S is a convex set in the plane with a boundary having a continuous third derivative and positive curvature at every point, r is the number of pixels per unit (i.e. 1/r is the pixel size), and r·S denotes the dilation of S by the factor r.
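A small numerical sketch of the quantities involved (the digitization of the dilated set r·S for a disk and the rescaled discrete moment); the convergence exponent itself is the paper's result and is not re-derived here:

```python
import numpy as np

# Sketch: discrete moments of the unit disk S. The (0,0)-moment of the disk of
# radius 1 is its area pi; the digital estimate is the number of grid points
# inside r*S, scaled by 1/r^(k+l+2) = 1/r^2 for k = l = 0.

def discrete_moment(r, k=0, l=0):
    i, j = np.mgrid[-r:r + 1, -r:r + 1]
    inside = i * i + j * j <= r * r          # grid points of the dilated disk r*S
    return (i[inside] ** k * j[inside] ** l).sum() / r ** (k + l + 2)

for r in (10, 100, 1000):
    print(r, abs(discrete_moment(r) - np.pi))   # error shrinks as r grows
```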
The paper details two linear-time algorithms, one for the partition of the boundary line of a digital region into digital straight segments, and one for calculating the minimum length polygon within an open boundary of a digital region. Both techniques allow the estimation of the length of digital curves or the perimeter of digital regions due to known multigrid convergence theorems. The algorithms are compared with respect to convergence speed and number of generated segments.
This paper describes how to propagate approximately additive random perturbations through any kind of vision algorithm step in which the appropriate random perturbation model for the estimated quantity produced by the vision step is also an additive random perturbation. We assume that the vision algorithm step can be modeled as a calculation (linear or non-linear) that produces an estimate minimizing an implicit scalar function of the input quantity and the calculated estimate. The only assumptions are that the scalar function is non-negative and has finite first and second partial derivatives, that its value is zero for ideal data, and that the random perturbations are small enough that the relationship between the scalar function evaluated at the ideal but unknown input and output quantities and evaluated at the observed input quantity and perturbed output quantity can be approximated sufficiently well by a first-order Taylor series expansion. The paper finally discusses the issue of verifying that the derived statistical behavior agrees with the experimentally observed statistical behavior.
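A hedged numerical illustration of this kind of first-order propagation, using least-squares line fitting as a stand-in vision step: for an estimate θ minimizing F(x, θ), the propagated covariance is approximately A⁻¹ B Σ_x Bᵀ A⁻¹ with A = ∂²F/∂θ² and B = ∂²F/∂θ∂x, which is the standard first-order result for a minimizer-defined estimate (not the paper's own example).

```python
import numpy as np

# Numerical illustration: propagate an additive perturbation on the input y
# through the "vision step" that fits a line y ~ a*x + b by minimizing
# F(y; a, b) = sum_i (y_i - a*x_i - b)^2.
# First-order result:  Cov(theta) ~= A^{-1} B Cov(y) B^T A^{-1},
# with A = d^2F/dtheta^2 and B = d^2F/(dtheta dy).

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
a_true, b_true, sigma = 2.0, -1.0, 0.05
y_ideal = a_true * x + b_true

# Second derivatives of F for this quadratic objective (constant in theta, y).
A = 2.0 * np.array([[np.sum(x * x), np.sum(x)],
                    [np.sum(x),     len(x)   ]])
B = -2.0 * np.vstack([x, np.ones_like(x)])            # shape (2, n)
cov_pred = np.linalg.inv(A) @ B @ (sigma**2 * np.eye(len(x))) @ B.T @ np.linalg.inv(A)

# Monte Carlo check of the predicted covariance of (slope, intercept).
fits = np.array([np.polyfit(x, y_ideal + rng.normal(0, sigma, x.size), 1)
                 for _ in range(20000)])
print(np.round(cov_pred, 6))
print(np.round(np.cov(fits.T), 6))
```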
The concepts of space and geometry vary across disciplines. Following Poincaré, we consider the construction of the perceptual space as a continuum equipped with a notion of magnitude. The study of the relationships of objects in the perceptual space gives rise to what we may call perceptual geometry. Computational modeling of objects and investigation of their deeper perceptual-geometric properties (beyond qualitative arguments) require a mathematical representation of the perceptual space. Within the realm of such a mathematical/computational representation, visual perception can be studied as in the well-understood logic-based geometry. This, however, does not mean that one could reduce all problems of visual perception to their geometric counterparts. Rather, visual perception as reported by a human observer has a subjective factor that can be analytically quantified only through statistical reasoning and in the course of repeated experiments. Thus, the desire to experimentally verify the statements of perceptual geometry leads to an additional probabilistic structure imposed on the perceptual space, whose amplitudes are measured through intervention by human observers. We propose a model for the perceptual space and for the perception of textured surfaces as a starting point for object recognition. To rigorously present these ideas and propose computational simulations for testing the theory, we present a model of the perceptual geometry of surfaces through an extension of the theory of Riemannian foliations in differential topology, augmented by statistical learning theory. When we refer to the perceptual geometry of a human observer, the theory takes into account the Bayesian formulation of the prior state of knowledge of the observer and Hebbian learning. We use a parallel distributed connectionist paradigm for computational modeling and experimental verification of our theory.
Ray tracing is notorious for its computational requirements. A number of techniques have been proposed to speed up the process; however, a well-known statistic indicates that ray-object intersection tests occupy over 95% of the total image generation time, so it is most beneficial to work on this bottleneck. Existing ray-object intersection reduction techniques can be classified into three major categories: bounding volume hierarchies, space subdivision, and directional subdivision. This paper introduces a technique falling into the third category. To further speed up the process, it takes advantage of hierarchy by adopting an MX-CIF quadtree in the image space. This special kind of quadtree provides simple object allocation and ease of implementation. The paper also includes a theoretical analysis of the expected performance. For ray-polygon comparison, the technique reduces the order of complexity from linear to square-root, O(n) → O(2√n). Experiments with scenes of various shapes, sizes, and complexities were conducted to verify this expectation. Results show that the computational improvement grows with the complexity of the scene. The experimental improvement was more than 90% and agreed with the theoretical value when the number of polygons exceeded 3000. The more complex the scene, the more effective the acceleration. The algorithm described was implemented at the polygon level; however, it could easily be extended to the object or higher levels.
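A minimal sketch of the MX-CIF allocation rule mentioned above (each object is stored at the smallest quadtree cell that entirely contains its bounding box); the ray traversal itself and the choice of maximum depth are omitted and assumed.

```python
# Sketch of the MX-CIF quadtree allocation rule: each object, given as an
# axis-aligned bounding box (x0, y0, x1, y1), is stored at the smallest quadtree
# cell that entirely contains it. max_depth is an assumed parameter.

class Node:
    def __init__(self, x0, y0, x1, y1, depth=0):
        self.bounds = (x0, y0, x1, y1)
        self.depth = depth
        self.objects = []        # boxes whose smallest containing cell is this one
        self.children = None

    def _child_index(self, box):
        x0, y0, x1, y1 = self.bounds
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        quads = [(x0, y0, cx, cy), (cx, y0, x1, cy),
                 (x0, cy, cx, y1), (cx, cy, x1, y1)]
        for i, (qx0, qy0, qx1, qy1) in enumerate(quads):
            if qx0 <= box[0] and qy0 <= box[1] and box[2] <= qx1 and box[3] <= qy1:
                return i, quads
        return None, quads

    def insert(self, box, max_depth=8):
        idx, quads = self._child_index(box)
        if idx is None or self.depth == max_depth:
            self.objects.append(box)      # straddles a boundary or max depth hit
            return
        if self.children is None:
            self.children = [Node(*q, depth=self.depth + 1) for q in quads]
        self.children[idx].insert(box, max_depth)

root = Node(0.0, 0.0, 1.0, 1.0)
root.insert((0.1, 0.1, 0.2, 0.2))     # falls deep into the lower-left subtree
root.insert((0.4, 0.4, 0.6, 0.6))     # straddles the centre, stays at the root
print(len(root.objects))              # -> 1
```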
In this paper we formulate steerable filters as representations of transformation groups. The transformation groups on the image plane considered thus far have concentrated on rotation, scaling, and translation, so we construct the steerable filter for the projective rotation group based on the representation of that group. We further discuss steerability for the 3D Euclidean motion group from the standpoint of the representation of the Euclidean motion group. We also show that a given image can be expanded in steerable functions, which is a special case of the Fourier transform on Lie groups.
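For orientation, the classical planar rotation case that the paper generalizes can be verified numerically: the x-derivative of a Gaussian rotated by θ equals cos θ · G_x + sin θ · G_y, so two basis filters steer to any angle.

```python
import numpy as np

# Classical 2D illustration of steerability (the rotation-group case that the
# abstract generalizes): the x-derivative of a Gaussian rotated by theta equals
# cos(theta)*G_x + sin(theta)*G_y, so two basis filters steer to any angle.

sigma, theta = 1.5, np.deg2rad(30)
y, x = np.mgrid[-6:7, -6:7].astype(float)
g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
gx, gy = -x / sigma**2 * g, -y / sigma**2 * g          # basis filters G_x, G_y

# Directional derivative filter at angle theta, built two ways.
steered = np.cos(theta) * gx + np.sin(theta) * gy
u = np.cos(theta) * x + np.sin(theta) * y
v = -np.sin(theta) * x + np.cos(theta) * y
direct = -u / sigma**2 * np.exp(-(u**2 + v**2) / (2 * sigma**2))

print(np.max(np.abs(steered - direct)))   # ~1e-16: the two constructions agree
```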
The problem of matching 3D objects has been studied extensively in computer vision. Here we consider the case of polyhedral objects extracted from range images and represented in terms of 3D lines. Given two sets of line segments A and B, the problem is to find a rigid-body transformation T that minimizes a given dissimilarity measure between T(A) and B. We discuss two practical approximate solutions to the segment matching problem in 3D that can be used to recognize planar-faced objects from range data. The first method is based on indexing and the second on a minimization of the Hausdorff distance. We show the advantage of an integrated use of the two strategies, which improves the overall performance without degrading the quality of the results.
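A minimal sketch of the dissimilarity used by the second method, with each segment set represented simply by points sampled along its segments and the search over rigid transformations T omitted:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Sketch: Hausdorff distance between two 3D segment sets, each represented here
# by points sampled along its segments. The minimization over rigid-body
# transformations T, as in the paper, is not shown.

def sample_segments(segments, n=10):
    t = np.linspace(0.0, 1.0, n)[:, None]
    return np.vstack([(1 - t) * p + t * q for p, q in segments])

def hausdorff(A_pts, B_pts):
    return max(directed_hausdorff(A_pts, B_pts)[0],
               directed_hausdorff(B_pts, A_pts)[0])

A = [(np.array([0., 0., 0.]), np.array([1., 0., 0.]))]
B = [(np.array([0., 0.1, 0.]), np.array([1., 0.1, 0.]))]
print(hausdorff(sample_segments(A), sample_segments(B)))   # -> 0.1
```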
Several recent papers have produced successively faster running times for solution of the Point Set Matching Problem (PSPM) (finding all congruent copies of a pattern in a sample set) in Euclidean 3-space, R^3. Here, we obtain further improvement in the speed with which this problem can be solved. The source of the improvement over the previous fastest running time is a better upper bound for the volume of output, derived from a recent result. The previous best sequential running time for PSPM in R^3 is O(kn^2 (λ_6(n)/n)^0.5 log n), where k is the size of the pattern and n is the size of the sample set. Here, we give a sequential algorithm for PSPM in R^3 running in O(n^2 log n + kn^1.8 log n (log* n)^O(1)) time. Solutions to related problems are also discussed.
3-D image modeling of volumetric data can be useful in biomedical research, medical therapy, surgery planning, and the simulation of critical surgery (e.g., cranio-facial). This paper proposes an algorithm based on a tetrahedral domain instead of cubes. The initial tetrahedral domain is constructed using a Delaunay tetrahedrization algorithm. In the case of ambiguity, the asymptotic decider is used instead of the sphere criterion. The sphere criterion considers only positional information, whereas the asymptotic decider also takes the functional information into account. In other words, the construction of the tetrahedral domain is based on whether or not connecting vertices are joined by a component of the hyperbolic arc. Linear trivariate interpolation is performed over the tetrahedral domain. We call this new algorithm Marching Tetrahedra, by analogy with the Marching Cubes algorithm. The main difference between the two algorithms is that tetrahedra, instead of cubes, are used in the process. The Marching Tetrahedra algorithm can handle scattered data, which the Marching Cubes algorithm cannot.
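A per-tetrahedron sketch of the extraction step (classify the four vertices against the isovalue and interpolate along the crossing edges); building the Delaunay domain and applying the asymptotic decider, as in the paper, are not shown.

```python
import numpy as np

# Per-tetrahedron sketch of the extraction step: classify the four vertices
# against the isovalue and interpolate the crossing point on each edge whose
# endpoints lie on opposite sides of the isosurface.

def tet_isosurface_points(vertices, values, iso):
    points = []
    for a, b in ((0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)):
        va, vb = values[a], values[b]
        if (va - iso) * (vb - iso) < 0:                 # edge crosses the isosurface
            t = (iso - va) / (vb - va)                  # linear interpolation weight
            points.append(vertices[a] + t * (vertices[b] - vertices[a]))
    return points          # 3 points -> triangle, 4 points -> quad (two triangles)

verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
vals = np.array([0.0, 1.0, 1.0, 1.0])
print(tet_isosurface_points(verts, vals, iso=0.5))
# three points halfway along the edges incident to vertex 0
```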
3D surface reconstruction is a key problem in computer vision. To obtain the 3D surface reconstruction of an object from its 2D image sequence, one needs dense matching between these images. However, the pixels constituting the 2D images are not all equally important. When a person watches a scene, the eyes focus first on the outlines of an object and then on its details, a characteristic of human vision. It is therefore reasonable to introduce a coarse-to-fine strategy for 3D surface reconstruction. In this paper we propose a hierarchical approach. Based on wavelet transforms of the 2D images, we obtain dense matching at different levels. As a result, we obtain 3D surface reconstructions of various approximation qualities. Experiments with real images show that our algorithm is feasible.
We discuss a new geometric hashing method for searching large databases of 2D images (or 3D objects) to match a query built from geometric information presented by a single 3D object (or a single 2D image). The goal is to rapidly determine a small subset of the images that potentially contain a view of the given object (or a small set of objects that potentially match the item in the image). Since this must be accomplished independent of the pose of the object, the objects and images, which are characterized by configurations of geometric features such as points, lines and/or conics, must be treated using a viewpoint-invariant formulation. We are therefore forced to characterize these configurations in terms of their 3D and 2D geometric invariants. The crucial relationship between the 3D geometry and its 'residual' in 2D is expressible as a correspondence (in the sense of algebraic geometry). Computing a set of generating equations for the ideal of this correspondence gives a complete characterization of the view-independent relationships between an object and all of its possible images. Once a set of generators is in hand, it can be used to devise efficient recognition algorithms and to give an efficient geometric hashing scheme. This requires exploiting the form and symmetry of the equations. The result is a multidimensional access scheme whose efficiency we examine. Several potential directions for improving this scheme are also discussed. Finally, in a brief appendix, we discuss an alternative approach to invariants for generalized perspective that replaces the standard invariants by a subvariety of a Grassmannian. The advantage of this is that one can circumvent many annoying general position assumptions and arrive at invariant equations (in the Plücker coordinates) that are more numerically robust in applications.
Latecki and Lakämper (Computer Vision and Image Understanding 73:3, March 1999) recently reported a novel process for discrete curve evolution. This process has various possible applications, in particular for noise removal and shape simplification of boundary curves in digital images. In this paper we prove that the process of discrete curve evolution is continuous: if a polygon Q is close to a polygon P, then the polygons obtained by their evolution remain close. This result follows directly from the fact that the evolution of Q corresponds to the evolution of P if Q approximates P. Intuitively, this means that first all vertices of Q are deleted that are not close to any vertex of P, and then, whenever a vertex of P is deleted, a vertex of Q that is close to it is deleted in the corresponding evolution step of Q.
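For reference, one evolution step looks roughly as follows; the relevance measure used here is the commonly cited form K(v) = β(v)·l1·l2/(l1 + l2), with β the turn angle at v and l1, l2 the lengths of its incident edges (see the original paper for the exact normalization).

```python
import numpy as np

# Sketch of one step of discrete curve evolution (DCE), the process whose
# continuity is proved in the paper: remove the vertex with the least relevance.
# The relevance measure below is the commonly cited form, not necessarily the
# exact normalization used by Latecki and Lakamper.

def relevance(prev, v, nxt):
    e1, e2 = v - prev, nxt - v
    l1, l2 = np.linalg.norm(e1), np.linalg.norm(e2)
    cosang = np.clip(np.dot(e1, e2) / (l1 * l2), -1.0, 1.0)
    beta = np.arccos(cosang)                     # turn angle at v
    return beta * l1 * l2 / (l1 + l2)

def dce_step(polygon):
    """Remove the least relevant vertex of a closed polygon (array of 2D points)."""
    n = len(polygon)
    scores = [relevance(polygon[i - 1], polygon[i], polygon[(i + 1) % n])
              for i in range(n)]
    return np.delete(polygon, int(np.argmin(scores)), axis=0)

square_with_bump = np.array([[0., 0.], [1., 0.], [1.05, 0.5], [1., 1.], [0., 1.]])
print(dce_step(square_with_bump))    # the low-relevance "bump" vertex is removed
```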
The invariance and covariance of features extracted from an object under certain transformations play quite important roles in the fields of pattern recognition and image understanding. For instance, in order to recognize a three-dimensional (3D) object, we need specific features extracted from the given object, and these features should be independent of its pose and location. To extract such features, one of the authors has presented the 3D vector autoregressive (VAR) model. This 3D VAR model is constructed on the quaternions, which form the basis of SU(2) (the rotation group in two-dimensional complex space). The 3D VAR model is then defined by the external products of the 3D sequential data and the autoregressive (AR) coefficients, unlike the conventional AR models. Therefore the 3D VAR model has some prominent features; for example, the AR coefficients of the 3D VAR model behave like vectors under any three-dimensional rotation. In this paper, we present the recursive computation of 2D VAR coefficients and 3D VAR coefficients. This method reduces the cost of computing the VAR coefficients. We also define the partial correlation (PARCOR) vectors for the 2D VAR model and the 3D VAR model from the point of view of data compression and pattern recognition.
As we have published in the last few years, when the pattern vectors used in the training of a novel neural network satisfy a generalized N-dimensional convexity property (or the novel PLI condition we derived), the neural network can learn these patterns very fast in a noniterative manner. The recognition of any untrained patterns by this learned neural network can then reach optimum robustness if an automatic feature extraction scheme derived from N-dimensional geometry is used in the recognition mode. The simplified physical picture of the high robustness reached by this novel system is the automatic extraction of the most distinguishing parts of all M training pattern vectors in the N-space, such that the volume of the M-dimensional parallelepiped spanned by these parts of the vectors reaches a maximum.
An efficient algorithm for recognizing general curved shapes by their contours is proposed. In this algorithm, a convex hull is constructed on the test shape so as to extract the extreme points as correspondence points between the image and the model. Criteria are devised for choosing four of these extreme points to form a canonical frame onto which all other points are projected. By so doing, invariant curves of a shape, which are robust to noise, can be obtained. To save processing time, we propose a specific distance measure on the extreme points of the curves from a major axis of the canonical frame. These invariant distances can be used to measure the similarity between the shapes of interest. The advantage of this strategy is that there is no need for re-parameterization, which is time-consuming. Compared to back-projection of the whole shape, our proposed method is more efficient and more flexible. Results show that different resolutions of the sampling points on a contour do not affect our distance measure. The recognition rate can reach as high as 99%.
As is known from the works of Serra, Ronse, and Haralick and Shapiro, connectivity relations are useful in filtering binary images. But they can also be used to find roadmaps in robot motion planning, i.e. to build discrete networks of simple paths connecting points in the robot's configuration space that capture the connectivity of this space. This paper generalizes and brings together the notion of a connectivity class, introduced by Serra, and the notion of a separation relation. This gives an opportunity to introduce approximate ε-connectivity, and thus we show the relation between our approach and the Epsilon Geometry introduced by Guibas, Salesin and Stolfi. The duality between the notions of 'connectivity class' and 'separation relation' is established. As an application, we consider the problem of cleaning drop-out noise from binary images by a morphological closing filter. Ronse and Serra have defined connectivity analogs on complete lattices with certain properties. As a particular case of their work we consider the connectivity of fuzzy compact sets, which is a natural way to study the connectivity of gray-scale images. This idea can also be transferred to planning robot trajectories in the presence of uncertainties. Since it is based on fuzzy set theory, our approach is intuitively closer to the classical set-oriented approach used for binary images and for robot path planning in a known environment with obstacles. This makes our theory much easier to implement, compared to the direct application of the beautiful and more general approach based on connectivity in functional spaces.
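The drop-out-noise application mentioned above can be illustrated directly with a standard closing filter (the connectivity-class machinery itself is not reproduced here):

```python
import numpy as np
from scipy import ndimage

# Illustration of the application mentioned in the abstract: removing drop-out
# (pepper) noise from a binary image with a morphological closing filter.

rng = np.random.default_rng(1)
image = np.zeros((64, 64), dtype=bool)
image[16:48, 16:48] = True                              # a solid square object
dropout = rng.random(image.shape) < 0.05
noisy = image & ~dropout                                # punch holes in the object

cleaned = ndimage.binary_closing(noisy, structure=np.ones((3, 3), dtype=bool))
print((cleaned ^ image).sum(), "pixels differ after closing")
```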
In recent work, we introduced topological notions for grayscale images based on a cross-section topology. In particular, the notion of destructible point, which corresponds to the classical notion of simple point, allows us to build operators that simplify a grayscale image while preserving its topology. In this paper, we introduce new notions and operators in the framework of the cross-section topology. In particular, the notion of λ-destructible point allows us to selectively modify the topology, based on a local contrast parameter λ. By combining homotopic and non-homotopic operators, we introduce new methods for filtering, thinning, segmenting and enhancing grayscale images.
Recently, an interesting scale-space image analysis method was given by Sporring and Weickert. They considered the Rényi entropy at each scale to estimate the extents of the lighter and darker patterns in a given image. On the other hand, there is another generalized entropy, the Tsallis entropy, which has a physical meaning like the Boltzmann entropy and is also well known for its usefulness in physics. In this paper, after giving a brief review of Tsallis entropy, we adopt Tsallis entropy as the information measure at each level of the scale-space method, to elucidate how the difference between Rényi entropy and Tsallis entropy affects the result. It is also shown that Tsallis entropy is a more natural information measure than Rényi entropy.
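A small sketch of the two measures across Gaussian scales, using the normalized intensities as a probability distribution (a simple stand-in for the Sporring-Weickert setup): Tsallis entropy S_q = (1 − Σ p_i^q)/(q − 1) and Rényi entropy H_q = log(Σ p_i^q)/(1 − q).

```python
import numpy as np
from scipy import ndimage

# Sketch: Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1) and Renyi entropy
# H_q = log(sum p_i^q) / (1 - q) of the normalized intensity distribution,
# tracked across Gaussian scales.

def tsallis(p, q):
    return (1.0 - np.sum(p**q)) / (q - 1.0)

def renyi(p, q):
    return np.log(np.sum(p**q)) / (1.0 - q)

rng = np.random.default_rng(0)
image = rng.random((64, 64))
q = 2.0
for sigma in (0.0, 1.0, 2.0, 4.0):
    smoothed = ndimage.gaussian_filter(image, sigma)
    p = smoothed.ravel() / smoothed.sum()        # treat intensities as probabilities
    print(f"sigma={sigma}: Tsallis={tsallis(p, q):.4f}  Renyi={renyi(p, q):.4f}")
```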
We define a peak of an image I at grey level g to be a connected region of pixels p_ij with grey values g(p_ij) ≥ g. The peaks form a tree structure, called the peak-tree Π(I). It generalizes the notion of the grey-level histogram of I and partially captures the spatial distribution of grey values in the image. Thereby, it opens up new methods for simplifying an image, as illustrated here. We give an efficient O(NV) algorithm for constructing the peak-tree, where N = #(pixels in I) and V = #(distinct grey levels in I).
The paper presents a comparative study of several algorithms developed for digital image rotation. Without loss of generality, we studied grayscale images. We have tested methods that preserve the gray values of the original images, methods that perform some interpolation, and two procedures implemented in the Corel Photo-Paint and Adobe Photoshop software packages. Methods for the rotation of color images may be evaluated in a similar way.
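A small sketch contrasting two of the method families compared in the paper, a gray-value-preserving (nearest-neighbour) rotation versus an interpolating (bilinear) one:

```python
import numpy as np
from scipy import ndimage

# Sketch: nearest-neighbour rotation preserves the original gray values but
# accumulates geometric error; bilinear interpolation introduces new gray values.
# Both are applied forward and backward by 10 degrees for comparison.

image = np.zeros((64, 64))
image[24:40, 24:40] = 255.0

for order, name in ((0, "nearest (value-preserving)"), (1, "bilinear")):
    there = ndimage.rotate(image, 10, reshape=False, order=order)
    back = ndimage.rotate(there, -10, reshape=False, order=order)
    err = np.mean(np.abs(back - image))
    new_values = np.setdiff1d(np.unique(there), np.unique(image)).size
    print(f"{name}: mean |error| after rotate/unrotate = {err:.2f}, "
          f"new gray values introduced = {new_values}")
```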
This article proposes a novel solution for projective reconstruction in computer vision with a non-metric camera. After a brief introduction to projective transformations as well as projective invariants and coordinates, the fundamental matrix is expressed as a product of two matrices, the projective base line matrix and the projective rotation matrix. This derivation is based on photogrammetric concepts, and thus a close similarity is found to the decomposition of the essential matrix for a metric camera. A projective coefficient, which is proven to be the cross ratio of the lengths of two conjugate projective rays, is then derived to calculate the homogeneous coordinates and projective coordinates (cross ratios of volume elements) of the projective model. In order to reconstruct the object from its projective model, a 3-D DLT solution is proposed as an extension of the well-known 2-D direct linear transformation (2-D DLT) solution for metric cameras in conventional photogrammetry. The reconstruction is completed linearly with a minimum of five known conjugate object points. Test results and analyses verify the solution and methodology.
In this paper we analyze a specific problem arising in the recovery of the geometric shape of an unknown surface from multiple noisy shading patterns generated by consecutive parallel illuminations from different light sources. Shading-based single-view shape recovery in computer vision often leads to vector fields (i.e. estimated surface normals) which have to be integrated to calculate height or depth maps. We present an algorithm for enforcing the integrability condition on a given non-integrable vector field which ensures a globally suboptimal solution through local optimizations. The scheme in question relies neither on a priori knowledge of boundary conditions nor on other global constraints imposed on the gradient integration techniques derived so far for noise-contaminated data. The discussion is supplemented by examples illustrating the algorithm's performance.
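For context, the sketch below measures the integrability residual ∂p/∂y − ∂q/∂x of a gradient field and projects the field onto the nearest integrable one by least-squares integration in the Fourier domain; this is the classical global projection, shown only as a point of comparison, not the paper's local-optimization scheme.

```python
import numpy as np

# Sketch (classical alternative, not the paper's local scheme): check the
# integrability residual dp/dy - dq/dx of a gradient field (p, q) and recover a
# height map z by least-squares integration in the Fourier domain.

def integrate_gradient(p, q):
    h, w = p.shape
    wx = 2 * np.pi * np.fft.fftfreq(w)[None, :]
    wy = 2 * np.pi * np.fft.fftfreq(h)[:, None]
    denom = wx**2 + wy**2
    denom[0, 0] = 1.0                                    # avoid division by zero (DC term)
    Z = (-1j * wx * np.fft.fft2(p) - 1j * wy * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                                        # height recovered up to a constant
    return np.real(np.fft.ifft2(Z))

# A synthetic surface, its gradients plus noise, and the residual check.
y, x = np.mgrid[0:64, 0:64] / 64.0
z_true = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
p = np.gradient(z_true, axis=1) + np.random.default_rng(0).normal(0, 0.01, z_true.shape)
q = np.gradient(z_true, axis=0) + np.random.default_rng(1).normal(0, 0.01, z_true.shape)
curl = np.gradient(p, axis=0) - np.gradient(q, axis=1)  # nonzero => not integrable
z_rec = integrate_gradient(p, q)
print(np.abs(curl).mean(), np.abs(z_rec - (z_true - z_true.mean())).max())
```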
In this paper we present an iterative algorithm that computes a global optimum of a large system of linear equations enforcing the so-called integrability condition for a given noisy non-integrable vector field. The algorithm is applied to Photometric Stereo shape reconstruction. The scheme in question (a 2-D Leap-Frog Algorithm) relies neither on a priori knowledge of boundary conditions nor on other global constraints imposed on the gradient integration techniques derived so far for noise-contaminated data. The proposed algorithm is an improvement of the recently developed Lawn-Mowing Algorithm, which computes a suboptimal solution to the above-mentioned problem. The backbone of the proposed algorithm is a generalization of the 1-D Leap-Frog Algorithm derived for finding geodesics joining two points on a Riemannian manifold. The discussion is supplemented by examples illustrating the performance of the 2-D Leap-Frog Algorithm.
Model-based coding has recently become one of the most active areas of object-based coding. There is a vast amount of research on feature extraction, motion tracking and model synthesis techniques, but little effort has been devoted to texture extraction. In this paper, we present a robust texture extraction technique which can generate a single spherical texture map for a generic model in model-based video coding. The proposed technique is robust to the object's orientation in the view and can be efficiently implemented with parallel processing techniques.
New geometrical constraints, based on a geometrical analysis using the synthetic method, are developed to estimate the fundamental matrix (F matrix). First, applying the new constraints, four of the parameters of the fundamental matrix can be estimated; these four parameters are the coordinates of the two epipoles. Second, the other four parameters of the fundamental matrix can be obtained by solving linear equations with the other new constraint; these parameters represent the homography between the two pencils of epipolar lines. Experiments with simulated and real data show that our algorithm performs very well in terms of robustness to outliers and noise, high accuracy of the fundamental matrix, and high stability of the epipoles.
Document skew is a distortion mainly concerning the orientation of text lines and occurring when digitizing paper documents. Its visual effect is a slope of the text lines, which are normally horizontal for scripts such as Latin or Cyrillic, with respect to the X-axis. Many available document recognition systems, however, require properly aligned text lines for accurate text segmentation and recognition. This means that the skew, if present, should be estimated and compensated before further processing. The Hough transform is one of the popular techniques for skew detection. To lower its computational cost, it is usually applied to a small number of representative points of each character or its bounding box. However, a problem with this method is that different characters have different heights. As a result, the representative points of characters belonging to the same line often do not fit well to a straight line, and this often leads to errors in skew detection using the Hough transform. In this paper, we propose a new algorithm to overcome this problem. It uses only the bounding boxes of the connected components of characters and a number of simple tests in order to obtain the skew angle estimate.
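A sketch of the general pipeline the paper builds on (connected components → bounding boxes → one representative point per box → a vote over candidate angles); the paper's additional filtering tests on the boxes are omitted, and the angle search below uses a projection-profile style score rather than a literal Hough accumulator.

```python
import numpy as np
from scipy import ndimage

# Sketch: label connected components, take one representative point per bounding
# box (here the bottom-centre), and search over candidate angles for the one
# along which the points line up best. The paper's extra tests are omitted.

def estimate_skew(binary_image, angles=np.deg2rad(np.linspace(-15, 15, 301))):
    labels, n = ndimage.label(binary_image)
    boxes = ndimage.find_objects(labels)
    pts = np.array([((sl[1].start + sl[1].stop) / 2.0, sl[0].stop) for sl in boxes])
    best_angle, best_score = 0.0, np.inf
    for a in angles:
        # Project points perpendicular to a candidate text direction; the correct
        # skew angle yields tight clusters (a sharply peaked histogram).
        proj = pts[:, 1] * np.cos(a) - pts[:, 0] * np.sin(a)
        hist, _ = np.histogram(proj, bins=64)
        score = -np.sum(hist.astype(float) ** 2)
        if score < best_score:
            best_angle, best_score = a, score
    return np.rad2deg(best_angle)

# Tiny demo: blobs placed along lines skewed by ~3 degrees.
img = np.zeros((200, 400), dtype=bool)
for row in (40, 80, 120, 160):
    for col in range(20, 380, 25):
        r = row + int(np.tan(np.deg2rad(3)) * col)
        img[r:r + 6, col:col + 10] = True
print(estimate_skew(img))   # close to 3
```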
In this paper, we describe a novel use of neural networks for extracting the three-dimensional shape of objects based on image focus. The conventional shape-from-focus (SFF) methods are based on a piecewise-constant or piecewise-planar approximation of the focused image surface (FIS) of the object, so they fail to provide accurate shape estimates for objects with complex geometry. The proposed scheme is based on a representation of the three-dimensional shape of the FIS in a window in terms of the neural network weights. The neural network is trained to learn the shape of the FIS that maximizes the focus measure. The SFF problem has thus been converted into an ordinary optimization problem in which a criterion function (the focus measure) is optimized (maximized) with respect to the network weights. A gradient ascent method has been used to optimize the focus measure over the three-dimensional FIS. Experiments were conducted on three different types of objects to compare the performance of the proposed algorithm with that of traditional SFF methods. Experimental results demonstrate that the method of SFF using neural networks provides more accurate depth estimates than the traditional methods.
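A sketch of the traditional per-frame baseline against which the paper compares (a focus measure evaluated per pixel across the focus stack, with depth taken as the best-focused frame); the neural-network FIS representation is not shown.

```python
import numpy as np
from scipy import ndimage

# Sketch of the traditional shape-from-focus baseline: compute a focus measure
# (here a sum-modified-Laplacian style response) for each pixel in every frame
# of the focus stack, then take the frame index of maximum focus as the depth.

def focus_measure(frame, window=5):
    mlx = np.abs(ndimage.convolve1d(frame, [-1.0, 2.0, -1.0], axis=1))
    mly = np.abs(ndimage.convolve1d(frame, [-1.0, 2.0, -1.0], axis=0))
    return ndimage.uniform_filter(mlx + mly, size=window)   # sum over a local window

def depth_from_focus(stack):
    """stack: array of shape (n_frames, H, W); returns per-pixel frame index."""
    measures = np.stack([focus_measure(f) for f in stack])
    return np.argmax(measures, axis=0)
```

In the paper's scheme, by contrast, the FIS shape within each window is encoded by the network weights and the focus measure is maximized over those weights by gradient ascent, rather than by the per-frame argmax shown above.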