All four-point sets in the plane are congruent in the max metric to one of twelve types. Given the interpoint distances, the required type can be determined from the betweenness relations (or lack thereof). It is then evident which further points may be embedded.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print format on SPIE.org.
Gordon and Udupa published a fast boundary-tracking algorithm for 3-dimensional finite binary images and confirmed the soundness of their algorithm by observation of its behavior on numerous experimental images. Higher-dimensional analogs of Gordon and Udupa's algorithm are now being used in medical imaging software. This paper presents a proof that these higher-dimensional boundary-tracking algorithms are also sound.
This paper continues the study of the topological model of the support of a digital image published by Kronheimer in 1992. There, he interpreted the generation of the support D of the image from a topological space S by means of some 'discretization' as the construction of a quotient space Δ of S, which represents the set D and has a reasonable (non-discrete) topology. Under some conditions the space Δ is an Alexandrov space. Having in mind the practical example S = R^{n} and D = Z^{n}, we speak of 'n-dimensional images', although there is no dimension on the space Δ. We define in this paper a so-called Alexandrov dimension for arbitrary Alexandrov spaces. Under this definition an image sampled from a function defined on R^{n} has dimension n. If the Alexandrov space Δ is T_{0}, then it corresponds to a canonical partially ordered set (Δ, ≤). We prove that in this case the Alexandrov dimension coincides with the height of (Δ, ≤).
A corner detection scheme for chain-coded curves is proposed that significantly improves on the performance of current algorithms. The proposed scheme measures the number of links on either side of a point that can produce the longest digital straight line. That value is used as an indication of curvature at that point, with very high curvature being indicative of a corner. A modification of the proposed algorithm can further reduce the false detection rate, with virtually no effect on the number of corners missed, by omitting certain patterns that can arise from contours with arbitrarily small curvature.
In digital geometry we study the properties of discrete representations of geometrical sets; in general, a discrete representation consists of a set of digital points on a rectangular grid. In this paper we consider discrete representations that can be specified by linear inequalities. For example, a digital straight line, and more generally, a digital hyperplane can be specified by an expression that involves two inequalities. First, we describe an elimination method to solve systems of inequalities; it is based on a theorem on convex sets due to Helly. Next, we discuss how this method can be used to derive properties of digital sets. Finally, we illustrate this approach for digital curves. In particular, we show how the chord property for digital straight lines can be extended to digital curves of arbitrary order.
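As a concrete illustration of "a digital set specified by two inequalities," the sketch below tests membership in an arithmetic digital straight line in the Reveillès style. The parameterization (slope coefficients a, b; intercept mu; thickness omega) is illustrative, not necessarily the paper's own notation.

```python
def on_digital_line(x, y, a, b, mu, omega):
    """Membership test for a digital straight line given by the pair of
    linear inequalities  mu <= a*x - b*y < mu + omega.
    With omega = max(|a|, |b|) this yields the standard (naive) line."""
    return mu <= a * x - b * y < mu + omega
```

For example, the naive digitization of y = x/2, i.e. the points (x, x//2), satisfies the pair of inequalities with a = 1, b = 2, mu = 0, omega = 2.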
The information content of a metric can be modified by grouping the values into bands for reasons of economy or efficiency. Various forms of truncated metrics can carry local information efficiently. The ceiling function groups the values of the Euclidean metric on the digital plane into bands to form an integer-valued metric with special properties that can be used to ease calculations.
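The banding described above can be sketched in a few lines: take the ceiling of the ordinary Euclidean distance. This remains a metric, since ceil(a + b) <= ceil(a) + ceil(b) preserves the triangle inequality.

```python
import math

def ceiling_metric(p, q):
    """Integer-valued distance on the digital plane: the ceiling of the
    Euclidean distance groups real distance values into unit-wide bands."""
    return math.ceil(math.dist(p, q))
```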
The face-centered cubic lattice and its analogues in dimensions 4 and 5 are known to solve the sphere-packing problem in those dimensions. This property makes these lattices important for image processing in situations where high angular resolution or a good approximation to Euclidean distance is required. Since the approximation to Euclidean distance is better for these lattices than for the standard Cartesian lattice, it is necessary to provide a distance measure for them. We show how to construct two different metrics for the face-centered cubic lattice, and how these metrics can be extended to higher dimensions.
Let d be a metric on the n-D digital space Z^{n}. The hypersphere H_{d}(r) of radius r (r an integer) centered at the origin is defined as H_{d}(r) = {x : x ∈ Z^{n} and d(x) ≤ r}. For example, Das and Chatterji studied the structure, volume and surface of such digital hyperspheres for the m-neighbor distance d^{n}_{m}. As a generalization of d^{n}_{m}, the N-sequence distance d(B) was proposed with the contention that it would define hyperoctagons in n-D. However, the hyperoctagonality of the d(B)'s has so far been established only for the special cases of 2-D and 3-D and for the d^{n}_{m}'s in n-D. In this paper we explore the structure of the hyperspheres of the d(B)'s in n-D to show that they truly are hyperoctagons. In particular we derive a formula to compute the corners of such hyperoctagons given B and r.
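For readers unfamiliar with d^{n}_{m}, a minimal sketch follows, assuming the commonly cited closed form max(L_∞, ceil(L_1 / m)); that formula is an assumption here, not quoted from this paper. At m = 1 it reduces to the city-block distance, at m = n to the chessboard distance.

```python
import math

def m_neighbor_distance(x, y, m):
    """m-neighbor distance d^n_m between integer points x and y,
    assuming the closed form max(L_inf, ceil(L1 / m))."""
    diffs = [abs(a - b) for a, b in zip(x, y)]
    return max(max(diffs), math.ceil(sum(diffs) / m))
```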
The notion of m-neighbor distance d^{n}_{m}, 1 ≤ m ≤ n, m integer, in n-D digital geometry is extended in this paper to n-D real space under the name real m-neighbor distance δ^{n}_{m}. Complete analyses of the hyperspheres H(m,n;r) of δ^{n}_{m} are carried out to show that the maxima of the absolute and relative errors between this metric and the Euclidean norm E_{n} are minimized at certain extreme symmetric points on the hypersphere. The coherence between these results and those already available in the digital domain is noted, projecting δ^{n}_{m} as a powerful tool for metric analyses in digital geometry. The paper also makes fundamental contributions to the study of non-Euclidean metric spaces, extending the L_{1} = δ^{n}_{1} and L_{∞} = δ^{n}_{n} norms in a natural yet non-Minkowski way. Finally it is shown that the real m-neighbor distance has direct applications in scheduling problems.
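A plausible sketch of the real extension, assuming δ^{n}_{m} is d^{n}_{m} with the integer ceiling dropped (an assumption for illustration, not the paper's definition): the formula then interpolates between L_1 at m = 1 and L_∞ at m = n, exactly as the abstract states.

```python
def real_m_neighbor(x, y, m):
    """Real m-neighbor distance δ^n_m on R^n, assumed to be
    max(L_inf, L1 / m): L1 when m = 1, L_inf when m = n."""
    diffs = [abs(a - b) for a, b in zip(x, y)]
    return max(max(diffs), sum(diffs) / m)
```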
Given a convex polygon P, an m-envelope is a convex m-sided polygon that contains P. Given any convex polygon P and any sequence of m ≥ 3 angles A = (α_{1}, α_{2}, ..., α_{m}), we consider the problem of computing the minimum-area m-envelope for P whose counterclockwise sequence of exterior angles is given by A. We show that such envelopes can be computed in O(nm log m) time. The main result on which the correctness of the algorithm rests is a flushness condition stating that, for any locally minimum enclosure with specified angles, one of its sides must be collinear with one of the sides of P.
In this paper, data reduction for the acquired contour is discussed. Lyche's knot-removal algorithm is introduced. For the given sets of points on the contour, we reduce the number of data points without distorting the original contour by more than a given tolerance.
Given a polygonal curve P = [p_{1}, p_{2}, ..., p_{n}], the polygonal approximation problem considered in this paper calls for determining a new curve P' = [p'_{1}, p'_{2}, ..., p'_{m}] such that (i) m is significantly smaller than n, (ii) the vertices of P' are a subset of the vertices of P, and (iii) any line segment [p'_{A}, p'_{A+1}] of P' that substitutes for a chain [p_{B}, ..., p_{C}] in P is such that for all i with B ≤ i ≤ C, the approximation error of p_{i} with respect to [p'_{A}, p'_{A+1}], according to some specified criterion and metric, is less than a predetermined error tolerance. Using the parallel-strip error criterion, we study the following problems for a curve P in R^{d}, where d ≥ 2: (1) minimize m for a given error tolerance, and (2) given m, find the curve P' that has the minimum approximation error over all curves that have at most m vertices. These problems are called the min-# and min-ε problems, respectively. For R^{2} and with any one of the L_{1}, L_{2} or L_{∞} distance metrics, we give algorithms that solve the min-# problem in O(n^{2}) time and the min-ε problem in O(n^{2} log n) time, improving the best known algorithms to date by a factor of log n. When P is a polygonal curve in R^{3} that is strictly monotone with respect to one of the three axes, we show that if the L_{1} or L_{∞} metric is used then the min-# problem can be solved in O(n^{2}) time and the min-ε problem can be solved in O(n^{3}) time. All our algorithms exhibit O(n^{2}) space complexity.
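The parallel-strip criterion and vertex-subset constraint can be sketched as follows. The greedy routine below is only a heuristic illustration of the problem setup, not the paper's optimal O(n^2) min-# algorithm: it extends each output segment as far as all skipped vertices stay within the strip tolerance.

```python
import math

def strip_error(p, a, b):
    """Parallel-strip criterion: perpendicular distance from point p
    to the line through segment endpoints a and b."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    num = abs((by - ay) * (px - ax) - (bx - ax) * (py - ay))
    return num / math.hypot(bx - ax, by - ay)

def greedy_min_vertices(P, eps):
    """Greedy vertex-subset approximation of polygonal curve P: from each
    kept vertex, jump to the farthest vertex whose skipped chain stays
    within eps (heuristic sketch; the exact min-# solution may use fewer
    vertices in general)."""
    out, i = [0], 0
    while i < len(P) - 1:
        j = len(P) - 1
        while j > i + 1 and any(strip_error(P[k], P[i], P[j]) > eps
                                for k in range(i + 1, j)):
            j -= 1
        out.append(j)
        i = j
    return [P[k] for k in out]
```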
In an attempt to overcome the inefficiency of handling data transfer operations over long distances, mesh-connected computers have recently been augmented by the addition of various types of bus systems. One of the most efficient such augmented computer systems is referred to as the reconfigurable mesh, that is, a mesh-connected computer overlaid with a reconfigurable bus system. In this paper we propose constant-time algorithms on reconfigurable meshes for a number of important computational geometry tasks relevant to computer vision. These include testing an arbitrary polygon for convexity, computing the union and intersection of two convex polygons, testing whether two convex polygons are separable and, if so, constructing a separating line, solving the polygon containment problem for convex polygons, and others. Our solutions rely on a recently developed VLSI-optimal sorting algorithm for reconfigurable meshes and on a number of novel data movement techniques. The proposed algorithms translate immediately into constant-time algorithms that work on binary images.
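As a point of reference for one of the tasks listed above, here is a plain sequential convexity test (the cross-product sign test); the paper's contribution is performing such tests in constant time on the reconfigurable mesh, which this sketch does not attempt to reproduce.

```python
def is_convex(poly):
    """Test a simple polygon (vertex list in order) for convexity by
    checking that consecutive edge cross products never change sign."""
    n, sign = len(poly), 0
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        cx, cy = poly[(i + 2) % n]
        cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
        if cross:
            if sign and (cross > 0) != (sign > 0):
                return False  # turn direction flipped: a reflex vertex
            sign = cross
    return True
```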
The indecomposable sets are those that cannot be expressed as a Minkowski sum in any nontrivial manner. In this paper we concentrate on the indecomposability problem for sets in the domain of binary images. We show that it is possible to express a binary image as a hypercomplex algebraic number. More interestingly, if we restrict our domain of binary images, then Minkowski addition ⊕ (also called dilation) turns out to be the addition of two such hypercomplex numbers. In the process, the indecomposability problem is transformed into a number-theoretic problem. As a by-product, our treatment of the problem produces an efficient algorithm for computing the Minkowski addition of two binary images.
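For reference, the Minkowski addition being discussed is simply the set of all pairwise vector sums; a brute-force sketch over pixel coordinate sets (the paper's hypercomplex-number algorithm is more efficient than this):

```python
def minkowski_sum(A, B):
    """Minkowski sum (dilation) of two binary images represented as sets
    of (x, y) pixel coordinates: every pairwise vector sum."""
    return {(ax + bx, ay + by) for ax, ay in A for bx, by in B}
```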
The Voronoi diagram of a sample set obtained from a shape boundary can be analyzed to perform morphological erosions, dilations, openings, and closings of the shape with a scaled unit disk as the structuring element. These operations collectively comprise isotropic morphology: the fundamental morphological operations with a parameterized disk operator. The isotropic morphology operations are the basis of multi-scale morphological shape analysis. For instance, features obtained from a series of openings and closings by disks of varying size can be used to characterize a shape over a range of scales. In general, the isotropic morphology operations are a significant class of operations with broad potential applications. The new, Voronoi-diagram-based isotropic morphology algorithm has four significant advantages over existing algorithms: (1) the Voronoi diagram need only be computed once, and then an entire series of scale-based analyses can be performed at low incremental cost; (2) the time/space complexity of the algorithm is independent of the disk radius and depends only on the number of boundary samples; (3) the scale parameter (disk radius) can assume non-integral values; (4) the implied metric is the Euclidean metric, rather than an approximation thereof.
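To make advantages (3) and (4) concrete, here is the naive baseline the Voronoi approach improves upon: a brute-force dilation by a true Euclidean disk whose radius need not be an integer. Its cost grows with the disk radius, exactly the dependence the paper's algorithm avoids.

```python
import math

def disk_dilation(points, r):
    """Dilate a set of pixels by a Euclidean disk of (possibly
    non-integral) radius r. Brute force: O(|points| * r^2)."""
    out, rr = set(), int(math.ceil(r))
    for (x, y) in points:
        for dx in range(-rr, rr + 1):
            for dy in range(-rr, rr + 1):
                if dx * dx + dy * dy <= r * r:
                    out.add((x + dx, y + dy))
    return out
```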
Melter and Rosenfeld posed the following question: if a continuous line is digitized and a least-squares line fit (a straight line that minimizes the sum of squared distances of all points from the line) is applied to the set of points that is the image of the given line, can the original line be recovered? In this paper we prove that distinct digital line segments on a given interval correspond to distinct least-squares line fits. We then give a new simple representation (x_{1}, n, b_{0}, b_{1}) of a digital line segment, where x_{1} and n are the x-coordinate of the left endpoint and the number of digital points, respectively, while b_{0} and b_{1} are the coefficients of the least-squares line fit Y = b_{0} + b_{1}X for the given digital line segment. An O(nK)-time (linear in practice) algorithm for obtaining a digital line segment from its least-squares line fit is described, where K is the number of digits of accuracy in the slope.
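The coefficients b_0 and b_1 in the representation above come from an ordinary least-squares fit; a minimal sketch via the normal equations, assuming vertical (y-direction) residuals are what is minimized:

```python
def least_squares_fit(points):
    """Ordinary least-squares line Y = b0 + b1*X through a list of
    (x, y) digital points, via the closed-form normal equations."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b0 = (sy - b1 * sx) / n
    return b0, b1
```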
Study of digital plane segments (dps) is fundamental to three dimensional digital geometry. In this paper we present an optimal algorithm to characterize a given set S of 3-D digital points as a dps and to construct its upper supporting plane. We have also formulated a number of area estimators for dps's and have measured their performances analytically and experimentally.
Assume that a black/white n × n image is stored one pixel per element of a vector of length n^{2}. We consider determining some characteristics of such images using the scan model of parallel computation. The scan model, introduced by G.E. Blelloch, is a single-instruction multiple-data (SIMD) vector model of computation. The primitive operations of the model work on vectors (one-dimensional arrays) of values and are of three types: elementwise arithmetic and logical operations, permutation operations, and scan operations, a type of prefix computation (a scan operation takes a binary associative operator ⊗ and a vector [a_{1}, ..., a_{n}] and returns the vector [a_{1}, a_{1}⊗a_{2}, ..., a_{1}⊗a_{2}⊗...⊗a_{n}]). We show that many important characteristics of binary images can be determined in constant time on the scan model of parallel computation. These include convex hull construction; diameter, width, smallest enclosing box, perimeter and area; detection of digital convexity; parallel and point visibility (determining, for each pixel of the image, the portion that is visible, i.e. not obstructed by any black pixel, in a given direction from infinity or from a given point, respectively); the smallest, largest and Hausdorff distances between two images; linear separability of two images; and the recognition of digital lines, rectangles and arcs.
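The scan primitive defined in parentheses above can be written sequentially as follows (on the scan model it is a single constant-time primitive, not a loop):

```python
def scan(op, xs):
    """Inclusive scan (prefix computation): returns
    [x1, x1 op x2, ..., x1 op x2 op ... op xn]."""
    out, acc = [], None
    for x in xs:
        acc = x if acc is None else op(acc, x)
        out.append(acc)
    return out
```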
Connectivity preservation is a concern in the design of parallel reduction processes for 2D and 3D image processing algorithms. Algorithm designers need efficient and readily available connectivity preservation tests to prove algorithm correctness. Although efficient 2D tests are known, efficient 3D tests still need to be developed. We review earlier results on 2D connectivity preservation tests and demonstrate several 'design spaces' for classes of parallel reduction operators, including subiteration and subfields approaches. We then extend certain 'path-based' tests from the 2D to the 3D case and show efficient realizations for all but one test for fully parallel reduction operators. Very efficient tests are obtained for 3D subfields reduction operators.
Vectorizing the skeleton of a binary digital image is important for recognizing drawings and maps. A new vectorization method, based on the energy-minimization principle, is proposed in this paper. The method involves thinning, line-segment approximation, and geometric transformation. The skeleton line obtained after the line-segment approximation process includes shape distortion caused by thinning. In the geometric transformation process, the line segments are shifted to minimize an energy value, defined in the neighborhood of each corner, junction, and so on, in terms of smoothness and the difference between the line segments and the skeleton of the original image. Compared with conventional methods, the proposed method has the advantage of reducing various kinds of distortion.
The detection of lines and curves in binary and gray-level images is based on the Hough transform. This detection is a vital first step in the detection and classification of shapes in images. The Linear Array with Reconfigurable Global Bus System has been shown to efficiently accomplish a number of low- and medium-level image processing tasks, including the identification of connected components in an image. The efficient use of the Hough transform to further classify the components of an image is presented here for an n × n image. The Hough transform is accomplished in O(k_{θ} log n) time using n^{2} processing elements, where k_{θ} is the number of angles in the θ search space. We also examine a customization of the Hough transform specifically adapted to the Linear Array with Reconfigurable Global Bus System.
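For readers unfamiliar with the transform itself, a plain sequential sketch of line-detection voting in (θ, ρ) space follows; the paper's contribution is the parallel realization on the LARGBS, not this baseline.

```python
import math

def hough_lines(points, n, k_theta):
    """Accumulate Hough votes for line detection in an n x n binary image
    given as a set of black-pixel coordinates. Each pixel votes for the
    (theta index, rho) pairs of every line that could pass through it;
    collinear pixels pile votes into a shared bin."""
    acc = {}
    for t in range(k_theta):
        theta = math.pi * t / k_theta
        c, s = math.cos(theta), math.sin(theta)
        for (x, y) in points:
            rho = round(x * c + y * s)  # normal-form distance, quantized
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return acc
```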
Recovering the 3D shape of an object from its 2D image contour is an important problem in computer vision. This paper begins with a survey and critique of some of the previous shape from contour methods that have been proposed. The author then develops two object-based heuristics and uses them to recover shape constraints from contour. The basic assumption guiding these object-based heuristics is that objects tend to be highly structured and can be parametrized with respect to an object-centered, orthogonal coordinate system. The structured nature of objects is the motivation for the non-accidental alignment criterion: parallel lines within the object's bounding contour are significant and are related to the 3D object-centered coordinate system. The regularity and symmetry inherent in many man-made objects, coupled with a human preference for an orthogonal perceptual space, is the motivation for the orthogonal basis constraint: an oblique set of coordinate axes in the image is presumed to be the projection of an orthogonal set of coordinate axes in the scene. These object-based heuristic methods are demonstrated on both real and synthetic image contours.
The literature on computer vision contains hundreds of references to thinning or skeletonizing, but the only consistent definition for what is meant by a digital skeleton uses the medial axis transform. Most vision researchers would agree that the MAT does not yield an ideal, or in some cases even acceptable, skeleton. For example, single-pixel irregularities can produce gross changes in an otherwise simple skeleton. The problem of defining what is meant by skeleton and skeletal pixel is one that has rarely been addressed, but seems crucial. Also, a great amount of past effort has gone into speeding up the thinning process without as much attention to the quality of the result, possibly because no good definition of a skeleton exists. By looking at the extensive literature on the subject, some common properties of a good skeleton can be collected, but the process by which the skeleton is found should initially be ignored. This article will describe the properties of the skeleton of a binary object, both historically and by introducing new suggestions. Methods of producing skeletons from this definition will then be discussed, and simple metrics will be used to characterize the 'goodness' of the skeletons. The basic idea is that a skeleton is a global property of a binary object, and that the boundary should be used to locate the skeletal pixels.
The notion of topological spatial relations is introduced. It corresponds to the topological type of a pair of intersecting sets in a topological space. The previously introduced 4-intersection (boundary-boundary, boundary-interior, interior-boundary, and interior-interior), a topological invariant of topological spatial relations, is further refined by considering the number of components and the dimension of each component in each of the four intersections. Properties of these invariants are examined. We show how the topological spatial relations between two 2-disks in the plane can be classified using these invariants.
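For a finite (cell-set) approximation, the basic 4-intersection invariant reduces to the emptiness pattern of four set intersections; a minimal sketch, with regions represented as sets of cells:

```python
def four_intersection(bd_a, int_a, bd_b, int_b):
    """Emptiness pattern of the 4-intersection of two regions given as
    cell sets: (boundary-boundary, boundary-interior, interior-boundary,
    interior-interior). Each flag is True when the intersection is
    non-empty; the pattern identifies the topological spatial relation."""
    return (bool(bd_a & bd_b), bool(bd_a & int_b),
            bool(int_a & bd_b), bool(int_a & int_b))
```

For example, a region A properly contained in the interior of B yields the pattern (False, True, False, True).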
In many cases much more information can be extracted from two different images of a common object scene than from just a single image. One important example is the derivation of the 3-D structure of the scene from a stereo pair of images.
Stereo vision is the use of two images of an object to determine its three-dimensional position. In this paper, an analysis is presented that explicitly depicts the relationship between a three-dimensional point position and its corresponding views in a skewed-focal-axes stereo imaging system. The correspondence between a parallel stereo system and a skewed stereo system is described. The computational errors of range measurement for several skew angles are presented, given pixel quantization errors. In addition, the probability distribution of error within a given range is obtained.
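As the baseline case against which the skewed-axes geometry is compared, the parallel-axes range equation Z = f·B/d can be sketched directly (variable names here are illustrative, not the paper's notation):

```python
def stereo_depth(x_left, x_right, focal, baseline):
    """Range from disparity for a parallel-axes stereo rig: Z = f*B/d,
    where d = x_left - x_right is the horizontal disparity in pixels,
    f the focal length in pixels, and B the baseline in scene units."""
    disparity = x_left - x_right
    return focal * baseline / disparity
```

Because disparity appears in the denominator, a fixed pixel quantization error produces range errors that grow with distance, which is why the error analysis above is tied to quantization.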
Estimating motion/geometry parameters using optical flow information is an important research issue in computer vision. A two-stage approach is often taken: first, computing a flow field, and then estimating the motion/geometry parameters based on this field. The major shortcomings here are the high computational requirements and the low estimation accuracy, due to the artificial imposition of 'smoothness constraints' for flow computability. A one-stage approach is presented in this paper. The basic idea is that with a rigid imaged surface, image flow arises solely because of the sensor's self-motion in both rotation and translation relative to the imaged world. By assuming simple sensor and world models, we can establish constraint relationships between the motion/geometry parameters and the image intensity measurements, and then estimate, in a single stage, the optimal parameters that account for the observed image flow, in a modern least-squares estimation framework. This new approach simultaneously reduces computational requirements and improves overall estimation accuracy. Using computer-generated imagery and actual video imagery, we demonstrate system performance across a range of system design and operational parameters.
The paper presents a geometric scheme suggested by the segmentation process on digitized images. The scheme constitutes a general intermediate representation satisfying both descriptive and procedural adequacy. It consists of: general primitives, which allow basic shape description; the k-pendulum description model, which allows all polygonal shapes to be described at any degree of generality; and a set of operators which act on polygons, transforming the image.
In this paper, we study the families of dynamical systems which can be applied to learning. We first introduce three classifications of dynamical systems, based on attractor topology, space complexity of the parameter space, and information capacity. Then, we demonstrate how each class of dynamical systems can be applied to learning. Thirdly, we show that there are three different types of geometry related to vision: image geometry, information geometry, and dynamical system geometry. Finally, we discuss the relations between vision and all three geometries. This study helps in understanding the capabilities and limitations of a family of dynamical systems. Digital image geometry has been studied for a long time. Information geometry, recently proposed by Amari, studies the manifold formed by the free parameters of dynamical systems and its relation to vision. Dynamical system geometry abstractly studies the behavior of dynamical systems, independently of any detailed model such as neural networks or iterated function systems. A model, such as a neural network, usually occupies a subspace of the space of dynamical systems. In this paper, we also explore the relations among these three geometries.
A unique parameterization of the perspective projections in all whole-numbered dimensions is reported. The algorithm for generating a perspective transformation from parameters and for recovering parameters from a transformation is a modification of the Givens orthogonalization algorithm. The algorithm for recovering a perspective transformation from a perspective projection is a modification of Roberts' classical algorithm. Both algorithms have been implemented in Pop-11 with call-out to the NAG Fortran libraries. Preliminary Monte Carlo tests show that the transformation algorithm is highly accurate, but that the projection algorithm cannot recover magnitude and shear parameters accurately. However, there is reason to believe that the projection algorithm might improve significantly with the use of many corresponding points, or with multiple perspective views of an object. Previous parameterizations of the perspective transformations in the computer graphics and computer vision literature are discussed.
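The building block of Givens orthogonalization is the plane rotation that annihilates one matrix entry at a time. The sketch below shows only this standard ingredient, not the paper's modification: a rotation G = [[c, s], [-s, c]] chosen so that applying it to the vector (a, b) zeroes the second component.

```python
import math

# Hedged illustration of the standard Givens rotation used in
# orthogonalization: choose c = cos(theta), s = sin(theta) so that
# c*a + s*b = r and -s*a + c*b = 0.
def givens(a, b):
    """Return (c, s, r) with [c s; -s c] @ [a; b] = [r; 0]."""
    if b == 0.0:
        return 1.0, 0.0, a
    r = math.hypot(a, b)
    return a / r, b / r, r
```

Sweeping such rotations over the columns of a matrix yields its QR factorization, which is the sense in which a transformation can be decomposed into, and regenerated from, rotation-like parameters.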
Skeletonization is a process by which a binary pattern of a character is transformed into another binary pattern consisting of its skeleton, in order to reduce and extract information for further transmission or recognition. It is very important to preserve the shape of the original pre-thinned pattern in order to keep the necessary information for further processing. However, the skeletonizing transformation generally results in a distorted representation of the original pattern, and usually affects the possibility of correct recognition of the corresponding character. In this paper, a systematic algorithm is applied to detect and correct the distorted parts of the skeleton by geometrical analysis. Hence, a more accurate stroke segmentation can be achieved, telling how the lines (strokes) cross (overlap) each other. Our method significantly reduces the rejection rate and substitution rate for character recognition.
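One common first step in such stroke analysis is classifying skeleton pixels by neighbour count, which localizes the crossing regions where distortion typically occurs. The sketch below is a generic illustration under a 4-connectivity assumption, not the paper's geometric correction algorithm.

```python
# Hedged sketch: classify skeleton pixels by their number of 4-connected
# skeleton neighbours. With 4-connectivity: 1 -> stroke end point,
# 2 -> interior stroke point, >= 3 -> candidate crossing point where
# strokes overlap.
def classify_skeleton(img):
    """img: 0/1 grid (list of lists). Returns {(r, c): neighbour_count}
    for every skeleton (value 1) pixel."""
    rows, cols = len(img), len(img[0])
    out = {}
    for r in range(rows):
        for c in range(cols):
            if not img[r][c]:
                continue
            n = 0
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                if 0 <= r + dr < rows and 0 <= c + dc < cols:
                    n += img[r + dr][c + dc]
            out[(r, c)] = n
    return out
```

On a plus-shaped skeleton, for example, the four arm tips classify as end points, the arm interiors as ordinary stroke points, and the centre as a crossing candidate; it is at such candidates that a geometric correction of thinning distortion would be applied.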