This paper summarizes the principle and the main steps in the design of a Visual Basic program that computes the convexity and identifies the boundary vectors of a set of ten 5D real vectors generated by a random numerical process. It then reports in detail many interesting, novel 5D-geometrical properties found experimentally with this program. These include the ring structures of the boundary vectors, topological symmetries of the boundaries, the different ways of joining the boundary planes (4D hyperplanes) in 5-space, and the topological order of the extreme edges and of the boundary planes located around the cone.
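As a rough illustration of the underlying computation (not the authors' Visual Basic program), the following sketch generates ten random 5D vectors and identifies the boundary (extreme) ones using SciPy's Qhull-based convex hull:

```python
# A minimal sketch, assuming random uniform data: generate ten random 5D vectors
# and identify which of them are boundary (extreme) vectors of their convex hull.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
points = rng.uniform(-1.0, 1.0, size=(10, 5))    # ten random 5D real vectors

hull = ConvexHull(points)
boundary = sorted(hull.vertices)                  # indices of extreme vectors
interior = sorted(set(range(len(points))) - set(boundary))

print("boundary (extreme) vectors:", boundary)
print("interior vectors:          ", interior)
print("number of boundary facets (4D hyperplanes):", len(hull.simplices))
```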
We introduce the perspex machine, which unifies projective geometry and the Turing machine, resulting in a supra-Turing machine. Specifically, we show that a Universal Register Machine (URM) can be implemented as a conditional series of whole-numbered projective transformations. This leads naturally to the suggestion that it might be possible to construct a perspex machine as a series of pin-holes and stops. A rough calculation shows that an ultraviolet perspex machine might operate at up to the petahertz range of operations per second. Surprisingly, we find that perspex space is irreversible in time, which might make it a candidate for an anisotropic spacetime geometry in physical theories. We make a bold hypothesis that the apparent irreversibility of physical time is due to the random nature of quantum events, but suggest that a sum over histories might be achieved by sampling fluctuations in the direction of time flow. We propose an experiment, based on the Casimir apparatus, that should measure fluctuations of time flow with respect to time duration, if such fluctuations exist.
The rational, general-linear transformations can be computed exactly using rational, matrix arithmetic. A subset of these transformations can be expressed in QR form as the product of a rational, orthogonal matrix Q and a rational, triangular matrix R of homogeneous co-ordinates. We present here a derivation of a half-tangent formula that encodes all of the rational rotations. This presentation involves many fewer axioms than in previous, unpublished work and reduces the number of transrational numbers in the total trigonometric functions from three to two. The practical consequence of this is that rotational sensors, such as computer vision cameras, gyroscopes, lidar, radar, and sonar can all be calibrated in terms of rational half-tangents, hence all subsequent, general-linear, numerical computations can be carried out exactly. In this case the only error is sensor error, so computations can be carried out precisely to the physical limits of the sensor.
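The exactness claim can be illustrated with the standard rational half-tangent parameterization (a sketch only; the paper's transrational extension, which handles the exceptional angle where the half-tangent is undefined, is not reproduced):

```python
# Every rational half-tangent t = tan(theta/2) yields an exactly rational
# rotation via cos = (1 - t^2)/(1 + t^2) and sin = 2t/(1 + t^2), so all
# subsequent linear algebra can be carried out exactly in rational arithmetic.
from fractions import Fraction

def rational_rotation(t: Fraction):
    """2x2 rotation matrix built from the rational half-tangent t."""
    c = (1 - t * t) / (1 + t * t)
    s = (2 * t) / (1 + t * t)
    return [[c, -s], [s, c]]

R = rational_rotation(Fraction(1, 2))    # an exactly representable rotation
c, s = R[0][0], R[1][0]
assert c * c + s * s == 1                # exact: lies exactly on the unit circle
print(R)                                 # [[3/5, -4/5], [4/5, 3/5]]
```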
Artificial neural network models are typically useful for pattern recognition and the extraction of important features in large data sets. These models are implemented in a wide variety of contexts and with diverse types of input-output data. The underlying mathematics of supervised training of neural networks is ultimately tied to the ability to approximate the nonlinearities that are inherent in the network's generalization ability. The quality and availability of sufficient data points for training and validation play a key role in the generalization ability of the network. A potential domain of application of neural networks is the analysis of subjective data, such as in consumer science, affective neuroscience, and the perception of chemical senses. In applications of ANNs to subjective data, it is common to rely on knowledge of the science and context for data acquisition, for instance as a priori probabilities in the Bayesian framework. In this paper, we discuss the circumstances that create challenges for the success of neural network models in subjective data analysis, such as sparseness of data and the cost of acquiring additional samples. In particular, in the case of affect and the perception of chemical senses, we suggest that the inherent ambiguity of subjective responses can be offset by a combination of human and machine expertise. We propose a method of pre- and post-processing for blind analysis of data that relies on heuristics from human performance in the interpretation of data. In particular, we offer an information-theoretic smoothing (ITS) algorithm that optimizes the geometric visualization of multi-dimensional data and improves human interpretation of the input-output view of neural network implementations. The pre- and post-processing algorithms and ITS are unsupervised. Finally, we discuss the details of an example of blind data analysis of actual taste-smell subjective data, and demonstrate the usefulness of PCA for dimensionality reduction, as well as that of ITS.
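The ITS algorithm itself is not specified in the abstract; the following is only a sketch of the accompanying PCA pre-processing step, on a hypothetical matrix of subjective responses:

```python
# A minimal sketch of PCA-based dimensionality reduction prior to visualization
# and network training. The data shape (120 panelists x 24 rating scales) is
# illustrative, not the paper's taste-smell data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
responses = rng.normal(size=(120, 24))      # placeholder subjective-response matrix

pca = PCA(n_components=3)
reduced = pca.fit_transform(responses)      # low-dimensional view for interpretation

print("explained variance ratio:", pca.explained_variance_ratio_)
print("reduced shape:", reduced.shape)      # (120, 3)
```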
The purpose of this paper is two-fold: (a) to demonstrate that pattern recognition methods from image processing lead to a noticeable improvement in bioinformatics, specifically in the analysis of microarray data on gene expression correlated with cancer; (b) to bring the utility of geometric methods in pattern recognition to the attention of researchers in bioinformatics and molecular biologists. Our method of analysis is seen to provide a substantial improvement over the latest published methods in bioinformatics. We also hope that, in the process, we provide mathematical insight into the problems of microarray data analysis from the point of view of signal processing and learning theory.
The Marching Cubes algorithm is a popular method for rendering 3D binary images or, more generally, iso-surfaces in 3D digital gray-scale images. Yet the original version does not give satisfactory results from a topological point of view; more precisely, the extracted mesh is not a coherent surface. This problem has been solved in the framework of digital topology, through the use of different connectivities for the object and the background and the definition of ad hoc templates. Another framework for discrete topology is based on a heterogeneous grid (introduced by E. D. Khalimsky), which is an order, or a discrete topological space in the sense of P. S. Alexandroff. These spaces possess nice topological properties; in particular, the notion of a surface has a natural definition.
This article introduces a Marching Chains algorithm for the 3D Khalimsky grid H3. Given an object X that is a subset of H3, we define, in a natural way, the frontier of X, which is also an order. We prove that this frontier order is always a union of surfaces. We then show how to use the frontier order to design a Marching Cubes-like algorithm. We discuss the implementation of such an algorithm and show the results obtained on both artificial and real objects.
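For reference, the classical Marching Cubes step that the paper improves upon can be run as follows (a sketch using scikit-image's implementation on a synthetic binary volume; the Marching Chains algorithm on H3 is not reproduced here):

```python
# Run the classical Marching Cubes on a synthetic binary object (a ball in a
# 64^3 volume) and report the size of the extracted triangle mesh.
import numpy as np
from skimage.measure import marching_cubes

z, y, x = np.mgrid[:64, :64, :64]
volume = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2).astype(float)

verts, faces, normals, values = marching_cubes(volume, level=0.5)
print(f"{len(verts)} vertices, {len(faces)} triangular faces")
```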
In the fields of digital image processing, computer vision, and pattern recognition, the application of edge detection algorithms is important for the extraction of multiple touching regions in images. However, the enclosure of the regions with discrete contours is in general not straightforward. A novel region-enclosing contour method is therefore proposed in this paper. Topics such as (dilated) contour enclosure of edge pixels in 2D binary images, geometric thinning (skeletonization) of shapes, gap closure in contour networks, and down-sampling of contour-supporting point sets are discussed, and new techniques are proposed for the first time. Most of the newly introduced techniques depend heavily on the application of a Delaunay tessellation. The resulting set of novel shape processing tools is applied here to an image taken from a metal surface in electron backscatter diffraction experiments in order to provide an accurate characterization of grain boundaries.
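As a rough illustration of one Delaunay-based ingredient mentioned above (gap closure in contour networks), the sketch below links edge pixels whose Delaunay edges are shorter than a hypothetical gap threshold; it is not the paper's actual procedure:

```python
# Close small gaps between edge pixels by keeping only the short edges of a
# Delaunay tessellation of those pixels. Threshold and pixel data are illustrative.
import numpy as np
from scipy.spatial import Delaunay

edge_pixels = np.array([[0, 0], [1, 0], [2, 1], [5, 1], [6, 2], [7, 2]], float)
tri = Delaunay(edge_pixels)

max_gap = 3.5                      # hypothetical gap-closure distance (pixels)
links = set()
for simplex in tri.simplices:      # collect Delaunay edges shorter than max_gap
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        if np.linalg.norm(edge_pixels[a] - edge_pixels[b]) <= max_gap:
            links.add((a, b))
print(sorted(links))
```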
The general problem of single-view recognition is central to many image understanding and computer vision tasks. In previous work, we have shown how to approach the general problem of recognizing three dimensional geometric configurations from a single two dimensional view. Our methods make use of techniques from algebraic geometry, notably the theory of correspondences, and a novel "equivariant" geometric invariant theory. The machinery gives us a way to understand the relationship that exists between the 3D geometry and its "residual" in a 2D image. Exploiting this, one can compute a set of fundamental equations in the 3D and 2D invariants, which generate the ideal of the correspondence, and which completely describe the mutual 3D/2D constraints. We have chosen to call these equations "object/image equations". They can be used in a number of ways. For example, from a given 2D configuration, we can determine a set of non-linear constraints on the geometric invariants of 3D configurations capable of producing the given 2D configuration, and thus arrive at a test for determining the object being viewed. Conversely, given a 3D geometric configuration (features on an object), we can derive a set of equations that constrain the images of that object.
One difficulty has been that the usual numerical invariants get expressed as rational functions of the geometric parameters. As such they are not always defined. Moreover their definition tends to rely on special position assumptions that treat the features asymmetrically. This leads to degeneracies and numerical difficulties in algorithms based on these invariants. We show how to replace these invariants by certain toric subvarieties of Grassmannians where the object/image equations become "resultant-like" expressions for the existence of a non-trivial intersection of these subvarieties with certain Schubert varieties in the Grassmannian. We also explain how to obtain a shape space by making use of the Chow coordinates of these varieties. We call this approach the "global invariant" approach. It greatly increases the robustness and numerical stability of the methods.
Finally we will show how the recognition problem for point configurations in full perspective can be decoupled into a linear and non-linear part, and we will use this decoupling to give the most general solution possible to this recognition problem. We also will consider various natural metrics on our analog of the shape space for this problem and show how they can be derived from certain natural metrics on the Grassmannians, namely the Fubini-Study metrics.
A polygon P is said to be rectilinear if all interior angles of P belong to the set {π/2, 3π/2}. In this paper we establish the mapping
R(P) = (π/(π − 2√2)) · ( max_{α∈[0,2π]} P1(P,α)/(√2 · P2(P)) − (2√2)/π ),
where P is an arbitrary polygon, P2(P) denotes the Euclidean perimeter of P, while P1(P,α) is the perimeter, in the sense of the l1 metric, of the polygon obtained by rotating P by the angle α about the origin. It turns out that R(P) can be used as an estimate for the rectilinearity of P. Precisely, R(P) has the following desirable properties:
- any polygon P has the estimated rectilinearity R(P) which is a number from [0,1];
- R(P)=1 if and only if P is a rectilinear polygon;
- inf_{P∈Π} R(P) = 0, where Π denotes the set of all polygons;
- a polygon's rectilinearity measure is invariant under similarity transformations.
The proposed rectilinearity measure can be an alternative to the recently described measure R1(P) of Ref. 1. These rectilinearity measures are essentially different, since there is no monotonic function f such that f(R1(P)) = R(P) holds for all P ∈ Π.
A simple procedure for computing R(P) for a given polygon P is described as well.
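A minimal sketch of the computation implied by the definition above follows; it sweeps α densely rather than using the paper's simple exact procedure, and uses an axis-aligned rectangle as a check case:

```python
# Estimate R(P) by sampling the rotation angle alpha and taking the maximum of
# P1(P, alpha) / (sqrt(2) * P2(P)). A rectilinear polygon should score ~1.
import numpy as np

def perimeters(vertices, alpha):
    """Return (l1 perimeter of P rotated by alpha, Euclidean perimeter of P)."""
    c, s = np.cos(alpha), np.sin(alpha)
    pts = vertices @ np.array([[c, -s], [s, c]]).T
    edges = np.roll(pts, -1, axis=0) - pts
    return np.abs(edges).sum(), np.linalg.norm(edges, axis=1).sum()

def rectilinearity(vertices, samples=3600):
    ratios = []
    for alpha in np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False):
        p1, p2 = perimeters(vertices, alpha)
        ratios.append(p1 / (np.sqrt(2.0) * p2))
    return (np.pi / (np.pi - 2.0 * np.sqrt(2.0))) * (max(ratios) - 2.0 * np.sqrt(2.0) / np.pi)

square = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])
print(rectilinearity(square))   # ~1.0 for a rectilinear polygon
```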
Quality evaluations in agriculture, food processing, and medical imaging often require accurate, non-destructive surface area and volume measurements. A computer vision technique has been developed to achieve this task for objects with irregular shapes. An imaging system was built to acquire and store images of multiple projections of the objects, taken at equal angular intervals. Image segmentation and tracing algorithms were implemented to provide the x and y coordinates of the boundary points for each image projection. These boundary points were then used to generate a three-dimensional wire-frame model. The result of surface fitting and approximation on the wire-frame model was used to calculate the object's surface area in this research. Object volume, although not covered in this research, can also be measured. Examples of agriculture and food processing applications using this vision system for surface area measurement are included, and two major challenges of this work, system calibration and surface approximation, are discussed.
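A small sketch of the final computation described above: once a triangulated wire-frame (or fitted surface mesh) is available, the surface area is the sum of the triangle areas. The tetrahedron below is placeholder data only:

```python
# Sum of triangle areas of a mesh: half the norm of each triangle's edge
# cross product, accumulated over all faces.
import numpy as np

def surface_area(vertices, triangles):
    v = np.asarray(vertices, dtype=float)
    a = v[triangles[:, 1]] - v[triangles[:, 0]]
    b = v[triangles[:, 2]] - v[triangles[:, 0]]
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1).sum()

verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
tris = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
print(surface_area(verts, tris))   # total area of a unit right-angle tetrahedron
```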
The problem of completeness for invariant size functions is studied. Families of size functions are introduced, which allow recognition of some classes of plane curves up to transformations of increasing generality.
We present a method for generating a three-dimensional underwater map in low-visibility conditions, using a diver with a wrist-mounted depth gauge as the digitizer. Although this sampling method can be sparse and prone to error, by extracting the depth profile from the dive computer and applying fractal image interpolation algorithms, detailed surface topology is reconstructed. This approach has utility for other seismic or remote sensing applications where two- or three-dimensional imagery is to be reconstructed from sparse sample points.
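The paper's fractal image interpolation algorithm is not reproduced here; as a loosely related illustration, the sketch below refines a sparse 1-D depth profile by midpoint displacement, a simple fractal interpolation scheme (all values are placeholders):

```python
# Refine a sparse 1-D depth profile by repeated midpoint insertion with damped
# random displacements. This stands in for fractal interpolation of sparse samples.
import numpy as np

def midpoint_displace(depths, levels=4, roughness=0.5, seed=0):
    rng = np.random.default_rng(seed)
    profile = np.asarray(depths, dtype=float)
    scale = np.ptp(profile) * roughness
    for _ in range(levels):
        midpoints = (profile[:-1] + profile[1:]) / 2.0
        midpoints += rng.normal(0.0, scale, size=midpoints.shape)
        refined = np.empty(2 * len(profile) - 1)
        refined[0::2] = profile
        refined[1::2] = midpoints
        profile, scale = refined, scale * roughness
    return profile

sparse_depths = [5.0, 7.5, 6.0, 9.0, 12.0]      # metres, illustrative only
print(midpoint_displace(sparse_depths).round(2))
```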
This paper gives a survey of shape representation techniques for discrete image analysis based on discrete geometry and topology. The relationships of such shape representations and their evaluation criteria are also shown.
The Edinburgh Mouse Atlas Project (EMAP) at the Medical Research Council (MRC) Human Genetics Unit (Edinburgh) is developing a spatio-temporal framework of mouse development from the single-cell stage through to birth. This consists of a time series of 3D (voxel) models of mouse embryos, which define the spatio-temporal framework onto which spatial data, such as the recognized anatomy and the patterns of gene expression from many embryos, are mapped. A multi-modal warping algorithm has been devised to achieve this mapping. Matching sections are found in the atlas (target) and experiment (source) images, from which tie-points are computed using a novel algorithm. Geometric models, in which connectivity is represented explicitly, are built by extracting maximal gradient edges from the source and target images. The source model is then progressively decomposed into smaller connected fragments, while each of the fragments is affine-registered to the target model using an iterative closest point (ICP) algorithm. From the target model, the decomposed fragments of the source model, and the associated affine transforms, tie-points are computed. These are then used to define a radial basis function warp transformation.
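As a minimal sketch of the last step only (not the EMAP pipeline), the following builds a radial basis function warp from a handful of hypothetical tie-point pairs using SciPy's RBFInterpolator:

```python
# Build a thin-plate-spline RBF warp from matched source/target tie-points and
# apply it to new coordinates. Tie-point values are placeholders, not EMAP data
# (requires SciPy >= 1.7 for RBFInterpolator).
import numpy as np
from scipy.interpolate import RBFInterpolator

source_ties = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5]], float)
target_ties = np.array([[1, 1], [11, 0], [0, 11], [12, 12], [6, 5]], float)

warp = RBFInterpolator(source_ties, target_ties, kernel="thin_plate_spline")
query = np.array([[5.0, 0.0], [2.5, 7.5]])
print(warp(query))          # warped positions of the query points
```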
Convex bodies are frequently used as atoms of more complex objects in computer vision. The aim of this paper is to present the Minkowski functionals as a tool for the shape analysis of single convex bodies. We define ratios of these functionals to construct a mapping from 3D bodies into the unit square, and can thus discriminate between filaments, pancakes, and ribbons.
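The abstract does not give the specific ratios; the sketch below follows one well-known construction of this kind, the Minkowski-functional "shapefinders" (planarity and filamentarity built from volume, surface area, and integral mean curvature), which may or may not coincide with the paper's mapping:

```python
# From volume V, surface area S and integral mean curvature M, form three length
# scales and map their contrasts (planarity, filamentarity) into the unit square.
# Values below are for simple analytic bodies, not the paper's data.
import numpy as np

def shape_coordinates(V, S, M):
    thickness = 3.0 * V / S           # length scales built from functional ratios
    breadth   = S / M
    length    = M / (4.0 * np.pi)
    planarity     = (breadth - thickness) / (breadth + thickness)
    filamentarity = (length - breadth) / (length + breadth)
    return planarity, filamentarity   # a point in the unit square

# A ball of radius r: V = 4/3*pi*r^3, S = 4*pi*r^2, M = 4*pi*r  ->  maps to (0, 0).
r = 2.0
print(shape_coordinates(4.0 / 3.0 * np.pi * r**3, 4.0 * np.pi * r**2, 4.0 * np.pi * r))
```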
A problem with wavefront reconstruction is that smaller details are often smeared out against a background of larger noise. A least-squares fit tries to distribute measurement errors over the entire region, so features on the scale of the RMS error tend to be obscured. We present a reconstruction algorithm and framework that redresses some of these problems: small patches are reconstructed and then joined together. We discuss how this is implemented.
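For context, a toy sketch of the global least-squares step referred to above: a 1-D phase profile is recovered from noisy slope measurements by solving the finite-difference system in the least-squares sense (the paper's patch-wise reconstruction and stitching are not shown):

```python
# Recover a 1-D phase profile from noisy slope (finite-difference) measurements
# by solving D*phi = s in least squares, with the piston (mean) pinned to zero.
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = np.linspace(0.0, 1.0, n)
true_phase = np.sin(2.0 * np.pi * x) + 0.05 * np.sin(20.0 * np.pi * x)

slopes = np.diff(true_phase) + rng.normal(0.0, 0.02, n - 1)   # noisy sensor data

# Difference operator D (n-1 x n) plus one row fixing the mean to remove piston.
D = np.diff(np.eye(n), axis=0)
A = np.vstack([D, np.full((1, n), 1.0 / n)])
b = np.concatenate([slopes, [0.0]])

phase, *_ = np.linalg.lstsq(A, b, rcond=None)
err = phase - (true_phase - true_phase.mean())
print("RMS reconstruction error:", np.sqrt(np.mean(err ** 2)))
```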
3-D image modeling of volumetric scattered data is in high demand for automated visual inspection and non-destructive testing. Scattered data are defined as a collection of data points with little specified connectivity between them. For example, ultrasonic data are quite different from cuberille MRI (magnetic resonance imaging) or CT (computed tomography) data. One possible way of interpolating volumetric scattered data is with a piecewise linear interpolation. The quality of a piecewise linear interpolation over tetrahedra depends on the specific tetrahedrization of the data points. Therefore, one main task is to provide the best 3-D domain consisting of tetrahedra. In this paper, we propose an approach to resolve the ambiguous cases that arise in the construction of the tetrahedral domain: the asymptotic decider criterion is used instead of the sphere criterion. The construction of the tetrahedral domain is based on whether or not connecting vertices are joined by a component of the tetrahedral domain. To apply the asymptotic decider criterion to tetrahedrization, we need to find the fourth point of a quadrilateral; the side-vertex method is used to approximate this fourth point. A method based on the asymptotic decider criterion for computing iso-surfaces of trivariate scattered data interpolation is discussed. The asymptotic decider criterion considers not only functional values but also positional data. The results of trivariate scattered data interpolation are visualized through iso-surface rendering. The visualization algorithm of this study was implemented on an O2 workstation from Silicon Graphics.
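As a baseline sketch of the piecewise linear interpolation the paper starts from, SciPy's LinearNDInterpolator tetrahedrizes scattered sites with a Delaunay (sphere-criterion) construction and interpolates barycentrically within each tetrahedron; the paper's asymptotic-decider tetrahedrization is not reproduced:

```python
# Piecewise linear interpolation of trivariate scattered data over a Delaunay
# tetrahedrization of the sample sites. Sites and values are synthetic.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)
sites = rng.uniform(0.0, 1.0, size=(200, 3))           # scattered 3-D sample sites
values = np.sin(np.pi * sites[:, 0]) * sites[:, 1]      # trivariate sample values

interp = LinearNDInterpolator(sites, values)
query = np.array([[0.5, 0.5, 0.5], [0.2, 0.8, 0.3]])
print(interp(query))    # NaN is returned outside the convex hull of the sites
```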
This paper deals with three-dimensional (3D) surface reconstruction from an unorganized set of points. Several methods have been developed in the literature, most of them based on the 3D Delaunay triangulation. We propose a new method that transforms a set of points into contours in 3D space. This transformation is interesting because the problem of surface reconstruction from contours has been widely studied. The triangulation process is composed of the following steps: slicing, 3D curve extraction, and tiling. The tiling step is an extension of a well-known algorithm generally used for connecting parallel adjacent planar slices. In order to evaluate the performance of our method, we compare it with two well-known methods based on the 3D Delaunay triangulation, the α-shape and the 3D Crust. We compare them with respect to complexity and visualization quality, together with two additional geometrical criteria: triangulation quality and degree of rugosity.
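As a rough illustration of the first (slicing) step only, the sketch below partitions an unorganized point set into slabs along one axis; the curve-extraction and tiling steps, and the data, are not the paper's:

```python
# Partition an unorganized 3-D point set into slabs along one axis, as a
# precursor to extracting a contour per slab and tiling adjacent contours.
import numpy as np

def slice_points(points, n_slices=8, axis=2):
    z = points[:, axis]
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    labels = np.clip(np.digitize(z, edges) - 1, 0, n_slices - 1)
    return [points[labels == k] for k in range(n_slices)]

rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3))                 # placeholder unorganized points
slabs = slice_points(cloud)
print([len(s) for s in slabs])                    # points per slab
```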
This paper presents a new technique for the reconstruction of surfaces from three-dimensional point-cloud data using Delaunay triangulation and octrees. The main idea is to sculpt the three-dimensional Delaunay triangulation, as in α-shapes, using octrees to remove inconsistent triangles and generate the desired surface. This technique can be used to reconstruct an object from a non-uniform density of data points.
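A rough sketch of the sculpting idea (with a simple longest-edge threshold standing in for the octree consistency test, which is not reproduced): discard large Delaunay tetrahedra and keep the triangles bounding exactly one surviving tetrahedron:

```python
# Sculpt a 3-D Delaunay triangulation: drop oversized tetrahedra, then extract
# the triangles that bound exactly one remaining tetrahedron. Data and the
# edge-length threshold are illustrative only.
import numpy as np
from itertools import combinations
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.uniform(size=(300, 3))                      # placeholder point cloud
tets = Delaunay(points)

def longest_edge(tet):
    return max(np.linalg.norm(points[a] - points[b]) for a, b in combinations(tet, 2))

kept = [tet for tet in tets.simplices if longest_edge(tet) < 0.2]

face_count = {}
for tet in kept:                                         # count face occurrences
    for face in combinations(sorted(tet), 3):
        face_count[face] = face_count.get(face, 0) + 1

surface = [face for face, n in face_count.items() if n == 1]
print(len(kept), "tetrahedra kept,", len(surface), "boundary triangles")
```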
Mesh surfaces are an important form of 3D object representation which has become increasingly significant in recent years due to evolving 3D acquisition techniques, computing power, and commercial needs. Feature edge extraction in polygonal meshes has numerous applications ranging from scene segmentation and object recognition to mesh compression. The method proposed in this paper for feature edge extraction in polygonal mesh surfaces follows the signal processing approach to edge detection in images at multiple scales and in the presence of noise. Namely, multiple smoothing iterations in the spatial domain are used to produce several versions of the original mesh at different scales, whereas the same topology is maintained throughout the scales. Edges are extracted at each scale by applying a differential operator to an underlying locally smooth surface represented by the mesh at that scale so as to evaluate the curvature at each vertex of the mesh. The proposed approach is based on the identification of multiple local support planes that facilitate the representation of an arbitrary mesh surface by a set of overlapping Monge patches. Such a representation enables the application of curvature estimation techniques commonly employed for range image analysis, and provides an alternative to Laplacian smoothing by local quadratic surface fitting. The proposed approach is suitable for irregular and non-dense meshes.
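As a toy illustration of two ingredients mentioned above, the sketch below applies one Laplacian smoothing pass to a tiny mesh and flags a feature edge by the dihedral angle between its adjacent faces; the paper's Monge-patch curvature estimation is not reproduced:

```python
# (1) One pass of Laplacian smoothing (average each vertex with its neighbours)
# to build a coarser scale, and (2) flag a feature edge by the dihedral angle
# between adjacent faces. The mesh and the 30-degree threshold are illustrative.
import numpy as np

verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 1], [0, 1, 0]], float)  # a folded quad
faces = np.array([[0, 1, 2], [0, 2, 3]])

def laplacian_smooth(verts, faces, lam=0.5):
    new = verts.copy()
    for v in range(len(verts)):
        nbrs = {j for f in faces if v in f for j in f if j != v}
        new[v] = (1 - lam) * verts[v] + lam * verts[list(nbrs)].mean(axis=0)
    return new

def face_normal(f):
    n = np.cross(verts[f[1]] - verts[f[0]], verts[f[2]] - verts[f[0]])
    return n / np.linalg.norm(n)

# Dihedral deviation across the edge shared by the two faces.
cosang = np.clip(np.dot(face_normal(faces[0]), face_normal(faces[1])), -1, 1)
angle = np.degrees(np.arccos(cosang))
print("dihedral deviation (deg):", round(angle, 1), "-> feature edge" if angle > 30 else "")
print(laplacian_smooth(verts, faces))
```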