It has become apparent that the compactly supported wavelet bases of Daubechies are susceptible to far-reaching generalizations which provide a remarkable synthesis and unification of digital signal processing and applied mathematics. Examples of this wider class of wavelet-unified digital filters and mathematical representations have already proved themselves in applications as diverse as signal compression for imagery and audio, channel coding, and the numerical solution of partial differential equations. In this paper we systematize the definitions and illustrate some of the associated wavelet functions and their principal properties.
The recently introduced compactly supported wavelets of Daubechies have proven very useful in various aspects of signal processing, notably in image compression. The fundamental idea in the application to image compression is to take a digitized image, use wavelets to provide a multiscale representation of it, and then discard some information at some scales while leaving information more intact at others. The Daubechies wavelets have differentiability properties in addition to their compact support and orthogonality. A number of authors have used wavelet systems for solving various problems in differential and integral equations. An additional point of view for approaching boundary value problems in partial differential equations was introduced, in which the boundary, the boundary data, and the unknown solution (of a boundary value problem) on the interior of a domain are all uniformly represented in terms of compactly supported wavelet functions in an extrinsic ambient Euclidean space. In this paper we describe multiscale representations of domains and their boundaries, and we obtain multiscale representations of some of the basic elements of geometric calculus (line integrals, surface measures, etc.), which are in turn useful for specific numerical calculations in the approximate solution of differential equations. This is similar to the spectral method for solving differential equations (essentially the use of an orthonormal expansion), but here we use the localization property of wavelets to extend the orthonormal representation to boundary data and geometric boundaries, and we indicate how to carry this out.
This work describes a degree reduction method for Bezier simplexes with the following properties: (1) Symmetry: the method is symmetric with respect to the corner points of the Bezier simplex. (2) Restriction: the method restricted to the boundary of a Bezier simplex yields the same result as the boundary of the degree-reduced Bezier simplex. (3) Interpolation: the degree-reduced simplex of degree e interpolates the value and the first ⌊(e-1)/2⌋ derivatives at the corner points of the original Bezier simplex. (4) Optimal order of approximation: the order of approximation of the given simplex by the degree-reduced simplex is O(h^(e+1)), where h is the diameter of the domain simplex, which is optimal for functional approximation. The method, restricted to Bezier surfaces, yields a new degree reduction technique that is easy to implement.
It is shown how to construct G2-continuous splines with arcs of cubics. Each arc is a piece of the oval of a cubic and is controlled locally by a triangle tangent to the arc at both endpoints. Formulas for the mixed interpolation of further points and tangents are given in terms of geometrically meaningful shape parameters. It is shown that, under certain restrictions, the numerical values of the curvatures may be prescribed at the joints. Some new shape handles are developed for the local control of each arc of the spline. Intersection problems are easily handled. The main advantage of algebraic splines is that they are completely parametrization-free.
The problems of computing the intersection of curves and surfaces are fundamental to computer graphics and to geometric and solid modeling. Earlier algorithms were based either on subdivision methods or on algebraic techniques, and they are generally restricted to simple intersections. The algebraic approach has generally been regarded as impractical, owing to numerical problems with higher degree curves and surfaces (beyond cubics). In our earlier work we applied elimination theory to reduce the problem of intersecting curves and surfaces to matrix computations: Gaussian elimination, matrix inversion, singular value decomposition, eigenvalue and eigenvector computations, etc. In this paper, we present robust algorithms based on techniques from linear algebra and numerical analysis for computing the zero-dimensional intersections of curves and surfaces. These include intersections of parametric and algebraic curves and curve-surface intersections. In particular, we present a robust algorithm for computing higher-order intersections.
A well-known strength of the parametric representation of a curve or surface is the ease with which a piecewise-linear approximation is generated. This is often true in geometric design, where only smooth segments or patches over a well-chosen polynomial basis are considered. Visualizing an arbitrary, possibly discontinuous parametric surface is useful but non-trivial for algebraic surfaces defined by rational parameter functions. Such surfaces have pole curves in their domain, where the denominators of the parameter functions vanish, domain base points that correspond to entire curves on the surface, and other features that cause display algorithms to fail. These are ubiquitous problems, occurring even among the natural quadrics. Sophisticated but unsuspecting display techniques (e.g. those implemented in Maple V or Mathematica) produce completely unintelligible results. We provide a general solution and discuss our implementation. First, projective domain transformations are applied so that the entire surface is generated from a finite domain region. Then, domain pole curves are identified and numerically approximated. Using Delaunay triangulation, a special decomposition of the domain is constructed that avoids the discontinuities. This is then mapped onto the surface and clipped against a 3D box. The implementation can display very complicated surfaces, as we shall illustrate. Our techniques generalize in a straightforward way to rational varieties of any dimension.
The problem of numerical robustness in solid modeling involves proving theorems about the subset of E^3 defined by the output of an algorithm that uses finite-precision floating point arithmetic. However, both input and output sets are usually specified by symbolic data together with geometric data that are possibly inconsistent; the latter comprise geometric vertices in E^3, surface intersection curves, and surface patches defined by approximation techniques. This leads to the definition of precise subsets of E^3 such that the stored input sets are consistent with the symbolic data, and typically closer to the user's input sets than can be warranted by the inherent uncertainty in the data. This definition will permit rigorous proofs, based on the fundamental equation fl(x∘y) = (x∘y)(1 + η), of theorems providing a backward error analysis; that is, it will be possible to show that, if a problem is well conditioned, we have the exact solution for a problem defined by slightly perturbed sets. Here, the perturbation is measured by the maximum of the Hausdorff distance between the exact and the perturbed sets and the Hausdorff distance between the boundaries of these two sets. It will also be possible to show that the error is small as measured by a certain pseudo-distance reflecting relative variation of the boundaries. Furthermore, in circumstances where the piecewise linear Schoenflies theorem in E^3 is true, it will follow that there is a space homeomorphism from E^3 onto E^3 relating the two sets; that is, in a very strong sense, they have the same topological form. The interpolatory technique is described in the curvilinear case, but so far we have used this approach only to prove results about the stability of algorithms for sets defined by planar surface patches with inconsistent geometric vertex information. An illustrative example is given.
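The Hausdorff distance used above as a perturbation measure can be illustrated on finite point samples; the following is a minimal sketch for planar point sets (a hypothetical discretization for illustration, not the paper's construction):

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite point sets in the plane."""
    def directed(X, Y):
        # sup over x in X of inf over y in Y of |x - y|
        return max(min(math.dist(x, y) for y in Y) for x in X)
    return max(directed(A, B), directed(B, A))

# Two samplings of a unit square boundary, one vertex slightly perturbed
A = [(0, 0), (1, 0), (1, 1), (0, 1)]
B = [(0.1, 0), (1, 0), (1, 1), (0, 1)]
print(hausdorff(A, B))  # 0.1
```

For sets given by surface patches rather than samples, the same supremum/infimum definition applies with the minimum taken over the continuous sets.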
In this paper we describe the geometric components of our model-based approach to 3D rigid object recognition and positioning from range data, which have potential applications in Graphics, Geometric Modeling, and Computer Aided Geometric Design. As in many other object recognition systems, because of occlusion, objects are recognized and located by comparing and geometrically matching small regions of the data set with corresponding regions of known models stored in a database. In our case, a known object is represented in the database as a hierarchical collection of regions, each of them approximated by an algebraic surface. The preliminary recognition and matching are based on comparing Euclidean-invariant functions of the coefficients of the corresponding polynomials. The final recognition and matching are based on determining how well the data fit a stored model. Although we have not yet implemented a complete system, a number of computational problems associated with algebraic curves and surfaces have been analyzed and solved toward the implementation of an object recognition and position estimation system based on this structure. These problems, described in this paper, are: (1) how to fit unconstrained algebraic curves and surfaces to data, (2) how to fit bounded algebraic curves and surfaces, (3) how to efficiently compute Euclidean invariants of polynomials, and (4) how to define an intrinsic coordinate system of a polynomial, generalizing the notion of center and principal axes of a nonsingular quadric curve or surface. The intrinsic coordinate system of a polynomial can be used to transform its coefficients to a normal form. The coefficients of a polynomial in normal form constitute a complete set of Euclidean invariants.
Implicit higher degree polynomials in x, y, z (or in x, y for curves in images) have considerable global and semiglobal representation power for objects in 3D space. (Spheres, cylinders, cones, and planes are special cases of such polynomials restricted to second degree.) Hence, they have great potential for object recognition and position estimation and for the measurement of geometric properties of objects. In this paper we deal with four problems pertinent to using these polynomials in robust real-world systems: (1) characterization and fitting algorithms for the subset of these algebraic curves and surfaces that is bounded and exists largely in the vicinity of the data; (2) the a posteriori distribution of the possible polynomial coefficients given a data set, which measures the extent to which a data set constrains the coefficients of the best fitting polynomial; (3) geometric invariants for determining whether one implicit polynomial curve or surface is a rotation and translation of another, or whether one implicit polynomial curve is an affine transformation of another; (4) a Mahalanobis distance for comparing the coefficients or the invariants of two polynomials to determine whether the curves or surfaces that they represent are close over a specified region. In addition to handling objects with easily detectable features such as vertices, high-curvature points, and straight lines, the polynomials and tools discussed in this paper are ideally suited to smooth curves and smooth curved surfaces which do not have detectable features.
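Item (4) rests on the standard Mahalanobis distance between coefficient (or invariant) vectors; a minimal two-dimensional sketch follows (the function name and the restriction to 2x2 covariances are illustrative assumptions, not the paper's formulation):

```python
def mahalanobis2(u, v, cov):
    """Mahalanobis distance between two 2-vectors given a 2x2 covariance matrix."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))  # explicit 2x2 inverse
    dx, dy = u[0] - v[0], u[1] - v[1]
    # quadratic form (u - v)^T cov^{-1} (u - v)
    q = dx * (inv[0][0] * dx + inv[0][1] * dy) + dy * (inv[1][0] * dx + inv[1][1] * dy)
    return q ** 0.5

# With the identity covariance the distance reduces to the Euclidean one
print(mahalanobis2((3, 0), (0, 4), ((1, 0), (0, 1))))  # 5.0
```

In practice the covariance would come from the coefficient distribution of item (2), so that well-constrained coefficient directions count more heavily than poorly constrained ones.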
A new algorithm based on curve tracing and decomposition techniques is presented for computing the convex hull of an algebraic curve defined implicitly by f(x,y) = 0; the curve may have multiple components as well as singular points. The output is an ordered collection of line segments and sections of the curve represented by a sample point and interval bounds; this representation is suitable for rendering the convex hull by classical curve tracing techniques. Additionally, we present a point classification function for the convex hull based on Sturm sequences. Progress toward extending these results to algebraic surfaces is briefly discussed.
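The Sturm sequences underlying the point classification function are a classical tool for counting distinct real roots of a polynomial in an interval; the following generic sketch (not the paper's algorithm) builds the chain and counts sign changes:

```python
def polyval(p, x):
    """Evaluate a polynomial (coefficients highest-degree first) at x."""
    r = 0.0
    for c in p:
        r = r * x + c
    return r

def polyrem(p, q):
    """Remainder of p divided by q (coefficients highest-degree first)."""
    p = list(p)
    while len(p) >= len(q) and any(p):
        f = p[0] / q[0]
        for i in range(len(q)):
            p[i] -= f * q[i]
        p.pop(0)  # leading coefficient is now eliminated
    return p or [0.0]

def sturm_chain(p):
    """Sturm chain: p, p', then negated remainders of successive divisions."""
    dp = [c * (len(p) - 1 - i) for i, c in enumerate(p[:-1])]
    chain = [p, dp]
    while len(chain[-1]) > 1:
        r = [-c for c in polyrem(chain[-2], chain[-1])]
        if not any(r):
            break
        chain.append(r)
    return chain

def sign_changes(chain, x):
    signs = [polyval(q, x) for q in chain]
    signs = [s for s in signs if abs(s) > 1e-12]  # skip (near-)zero values
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def roots_in(p, a, b):
    """Number of distinct real roots of p in (a, b] (Sturm's theorem)."""
    ch = sturm_chain(p)
    return sign_changes(ch, a) - sign_changes(ch, b)

# x^2 - 2 has exactly one root in (0, 2]
print(roots_in([1.0, 0.0, -2.0], 0.0, 2.0))  # 1
```

A point classification routine would apply such counts along a ray from the query point to decide on which side of the hull boundary it lies.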
This paper reviews an evolving family of surface modeling primitives for use in computer vision, graphics, CAGD, and data analysis. Dynamic surfaces offer significantly greater power to these application domains than do conventional, geometric surface modeling primitives. The benefits stem from computational physics: simulated forces influence the shapes and motions of dynamic surfaces according to the basic mechanical principles of nonrigid bodies with given mass and damping distributions and deformation energies. Dynamic surfaces are applicable both to static surface reconstruction problems and to more general surface estimation from structured or unstructured time-varying data. In particular, efficient, recursive solutions to time dependent problems result from a Kalman estimation approach in which dynamic surfaces serve as nonstationary system models.
Computer aided design of free-form surfaces is strongly biased towards input and optimization of surfaces. Input modules are based on digitizing drawings or on placing and manipulating spline control vertices. Design, especially during the idea generation (or conceptual) design phase, is poorly supported. We present a system based on direct manipulation of shaded images of the surfaces. The designer sketches profiles on a tablet. The profiles are positioned in object space with a spaceball (a 6D joystick). A network of crossing curves is built interactively. The system constructs patches over this network in real time. The designer can correct a profile by sketching, and the affected surfaces are updated immediately. Patches are defined by the curves and by estimated cross-boundary derivatives; they connect with G1 continuity. Our prototype surface modeler avoids the need for exact dimensions and precise coordinates, as required in traditional systems. Instead, it supports fast, intuitive generation and evaluation of surfaces. We discuss a comparison with other systems regarding the time needed to model shapes, and some opinions of professional industrial designers.
For surfaces defined by networks of control points, such as Bezier, B-spline, or NURBS surfaces with positive weights, for which identifiable pieces of the surface are known to lie within the convex hull of a subset of the control points, recursive division is one of the most robust interrogation techniques available, guaranteed, except in very singular circumstances, to find all components of an intersection. Offset surfaces, used where an object has a small non-zero thickness and to determine cutter-center loci where a cutting point must lie on a given surface, have not had this option. This paper describes a technique for applying recursive division interrogation to offset surfaces, where the offset is either constant or a function of the surface normal. Sections I and II recapitulate the standard theory of recursive subdivision interrogation and the definition of offset surfaces. Section III explores the concept of a procedural interface, and section IV introduces that of a Quantized Hull. Sections V to VII propose a naive method of recursive division interrogation, show why it does not work, and show how it may be salvaged.
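The recursive division interrogation recapitulated in sections I and II can be sketched in its simplest form: de Casteljau subdivision of a planar Bezier curve, with control-point bounding boxes standing in for convex hulls, tested against a query box. This is an illustrative simplification, not the paper's Quantized Hull method:

```python
def de_casteljau_split(pts, t=0.5):
    """Split a Bezier curve (list of 2D control points) at parameter t."""
    left, right = [pts[0]], [pts[-1]]
    while len(pts) > 1:
        pts = [((1 - t) * a[0] + t * b[0], (1 - t) * a[1] + t * b[1])
               for a, b in zip(pts, pts[1:])]
        left.append(pts[0])
        right.insert(0, pts[-1])
    return left, right

def bbox(pts):
    xs, ys = zip(*pts)
    return min(xs), min(ys), max(xs), max(ys)

def boxes_overlap(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def may_intersect(pts, box, depth=12):
    """Recursive-division test: can the curve meet the box?"""
    if not boxes_overlap(bbox(pts), box):
        return False   # the bounding box (here playing the convex-hull role) excludes the box
    if depth == 0:
        return True    # subdivided finely enough: report a candidate intersection
    l, r = de_casteljau_split(pts)
    return may_intersect(l, box, depth - 1) or may_intersect(r, box, depth - 1)

quarter = [(1, 0), (1, 0.552), (0.552, 1), (0, 1)]  # cubic approximation of a circle arc
print(may_intersect(quarter, (0.9, -0.1, 1.1, 0.1)))  # True
```

For offset surfaces the difficulty is precisely that no such control-point hull is available directly, which is what the Quantized Hull construction addresses.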
Design and manipulation of curves and surfaces are required in many different areas of technology, for example in the definition of products such as cars and the casings of electrical devices. Fourier methods have long been used in image processing but have seen much less use in computer aided design (CAD). This paper suggests the use of Fourier methods for the generation of blends, which provide smooth transitions between model surfaces, and of fairings, which smooth large regions of the surface of an object. A description is given of the use of Fourier methods to allow predictable and simple control over curve shape in a two-dimensional system, and results of the system are presented and analyzed. This allows informed assertions to be made regarding the extension to a surface smoothing system, which, at the time of writing, has not been developed. In particular we note that under certain circumstances the Fourier smoothing methods reduce to spline production, and the connection with spline theory is made clear where applicable.
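The basic Fourier fairing idea can be sketched for a closed planar curve sampled as complex numbers: transform, discard high-frequency modes, and transform back. This is a generic illustration of low-pass fairing, assuming simple mode truncation, and is not the paper's system:

```python
import cmath

def dft(z):
    n = len(z)
    return [sum(z[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(Z):
    n = len(Z)
    return [sum(Z[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def fair(points, keep):
    """Low-pass fairing of a closed 2D curve (complex samples):
    zero all Fourier modes above frequency `keep` in either direction."""
    Z = dft(points)
    n = len(Z)
    Z = [c if min(j, n - j) <= keep else 0 for j, c in enumerate(Z)]
    return idft(Z)

# A "noisy" octagon with alternating radii; keeping only the lowest mode
# recovers the underlying unit circle samples.
noisy = [cmath.exp(2j * cmath.pi * k / 8) * (1 + 0.2 * (-1) ** k) for k in range(8)]
smooth = fair(noisy, 1)
```

Frequency-domain truncation of this kind is where the noted connection to spline production arises: suitable spectral weightings reproduce spline smoothing.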
We present the foundations of a method to generate blend surfaces. The idea of this method is to consider functionals depending on parametrized surfaces and their partial derivatives up to order two. The blend surface is then the surface which minimizes the functional. In section 2 we briefly describe the mathematical foundation, discussing the variational problem associated with certain bilinear functionals in the Sobolev space of order 2. In section 3 we outline in detail the procedure for generating blend surfaces by this method. Starting with a primary parameter space and a primary bilinear functional, one has to determine the final bilinear functional. The blend surface is then chosen as the tensor spline surface which minimizes the quadratic form corresponding to this bilinear functional.
In this paper we discuss two problems. The first is: how can the rolling ball blend be approximated by a series of bicubic patches? The second is: what should be done if the rolling ball blend surface is self-intersecting?
Space biarcs are curves in E^d, d ≥ 2, composed of two smoothly joined circular arcs. The existence and properties of space biarcs as Hermite interpolants are studied; i.e., two distinct points in E^d, with a target direction specified at each, are interpolated at the two ends of a space biarc. Our approach makes use of some recent results on spherical biarcs [Wan92], which are special space biarcs lying on the unit sphere S^d ⊂ E^(d+1), and of the one-to-one correspondence between space biarcs in E^d and spherical biarcs on S^d given by the stereographic projection from S^d to E^d.
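The stereographic correspondence invoked here is the classical one; a minimal sketch of the projection and its inverse between E^d and the unit sphere, projecting from the north pole (generic formulas, not tied to [Wan92]):

```python
def to_sphere(x):
    """Inverse stereographic projection: map a point of E^d onto the unit
    sphere S^d in E^(d+1), projecting from the north pole (0, ..., 0, 1)."""
    s = sum(c * c for c in x)
    return tuple(2 * c / (s + 1) for c in x) + ((s - 1) / (s + 1),)

def to_plane(p):
    """Stereographic projection of a point of S^d (not the north pole) to E^d."""
    return tuple(c / (1 - p[-1]) for c in p[:-1])

q = to_sphere((3.0, 4.0))
print(q)            # a point on the unit sphere S^2
print(to_plane(q))  # recovers (3.0, 4.0) up to rounding
```

Because stereographic projection maps circles to circles (or lines), a biarc problem in E^d can be transported to S^d, solved there, and mapped back.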
Based on the idea of the Natural Neighbor Interpolant presented by Sibson in 1980, this paper shows how, by combining it with the concept of Covering Spheres, new and more practical algorithms can be obtained, together with a more complete theory.
Reconstruction of shapes from partial information is a problem arising in many scientific and engineering applications. We present a method for reconstructing a two-dimensional manifold from an unstructured collection of sampled points. The algorithm consists of two major steps. In the first step, we estimate the topological type of the manifold and also obtain a crude estimate of its geometry. In the second step, we improve the fit of the estimate to the data points, while keeping the topological type fixed.
In surface reconstruction a mathematical description is created from sampled data points. While the displayed image may appear to be correct, there remains the analytical question as to whether it can be proven that the stored surface model and the original artifact have the same topology. Previous approaches to investigate all possible topological configurations were cumbersome and offered little insight as to how those topological configurations were generated. An alternative algorithm is offered, where the mathematics of the algorithm does offer insight into the structure of these configurations.
This article describes a method that uses deformable curves and deformable surfaces to generate a C1-continuous parametric surface from unorganized data points. Triangular patches are used as the surface elements for generating an n-sided surface. A new method for generating the topology is also presented. The concept of elastically deformable shapes attracted toward data points by spring-like forces is used to seek the optimal shape, which minimizes an associated energy functional.
Computer aided design and manufacturing processes frequently map planar triangulations onto surfaces. Due to their properties relevant to finite element analysis, Delaunay triangulations have become popular for triangulating arbitrary planar domains that are subsequently mapped onto surfaces. Although surface triangulations cannot enjoy all the properties of planar triangulations, utilization of as many of the properties of Delaunay triangulations as possible in algorithms for triangulating surfaces can have advantages as well as disadvantages. This paper focuses on the advantages and disadvantages of using Delaunay criteria in triangulating surfaces.
Given scattered measurements from a surface, a method for reconstruction of the surface using rational Gaussian surfaces is described. Rational Gaussian surfaces are globally defined parametric surfaces that adapt well to local data. Examples demonstrating reconstruction of 3-D scenes from scattered depth measurements obtained from stereo disparity, reconstruction of 3-D shapes from noisy range data, and segmentation of volumetric images for the extraction of generalized cylinders are described.
The three-dimensional (3D) skeleton is developed as a geometric modeling tool. The relationship of the Voronoi diagram to the skeleton is exploited to produce a new 3D skeleton algorithm. This algorithm has several substantial advantages over existing 3D skeleton computation techniques. For instance, the resulting skeleton is a graph that contains coordinate and disk radius information sufficient for exact regeneration, yet it is also amenable to graphical display and further analysis. The resulting skeleton is also homotopically equivalent to the original shape. It is based on the Euclidean metric and requires only a discrete sample set of the shape boundary as input. The new algorithm also reveals deficiencies in the underlying theory of the 3D skeleton. The process of removing these deficiencies leads to the discovery of a new general relationship between local skeleton structure and boundary features which is then exploited to classify and simplify the skeleton.
In this paper we introduce a data structure for surface representation, called Handle-Edge, which involves only an intrinsic cell decomposition, and we define structural operators, called Morse Operators, which are based on the Handlebody Representation of orientable surfaces with or without boundary. As the description is a topological one, it leaves open the dimension of the Euclidean space in which the surface is embedded. We apply this representation to the visualization of implicit surfaces embedded in high-dimensional spaces.
In computer graphics and geometric modeling one generally faces the problem of integrating a variety of curve and surface types into a single program. Object-oriented design offers the opportunity to use the inherent hierarchical structure of curves and surfaces to solve this problem. This paper presents an object-oriented framework, together with its C++ implementation, that starts from an abstract class of general differentiable curves and surfaces and in turn refines this design to curves and surfaces that are explicitly given in either parametric or implicit form. Most of the standard curve and surface types are derived from these classes. Examples that visualize the differential geometry of curves and surfaces illustrate the approach.
Adaptive chain coding algorithms based on using multiple templates are developed. One algorithm differentially encodes a curve using a directional template whose angle is dynamically scaled to accommodate a variety of curvature properties. A second algorithm differentially encodes using a template with small angular range with another template occasionally used for reorientation if abrupt changes occur in the properties of the underlying curve. By exploiting the piecewise regularity of most curves, our techniques provide substantially more accurate and efficient encodings compared with standard chain coding algorithms.
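The standard chain coding that these adaptive algorithms improve on is the Freeman code with a fixed eight-direction template; a baseline sketch for comparison (this is the classical scheme, not the adaptive multi-template method of the paper):

```python
# 8-neighbour offsets, indexed by Freeman direction 0..7 (0 = east, counterclockwise)
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def chain_encode(path):
    """Encode an 8-connected pixel path as a list of Freeman directions."""
    code = []
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        code.append(DIRS.index((x1 - x0, y1 - y0)))
    return code

def chain_decode(start, code):
    """Reconstruct the pixel path from a start point and a Freeman code."""
    pts = [start]
    for c in code:
        dx, dy = DIRS[c]
        x, y = pts[-1]
        pts.append((x + dx, y + dy))
    return pts

path = [(0, 0), (1, 0), (2, 1), (2, 2), (1, 2)]
code = chain_encode(path)
print(code)  # [0, 1, 2, 4]
assert chain_decode((0, 0), code) == path
```

The adaptive schemes replace this fixed template by templates whose angular range is rescaled or reoriented to follow the local curvature of the encoded curve.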
This paper presents an efficient technique to compress chain-encoded line drawings. The technique, called the address chain code, constructs a codebook containing vectors of chains which recur frequently in the chain-encoded line drawings. The recurrent vectors are encoded as their corresponding addresses in the codebook preceded by header bits. The encoding procedure and an efficient way to organize and label the vectors, which turns out to be somewhat similar to the generalized chain codes, are described in this paper.
Traditional modeling techniques, with roots in CAD systems, do not provide a rich enough modeling environment for computer vision. The models themselves describe the structure rather than appearance of objects, and rarely provide facilities for the recording of the additional information required by a vision system. Encoding appearance explicitly ensures quick access and use of the model, and yields model features that correspond to observable data features. We describe the Suggestive Modelling System (SMS) which has been designed specifically for vision applications, combining the geometric object model with vision-specific annotations. Among SMS's features are: (1) A novel separation of surface shape, extent and position; (2) Encoding of underconstrained positions for subcomponents such as spheres and discs; (3) Incorporation of uncertain property values; (4) Cheap encoding of viewpoint- dependent information in addition to the body-centered model; (5) Hierarchical models; (6) Symbolic labels for each primitive; and (7) Parallel curve, surface, and volume-based representations simplify project management. We will describe how this approach reflects more faithfully the capabilities of current scene analysis algorithms than traditional methods. Results from the Imagine 2 vision system demonstrate the applicability of the models to complex real-world industrial inspection and recognition tasks. In addition a number of other vision-related applications in which the SMS paradigm has proved useful will be discussed.
A computational model for generating a graphical deformation process of a 3-D object from an initial state to a final state is proposed. The deformation process is represented by several intermediate states which are interpolated from the two original states of an object. The generated processes can be used for filling the missing information between two given states, visualizing the changing process, or conveying the underlying idea more clearly through a pictorial sequence. The model includes a generic scheme for correspondence establishment and a physically-based process generator.
In this paper we examine the localization criterion for edge detection and determine the density function describing the edge location error, i.e. the displacement of the detected edge position from the true edge location. Canny defines the measure of localization as the reciprocal of the root-mean-square edge location error and formulates an expression for this measure for local maximum detectors. However, Tagare and deFigueiredo point out that an incorrect assumption is made in the calculation. The same procedure is used by Sarkar and Boyer for their localization measure for zero-crossing detectors. We modify the analysis and obtain a closed form solution for the probability density function of the edge location error. Examination of the density function indicates that the variance of the edge location error does not exist and hence cannot be used as a measure of localization. A new localization measure is proposed, characterized by a percentile level on the absolute value of the edge location error.
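On a finite sample of location errors, a percentile-based measure of this kind can be sketched as follows (the function name, default level, and nearest-rank index convention are illustrative assumptions, not the paper's definition):

```python
def percentile_localization(errors, level=0.9):
    """Localization measure: the `level`-percentile of the absolute
    edge location error over a sample of observed errors."""
    a = sorted(abs(e) for e in errors)
    return a[int(level * (len(a) - 1))]  # nearest-rank percentile

errors = [0.1, -0.5, 0.2, 0.05]
print(percentile_localization(errors, 1.0))  # 0.5 (worst-case error)
```

Unlike the variance, such a percentile exists even for the heavy-tailed error densities derived in the paper.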
A popular approach to joining surface patches smoothly across a given boundary curve is to choose the transversal derivative as an average of the tangents at the end points. However, it can be observed that this type of rational linear reparametrization forces the center transversal derivatives outside the convex hull of the remaining boundary derivatives unless certain tangent ratios are equal. The result is undesirable features in the surface. To circumvent the problem, a composite patch is introduced. It consists of tensor-product patches that separate the original boundary tangents, so that constant G^1 joins can be used.