We propose to use an active shape model to correct a road map stably. An active shape model can deform itself while preserving a given basic shape by restricting the deformation to an affine transformation. In order to respect the topological connections of the road map, we apply the deformation not to a single road but to several roads simultaneously, one step at a time. By iterating the deformation step, the road network is refined gradually. Finally, experimental results show that the proposed method can refine an existing road map correctly by fitting it to an aerial image.
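As a rough illustration of the deformation step described above, the sketch below fits a single affine transformation jointly to the vertices of several roads, so that shared junctions move consistently. It is a minimal sketch assuming NumPy and hypothetical inputs (road polylines and matched image evidence points), not the authors' implementation.

    import numpy as np

    def fit_affine(src, dst):
        """Least-squares affine transform mapping src (N, 2) onto dst (N, 2)."""
        n = src.shape[0]
        X = np.hstack([src, np.ones((n, 1))])             # [x, y, 1] rows
        A, _, _, _ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2) affine matrix
        return A

    def deform_step(roads, targets):
        """One deformation step applied jointly to several connected roads,
        so shared junction vertices move consistently (topology preserved)."""
        src = np.vstack(roads)    # all road vertices, stacked
        dst = np.vstack(targets)  # matched evidence points from the image
        A = fit_affine(src, dst)
        return [np.hstack([r, np.ones((len(r), 1))]) @ A for r in roads]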
Recently, with the growth of the FPD market, automatic detection of mura in the manufacturing process has become a critical issue for manufacturers seeking to increase TFT-LCD quality. However, segmentation-based detection algorithms deviate from the human visual perception model. To compensate for the detection errors produced by this deviation, mura are re-inspected visually during the manufacturing process. If we could objectively quantify each mura's defect degree, then, given a threshold on that degree, we could reduce the number of re-inspections. We call this degree the line mura defect level. Our approach attempts to quantify the ideal defect level of a line mura, which could vary between individuals because of subjectivity, based on multiple features crucial to line mura detection. In the process, we approximate what we call the JND surface, which passes through the middle of the feature points with mean mura visibility of 0.5. An index function, which measures distance from the JND surface, is then employed to measure the objective defect level of each candidate mura.
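The JND surface and index function are paper-specific, but the idea can be illustrated with a stand-in: fit a logistic model to (feature, visibility) data, take its 0.5-probability level set as the surface, and use the signed distance from it as the defect level. Everything below (the logistic form, the gradient-descent fit) is an assumption for illustration only.

    import numpy as np

    def fit_jnd_plane(feats, visibility, iters=5000, lr=0.1):
        """Fit a logistic model P(visible) = sigmoid(w.x + b); its level set
        P = 0.5 (i.e. w.x + b = 0) plays the role of the JND surface."""
        X = np.hstack([feats, np.ones((len(feats), 1))])
        w = np.zeros(X.shape[1])
        for _ in range(iters):
            p = 1.0 / (1.0 + np.exp(-X @ w))
            w -= lr * X.T @ (p - visibility) / len(X)   # logistic gradient step
        return w

    def defect_index(feats, w):
        """Signed distance of each candidate mura's feature point from the
        fitted surface; larger values mean more visible defects."""
        X = np.hstack([feats, np.ones((len(feats), 1))])
        return (X @ w) / np.linalg.norm(w[:-1])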
Shape classification methods derived from similarity measures based on shape-transformation-variant descriptors often require shape normalization/standardization involving complicated computations and contour- or code-matching schemes. In this paper, we introduce a quantitative similarity measure and a new model-based shape classification method that uses exclusively shape-transformation-invariant descriptors. This method eliminates the variations and potential problems caused by shape transformation, as well as complicated contour matching and/or shape normalization/standardization procedures.
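The paper's own invariant descriptors are not given here; as one concrete stand-in, the sketch below computes two Hu moment invariants (translation-, scale-, and rotation-invariant) for a binary shape and derives a simple quantitative similarity from them.

    import numpy as np

    def hu_invariants(img):
        """First two Hu moment invariants of a binary shape image: one
        family of shape-transformation-invariant descriptors."""
        ys, xs = np.nonzero(img)
        x0, y0 = xs.mean(), ys.mean()
        def mu(p, q):                  # central moment
            return ((xs - x0) ** p * (ys - y0) ** q).sum()
        m00 = mu(0, 0)
        def eta(p, q):                 # scale-normalized central moment
            return mu(p, q) / m00 ** (1 + (p + q) / 2)
        h1 = eta(2, 0) + eta(0, 2)
        h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
        return np.array([h1, h2])

    def similarity(a, b):
        """A simple quantitative similarity: inverse of descriptor distance."""
        return 1.0 / (1.0 + np.linalg.norm(hu_invariants(a) - hu_invariants(b)))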
The representation and characterization of planar shapes or regions has important consequences for image processing and image content understanding. Numerous approaches have been developed to provide image analysis with efficient methods for representing shapes. We present a region-based approach to extracting a refined axial regional shape description in the form of a skeleton, based on the constrained Delaunay triangulation (CDT) and the chordal axis transform (CAT). We elaborate on the approximate edge co-circularity criterion used to refine CAT-produced skeletons. The co-circularity criterion enables efficient evaluation of the CDT-generated triangle edges lying inside regions (chords) and filters out non-salient chords. Applying this criterion produces smoother skeletons, allows the skeleton to have vertices with branching degree higher than three (a limit previously imposed by the use of the CDT), and significantly reduces the number of skeleton segments. In contrast with the chord strength evaluation of the original skeleton rectification algorithm, where a chord's strength is evaluated without regard to the strengths of its neighboring chords, we introduce a smoothing operator that evaluates chord strength by processing the chord strengths within a local neighborhood. As a result, region characterization based on the proposed smoothing-based chord evaluation yields a skeleton that is more faithful (sensitive) to the original shape, while preserving all the advantages of the original skeleton rectification scheme. A number of examples, including comparisons with skeletons generated by the original CAT-skeleton rectification algorithm, demonstrate our approach at work.
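A minimal sketch of the neighborhood-based chord strength evaluation, assuming chord strengths and a chord adjacency list are already available from the CDT; the particular weighting scheme is illustrative, not the paper's operator.

    import numpy as np

    def smooth_chord_strengths(strengths, neighbors, w_self=0.5):
        """Re-evaluate each chord's strength as a weighted average of its
        own strength and the strengths of its neighboring chords, rather
        than evaluating each chord in isolation."""
        out = np.empty(len(strengths), dtype=float)
        for i, nbrs in enumerate(neighbors):   # neighbors: list of index lists
            if nbrs:
                out[i] = w_self * strengths[i] + \
                         (1 - w_self) * np.mean([strengths[j] for j in nbrs])
            else:
                out[i] = strengths[i]
        return out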
Matching geometric objects is a fundamental problem in computational geometry, with applications in many other areas such as computer vision, biology, and archaeology. In this paper, we study an important partial matching problem motivated by applications in several such areas. The input consists of sets of under-sampled slices of one (or more) unknown 3D objects, possibly generated by slicing planes of arbitrary orientations. The question we are interested in is whether it is possible that two under-sampled sets have been taken from the same object; alternatively, can we determine with certainty that the given input samples cannot be from the same object? We present efficient algorithms, based on interesting geometric techniques, that answer these queries either as plausible or as a certain negative.
This paper presents a novel algorithm for line segment matching over two views. Two image pyramids are built by applying wavelet decomposition to the two images. After defining the supporters of a line segment to be the edge points lying close to it, line segments are matched from the coarsest to the finest level. At each level the supporters are matched using cross-correlation techniques. This method can match line segments over two uncalibrated views, in which the line segments need not be images of the same section of a 3D line segment. The hierarchical strategy helps reduce the computational complexity. The wavelet is adopted to build the hierarchical frame for its built-in multi-scale structure and fast decomposition algorithm; furthermore, it overcomes the flattening-out problem of the traditional multi-scale Gaussian pyramid technique. Experiments on real image pairs demonstrate the effectiveness and robustness of our algorithm.
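A sketch of the hierarchical ingredients, assuming NumPy: a coarse-to-fine pyramid built from the Haar approximation (LL) band, and the normalized cross-correlation used to match supporters at each level. The full matcher is not reproduced.

    import numpy as np

    def haar_pyramid(img, levels=3):
        """Coarse-to-fine pyramid from the Haar approximation band: average
        2x2 blocks at each level (the LL band of a one-step Haar transform)."""
        pyr = [img.astype(float)]
        for _ in range(levels - 1):
            a = pyr[-1]
            a = a[: a.shape[0] // 2 * 2, : a.shape[1] // 2 * 2]  # even crop
            pyr.append(0.25 * (a[0::2, 0::2] + a[1::2, 0::2]
                               + a[0::2, 1::2] + a[1::2, 1::2]))
        return pyr[::-1]          # coarsest level first

    def ncc(p, q):
        """Normalized cross-correlation between two supporter patches."""
        p, q = p - p.mean(), q - q.mean()
        return float((p * q).sum() /
                     (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12))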
In this paper, the relationship between the tilt angle and altitude of a PTZ camera and the 3D-to-2D coordinate transformation is established. A vision-based approach is then proposed to estimate the tilt angle and altitude from the observation of some simple objects lying on a horizontal plane, with the intrinsic parameters of the PTZ camera known. The patterns can be a circle, a corner with known angle, more than two corners with unknown but equal angles, or more than two line segments with unknown but equal lengths. Furthermore, given a few line segments with known but not necessarily equal lengths, the tilt angle and altitude can also be estimated via an optimization procedure. The impact of parameter fluctuations is analyzed via computer simulations. Experimental results on real images conform to our analysis and demonstrate the efficiency of this approach.
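The optimization variant can be sketched as follows, under assumed conventions (camera at height h above the ground plane Z = 0, pitched down by the tilt angle, pinhole intrinsics K, SciPy's least_squares as the solver); the paper's actual formulation may differ. At least two segments are needed for the two unknowns.

    import numpy as np
    from scipy.optimize import least_squares

    def backproject(uv, K, tilt, h):
        """Back-project pixels (N, 2) to the ground plane Z = 0 for a camera
        at height h, pitched down by `tilt` radians, with intrinsics K."""
        d_c = np.linalg.solve(K, np.vstack([uv.T, np.ones(len(uv))]))
        s, c = np.sin(tilt), np.cos(tilt)
        R = np.array([[1, 0, 0],          # camera-to-world rotation:
                      [0, -s, c],         # x right, y down, z forward,
                      [0, -c, -s]])       # pitched `tilt` below the horizon
        d_w = R @ d_c
        k = -h / d_w[2]                   # ray scale where Z = 0 (ground)
        P = k * d_w
        P[2] += h                         # camera centre sits at (0, 0, h)
        return P.T                        # (N, 3) ground points

    def estimate_tilt_height(ends_a, ends_b, lengths, K, x0=(0.5, 2.0)):
        """Solve for (tilt, height) so that back-projected segment lengths
        match the known lengths (illustrative, not the paper's solver)."""
        def resid(p):
            A = backproject(ends_a, K, p[0], p[1])
            B = backproject(ends_b, K, p[0], p[1])
            return np.linalg.norm(A - B, axis=1) - lengths
        return least_squares(resid, x0).x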
The perspex machine arose from the unification of the Turing machine with projective geometry. The original, constructive proof used four special perspective transformations to implement the Turing machine in projective geometry. These four transformations are now generalised and applied in a compiler, implemented in Pop11, that converts a subset of the C programming language into perspexes. This is interesting from both a geometrical and a computational point of view. Geometrically, it is interesting that program source can be converted automatically to a sequence of perspective transformations and conditional jumps, though we find that the product of homogeneous transformations with normalisation can be non-associative. Computationally, it is interesting that program source can be compiled for a Reduced Instruction Set Computer (RISC), the perspex machine, that is a Single Instruction, Zero Exception (SIZE) computer.
Segmentation of independent motions from video sequences is a challenging problem that can be a prelude to many further applications in computer vision. In this paper, we present an accurate and efficient approach for automatic segmentation of all the independently moving objects in a scene. The system begins with an initialization module that provides an initial partition of the scene, computes 2D motions, and performs other necessary preparatory work. We then propose a novel object classification method based on analyzing and clustering motions. To achieve the best robustness and minimize the total computational load, we work on multiple key frames simultaneously to obtain a globally optimal classification. Our approach achieves accurate object classification and avoids the uncertainty in detecting moving objects. We demonstrate the high stability, accuracy, and performance of our algorithm with a set of experiments on real video sequences.
In the present paper, we introduce an arithmetical definition of discrete circles with a non-constant thickness, and we exhibit different classes of them depending on the arithmetical discrete lines. On the one hand, this results in the characterization of regular discrete circles with integer parameters, as well as of J. Bresenham's circles; as far as we know, ours is the first arithmetical definition of the latter. On the other hand, we introduce new discrete circles that are in fact the thinnest ones for the usual discrete connectedness relations.
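In the spirit of arithmetical discrete lines (0 <= a*x + b*y + mu < omega), a discrete circle can be written as a double Diophantine inequality on x^2 + y^2; the sketch below is illustrative only and is not the paper's exact definition or its thickness classes.

    def arithmetical_circle(cx, cy, r, thickness):
        """Integer points (x, y) satisfying
            r^2 - t/2 <= (x-cx)^2 + (y-cy)^2 < r^2 + t/2,
        a circle analogue of the arithmetical discrete line inequality."""
        pts = []
        lo, hi = r * r - thickness / 2, r * r + thickness / 2
        for x in range(cx - r - 1, cx + r + 2):
            for y in range(cy - r - 1, cy + r + 2):
                d = (x - cx) ** 2 + (y - cy) ** 2
                if lo <= d < hi:
                    pts.append((x, y))
        return pts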
Given a high-resolution digital shape, the analog perimeter of the pre-digitized shape (APPS) can be adequately estimated using classic chain-code based methods. For a digital shape of low resolution, however, these methods may not provide accurate or consistent estimates, due to variations of the APPS estimate with shape orientation, which are inversely proportional to the digitization resolution. Aiming to improve the accuracy and consistency of APPS estimation for low-resolution digital shapes, this paper presents a method that takes the median value of the digital perimeter curve (DPC) obtained by rotating the shape. Examples are given to illustrate the proposed method.
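A sketch of the rotate-and-take-the-median idea, using a simple (orientation-sensitive) digital perimeter estimator and SciPy's image rotation; the paper's chain-code estimator and rotation handling are assumptions here.

    import numpy as np
    from scipy.ndimage import rotate

    def digital_perimeter(mask):
        """A simple digital perimeter estimate: the number of unit edges
        between object and background pixels (a classic biased estimator
        of the kind whose orientation dependence the paper addresses)."""
        m = np.pad(mask.astype(int), 1)
        return int((m[1:, :] != m[:-1, :]).sum() + (m[:, 1:] != m[:, :-1]).sum())

    def median_perimeter(mask, angles=np.arange(0, 90, 5)):
        """Median of the digital perimeter curve over shape rotations,
        suppressing orientation-dependent variation at low resolution."""
        vals = [digital_perimeter(rotate(mask.astype(float), a, order=0) > 0.5)
                for a in angles]
        return float(np.median(vals))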
In image and video analysis, distance maps are frequently used. They provide the (Euclidean) distance (ED) of background pixels to the nearest object pixel. Recently, the Fast Exact Euclidean Distance (FEED) transformation was launched. In this paper, we present the three-dimensional (3D) version of FEED. 3D-FEED is compared with four other methods over a wide range of 3D test images. 3D-FEED proved to be twice as fast as the fastest algorithm available. Moreover, it provides true exact EDs, where other algorithms only approximate the ED. This makes the algorithm valuable especially where both time and precision matter.
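FEED itself is not reproduced here, but the notion of a true exact ED map is easy to pin down with a brute-force reference, shown below, which any exact transform (such as 3D-FEED) must reproduce.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def exact_ed_bruteforce(obj):
        """Reference exact Euclidean distance map: for every background
        voxel, the distance to the nearest object voxel. O(n*m), for
        validation only; FEED computes the same exact values far faster."""
        pts = np.argwhere(obj)
        out = np.zeros(obj.shape)
        for idx in np.argwhere(~obj):
            out[tuple(idx)] = np.sqrt(((pts - idx) ** 2).sum(axis=1).min())
        return out

    vol = np.zeros((16, 16, 16), dtype=bool)
    vol[8, 8, 8] = vol[2, 12, 5] = True               # two object voxels
    assert np.allclose(exact_ed_bruteforce(vol),
                       distance_transform_edt(~vol))  # SciPy's EDT is exact too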
A new method for estimating the analog surface area and the volume of a general 3D shape is presented. The proposed method takes advantage of known boundary estimation techniques for 2D shapes, which eliminates most, if not all, of the complex interpolation procedures and estimation schemes employed by conventional methods. After taking into account the variation of the digital boundary with shape orientation, the estimated perimeters are used to measure the lateral area of the 3D shape through a single integration. The same approach can be applied to estimating the volume of the 3D shape.
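A sketch of the single-integration idea: sum per-slice perimeter estimates over the slicing axis for the lateral area, and per-slice areas for the volume. The 2D perimeter estimator is left pluggable, since the paper's orientation-compensated estimator is described elsewhere.

    import numpy as np

    def lateral_area_and_volume(vol, perimeter, dz=1.0):
        """Integrate per-slice 2D measurements of a voxelized shape along
        the slicing axis:  area ~ sum_z P(z) * dz,  volume ~ sum_z A(z) * dz.
        `perimeter` is any 2D perimeter estimator, e.g. an orientation-
        compensated chain-code estimator as discussed in the paper."""
        area = sum(perimeter(vol[z]) for z in range(vol.shape[0])) * dz
        volume = float(vol.sum()) * dz   # slice areas are voxel counts here
        return area, volume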
In this paper, the classical RANSAC approach to robust matching is considered for removing mismatches (outliers) from a list of putative correspondences. We examine the justification for using the minimal sample-set size in a RANSAC trial and propose that the size of the sample set should instead be varied dynamically, depending on the noise and the data set. Using a larger sample set does not increase the number of iterations dramatically, but it can provide a more reliable solution. A new adjusting factor is added to the original RANSAC sampling equation so that the equation models the noisy world better. In the proposed method, the noise variance, the percentage of outliers, and the number of iterations are all estimated iteratively. Experimental results show that the estimated parameters are close to the ground truth. The modification can also be applied to any sampling consensus method extended from RANSAC.
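The classical relation behind this argument is the RANSAC trial-count formula; the sketch below shows how the iteration count grows as the sample size s is raised above the minimal size (the paper's extra noise-adjusting factor is not reproduced).

    import math

    def ransac_trials(inlier_ratio, sample_size, confidence=0.99):
        """Classical RANSAC iteration count: smallest N with
            1 - (1 - w^s)^N >= confidence,
        where w is the inlier ratio and s the sample size. Raising s above
        the minimal size increases N only moderately, which is the
        observation behind varying the sample size dynamically."""
        good = inlier_ratio ** sample_size        # P(all-inlier sample)
        return math.ceil(math.log(1 - confidence) / math.log(1 - good))

    # e.g. w = 0.7: s = 4 -> 17 trials, s = 7 -> 54, s = 10 -> 161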
In this paper, an image matching algorithm combining an SVD matching approach with a scale invariant measure is proposed to relate images with large scale variations. To handle redundant points better, we modify the SVD matching approach so that it enforces the condition of minimal distance between the structures of the point patterns while at the same time ensuring the likeness of the matched points. Together with the adoption of scale invariant features, the proposed method can match features undergoing significant scale changes and provide a set of matches containing a high percentage of correct matches without any statistical outlier detection.
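The underlying SVD matching step (in the style of Scott and Longuet-Higgins) can be sketched as follows; the Gaussian proximity kernel and mutual-best acceptance rule are the classic ones, not the paper's modification for redundant points.

    import numpy as np

    def svd_match(f1, f2, sigma=10.0):
        """Classic SVD matching: build a Gaussian proximity matrix G between
        two feature sets, replace its singular values by ones, and accept
        mutually-best entries of the result as matches."""
        d2 = ((f1[:, None, :] - f2[None, :, :]) ** 2).sum(-1)
        G = np.exp(-d2 / (2 * sigma ** 2))
        U, _, Vt = np.linalg.svd(G, full_matrices=False)
        P = U @ Vt                        # orthogonalized "pairing" matrix
        matches = []
        for i in range(P.shape[0]):
            j = int(P[i].argmax())
            if int(P[:, j].argmax()) == i:    # mutually best -> accept
                matches.append((i, j))
        return matches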
In this paper, two new approaches to the topological feature matching problem are proposed. The first consists of estimating a combinatorial map between block structures (pixels, windows) of given binary images, which is then analyzed for topological correspondence using the concept of homology of maps. The second approach establishes a matching by using a similarity measure between two sets of boundary representations of the connected components extracted from the two given binary images. The similarity measure is applied to all oriented boundary components of the given features. A number of experiments are carried out on both synthetic and real images to validate the two approaches.
In previous papers we examined several fundamental problems related to object recognition for the generalized weak perspective model of image formation and offered a complete solution to the problems for configurations of point features. In this paper we convert those theoretical solutions to working algorithms and examine the robustness of the algorithms in the face of various combinations of errors and noise.
In this work, we introduce GSIFT (Geometric Scale Invariant Terrain Feature Transform), geometric descriptors that are invariant to translation, rotation, and scaling. SIFT (Scale Invariant Feature Transform) descriptors have proven very successful in a variety of computer vision tasks. GSIFT extends SIFT with a novel technique that binds features at different scales, assigning priorities/importance to features based on their scale and their persistence across scales. As a first step toward scale invariance, we create a multi-scale pyramid for detecting important features such as maxima, minima, and saddle points in 1D and 2D height fields. We use relative height histograms as GSIFT descriptors, with support regions determined by the priority of the feature, and the symmetric chi-square distance as the similarity measure for comparing geometric features. Experiments with GSIFT and SIFT (on images constructed from height fields), on both synthetic and real height field data sets, show that GSIFT provides comparable and in some cases better results for data registration. GSIFT has the added advantage that it can help scientists in compressing and registering (or declining to register) height data obtained at different scales from independent sources, where it is important to understand which features to keep or discard, and why a registration is good or poor.
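The descriptor comparison can be made concrete: one common form of the symmetric chi-square measure between two normalized histograms is sketched below (the normalization and epsilon guard are illustrative).

    import numpy as np

    def sym_chi_square(h1, h2, eps=1e-12):
        """Symmetric chi-square distance between two (relative height)
        histograms:  d(h1, h2) = sum_i (h1_i - h2_i)^2 / (h1_i + h2_i)."""
        h1 = h1 / (h1.sum() + eps)        # normalize to unit mass
        h2 = h2 / (h2.sum() + eps)
        return float((((h1 - h2) ** 2) / (h1 + h2 + eps)).sum())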
The shapes of many natural and man-made objects have planar and curvilinear surfaces. The images of such curves usually do not have enough distinctive features for conventional feature-based reconstruction algorithms to apply. In this paper, we describe a method for reconstructing a quadratic curve in 3D space as the intersection of two cones containing the respective projected curve images; the correspondence between this pair of projections of the curve is assumed to be established. Using least-squares curve fitting, the parameters of each curve in 2D space are found, and from these we reconstruct the 3D quadratic curve. Relevant mathematical formulations and analytical solutions for obtaining the equation of the reconstructed curve are given, and the described reconstruction methodology is validated through simulation studies. The methodology is applicable to LBW decisions in cricket, missile path estimation, robotic vision, path planning, etc.
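The 2D least-squares step can be sketched as a standard algebraic conic fit: the conic coefficients are the smallest right singular vector of the design matrix built from the image points. This is a generic fit, not necessarily the paper's exact formulation.

    import numpy as np

    def fit_conic(pts):
        """Least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
        to 2D points (N, 2): the null-space direction of the design matrix,
        as needed before intersecting the two back-projected cones."""
        x, y = pts[:, 0], pts[:, 1]
        D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        _, _, Vt = np.linalg.svd(D)
        return Vt[-1]                     # conic coefficients, up to scale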
Scattered data is a collection of data points with little specified connectivity between them; in this case, the Marching Cubes algorithm is no longer applicable. In the real world, much useful data comes as scattered data rather than cuberille grid data. The alpha shape of a finite point set is a polytope uniquely determined by the set and a real number α: it expresses the intuitive notion of the shape of the point set, with α a parameter controlling the level of detail reflected by the polytope. However, while alpha shapes give good results for points of roughly uniform density, they do not for non-uniform point sets, and in reconstructing a surface from scattered data it is rarely the case that the points are uniformly dense everywhere on the surface. To be effective on non-uniform point sets, the value of alpha (the radius of the probing sphere) needs to change locally, depending on the density of the point set. Weighted alpha shapes are defined for a finite set of weighted points, so we investigate how to achieve different levels of detail in a single shape by assigning weights to the data points. One way to assign weights is to use inverse distance weighted methods. This paper describes how to assign the weight for each data point. The quality of the interpolant of volumetric scattered data depends on the way the weights are assigned; to assign them reasonably, we need to consider not only positional information (inverse distance) but also intensity information.
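A minimal sketch of one inverse-distance weighting scheme: each point's weight grows with the mean distance to its k nearest neighbours, so sparse regions receive a larger local alpha. The choices of k and the exponent p are assumptions, and the O(n^2) distance computation is for clarity only.

    import numpy as np

    def idw_weights(pts, k=8, p=2.0):
        """Weight each point by the inverse-distance-weighted density of its
        k nearest neighbours: sparse regions get larger weights (larger
        local alpha), dense regions smaller ones."""
        d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)                  # ignore self-distance
        knn = np.sort(np.sqrt(d2), axis=1)[:, :k]     # k nearest distances
        return (knn ** p).mean(axis=1)                # large in sparse areas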
We propose a new, generic method called POSS (Parameterization by Optimization in Spectral Space) to efficiently obtain parameterizations with low distortions for 3D surface meshes. Given a mesh, we first compute a valid initial parameterization using an available method and then express the optimal solution as a linear combination of the initial parameterization and an unknown displacement term. The displacement term is approximated by a linear combination of the eigenvectors with the smallest eigenvalues of a mesh Laplacian. This approximation considerably reduces the number of unknowns while minimizing the deviation from optimality. Finally, we find a valid parameterization with low distortion using a standard constrained nonlinear optimization procedure. POSS is fast, flexible, generic, and hierarchical. Its advantage has been confirmed by its application to planar parameterizations of surface meshes that represent complex human cortical surfaces. This method has a promising potential to improve the efficiency of all parameterization techniques that involve constrained nonlinear optimization.
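The dimensionality reduction at the heart of POSS can be sketched with a graph Laplacian: the displacement is restricted to the span of the k smallest-eigenvalue eigenvectors. The sketch below assumes SciPy, a uniform (combinatorial) Laplacian, and an edge list without duplicates; the paper's mesh Laplacian may be weighted differently.

    import numpy as np
    from scipy.sparse import csr_matrix, diags
    from scipy.sparse.linalg import eigsh

    def laplacian_basis(n_verts, edges, k=30):
        """Low-frequency basis for the displacement term: the k eigenvectors
        of the graph Laplacian with the smallest eigenvalues."""
        i, j = np.asarray(edges).T
        rows, cols = np.concatenate([i, j]), np.concatenate([j, i])
        W = csr_matrix((np.ones(len(rows)), (rows, cols)),
                       shape=(n_verts, n_verts))      # symmetric adjacency
        L = diags(np.ravel(W.sum(axis=1))) - W        # combinatorial Laplacian
        _, vecs = eigsh(L, k=k, which='SM')           # k smallest eigenpairs
        return vecs                                   # (n_verts, k)

The parameterization is then sought as init + basis @ coeffs, with only the k coefficients left as unknowns in the constrained optimization.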
Large, detailed 3D polygon meshes in standard representations result in files of gigantic size, so the need for more compact representations and a parallel implementation is clear. Yet applying compressed gigantic meshes in parallel rendering to achieve high performance is still an unexplored area.
In this work, we present a mesh compression scheme for use in a parallel rendering system. It comprises two parts: mesh segmentation and segment compression.
First, the multilevel graph partitioning idea is adopted to separate the mesh into large patches of low curvature. The large patches are then further partitioned with MeTiS [1] into small patches with balanced vertex counts. Like the multi-level mesh produced by a feature-preserving mesh simplification procedure, our segmentation algorithm takes both each segment's flatness and balanced vertex counts into account.
Second, each patch is compressed separately. The vertices in a patch are classified into two distinct types, boundary vertices and inner vertices, and different compression algorithms are applied to each. The boundary vertices are compressed with a novel compression algorithm, PMC, proposed in this work. To avoid decoding the whole boundary for every frame rendered, the boundary vertices are compressed piecewise: successive boundary edges shared by the same patches act as a piece element when compressed. During sorting, only the encoded boundary within the view frustum needs to be loaded and decompressed.
Experiments show that the encoded mesh can be partitioned in the compression domain in a parallel rendering system, which reduces the communication bandwidth requirement significantly.
Three-dimensional shape measurement is widely used in applications such as traffic, entertainment, architectural design, manufacturing, and archeology. This paper simplifies the principle of structured-light triangulation using the light-plane constraint and takes the radial lens distortion of CCD imaging into account, which improves the system's accuracy. To relax the limits of spatial and temporal stereo in the structured light system and to improve processing rate and accuracy, the system uses space-time stripe-boundary coded patterns. ICP (Iterative Closest Point) is widely used for the geometric alignment of three-dimensional models when an initial estimate of the relative pose is given or the relative motion is small. Exploiting the characteristics of data from the structured light acquisition system, the paper uses an improved, projection-based matching algorithm, which is easier to apply and more accurate than conventional ICP.
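The simplified light-plane triangulation amounts to intersecting back-projected pixel rays with the calibrated stripe plane; a sketch under assumed conventions (plane n.X + d = 0 in the camera frame, radial distortion already corrected) follows.

    import numpy as np

    def triangulate_stripe(uv, K, plane):
        """Structured-light triangulation with the light-plane constraint:
        back-project each stripe pixel to a camera ray r = K^-1 [u, v, 1]^T
        and intersect it with the calibrated light plane n.X + d = 0."""
        n, d = np.asarray(plane[:3]), plane[3]
        rays = np.linalg.solve(K, np.vstack([uv.T, np.ones(len(uv))]))
        t = -d / (n @ rays)              # ray scale at the plane
        return (t * rays).T              # (N, 3) points in the camera frame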
In this paper, a multiview calibration system for effective 3D display is proposed. The calibration system obtains uncalibrated 4-view images from a multiview camera system and can rectify the cameras' lens distortion, brightness and color errors, and geometric distortion. The mismatches in brightness and color are calibrated by extracting feature points and correspondence points, and the brightness differences are calibrated using a differential brightness map computed from each camera's image. Spherical lens distortion is corrected by extracting patterns from the multiview camera images. Finally, the error among the multiview cameras is calibrated by removing the distortion components caused by the housing, the location of the sensor, and the lenses. The proposed multiview camera calibration system thus enables effective multiview 3D display and the acquisition of realistic 3D images.
The perspex machine is a continuous, super-Turing machine which, in previous work, was simulated programmatically on a digital computer in the AI language Pop11. Here we present a C++ simulation of the perspex machine, along with a graphical user interface, that can be used to implement, edit, visualise, instrument, and run perspex programs interactively. The interface uses a number of different projections to make 4D perspex space more accessible to the user.
We also present a new proof of the Walnut Cake Theorem that has much weaker conditions than the previous proof and is, therefore, much more widely applicable. It predicts non-monotonicities in numerical algorithms with sub-quadratic convergence.
The perspex machine arose from the unification of projective geometry with the Turing machine. It uses a total arithmetic, called transreal arithmetic, that contains real arithmetic and allows division by zero. Transreal arithmetic is redefined here. The new arithmetic has both a positive and a negative infinity, which lie at the extremes of the number line, and a number, nullity, that lies off the number line. We prove that nullity, 0/0, is a number. Hence a number may have one of four signs: negative, zero, positive, or nullity. It is, therefore, impossible to encode the sign of a number in one bit, as floating-point arithmetic attempts to do, resulting in the difficulty of having both positive and negative zeros and NaNs. Transrational arithmetic is consistent with Cantor arithmetic. In an extension to real arithmetic, the product of zero, an infinity, or nullity with its reciprocal is nullity, not unity. This avoids the usual contradictions that follow from allowing division by zero. Transreal arithmetic has a fixed algebraic structure and does not admit options as IEEE floating-point arithmetic does. Most significantly, nullity has a simple semantics that is related to zero: zero means "no value" and nullity means "no information." We argue that nullity is as useful to a manufactured computer as zero is to a human computer.
The perspex machine is intended to offer one solution to the mind-body problem by showing how the computable aspects of mind, and perhaps the whole of mind, relate to the geometrical aspects of body, and perhaps the whole of body. We review some of Turing's writings and show that he held the view that his machine has spatial properties; in particular, that it has the property of being a 7D lattice of compact spaces. Thus, we read Turing as believing that his machine relates computation to geometrical bodies.
We simplify the perspex machine by substituting an augmented Euclidean geometry for projective geometry. This leads to a general-linear perspex machine that is very much easier to program than the original perspex machine. We then show how to map the whole of perspex space into a unit cube. This allows us to construct a fractal of perspex machines with the cardinality of a real-numbered line or space. This fractal is the universal perspex machine. It can solve, in unit time, the halting problem for itself and for all perspex machines instantiated in real-numbered space, including all Turing machines.
We cite an experiment that has been proposed to test the physical reality of the perspex machine's model of time, but we make no claim that the physical universe works this way or that it has the cardinality of the perspex machine. We conclude only that the perspex machine provides an upper bound on the computational properties of physical things, including manufactured computers and biological organisms, that have a cardinality no greater than that of the real-number line.
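A small sketch of the division rules described above, using IEEE values as carriers for the two infinities and nullity; note that the paper's transreal arithmetic is a proper redefinition (e.g. nullity equals itself), not IEEE floating point, so this is only illustrative.

    import math

    NULLITY = float('nan')   # stand-in for transreal nullity (Phi)
    INF = float('inf')

    def transreal_div(a, b):
        """Division made total, following the rules described above:
        a/0 is +inf for a > 0, -inf for a < 0, and nullity for a = 0;
        nullity propagates through every operation."""
        if math.isnan(a) or math.isnan(b):
            return NULLITY               # nullity absorbs
        if b == 0:
            if a > 0:
                return INF
            if a < 0:
                return -INF
            return NULLITY               # 0/0 = nullity
        return a / b                     # IEEE handles the remaining cases

    # e.g. transreal_div(1, 0) -> inf; transreal_div(0, 0) -> nullity;
    #      transreal_div(INF, INF) -> nullity (infinity times its reciprocal)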