This paper surveys the technologies that have emerged over the 25 years since the Human Vision and Electronic Imaging conference began: technologies the conference has been a part of, and that have been a part of the conference. It also examines technologies emerging today, such as social networks, haptic interfaces, and still-maturing imaging technologies, and considers what we might expect in the future.

Twenty-five years is a long time, and it is not without difficulty that we remember what was emerging in the late 1980s. The first commercial digital still camera was not yet on the market, although there were handheld electronic cameras. Personal computers were not displaying standardized images, and image quality was not something that could be discussed in a standardized fashion, if only because image compression algorithms would not be standardized for several years. Standards for movie compression were even further away, and there was no personal computer on the horizon that could have displayed such movies. Image comparison and search, which later became an emergent technology and filled many sessions, was not yet possible; neither was the current emerging technology of social networks, since the World Wide Web was still several years off. Printer technology was still devising dithers and image-size manipulations that would consume many years, as would scanning technology, and image quality for both was a major issue, dominated by dither artifacts and Fourier noise.

From these humble beginnings to the current shifts that are changing computing and the meaning of both electronic devices and human interaction with them, we trace a course through the changing technology, in which some features hold constant for many years while others come and go.
This paper discusses a simulation of a model presented at this conference three years ago: a neuron as a micro-machine for doing metaphor by cognitive blending. We give the background of the model, discuss the difficulties of building such a model, and describe a simulation based on texture synthesis structures and texture patches, glued together using Formal Concept Analysis. Because of this gluing, and because of the intertwining of hyperbolic and Euclidean geometry and of local activation, an interesting fundamental connection is discovered between analogical processing and glial and neural processing.
The Haboku Landscape of Sesshu Toyo is perhaps one of the finest examples of Japanese and Chinese monk landscape painting in existence. We analyze the factors that went into this painting from an artistic and aesthetic perspective, and we model the painting using an MPEG-7 description. We examine work done on rendering ink landscapes using computer-generated non-photorealistic rendering (NPR). Finally, we make some observations about measuring aesthetics in Chinese and Japanese ink painting.
In this paper, we lay the groundwork for a model of extended dendritic processing based on temporal signalling, using a model in hyperbolic space. The goal is to create a processing environment in which metaphorical and analogical processing is natural to the components. A secondary goal is to create a processing model that is naturally complex, naturally grounded in fractal and complex flows, and that bases communication on compatibility rather than duplication. This is still a work in progress, but some gains are made in creating the background model.
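One concrete way to realize the hyperbolic side of such a model is the Poincaré disk. As a hedged illustration (the function and the sample points below are my own, not part of the model), the standard disk-model distance shows the exponential room available near the boundary:

```python
import math

def poincare_distance(u, v):
    """Hyperbolic distance between points u, v inside the unit disk
    (Poincare disk model); each point is an (x, y) tuple with |p| < 1."""
    uu = u[0] ** 2 + u[1] ** 2                    # |u|^2
    vv = v[0] ** 2 + v[1] ** 2                    # |v|^2
    uv = (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2  # |u - v|^2
    return math.acosh(1.0 + 2.0 * uv / ((1.0 - uu) * (1.0 - vv)))

# The same Euclidean step covers far more hyperbolic distance near the rim,
# which is what gives hyperbolic models their room for branching structure.
near_center = poincare_distance((0.0, 0.0), (0.1, 0.0))
near_edge = poincare_distance((0.8, 0.0), (0.9, 0.0))
```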
We look at a characterization of metaphor from cognitive linguistics, extracting the salient features of metaphorical processing. We examine the neurobiology of dendrites, specifically spike-timing-dependent plasticity (STDP) and the modulation of backpropagating action potentials (bAPs), to generate a neuropil-centric model of cortical processing based on signal timing and reverberation between regions. We show how this model supports the basic features of metaphorical processing previously extracted. Finally, we model this system using a combination of Euclidean, projective, and hyperbolic geometries, and show how the resulting model accounts for this processing and relates to other neural network models.
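As a minimal illustration of the STDP ingredient (the parameter values and function name are illustrative defaults, not values from the model), a pair-based STDP rule maps the pre/post spike-time difference to a signed weight change:

```python
import math

def stdp_dw(delta_t_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP weight change for one pre/post spike pair.
    delta_t_ms = t_post - t_pre: positive when the presynaptic spike
    precedes the postsynaptic one (causal order -> potentiation)."""
    if delta_t_ms > 0:
        return a_plus * math.exp(-delta_t_ms / tau_ms)   # LTP branch
    if delta_t_ms < 0:
        return -a_minus * math.exp(delta_t_ms / tau_ms)  # LTD branch
    return 0.0
```

The exponential window is what makes signal *timing*, rather than mere coincidence, the carrier of information between regions.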
The Retinex algorithm, in its incarnation as McCann'99, presents an interesting mix of a locally connected iterative algorithm and a multiresolution analysis of the image. Recasting the algorithm in terms of wavelets brings its behavior to light, and allows generalizations to be proposed through changes in both the multiresolution structure and the iterative update structure.
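For orientation, a toy sketch of the locally connected iterative update (a 1-D, single-scale, log-domain version of the ratio-product-reset-average step, with wraparound boundaries for brevity; the multiresolution pyramid and the wavelet recasting discussed above are not shown):

```python
import numpy as np

def mccann_iterate(log_img, n_iter=4):
    """Single-scale sketch of the ratio-product-reset-average update.
    log_img is assumed normalized so the maximum log luminance is 0
    ('white'). Boundaries wrap via np.roll, purely for brevity."""
    op = np.zeros_like(log_img)  # initial estimate: everything white
    for _ in range(n_iter):
        for shift in (1, -1):    # left and right neighbors
            neigh_op = np.roll(op, shift)
            neigh_im = np.roll(log_img, shift)
            ratio_prod = neigh_op + (log_img - neigh_im)  # ratio, then product
            reset = np.minimum(ratio_prod, 0.0)           # reset: clip at white
            op = 0.5 * (op + reset)                       # average with old value
    return op

log_img = np.log(np.array([0.1, 0.5, 1.0, 0.5, 0.1]))  # max normalized to 1.0
op = mccann_iterate(log_img)
```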
We present the results of a study of the plausibility of inverting the image analysis done in the human visual system. We examine the arguments for such inversion in recall and visual imagery, and look at the requirements for creating imagery by inverting one or multiple layers of function in neural nets. We then show how such a reversal of visual system processing can take place in stages, or all at once. This is done by means of finite Radon transforms on certain geometries, and we examine the possibility that such situations exist in the human visual system. We create a system dual to certain feed-forward network models of visual processing, and show its application to such processing and to non-image processing applications.
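A small self-contained example of an exactly invertible finite Radon transform, here on the affine plane Z_p x Z_p (the construction is standard; the specific indexing scheme is my own). The inversion rests on a counting fact: the p+1 lines through a point cover that point p+1 times and every other point exactly once, so subtracting the total and dividing by p recovers the pixel.

```python
import numpy as np

P = 5  # a prime: the image is a function on the grid Z_P x Z_P

def finite_radon(f):
    """Line sums of f over all lines of the affine plane Z_P x Z_P.
    Lines y = m*x + b are keyed by (m, b); vertical lines x = c by ('v', c)."""
    xs = np.arange(P)
    sums = {}
    for m in range(P):
        for b in range(P):
            sums[(m, b)] = f[xs, (m * xs + b) % P].sum()
    for c in range(P):
        sums[('v', c)] = f[c, :].sum()
    return sums

def invert(sums):
    """Recover f exactly from its line sums."""
    # Every pixel lies on exactly one vertical line, so this is the grand total.
    total = sum(sums[('v', c)] for c in range(P))
    f = np.zeros((P, P))
    for x in range(P):
        for y in range(P):
            through = sum(sums[(m, (y - m * x) % P)] for m in range(P)) + sums[('v', x)]
            f[x, y] = (through - total) / P
    return f

rng = np.random.default_rng(0)
img = rng.random((P, P))
recovered = invert(finite_radon(img))
```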
Proc. SPIE. 3644, Human Vision and Electronic Imaging IV
KEYWORDS: Visual process modeling, Optical spheres, Detection and tracking algorithms, Data modeling, Visualization, Sensors, 3D modeling, Human vision and color perception, Systems modeling, Fuzzy logic
We present a method for reconstructing multidimensional scaling (MDS) as a biologically plausible algorithm for storing object data. To do so, we must make modifications in the definitions of stress, and in locating the process. We make these modifications by appealing to physical definitions of stress and deformation. In this system, classical MDS becomes the system in which these are modeled on perfectly elastic deformation, and a variety of systems can be created or trained which are, by contrast, viscoelastic. The resultant model is useful in applications in which the relationship between stress and the underlying metric used for MDS is complicated by local phenomena, or in which these quantities need to be modeled as learned or changing attributes.
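For reference, the classical ("perfectly elastic") baseline that this work generalizes can be sketched as Torgerson's double-centering procedure (a standard construction; the viscoelastic and learned variants described above are not shown):

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed n points in R^k from an n x n
    pairwise distance matrix d, via double centering and eigendecomposition."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    b = -0.5 * j @ (d ** 2) @ j           # Gram matrix of the centered points
    w, v = np.linalg.eigh(b)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]         # keep the k largest
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Euclidean distances from a planar configuration are reproduced exactly.
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
emb = classical_mds(d, k=2)
d_emb = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
```

The exact recovery in this example is precisely the "perfectly elastic" behavior; the point of the paper's generalization is to let the stress-to-metric relationship depart from this ideal.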
We present a generalization of the Radon transform which fits many tasks in image processing, and is useful in modeling the human visual system. In an analogy with wavelets, we propose a transform localized at points, and translated over the image plane, and refer to this as a parallel sensor transform. We examine the relationship between this transform and wavelet transforms developed to describe visual system processes. The transform captures the image data in that it is injective. Using this starting point, we present a continuous analog of the four-stage edge detection breakdown by Bezdek et al., and arrive at a framework for casting many kinds of image processing algorithms in a biologically plausible manner: as feed-forward, receptive-field-based algorithms using known operations. We show how this leads to an optimization scheme for Radon transform based algorithms and show the results of applying this theory to biologically plausible algorithms for motion and color processing.
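A toy version of a point-localized, translated line-sum transform (purely illustrative; the paper's actual parallel sensor transform and its injectivity argument are not reproduced here) shows the feed-forward, receptive-field flavor of the idea:

```python
import numpy as np

def local_line_sums(img, radius=2, n_angles=4):
    """At each pixel, sum intensities along a short line segment at each of
    n_angles orientations inside a (2*radius+1)-wide window: a feed-forward,
    receptive-field-style operation translated over the whole image plane."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros((n_angles, h, w))
    ts = np.arange(-radius, radius + 1)
    for a in range(n_angles):
        theta = np.pi * a / n_angles
        dx = np.rint(ts * np.cos(theta)).astype(int)
        dy = np.rint(ts * np.sin(theta)).astype(int)
        for ox, oy in zip(dx, dy):
            # Shifted copy of the padded image: one sample along the segment,
            # accumulated for every pixel at once.
            out[a] += pad[radius + oy: radius + oy + h,
                          radius + ox: radius + ox + w]
    return out

# On a constant image every oriented sum is just (2*radius + 1) * value.
responses = local_line_sums(np.ones((6, 6)))
```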
Given the great diversity of pathways into which the visual system signal splits after arriving at area V1, many researchers have proposed solutions to the 'binding problem' of reunifying the information after specialized processing. Most solutions require pathways to maintain synchrony and share information, which in turn requires some similarity of mechanisms and/or the spaces in which they operate. We examine the extent to which such similarity can occur between motion and color processing pathways, by using a multiple stage motion detection algorithm for processing color change. We first review the motion algorithm chosen, then we present a model for certain changes in hue, discuss the possible uses for such processes in the visual system, and present results of applying this model to both motion and color in this manner.
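The specific multiple-stage motion algorithm used here is not reproduced; as a generic stand-in, a correlation-type (Reichardt) detector illustrates how a single operator can signal directed change in any scalar channel, whether luminance along space or hue along a perceptual axis:

```python
import numpy as np

def reichardt(frame_prev, frame_curr):
    """Correlation-type (Reichardt) motion detector along x: the delayed
    signal at each point is correlated with the undelayed signal at its
    neighbor, and the two mirror-image subunits are subtracted to give a
    signed, direction-selective response."""
    right = frame_prev[:-1] * frame_curr[1:]   # subunit preferring rightward motion
    left = frame_prev[1:] * frame_curr[:-1]    # subunit preferring leftward motion
    return right - left

# A bump shifted one sample to the right yields a net positive response;
# the same operator applied to a hue signal flags directed hue change.
sig = np.array([0, 0, 1, 2, 1, 0, 0], dtype=float)
moving_right = reichardt(sig, np.roll(sig, 1))
```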