Address Event Representation (AER) is an emerging neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between a huge number of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timing), synaptic neural connections can be time-multiplexed, while neural activity signals (with millisecond timing) are sampled at low frequencies. Neurons generate "events" according to their activity levels: more active neurons generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. When building multi-chip, multi-layered AER systems, it is absolutely necessary to have a computer interface that allows (a) reading AER interchip traffic into the computer and visualizing it on the screen, and (b) converting a conventional frame-based video stream in the computer into AER and injecting it at some point of the AER structure. This is necessary for testing and debugging complex AER systems. On the other hand, the use of a commercial personal computer implies depending on software tools and operating systems that can make the system slower and less robust.
This paper addresses the problem of interconnecting several AER-based chips to compose a powerful processing system. The problem was discussed at the Neuromorphic Engineering Workshop of 2006. The platform is basically based on an embedded computer, a powerful FPGA and serial links, to make the system faster and stand-alone (independent from a PC). A new platform is presented that allows connecting up to eight AER-based chips to a Spartan 3 4000 FPGA. The FPGA is responsible for the Address-Event-based network communication and, at the same time, for mapping and transforming the address space of the traffic to implement pre-processing. A microprocessor with an MMU (Intel XScale 400 MHz Gumstix Connex computer) is also connected to the FPGA to allow the platform to implement event-based algorithms that interact with the AER system, such as control algorithms, network connectivity, USB support, etc. The LVDS transceiver allows a bandwidth of up to 1.32 Gbps, around 66 mega-events per second (Mevps).
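As a rough consistency check of the quoted link figures (the per-event word width is inferred here, not stated in the abstract), the serialized event size implied by the two numbers can be computed directly:

```python
# Back-of-the-envelope check of the LVDS figures quoted above.
# The resulting ~20 bits/event is an inference, not a published frame format.
link_rate_bps = 1.32e9    # LVDS bandwidth, 1.32 Gbps
event_rate_eps = 66e6     # ~66 mega-events per second (Mevps)

bits_per_event = link_rate_bps / event_rate_eps
print(f"Implied serialized event width: {bits_per_event:.0f} bits")  # ~20 bits
# Consistent with, e.g., a 16-bit AER address plus a few framing bits,
# though the actual encoding on the link is an assumption here.
```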
Address Event Representation (AER) is an emerging neuromorphic interchip communication protocol that allows real-time virtual massive connectivity among a huge number of neurons located on different chips.[1] By exploiting high-speed digital communication circuits (with nanosecond timing), synaptic neural connections can be time-multiplexed, while neural activity signals (with millisecond timing) are sampled at low frequencies. Neurons generate "events" according to their activity levels; that is, more active neurons generate more events per unit time and access the interchip communication channel more frequently than neurons with low activity. In neuromorphic system development, AER brings some advantages for developing real-time image processing systems: (1) AER represents information as a time-continuous stream rather than as frames; (2) AER sends the most important information first (although this depends on the sender); (3) AER allows information to be processed as soon as it is received.
When AER is used in the artificial vision field, each pixel is considered as a neuron, so a pixel's intensity is represented as a sequence of events; by modifying the number and frequency of these events, it is possible to perform image filtering.
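As an illustrative sketch of this idea (not the authors' implementation; the rate-coding parameters below are assumptions), a pixel's intensity can be converted into a stream of timestamped address events, so that brighter pixels fire more often:

```python
import numpy as np

def intensity_to_events(image, duration_s=0.1, max_rate_hz=1000, seed=0):
    """Rate-code an 8-bit grayscale image into AER-style (timestamp, x, y) events.

    Each pixel is treated as a neuron whose mean firing rate is proportional
    to its intensity; event times are drawn from a Poisson process.
    The duration and maximum rate are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    events = []
    rates = (image.astype(float) / 255.0) * max_rate_hz   # firing rate per pixel, Hz
    for (y, x), rate in np.ndenumerate(rates):
        n = rng.poisson(rate * duration_s)                # events in the time window
        times = np.sort(rng.uniform(0.0, duration_s, n))
        events.extend((t, x, y) for t in times)
    events.sort(key=lambda e: e[0])                       # one time-ordered event stream
    return events

# Example: a tiny 2x2 image; brighter pixels produce more events.
demo = np.array([[0, 64], [128, 255]], dtype=np.uint8)
stream = intensity_to_events(demo)
print(len(stream), "events; first:", stream[:3])
```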
In this paper we present four image filters using AER: (a) noise addition and suppression, (b) brightness modification, (c) single moving object tracking and (d) geometrical transformations (rotation, translation, reduction and magnification). For testing and debugging, we use the USB-AER board developed by the Robotic and Technology of Computers Applied to Rehabilitation (RTCAR) research group. This board is based on an FPGA devoted to managing the AER functionality. The board also includes a microcontroller for USB communication, 2 MB of RAM and 2 AER ports (one for input and one for output).
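For instance, a brightness-modification filter of the kind listed in (b) can be approximated purely on the event stream by probabilistically dropping or duplicating events. This is a hedged software sketch of the concept, not the board's FPGA implementation:

```python
import random

def scale_brightness(events, gain, seed=0):
    """Scale the apparent brightness of an AER stream by changing event counts.

    events : iterable of (timestamp, x, y) tuples
    gain   : < 1 drops events (darker), > 1 duplicates some events (brighter).
    Software sketch only; on the USB-AER board this would run in the FPGA.
    """
    rng = random.Random(seed)
    out = []
    for t, x, y in events:
        copies = int(gain)                     # whole copies for gain >= 1
        if rng.random() < (gain - copies):     # fractional part handled stochastically
            copies += 1
        out.extend([(t, x, y)] * copies)
    return out

# Halving the event rate roughly halves the perceived brightness.
darker = scale_brightness([(0.001, 3, 5), (0.002, 3, 5), (0.003, 7, 1)], gain=0.5)
print(len(darker), "events kept")
```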
Smoke detection and monitoring are required for the implementation of advanced forest fire fighting strategies and for the validation of smoke dispersion models; the latter involves the measurement of smoke column properties. The method proposed in this paper is based on the application of computer-based image processing techniques to visual images taken from fire-spread tests, and involves the application of wavelets and optical flow for fire smoke detection and monitoring. A set of experimental results is reported in the paper, showing the interest of the presented system.
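The abstract gives no implementation details; as a hedged illustration of the optical-flow part of such a pipeline only, dense flow between consecutive frames can be computed with OpenCV and thresholded on motion magnitude (the wavelet-based analysis used in the paper is not reproduced here, and the threshold value is an assumption):

```python
import cv2
import numpy as np

def smoke_motion_mask(prev_frame, frame, mag_thresh=1.0):
    """Rough motion mask from dense optical flow between two consecutive BGR frames.

    Only the optical-flow stage is sketched; the paper combines it with
    wavelet analysis, which is omitted here.
    """
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return (mag > mag_thresh).astype(np.uint8) * 255   # candidate moving-smoke pixels
```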
KEYWORDS: 3D modeling, Image processing, Visualization, Cameras, Geographic information systems, 3D image processing, 3D metrology, Computing systems, Visual process modeling, Data modeling
This paper presents a system for forest fire monitoring using aerial images. The system uses the images taken from a helicopter, the GPS position of the helicopter, and information from a Geographic Information System (GIS) to locate the fire and to estimate its properties in real time. Currently, the images are taken by a non-stabilized camera, so image processing for image stabilization and movement estimation is applied to cancel the vibration and to estimate the change in the camera orientation. Another image processing stage is the computation of the fire front and flame height features in the images. This process is based on color processing and thresholding, followed by contour computation. Finally, the fire front is automatically geo-located by projecting the features over the terrain model obtained from the GIS, and an estimation of the flame height is obtained. The aerial image processing, automatic georeferencing and measurement have been integrated in a forest fire monitoring system in which several moving or fixed visual and infrared cameras can be used. The system provides in real time the evolution of the fire front and the flame height, and obtains a 3D perception model of the fire. The paper shows some results obtained by applying the system to images taken in real forest-fire experiments, in the framework of the INFLAME project funded by the European Commission.
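As a rough sketch of the colour-thresholding and contour stage described above (the HSV range, morphology kernel and area filter are illustrative assumptions; the paper does not publish its thresholds), flame-coloured pixels can be segmented and their contour extracted as follows:

```python
import cv2
import numpy as np

def fire_front_contours(bgr_frame, min_area=50):
    """Segment flame-coloured pixels in a BGR frame and return their contours.

    The HSV range below is only an illustrative guess at 'flame colour';
    the original system's colour model is not given in the abstract.
    """
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Bright, saturated red/orange/yellow pixels (assumed flame range).
    mask = cv2.inRange(hsv, (0, 120, 180), (35, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) >= min_area]
```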