In this paper we present a novel FPGA implementation of the Consultative Committee for Space Data Systems Image Data Compression standard (CCSDS-IDC 122.0-B-1) for performing image compression aboard the Polarimetric and Helioseismic Imager instrument of ESA's Solar Orbiter mission. This is a System-on-Chip solution based on a lightweight multicore architecture combined with an efficient ad-hoc Bit Plane Encoder core. This hardware architecture achieves a speed-up of roughly 30x over a software implementation running on space-qualified processors such as the LEON3. The system stands out among other FPGA implementations for its low resource usage, for requiring no external memory, and for its configurability.
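The Bit Plane Encoder at the heart of CCSDS-IDC transmits transform coefficients one bit plane at a time, from most to least significant, so the stream can be truncated at any quality point. As a minimal illustrative sketch of the bit-plane idea only (the function name and interface are ours, not the standard's coding stages):

```python
def bit_planes(coeffs, num_bits=8):
    """Split non-negative integer coefficients into bit planes,
    most significant plane first (plane index num_bits-1 .. 0)."""
    planes = []
    for b in range(num_bits - 1, -1, -1):
        planes.append([(c >> b) & 1 for c in coeffs])
    return planes

# Example: 5 = 0b101 and 3 = 0b011, decomposed into 3 bit planes.
planes = bit_planes([5, 3], num_bits=3)
# planes[0] is the MSB plane: [1, 0]
```

Emitting planes in this order is what lets a decoder reconstruct progressively better approximations as more of the stream arrives.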
The CAFADIS camera is a new sensor patented by Universidad de La Laguna (Canary Islands, Spain): international
patent PCT/ES2007/000046 (WIPO publication number WO/2007/082975). It can measure the wavefront phase and the
distance to the light source simultaneously, in a real-time process. It uses specialized hardware: Graphical Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). Both kinds of hardware offer an architecture capable of handling the sensor output stream in a massively parallel fashion. FPGAs are faster than GPUs, which is why it is worth using FPGA integer arithmetic instead of GPU floating-point arithmetic.
GPUs must not be forgotten: as we have shown in previous papers, they are efficient enough to solve several AO problems for Extremely Large Telescopes (ELTs) in terms of processing-time requirements; in addition, GPUs show a widening gap in computing speed relative to CPUs. They are far better suited to implementing AO simulation than common software packages running on CPUs.
Our paper shows an FPGA implementation of the wavefront phase recovery algorithm using the CAFADIS camera. This is done in two steps: the estimation of the telescope pupil gradients from the telescope focus image, followed by a novel 2D-FFT on the FPGA. Processing times are compared with our GPU implementation. In effect, we are comparing the two kinds of arithmetic mentioned above, thereby helping to assess the viability of FPGAs for AO in the ELTs.
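The abstract's pipeline recovers a phase map from measured gradients; the paper does this with a 2D-FFT on the FPGA. As a deliberately simplified 1-D illustration of the underlying idea only (phase is the integral of its gradient, up to a constant; this cumulative-sum sketch is ours, not the paper's FFT-based method):

```python
def integrate_slopes_1d(slopes, step=1.0):
    """Recover a 1-D phase profile from gradient (slope) samples by
    cumulative integration; the result is defined up to a constant."""
    phase = [0.0]  # arbitrary reference value at the first sample
    for s in slopes:
        phase.append(phase[-1] + s * step)
    return phase

# A constant slope of 0.5 yields a linear phase ramp:
integrate_slopes_1d([0.5, 0.5, 0.5])  # -> [0.0, 0.5, 1.0, 1.5]
```

The FFT formulation solves the same inversion in the frequency domain, which is what maps so well onto the parallel FPGA fabric.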
ELT laser guide star wavefront sensors are expected to handle an overwhelmingly large amount of data (1600x1600 pixels at 700 fps). Given the calculations involved, candidate solutions should consider running on specialized hardware such as Graphical Processing Units (GPUs) or Field Programmable Gate Arrays (FPGAs), among others.
If a Shack-Hartmann wavefront sensor is finally selected, the wavefront slopes can be computed using centroid or correlation algorithms. Most developments use centroid algorithms, but precision must be taken into account too, and in that respect correlation algorithms are highly competitive.
This paper presents an FPGA-based wavefront slope implementation, capable of handling the sensor output stream in a massively parallel fashion, using a correlation algorithm previously tested and compared against the centroid algorithm. Processing times are shown, demonstrating the suitability of FPGA integer arithmetic for this problem.
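The two slope estimators compared in this work can be sketched in a few lines each. Below is an illustrative pure-Python version (our own simplification; the paper's FPGA design works on the streaming sensor output with integer arithmetic): a centroid is the intensity-weighted centre of gravity of a subaperture spot, while a correlation estimator finds the shift that best matches a reference.

```python
def centroid(spot):
    """Intensity-weighted centre of gravity (x, y) of a subaperture
    image: the simplest Shack-Hartmann slope estimator."""
    total = sum(sum(row) for row in spot)
    cx = sum(x * v for row in spot for x, v in enumerate(row)) / total
    cy = sum(y * v for y, row in enumerate(spot) for v in row) / total
    return cx, cy

def corr_shift_1d(signal, ref):
    """Integer shift of `signal` relative to `ref` found by brute-force
    cross-correlation (argmax of the correlation score)."""
    best_shift, best_score = 0, float("-inf")
    n = len(ref)
    for shift in range(-n + 1, n):
        score = sum(ref[i] * signal[i + shift]
                    for i in range(n) if 0 <= i + shift < n)
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift

# A single bright pixel at column 2, row 1:
spot = [[0, 0, 0],
        [0, 0, 9],
        [0, 0, 0]]
centroid(spot)  # -> (2.0, 1.0)
```

Both estimators reduce each subaperture to a pair of slope values; the correlation variant is more robust to elongated laser guide star spots, at a higher computational cost.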
The selected architecture is based on today's commercially available FPGAs, which have a very limited amount of internal memory. This limits the dimensions used in our implementation, but it also means there is ample margin for moving real-time algorithms from conventional processors to future FPGAs, benefiting from their flexibility, speed and intrinsically parallel architecture.
Proc. SPIE. 6589, Smart Sensors, Actuators, and MEMS III
KEYWORDS: Microelectromechanical systems, Sensors, Interfaces, Field programmable gate arrays, Control systems, Telecommunications, Transducers, Data communications, Smart sensors, Iterated function systems
The main objective of this paper is to develop a distributed architecture for integrating MEMS based on a hierarchical
communications system governed by a master node. A micro-electromechanical system (MEMS) integrates a sensor
with its signal conditioner and communications interface, thus reducing mass, volume and power consumption. In
pursuing this objective, we have developed an Intellectual Property (IP) model with VHSIC Hardware Description
Language (VHDL) for the bus interface that can be easily added to the micro-system. The connection between the
MEMS incorporating this module and the sensor network is straightforward.
The core thus developed contains an Interface File System (IFS) that supplies all the information related to the micro-system we wish to connect to the network, allowing its specific characteristics to be isolated from the micro-instrument. This allows all the nodes to have the same interface.
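The IFS idea is that every node exposes its device-specific properties through one uniform, addressable record store, so the micro-instrument never touches device internals. A minimal sketch of that concept in Python (the record fields and method names here are illustrative assumptions, not the paper's VHDL layout):

```python
from dataclasses import dataclass

@dataclass
class IFSRecord:
    """One Interface File System entry: a named, addressable record
    describing a property of the attached micro-system."""
    name: str
    address: int
    value: bytes = b""

class InterfaceFileSystem:
    """Minimal IFS: every node offers the same read/write interface,
    insulating the micro-instrument from device specifics."""
    def __init__(self):
        self._records = {}

    def register(self, record):
        self._records[record.address] = record

    def read(self, address):
        return self._records[address].value

    def write(self, address, value):
        self._records[address].value = value

# A master node talks to any sensor through the same calls:
ifs = InterfaceFileSystem()
ifs.register(IFSRecord(name="temperature", address=0x10, value=b"\x00"))
ifs.write(0x10, b"\x1a")
# ifs.read(0x10) -> b"\x1a"
```

Because every node presents this same interface, swapping one MEMS for another changes only the registered records, not the network-facing logic.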
To support complexity management and composability, there are a real-time service interface and a non-time-critical configuration interface. The design therefore includes a new node-integration VHDL module.
The design has been implemented in a Field Programmable Gate Array (FPGA) and was successfully tested. The FPGA implementation makes the designed nodes small, flexible, customizable, reconfigurable and reprogrammable, with the advantages of easy customization, cost-effectiveness, integration, accessibility and expandability. The VHDL hardware solution is a key feature for size reduction: the system can be resized according to its needs by taking advantage of the VHDL description.
A microelectromechanical system (MEMS) merges integrated sensors, microactuators, and low-power electronics. These systems normally have a local sensor communication bus managed by a master node. The purpose of this work is to implement a communication interface that connects the integrated MEMS local bus (through the master node) to a high-level microinstrument communication bus. The basic philosophy of this development has been to create an IP model with VHDL for the bus interface module. This interface can easily be added to a microsystem and, from the point of view of microinstrument design methodology, MEMS based on this interface module can easily be plugged into the other microsystems of a microinstrument architecture. The IP developed is based on the concept of an Interface File System (IFS) that contains all relevant information about the microsystem. Using the IFS in integrated microsystem design insulates a microsystem's particular characteristics from the rest of the microinstrument. The IFS also has an associated communication model that allows different views of the system, such as a real-time or command service view and a configuration and diagnostic service view. The implementation experiences presented in this paper show that the IFS reduces the complexity of microinstrument applications and eases MEMS reuse in other microinstruments. The IFS-based communication module was successfully tested between microsystems based on a local sensor bus, namely IBIS (Interconnection Bus for Integrated Sensors), and a generic real-time microinstrument bus.