Proc. SPIE. 10792, High-Performance Computing in Geoscience and Remote Sensing VIII
KEYWORDS: Reliability, Computing systems, Field programmable gate arrays, Data processing, Software development, Distributed computing, Space operations, Computer architecture, Commercial off the shelf technology, Standards development
Future onboard computing applications require significantly greater computing performance than is currently provided by standard space-qualified onboard computers. Examples of such applications include onboard data analysis and (rendezvous) navigation tasks. Therefore, the German Aerospace Center is currently developing Scalable On-board Computing for Space Avionics (ScOSA). The aim of the ScOSA onboard computing platform is to deliver high performance, reliability, scalability and cost-efficiency. To achieve these properties, a distributed computing approach is used, in which reliable radiation-hardened computing nodes (LEON3s) are combined with several high-performance computing nodes (Xilinx Zynq), connected over a high-bandwidth SpaceWire network. The execution platform consists of a distributed task-based framework. In this paper, the architecture, features and capabilities of the ScOSA onboard computing platform are presented from an application developer's view. A brief summary of the design goals and the general hardware and software architecture of the ScOSA system is given. This is followed by a description of the programming model and the application interface, with a focus on how the distributed nature of the ScOSA system is handled. It is also shown how an existing application can be integrated into the ScOSA system. The main part of this paper focuses on the computing performance attainable from the ScOSA platform, comparing the performance of an example application executed on the ScOSA system with that of a standard PC. It is also demonstrated how the performance of an application can be improved by adapting it to the distributed computing architecture of the ScOSA platform. Furthermore, a short overview of the failure detection and recovery features of the ScOSA platform is given, together with how they can be integrated into an application.
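The idea of a distributed task-based execution platform can be illustrated with a minimal sketch. All names here (`Channel`, `Task`, `Scale`, `run_round_robin`) are illustrative assumptions, not the actual ScOSA API; a channel stands in for a SpaceWire link, and the trivial scheduler stands in for the mapping of tasks to LEON3 and Zynq nodes.

```python
class Channel:
    """Typed FIFO connecting two tasks (stands in for a SpaceWire link)."""
    def __init__(self):
        self._queue = []
    def send(self, item):
        self._queue.append(item)
    def receive(self):
        return self._queue.pop(0)
    def empty(self):
        return not self._queue

class Task:
    """A unit of computation with input and output channels."""
    def __init__(self, inputs=(), outputs=()):
        self.inputs, self.outputs = list(inputs), list(outputs)
    def execute(self):
        raise NotImplementedError

class Scale(Task):
    """Example task: multiply every incoming value by a constant factor."""
    def __init__(self, factor, inputs, outputs):
        super().__init__(inputs, outputs)
        self.factor = factor
    def execute(self):
        while not self.inputs[0].empty():
            self.outputs[0].send(self.factor * self.inputs[0].receive())

def run_round_robin(tasks):
    """Trivial single-node scheduler; a real system would distribute
    tasks across nodes and restart them on another node after a failure."""
    for task in tasks:
        task.execute()
```

Because the application is expressed as tasks connected by channels rather than as one monolithic program, the framework is free to place tasks on different nodes, which is the property the paper's performance and recovery discussion builds on.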
Driven by political and economic interests, the monitoring of worldwide ship traffic is a highly topical field. Detecting illegal activities such as piracy, illegal fishing, ocean dumping and refugee transport is of great value. The analysis of satellite images on the ground contributes substantially to situational awareness. However, for many applications the timeliness of the data is crucial. With ground-based processing, the time between image acquisition and delivery of the data to the end user is in the range of several hours. The largest contribution to this latency is the transmission of the large amount of image data from the satellite to the processing centre on the ground. One expensive solution to this issue is the use of data relay satellite systems such as EDRS. Another approach is to analyse the image data directly on board the satellite. Since the product data (e.g. ship position, heading, velocity, characteristics) is very small compared to the input image data, real-time connections provided by satellite telecommunication services such as Iridium or Orbcomm can be used to send small packets of information directly to the end user without significant delay. The AMARO (Autonomous real-time detection of moving maritime objects) project at DLR is a feasibility study of an on-board ship detection system involving real-time low-bandwidth communication. The operation of a prototype on-board ship detection system will be demonstrated on an airborne platform. In this article, the scope, aim and design of a flight experiment for an on-board ship detection system, scheduled for mid-2018, are presented. First, the scope and the constraints of the experiment are explained in detail. The main goal is to demonstrate the operability of an automatic ship detection system on board an aircraft. For data acquisition, the optical high-resolution DLR MACS-MARE camera (VIS/NIR) is used.
The system will be able to send product data, such as position, size and a small image of the ship, directly to the user's smartphone by email. The time between the acquisition of the image data and the delivery of the product data to the end user is intended to be less than three minutes. For communication, the SMS-like Iridium Short Burst Data (SBD) service was chosen, providing a message size of around 300 bytes. Under optimal sending/receiving conditions, messages can be transmitted bidirectionally every 20 seconds. Because of the very small data bandwidth, not all product data may be transmittable at once, for instance when flying over busy ship traffic zones. Therefore, the system offers two services: a query service and a push service. With the query service, the end user can explicitly request data for a defined location and a fixed time period by posting queries in an SQL-like language. With the push service, events can be predefined and messages are received automatically if and when an event occurs. Finally, the hardware set-up, details of the ship detection algorithms and the current status of the experiment are presented.
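The roughly 300-byte SBD payload budget can be made concrete by packing ship reports into fixed-size binary records. The field layout below is an illustrative assumption (the actual AMARO message format is not given in the text); it only shows why several compact reports fit into a single SBD message.

```python
import struct

# Hypothetical fixed-size ship report for an Iridium SBD message.
# Little-endian, no padding: timestamp (u32), lat/lon scaled by 1e6 (i32),
# heading in degrees, speed in tenths of a knot, ship length in metres (i16).
REPORT_FMT = "<Iiihhh"
REPORT_SIZE = struct.calcsize(REPORT_FMT)  # 18 bytes per report

def pack_report(ts, lat, lon, heading_deg, speed_kn10, length_m):
    return struct.pack(REPORT_FMT, ts, int(lat * 1e6), int(lon * 1e6),
                       heading_deg, speed_kn10, length_m)

def pack_message(reports, limit=300):
    """Concatenate reports, enforcing the assumed SBD payload limit."""
    payload = b"".join(pack_report(*r) for r in reports)
    if len(payload) > limit:
        raise ValueError("message exceeds SBD payload limit")
    return payload
```

With 18 bytes per report, 16 detections fit in one ~300-byte message, which motivates the query/push design: the system selects which reports to send rather than transmitting everything.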
The detection of ships from remote sensing data has become an essential task for maritime security. The variety of application scenarios includes piracy, illegal fishing, ocean dumping and ships carrying refugees. While techniques using data from SAR sensors for ship detection are widely established, there is only little literature discussing algorithms based on imagery from optical camera systems. A ship detection algorithm for optical pushbroom data has been developed. It takes advantage of the special detector assembly of most of these scanners, which allows not only the detection of a ship but also the calculation of its heading from a single acquisition. The proposed algorithm for the detection of moving ships was developed with RapidEye imagery. The algorithm consists mainly of three steps: the creation of a land-water mask, the extraction of objects, and the deeper examination of each single object. The latter step comprises several spectral and geometric filters, making heavy use of the inter-channel displacement typical for pushbroom sensors with multiple CCD lines, finally yielding a set of ships and their directions of movement. The working principle of time-shifted pushbroom sensors and the developed algorithm are explained in detail. Furthermore, we present our first results and give an outlook on future improvements.
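The core geometric idea, recovering motion from the apparent displacement of a ship between spectral channels recorded a short time apart, can be sketched in a few lines. The function name, sign conventions and example numbers are assumptions for illustration; the paper's actual filters and calibration are more involved.

```python
import math

def motion_from_displacement(dx_px, dy_px, gsd_m, dt_s):
    """Estimate speed and heading of a moving ship from its apparent
    displacement between two CCD lines of a time-shifted pushbroom sensor.

    dx_px, dy_px : displacement east/north in pixels (sign convention assumed)
    gsd_m        : ground sampling distance in metres per pixel
    dt_s         : time offset between the two channel acquisitions in seconds
    """
    dist_m = math.hypot(dx_px, dy_px) * gsd_m
    speed_ms = dist_m / dt_s
    # Heading measured clockwise from north (0 deg = north, 90 deg = east).
    heading_deg = math.degrees(math.atan2(dx_px, dy_px)) % 360.0
    return speed_ms, heading_deg
```

A stationary target shows zero inter-channel displacement, which is exactly why the displacement doubles as a motion detector before any speed or heading is computed.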
Connected Component Labeling (CCL) is a basic algorithm in image processing and an essential step in nearly every application dealing with object detection. It groups together pixels belonging to the same connected component (e.g. object). Special architectures such as ASICs, FPGAs and GPUs have been utilised to achieve high data throughput, primarily for video processing. In this article, the FPGA implementation of a CCL method is presented that was specially designed to process high-resolution images with complex structure at high speed, generating a label mask. In general, CCL is a dynamic task and therefore not well suited for parallelisation, which is needed to achieve high processing speed with an FPGA. Facing this issue, most FPGA CCL implementations are restricted to low- or medium-resolution images (≤ 2048 × 2048 pixels) with lower complexity, and the fastest implementations do not create a label mask at all. Instead, they extract object features such as size and position directly, which can be realised with high performance and perfectly suits the needs of many video applications. Since these restrictions are incompatible with the requirement to label high-resolution images with highly complex structures while generating a label mask, a new approach was required. The CCL method presented in this work is based on a two-pass CCL algorithm, which was modified with respect to low memory consumption and suitability for an FPGA implementation. Nevertheless, since not all parts of CCL can be parallelised, a stop-and-go high-performance pipeline processing CCL module was designed. The algorithm, the performance and the hardware requirements of a prototype implementation are presented. Furthermore, a clock-accurate runtime analysis is shown, which illustrates the dependency between processing speed and image complexity in detail.
Finally, the performance of the FPGA implementation is compared with that of a software implementation on modern embedded platforms.
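For reference, the classic two-pass CCL algorithm that the FPGA module is based on can be written compactly in software. This is the textbook form with union-find equivalence resolution and 4-connectivity, not the paper's memory-optimised FPGA variant.

```python
def label_two_pass(image):
    """Two-pass connected component labeling (4-connectivity) with
    union-find. `image` is a 2D list of 0/1 values; returns a label
    mask of the same shape, 0 for background."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]  # parent[i] == i marks a root; index 0 is unused
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)
    next_label = 1
    # First pass: assign provisional labels and record equivalences.
    for y in range(h):
        for x in range(w):
            if not image[y][x]:
                continue
            left = labels[y][x - 1] if x > 0 else 0
            up = labels[y - 1][x] if y > 0 else 0
            if left and up:
                labels[y][x] = min(left, up)
                union(left, up)
            elif left or up:
                labels[y][x] = left or up
            else:
                parent.append(next_label)
                labels[y][x] = next_label
                next_label += 1
    # Second pass: replace provisional labels by their equivalence roots.
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels
```

The sequential dependency is visible here: each pixel's label depends on its left and upper neighbours, and the equivalence table grows with image complexity, which is why a pure streaming pipeline must occasionally stall, the motivation for the stop-and-go design described above.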
The present work is to be seen in the context of real-time on-board image evaluation of optical satellite data. With on-board image evaluation, more useful data can be acquired, the time to obtain requested information can be decreased, and new real-time applications become possible. Because of its relatively high processing power combined with low power consumption, Field Programmable Gate Array (FPGA) technology has been chosen as a suitable hardware platform for image processing tasks. One fundamental part of image evaluation is image segmentation. It is a basic tool for extracting spatial image information, which is very important for many applications such as object detection. Therefore, a special segmentation algorithm exploiting the advantages of FPGA technology has been developed. The aim of this work is the evaluation of this algorithm. Segmentation evaluation is a difficult task. The most common way of evaluating the performance of a segmentation method is still subjective evaluation, in which human experts determine the quality of a segmentation. This approach does not meet our needs: the evaluation process has to provide a reasonable quality assessment and should be objective, easy to interpret and simple to execute. To meet these requirements, a so-called Segmentation Accuracy Equality norm (SA EQ) was created, which quantifies the difference between two segmentation results. It can be shown that this norm is suitable as a first quality measure. Owing to its objectivity and simplicity, the norm has been tested on a specially designed synthetic test model. In this work the most important results of the quality assessment are presented.
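The exact definition of the SA EQ norm is not given here, but the idea of objectively comparing two segmentation results can be illustrated with a simple stand-in: counting, over adjacent pixel pairs, how often the two segmentations disagree about whether the pixels belong to the same segment. This is only an assumed example measure, not the paper's norm.

```python
def segmentation_disagreement(mask_a, mask_b):
    """Illustrative segmentation-comparison measure (NOT the SA EQ norm):
    fraction of horizontally adjacent pixel pairs on which the two label
    masks disagree about same-segment membership. Invariant under
    relabeling, 0.0 for structurally identical segmentations."""
    h, w = len(mask_a), len(mask_a[0])
    pairs = disagree = 0
    for y in range(h):
        for x in range(w - 1):
            same_a = mask_a[y][x] == mask_a[y][x + 1]
            same_b = mask_b[y][x] == mask_b[y][x + 1]
            pairs += 1
            disagree += same_a != same_b
    return disagree / pairs
```

Such a measure satisfies the requirements named above: it is objective (no human judgment), easy to interpret (0 means structurally identical, 1 means maximally different boundaries), and simple to execute on a synthetic test model with known ground truth.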