Recognition analysis issues for tactical unmanned aerial vehicles based on optical photographs and SAR scans
Proceedings Volume 11442, Radioelectronic Systems Conference 2019; 1144214 (2020) https://doi.org/10.1117/12.2564987
Event: Radioelectronic Systems Conference 2019, 2019, Jachranka, Poland
Abstract
In this paper we discuss the Imagery Intelligence (IMINT) process based on images from a short-range Tactical Unmanned Aerial Vehicle (TUAV) carrying an EO/IR system and a Synthetic Aperture Radar (SAR) payload. A properly planned TUAV flight path is needed to obtain data suitable for the further processing and analysis required by the IMINT analyst. The path planning process must additionally take the requirements of each individual sensor into account. We also present a waypoint generation method that forms the basis for planning the TUAV's flight path, as it allows the dynamic limitations of the UAV to be taken into consideration.

1. INTRODUCTION

On the modern battlefield, the importance of reconnaissance systems able to provide data in near real time is increasing. Imaging itself can be carried out in many ways, but owing to the development of technologies related to Unmanned Aerial Vehicles (UAV), solutions in this category are gaining popularity. This provides many benefits from a user's perspective:

  • a UAV performs its tasks repeatably and does not make mistakes, especially those related to fatigue or stress, so it can be sent on long missions, even along the borders of danger zones,

  • as an unmanned platform, a UAV avoids human casualties in the event of a breakdown or intentional damage to the machine,

  • a UAV's microprocessor system enables precise control of many devices and parameters at the same time. This allows it to prepare devices, such as the opto-electronic head, for the next operation at a given time while the designed flight trajectory is being executed.

To design an efficient system that automatically supports reconnaissance for tactical unmanned aircraft, many parameters of the system must be taken into account: not only those related to image processing (i.e. distortion correction, frame overlapping and difference detection), but also those related to mission planning, including the flight path trajectory, transition curves, algorithms for bypassing danger zones and so on.

The first factors discussed are the features and requirements of the sensors used: the electro-optical/infrared camera (EO/IR) and the synthetic aperture radar (SAR). The EO camera does not require specific conditions to perform its basic tasks, thanks to a mechanical system (with a specialised gimbal) that allows the viewing direction to be changed, and to the wide range of distances at which a sharp optical image can be obtained. However, to support its advanced functions, the system must offer control over a variety of parameters (e.g. switching between modes), as well as an option to control the UAV flight through camera-generated trajectories. In the case of the SAR, the dominant problem is determining the parameters of the scanning flight segment. We describe the parameters taken into account and the algorithms that allow the orientation towards the object to be chosen so that a full scan cycle can be performed.

To accurately determine the duration of a mission, the UAV flight path should be predicted accurately, taking into consideration the dynamic parameters of the vehicle, its inaccuracy and external disruptions. For this reason, appropriate maneuvers should be considered and calculated for different conditions. For maneuvers more complicated than starting a turn after crossing a waypoint, a number of additional via-points must be specified. An appropriate implementation should also support area scanning, to obtain a uniform scan of the whole area. An algorithm for bypassing forbidden zones is also described.

The last part of the system's work is the analysis of the collected data. In the case of images (from both the EO/IR camera and the SAR), the analysis begins with the correction of aberrations introduced by the measuring systems (distortions), normalization and, most importantly for further image processing, superimposition. Once these operations are performed, the difference of the images, or their appropriate synthesis (in the form of blending, or superimposing images from various measuring devices), can be produced as the result.

2. ELECTRO-OPTICAL CAMERA

The optoelectronic head has 360° angular coverage in azimuth and from –120° to +30° in elevation. Two types of imaging sensors enable the recording of visible/near-infrared light and thermal images. In addition, a laser rangefinder and a target illuminator are installed. Image stabilization is provided by a dedicated active stabilization system with internal vibration isolation. A GPS receiver is used to determine the head's own position and to fill in the recorded image metadata.

For observation in visible light, a CMOS sensor with HD resolution is used, with the option of switching on a radiation filter for the near-infrared range. The camera optics allow a smooth change of focal length. The sharpness of the images can be adjusted automatically by the head.

When observing at night or in other conditions with limited visibility, a thermal sensor with SD resolution may be used. It has the ability to work in two polarity modes that allow inversion of colors.

The head has built-in image processing functions; for example, a mode of mixing images from the two sources, video and thermal, is implemented. In some situations this gives better visibility of potential objects of interest. The head also has an AVT (Advanced Video Tracker) function that allows tracking of static and dynamic ground or air objects. This algorithm works well even under changes in size, direction of movement, lighting, or temporary obscuration of the tracked object.

3. SYNTHETIC APERTURE RADAR (SAR)

The synthetic aperture radar performs a continuous scan during a single passage along one straight line segment. During the movement, the radar carries out a series of measurements, which are finally processed by appropriate time-multiplexing algorithms to generate a single image.

During operation, the relation between the accuracy of keeping the radar positioned on the straight line and the quality of the final result is important. As a result, turbulence causes the image to lose sharpness and, in extreme conditions, may even prevent the final image synthesis. SAR scans are also much more difficult to use in the process of automated difference detection between the obtained images.

It is worth noticing that, due to its unconventional mode of operation, the SAR has different requirements for the geometric flight parameters than the electro-optical head. Image resolution and pixel size depend on software settings, not on the UAV location (which must only stay within a certain range of parameters; for example, for a resolution of 0.15 m, the distance between the UAV and the object must remain between 5 km and 7.5 km along the whole section).

Usually SAR radars, especially in the case of unmanned aerial vehicles, are permanently attached to the aircraft; therefore, only single-side image recording can be assumed. The requirements for the flight geometry are shown in Figure 1 and Figure 2.

Figure 1. UAV trajectory parameters in the plane perpendicular to the direction of flight.

Figure 2. Parameters used for route planning for SAR scanning.
  • α – look-down angle (depends on the installation of the radar),

  • d – distance from the object (based on the radar parameters and the required scan resolution).

Resulting dimensions: h – height, and a – projection of the distance from the object onto the ground plane.

Symbols shown in Fig. 2: β – scanning angle, w – beam width (a function of β and a), L – passage length for scanning (L_min – minimum length, L_min = w).
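The geometric relations implied by Fig. 1 and Fig. 2 reduce to simple trigonometry. Below is a minimal sketch, assuming a flat-ground model with h = d·sin α, a = d·cos α and footprint w = 2a·tan(β/2); these formulas are our reading of the figures, and the function name is hypothetical.

```python
import math

def sar_segment_geometry(alpha_deg, d, beta_deg):
    """Flight geometry for a SAR scanning segment (sketch; flat-ground
    assumption, formulas inferred from Fig. 1 and Fig. 2)."""
    alpha = math.radians(alpha_deg)                       # look-down angle
    h = d * math.sin(alpha)                               # required flight altitude
    a = d * math.cos(alpha)                               # ground-plane standoff
    w = 2 * a * math.tan(math.radians(beta_deg) / 2)      # beam width on the ground
    return h, a, w                                        # minimum passage length L_min = w

# e.g. alpha = 15 deg, d = 6 km (within the 5-7.5 km band quoted for
# 0.15 m resolution), beta = 10 deg:
h, a, w = sar_segment_geometry(15, 6000, 10)
```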

4. TARGET DEFINITION

The objects of UAV reconnaissance flights are divided into:

  • a) point objects,

  • b) linear objects,

  • c) recognition areas,

  • d) hazardous areas.

Object and area data must be provided by the command system or be defined by the operator before route design (Fig. 3).

Figure 3. Graphic visualization of an example UAV flight trajectory.

The mission planning system's task is to determine the flight route taking the properties of the UAV into account. Fig. 3 shows an example fragment of a flight path that bypasses a hazardous area and runs through a recognition area. Modern route planning support systems automatically calculate flight trajectories based on the provided data and a specific reconnaissance task.

5. DETERMINING OBJECT VISIBILITY AND CHOOSING THE APPROACH DIRECTION

To determine whether the sensor can observe an object from a given approach direction, it must be checked that the straight line connecting the object with the UAV is not crossed by any obstacle along the entire route. For this purpose, a histogram of the ambient angular elevation is created in the reference system of the observed object. With it, it is possible to create a graph showing the minimum UAV (angular) elevation allowing observation of the object from each azimuth angle, Fig. 4.

Figure 4. Graph showing the (spherical) coordinate system used.

An example algorithm for calculating the histogram together with a graphic representation of its result (Fig. 5):

Algorithm 1. Simple algorithm for minimal ambient angular elevation.
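The published algorithm appears only as an image; the following is a minimal sketch of one plausible implementation over a regular terrain elevation grid. The function and its parameters are hypothetical, not the authors' code.

```python
import numpy as np

def min_elevation_histogram(terrain, cell_size, obj_xy, obj_z, n_bins=360):
    """For each azimuth bin, find the minimum angular elevation a UAV must
    exceed to keep line-of-sight to the object (hypothetical sketch).
    terrain: 2D elevation grid [m]; obj_xy: object (col, row); obj_z: object height."""
    hist = np.zeros(n_bins)              # required minimum elevation per azimuth bin [rad]
    rows, cols = terrain.shape
    for r in range(rows):
        for c in range(cols):
            dx = (c - obj_xy[0]) * cell_size
            dy = (r - obj_xy[1]) * cell_size
            dist = np.hypot(dx, dy)
            if dist == 0:
                continue                 # skip the object's own cell
            elev = np.arctan2(terrain[r, c] - obj_z, dist)   # cell's angular elevation
            b = int(np.degrees(np.arctan2(dy, dx)) % 360 * n_bins / 360)
            hist[b] = max(hist[b], elev)  # keep the highest obstruction per azimuth
    return hist  # the UAV's elevation angle must exceed hist[azimuth] for visibility
```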

Figure 5. An example of the result of the function: blue level – drone elevation, black – extreme approach ranges, green – available range of approach azimuth angles.

6. AVOIDING PROHIBITED AREAS

There may be zones along the UAV flight route that the UAV must not enter, so-called prohibited zones. The shortest possible alternative route that bypasses the indicated areas should be found instead. We assume that every obstacle can be approximated by a convex polygon. Artificial via-points are defined around the vertices of these polygons. These points are then used to create a graph defining the potential paths along which the object can move.

Searching this graph allows the shortest route to be found. In addition, it is recommended to define an exterior danger zone around each obstacle, i.e. a polygon surrounding the obstacle, to create a safety margin. The hazardous area extension can be set by the user to any non-negative value.

Algorithm 2. Obstacle avoidance algorithm.
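Algorithm 2 is likewise reproduced only as an image. A minimal visibility-graph sketch of the approach described above, assuming convex polygonal obstacles inflated by a user-defined margin (the use of shapely and all names here are our choices):

```python
import heapq
import itertools

from shapely.geometry import LineString, Polygon

def shortest_bypass(start, goal, obstacles, margin=10.0):
    """Shortest route from start to goal around convex polygonal obstacles
    (sketch). obstacles: list of vertex lists; margin: safety buffer [m]."""
    inflated = [Polygon(o).buffer(margin, join_style=2) for o in obstacles]
    # Via-points: start, goal and the vertices of the inflated polygons.
    nodes = [tuple(start), tuple(goal)]
    for poly in inflated:
        nodes.extend(poly.exterior.coords[:-1])

    def visible(a, b):
        seg = LineString([a, b])
        # An edge is usable if it neither cuts through nor lies inside any zone.
        return not any(seg.crosses(p) or seg.within(p) for p in inflated)

    # Build the graph of mutually visible via-points.
    graph = {i: [] for i in range(len(nodes))}
    for i, j in itertools.combinations(range(len(nodes)), 2):
        if visible(nodes[i], nodes[j]):
            d = LineString([nodes[i], nodes[j]]).length
            graph[i].append((j, d))
            graph[j].append((i, d))

    # Dijkstra from node 0 (start) to node 1 (goal).
    dist, prev, pq = {0: 0.0}, {}, [(0.0, 0)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == 1:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))

    path, u = [], 1
    while u in prev:                     # walk back from goal to start
        path.append(nodes[u])
        u = prev[u]
    return [nodes[0]] + path[::-1]
```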

The effect of the algorithm is shown in Figure 6, where the red line marks the obstacles and the dangerous area, the via-points are marked with blue dots, and the desired route section is marked as a blue line segment. The graph of possible paths is marked in green and the shortest route in purple.

Figure 6. Result of the obstacle avoidance algorithm: red – hazardous area boundaries, purple – shortest route (result).

7. AREA SCANNING

To scan an area, in addition to its edges, the entry (ingress) and exit (egress) points must be defined. A trajectory is then created, consisting of straight sections and transition arcs, that allows the entire range to be scanned.

The distance between successive passes depends on the width of the photographed strip, which is associated with the camera parameters and the required resolution (given in meters per pixel). For the same reason, a constant flight altitude is required throughout the entire scanned area.

The output of Algorithm 3, the route through the region, is shown in Fig. 7.

Algorithm 3. Area scanning algorithm.
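Algorithm 3 also appears only as a figure. A minimal boustrophedon sketch for an axis-aligned rectangular region (real recognition areas are general polygons; the strip-width formula from the sensor footprint is our assumption):

```python
import numpy as np

def sweep_waypoints(x_min, x_max, y_min, y_max, strip_width):
    """Boustrophedon ("lawnmower") waypoints covering an axis-aligned
    rectangle (sketch). strip_width follows from the camera footprint:
    roughly image_width_px * ground_resolution_m_per_px, minus overlap."""
    xs = np.arange(x_min + strip_width / 2, x_max, strip_width)
    waypoints = []
    for i, x in enumerate(xs):
        y0, y1 = (y_min, y_max) if i % 2 == 0 else (y_max, y_min)
        waypoints += [(x, y0), (x, y1)]  # alternate sweep direction each pass
    return waypoints

# e.g. a 1 km x 2 km region, 0.1 m/px camera, 4000 px swath, 20 % overlap:
# strip_width = 4000 * 0.1 * 0.8 = 320 m
route = sweep_waypoints(0, 1000, 0, 2000, 320)
```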

Figure 7. The result of the algorithm for an example region.

For YyMax, the program may produce a non-optimal route. Under the remaining conditions, the route contains no redundant movements, and for suitable entry and exit points it can be optimal.

8. DESIGN OF THE FLIGHT ROUTE

Obviously, no object, especially an airplane, is able to change its direction at a point with zero radius without reducing speed. Therefore, when designing the flight route, all arcs must be taken into account, and preferably also third-degree transition curves. A circular arc allows the UAV to change direction without changing speed (the acceleration associated with circular motion is constant, |a| = const, and does not affect the speed value). The transition curve, commonly used in civil engineering, handles the transition between rectilinear and circular motion while maintaining a constant jerk, |ȧ| = const. It is worth noting that for a small UAV the transition curves will be short, because it can carry out fast maneuvers (the distance needed to bank the wings is very small compared to the length of the arcs).

When designing the flight curves, the minimum turning radius is taken into account; it is a complex function of, among others, the mass, speed, air density and aerodynamic constants of the UAV.
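The paper does not give the formula; as a first-order illustration, the standard coordinated level-turn approximation estimates the minimum radius from speed and maximum bank angle (ignoring wind and thrust limits):

```python
import math

def min_turn_radius(speed_mps, max_bank_deg, g=9.81):
    """Standard coordinated level-turn approximation: r = v^2 / (g * tan(phi))."""
    return speed_mps ** 2 / (g * math.tan(math.radians(max_bank_deg)))

# e.g. 30 m/s at a 30 deg bank limit gives roughly 159 m
r_min = min_turn_radius(30.0, 30.0)
```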

Algorithms that find optimal or near-optimal solutions are shown in Fig. 8 - Fig. 10:

Figure 8. Point-to-point connection: a) Dubins curve, b) passage through a point, c) curve cornering.1

Figure 9. Point-to-section / section-to-point connection: a) bend on the minimum curve, b) bend with extension. The requirement to travel straight along the entire length of the section is included.

Figure 10. Section-to-section connection: a) movement along tangent arcs, b) “filleting” a bend, c) figure-eight (“8”-shaped) approach.

All maneuvers require certain geometrical conditions to be met. For example, in Fig. 10a the distance d between the parallel sections must be at least twice the minimum turning radius, d ≥ 2r_min; if this requirement is not met, one of the lines should be extended to increase the distance beyond d_min, or another algorithm should be used, as in the example shown in Fig. 10c.

9. IMAGE SUPERIMPOSITION AND PROCESSING

A database containing measurements of the same areas, collected at different times, can be used to detect changes in the terrain. In the traditional approach this is done manually by an image analyst. Purely digital solutions, in which specialized image processing algorithms detect the changes, are generally faster and reduce the amount of work required. Unfortunately, in very complex cases machine vision algorithms cannot account for a number of possible factors. The most sensible approach is a synthesis of the two methods, i.e. supporting human work with image processing algorithms.

Using the EO/IR head and the SAR, imaging is available in three different spectral ranges, starting from the shortest wavelengths:

  • visible spectrum (with the possibility of extension into the near infrared in low-light conditions),

  • mid-infrared range (thermal imaging),

  • X band of microwave radiation.

Images based on the visible spectrum are the easiest to recognize and analyze, both for a human and for a computer. This is the way in which we receive information about the outside world every day, and it is the most widely studied image type in computer vision. However, the multitude of acquisition conditions means that recorded images of the same object can have quite different characteristics. In the case of photos collected by a UAV, one often deals with images of the same area taken from different perspectives and in different atmospheric and lighting conditions. For images with sufficiently accurate georeference metadata, the required perspective transformations and map orientation can be performed, and the differences then detected.

However, the problem may be more complex if the GPS signal is jammed and other localization methods must be used. The information obtained this way may give only a rough position relative to a known location. To superimpose such images, the ground information contained in the image itself is used. The most popular approach, owing to its low computational complexity, is to detect and match characteristic areas/points in both images.2 Given a sufficient number of such areas, it is possible to compute the perspective transform and overlay the images to detect differences.

Figures 11 and 12 show an example pair of images taken from different perspectives and at different times. These images do not have their distortion corrected. They were superimposed based on the characteristic areas detected in both images: from the best matches, a perspective transformation was estimated and the images were superimposed.
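A minimal sketch of such feature-based superimposition using OpenCV (cf. reference 2); ORB features and the RANSAC threshold are our choices, as the paper does not state which detector was used:

```python
import cv2
import numpy as np

def superimpose(img_ref, img_new, max_features=2000, keep_ratio=0.2):
    """Warp img_new onto img_ref using matched ORB features (sketch)."""
    orb = cv2.ORB_create(max_features)
    kp1, des1 = orb.detectAndCompute(img_ref, None)
    kp2, des2 = orb.detectAndCompute(img_new, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)
    matches = matches[: int(len(matches) * keep_ratio)]    # keep the best matches
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # robust perspective fit
    h, w = img_ref.shape[:2]
    return cv2.warpPerspective(img_new, H, (w, h))

# After alignment, a simple difference image can highlight changes:
# diff = cv2.absdiff(img_ref, superimpose(img_ref, img_new))
```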

Figure 11. An example of detecting characteristic areas (features) in both images and their matching.

Figure 12. The result of feature-based image superimposition.

This is the case in which the ground information contained in the image is used to superimpose images without georeference metadata.

The process of detecting differences can also be performed in many ways. For perfectly superimposed images, the only remaining problem is the difference in lighting conditions.

The feature detection problem is more complex for SAR images. The output signal contains two types of information: the amplitude and the phase shift of the wave. The amplitude image can be interpreted by the analyst with relative ease; if both images are taken appropriately, superimposed frames can be compared with universal methods. The phase also carries useful information which, unfortunately, is impossible for an image analyst to interpret without proper processing. Advanced CCD (Coherent Change Detection) algorithms are used to detect changes in SAR images. Thanks to them it is possible to detect, for example, traces left by vehicles that crossed a selected area between subsequent scans.
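The core of CCD is a local coherence estimate between two co-registered complex SAR images. A minimal sketch using the standard sample-coherence estimator (our illustration; the paper does not detail the algorithm):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(f, g, win=5):
    """Sample coherence magnitude between two co-registered complex SAR
    images over a local window (standard CCD estimator; sketch)."""
    cross = f * np.conj(g)
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    den = np.sqrt(uniform_filter(np.abs(f) ** 2, win)
                  * uniform_filter(np.abs(g) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)  # near 0 where the scene changed
```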

10. SUMMARY

The effective use of autonomous aircraft for reconnaissance purposes is a complex, multi-level problem. At the first stage, it requires the proper configuration of onboard sensor parameters, especially of the cameras or scanners used to surveil the area, together with a clear declaration of the mission requirements, e.g. flight times and image resolution. This allows the mission objectives to be pre-set in terms of UAV route planning.

The next stages of mission planning must be carried out iteratively. Rejection of a trajectory, for example because it violates a prohibited zone, requires a modification at the level of the declared transition points, or the choice of another route, which forces all operations to be carried out from the beginning.

After the mission objectives are clearly defined, the points and sections (straight lines and arcs) that should be covered during the flight are specified. The next step is to generate a graph that connects the vertices while bypassing the forbidden areas. After a suitable search of the graph, the route is determined, on the basis of which a precise flight model is created, taking into account the transition curves, required speeds and other flight parameters. If the set requirements are violated, the input information/assumptions should be adjusted and the planning process restarted. This process can also be carried out during the mission to update mission objectives.

During the flight, the on-board computer constantly controls the UAV so that it reproduces the planned parameters and trajectories as precisely as possible. It should, however, also perform mapping/location monitoring from the beginning of the flight, in case a third party attempts to cut off GPS using a drone neutralizer. This would allow the mission to be completed, or the UAV to return to the end point, if the GPS and communication systems fail; it may also increase the accuracy of UAV localization in the air through information fusion algorithms operating on different sensors.

All collected images will contain georeference data; however, the accuracy of the position estimation based on the GPS signal/mapping may not be sufficient to superimpose images for difference detection. Therefore, image processing allowing distortion correction, noise removal, precise image overlay (also in the case of angular shifts) and the use of various detection algorithms should be carried out.3 This accelerates and improves the accuracy of the analysis of the examined objects.

REFERENCES

[1] Wyrozumski, W., [Air Navigation Handbook], Wydawnictwo Komunikacji i Lacznosci, Warsaw (1984).

[2] Kaehler, A. and Bradski, G., [Learning OpenCV 3: Computer Vision in C++ with the OpenCV Library], O'Reilly Media, Inc., USA (2017).

[3] Lee, D., Kim, H. and Myung, H., "Image feature-based real-time RGB-D 3D SLAM with GPU acceleration," Journal of Institute of Control, Robotics and Systems (2013).
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Krzysztof Gromada and Piotr Nalewajko "Recognition analysis issues for tactical unmanned aerial vehicles based on optical photographs and SAR scans", Proc. SPIE 11442, Radioelectronic Systems Conference 2019, 1144214 (11 February 2020); https://doi.org/10.1117/12.2564987