In recent years, the number of offshore wind farms has been increasing rapidly. Coastal European countries in particular are building numerous offshore wind turbines in the Baltic Sea, the North Sea, and the Irish Sea. During both construction and operation of these wind farms, many specially equipped helicopters are on duty. Due to their flexibility, their hover capability, and their higher speed compared to ships, these aircraft perform important tasks like helicopter emergency medical services (HEMS) as well as passenger and freight transfer flights. The missions often include specific challenges like platform landings or hoist operations to drop off workers onto wind turbines. However, adverse weather conditions frequently limit helicopter offshore operations. In such scenarios, the application of aircraft-mounted sensors and obstacle databases together with helmet-mounted displays (HMD) seems to offer great potential to improve the operational capabilities of the helicopters used. By displaying environmental information in a visual-conformal manner, these systems mitigate the loss of visual reference to the surroundings. This helps the pilots maintain proper situational awareness. This paper analyzes the specific challenges of helicopter offshore operations in wind parks by means of an online survey and a structured interview with pilots and operators. Further, the work presents how our previously introduced concept of an HMD-based virtual flight deck could enhance helicopter offshore missions. The advantages of this system – for instance its "see-through the airframe" capability and its highly flexible cockpit setup – enable us to design entirely novel pilot assistance systems. The knowledge gained will be used to develop a virtual cockpit that is tailor-made for helicopter offshore maneuvers.
In the past couple of years, research on display content for helicopter operations has headed in a new direction. The goals already reached could evolve into a paradigm change for information visualization. Technology advancements allow implementing three-dimensional, conformal content on a helmet-mounted see-through device. This superimposed imagery inherits the same optical flow as the environment, which is supposed to ease switching between display information and environmental cues. The concept is neither pathbreaking nor new, but it has not been successfully established in aviation yet. Nevertheless, there are certainly some advantages to expect—at least from the perspective of a human-centered system design. In the following pages, these next-generation displays will be presented and discussed with a focus on human factors. After recalling some research facts related to human factors, an experiment comparing the former two-dimensional research displays will be presented. Before introducing the DLR conformal symbol set and the three experiments on an innovative drift indication, related research activities toward conformal symbol sets will be addressed.
A head-worn display combined with accurate head tracking allows showing synthetically generated symbols in such a way that they appear as part of the real world. Depending on the specific research context, different terms have been used for the ability to show display elements as parts of the outside world, including contact analog, scene-linked, augmented reality, and outside conformal. While the famous highway in the sky was one of the first applications in avionics, over the years more and more conformal counterparts have been devised for aircraft-related instruments. Among them are routing information, navigation aids, specialized landing displays, obstacle warnings, drift indicators, and many more. Conformal displays have been developed for more than 40 years. We present a review of some results as well as a look ahead to research trends for the coming years. We suggest that naturalism is not the best choice for the design of conformal displays. Instead, more abstract representations often yield better pilot acceptance.
A combination of see-through head-worn or helmet-mounted displays (HMDs) and imaging sensors is frequently used to overcome the limitations of the human visual system in degraded visual environments (DVE). A visual-conformal symbology displayed on the HMD allows the pilots to see objects, such as the landing site or obstacles, that would otherwise be invisible. These HMDs are worn by pilots sitting in a conventional cockpit, which provides a direct view of the external scene through the cockpit windows and a user interface with head-down displays and buttons. In a previous publication, we presented the advantages of replacing the conventional head-down display hardware with virtual instruments. These virtual aircraft-fixed cockpit instruments were displayed on the Elbit JEDEYE system, a binocular see-through HMD. The idea of our current work is to not only virtualize the display hardware of the flight deck, but also to replace the direct view of the out-the-window scene by a virtual view of the surroundings. This imagery is derived from various sensors and rendered on an HMD without see-through capability. This approach promises many advantages over conventional cockpit designs. Besides potential weight savings, this future flight deck can provide a less restricted outside view, as the pilots are able to virtually see through the airframe. The paper presents a concept for the realization of such a virtual flight deck and states the expected benefits as well as the challenges to be met.
Degraded visual environment is still a major problem for helicopter pilots, especially during approach and landing. Particularly in the landing phase, the pilot's eyes must be directed outward in order to find visual cues as indicators for drift estimation. If lateral speed exceeds the limits, it can damage the airframe or, in extreme cases, lead to a rollover. Since poor visibility can contribute to a loss of situation awareness and spatial disorientation, it is crucial to intuitively provide the pilot with the essential visual information needed for a safe landing. With continuous technology advancement, helmet-mounted displays (HMD) will soon become a widespread technology, because their see-through capability allows monitoring the outside view while presenting flight-phase-dependent symbologies on the helmet display. Besides primary flight information, additional information for obstacle accentuation or terrain visualization can be displayed on the visor. Virtual conformal elements like a 3D pathway depiction or a 3D landing zone representation can help the pilot maintain control until touchdown even under poor visual conditions. This paper describes first investigations of both en-route and landing symbology presented on a helmet-mounted display system within the scope of helicopter flight trials with DLR's flying helicopter simulator ACT/FHS.
Helicopter operations require a well-controlled, minimal lateral drift shortly before ground contact. Any lateral speed exceeding this small threshold can cause a dangerous moment around the roll axis, which may lead to a complete rollover of the helicopter. As long as pilots can observe visual cues from the ground, they are able to easily control the helicopter's drift. But whenever natural vision is reduced or even obscured, e.g., due to night, fog, or dust, this controllability diminishes. Therefore, helicopter operators could benefit from some type of "drift indication" that mitigates the influence of a degraded visual environment. Generally, humans derive ego motion from the perceived optical flow of the environment. The visual cues perceived are located close to the helicopter; therefore, even small movements can be recognized. This fact was used to investigate a modified drift indication. To enhance the perception of ego motion in a conformal HMD symbol set, the measured movement was used to generate a pattern motion in the forward field of view close to or on the landing pad. The paper discusses this method of amplified ego-motion drift indication. Aspects concerning impact factors like visualization type, location, gain, and more are addressed. Further, conclusions from previous studies, a high-fidelity experiment and a part-task experiment, are provided. A part-task study is presented that compared different amplified drift indications against a predictor. 24 participants, among them 15 holding a fixed-wing license and 4 helicopter pilots, had to perform a dual task on a virtual reality headset. A simplified control model was used to steer a "helicopter" down to a landing pad while acknowledging randomly placed characters.
Helicopter operations require a well-controlled, minimal lateral drift shortly before ground contact. Any lateral speed exceeding this small threshold can cause a dangerous moment around the roll axis, which may lead to a complete rollover of the helicopter. As long as pilots can observe visual cues from the ground, they are able to easily control the helicopter's drift. However, when visibility is reduced or even obscured, e.g., due to night, fog, or dust, this controllability diminishes. Therefore, helicopter operators could benefit from some type of "drift indication" that mitigates the influence of a degraded visual environment.
With continuous technology advancement, helmet-mounted displays (HMD) will soon become a widespread technology. At the present state, HMDs are still expensive and mostly reserved for military operations. The symbol sets fielded are designed for well-trained staff and special missions. Investigating some of those symbol sets revealed that lateral drift indication does not live up to its promise. With practice, these symbol sets assist well during the approach but lack proper cues once the helicopter hovers. Present developments also focus on three-dimensional symbol sets that are conformal with the environment. All of them present a virtual landing pad. These types of see-through synthetic vision displays allow several new methods of information visualization.
Generally, humans derive ego motion from the perceived environmental optical flow. To enhance this perception, a pattern motion was implemented in a conformal HMD symbol set which amplifies the measured own-ship movement. The paper presents results from an experimental study with 18 pilots from civil and military operators. In this study, the forward landing zone border was replaced by an animated dashed line indicating the amplified ego motion.
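The amplification principle behind the animated dashed line lends itself to a compact sketch. The following is a minimal illustration under assumed names and an assumed gain value, not the symbol set's actual implementation: each display frame, the phase of the dashed pattern is advanced proportionally to the measured lateral drift, multiplied by a fixed gain, so that small drift rates already produce clearly visible pattern motion.

```python
# Minimal sketch of amplified ego-motion drift indication.
# DASH_PERIOD_M and GAIN are assumed values for illustration only.

DASH_PERIOD_M = 1.0   # length of one dash+gap cycle along the landing-zone border
GAIN = 8.0            # amplification factor (assumed value)

def update_dash_phase(phase_m, lateral_speed_mps, dt_s, gain=GAIN):
    """Advance the dash pattern by the amplified drift distance of one frame."""
    phase_m += gain * lateral_speed_mps * dt_s
    return phase_m % DASH_PERIOD_M  # wrap around one dash cycle

# Example: 0.2 m/s of lateral drift over one 25 Hz display frame
phase = update_dash_phase(0.0, 0.2, 1.0 / 25.0)
```

A real implementation would additionally clamp or shape the gain, since an excessive amplification could suggest far larger drift than is actually present.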
Synthetic vision systems (SVS) are an emerging technology in the avionics domain. Several studies have demonstrated enhanced situational awareness when using synthetic vision. Since the introduction of synthetic vision, a steady change and evolution has taken place concerning the primary flight display (PFD) and the navigation display (ND). The main improvements of the ND comprise the representation of colored enhanced ground proximity warning system (EGPWS), weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. Especially concerning the current trend of having a 3D perspective view in an SVS-PFD while leaving the navigational content as well as the methods of interaction unchanged, the question arises if and how the gap between both displays might evolve into a serious problem. This issue becomes important in relation to the transition and combination of strategic and tactical flight guidance. Hence, the pros and cons of 2D and 3D views in general, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side views of the ND, are discussed. Further, a concept for the integration of a 3D perspective view, i.e., a bird's eye view, into a synthetic vision ND is presented. The combination of 2D and 3D views in the ND enables a better correlation of the ND and the PFD. Additionally, this supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness and might prove to further raise the safety margin when operating in mountainous areas.
Helicopter guidance in situations where natural vision is reduced is still a challenging task. Besides newly available sensors, which are able to "see" through darkness, fog, and dust, display technology remains one of the key issues of pilot assistance systems. As long as we have pilots within aircraft cockpits, we have to keep them informed about the outside situation. The "situational awareness" of humans is mainly powered by their visual channel. Therefore, display systems which are able to cross-fade seamlessly from natural vision to artificial computer vision and vice versa are of greatest interest within this context. Helmet-mounted displays (HMD) have this property when they employ a head tracker for measuring the pilot's head orientation relative to the aircraft reference frame. Together with the aircraft's position and orientation relative to the world's reference frame, the on-board graphics computer can generate images which are perfectly aligned with the outside world. We call image elements which match the outside world "visual-conformal". Published display formats for helicopter guidance in degraded visual environment mostly apply 2D symbologies, which fall far short of what is possible. We propose a perspective 3D symbology for a head-tracked HMD which shows as many visual-conformal elements as possible. We implemented and tested our proposal in our fixed-base cockpit simulator as well as in our flying helicopter simulator (FHS). Recently conducted simulation trials with experienced helicopter pilots provide first evaluation results for our proposal.
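The transform chain described above can be sketched in simplified form. The example below is a heading-only 2D illustration with hypothetical function names, not the on-board renderer; a real system composes full 3D attitudes (typically as quaternions) for both the aircraft and the tracked head pose. A world-fixed point is expressed first in the aircraft frame using the aircraft's position and heading, then in the head frame using the head-tracker measurement:

```python
import math

# 2D sketch of the world -> aircraft -> head transform chain used for
# visual-conformal rendering. Frame convention: x north, y east.

def rot2(angle_rad):
    """World-to-frame rotation matrix for a given frame heading."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return ((c, s), (-s, c))

def world_to_head(p_world, ac_pos, ac_heading, head_yaw):
    # World frame -> aircraft frame (aircraft position and heading)
    dx, dy = p_world[0] - ac_pos[0], p_world[1] - ac_pos[1]
    r = rot2(ac_heading)
    ax = r[0][0] * dx + r[0][1] * dy
    ay = r[1][0] * dx + r[1][1] * dy
    # Aircraft frame -> head frame (head-tracker yaw measurement)
    r = rot2(head_yaw)
    hx = r[0][0] * ax + r[0][1] * ay
    hy = r[1][0] * ax + r[1][1] * ay
    return hx, hy

# A point 100 m north of the aircraft, heading north, pilot looking
# straight ahead: the point appears dead ahead at 100 m.
print(world_to_head((100.0, 0.0), (0.0, 0.0), 0.0, 0.0))
```

Only after this chain is applied can the graphics computer project the point into display coordinates so that the drawn symbol overlays its real-world counterpart.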
Landing under adverse weather conditions can be challenging, even if the airfields are well known to the pilots. This is true for civil as well as military aviation. Within the scope of this paper, we concentrate especially on fog conditions. The work has been conducted within the project ALICIA, a research and development project co-funded by the European Commission under the Seventh Framework Programme. ALICIA aims at developing new and scalable cockpit applications which can extend the operation of aircraft in degraded conditions: All Conditions Operations. One of the systems developed is a head-up display that can present a generated symbology together with a raster-mode infrared image. We detail how we implemented a real-time-enabled simulation of a combined short-wave and long-wave infrared image for landing. A major challenge was to integrate several already existing simulation solutions, e.g., for visual simulation and sensors, with the required databases. For the simulations, DLR's in-house sensor simulation framework F3S was used, together with a commercially available airport model that had to be heavily modified in order to provide realistic infrared data. Special effort was invested in a realistic impression of runway lighting under foggy conditions. We present results and sketch further improvements for future simulations.
Supporting a helicopter pilot during landing and takeoff in degraded visual environment (DVE) is one of the challenges within DLR's project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites). Different types of sensors (TV, infrared, mmW radar, and laser radar) are mounted onto DLR's research helicopter FHS (flying helicopter simulator) to gather different sensor data of the surrounding world. A high-performance computer cluster architecture acquires and fuses all the information into one single comprehensive description of the outside situation. While both TV and IR cameras deliver images at frame rates of 25 Hz or 30 Hz, Ladar and mmW radar provide georeferenced sensor data at only 2 Hz or even less. Therefore, it takes several seconds to detect or even track potential moving obstacle candidates in mmW or Ladar sequences. Especially if the helicopter is flying at higher speed, it is very important to minimize the detection time of obstacles in order to initiate a re-planning of the helicopter's mission in a timely manner. Applying feature extraction algorithms to IR images, in combination with algorithms for fusing the extracted features with Ladar data, can decrease the detection time appreciably. Based on real data from flight tests, the paper describes the applied feature extraction methods for moving object detection, as well as data fusion techniques for combining features from TV/IR and Ladar data.
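The core benefit of this fusion can be sketched with a toy example. The function below is an illustrative gating step under assumed names and thresholds, not the project's actual algorithm: an obstacle candidate from a slow 2 Hz Ladar sweep is projected into image coordinates and confirmed against features extracted from the 25 Hz IR stream, instead of waiting several seconds for further radar sweeps.

```python
# Illustrative sketch: confirming a slow-rate Ladar obstacle candidate
# with fast-rate image features via a simple nearest-neighbour gate.

GATE_PX = 5.0  # association gate in pixels (assumed value)

def confirm_candidate(candidate_px, image_features_px, gate=GATE_PX):
    """Return True if any extracted image feature lies within the gate
    around the candidate's projected image position."""
    cx, cy = candidate_px
    return any((fx - cx) ** 2 + (fy - cy) ** 2 <= gate ** 2
               for fx, fy in image_features_px)

features = [(320.0, 241.0), (100.0, 50.0)]        # e.g. IR corner features
confirmed = confirm_candidate((318.0, 240.0), features)  # within the 5 px gate
```

In practice the projection from georeferenced Ladar coordinates into the IR image, feature descriptors, and temporal consistency checks would all be considerably more involved.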
In recent years, many different proposals have been published about the best design of display content for helicopter pilot assistance during DVE landing. The guidance cues are typically shown as an overlay, possibly on top of additional sensor or database imagery. This overlay represents the main information source for helicopter pilots during landing. Display technology in this field applies two different principles: multicolor head-down displays (panel-mounted) and monochrome head-up displays (helmet-mounted). For both types, the state-of-the-art imagery does not make use of conformal symbol sets. It rather exposes the pilots to mixed views (2D forward and bird's eye view). Even though trained pilots can easily interpret the presented data, this does not seem to be the best design for head-up displays.
A study was conducted to compare different proposed symbol sets (e.g., BOSS, DEVILA, and JEDEYE). During approach and landing trials in our helicopter simulator, these formats were presented to the pilots on head-down and helmet-mounted displays. The evaluation of this study is based on measured flight guidance performance (objective measures) and on questionnaires (subjective measures). The results can pave the way for the planned development of a new conformal wide-field-of-view perspective display for DVE landing assistance.
Project ALLFlight is DLR's initiative to diminish the problem of piloting helicopters in degraded visual conditions. The problem arises whenever dust or snow is stirred up during landing (brownout/whiteout), effectively blocking the crew's vision of the landing site. A possible solution comprises the use of sensors that are able to look through the dust cloud. As part of the project, display symbologies are being developed to enable the pilot to make use of the rather abstract and noisy sensor data. In a first stage, sensor data from very different sensors is fused. This step contains a classification of points into ground points and obstacle points. In a second step, the result is augmented with ground databases and depicted in a synthetic head-down display. Regarding the design, several variations in symbology are considered, including variations in color coding, continuous or non-continuous terrain displays, and different obstacle representations. In this paper we present the basic techniques used for obstacle and ground separation. We choose a set of possibilities for the pilot display and detail the implementation. Furthermore, we present a pilot study, including a human factors assessment with focus on usability and pilot acceptance.
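One common way to realize such a ground/obstacle separation is a height-above-ground test on a horizontal grid. The sketch below is a simplified stand-in under assumed cell size and threshold values, not necessarily the project's actual algorithm: points are binned into grid cells, the lowest point per cell serves as the local ground estimate, and points rising more than a threshold above it are labelled as obstacle points.

```python
# Simplified ground/obstacle point classification on a horizontal grid.
# CELL_M and THRESH_M are assumed values for illustration.

CELL_M = 2.0       # grid cell size in metres (assumed)
THRESH_M = 0.5     # height-above-ground threshold in metres (assumed)

def classify_points(points):
    """points: list of (x, y, z); returns a 'ground'/'obstacle' label per point."""
    ground_z = {}
    # Pass 1: lowest z per cell approximates the local ground height.
    for x, y, z in points:
        cell = (int(x // CELL_M), int(y // CELL_M))
        ground_z[cell] = min(z, ground_z.get(cell, z))
    # Pass 2: label points by their height above the cell's ground estimate.
    labels = []
    for x, y, z in points:
        cell = (int(x // CELL_M), int(y // CELL_M))
        labels.append('obstacle' if z - ground_z[cell] > THRESH_M else 'ground')
    return labels

pts = [(0.5, 0.5, 0.0), (0.6, 0.4, 2.1), (10.0, 10.0, 0.1)]
print(classify_points(pts))  # the 2.1 m point stands out as an obstacle
```

Real fused sensor data would additionally require outlier rejection and slope handling, since a single noisy low point would otherwise corrupt the ground estimate of its cell.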
The DLR project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites) is devoted to demonstrating and evaluating the characteristics of sensors for helicopter operations in degraded visual environments. Millimeter-wave radar is one of the many sensors considered for use in brownout. It delivers a lower angular resolution compared to other sensors; however, it may provide the best dust penetration capabilities. In cooperation with the NRC, flight tests on a Bell 205 were conducted to gather sensor data from a 35 GHz pencil-beam radar for terrain mapping, obstacle detection, and dust penetration. In this paper, preliminary results from the flight trials at NRC are presented and the radar's general capability is described. Furthermore, insight is provided into the concept of multi-sensor fusion as attempted in the ALLFlight project.
When used in conjunction with helmet-mounted displays, stereo camera views can provide invaluable advantages, for example in aviation. One of the most common setups is to mount cameras to both sides of the pilot's helmet. However, since these cameras possess a larger baseline than the eyes, distances to perceived objects are misinterpreted by the pilot. This may cause irritation, even sickness, when combined with enhanced displays. Even in the best case, the magnified disparity may lead to exaggerated distance estimations. In this paper, simple computations are presented that can correct hyperstereopsis "on the fly". With the availability of fast computer hardware, carrying out these computations in real time comes into reach. Furthermore, we sketch a series of experiments to evaluate the effectiveness of our approach.
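The geometry behind hyperstereopsis, and one simple correction, can be illustrated with the pinhole stereo model. The numbers below (focal length, camera baseline) are assumed for illustration and are not from the paper: for a fronto-parallel point, depth follows Z = f·B/d, so disparities captured with a camera baseline larger than the interpupillary distance make objects appear closer than they are; scaling each disparity by the baseline ratio restores the geometrically correct depth cue.

```python
# Pinhole-model sketch of hyperstereopsis and a per-disparity correction.
# F_PX and B_CAM are assumed values; B_EYE is a typical interpupillary distance.

F_PX = 800.0     # focal length in pixels (assumed)
B_EYE = 0.065    # interpupillary distance in metres
B_CAM = 0.30     # helmet camera baseline in metres (assumed)

def perceived_depth(disparity_px, baseline=B_EYE, f=F_PX):
    """Depth the visual system infers from a disparity, Z = f * B / d."""
    return f * baseline / disparity_px

def correct_disparity(disparity_px, b_cam=B_CAM, b_eye=B_EYE):
    """Rescale a camera disparity to the eye baseline."""
    return disparity_px * b_eye / b_cam

d_cam = F_PX * B_CAM / 10.0                       # disparity of a point 10 m away
print(perceived_depth(d_cam))                     # uncorrected: seen much closer
print(perceived_depth(correct_disparity(d_cam)))  # corrected: 10.0 m
```

Under this simple model the rescaling is exact; for real camera rigs, lens distortion and the offset between camera and eye positions add further terms that a practical correction must handle.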
To improve the situation awareness of an aircrew during poor visibility, different approaches have emerged over the past couple of years. Enhanced vision systems (EVS, based upon sensor images) are one of those. They improve the situation awareness of the crew, but at the same time introduce certain operational deficits. EVS present sensor data which might be difficult to interpret, especially if the sensor used is a radar sensor. In particular, an unresolved problem of fast-scanning forward-looking radar systems in the millimeter waveband is the inability to measure the elevation of a target. In order to circumvent this problem, an effort was made to reconstruct the missing elevation from a series of images. This could be described as a "stereo radar" approach and is similar to the reconstruction using photography (angle-angle images) from different viewpoints to rebuild the depth information. Two radar images (range-angle images) with different bank angles can be used to reconstruct the elevation of targets.
This paper presents the fundamental idea and the methods of the reconstruction. Furthermore, experiences with real data from EADS's "HiVision" MMW radar are discussed. Two different approaches are investigated: First, a fusion of images with variable bank angles is calculated for different elevation layers, and image processing reveals identical objects in these layers. Those objects are compared regarding contrast and dimension to extract their elevation. The second approach compares short fusion pairs from two different flights with different, nearly constant bank angles. Accumulating those pairs with different offsets delivers the exact elevation.
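The underlying geometry of the two-bank-angle idea can be sketched with a small-angle model. This model is an assumption introduced here for illustration, not the paper's formulation: when the aircraft banks by an angle phi, a radar that cannot measure elevation observes a cross-track angle that mixes the target's true azimuth and elevation roughly as a = az·cos(phi) + el·sin(phi). Two scans at different bank angles then give a 2x2 linear system for the pair (az, el).

```python
import math

# Small-angle "stereo radar" sketch: recover azimuth and elevation from
# two cross-track angle measurements taken at different bank angles.

def reconstruct_az_el(a1, phi1, a2, phi2):
    """Solve a_i = az*cos(phi_i) + el*sin(phi_i) for (az, el) by Cramer's rule."""
    c1, s1 = math.cos(phi1), math.sin(phi1)
    c2, s2 = math.cos(phi2), math.sin(phi2)
    det = c1 * s2 - c2 * s1          # vanishes if both bank angles are equal
    az = (a1 * s2 - a2 * s1) / det
    el = (c1 * a2 - c2 * a1) / det
    return az, el

# Forward-simulate a target at 2 deg azimuth, 1 deg elevation as seen at
# bank angles of +10 and -10 degrees, then recover both angles.
az_t, el_t = math.radians(2), math.radians(1)
phi1, phi2 = math.radians(10), math.radians(-10)
a1 = az_t * math.cos(phi1) + el_t * math.sin(phi1)
a2 = az_t * math.cos(phi2) + el_t * math.sin(phi2)
print(reconstruct_az_el(a1, phi1, a2, phi2))  # recovers (az_t, el_t)
```

With real radar data the two measurements come from different images, so the object association, range gating, and noise in the measured angles dominate the achievable elevation accuracy.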
Pathway-in-the-sky displays enable pilots to accurately fly difficult trajectories. However, these displays may draw pilots' attention to the aircraft guidance task at the expense of other tasks, particularly when the pathway display is located head-down. A pathway HUD may be a viable solution to overcome this disadvantage. Moreover, the pathway may mitigate the perceptual segregation between the static near domain and the dynamic far domain and hence may improve attention switching between both sources. In order to overcome the perceptual near-to-far-domain disconnect more comprehensively, alphanumeric symbols could be attached to the pathway, leading to a HUD design concept called 'scene linking'. Two studies are presented that investigated this concept. The first study used a simplified laboratory flight experiment. Pilots (N=14) flew a curved trajectory through mountainous terrain and had to detect display events (discrete changes in a command speed indicator to be matched with current speed) and outside-scene events (a hostile SAM station on the ground). The speed indicators were presented superimposed on the scenery, either in a fixed position or scene-linked to the pathway. Outside-scene event detection was found to improve with scene linking; however, flight-path tracking deteriorated markedly. In the second study, a scene-linked pathway concept was implemented on a monocular retinal-scanning HMD and tested in real flights on a Do228 involving 5 test pilots. The flight test mainly focused on usability issues of the display in combination with an optical head tracker. Visual and instrument departure and approach tasks were evaluated, comparing HMD navigation with standard instrument or terrestrial navigation. The study revealed limitations of the HMD regarding its see-through capability, field of view, weight, and wearing comfort, which showed a strong influence on pilot acceptance rather than rebutting the display concept as such.
Head-up displays (HUD) and helmet (or head)-mounted displays (HMD) aim at reducing the pilot's visual scanning cost in support of concurrent monitoring of both instrument information (near domain) and the outside environment (far domain). An HMD used in combination with a head tracker enables the assessment of the pilot's head direction in real time, allowing symbologies to remain spatially linked to elements of the outside environment. The paper examines the potential added benefits for flight path tracking to be expected from displaying symbologies of a virtual 3D perspective pathway plus predictor information on an HMD. Results of a high-fidelity flight-simulation experiment are reported that involved a series of curved approaches supported by such a pathway HMD. The study used a monocular retinal-scanning HMD and involved 18 pilots. Dependent human performance data were derived from flight path tracking measures, subjective measures of mental workload and situation awareness, and pilot reactions in response to an unexpected rare event in the outside scene (an intruding aircraft on the active runway for the intended landing). Comparison with a standard head-down ILS baseline condition revealed a mix of performance costs and benefits, which is consistent with most of the human factors literature on the general use of HUDs and of HUDs used in combination with pathway guidance: The pathway HMD promoted substantially better flight path tracking but also caused a delayed response to the unexpected event. This effect points to a disadvantage of HUDs referred to as 'attention capture', which may be exaggerated by the additional use of pathway guidance symbology.
During approach and landing, the pilot performs a high-workload task of switching attention between instrument information and the outside scene. Superimposing both visual domains in head-up (HUD) or head-mounted displays (HMD) reduces the visual scanning load of this task. These displays are collimated at optical infinity and therefore spare the pilot's eyes permanent re-accommodation between both visual domains. Besides these performance benefits, visual clutter and attention fixation, i.e., inattentiveness to outside-scene events while attending to HUD symbologies, are found to be performance cost factors. Conformal symbology and flight-phase-adapted decluttering have been found to be promising approaches to overcome these problems.
In pursuit of these two approaches, the current paper describes the design of a new pathway display on a monocular head-mounted retinal scanning display and its implementation in DLR's generic cockpit simulator. The pathway can be regarded as a means of linking an instrument symbology (the tunnel) with a virtual element of the outside scene (the intended flight path). Scene-linked symbology appears to be part of the outside world, e.g., an instrument reading like airspeed, heading, or altitude that changes its display location conformally with the gate element of the tunnel symbology moving towards the pilot. An example of flight-phase-adaptive decluttering is to successively reduce or remove symbology when the conformal outside element becomes visible (e.g., the runway). In addition, the display includes a conformal presentation of the terrain: a checkerboard pattern representing the terrain is dynamically generated from worldwide available SRTM-3 data.
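Generating such a checkerboard terrain from an elevation grid can be sketched compactly. The function below is an illustrative reduction with assumed names and a flat 90 m posting, not the simulator's actual terrain pipeline (SRTM-3 data has a 3-arc-second posting, roughly 90 m at the equator): each grid cell becomes a quad whose shade alternates with the parity of its cell indices.

```python
# Sketch: turn a 2D elevation grid into checkerboard-shaded terrain quads.
# CELL_M approximates the SRTM-3 posting; real code would project
# geographic coordinates instead of using a flat metric grid.

CELL_M = 90.0

def checkerboard_quads(heights):
    """heights: 2D list of elevations; returns (corners, shade) per cell."""
    quads = []
    for i in range(len(heights) - 1):
        for j in range(len(heights[0]) - 1):
            corners = [
                (i * CELL_M,       j * CELL_M,       heights[i][j]),
                ((i + 1) * CELL_M, j * CELL_M,       heights[i + 1][j]),
                ((i + 1) * CELL_M, (j + 1) * CELL_M, heights[i + 1][j + 1]),
                (i * CELL_M,       (j + 1) * CELL_M, heights[i][j + 1]),
            ]
            shade = 'light' if (i + j) % 2 == 0 else 'dark'
            quads.append((corners, shade))
    return quads

grid = [[100.0, 101.0, 103.0],
        [102.0, 104.0, 106.0]]
quads = checkerboard_quads(grid)   # 2 quads from a 2x3 elevation grid
```

Because the quads follow the terrain heights at their corners, the rendered checkerboard stays conformal with the real ground surface as seen through the display.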