In this study we propose to use thermal camera images to (1) improve cloud detection and (2) study visibility conditions during nighttime. For this purpose, we leverage FLIR A320 and FLIR A655sc stationary thermal imagers installed in the city of Bern, Switzerland. We find that the proposed data provide detailed information about low clouds and the cloud base height that is usually not captured by satellites. However, clouds with a small optical depth, such as thin cirrus, are difficult to detect because the noise level of the captured thermal images is high.
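As an illustration of how such thermal imagery can be exploited (not the processing chain used in this study), the sketch below thresholds brightness temperatures against a clear-sky reference and converts the cloud temperature to an approximate cloud base height via a standard lapse rate; the threshold, lapse-rate value, and function interface are illustrative assumptions.

```python
import numpy as np

# Illustrative assumptions (not values from the study):
LAPSE_RATE_K_PER_M = 0.0065   # standard atmospheric lapse rate, ~6.5 K/km
CLOUD_THRESHOLD_K = 5.0       # pixel counts as cloudy if it is this much
                              # warmer than the clear-sky reference

def detect_low_clouds(brightness_temp_k, clear_sky_ref_k, surface_temp_k):
    """Rough cloud mask and cloud base height from one thermal image.

    brightness_temp_k : 2-D array of brightness temperatures [K]
    clear_sky_ref_k   : 2-D clear-sky reference for the same scene [K]
    surface_temp_k    : scalar near-surface air temperature [K]
    """
    # Low clouds emit close to ambient temperature and therefore appear
    # much warmer than the cold clear-sky background in the thermal band.
    cloud_mask = brightness_temp_k > clear_sky_ref_k + CLOUD_THRESHOLD_K

    # Approximate cloud base height, assuming the cloud base radiates at
    # air temperature and the atmosphere follows a constant lapse rate.
    cloud_temps = brightness_temp_k[cloud_mask]
    if cloud_temps.size == 0:
        return cloud_mask, None
    cloud_base_m = (surface_temp_k - np.median(cloud_temps)) / LAPSE_RATE_K_PER_M
    return cloud_mask, cloud_base_m
```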
The second part of this study focuses on the detection of structural features. Predefined targets such as roof windows, an antenna, or a small church tower are selected at distances of 140 m to 1210 m from the camera. We distinguish between active targets (heated targets or targets with insufficient thermal insulation) and passive structural features to analyze the sensor's visibility range. We find that successful detection of some passive structural features depends strongly on incident solar radiation; their detection is therefore often hindered during the night. Active targets, on the other hand, can be detected without difficulty at night owing to the large temperature difference between the heated target and the surrounding unheated objects. We retrieve response values by cross-correlating master edge signatures of the targets with the edge-detected thermal camera image. These response values are a precise indicator of atmospheric conditions and allow us to detect restricted-visibility conditions.
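A minimal sketch of this kind of edge-based template matching, assuming OpenCV; the Canny thresholds, the correlation method, and the decision threshold in the usage note are illustrative assumptions rather than the exact parameters of the study.

```python
import cv2

def visibility_response(thermal_frame, master_edge_signature):
    """Cross-correlate a target's master edge signature with the
    edge-detected thermal frame and return the peak response value.

    thermal_frame         : 8-bit grayscale thermal image (2-D array)
    master_edge_signature : 8-bit edge template of the target, same scale
    """
    # Edge-detect the current frame; thresholds are illustrative.
    edges = cv2.Canny(thermal_frame, 50, 150)

    # Normalized cross-correlation of the master signature with the edge
    # image; the peak indicates how well the target is currently resolved.
    response_map = cv2.matchTemplate(edges, master_edge_signature,
                                     cv2.TM_CCORR_NORMED)
    _, max_response, _, _ = cv2.minMaxLoc(response_map)
    return max_response

# Usage sketch: a low peak response for a target that is normally well
# resolved can be flagged as a restricted-visibility condition, e.g.
#   if visibility_response(frame, signature) < 0.3: flag_low_visibility()
```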
In this paper, we focus on the system-level design of a multi-camera sensor operating in the near-infrared (NIR) spectrum and its ability to detect mini-UAVs in a representative rural Swiss environment. The presented results show UAV detections from a field trial conducted in August 2015.
Methods: First, 8 male subjects had to search for specific female targets within a heavily cluttered public area. The subjects were supported by varying numbers of markings that helped them identify females in general. We presented video clips and analyzed the resulting search patterns. Second, 18 subject-matter experts had to identify targets at a heavily frequented motorway intersection. We presented them with video material from a UAV (Unmanned Aerial Vehicle) surveillance mission. The video image was subdivided into three zones: the central zone (CZ), a circular area of 10°; the peripheral zone (PZ), corresponding to the 4:3 format; and the hyper-peripheral zone (HPZ), representing the lateral regions specific to the 16:9 format. We analyzed fixation densities and task performance.
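A minimal sketch of how a fixation point could be assigned to one of the three zones, assuming a 16:9 frame, a fixed pixels-per-degree conversion, and a horizontally centered 4:3 region; these geometric parameters, and the reading of the 10° value as a diameter, are assumptions for illustration rather than the exact experimental setup.

```python
def classify_fixation(x, y, frame_w=1920, frame_h=1080, px_per_deg=40.0):
    """Assign a fixation point (in pixels) to CZ, PZ, or HPZ.

    Assumes a 16:9 frame with the 4:3 region centered horizontally and a
    constant pixels-per-degree conversion (illustrative values only).
    """
    cx, cy = frame_w / 2.0, frame_h / 2.0

    # Central zone: circle of 10 degrees around the image center
    # (interpreting the 10 degrees as a diameter is an assumption).
    radius_px = (10.0 / 2.0) * px_per_deg
    if (x - cx) ** 2 + (y - cy) ** 2 <= radius_px ** 2:
        return "CZ"

    # Peripheral zone: inside the centered 4:3 portion of the 16:9 frame.
    width_4x3 = frame_h * 4.0 / 3.0
    if abs(x - cx) <= width_4x3 / 2.0:
        return "PZ"

    # Hyper-peripheral zone: the lateral bands only present in 16:9.
    return "HPZ"
```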
Results: We found an approximately U-shaped relationship between the number of markings in a video and both the degree of structure in the search patterns and task performance. For the motorway surveillance task, we found a difference in mean detection time for CZ vs. HPZ (p = 0.01) and PZ vs. HPZ (p = 0.003), but no difference for CZ vs. PZ (p = 0.491). There were no differences in detection rate between the zones. We found the highest fixation density in the CZ, decreasing towards the HPZ.
Conclusion: We were able to demonstrate that markings can increase surveillance-operator performance in a cluttered environment as long as their number is kept within an optimal range. When performing a search task in a heavily cluttered environment, humans tend to show rather erratic search patterns and spend more time watching the central areas of the image.