Recent technological advancements in hardware have yielded higher-quality cameras. State-of-the-art panoramic systems use them to produce videos with a resolution of 9000 × 2400 pixels at a rate of 30 frames per second (fps).[1] Many modern applications use object tracking to determine the speed and path of each object moving through a scene. Detection requires detailed pixel analysis between consecutive frames. In fields such as surveillance and crowd analysis, this must be achieved in real time.[2]
In this paper, we focus on the system-level design of a multi-camera sensor acquiring the near-infrared (NIR) spectrum and on its ability to detect mini-UAVs in a representative rural Swiss environment. The presented results show UAV detection during a field trial that we conducted in August 2015.
Graphics Processing Units (GPUs) are powerful devices with extensive processing capabilities for parallel workloads. Detecting objects in a scene requires a large number of independent pixel operations on the video frames that can be performed in parallel, making GPUs a good choice of processing platform. This paper concentrates solely on background subtraction techniques to detect the objects present in the scene. The foreground pixels are extracted from the processed frame and compared to the corresponding pixels of the background model. Using a connected-component detector, neighboring pixels are gathered into blobs, which correspond to the detected foreground objects. The new blobs are compared to the blobs formed in the previous frame to determine whether the corresponding object has moved.
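The detection pipeline described above can be sketched in a few lines: threshold the per-pixel difference against the background model, then group neighboring foreground pixels into blobs with a connected-component pass. This is a minimal illustrative sketch, not the paper's implementation; the threshold value, 4-connectivity, minimum blob area, and bounding-box blob representation are all assumptions.

```python
from collections import deque

def detect_blobs(frame, background, thresh=30, min_area=5):
    """Threshold the per-pixel difference against the background model,
    then group neighboring foreground pixels into blobs (bounding boxes)
    with a 4-connected flood fill."""
    h, w = len(frame), len(frame[0])
    # Per-pixel foreground test: independent per pixel, hence GPU-friendly.
    fg = [[abs(frame[y][x] - background[y][x]) > thresh for x in range(w)]
          for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if fg[y][x] and not seen[y][x]:
                # BFS over the connected component starting at (x, y).
                q, comp = deque([(x, y)]), []
                seen[y][x] = True
                while q:
                    cx, cy = q.popleft()
                    comp.append((cx, cy))
                    for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                        if 0 <= nx < w and 0 <= ny < h and fg[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((nx, ny))
                if len(comp) >= min_area:  # discard isolated noise pixels
                    xs = [p[0] for p in comp]
                    ys = [p[1] for p in comp]
                    blobs.append((min(xs), min(ys), max(xs), max(ys)))
    return blobs

# Synthetic example: dark 10x10 scene with one bright 3x3 object at (3, 2).
bg = [[0] * 10 for _ in range(10)]
frame = [row[:] for row in bg]
for y in range(2, 5):
    for x in range(3, 6):
        frame[y][x] = 200
print(detect_blobs(frame, bg))  # → [(3, 2, 5, 4)]
```

Tracking then amounts to matching each detected bounding box against the blobs of the previous frame.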
Creating panoramic images has become a popular feature in modern smart phones, tablets, and digital cameras.
A user can create a 360-degree field-of-view photograph from only a few images. The quality of the resulting
image depends on the number of source images, their brightness, and the algorithm used for stitching
and blending. One of the algorithms that provides excellent results in terms of background color uniformity and
reduction of ghosting artifacts is multi-band blending. The algorithm relies on decomposing the image into
multiple frequency bands using a dyadic filter bank. Hence, the results are also highly dependent on the filter used.
In this paper, we analyze the performance of the FIR filters used for multi-band blending. We present a set
of five filters that showed the best results in both literature and our experiments. The set includes Gaussian
filter, biorthogonal wavelets, and custom-designed maximally flat and equiripple FIR filters. The presented
results of filter comparison are based on several no-reference metrics for image quality. We conclude that
the 5/3 biorthogonal wavelet produces the best results on average, especially considering its short length.
Furthermore, we propose a real-time FPGA implementation of the blending algorithm, using 2D non-separable
systolic filtering scheme. Its pipeline architecture does not require hardware multipliers and it is able to achieve
very high operating frequencies. The implemented system is able to process 91 fps at 1080p (1920×1080) image resolution.
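The band-decomposition-and-blend idea above can be illustrated with a small 1-D sketch. This is not the paper's FPGA design: it uses a 3-tap binomial kernel standing in for the filter set under study, omits the dyadic decimation step, and picks the number of bands arbitrarily. Each band is blended with a progressively smoother mask, which is what removes the visible seam.

```python
def smooth(sig):
    """3-tap binomial lowpass [1, 2, 1]/4 with edge replication,
    standing in for one dyadic filter-bank stage (no decimation)."""
    padded = [sig[0]] + list(sig) + [sig[-1]]
    return [(padded[i - 1] + 2 * padded[i] + padded[i + 1]) / 4
            for i in range(1, len(padded) - 1)]

def multiband_blend(a, b, mask, levels=3):
    """Blend signals a and b: split each into frequency bands
    (band = signal - smoothed signal), blend every band with a
    progressively smoother mask, and sum the contributions."""
    out = [0.0] * len(a)
    m = list(mask)
    for _ in range(levels):
        sa, sb = smooth(a), smooth(b)
        band_a = [x - y for x, y in zip(a, sa)]  # detail band of a
        band_b = [x - y for x, y in zip(b, sb)]  # detail band of b
        out = [o + mi * ba + (1 - mi) * bb
               for o, mi, ba, bb in zip(out, m, band_a, band_b)]
        a, b, m = sa, sb, smooth(m)              # descend one band
    # Lowest (residual) band is blended with the smoothest mask.
    return [o + mi * x + (1 - mi) * y for o, mi, x, y in zip(out, m, a, b)]

left = [1.0] * 8          # bright image region
right = [0.0] * 8         # dark image region
hard_mask = [1, 1, 1, 1, 0, 0, 0, 0]  # hard seam in the middle
out = multiband_blend(left, right, hard_mask)
print(out)
```

Blending two constant signals across a hard seam yields a smooth monotone transition rather than a visible step, which is the ghosting/uniformity benefit the abstract refers to.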
Omni-directional high resolution surveillance has a wide application range in defense and security fields. Early systems
used for this purpose are based on a parabolic mirror or a fisheye lens, where distortion due to the nature of the optical
elements cannot be avoided. Moreover, in such systems, the image resolution is limited to that of a single image sensor.
Recently, the Panoptic camera approach, which mimics the eyes of flying insects using multiple imagers, has been presented.
This approach features a novel solution for constructing a spherically arranged wide FOV plenoptic imaging system where
the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera
designs is provided. New results for a very-high resolution visible spectrum imaging and recording system inspired from
the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording
omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over (17,700×4,650) pixels (82.3MP). Real-time
video capturing capability is also verified at 30 fps for a resolution over (9,000×2,400) pixels (21.6MP). The next
generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently
under development. The substantial capture capacity of GigaEye-1 opens the door to various post-processing techniques in the
surveillance domain, such as large-perimeter object tracking, very-high-resolution depth map estimation, and high
dynamic-range imaging, which go beyond standard stitching and panorama generation methods.
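The pixel counts quoted above can be checked with simple arithmetic. The megapixel figures follow directly from the stated resolutions; the raw throughput estimate additionally assumes 3 bytes per pixel (RGB), which is an assumption not stated in the text.

```python
# Resolutions quoted for the two GigaEye-1 operating modes.
full_res = (17_700, 4_650)  # recording mode, 9.5 fps
rt_res = (9_000, 2_400)     # real-time mode, 30 fps

mp_full = full_res[0] * full_res[1] / 1e6
mp_rt = rt_res[0] * rt_res[1] / 1e6
print(f"{mp_full:.1f} MP")  # → 82.3 MP, matching the quoted figure
print(f"{mp_rt:.1f} MP")    # → 21.6 MP, matching the quoted figure

# Raw data rate in the real-time mode, assuming 3 bytes/pixel RGB.
gbps = rt_res[0] * rt_res[1] * 3 * 30 * 8 / 1e9
print(f"{gbps:.1f} Gbit/s raw (assumed 3 B/pixel)")
```

The resulting raw rate (over 15 Gbit/s under the stated assumption) illustrates why real-time processing at these resolutions requires dedicated FPGA or GPU hardware.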