Efficient and accurate real-time perception systems are critical for Unmanned Aerial Vehicle (UAV) applications that aim to provide enhanced situational awareness to users. Specifically, object recognition is a crucial element for surveillance and reconnaissance missions since it provides fundamental semantic information about the aerial scene. In this study, we describe the development and implementation of a perception framework on an embedded computer vision platform, mounted on a hexacopter, for real-time object detection. The framework includes a camera driver and a deep neural network based object detection module, and has distributed computing capabilities between the aerial platform and the corresponding ground station. Preliminary real-time object detections using YOLO are performed onboard the UAV, and a sequence of images is streamed to the base station, where an advanced computer vision algorithm, referred to as Multi-Expert Region-based CNN (ME-RCNN), is leveraged to provide enhanced and fine-grained analytics on the aerial video feeds. Since annotated aerial imagery in the UAV domain is hard to obtain and not routinely available, we train the neural network on a combination of real aerial data and air-to-ground synthetic images of objects such as vehicles, generated by video gaming engines. Through this study, we quantify the improvement gained from the synthetic dataset and the efficacy of using advanced object detection algorithms.
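The distributed design described above (a fast onboard detector triaging frames, with selected frames streamed to a heavier ground-station model) can be sketched as follows. This is an illustrative reconstruction under assumptions, not the authors' actual software: the names `Detection`, `run_pipeline`, and the `stream_threshold` parameter are hypothetical, and the detectors are stand-ins for YOLO and ME-RCNN.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Detection:
    label: str
    confidence: float
    box: Tuple[int, int, int, int]  # x, y, w, h in pixels

def run_pipeline(frames,
                 onboard_detector: Callable,
                 ground_detector: Callable,
                 stream_threshold: float = 0.3) -> List[List[Detection]]:
    """Run the cheap onboard detector on every frame; frames whose best
    onboard confidence clears stream_threshold are sent to the ground
    station for a fine-grained second pass (standing in for ME-RCNN)."""
    results: List[List[Detection]] = []
    for frame in frames:
        coarse = onboard_detector(frame)  # stands in for onboard YOLO
        if any(d.confidence >= stream_threshold for d in coarse):
            results.append(ground_detector(frame))  # refined analytics
        else:
            results.append(coarse)  # keep the cheap onboard result
    return results
```

A design choice worth noting: gating the stream on onboard confidence keeps the air-to-ground link from being saturated with empty frames, at the cost of missing objects the onboard detector scores below the threshold.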
Development of machine vision systems to examine fruit for quality and contamination problems has been stalled by
the lack of an inexpensive, fast method for appropriately orienting fruit for imaging. We recently discovered that apples
could be oriented based on inertial properties. Apples were rolled down a ramp consisting of two parallel rails. When
sufficient angular velocity was achieved, the apples moved to a configuration where the stem/calyx axis was
perpendicular to the direction of travel. This discovery provides a potential basis for development of a commercially
viable orientation system. However, many questions remain concerning the underlying dynamic principles that govern
this phenomenon. An imaging system and software were constructed to allow detailed observation of the orientation
process. Sequential 640×480 monochrome images are acquired at 60 fps and 1/500 sec exposure. The software finds the
center of the apple in each image as well as the vertical movement of the track at a selected coordinate. Early tests
revealed that the compliance of the track played a significant role in the orientation process. These data will be used to
compare results from empirical tests with predictions of dynamic models.
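The per-frame center-finding step described above can be sketched as an intensity-weighted centroid on a thresholded monochrome frame. This is a minimal illustrative sketch, not the study's actual software; the function name `apple_center` and the fixed intensity `threshold` are assumptions.

```python
import numpy as np

def apple_center(frame: np.ndarray, threshold: int = 128):
    """Estimate the apple's center in a 640x480 monochrome frame as the
    centroid of pixels brighter than `threshold` (assumes a bright apple
    against a darker background). Returns (row, col) in pixel units, or
    None if no pixel clears the threshold."""
    mask = frame > threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())
```

Tracking this centroid across the 60 fps image sequence yields the apple's trajectory, which can then be compared against the vertical track displacement measured at a selected coordinate.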