Special Section Guest Editorial: Advances on Distributed Smart Cameras
Jorge Fernández-Berni, François Berry, Christian Micheloni

1. Introduction

This special section was aimed at bringing together the latest contributions to the exciting multidisciplinary field of distributed smart cameras. An open call for papers was issued, with relevant topics ranging from vision chips and dedicated real-time image processing hardware to high-level information processing and smart camera networks. Invitations were also issued for extended versions of selected works from the 2015 edition of the flagship academic event in this field, the International Conference on Distributed Smart Cameras, where the guest editors served as technical program chairs. A total of 20 manuscripts were submitted, out of which 12 papers were finally accepted—60% acceptance rate—after peer review by at least two experts in the field. These papers are briefly introduced in the sections below.

2. Visual Sensor Networks and Distributed Computer Vision

Eldib et al. present a privacy-aware visual sensor network enabling behavior analysis for elderly care. The network operates in a real, unstructured environment without camera calibration, producing reliable mobility patterns useful for caregivers. The analysis spans 10 months of real-life video recordings.

Hanca et al. report a video coding scheme suitable for wireless camera networks. The advances come from using a low-resolution video sensor integrated into a lightweight processing unit. They study performance-complexity trade-offs for feedback-channel removal. They also propose learning-based techniques for rate allocation and investigate various simplifications of side-information generation that yield real-time decoding.

Nuger and Benhabib propose a fusion technique to estimate the three-dimensional (3-D) shape of a deformable object via a multicamera vision system. The fusion of stereo triangulation and visual hulls allows prediction of the 3-D shape of unknown, markerless deforming objects (a minimal sketch of visual-hull carving closes this section). Comprehensive simulations and comparisons demonstrate the effectiveness of the proposed approach with respect to the state of the art.

Redondi et al. present a game-theoretic framework to better exploit the overlapping fields of view in visual sensor networks. The objectives are twofold: improving accuracy by means of multiview cameras and reducing energy consumption by exploiting redundancy. Both simulated and real-life tests show that the proposed approach increases the lifetime of the system without considerable loss of accuracy.
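To give a flavor of the silhouette-based reasoning behind visual-hull methods such as those fused by Nuger and Benhabib, the following minimal sketch carves a voxel grid against binary silhouettes: a voxel survives only if it projects inside the object silhouette in every calibrated view. This is an illustrative toy, not the authors' implementation; the NumPy interface, grid bounds, and default resolution are all assumptions.

import numpy as np

def visual_hull(silhouettes, projections, grid_min, grid_max, resolution=64):
    """Carve a voxel-based visual hull from binary silhouettes.

    silhouettes: list of H x W boolean masks (True = object pixel).
    projections: list of 3 x 4 camera projection matrices, one per view.
    grid_min, grid_max: 3-vectors bounding the working volume,
                        assumed to lie in front of every camera.
    Returns the 3-D coordinates of the voxels that survive carving.
    """
    axes = [np.linspace(grid_min[i], grid_max[i], resolution) for i in range(3)]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    points = np.stack([xs, ys, zs, np.ones_like(xs)], axis=-1).reshape(-1, 4)
    occupied = np.ones(len(points), dtype=bool)

    for mask, P in zip(silhouettes, projections):
        pix = points @ P.T                     # project all voxels into this view
        u = np.round(pix[:, 0] / pix[:, 2]).astype(int)
        v = np.round(pix[:, 1] / pix[:, 2]).astype(int)
        h, w = mask.shape
        inside = (pix[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(points), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        occupied &= hit                        # keep only voxels seen in every silhouette

    return points[occupied, :3]

Because the hull is the intersection of the silhouette cones, it always encloses the true object; fusing it with stereo triangulation, as the paper does, tightens the shape estimate.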

3. Smart Cameras and Vision Chips

The paper by Reichel et al. reports a simulation platform, mostly developed in SystemC, in which various sources of nonideality in mixed-signal vision chips can be evaluated in terms of their impact on computer-vision algorithms. This gives rise to a comprehensive design loop in which parameters at different abstraction levels are intertwined.

Kyrkou and Theocharides propose a feature-based visual search algorithm (a simplified sketch of such feature gating closes this section). It exploits motion, depth, and edge visual features to guide the object search toward only the most meaningful image regions, constraining the overall data search space. As a result, the amount of data feeding the classifier is reduced. An evaluation on an FPGA-based platform for face detection indicates that the data search reduction reaches 95%. The resulting system is able to process up to 50 images of 1024 × 768 pixels per second with a notable reduction in false positives.

Dziri et al. present a processing pipeline for tracking multiple objects. It is implemented from inexpensive off-the-shelf components (a Raspberry Pi board and a RaspiCam camera) and tested in real scenarios. Despite the low complexity of the proposed methodology, the tracking quality is close to state-of-the-art results.

Carraro et al. report an open-source software library dedicated to the new Kinect v2. This library is exploited in an embedded system, the NVIDIA Jetson TK1, giving rise to a cost-efficient RGB-D smart camera for people detection and tracking. One of the major results is point cloud generation at 22 Hz and people detection at 14 Hz; these frame rates are, respectively, double and triple those found in state-of-the-art works.

Imran et al. propose solutions for various challenges in the field of embedded smart cameras. In particular, the authors investigate two low-complexity, high-performance preprocessing architectures to be implemented on an FPGA for a multi-imaging node. The experiments show how such architectures can reach higher frame rates with lower memory and power consumption requirements.
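As a rough illustration of the data gating described by Kyrkou and Theocharides, the sketch below scores sliding windows by motion and edge energy and forwards only the promising ones to a downstream classifier. It is a simplified stand-in rather than the paper's FPGA design: depth cues are omitted, and the window size, stride, and thresholds are hypothetical.

import numpy as np

def candidate_windows(frame, prev_frame, win=32, stride=16,
                      motion_thresh=12.0, edge_thresh=20.0):
    """Keep only windows with enough motion and edge energy.

    frame, prev_frame: grayscale images as 2-D float arrays.
    Returns the (x, y) corners of windows worth classifying.
    """
    motion = np.abs(frame - prev_frame)   # frame differencing as a motion cue
    gy, gx = np.gradient(frame)           # image gradient as a cheap edge cue
    edges = np.hypot(gx, gy)

    selected = []
    h, w = frame.shape
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            m = motion[y:y + win, x:x + win].mean()
            e = edges[y:y + win, x:x + win].mean()
            if m > motion_thresh and e > edge_thresh:   # gate: skip static or flat regions
                selected.append((x, y))
    return selected

Only the windows returned here would be passed to the comparatively expensive object classifier, which is where the reported reduction of the data search space comes from.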

4. Emerging Applications

Fang et al. propose a new algorithm for the segmentation of infrared (IR) images. The method combines region and edge information to fit an adaptable active contour model. The targeted application is IR ship target segmentation. Experiments demonstrate the robustness of the model with respect to heterogeneous regions, weak edges, and noise under complex backgrounds.

The paper by Li et al. presents a low-cost approach for pose tracking using fused vision and inertial data (a toy version of such fusion closes this section). Experimental results show that the proposed system is accurate and robust against illumination changes and partial occlusions. Application scenarios such as augmented reality and 3-D game control are explored.

Ober-Gecks et al. present a method for the reconstruction of the photo hull. The proposed approach is based on a GPU implementation of voxel coloring, in turn based on item buffering. This speeds up the iterative carving procedure while considering all of the available pixel color information.
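For the kind of low-cost vision-inertial fusion explored by Li et al., a classic building block is the complementary filter: dead-reckon the pose from high-rate inertial data, then pull the estimate back toward each low-rate visual fix to cancel drift. The toy below handles position only; the class name, blending factor, and interface are illustrative assumptions, not the authors' pipeline.

import numpy as np

class ComplementaryPoseFilter:
    """Fuse high-rate inertial data with low-rate visual position fixes."""

    def __init__(self, alpha=0.9):
        self.alpha = alpha          # weight of the inertial prediction (0..1)
        self.pos = np.zeros(3)      # estimated position
        self.vel = np.zeros(3)      # estimated velocity

    def predict(self, accel, dt):
        """Inertial update, called at the IMU rate (e.g., 200 Hz).

        accel: gravity-compensated acceleration in the world frame (m/s^2).
        """
        self.vel += accel * dt
        self.pos += self.vel * dt   # dead reckoning; drifts without correction

    def correct(self, vision_pos):
        """Visual update, called whenever the camera yields a position fix."""
        # Blend the drifting inertial estimate toward the visual measurement.
        self.pos = self.alpha * self.pos + (1.0 - self.alpha) * vision_pos
        return self.pos

In practice, predict() runs at the IMU rate while correct() fires only when the camera delivers a new pose, which keeps the computational cost low on embedded hardware.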

Acknowledgments

The guest editors want to express their greatest appreciation to all the authors who submitted papers to this special section, and in particular to the authors of accepted papers for the hard work they put into addressing all reviewer comments. We also thank all of our reviewers for their valuable and timely reports. Last but not least, we appreciate the help and guidance provided by the JEI editorial staff in bringing this special section together.

Biographies

Jorge Fernández-Berni is a “Juan de la Cierva” senior researcher at the University of Seville. His main areas of interest are smart image sensors, vision chips, and embedded vision systems. He has authored/coauthored some 50 papers in refereed journals, conferences, and workshops in these fields. He is also the first author of a book and two book chapters as well as the first inventor of two licensed patents.

François Berry researches smart cameras, active vision, embedded vision systems, and hardware/software co-design. He is the head of the DREAM (Research on Embedded Architecture and Multisensor) team at University Blaise Pascal. He has authored and coauthored more than 60 papers for journals, conferences, and workshops. He is a cofounder of the Workshop on Architecture of Smart Cameras (WASC) and of Scabot (a workshop held in conjunction with IEEE IROS), as well as of the startup WISP.

Christian Micheloni is an associate professor of computer vision at the University of Udine. He has coauthored numerous scientific works published in international journals and refereed international conferences. His main interests involve active vision for wide-area scene understanding, neural networks for the classification and recognition of objects moving within the scene, and camera-network control protocols for improving cognition capabilities. He is also interested in pattern recognition and machine learning.

© 2016 Society of Photo-Optical Instrumentation Engineers (SPIE)
Jorge Fernández-Berni, François Berry, and Christian Micheloni "Special Section Guest Editorial: Advances on Distributed Smart Cameras," Journal of Electronic Imaging 25(4), 041001 (5 August 2016). https://doi.org/10.1117/1.JEI.25.4.041001
Published: 5 August 2016
Keywords: Cameras, Visualization, 3D image processing, Embedded systems, Image processing, Image segmentation, Infrared imaging
