Obtaining depth information of a scene is an important requirement in many computer-vision and robotics applications. For embedded platforms, passive stereo systems have many advantages over their active counterparts (e.g. LiDAR, infrared). They are power efficient, cheap, robust to lighting conditions and inherently synchronized to the RGB images of the scene. However, stereo depth estimation is a computationally expensive task that operates over large amounts of data. For embedded applications, which are often constrained by power consumption, obtaining accurate results in real time is a challenge. We demonstrate a computationally and memory-efficient implementation of a stereo block-matching algorithm on an FPGA. The computational core achieves a throughput of 577 fps at standard VGA resolution while consuming less than 3 W of power. The data is processed using an in-stream approach that minimizes memory-access bottlenecks and best matches the raster-scan readout of modern digital image sensors.
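The core of stereo block matching can be sketched in a few lines. The following is a minimal software reference of sum-of-absolute-differences (SAD) matching with winner-takes-all disparity selection; the window size and disparity range here are illustrative assumptions, and the FPGA implementation described above streams pixels rather than buffering whole frames as this sketch does.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def block_match_sad(left, right, max_disp=16, block=5):
    # Naive SAD stereo block matching on a rectified grayscale pair.
    # For each candidate disparity d, left pixel x is compared with
    # right pixel x - d; costs are summed over a block x block window
    # and the minimum-cost disparity wins (winner-takes-all).
    h, w = left.shape
    half = block // 2
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    best_cost = np.full((h, w), np.inf, dtype=np.float32)
    disparity = np.zeros((h, w), dtype=np.int32)
    for d in range(max_disp):
        # Per-pixel absolute difference at disparity d; columns with no
        # valid match (x < d) get a large cost so they never win.
        diff = np.full((h, w), 1e9, dtype=np.float32)
        diff[:, d:] = np.abs(left[:, d:] - right[:, :w - d])
        # Aggregate costs over the matching window.
        padded = np.pad(diff, half, mode='edge')
        cost = sliding_window_view(padded, (block, block)).sum(axis=(-2, -1))
        # Keep the disparity with the lowest aggregated cost so far.
        better = cost < best_cost
        best_cost[better] = cost[better]
        disparity[better] = d
    return disparity
```

A hardware pipeline computes the same per-window costs, but evaluates all candidate disparities in parallel as each pixel arrives from the sensor, which is what makes the in-stream approach a natural fit for raster-scan readout.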
A method of estimating vehicle height, width and speed from images obtained by a monocular camera is presented. The method is based on the detection and tracking of vehicle license plates. The distance between the license plate and the camera is calculated from the plate's pixel coordinates. The method makes no assumptions about the camera mounting height. The computational cost and the processing time are reduced by using tilt measurements provided by a microelectromechanical (MEMS) sensor and field-of-view data obtained prior to installation.
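To illustrate the underlying geometry, the sketch below estimates plate-to-camera distance from the plate's apparent width under a pinhole-camera model, using the field-of-view data mentioned above. The 0.52 m plate width and the use of the horizontal field of view alone are illustrative assumptions, not the paper's exact calibration, which also incorporates the MEMS tilt measurements.

```python
import math

def plate_distance(plate_px_width, image_width_px, hfov_deg,
                   plate_width_m=0.52):
    # Pinhole model: focal length in pixels follows from the image
    # width and horizontal field of view, and distance is recovered
    # from the known physical plate width vs. its width in pixels.
    # plate_width_m = 0.52 is an assumed (EU-style) plate width.
    focal_px = (image_width_px / 2) / math.tan(math.radians(hfov_deg) / 2)
    return plate_width_m * focal_px / plate_px_width
```

Tracking the plate across frames then gives speed directly: successive distance estimates divided by the inter-frame interval yield the velocity component along the optical axis.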