Combining the research lines of robotics and artificial intelligence through computer vision, this work uses robotic systems that can recognize and understand images and scenes, generally integrating the areas of object detection, image recognition, and image generation. Object detection is increasingly sophisticated in robotics because of the countless applications that can be developed through image processing. This article shows the implementation of an NVIDIA® Jetson development board in a remotely piloted aircraft system (RPAS) for object recognition based on focal loss. Obtaining results is therefore a challenge, and the difficulties faced during development will be shown in the expected final solution.
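Focal loss, on which the detector above is based, down-weights well-classified examples so that training concentrates on hard ones. A minimal NumPy sketch of the binary form, FL(p_t) = -α(1 - p_t)^γ log(p_t); the α and γ values are the common defaults, not figures taken from this article:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss.

    p : predicted probability of the positive class
    y : ground-truth label (0 or 1)
    The (1 - p_t)**gamma factor shrinks the loss of
    well-classified examples (p_t close to 1).
    """
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)            # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# An easy example (p_t = 0.9) is penalized far less than a hard one (p_t = 0.1).
easy = focal_loss(np.array([0.9]), np.array([1]))
hard = focal_loss(np.array([0.1]), np.array([1]))
```

The modulating factor is what distinguishes this from plain cross-entropy: with γ = 0 the expression reduces to α-weighted cross-entropy.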
Technological development has made it possible for devices to contain all the functionality of a digital system on a single chip (SoC), with a very high scale of integration (VLSI) of hundreds of millions of gates at very low cost, together with the design, verification, and synthesis tools offered by the vendors that make these SoC and FPGA components. These companies offer integrated development environments with software tools that cover everything from the specification of the design to its synthesis into an integrated circuit and its verification in industry-standard languages such as Verilog and VHDL. This paper shows the advantages in design, verification, synthesis, and testing that can be obtained by using hardware description languages such as CHISEL and MyHDL for real-time video processing, and demonstrates their main advantages in both learning time and cost.
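MyHDL, for instance, describes synchronous hardware as Python generator functions that resume on each clock edge. The stdlib-only sketch below does not use the MyHDL package itself (whose `@always_seq` decorator and `Signal` objects play this role in real designs); it only mimics that generator style with a 3-stage shift register, to suggest why a Python-embedded HDL is quick to learn and simulate:

```python
def shift_register(depth=3):
    """Generator modeling a clocked shift register, MyHDL-style.

    Each send(d_in) represents one rising clock edge: the register
    chain shifts by one stage and the last stage drives the output.
    """
    regs = [0] * depth              # flip-flop contents, reset to 0
    d_in = yield regs[-1]           # priming yield before the first edge
    while True:
        regs = [d_in] + regs[:-1]   # shift on the clock edge
        d_in = yield regs[-1]       # output is the oldest stage

# Drive six clock cycles: outputs stay 0 until the pipeline fills,
# then the input stream emerges with a latency of `depth` edges.
dut = shift_register(3)
next(dut)                           # start the simulation
outputs = [dut.send(x) for x in [1, 2, 3, 4, 5, 6]]
```

In actual MyHDL the same behavior would be a few lines longer but directly convertible to Verilog or VHDL for synthesis.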
This article shows the application of the advantages offered by the SURF algorithm for the detection of points of interest in video images, monitored in real time, of the concrete units that form a breakwater. This monitoring and image-analysis procedure makes it possible to determine the displacement suffered by the elements, or armor units, of the breakwater and, consequently, the damage sustained by the breakwater. The technique is applied in modeling studies in coastal hydraulics laboratories. Damage can be weighted as a percentage of the total number of armor units on the slope of the breakwater per unit of area covered by the video camera, using monitoring and digital image processing with the SURF algorithm to determine the movements of the armor units properly and efficiently.
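SURF owes its real-time speed largely to the integral image, which lets the box filters that approximate Hessian responses be evaluated in four array lookups regardless of filter size. A minimal NumPy sketch of that building block (the function names are illustrative, not from the article or the OpenCV API):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] via four lookups on a zero-padded table."""
    ii = np.pad(ii, ((1, 0), (1, 0)))   # pad so y0 = 0 or x0 = 0 works
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

img = np.arange(16, dtype=np.float64).reshape(4, 4)
ii = integral_image(img)
total = box_sum(ii, 1, 1, 3, 3)   # sum of the central 2x2 block: 5+6+9+10
```

Because `box_sum` costs the same for any rectangle, SURF can evaluate its filters at multiple scales without rescaling the image, which is what makes frame-rate monitoring of the armor units feasible.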
Computer vision has a high computational cost, although some algorithms have been implemented to extract image features that enable classification, object and face recognition, and so on. Some solutions have been developed on computers, DSPs, and GPUs, but these are not time-optimal. In order to improve the performance of these algorithms, we implement the SURF algorithm on an embedded system (FPGA) and apply it to uncontrolled environments that require real-time response. In this work we develop the SURF algorithm to improve processing time in video and image processing; we use an FPGA to run the algorithm, and we compare the processing time across different devices as well as the features found in the images. These features are invariant to scale, rotation, and lighting; the SURF algorithm localizes the interest points (features) and is used in facial recognition, object detection, stereo vision, and so on. The algorithm has a high computational cost because it processes a large amount of data, so in order to reduce this cost we implemented LUTs and reduced execution time in the code. With this work we try to find the best way to implement the algorithm on embedded systems, for use in uncontrolled environments and autonomous robots.
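The LUT optimization mentioned above trades memory for arithmetic: an expensive function is precomputed once over a quantized input range, so each later evaluation becomes a single memory read, which on an FPGA maps naturally onto block RAM. A hedged Python sketch using a Gaussian weighting function as the example (the specific function, range, and table size are illustrative, not taken from the article):

```python
import numpy as np

# Precompute exp(-x**2 / 2) for quantized inputs in [0, 8).
LUT_SIZE = 256
SCALE = LUT_SIZE / 8.0                   # 32 table entries per unit of x
GAUSS_LUT = np.exp(-(np.arange(LUT_SIZE) / SCALE) ** 2 / 2.0)

def gauss_lut(x):
    """Table-based Gaussian weight: one indexed read per evaluation."""
    idx = np.minimum((np.asarray(x) * SCALE).astype(int), LUT_SIZE - 1)
    return GAUSS_LUT[idx]

# The table agrees with the direct computation up to quantization error.
approx = gauss_lut(1.5)
exact = np.exp(-1.5 ** 2 / 2.0)
```

The design choice is the usual hardware trade-off: a finer table (larger `LUT_SIZE`) lowers quantization error at the cost of more block RAM, while the arithmetic per lookup stays constant.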