Owing to their non-destructiveness, high accuracy, and high sensitivity, optical measurement techniques have been successfully applied to measure various important physical quantities in experimental mechanics, materials science, biomechanics, and related fields. As these techniques are pushed to handle increasingly large amounts of data at higher accuracy, their computational burden has grown heavier. In the past decade, parallel computing devices have been applied to accelerate them, among which graphics processing units (GPUs) have become mainstream due to their high parallelism, cost effectiveness, short development cycle, and transparent scalability. Additionally, the compute unified device architecture (CUDA), developed by NVIDIA, provides an easy-to-use C/C++ programming interface, which makes it possible to program GPUs without learning complex shading languages or the graphics pipeline. This Spotlight not only demonstrates the power of GPUs in accelerating optical measurement algorithms but also provides a hands-on approach for accelerating existing sequential algorithms on CUDA-capable GPUs. Readers with a basic understanding of C/C++ programming can then integrate CUDA into their existing optical algorithms for higher computing performance.
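To give a flavor of the programming model the abstract describes, the following is a minimal, hypothetical CUDA sketch (not code from the book): a kernel that computes a pixel-wise intensity difference between a deformed and a reference image, the kind of element-wise operation common in optical measurement that maps naturally onto thousands of GPU threads. The kernel name, image size, and data are illustrative assumptions.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical pixel-wise kernel: each thread computes the intensity
// difference between a deformed and a reference image at one pixel.
__global__ void diffKernel(const float* ref, const float* def,
                           float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                        // guard against thread overrun
        out[i] = def[i] - ref[i];     // one pixel per thread, in parallel
}

int main()
{
    const int n = 512 * 512;          // e.g., a 512x512 image (assumed size)
    size_t bytes = n * sizeof(float);

    float *ref, *def, *out;           // unified memory: visible to host and device
    cudaMallocManaged(&ref, bytes);
    cudaMallocManaged(&def, bytes);
    cudaMallocManaged(&out, bytes);

    for (int i = 0; i < n; ++i) { ref[i] = 1.0f; def[i] = 2.0f; }

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;   // enough blocks to cover all pixels
    diffKernel<<<blocks, threads>>>(ref, def, out, n);
    cudaDeviceSynchronize();          // wait for the GPU to finish

    printf("out[0] = %f\n", out[0]);

    cudaFree(ref);
    cudaFree(def);
    cudaFree(out);
    return 0;
}
```

The same C/C++-style syntax, plus the `__global__` qualifier and the `<<<blocks, threads>>>` launch configuration, is what lets a sequential per-pixel loop be ported to the GPU without any shading-language or graphics-pipeline knowledge.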