THz imaging and sensing have demonstrated wide-ranging application potential. However, the transfer of such basic applicability observations to real-world application scenarios is severely obstructed by fundamental limitations imposed by the comparatively long wavelength of this analytic technique. This presentation gives an overview of recent signal processing developments for enhancing the analytic performance of THz imaging and sensing systems. The first part introduces advanced signal processing techniques to enhance the spectroscopic investigation capability of THz systems. Experiments are performed in particularly challenging application scenarios, including very thin material systems and measurements of strongly absorptive features beyond the signal-to-noise limitations of spectroscopic instrumentation. Model-based numeric procedures for spectroscopic investigation with pulsed THz systems are derived, which enhance the analytic material data quality by two orders of magnitude in comparison to established numeric procedures. Furthermore, computer-vision-based blind-deconvolution superresolution approaches are introduced, which enable an unassisted increase of the imaging resolution beyond the diffraction limit. Experiments performed with an FMCW-based THz imaging system operating from 514 to 640 GHz demonstrate a resolution increase by a factor of 2.3 beyond the diffraction limit, without requiring any prior knowledge of the size or shape of the imaging system's point-spread function, based instead on a direct analysis of the imaging data of an unknown target sample.
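The specific blind-deconvolution algorithm is not detailed in the abstract; as a rough illustration of the general idea, a classical alternating Richardson-Lucy blind deconvolution (a textbook baseline, not the authors' method, with the PSF size as a free parameter rather than prior knowledge of the true PSF) can be sketched as follows:

```python
import numpy as np
from scipy.signal import fftconvolve

def blind_richardson_lucy(image, psf_size=7, n_outer=10, n_inner=5, eps=1e-12):
    """Alternating Richardson-Lucy updates for the image and the PSF.

    Classical baseline for blind deconvolution: the PSF is initialised
    as a flat kernel and refined jointly with the image estimate.
    """
    est = np.full_like(image, image.mean())          # flat image initialisation
    psf = np.full((psf_size, psf_size), 1.0 / psf_size**2)
    for _ in range(n_outer):
        for _ in range(n_inner):                     # update the image estimate
            conv = fftconvolve(est, psf, mode="same") + eps
            est = est * fftconvolve(image / conv, psf[::-1, ::-1], mode="same")
        for _ in range(n_inner):                     # update the PSF estimate
            conv = fftconvolve(est, psf, mode="same") + eps
            ratio = fftconvolve(image / conv, est[::-1, ::-1], mode="same")
            # crop the image-sized correlation back down to the PSF support
            crop = tuple(slice((s - p) // 2, (s - p) // 2 + p)
                         for s, p in zip(image.shape, psf.shape))
            psf = psf * ratio[crop]
            psf = psf / psf.sum()                    # keep the PSF normalised
    return est, psf
```

The multiplicative updates keep both estimates non-negative, and renormalising the PSF after each step fixes the overall scale ambiguity inherent to blind deconvolution.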
Over the last decade, ToF sensors have attracted many computer vision and graphics researchers. Nevertheless, ToF devices suffer from severe motion artifacts in dynamic scenes as well as low-resolution depth data, which makes effective correction essential. To counteract these artifacts, a pre-processing approach is introduced that substantially improves range image data for dynamic scenes. We first demonstrate the robustness of our approach on simulated data and then validate the method on real sensor range data. Our GPU-based processing pipeline enhances range data reliability in real time.
We report on the development of an active stand-off imaging system operating in the 80-110 GHz frequency range. 3D real-time imaging is enabled by combining a mechanically scanned one-dimensional conventional imaging projection, using a rotating metallic reflector, with a two-dimensional synthetic imaging reconstruction based on a linear array of transmitter (Tx) and receiver (Rx) elements. The system is designed to achieve a resolution better than 1 cm in both the lateral and range directions by using a multi-view imaging geometry with an aperture larger than 2 m x 2 m. The operation distance is 8.5-9 m. The 2D synthetically reconstructed imaging planes are derived from the correlation of 20 sources and 24 coherent detectors. Range information is obtained by operating in a frequency modulated continuous wave (FMCW) mode. Real-time imaging is enabled by implementing the synthetic image reconstruction algorithms on a general purpose graphics processing unit (GPGPU) system. A multi-view imaging geometry is implemented in order to enhance the imaging resolution and to reduce the influence of specular reflections.
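The quoted sub-centimeter range resolution is consistent with the standard FMCW relation, range resolution = c / (2B): for the 80-110 GHz sweep the bandwidth B is 30 GHz, which gives roughly 5 mm. A minimal check:

```python
C = 299_792_458.0  # speed of light in m/s

def fmcw_range_resolution(bandwidth_hz: float) -> float:
    """Theoretical FMCW range resolution dR = c / (2 * B), in meters."""
    return C / (2.0 * bandwidth_hz)

# 80-110 GHz sweep -> 30 GHz bandwidth -> approximately 5 mm
dr = fmcw_range_resolution(30e9)
print(f"range resolution: {dr * 1e3:.2f} mm")  # prints "range resolution: 5.00 mm"
```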
The most common sellar lesion is the pituitary adenoma, and sellar tumors account for approximately 10-15% of all intracranial neoplasms. Manual slice-by-slice segmentation is time-consuming, and the effort can be reduced by using appropriate algorithms. In this contribution, we present a segmentation method for pituitary adenoma. The method is based on an algorithm that we recently applied to the segmentation of glioblastoma multiforme. A modification of this scheme is used for adenoma segmentation, which is much harder to perform due to the lack of contrast-enhanced boundaries. In our experimental evaluation, neurosurgeons performed manual slice-by-slice segmentation of ten magnetic resonance imaging (MRI) cases. These segmentations were compared with the results of the proposed method using the Dice Similarity Coefficient (DSC). The average DSC over all datasets was 75.92%±7.24%. A manual segmentation took about four minutes, whereas our algorithm required about one second.
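The DSC used for the evaluation is the standard overlap measure DSC = 2|A ∩ B| / (|A| + |B|) between two binary masks; a minimal implementation for volumetric label masks:

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice Similarity Coefficient, DSC = 2|A n B| / (|A| + |B|),
    for two binary segmentation masks of equal shape."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A DSC of 1.0 means perfect overlap with the manual reference segmentation, 0.0 means no overlap; the 75.92% reported above sits on this scale.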
A growing number of modern applications such as position determination, online object recognition and collision prevention depend on accurate scene analysis. A low-cost and fast alternative to standard techniques like laser scanners or stereo vision is distance measurement with modulated, incoherent infrared light based on the Photo Mixing Device (PMD) technique. This paper describes an enhanced calibration approach for PMD-based distance sensors, for which highly accurate calibration techniques have not yet been widely investigated. Compared to other known methods, our approach incorporates additional deviation errors related to the variation of the active illumination incident on the sensor pixels. The resulting calibration yields significantly more precise distance information. Furthermore, we present a simple-to-use, vision-based approach for the acquisition of the reference data required by any distance calibration scheme, yielding a lightweight, on-site calibration system with little expenditure.
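For context, PMD sensors recover distance from the phase of the modulated illumination, typically via a four-phase demodulation, and a calibration then maps the raw distances onto reference distances. The sketch below shows this generic pipeline; the modulation frequency of 20 MHz and the polynomial correction are illustrative assumptions, not the paper's specific calibration model (which additionally accounts for illumination-dependent deviations):

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def pmd_raw_distance(a0, a1, a2, a3, f_mod=20e6):
    """Raw distance from the standard four-phase ToF demodulation:
    phase = atan2(A3 - A1, A0 - A2),  d = c * phase / (4 * pi * f_mod).
    f_mod = 20 MHz is an assumed, typical modulation frequency."""
    phase = np.arctan2(a3 - a1, a0 - a2) % (2.0 * np.pi)
    return C * phase / (4.0 * np.pi * f_mod)

def fit_distance_correction(measured, reference, degree=3):
    """Fit a polynomial correction d_ref ~ p(d_measured) against known
    reference distances; a simple stand-in for a full per-pixel,
    illumination-aware calibration."""
    coeffs = np.polyfit(measured, reference, degree)
    return lambda d: np.polyval(coeffs, d)
```

Once fitted, the returned correction function is applied to every raw distance reading to remove the systematic deviation captured by the reference measurements.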