Laser vibrometry based on coherent detection allows non-contact measurements of small-amplitude vibration
characteristics of objects. This technique, commonly using the Doppler effect, offers much potential for short-range civil
applications and for long-range applications in defence and security. Most commercially available laser vibrometers are
for short ranges (up to a few tens of metres) and use a single beam from a low power HeNe laser source (λ = 632 nm).
Long-range applications need higher laser output power, and thus appropriate vibrometers typically operate at 1.5 μm, 2
μm or 10.6 μm to meet the laser safety regulations.
Spatially resolved vibrational information can be obtained from an object by using scanning laser vibrometers. To reduce
measuring time and to measure transient object movements and mode structures of objects, several approaches to multibeam
laser Doppler vibrometry have been developed, and some of them are already commercially available for short ranges.
In this paper we focus on applications in the field of defence and security such as target classification and identification,
including camouflaged or partly concealed targets, and the detection of buried land mines. Some examples of civil
medium-range applications are also given.
In the present publication we investigate 3D range-gated imaging in scattering environments. Experimental data
were obtained from measurements in a fog chamber as well as in the Baltic Sea using an underwater range-gated
imaging system. A detailed analysis of these data reveals that the reconstruction of 3D information is degraded
by the scattering environment. Consequently, a first model was set up for the propagation of light in the
range-gated imaging process. This model forms the basis for the development of image post-processing algorithms
applying least-squares curve-fitting methods. A pixel-wise least-squares curve fit of the experimental data reveals
the contribution of diffusely scattered light from neighboring areas to the imaging process. Therefore, an extended
spatio-temporal model that takes the diffusely scattered light into account was set up and is discussed.
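As an illustration of pixel-wise least-squares curve fitting, the sketch below fits a simplified Gaussian gate-response model to one pixel's intensity-versus-gate-delay profile. The model form, the noise level and all parameter values are assumptions for illustration, not the model developed in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def gate_response(t, a, t0, sigma, b):
    # Simplified model of gated intensity vs. gate delay t: a Gaussian
    # peak at the time-of-flight t0 plus a constant background b.
    return a * np.exp(-0.5 * ((t - t0) / sigma) ** 2) + b

def fit_pixel(delays, intensities):
    # Initial guess derived from the data itself
    p0 = [intensities.max() - intensities.min(),
          delays[np.argmax(intensities)],
          delays[1] - delays[0],
          intensities.min()]
    popt, _ = curve_fit(gate_response, delays, intensities, p0=p0)
    return popt[1]  # estimated time-of-flight; range = c * t0 / 2

# Synthetic single-pixel profile: gate delays in ns
delays = np.linspace(0.0, 100.0, 51)
true_t0 = 42.0
profile = gate_response(delays, 1.0, true_t0, 5.0, 0.1)
profile += np.random.default_rng(0).normal(0.0, 0.01, delays.size)
t0_est = fit_pixel(delays, profile)
print(round(t0_est, 1))
```

Repeating this fit for every pixel yields a per-pixel time-of-flight (and hence range) map from the gated image stack.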
In this publication we investigate an image coding method for 3D range-gated imaging. The method is based
on multiple exposures of range-gated images, enabling ranges to be coded in a limited number of images. For
instance, it is possible to enlarge the depth mapping range by a factor of 12 by using 3 images and
specific 12T image coding sequences. Further, we present a node model to determine the coding
sequences and to drastically reduce the time needed to compute the number of possible sequences. Finally, we
demonstrate and discuss the application of 12T sequences with clock periods from T = 200 ns to 400 ns.
This paper investigates the prospects of "seeing around the corner" using active imaging. A monostatic active imaging
system offers interesting capabilities in the presence of glossy reflecting objects. Examples of such surfaces are windows
in buildings and cars, calm water, signs and vehicle surfaces. During daylight it may well be possible to exploit mirror-like
reflections with the naked eye or a CCD camera for non-line-of-sight imaging. The advantage of active imaging,
however, is that one controls the illumination. This allows not only low-light and night-time use but also operation in cases
where the sun or other interfering lights limit the non-line-of-sight imaging possibilities. The range resolution obtained by
time gating reduces disturbing direct reflections and allows simultaneous viewing in several directions using range gates.
Measurements and theoretical considerations in this report support the idea of using lasers to "see around the corner".
Examples of images and reflectivity measurements will be presented together with examples of potential system applications.
Using laser imaging systems to represent a 3D scene is becoming a key prospective technology in the areas of guidance and
navigation. Measurements with high spatial resolution at significant range can be achieved, even in degraded visibility
conditions such as brown-out/white-out, rain, fog or sandstorms. Moreover, this technology is well suited for assisted
perception tasks (access to 3D information) and obstacle detection (telemetry of small objects). For airborne applications, it is
highly complementary to conventional enhanced vision systems such as Forward Looking Infrared (FLIR) and millimeter wave
radar in providing images of terrain in environments with limited visibility. It also provides 3D mapping of terrain, or of a single location
relative to its environment, which, alone or combined with other sensors, can realign and validate in real time the database of
information used in a synthetic vision system (SVS). The objective of this work is to assess the impact of degraded
visibility conditions on the radiometric propagation of a 3D laser scanner, as these directly influence the performance of the
ladar system.
The new generation of laser-based FLASH 3D imaging sensors enables data collection at video rate. This opens up
real-time data analysis but also places demands on the signal processing. In this paper the possibilities and challenges of this
new data type are discussed. The commonly used focal-plane-array detectors produce range estimates that vary
with the target's surface reflectance and target range, and our experience is that the built-in signal processing may not
compensate fully for that. We propose a simple adjustment that can be used even if some sensor parameters are not
known. The cost of instantaneous image collection, compared with scanning laser radar systems, is lower range accuracy.
By gathering range information from several frames, the geometrical information of the target can be obtained. We
also present an approach for using range data to remove foreground clutter in front of a target. Further, we
illustrate how range data enable target classification in near real-time and how the results can be improved if several
frames are co-registered. Examples using data from forest and maritime scenes are shown.
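The range-based removal of foreground clutter mentioned above can be sketched with a hypothetical threshold rule (not the authors' algorithm; the target range and tolerance are assumed known):

```python
import numpy as np

def remove_foreground_clutter(range_img, intensity_img, target_range, tol):
    # Keep only pixels whose measured range lies within +-tol of the
    # assumed target range; closer returns are treated as foreground clutter.
    mask = np.abs(range_img - target_range) <= tol
    cleaned = np.where(mask, intensity_img, 0.0)
    return cleaned, mask

# Toy 3x3 example: clutter at 10 m in front of a target at 50 m
rng_img = np.array([[10., 50., 50.],
                    [10., 50., 50.],
                    [50., 50., 10.]])
inten = np.ones((3, 3))
cleaned, mask = remove_foreground_clutter(rng_img, inten,
                                          target_range=50.0, tol=2.0)
print(int(mask.sum()))  # number of target pixels kept
```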
In this paper we present a 3D reconstruction technique designed to support an autonomously navigated unmanned system. The algorithm and methods presented focus on the 3D reconstruction of a scene, with color and distance information, using only a single moving camera. In this way, the system may provide positional self-awareness for navigation within a known, GPS-denied area. It can also be used to construct a new model of unknown areas. Existing 3D reconstruction methods for GPS-denied areas often rely on expensive inertial measurement units to establish camera location and orientation. The proposed algorithm---after the preprocessing tasks of stabilization and video enhancement---performs Speeded-Up Robust Feature extraction, in which we locate unique stable points within every frame. Additional features are extracted using an optical flow method, with the resultant points fused and pruned based on several quality metrics. Each unique point is then tracked through the video sequence and assigned a disparity value used to compute the depth of each feature within the scene. The algorithm also assigns each feature point a horizontal and vertical coordinate using the camera's field-of-view specifications. The resulting point cloud consists of thousands of feature points, plotted for a particular camera position and direction and generated from pairs of sequential frames. The proposed method can use the yaw, pitch and roll information calculated from visual cues within the image data to accurately compute location and orientation. This positioning information enables the reconstruction of a robust 3D model particularly suitable for autonomous navigation and mapping tasks.
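The disparity-to-depth step and the field-of-view-based angular coordinates can be sketched under pinhole-model assumptions; the focal length, baseline and FOV values below are illustrative, not taken from the paper:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    # Classic pinhole-stereo relation Z = f * B / d, applied here to the
    # disparity of a feature tracked between two sequential frames.
    return focal_px * baseline_m / disparity_px

def angular_coords(px, py, width, height, hfov_deg, vfov_deg):
    # Assign horizontal/vertical angles from the camera's field-of-view
    # specification (linear approximation across the sensor).
    ax = (px / (width - 1) - 0.5) * hfov_deg
    ay = (py / (height - 1) - 0.5) * vfov_deg
    return ax, ay

# Illustrative feature: 8 px disparity, 800 px focal length, 0.1 m baseline
z = depth_from_disparity(8.0, focal_px=800.0, baseline_m=0.1)
print(z)  # 10.0 m for this feature
```

Combining the depth with the two angles per feature yields the 3D point cloud in camera-centred coordinates.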
In 2004, a laser scanner device for commercial airborne laser scanning applications, the RIEGL LMS-Q560, was
introduced to the market, making use of a radical alternative approach to the traditional analogue signal detection and
processing schemes found in LIDAR instruments so far: digitizing the echo signals received by the instrument for every
laser pulse and analysing these echo signals off-line in a so-called full waveform analysis in order to retrieve almost all
information contained in the echo signal using transparent algorithms adaptable to specific applications. In the field of
laser scanning the somewhat unspecific term "full waveform data" has since become established. We attempt a
categorisation of the different types of full waveform data found on the market. We discuss the challenges in echo
digitization and waveform analysis from an instrument designer's point of view, and we address the benefits to be
gained by using this technique, especially with respect to the so-called multi-target capability of pulsed time-of-flight laser scanners.
Correctly determining a measurement range in LIDAR instruments, based on time-of-flight measurements on laser
pulses, requires the allocation of each received echo pulse to its causative emitted laser pulse. Without further
precautions this definite allocation is only possible under specific conditions constraining the usability of range finders
and laser scanners with very high measurement rates. The loss of range unambiguity in high-repetition-rate systems is well
known in RADAR, where the term "multiple time around" (MTA) has been coined. However, because of fundamental
differences between scanning LIDAR and RADAR with respect to MTA processing, new approaches for resolving
range ambiguities in LIDAR are possible. In this paper we compare known and novel techniques for avoiding or even
resolving range ambiguities without any further user interaction. Such techniques may be based on measures
affecting hardware (e.g. spatial multiplexing or modulation of consecutive laser pulses), software (e.g. assumptions about
the true measurement range based on a rough DTM), or both hardware and software, in order to achieve a high probability of
correctly resolved range ambiguities. Furthermore, a comparison of the different approaches is given, discussing their
specific advantages and disadvantages and their current implementation status.
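The software-based approach, picking among MTA hypotheses using a rough DTM prior, can be sketched as a simple selection rule. The pulse rate, DTM range and number of hypotheses below are assumed values for illustration:

```python
def resolve_mta(tof_measured, pri, c, dtm_range, max_k=5):
    # An echo measured at time 'tof_measured' after the latest emission may
    # belong to any of the last few pulses (multiple-time-around ambiguity).
    # Choose the pulse index k whose implied range is closest to a rough
    # prior range taken from a digital terrain model (DTM).
    best_k, best_err = 0, float("inf")
    for k in range(max_k):
        r = c * (tof_measured + k * pri) / 2.0
        err = abs(r - dtm_range)
        if err < best_err:
            best_k, best_err = k, err
    return best_k, c * (tof_measured + best_k * pri) / 2.0

c = 3e8
pri = 1.0 / 400e3          # 400 kHz pulse rate -> 375 m unambiguous range
k, rng_m = resolve_mta(1.0e-6, pri, c, dtm_range=500.0)
print(k, round(rng_m))     # the k=1 hypothesis (525 m) best matches the DTM
```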
In this work an algorithm for tracking a set of moving objects is described. Important features of the task are
crossings of the object trajectories and temporary occlusion of objects by other objects. The source data for the
proposed algorithm are a list of the parameters of the binary regions extracted from each image of the sequence. The main
idea of the algorithm is to build a bipartite graph. A recursive procedure is used to partition the graph into
connected subgraphs corresponding to five situations: detection of a new object, a missing object, merging of objects into
one region, division of a region, and "simple" object tracking. These subgraphs are used to form a new list of the objects.
Experimental evaluation of the algorithm shows good tracking performance in both ground and aerial environments.
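A minimal sketch of the bipartite-graph idea: associations between previous objects and current regions are given here as a plain edge list (the region parameters and the association rule are abstracted away), the graph is partitioned into connected subgraphs, and each subgraph is classified into one of the five situations by its shape:

```python
from collections import defaultdict

def connected_components(edges, objects, regions):
    # Bipartite graph: previous objects ('O', i) vs. current regions ('R', j).
    adj = defaultdict(set)
    for i, j in edges:
        adj[('O', i)].add(('R', j))
        adj[('R', j)].add(('O', i))
    nodes = {('O', i) for i in objects} | {('R', j) for j in regions}
    seen, comps = set(), []
    for n in nodes:                      # split into connected subgraphs
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.add(u)
            stack.extend(adj[u] - seen)
        comps.append(comp)
    return comps

def classify(comp):
    # The five situations, decided by the subgraph's composition.
    n_obj = sum(1 for t, _ in comp if t == 'O')
    n_reg = sum(1 for t, _ in comp if t == 'R')
    if n_obj == 0:
        return 'new object'
    if n_reg == 0:
        return 'missing object'
    if n_obj == 1 and n_reg == 1:
        return 'simple tracking'
    if n_obj > 1 and n_reg == 1:
        return 'merge'
    return 'split'

# Objects 0,1 merge into region 0; object 2 tracked to region 1; region 2 is new
comps = connected_components([(0, 0), (1, 0), (2, 1)],
                             objects=[0, 1, 2], regions=[0, 1, 2])
labels = sorted(classify(c) for c in comps)
print(labels)
```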
This contribution reports some of the fusion results from the EDA SNIPOD project, where different multisensor
configurations for sniper detection and localization have been studied. A project aim has been to cover the
whole time line from sniper transport and establishment to shot. To do so, different optical sensors with and
without laser illumination have been tested, as well as acoustic arrays and solid state projectile radar. A sensor
fusion node collects detections and background statistics from all sensors and employs hypothesis testing and
multisensor estimation programs to produce unified and reliable sniper alarms and accurate sniper localizations.
Operator interfaces that connect to the fusion node should be able to support both sniper countermeasures and
the guidance of personnel to safety. Although the integrated platform has not actually been built, sensors have
been evaluated at common field trials with military ammunition in calibers from 5.56 to 12.7 mm, and
at sniper distances up to 900 m. It is concluded that integrating complementary sensors for pre- and postshot
sniper detection in a common system with automatic detection and fusion will give superior performance,
compared to stand alone sensors. A practical system is most likely designed with a cost effective subset of
available complementary sensors.
From scientific research to deployable operational solutions, Fourier-Transform Infrared (FT-IR)
spectroradiometry is widely used for the development and enhancement of military and research
applications. These techniques include target IR signature characterization, development of advanced
camouflage techniques, monitoring of aircraft engine plumes, meteorological sounding, and atmospheric
composition analysis such as the detection and identification of chemical threats. Imaging FT-IR spectrometers
have the capability of generating 3D images composed of multiple spectra, one associated with every pixel of
the mapped scene. These data allow accurate spatial characterization of a target's signature by spatially
resolving the spectral characteristics of the observed scenes.
MR-i is the most recent addition to the MR product line series and generates spectral data cubes in the
MWIR and LWIR. The instrument is designed to acquire the spectral signature of various scenes with high
temporal, spatial and spectral resolution. The four port architecture of the interferometer brings modularity
and upgradeability since the two output ports of the instrument can be populated with different
combinations of detectors (imaging or not). For instance, to measure over a broad spectral range from 1.3 to
13 μm, one output port can be equipped with a LWIR camera while the other port is equipped with a
MWIR camera. Both ports can be equipped with cameras serving the same spectral range but set at
different sensitivity levels in order to increase the measurement dynamic range and avoid saturation of
bright parts of the scene while simultaneously obtaining good measurement of the faintest parts of the
scene. Various telescope options are available for the input port.
An overview of the instrument capabilities will be presented, as well as test results and results from field trials
for a configuration with two MWIR cameras. That specific system is dedicated to the characterization of
airborne targets. The expanded dynamic range provided by the two MWIR cameras makes it possible to
simultaneously measure the spectral signature of the cold background and of the warmest elements of the
scene (flares, jet engine exhausts, etc.).
In many space systems, imaging is a core technology for fulfilling the mission requirements. Depending on the application, the
needs and the constraints differ, and imaging systems can offer a large variety of configurations in terms of
wavelength, resolution, field of view, focal length or sensitivity. Adequate image processing algorithms allow the
extraction of the needed information and the interpretation of images.
As a prime contractor for many major civil or military projects, Astrium ST is very involved in the proposition,
development and realization of new image-based techniques and systems for space-related purposes. Among the
different applications, space surveillance is of major importance for the future of space transportation. Indeed, studies show that
the number of debris objects in orbit is growing exponentially and the already existing population of small and medium debris is
a concrete threat to operational satellites. This paper presents Astrium ST activities regarding space surveillance for
space situational awareness (SSA) and space traffic management (STM). Among other possible SSA architectures, the
relevance of a ground-based optical station network is investigated. The objective is to detect and track space debris and
maintain an exhaustive and accurate catalogue up-to-date in order to assess collision risk for satellites and space vehicles.
The system is composed of different types of optical stations dedicated to specific functions (survey, passive tracking,
active tracking), distributed around the globe. To support these investigations, two in-house operational breadboards
were implemented and are operated for survey and tracking purposes.
This paper focuses on the Astrium ST end-to-end optical survey concept. For the detection of new debris, a network of
wide-field-of-view survey stations is considered: these stations are able to detect small objects, and the associated image
processing (detection and tracking) allows a preliminary determination of their orbits.
Onera, the French Aerospace Lab, develops and models active imaging systems to understand the relevant physical
phenomena affecting their performance. Consequently, efforts have been devoted both to the propagation of a
pulse through the atmosphere (scintillation and turbulence effects) and to target geometries and their surface properties
(radiometric and speckle effects). But these imaging systems must operate at night, in all ambient illuminations and
weather conditions, in order to perform strategic surveillance of the environment for various worldwide operations or
to enable the enhanced navigation of an aircraft. Onera has implemented codes for 2D and 3D laser imaging systems.
As we aim to image a scene even in the presence of rain, snow, fog or haze, Onera introduces such meteorological
effects into these numerical models and compares simulated images with measurements provided by commercial imaging systems.
Applications like hydro-archeology, hydrobiology, or hydraulic engineering sometimes require accurate surveying of
submerged areas with point densities usually only achieved with mobile or terrestrial laser scanning. For navigable
waterbodies, hydrographic laser scanning from a floating platform represents a viable solution. RIEGL's new
hydrographic laser scanner VQ-820-G with its exceptionally high measurement rate of up to 110,000 net measurements
per second and its small laser footprint is optimally suited for such applications. We present results from a measurement
campaign surveying prehistoric lake dwellings at Lake Constance in Germany. While the aim of typical hydrographic
laser scanning applications is to roughly acquire the ground's shape and structure, in this case the aim was to determine
the exact position, shape, and attitude of the remains of the piles. The special requirements with respect to mission
planning and data processing are discussed and the performance of the laser scanner is assessed.
A remote laser timing system has been developed for use by the British Cycling team. Five optical Timing Gate Units
(TGU) have been installed around the track at the Manchester Velodrome. Each TGU can identify and monitor multiple
cyclists during training sessions. Lap and split times can be measured as well as the speeds of individual cyclists passing
each gate. The system allows coaches to concentrate on observing the cyclists' technique rather than manually capturing
their times. It has resulted in more effective and efficient training sessions that have helped cyclists improve their
performance. This paper will describe the design issues encountered, as well as the optical and signal processing
solutions. Example results obtained from training sessions will be presented.
In this paper, a novel technique for accurate velocity measurement of high-speed space objects using linear frequency
modulation (LFM) radar signatures is presented. The proposed method utilizes the phase slope of the received
intermediate frequency (IF) signals from the moving object to estimate the object's kinematic parameters. Constant false
alarm rate (CFAR) detection and finite impulse response (FIR) filtering are subsequently exploited to enhance the
accuracy of the velocity estimates. Then polynomial fitting is incorporated into the phase slope analysis to further reduce
estimation errors. Simulation results demonstrate that the phase slope method is computationally efficient and accurate
for velocity estimation.
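The phase-slope principle can be sketched on a simulated IF signal; the wavelength, sampling rate and velocity below are assumed values, and the CFAR detection and FIR filtering stages are omitted:

```python
import numpy as np

lam = 0.03                    # assumed radar wavelength [m] (X-band)
v_true = 7500.0               # radial velocity of the space object [m/s]
fs = 2e6                      # IF sampling rate [Hz]
t = np.arange(2048) / fs

# The Doppler shift appears as a linear phase ramp of the IF signal:
# phase(t) = 2*pi*f_d*t, with f_d = 2*v/lam
fd = 2.0 * v_true / lam
g = np.random.default_rng(1)
sig = np.exp(1j * 2 * np.pi * fd * t)
sig = sig + 0.05 * (g.standard_normal(t.size) + 1j * g.standard_normal(t.size))

# Phase-slope estimate: unwrap the IF phase and fit a first-order polynomial;
# the polynomial fit plays the error-reducing role mentioned in the abstract.
phase = np.unwrap(np.angle(sig))
slope = np.polyfit(t, phase, 1)[0]        # [rad/s]
v_est = slope / (2 * np.pi) * lam / 2.0
print(round(v_est, 1))
```

Because the estimate uses the full phase history rather than a single spectral peak, it remains accurate with modest computation.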
The European Defence Agency (EDA) launched the Active Imaging (ACTIM) study to investigate the potential of active
imaging, especially that of spectral laser imaging. The work included a literature survey, the identification of promising
military applications, system analyses, a roadmap and recommendations.
Passive multi- and hyper-spectral imaging allows discriminating between materials. But the radiance measured at the
sensor is difficult to relate to spectral reflectance owing to its dependence on, e.g., solar angle, clouds and shadows.
Active spectral imaging, in turn, offers complete control of the illumination, thus eliminating these effects. In addition it allows
observing details at long ranges, seeing through degraded atmospheric conditions, penetrating obscurants (foliage,
camouflage, etc.) and retrieving polarization information. When extended to 3D, it is suited to producing numerical terrain models and to
performing geometry-based identification. Hence fusing ladar with passive spectral imaging will result
in new capabilities.
We have identified three main application areas for active imaging, and for spectral active imaging in particular: (1) long
range observation for identification, (2) mid-range mapping for reconnaissance, (3) shorter range perception for threat
detection. We present the system analyses that were performed to confirm the benefits, limitations and
requirements of spectral active imaging in these three prioritized applications.
A fast and reliable detection of potentially dangerous substances has become very important in ensuring civilian security.
Currently, modern security systems have proven to be more effective on the basis that objects should be properly
characterized and identified. For instance, chemical tests are used to identify samples of whitish powder that is suspected
to be dangerous or illegal. Although these chemical tests are conducted very quickly, they are relatively expensive.
However, well-established methods of optical characterization offer a suitable alternative. The demand for low-cost and
disposable devices has accelerated the development of intelligent photodiodes, especially of tunable a-Si:H multispectral
photodiodes. Our reengineering aim is to develop the best match for the spectral response adjustment. Unfortunately,
it is not sufficient to optimize the spectral response alone. The top-down design flow begins with the calculation of the
photocurrent for different combinations of light sources, spectral responses and whitish powder samples to build up a
multivariate data set. The optimum combination is found at the point of intersection of the factor values in a 2-D
scattergram. The use of such optimized photodiodes is therefore expected to simplify and accelerate the identification
of potentially dangerous substances.
High-power near-infrared LEDs (IREDs) are gaining more and more interest in a large variety of commercial, industrial
and military applications.
IREDs are based on InAlGaAs semiconductor structures which cover a spectral range of 780 nm to 1100 nm. This
wavelength range is nominally invisible to the human eye; however, depending on the radiant intensity and wavelength,
a reddish glow may still be evident. Therefore, in covert applications longer wavelengths of 940 nm or higher are
preferred, owing to the much lower sensitivity of the human eye there compared to 850 nm. On the other hand, at around 850 nm
the spectral sensitivity of CMOS or CCD cameras and other silicon-based photodetectors is at its maximum. We present
the latest developments in high-power IREDs in the quest for more than 1 W from a single 1 mm2 die.
Fiber-optic interferometers are highly sensitive instruments able to measure slight changes such as distortion
of shape, temperature, and electric-field variations. Their great advantage is that they are insensitive to the ageing
of the components they are built from, because it is not the change in optical signal intensity that is evaluated but
the number of interference fringes. To monitor the movement of persons, and to analyze changes
in their state of motion, we developed a method based on analyzing the dynamic changes in the interferometric pattern. We used a
Mach-Zehnder interferometer with conventional SM and PM fibers, excited with a DFB laser at a wavelength of 1550
nm and terminated with an optical receiver containing an InGaAs PIN photodiode. The receiver output was fed into a measuring
card module that performs an FFT of the received interferometer signal. The signal arises from the superposition of the two
waves passing through the interferometer arms. The SMF-28e or PM PANDA fiber in one arm serves as the
reference; the other arm is attached to a measuring slab with dimensions of 1×2 m. The movement of persons over the slab
was monitored, the signal was processed with an FFT, and the frequency spectra, which arise from the dynamic changes
of the interferometric pattern, were evaluated. The results show that individual subjects passing over the slab produce characteristic
frequency spectra that are specific to each person. The measured frequencies ranged from zero
to 10 kHz. In the experiments, the stability of the interferometric patterns was evaluated both over time and over
repeated identical experiments. Two kinds of balls (tennis and ping-pong) were used for repeatability
measurements, and the spectra obtained from repeated ball drops were compared. The balls struck the same place and
fell from the same height, and the dispersion of the obtained frequency spectra was evaluated. These experiments were
performed as series of 20 repeated drops from heights of 0.5 and 1 m. The evaluation showed that the
dispersion of the measured values is lower than 4% and could be reduced further by using PM fibers.
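The FFT evaluation of the receiver signal can be sketched as follows; the fringe frequencies here are assumed for illustration, not the measured spectra:

```python
import numpy as np

fs = 20000                       # sampling rate [Hz], covering the 0-10 kHz band
t = np.arange(fs) / fs           # one second of receiver output
# Toy receiver signal: two fringe-rate components standing in for the
# dynamic interferometric pattern produced by motion over the slab
sig = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 430 * t)

spec = np.abs(np.fft.rfft(sig)) / len(sig)     # normalized amplitude spectrum
freqs = np.fft.rfftfreq(len(sig), 1 / fs)
peaks = freqs[spec > 0.2]                      # dominant fringe frequencies
print(peaks)
```

Comparing such spectra across repeated events is the basis for the repeatability evaluation described above.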
An iterative algorithm which identifies the presence of different gases using a hyperspectral image was developed and
tested. The algorithm uses the "stepwise regression" method combined with new methods of detection and identification.
This algorithm begins with a library of gas signatures; an initial fit is done with all the gases. The algorithm then
eliminates those signatures which do not noticeably improve the fit to the measured signature. We then consider which
of the detected gases have a high probability of being confused with other gases that are also
present in the scene. A necessary post-processing step eliminates gases which do not uniquely fit the signature of the
examined pixel, with an emphasis on eliminating gases which may have been misidentified.
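A minimal sketch of the stepwise elimination, using ordinary least squares over a toy signature library (the detection and identification refinements described above are omitted, and the tolerance is an assumed parameter):

```python
import numpy as np

def stepwise_fit(pixel, library, tol=1e-3):
    # Start with all gas signatures; repeatedly drop a signature whose
    # removal does not noticeably worsen the least-squares fit.
    active = list(range(library.shape[1]))

    def residual(cols):
        coef, *_ = np.linalg.lstsq(library[:, cols], pixel, rcond=None)
        return np.linalg.norm(library[:, cols] @ coef - pixel)

    base = residual(active)
    improved = True
    while improved and len(active) > 1:
        improved = False
        for g in list(active):
            trial = [c for c in active if c != g]
            r = residual(trial)
            if r - base < tol:     # g does not noticeably improve the fit
                active, base, improved = trial, r, True
                break
    return active

rng = np.random.default_rng(2)
lib = rng.random((50, 4))                 # 4 candidate gas signatures
truth = 2.0 * lib[:, 0] + 1.0 * lib[:, 2]  # pixel contains gases 0 and 2
print(sorted(stepwise_fit(truth, lib)))
```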
Most chemical gas detection algorithms for hyperspectral imaging applications assume a gas with a perfectly
known spectral signature. In practice, the chemical signature is either imperfectly measured and/or exhibits
spectral variability due to temperature variations and Beer's law. The objective of this work is to explore robust
matched filters that take the uncertainty and/or variability of the target signatures into account. We introduce
various techniques that control the selectivity of the matched filter and we evaluate their performance in standoff
LWIR hyperspectral chemical gas detection applications.
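One standard way of controlling matched-filter selectivity is diagonal loading of the background covariance; the sketch below uses it as a stand-in for the robust techniques studied in the paper, on synthetic data with an assumed signature:

```python
import numpy as np

def matched_filter_scores(X, s, loading=0.0):
    # X: pixels x bands; s: assumed target (gas) signature.
    # Adding 'loading' to the covariance diagonal de-emphasizes the
    # background whitening, making the filter less selective and hence
    # more tolerant to mismatch/variability in the signature.
    mu = X.mean(axis=0)
    C = np.cov(X, rowvar=False) + loading * np.eye(X.shape[1])
    Ci_s = np.linalg.solve(C, s)
    w = Ci_s / (s @ Ci_s)           # unit-gain matched filter
    return (X - mu) @ w

rng = np.random.default_rng(3)
X = rng.normal(0.0, 1.0, (200, 10))   # synthetic background pixels
s = np.ones(10)
X[17] += 3.0 * s                      # plant one target pixel
scores = matched_filter_scores(X, s, loading=0.1)
print(int(np.argmax(scores)))         # the planted pixel stands out
```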
Remote sensing by infrared spectroscopy allows detection and identification of hazardous clouds in the atmosphere from
long distances. Previous work showed how imaging spectroscopy can be used to assess the location, the dimensions, and
the dispersion of a potentially hazardous cloud. In this work an infrared hyperspectral imager based on a Michelson
interferometer in combination with a focal plane array detector was deployed to measure gas emissions in the Hamburg
port area. Emissions from ships, industrial sources as well as gases released intentionally were measured. Using
algorithms for remote sensing by infrared spectroscopy it was possible to identify, visualize, and track the gas clouds in
real time. The system proved to be robust in the field. It provided excellent spectra with low noise and high spatial resolution.
Recent advances in InGaAs camera technology have stimulated interest in the short-wave infrared (SWIR) band in
the spectral region 0.9 - 1.7 μm. Located between the visible and the thermal infrared, the SWIR band shows
interesting properties of both. Images tend to have the look of the visible and are less affected by scattering from
aerosol haze; however, the solar irradiance drops rapidly with wavelength in the SWIR. Spectral signatures,
particularly of paints and dyes, may differ in the SWIR band compared to the visible. For these reasons we
have chosen to investigate hyper-spectral measurements in this band using the NovaSol μHSI SWIR hyper-spectral imager.
The described spectral imaging system, referred to as a Snapshot Hyperspectral Imaging Fourier Transform (SHIFT)
spectrometer, is capable of acquiring spectral image data of a scene in a single integration of a camera, is ultra-compact,
inexpensive (commercial off-the-shelf), has no moving parts, and can produce datacubes (x, y, λ) in real time. Based on
the multiple-image FTS originally developed by A. Hirai, the presented device offers significant advantages over his
original implementation. Namely, its birefringent nature results in a common-path interferometer which makes the
spectrometer insensitive to vibration. Furthermore, it enables the potential of making the instrument ultra-compact,
thereby improving the portability of the sensor. By combining a birefringent interferometer with a lenslet array, the
entire spectrometer consumes approximately 15×15×20 mm3, excluding the imaging camera. The theory of the
birefringent FTS is provided, followed by details of its specific embodiment and a laboratory proof of concept of the
sensor. Post-processing is currently accomplished in Matlab, but progress is underway in developing real-time
reconstruction capabilities with software programmed on a graphics processing unit (GPU). It is anticipated that
processing of >30 datacubes per second can be achieved with modest GPU hardware, with spatial/spectral data at or
exceeding 256×256 spatial resolution elements and 60 spectral bands over the visible (400-800 nm) spectrum. Data
were collected outdoors, demonstrating the sensor's ability to resolve spectral signatures in standard outdoor lighting and
environmental conditions as well as retinal imaging.
We present two methods to improve the well-known algorithms for hyperspectral point target detection: the constrained
energy minimization algorithm (CEM), the Generalized Likelihood Ratio Test algorithm (GLRT) and the adaptive
coherence estimator algorithm (ACE). The original algorithms rely solely on spectral information and do not use spatial
information; this is normally justified in subpixel target detection since the target size is smaller than the size of a pixel.
However, we have found that, since the background (and the false alarms) may be spatially correlated and the point
spread function can distribute the energy of a point target between several neighboring pixels, we should consider spatial
filtering algorithms. The first improvement uses the local spatial mean and covariance matrix which take into account the
spatial local mean instead of the global mean. The second considers the fact that the target physical sub-pixel size will
appear in a cluster of pixels. We test our algorithms by using the dataset and scoring methodology of the Rochester
Institute of Technology (RIT) Target Detection Blind Test project. Results show that both spatial methods independently
improve the basic spectral algorithms mentioned above; when used together, the results are even better.
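The first improvement, substituting a local spatial mean for the global scene mean, can be sketched as a per-band box filter (the window size is an assumed parameter); on a spatially structured background the local residual is much smaller than the global one:

```python
import numpy as np

def local_mean(cube, win=5):
    # Sliding-window spatial mean per band (box filter with edge padding),
    # used in place of the global scene mean before whitening/detection.
    pad = win // 2
    H, W, _ = cube.shape
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    out = np.zeros_like(cube, dtype=float)
    for dy in range(win):
        for dx in range(win):
            out += padded[dy:dy + H, dx:dx + W, :]
    return out / (win * win)

# Background with a horizontal gradient: the local mean tracks it, so the
# residual (input minus mean) shrinks compared with using a global mean.
H, W, B = 16, 16, 4
cube = np.tile(np.arange(W, dtype=float)[None, :, None], (H, 1, B))
res_local = cube - local_mean(cube)
res_global = cube - cube.mean(axis=(0, 1))
print(np.abs(res_local).mean() < np.abs(res_global).mean())
```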
VTT Technical Research Centre of Finland has developed a lightweight Fabry-Perot interferometer based hyperspectral
imager weighing only 400 g, which makes it compatible with various small UAV platforms. The concept of the
hyperspectral imager has been published in SPIE Proc. 7474 and 7668. This UAV spectral imager is capable of
recording 5 Mpix multispectral data in the wavelength range of 500 - 900 nm at resolutions of 10-40 nm,
Full-Width-Half-Maximum (FWHM). An internal memory buffer allows 16 Mpix of image data to be stored during one
image burst. The user can configure the system to take either three 5 Mpix images or up to 54 VGA resolution images
with each triggering. Each image contains data from one, two or three wavelength bands which can be separated during
post processing. This allows a maximum of 9 spectral bands to be stored in high spatial resolution mode or up to 162
spectral bands in VGA-mode during each image burst. Image data is stored in a compact flash memory card which
provides the mass storage for the imager. The field of view of the system is 26° × 36° and the ground pixel size at 150 m
flying altitude is around 40 mm in high-resolution mode. The design, calibration and test flight results will be presented.
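The quoted ground pixel size can be checked from the field of view and flying altitude; the 5 Mpix detector format of 2592×1944 pixels used below is an assumption:

```python
import math

def ground_pixel_size(altitude_m, fov_deg, n_pixels):
    # Swath on the ground along one FOV axis, divided by the pixel count.
    swath = 2 * altitude_m * math.tan(math.radians(fov_deg / 2))
    return swath / n_pixels

# 36 deg axis mapped across 2592 pixels at 150 m altitude
gsd = ground_pixel_size(150, 36, 2592)
print(round(gsd * 1000))   # ground pixel size in millimetres
```

The result is close to 40 mm, consistent with the figure quoted above.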
The accuracy achieved by applications employing hyperspectral data collected by hyperspectral cameras depends
heavily on a proper estimation of the true spectral signal. Unquestionably, proper knowledge of the sensor
response is key in this process. It is argued here that the common first-order representation for hyperspectral
NIR sensors does not accurately represent their thermal wavelength-dependent response, hence calling for more
sophisticated and precise models. In this work, a wavelength-dependent, nonlinear model for a near infrared
(NIR) hyperspectral camera is proposed based on its experimental characterization. Experiments have shown
that when temperature is used as the input signal, the camera response is almost linear at short wavelengths,
while as the wavelength increases the response becomes exponential. This wavelength-dependent behavior is
attributed to the nonlinear responsivity of the sensors in the NIR spectrum. As a result, the proposed model
considers different nonlinear input/output responses, at different wavelengths. To complete the representation,
both the nonuniform response of neighboring detectors in the camera and the time varying behavior of the input
temperature have also been modeled. The experimental characterization and the proposed model assessment
have been conducted using a NIR hyperspectral camera in the range of 900 to 1700 [nm] and a black body
radiator source. The proposed model was utilized to successfully compensate for both: (i) the nonuniformity
noise inherent to the NIR camera, and (ii) the striping noise induced by the nonuniformity and the scanning
process of the camera while rendering hyperspectral images.
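A minimal sketch of such a wavelength-dependent, per-pixel nonlinear correction; a quadratic response is assumed here as a stand-in for the exponential behavior at longer wavelengths, and all coefficient values are illustrative:

```python
import numpy as np

def calibrate_pixel(temps, responses, deg=2):
    # Per-pixel polynomial fit of the nonlinear response vs. blackbody
    # temperature, from an experimental characterization.
    return np.polyfit(temps, responses, deg)

def correct(raw, coeffs, ref_coeffs):
    # Nonuniformity correction: invert the pixel's own fitted response
    # to recover temperature, then re-evaluate on a common reference curve.
    a, b, c = coeffs
    disc = np.sqrt(b * b - 4.0 * a * (c - raw))
    T = (-b + disc) / (2.0 * a)          # physical root of a*T^2+b*T+c = raw
    return np.polyval(ref_coeffs, T)

temps = np.array([300.0, 350.0, 400.0, 450.0])      # blackbody set points [K]
ref_coeffs = np.array([2e-4, 0.3, 5.0])             # reference response curve
px_true = np.array([1e-4, 0.5, 10.0])               # a pixel with its own gain/offset
px_coeffs = calibrate_pixel(temps, np.polyval(px_true, temps))

raw = np.polyval(px_true, 375.0)                    # what this pixel reads at 375 K
print(round(float(correct(raw, px_coeffs, ref_coeffs)), 3))
```

Applying the correction pixel by pixel maps all detectors onto the common reference response, removing both the fixed-pattern nonuniformity and the striping it induces during scanning.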