Measurement of objects with complex geometry and many self-occlusions is increasingly important in fields such as additive manufacturing. In a conventional fringe projection system, the camera and the projector cannot move independently of each other, which limits the ability of the system to overcome object self-occlusions. We demonstrate a fringe projection setup in which the camera can move independently of the projector, thus minimising the effects of self-occlusion. The angular motion of the camera is tracked by an on-board inertial angular sensor and used to recalibrate the system; the same sensor data additionally enables automated point cloud registration.
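As an illustration of sensor-driven registration, the sketch below (a toy numpy example, not the authors' implementation) undoes a camera rotation reported by a hypothetical inertial sensor to coarsely register two views of the same point cloud; the single-axis rotation and the noise-free sensor reading are simplifying assumptions:

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Reference cloud, and the same cloud seen after the camera rotates
# by `theta` about a single axis (toy assumption: one-axis motion).
theta = np.deg2rad(20.0)
cloud_ref = np.random.default_rng(0).random((100, 3))
cloud_moved = cloud_ref @ rot_z(theta).T

# Coarse registration: undo the sensor-reported rotation.
cloud_registered = cloud_moved @ rot_z(-theta).T
print(np.allclose(cloud_registered, cloud_ref))  # True
```

In practice an inertial reading would only seed a refinement step such as ICP, since real sensor estimates drift and the motion has more degrees of freedom.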
Photogrammetry-based systems are able to produce 3D reconstructions of an object from a set of images taken from different orientations. In this paper, we implement a light-field camera within a photogrammetry system in order to capture additional depth information alongside the photogrammetric point cloud. Compared to a traditional camera, which only captures the intensity of the incident light, a light-field camera also provides angular information for each pixel. In principle, this additional information allows 2D images to be reconstructed at a given focal plane, and hence a depth map to be computed. Through the fusion of light-field and photogrammetric data, we show that it is possible to reduce the measurement uncertainty for a millimetre-scale 3D object, compared to that achieved by either system alone. By imaging a series of test artefacts from various positions, individual point clouds were produced from depth-map information and from triangulation of corresponding features between images. Data fusion methods were then applied to both measurements to produce a single point cloud with reduced measurement uncertainty.
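One simple way to combine two estimates of the same point with known uncertainties is inverse-variance weighting; the sketch below (a hedged numpy illustration with made-up coordinates and uncertainty values, not necessarily the fusion method used in the paper) shows why the fused uncertainty falls below either input:

```python
import numpy as np

def fuse_points(p_a, sigma_a, p_b, sigma_b):
    """Inverse-variance weighted fusion of two estimates of the same
    3D point; returns the fused point and its combined uncertainty."""
    w_a = 1.0 / sigma_a**2
    w_b = 1.0 / sigma_b**2
    p = (w_a * p_a + w_b * p_b) / (w_a + w_b)
    sigma = np.sqrt(1.0 / (w_a + w_b))
    return p, sigma

# Two noisy estimates of the same point (millimetres; values invented).
p_lf = np.array([1.02, 2.01, 5.10])   # light-field depth map
p_pg = np.array([0.99, 1.98, 4.95])   # photogrammetric triangulation
p, sigma = fuse_points(p_lf, 0.05, p_pg, 0.02)
print(sigma < 0.02)  # True: fused uncertainty below both inputs
```

The fused point is pulled towards the lower-uncertainty photogrammetric estimate, and the combined standard uncertainty is always smaller than the smaller of the two inputs.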
In this paper we show that, by using a photogrammetry system with and without laser speckle projection, a large range of additive manufacturing (AM) parts with different geometries, materials and post-processing textures can be measured with high accuracy. AM test artefacts were produced in three materials: polymer powder bed fusion (nylon-12), metal powder bed fusion (Ti-6Al-4V) and polymer material extrusion (ABS plastic). Each test artefact was then measured with the photogrammetry system in both normal and laser speckle projection modes, and the resulting point clouds were compared with the artefact CAD model. The results show that laser speckle projection can reduce the standard deviation of the point cloud from the CAD data by up to 101 μm. A complex relationship between surface texture, artefact geometry and laser speckle projection is also observed and discussed.
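A point-cloud-to-CAD comparison of the kind described above can be sketched as a nearest-neighbour deviation statistic. The example below uses a synthetic plane as a stand-in CAD surface and brute-force distances; a real pipeline would first register the cloud to the model and use a spatial index, so this is only an illustration of the statistic itself:

```python
import numpy as np

def deviation_stats(measured, reference):
    """Distance from each measured point to its nearest reference
    (CAD-sampled) point; returns the mean and standard deviation."""
    d = np.linalg.norm(measured[:, None, :] - reference[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return nearest.mean(), nearest.std()

# Dense sampling of a nominal plane stands in for the CAD surface.
g = np.linspace(0.0, 1.0, 21)
gx, gy = np.meshgrid(g, g)
cad = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)])

# Simulated measurement: a subset of the surface plus Gaussian noise.
rng = np.random.default_rng(1)
subset = cad[::5]
measured = subset + rng.normal(0.0, 0.01, subset.shape)

mean_dev, std_dev = deviation_stats(measured, cad)
print(mean_dev, std_dev)
```

The standard deviation of these nearest-neighbour distances is the figure of merit reduced by the speckle projection in the results above.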
In non-rigid fringe projection 3D measurement systems, where either the camera or projector setup can change significantly between measurements or the object needs to be tracked, self-calibration has to be carried out frequently to keep the measurements accurate [1]. In fringe projection systems, it is common to calibrate the camera(s), in terms of their extrinsic and intrinsic parameters, using methods developed initially for photogrammetry. To calibrate the projector(s), an additional correspondence is established between a pre-calibrated camera and an image created by the projector. These recalibration steps are usually time consuming and involve the measurement of calibrated planar patterns before measurement of the object can resume after a camera or projector has been moved; they therefore do not facilitate fast 3D measurement of objects when frequent changes to the experimental setup are necessary. By combining a priori information via inverse rendering, on-board sensors, deep learning and a graphics processing unit (GPU), we assess a fine camera pose estimation method based on optimising the rendering of a model of the scene and the object to match the view from the camera. We find that the success of this calibration pipeline can be greatly improved by using adequate a priori information from the
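The render-and-compare pose refinement described above can be reduced to a toy example: a stand-in "renderer" projects known model points under a single rotational degree of freedom, and the pose is recovered by minimising the discrepancy with the observed view over an interval seeded by an assumed sensor prior. All values here are illustrative, and a real pipeline would rasterise a full 3D scene and optimise six degrees of freedom:

```python
import numpy as np

# Toy "scene": 2D feature points of a known object model.
model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

def render(angle):
    """Stand-in renderer: project the model points under a 1-DoF
    camera rotation (a real system would render the full scene)."""
    c, s = np.cos(angle), np.sin(angle)
    return model @ np.array([[c, -s], [s, c]]).T

def loss(angle, observed):
    """Discrepancy between the rendered view and the observation."""
    return np.sum((render(angle) - observed) ** 2)

true_angle = 0.3
observed = render(true_angle)

# A priori pose from an assumed on-board sensor narrows the search
# interval; the rendering loss is then minimised within it.
prior = 0.25
angles = np.linspace(prior - 0.2, prior + 0.2, 4001)
losses = [loss(a, observed) for a in angles]
est = angles[int(np.argmin(losses))]
print(abs(est - true_angle) < 1e-3)  # True
```

Here a grid search stands in for the gradient-based optimisation a GPU-accelerated inverse-rendering pipeline would use; the role of the prior is the same in both cases, shrinking the search space so the fine optimisation converges to the correct pose.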