Open Access Paper
17 September 2019 LumiScan technology for automation and quality inspection
Proceedings Volume 11144, Photonics and Education in Measurement Science 2019; 1114412 (2019)
Event: Joint TC1 - TC2 International Symposium on Photonics and Education in Measurement Science 2019, 2019, Jena, Germany
LumiScan is a novel technology by which highly accurate measurements on shiny, metallic parts are feasible without any special requirements on illumination. The system is fully operational on the shop floor, demonstrating its superior performance.



Automation tasks and robotic applications are becoming increasingly diverse and complex, driven by the requirements of Industry 4.0 and production down to lot size one. Statically pre-programmed work steps in automated manufacturing will give way to fully automatic, self-acting robots and cobots (collaborative robots). These novel manufacturing methods pose major challenges to the optical sensors that enable a robot to detect its environment, find workpieces, and check them for defects. Current sensors fail on a variety of these tasks due to inherent weaknesses of their technology, so complex multi-sensor systems have to be used instead.

In this contribution we present the LumiScan technology and products developed by HD Vision Systems GmbH. Their patent-pending multi-camera approach facilitates industrial image processing tasks through superior data quality. The approach is characterized by its high accuracy on glinting and metallic objects, its independence from external illumination, and no need for active illumination.




LumiScan Technology

The triangulation principle is still widespread in today’s sensor technology, for example in passive stereo sensors or active systems with structured illumination. These systems frequently have problems with complex objects, due to shading effects or metallically shiny surfaces: the assignment of corresponding image areas, the basis of triangulation, cannot be done correctly. To avoid such errors, these methods often require complex setup and parameterization.
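The sensitivity to wrong correspondences can be made concrete with the textbook triangulation relation z = f·B/d. The following sketch (an illustration of the general principle, not HD Vision Systems’ implementation; all numbers are made up) shows how a glint matched a few pixels off shifts the disparity and thus the estimated depth:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic stereo triangulation: depth = f * B / d (rectified pair).

    A wrong correspondence -- e.g. a specular highlight that moves between
    the two views -- shifts the disparity d and corrupts the depth estimate.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Assumed rectified stereo pair: f = 1400 px, baseline B = 0.10 m.
z_true = depth_from_disparity(1400.0, 0.10, 70.0)  # 2.0 m
# The same point matched 2 px off (a glint mismatch) yields a wrong depth:
z_bad = depth_from_disparity(1400.0, 0.10, 72.0)   # ~1.944 m
```

A 2 px matching error here already produces a depth error of more than 5 cm at 2 m range, which is why shiny surfaces are so problematic for triangulation-based sensors.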

Light field imaging, and the LumiScan technology in particular, offers completely new possibilities. The light field contains the complete optical information of a scene. By measuring the light field, an exact reconstruction of the scene, including object shape and position as well as surface color and gloss, is possible, and this reconstruction is highly robust against interference. Due to these advantages, the consistent use of LumiScan is well suited to meet the increasingly demanding requirements for optical sensors in production environments in the long term.

The patent-pending LumiScan technology from HD Vision Systems GmbH teaches a robot to see, much like humans do. When people take a component into their hands for further processing in production, they check whether the component meets the desired quality: Is the color correct? Are scratches visible? Is the correct serial number present? Is the form correct? The size and visual appearance? Humans answer all of these questions largely unconsciously during the manufacturing process. This kind of processing is now also open to robots, which can thus take on complicated tasks.

As a spin-off of the IWR of the University of Heidelberg, HD Vision Systems GmbH has a technology that reflects the latest research results. The novel, light field-based sensor technology is a hardware/software combination that measures object geometry, object color, and gloss properties with only one sensor, even on complex and glossy surfaces. At the same time, Deep Learning approaches are applied to the 2D and 3D data. For the first time, this makes it possible to solve many automation tasks under the umbrella of Industry 4.0 in a highly accurate, easy-to-use, and flexible manner with only one sensor system.

The technology of HD Vision Systems GmbH is easy to use and simple to implement. The team has extensive expertise in all areas of industrial image processing. As such, integration of the technology is offered in addition to the LumiScan products themselves, either by HD Vision Systems GmbH or by their partners. Via Profinet, OPC-UA and GenICam, the systems implement standard interfaces and can thus be seamlessly integrated into existing infrastructure and machine control. The software can be extended modularly via a simple-to-use API. The customer is offered the entire process chain from raw 3D point clouds to application-related parameters. This includes object and layer detection, bin picking, barcode reading and OCR for traceability, and OK/NOK (IO/NIO) values for deviations from CAD models. Customer-specific solutions are also developed by HD Vision Systems GmbH.

HD Vision Systems GmbH successfully implements projects with strong partners, leading suppliers of the automotive industry, the packaging industry and in the manufacturing industry. As a result, the LumiScan products are well proven for production requirements under challenging environments.


LumiScan Products

HD Vision System’s LumiScan products sample the plenoptic function with discrete cameras. The term plenoptic is derived from the word roots plen- (plenus) and opti- (optos), meaning full/complete and eye/view, respectively. The plenoptic function was coined by Adelson and Bergen to describe the intensity of each light ray in the world as a function of visual angle, wavelength, time, and viewing position. It captures everything that can potentially be seen by an optical device and is related to the earlier concepts of J. J. Gibson’s “structure of ambient light” and Leonardo da Vinci’s “visual pyramid.” The advantage of using discrete cameras for measuring the plenoptic function is the flexibility and high resolution of this setup. The first product, primarily tailored towards the robotics and automation market, is the LumiScanX, shown in Figure 1.
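As a rough illustration of what sampling the plenoptic function with discrete cameras means, the following sketch (all names and the toy radiance function are hypothetical, not the LumiScan API) fixes a viewing position per camera and a viewing direction per pixel, producing a stack of views, i.e. a discretely sampled light field:

```python
import math

def sample_light_field(radiance, camera_positions, h=4, w=4, fov=0.5):
    """Sample the plenoptic function L(position, direction) with discrete cameras.

    `radiance(pos, direction)` is a stand-in for physical image formation.
    Each camera fixes the viewing position; each pixel fixes a direction,
    so the result is an (n_cameras, h, w) stack of views: a light field.
    """
    def grid(n):  # pixel coordinates across the (small-angle) field of view
        return [-fov + 2.0 * fov * k / (n - 1) for k in range(n)]
    views = []
    for pos in camera_positions:
        view = []
        for dy in grid(h):
            row = []
            for dx in grid(w):
                norm = math.sqrt(dx * dx + dy * dy + 1.0)
                d = (dx / norm, dy / norm, 1.0 / norm)  # unit ray direction
                row.append(radiance(pos, d))
            view.append(row)
        views.append(view)
    return views

# Toy radiance whose value changes with viewpoint -- the view-dependent
# variation (e.g. gloss) that a multi-view capture records and a single
# image cannot.
def radiance(pos, d):
    return pos[0] * d[0] + pos[1] * d[1] + pos[2] * d[2] + d[2]

# A hypothetical 13-camera linear array, echoing the 13-camera LumiScanX.
positions = [(-0.06 + 0.01 * k, 0.0, 0.0) for k in range(13)]
lf = sample_light_field(radiance, positions)
print(len(lf), len(lf[0]), len(lf[0][0]))  # 13 4 4
```

Because the cameras sit at different positions, the same scene point appears with different intensity in different views; it is exactly this per-view variation that allows gloss and shape to be separated.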

Figure 1.

LumiScanX, a sensor system consisting of 13 discrete industrial-grade cameras. The footprint of the system is 112 mm × 122 mm × 52 mm.


Other hardware configurations are presented in Figure 2. They consist of single-camera gantry setups or linear arrays. All these configurations have different strengths. For example, with a single camera, temporal multiplexing of the plenoptic function is performed. This facilitates high-quality captures with a high-resolution camera; however, only static objects can be captured in this fashion, because the capturing process is significantly longer than with multi-camera setups. All the configurations shown in Figure 2 have been produced for customers and are currently used for different aspects of their automation or quality inspection tasks.

Figure 2.

The LumiScan technology is independent of a specific hardware. Different system layouts, such as single-camera gantries or linear arrays, are possible.






Bin Picking

Figure 3.

Sample bin-picking application. The general setup and an image of the objects in a bin are shown in the top row; the point cloud and detected object candidates for the pick process are shown below.


The task of bin picking is a core problem in robotics. The goal is for the robot to pick up known objects in random poses out of a bin. This process, if robust, tremendously facilitates handling in automated production. The task is made difficult by occlusions of the objects in the bin and by the need to identify objects that the robot can reach without collisions or singularities in the required poses.
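The selection step of such a pipeline can be sketched as follows. This is a minimal, generic illustration; the field names, threshold, and ranking rule are assumptions for the example, not the actual LumiScan software interface:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A detected object instance in the bin (all fields hypothetical)."""
    object_id: int
    visibility: float  # fraction of the model surface found in the point cloud
    reachable: bool    # collision- and singularity-free approach path exists
    height_mm: float   # topmost candidates are usually the least occluded

def rank_picks(candidates, min_visibility=0.6):
    """Keep candidates that are visible enough and reachable, topmost first."""
    ok = [c for c in candidates if c.visibility >= min_visibility and c.reachable]
    return sorted(ok, key=lambda c: c.height_mm, reverse=True)

bin_scan = [
    Candidate(1, 0.9, True, 120.0),
    Candidate(2, 0.4, True, 140.0),   # too occluded by neighbours
    Candidate(3, 0.8, False, 110.0),  # pose unreachable for the robot
    Candidate(4, 0.7, True, 95.0),
]
print([c.object_id for c in rank_picks(bin_scan)])  # [1, 4]
```

Picking the topmost reachable candidate first tends to uncover the occluded ones, so the bin can be emptied iteratively.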

The LumiScanX sensor and its accompanying software suite have been specifically designed for this task. The system is fully operational at customer sites and can be easily integrated into existing processes. The technology is robust against occlusions, making the process easier to handle than with current sensors. Objects can be taught in via CAD data or directly through examples. LumiScan is easy to install and set up. Communication with all common robot types and image processing libraries is provided by default via OPC-UA, Profinet or GenTL.


Quality Inspection

Figure 5.

Results of deep-learning-based defect classification and localization. In this sample, scratches are detected on metal parts.


Besides the localization and gripping of objects, quality inspection is a vital task in fully automated production systems. To this end, HD Vision Systems offers Deep Learning-based approaches. Defect classes are simple to teach in, and the quality inspection tasks can be carried out in conjunction with handling by the robot. Thus, no or only minimal additions to cycle times are required.
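Localization results like those in Figure 5 are commonly checked against reference annotations with intersection-over-union (IoU). The following small, generic sketch illustrates that metric (it says nothing about the LumiScan output format; the boxes and threshold are made-up examples):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# Predicted "scratch" box vs. a reference annotation on the same part:
pred = (10, 10, 50, 30)
ref = (12, 12, 50, 32)
score = iou(pred, ref)  # ~0.78, a correct detection at the common IoU >= 0.5 cutoff
```

Counting detections above such an IoU threshold is the standard way to quantify how well a defect localizer performs on a labeled test set.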



In this contribution we have presented the LumiScan technology and LumiScan products of HD Vision Systems GmbH. The technology is easy to use and simple to implement. It has distinct advantages on challenging and non-cooperative surfaces that are frequently encountered in today’s manufacturing industries, and it is uniquely able to measure the surface properties and 3D shape of metallic and glinting objects with high accuracy. Deep Learning benefits significantly from the high-density data streams and is applied to a wide range of quality inspection applications. HD Vision Systems GmbH successfully implements projects with strong partners, leading suppliers in the automotive and manufacturing industries. As a result, the LumiScan products are well proven for production requirements in challenging environments. The technology is not limited to the LumiScanX hardware but can be flexibly deployed on other hardware instances specifically adapted to customers’ requirements. In this way, LumiScan sensors and advanced algorithms are accessible to a broad user base and offer customers of HD Vision Systems GmbH significant advantages along with cost reductions.

© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Christoph S. Garbe, Sven Wenzel, and Benedikt Karolus "LumiScan technology for automation and quality inspection", Proc. SPIE 11144, Photonics and Education in Measurement Science 2019, 1114412 (17 September 2019);

