Structured light depth map systems are a class of 3D imaging systems in which a structured light pattern is projected into the object space and an adjacent camera captures an image of the scene. Using the baseline distance between the camera and the projector together with the known structure of the pattern, the depth of objects in the scene relative to the camera can be estimated by triangulation.

To choose between systems, it is important to be able to compare their performance. Accuracy, resolution, and speed are three aspects of a structured light system commonly used for performance evaluation. Ideally, accuracy and resolution measurements could answer questions such as how close two cubes can be to each other and still be resolved as two objects, or how close a person must stand to the system for it to determine how many fingers the person is holding up. Our experiments show, however, that a system's ability to resolve the shape of an object depends on a number of factors, including the object's shape, its orientation, and its proximity to adjacent objects. This makes the task of comparing the resolution of two systems difficult.

Our goal is to choose a target, or a set of targets, from which we can make measurements that quantify, on average, the comparative resolution performance of one system against another, without having to make multiple measurements on scenes covering a large set of object shapes, orientations, and proximities to each other. In this document we review a number of targets we evaluated and focus on the "Cut-out Star Target," which we selected as the best choice. Using this target, we present our evaluation results for two systems. The metrics used for the evaluation were developed during this work. These metrics do not directly answer the question of how close two objects can be to each other and still be resolved, but they do indicate which system will perform better over a large set of objects, orientations, and proximities to other objects.
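As background, the depth estimate described above follows the standard triangulation relation for a rectified camera–projector pair: Z = f·b/d, where f is the focal length in pixels, b is the camera–projector baseline, and d is the disparity between where a pattern feature is projected and where it is observed. The following is a minimal sketch of this relation, not the implementation used by any particular system; the function name and parameter values are illustrative.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth for a rectified camera/projector pair.

    Z = f * b / d, where d is the disparity (in pixels) between the
    projected and observed position of a pattern feature.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        # Zero disparity corresponds to a point at infinity.
        depth = np.where(disparity_px > 0,
                         focal_px * baseline_m / disparity_px,
                         np.inf)
    return depth

# Illustrative numbers: f = 600 px, baseline = 0.075 m.
# Disparities of 45 px and 90 px give depths of 1.0 m and 0.5 m.
print(depth_from_disparity([45.0, 90.0], 600.0, 0.075))
```

Because depth varies inversely with disparity, depth resolution degrades with distance, which is one reason a single scalar "resolution" number is hard to assign to such a system.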