A challenge in the semiconductor industry is the 3D inspection of solder bumps grown on wafers for direct die-to-die bonding. In an earlier work we proposed a novel mechanism for reconstructing the wafer bump surface in 3D, which is based upon projecting a binary pattern onto the surface and capturing an image of the illuminated scene. By shifting the binary pattern in space and taking a separate image of the illuminated surface at each shift, every position on the illuminated surface is assigned a binary code across the sequence of images. 3D information about the bump surface can then be obtained at these coded points via triangulation. However, when a binary pattern is projected onto the inspected surface through projection lenses, the higher-order harmonics of the pattern are often attenuated by the lenses' limited bandwidth. This blurs the projected fringe boundaries in the captured image data and makes it difficult to differentiate between dark and bright fringes. In addition, the different compositions of the target surface, some metallic (the solder surface) and some not (the substrate surface of the wafer), have different reflectance functions (comprising both specular and Lambertian components). This makes fringe boundary detection in the image data an even more challenging problem. This paper proposes a solution to the problem. It makes use of the spatial-temporal image volume over the target surface to tackle the issue of the inhomogeneous reflectance function. It is shown that the intensity profile observed across the images at a fixed point exhibits the same up-and-down profile as the original binary grating, regardless of the reflectance of the target surface, so that edges can be detected using classical methods such as gradient-based ones. A preliminary study through theoretical analysis and empirical experiments on real image data demonstrates the feasibility of the proposed approach.
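The key observation above, that the temporal intensity profile at a fixed pixel preserves the up-and-down shape of the shifted binary grating regardless of the local reflectance, can be illustrated with a minimal sketch. The grating period, the reflectance values, and the gradient threshold below are illustrative assumptions, not parameters from the paper; the observed intensity is modeled simply as reflectance times the grating plus an ambient offset, and edges are found by thresholding the temporal gradient.

```python
import numpy as np

T = 16                                            # number of pattern shifts (illustrative)
grating = (np.arange(T) % 8 < 4).astype(float)    # ideal binary up-and-down profile over time

detected = []
for reflectance in (0.9, 0.2):                    # e.g. metallic solder vs. substrate surface
    # Observed temporal intensity at one pixel: scaled grating plus ambient offset.
    observed = reflectance * grating + 0.05
    # Classical gradient-based edge detection applied along the time axis.
    grad = np.diff(observed)
    edges = np.where(np.abs(grad) > 0.5 * np.abs(grad).max())[0] + 1
    detected.append(edges.tolist())

# Both surfaces yield edges at the same frame indices, since scaling the
# profile by the reflectance does not move the locations of its transitions.
print(detected)
```

Because the gradient threshold is relative to the profile's own maximum gradient, the detected transition frames are identical for both reflectance values, which is the property the proposed approach exploits.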