High-speed parallel very large scale integration architecture for global stereo matching
S. C. Park and Hong Jeong
J. of Electronic Imaging, 17(1), 010501 (2008). doi:10.1117/1.2892680
Although stereo matching algorithms based on belief propagation (BP) tend to show excellent matching performance, their huge computational complexity has been the major barrier to real-time applications. In this light, we propose a parallel very large scale integration (VLSI) architecture for BP computation, which uses only simple integer operations and shows a low matching error rate on the Middlebury database.



Stereo matching algorithms find corresponding points in a pair of images to locate 3-D positions. They can be classified into either local or global matching approaches.1 Local approaches, such as correlation and dynamic programming methods, deal only with subimages. These approaches have the advantage of real-time speed,2, 3 but tend to produce high error rates. In contrast, global approaches, such as graph cuts and belief propagation (BP),4 deal with full images. These approaches have the advantage of low error rates, but tend to incur huge computational loads. In real-time applications such as robot vision, the stereo matching system should be compact and fast. In this context, we present a very large scale integration (VLSI) architecture for stereo matching with BP.


Belief Propagation Formulation for Stereo Matching

Given the left and right images g_r and g_l, and the parameters c_d, c_v, K_d, and K_v, we describe the energy model for a 2-D Markov random field (MRF) as follows:4

E(d) = \sum_{p \in P} D(d_p) + \sum_{(p,q) \in N} V(d_p, d_q),

where D(d_p) = \min[c_d |g_l(p) - g_r(p - d_p)|, K_d] denotes the data cost of the label d_p \in [0, d_{\max} - 1] at pixel p in the image P, and V(d_p, d_q) denotes the discontinuity cost between the labels d_p and d_q of the neighboring nodes in N. The disparity \hat{d} can be estimated using BP's message_update_function(p, q, k, k+1) and decision_function(p, K) as follows:

m_{pq}^{k+1}(d_q) = \min_{d_p} \Big[ V(d_p, d_q) + D(d_p) + \sum_{u \in N(p) \setminus q} m_{up}^{k}(d_p) \Big] - \alpha,   (1)

\hat{d}_p = \arg\min_{d_p} \Big[ D(d_p) + \sum_{u \in N(p)} m_{up}^{K}(d_p) \Big],   (2)

where N(p) \setminus q denotes the neighbors of p other than q, and \alpha denotes the normalization value. At each node p, the message m_{pq}^{k+1}(d_q) at iteration k+1 is updated synchronously using the neighboring messages m_{up}^{k}(d_p) and sent from node p to the neighbor node q. After K iterations, \hat{d}_p at each node is decided by Eq. (2).
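The message update and decision rules above can be sketched in a few lines. The following is a minimal NumPy illustration, not the hardware implementation: it uses the naive O(D^2) form of Eq. (1) and assumes the truncated-linear discontinuity cost V(t, l) = min(c_v |t - l|, K_v) given later in the letter.

```python
import numpy as np

def message_update(data_cost, incoming, c_v, K_v):
    # Eq. (1): m_{pq}^{k+1}(d_q) = min_{d_p} [ V(d_p, d_q) + D(d_p)
    #   + sum of the other incoming messages m_{up}^k(d_p) ] - alpha.
    h = data_cost + sum(incoming)                  # msum(d_p)
    D = len(h)
    labels = np.arange(D)
    # Truncated-linear discontinuity cost (assumed form).
    V = np.minimum(c_v * np.abs(labels[:, None] - labels[None, :]), K_v)
    m = (V + h[:, None]).min(axis=0)               # minimize over d_p for each d_q
    return m - m.min()                             # normalize: alpha = minimum cost

def decide(data_cost, incoming):
    # Eq. (2): pick the label minimizing data cost plus all incoming messages.
    return int(np.argmin(data_cost + sum(incoming)))
```

For example, with data cost [0, 5, 5], no incoming messages, c_v = 1, and K_v = 2, the normalized outgoing message is [0, 1, 2].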


Proposed Architecture of Stereo Matching

Generally, BP can be separated into two methods according to the update style. The first method updates all the nodes synchronously on the 2-D MRF.4 The second method updates all the nodes sequentially, first in the inward direction from the leaves to the root and then in the reverse direction.5, 6 The sequential method needs to update each node only once to obtain the final result, so messages can be propagated quickly with a small number of operations. In Ref. 5, the authors reported that sequential updates based on spanning trees in the MRF can achieve fast convergence. We apply this tree structure to each scan line, as shown in Figs. 1(a) and 1(b). The tree messages are updated using messages from the neighboring scan lines that were determined in the previous iteration. For an image with M×N pixels, as in Fig. 1, the N scan lines can be separated into G groups, and the H lines of each group g \in [0, G-1] can be processed in parallel with H processors. This schedule is realized in our VLSI parallel sequence as follows. A node located at a pixel is denoted by a 2-D vector p = [p_0, p_1]^T. For synchronous iteration k from 1 to K, for group g from 0 to G-1 (= N/H - 1), for each parallel processor h from 0 to H-1 (p = [gH + h, p_1]^T):

  • 1. Inward processing to the root: for p_1 = 0, ..., M-1, message_update_function(p, p + [0, 1]^T, k, k) for the rightward message.

  • 2. Outward processing to the leaf: for p_1 = M-1, ..., 0, message_update_function(p, p - [0, 1]^T, k, k) for the leftward message; message_update_function(p, p + [1, 0]^T, k, k+1) for the downward message; message_update_function(p, p - [1, 0]^T, k, k+1) for the upward message; and decision_function(p, K).
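As a reading aid, the loop nest above can be written out explicitly. This Python sketch records only the sweep order; the function names and callback signatures are hypothetical, and the H-processor parallelism of the hardware is serialized here.

```python
def bp_schedule(K, G, H, M, update, decide):
    # Sweep order from the text: for each iteration k and group g, each of the
    # H processors (parallel in hardware, serialized in this sketch) runs an
    # inward pass over its scan line producing rightward messages, then an
    # outward pass producing leftward/downward/upward messages. Disparities
    # are decided during the outward pass of the last iteration.
    for k in range(1, K + 1):
        for g in range(G):
            for h in range(H):
                row = g * H + h                     # p_0 = gH + h
                for p1 in range(M):                 # inward: leaf to root
                    update(row, p1, 'right', k)
                for p1 in reversed(range(M)):       # outward: root to leaf
                    update(row, p1, 'left', k)
                    update(row, p1, 'down', k + 1)
                    update(row, p1, 'up', k + 1)
                    if k == K:
                        decide(row, p1, K)
```

Each scan line thus sees M inward updates and 3M outward updates per iteration, which matches the 2M inward and outward processing steps counted later for the clock budget.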

Fig. 1

Update sequence at iteration k on the 2-D MRF: (a) inward processing, (b) outward processing, and (c) parallel processing within a group.


As shown in Fig. 2, the H processors calculate the messages in parallel, receiving the left and right pixel data from the H scan-line buffers and reading from and writing to their message buffers. Each processor consists of the processing elements (PEs) PE_f, PE_b, PE_u, PE_d, and PE_o. Using the image data and the neighboring messages, PE_f calculates the forward message during the inward pass, and PE_b, PE_u, PE_d, and PE_o calculate each direction's message and the disparity \hat{d} during the outward pass.

Fig. 2

Parallel and pipeline architecture: (a) processor array and (b) PE.



Architecture of Processing Element

The PE is the basic logic that calculates the new message at each node as follows.

With V(t, l) = \min(c_v |t - l|, K_v), the recursive forward and backward passes of the distance transform4 give a time complexity of O(5D) for D disparity levels. Due to our pipeline structure, 2D clocks are necessary for calculating the message m_{pq}^{k+1}(d_p). In the forward initialization, D1(-1) = B and D2(-1) = B, where B is a sufficiently large constant.

For clock t from 0 to D-1 in the forward PE,

D1(t) = \min[m_{sum}(t), D1(t-1) + c_v], \quad m_f(t) = D1(t), \quad D2(t) = \min[D2(t-1), m_{sum}(t)].   (3)
In the backward initialization, D3(-1) = B.

For clock t from 0 to D-1 in the backward PE,

D3(t) = \min[m_f(D-1-t), D3(t-1) + c_v], \quad m(D-1-t) = \min[D3(t), D2(D-1) + K_v] - \alpha,   (4)

where \alpha = D2(D-1) is the minimum of m_{sum}(t).
Figure 2 shows the PE architecture. The data-cost PE calculates the data cost D(t) from the left and right image pixels. The forward PE reads m_{sum}(t), the sum of the incoming messages and the data cost; outputs the forward cost m_f(t), the minimum of m_{sum}(t) and D1(t-1) + c_v; and saves it to the stack. In Eq. (3), the parameters needed for the backward pass are also calculated. In our system, the minimum cost of m_{sum}(t) is used as the normalization parameter \alpha.

The backward PE reads m_f(D-1-t) from the stack, calculates the minimum value D3(t) recursively, truncates it with the minimum cost plus K_v, and then subtracts \alpha for normalization.
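Putting the two passes together, one PE's message computation can be sketched as follows. This is a NumPy illustration, not the hardware; the exact recurrences are an assumption reconstructed from the description above (forward pass D1 pushed to a stack m_f, running minimum for \alpha, truncation by K_v).

```python
import numpy as np

def pe_message(msum, c_v, K_v):
    # O(D) message for the truncated-linear cost V(t, l) = min(c_v|t-l|, K_v).
    # msum(t) is the data cost plus incoming messages at disparity level t.
    D = len(msum)
    # Forward PE: D1(t) = min(msum(t), D1(t-1) + c_v), saved to a stack.
    mf = np.empty(D)
    d1 = np.inf                       # D1(-1) = B, a very large constant
    for t in range(D):
        d1 = min(msum[t], d1 + c_v)
        mf[t] = d1
    # Backward PE: pop the stack in reverse, D3(t) = min(mf(D-1-t), D3(t-1)+c_v),
    # truncate at min(msum) + K_v, and subtract alpha = min(msum) to normalize.
    alpha = msum.min()
    trunc = alpha + K_v
    out = np.empty(D)
    d3 = np.inf                       # D3(-1) = B
    for t in range(D):
        d3 = min(mf[D - 1 - t], d3 + c_v)
        out[D - 1 - t] = min(d3, trunc) - alpha
    return out
```

The result equals the naive O(D^2) minimization over the truncated-linear cost; for example, msum = [0, 10, 10] with c_v = 3 and K_v = 2 yields the normalized message [0, 2, 2].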


Experimental Results

As shown in Fig. 3, we tested our system on four grayscale images with ground-truth data from the Middlebury database.1 The error rate is the percentage of pixels whose disparity error between the output d(x, y) and the ground truth d_{TRUE}(x, y) exceeds 1:

error = \frac{100}{N_m} \sum_{(x,y) \in P_m} \big[ |d(x, y) - d_{TRUE}(x, y)| > 1 \big],

where P_m is the pixel area excluding the occluded part, N_m is the number of pixels in this area, and [\cdot] is 1 when its argument is true and 0 otherwise. As shown in Fig. 4 and Table 1, our system, iterated only a small number of times, produces results superior to the local methods2, 3 and to BP on the Tsukuba image. As Fig. 4 shows, the disparity error converges rapidly, at around 12 iterations.
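For concreteness, the error metric can be computed as in the following NumPy sketch; representing P_m by a boolean mask is an assumed convention, not part of the original text.

```python
import numpy as np

def error_rate(d, d_true, mask):
    # Percentage of non-occluded pixels whose disparity error exceeds 1.
    # mask selects P_m (pixels outside occlusions); N_m = mask.sum().
    bad = np.abs(d - d_true) > 1
    return 100.0 * bad[mask].sum() / mask.sum()
```

For instance, if one of four unmasked pixels is off by more than 1, the error rate is 25.0%.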

Fig. 3

Left images: (a) Tsukuba, (b) Venus, (c) Sawtooth, and (d) Map.


Fig. 4

Comparison of outputs in Tsukuba: (a) ground truth, (b) convergence rate, (c) our system at 12 iterations, (d) BP at 12 iterations, (e) local method,2 and (f) local method.3


Table 1

Error rate comparison in several images.

Methods (iterations) | Tsukuba 384×288 | Venus 436×383 | Map 436×380 | Sawtooth 284×216 | Speed performance
Local case (Ref. 2)  | 4.3%            | 1.5%          | 0.8%        | 1.3%             | No real-time hardware
Local case (Ref. 3)  | 4.3%            | 2.2%          | 0.8%        | 2.1%             | SIMD on Pentium 4, 320×240, 7 frames/s
BP                   | 4.8%            | 8.9%          | 4.2%        | 1.4%             | No real-time hardware
Our chip (12)        | 2.6%            | 0.8%          | 0.2%        | 0.8%             | FPGA, 256×240, 25 frames/s

Given an M×N image, D disparity levels, and T iterations, we need only 2D forward and backward processing clocks in each PE, and 2M inward and outward processing steps on each scan line. G (= N/H) groups are iterated T times by H processors in parallel at a frame rate of F. From Fig. 5, the total number of clocks needed to process F frames in one second is about 49 M [= 2D × 2M × (N/H) × T × F = (2 × 16) × (2 × 256) × (240/24) × 12 × 25]. Our system's 65-MHz clock was therefore sufficient for real-time processing.
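The clock-budget arithmetic can be checked directly with the values quoted in the text:

```python
# Values from the text: D disparity levels, an MxN image, H processors,
# T iterations, and F frames per second.
D, M, N, H, T, F = 16, 256, 240, 24, 12, 25
clocks_per_second = 2 * D * 2 * M * (N // H) * T * F
print(clocks_per_second)    # 49152000, i.e., about 49 M clocks < 65 MHz
```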

Fig. 5

Chip specifications: (a) overall system and (b) resource usage and output performance.


As shown in Fig. 2, the overall memory is spent on the scan-line buffer and the message buffers. For real-time processing, the processors must be able to access a pair of images in the scan-line buffer while the buffer continuously loads new images from the cameras. Hence, a buffer for four images is allocated on the field-programmable gate array (FPGA), i.e., 2 Mbits [= 4 × (256 × 240 × 8)]. Given C (= 4) bits as the size of a message at each disparity level and H (= 24) processors, the message-buffer sizes are as follows: leftward messages, 1.5 kbits = HCD; rightward messages, 0.4 Mbits = HCDM; upward and downward messages, 8 Mbits = 2HCDMG.

In outward processing, a newly calculated leftward message is used only for processing the next pixel. One scan line of rightward messages from the inward pass must be stored for outward processing. Since the upward and downward messages are updated synchronously over the entire image, all the pixel messages in the image must be stored. Because of this large memory requirement, 22 processors access external memory through an 8 × 22-bit data bus, while two processors use the internal block RAMs on the FPGA. The overall memory resource usage is described in Fig. 5.
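The buffer sizes quoted above can likewise be verified numerically, using the values given in the text:

```python
# C bits per disparity level, D disparities, H processors,
# M pixels per scan line, N scan lines, G = N/H groups.
C, D, H, M, N = 4, 16, 24, 256, 240
G = N // H                            # 10 groups
left_bits   = H * C * D               # leftward messages: 1536 bits (~1.5 kbits)
right_bits  = H * C * D * M           # rightward messages: 393216 bits (~0.4 Mbits)
updown_bits = 2 * H * C * D * M * G   # up/down messages: 7864320 bits (~8 Mbits)
image_bits  = 4 * (256 * 240 * 8)     # scan-line buffer, four images (~2 Mbits)
```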



Conclusions

Although BP achieves good error performance in image processing, its VLSI architecture has not been fully studied. In this context, we have proposed a parallel VLSI architecture for stereo matching. The test system has only 16 disparity levels, which might not be satisfactory for 3-D recognition tasks. However, for applications like real-time Z-keying and target-background separation, where a low disparity error at object boundaries is important, the proposed chip can be used effectively.


1. D. Scharstein and R. Szeliski, "A taxonomy and evaluation of dense two-frame stereo correspondence algorithms," Int. J. Comput. Vis. 47(1-3), 7-42 (2002). doi:10.1023/A:1014573219977

2. H. Hirschmuller, "Improvements in real-time correlation-based stereo vision," in Proc. IEEE Workshop on Stereo and Multi-Baseline Vision, pp. 141-148 (2001).

3. S. Yoon, S.-K. Park, S. Kang, and Y. K. Kwak, "Fast correlation-based stereo matching with the reduction of systematic errors," Pattern Recogn. Lett. 26(14), 2221-2231 (2005). doi:10.1016/j.patrec.2005.03.037

4. P. F. Felzenszwalb and D. R. Huttenlocher, "Efficient belief propagation for early vision," in Proc. IEEE CVPR, Vol. 1, pp. I-261-I-268 (2004).

5. M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky, "Tree-based reparameterization framework for analysis of sum-product and related algorithms," IEEE Trans. Inf. Theory 49(5), 1120-1146 (2003). doi:10.1109/TIT.2003.810642

6. M. F. Tappen and W. T. Freeman, "Comparison of graph cuts with belief propagation for stereo, using identical MRF parameters," in Proc. ICCV, pp. 900-907 (2003).

S. C. Park, Hong Jeong, "High-speed parallel very large scale integration architecture for global stereo matching," Journal of Electronic Imaging 17(1), 010501 (1 January 2008). http://dx.doi.org/10.1117/1.2892680

Keywords: very large scale integration; image processing; field programmable gate arrays; parallel computing.