The success of deep brain stimulation (DBS) relies heavily on the accurate placement of electrodes in the operating room (OR). However, the pre-operative images (e.g., MRI and CT) used for surgical targeting are degraded by brain shift, a combination of brain movement and deformation. One way to compensate for this intra-operative brain shift is to use a nonlinear biomechanical brain model to estimate the whole-brain deformation, from which an updated MRI can be generated. Because the deformation varies in both magnitude and direction across cases, partially sampled intra-operative data (e.g., O-arm, CT) on tissue motion is critical to guide the model estimate. In this paper, we present a method to extract such sparse data by matching brain surface features between pre- and post-operative CTs, followed by reconstruction of the full 3D displacement field based on the original spatial information of these 2D points. Specifically, the size and location of the sparse data region were determined from the pneumocephalus visible in the post-operative CT. The 2D CT-encoded texture maps from the pre- and post-operative CTs were then registered using the Demons algorithm. The final 3D displacement field in our single-patient example shows an average lateral shift of 1.42 mm and a shift of 10.11 mm in the direction of gravity. These results demonstrate the potential of assimilating sparse data from intra-operative images into a model-based image-guidance pipeline for DBS.
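To illustrate the 2D texture-map registration step, the sketch below implements a minimal single-scale Thirion Demons update in pure NumPy. It is not the authors' implementation: the Gaussian-blob test images, the nearest-neighbour warp, the box-blur regularizer, and all function names (`warp_nn`, `smooth`, `demons_2d`) are illustrative assumptions standing in for the actual CT-encoded texture maps and a production registration library.

```python
import numpy as np

def warp_nn(img, u, v):
    """Warp img by a displacement field (u = row shift, v = col shift),
    sampling with nearest-neighbour interpolation (a simplification)."""
    r, c = np.indices(img.shape)
    rr = np.clip(np.round(r + u).astype(int), 0, img.shape[0] - 1)
    cc = np.clip(np.round(c + v).astype(int), 0, img.shape[1] - 1)
    return img[rr, cc]

def smooth(f, passes=2):
    """Crude 3x3 box-blur regularizer (stand-in for the Gaussian smoothing
    used in standard Demons implementations)."""
    n, m = f.shape
    for _ in range(passes):
        p = np.pad(f, 1, mode='edge')
        f = sum(p[i:i + n, j:j + m] for i in range(3) for j in range(3)) / 9.0
    return f

def demons_2d(fixed, moving, n_iter=50):
    """Single-scale Demons: returns (u, v) such that moving sampled at
    (x + u, y + v) approximates fixed."""
    u = np.zeros_like(fixed)
    v = np.zeros_like(fixed)
    gy, gx = np.gradient(fixed)           # fixed-image gradient drives the update
    for _ in range(n_iter):
        warped = moving if not u.any() else warp_nn(moving, u, v)
        diff = fixed - warped
        denom = gy**2 + gx**2 + diff**2 + 1e-9   # eps avoids 0/0 in flat regions
        # Thirion's update; its magnitude is inherently bounded per iteration
        u = smooth(u + diff * gy / denom)
        v = smooth(v + diff * gx / denom)
    return u, v

# Hypothetical example: a smooth blob translated by roughly (2, 1) pixels
r, c = np.indices((64, 64))
fixed = np.exp(-((r - 32.0)**2 + (c - 32.0)**2) / 50.0)
moving = np.exp(-((r - 34.0)**2 + (c - 33.0)**2) / 50.0)
u, v = demons_2d(fixed, moving)
residual_before = np.abs(moving - fixed).sum()
residual_after = np.abs(warp_nn(moving, u, v) - fixed).sum()
print(residual_after < residual_before)
```

In the paper's pipeline the recovered 2D displacements on the texture map would then be mapped back to 3D using the stored spatial coordinates of each texture-map point, yielding the sparse displacement data that guides the biomechanical model.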