This study introduces a novel depth estimation method that automatically generates a plausible depth map from a single image of an unstructured environment. Our goal is to infer a depth map with a more accurate, rich, and distinct depth ordering, one that is both quantitatively accurate and visually pleasing. Building on the existing DepthTransfer algorithm, our approach transfers depth information at the superpixel level from the most photometrically similar retrieved images within a non-parametric learning framework. We then warp the corresponding superpixels concurrently at multiple scales, employing an improved SLIC technique to segment the RGBD images from coarse to fine. Finally, a modified cross bilateral filter is applied to refine the resulting depth field. For training and evaluation, we conduct experiments on the popular Make3D dataset and demonstrate that our method outperforms the state of the art in both accuracy and computational efficiency. In particular, qualitative evaluation shows that our results are visually more realistic and immersive.
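To make the final refinement step concrete, the sketch below shows a generic cross bilateral (joint bilateral) filter that smooths a depth map while snapping depth discontinuities to edges in a grayscale guide image. The function name, parameter values, and the naive per-pixel loop are illustrative assumptions, not the paper's actual modified filter.

```python
import numpy as np

def cross_bilateral_filter(depth, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Refine a depth map using intensities from a guide (RGB/gray) image.

    Each output pixel is a weighted average of nearby depth values, where
    the weights combine spatial closeness with photometric similarity in
    the *guide* image rather than the depth itself, so depth edges align
    with image edges. Naive O(n * r^2) sketch for clarity, not speed.
    """
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            # Clip the filter window to the image bounds.
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # Spatial Gaussian on pixel distance.
            spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2)
                             / (2.0 * sigma_s ** 2))
            # Range Gaussian on guide-image intensity difference.
            ranged = np.exp(-((guide[y0:y1, x0:x1] - guide[y, x]) ** 2)
                            / (2.0 * sigma_r ** 2))
            wgt = spatial * ranged
            out[y, x] = np.sum(wgt * depth[y0:y1, x0:x1]) / np.sum(wgt)
    return out
```

In practice one would use a vectorized or GPU implementation (e.g. a guided or joint bilateral filter from an image-processing library), but the weighting scheme is the same: the guide image, not the noisy depth, decides which neighbors count as "similar."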