This paper describes a central processing unit (CPU)-based technique for terrain geometry rendering that relieves the graphics processing unit (GPU) of computing the appropriate level of detail (LOD) of the geometric surface. The proposed approach keeps the computational load on the CPU low and approaches GPU-based efficiency. Because realistic terrain datasets are usually too large for real-time rendering, we introduce a training stage to handle a large tiled quadtree terrain representation. The training stage is based on multiresolution wavelet decomposition and is used to confine error control to the interior of each tile. Maximum approximation errors are then computed for each tile at its different resolutions. These per-tile maximum world-space errors allow the appropriate downsampling resolution to be selected for each tile at run time. Tests and experiments demonstrate that B-spline 0 and B-spline 1 wavelets, well known for their localization properties and compact support, are suitable for fast and accurate localization of the maximum approximation error. The experimental results demonstrate that the proposed approach drastically reduces computation time on the CPU. Such a technique can also be used on low- and medium-end PCs, as well as on embedded systems that are not equipped with the latest graphics hardware.
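To make the run-time selection step concrete, the following is a minimal sketch of the idea described above: during training, each tile stores the maximum world-space approximation error introduced at each coarser resolution, and at run time the coarsest resolution whose error stays within a tolerance is chosen. All names here (`Tile`, `select_resolution`, `tolerance`) are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Tile:
    # max_error[r] = precomputed maximum world-space error when the tile
    # is downsampled to resolution level r (0 = full resolution, error 0.0).
    max_error: List[float]

def select_resolution(tile: Tile, tolerance: float) -> int:
    """Return the coarsest resolution level whose precomputed error
    stays within the given world-space tolerance."""
    best = 0  # full resolution always satisfies the tolerance
    for level, err in enumerate(tile.max_error):
        if err <= tolerance:
            best = level  # a coarser level is still acceptable
    return best

# Errors typically grow monotonically as resolution drops.
tile = Tile(max_error=[0.0, 0.4, 1.3, 4.8])
print(select_resolution(tile, tolerance=1.5))  # coarsest level with error <= 1.5
```

Because the errors are precomputed per tile in the training stage, this run-time step is a cheap table lookup, which is what keeps the CPU load low.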