Because monocular cameras are low-cost, lightweight, and widely available, approaches that use a single monocular camera to relocalize its pose in a dense 3D point-cloud map have attracted significant interest across a wide range of applications. Relocalizing the camera in a given map is of crucial importance for vision-based autonomous navigation: when a robot starts in a new map or loses tracking, its global localization cannot be obtained. We propose a novel approach that relocalizes the pose of a monocular camera with respect to a prebuilt dense map. A standard monocular visual odometry system is employed to reconstruct a sparse set of 3D points via local bundle adjustment. These reconstructed points are matched against the dense map to recover the camera's global pose using a particle filter (PF) combined with the iterative closest point (ICP) algorithm. Each particle's state represents a candidate initial pose; the ICP alignment result is used to update each particle's pose and importance weight, and adaptive resampling approximates the posterior distribution of the pose. Our monocular camera relocalization approach has several advantages. The improved PF algorithm mitigates the local convergence of ICP alignment. Because the approach relies only on matching geometry between the local reconstruction and the dense map, it is robust to photometric appearance changes in the environment. In addition, the approach can estimate the metric scale, which cannot be recovered by monocular visual odometry alone. We present real-world experiments demonstrating that our approach relocalizes the monocular camera and accurately estimates the metric scale in a given dense map. The results verify the effectiveness and robustness of the algorithm.
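The PF + ICP scheme described above can be sketched numerically. The following is a minimal 2D illustration, not the paper's implementation: the pose parameterization (x, y, heading), the noise levels, the `relocalize` helper, and the brute-force nearest-neighbor search are all simplifying assumptions, and adaptive resampling and metric-scale estimation are omitted for brevity (each particle is simply refined by ICP and weighted by its alignment residual).

```python
import numpy as np

rng = np.random.default_rng(7)

def transform(pts, pose):
    """Apply a 2D rigid pose (tx, ty, theta) to an (N, 2) point set."""
    tx, ty, th = pose
    c, s = np.cos(th), np.sin(th)
    R = np.array([[c, -s], [s, c]])
    return pts @ R.T + np.array([tx, ty])

def icp_refine(src, map_pts, iters=10):
    """Point-to-point ICP: refine the alignment of src onto map_pts.
    Returns the accumulated (R, t) and the final mean residual."""
    R_acc, t_acc = np.eye(2), np.zeros(2)
    cur = src
    for _ in range(iters):
        # brute-force nearest-neighbor correspondences
        d = np.linalg.norm(cur[:, None, :] - map_pts[None, :, :], axis=2)
        corr = map_pts[d.argmin(axis=1)]
        # Kabsch/SVD solution for the best rigid transform cur -> corr
        mu_c, mu_m = cur.mean(0), corr.mean(0)
        H = (cur - mu_c).T @ (corr - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_c
        cur = cur @ R.T + t
        R_acc, t_acc = R @ R_acc, R @ t_acc + t
    resid = np.linalg.norm(cur - corr, axis=1).mean()
    return R_acc, t_acc, resid

def relocalize(local_pts, map_pts, prior_pose, n_particles=40, sigma=0.3):
    """Sample candidate poses around a prior, refine each with ICP,
    weight by alignment residual, and return the best pose."""
    poses, weights, resids = [], [], []
    for _ in range(n_particles):
        # particle state: a candidate initial pose
        cand = prior_pose + rng.normal(0, [0.3, 0.3, 0.1])
        moved = transform(local_pts, cand)
        R, t, resid = icp_refine(moved, map_pts)
        # compose the ICP refinement into the particle pose
        dth = np.arctan2(R[1, 0], R[0, 0])
        new_t = R @ cand[:2] + t
        poses.append(np.array([new_t[0], new_t[1], cand[2] + dth]))
        weights.append(np.exp(-(resid / sigma) ** 2))  # importance weight
        resids.append(resid)
    weights = np.array(weights) / np.sum(weights)
    best = int(np.argmax(weights))
    return poses[best], resids[best]

# Synthetic example: the "dense map" is 80 random 2D points; the local
# reconstruction is the same geometry expressed in a frame offset by
# the (unknown) true pose.
map_pts = rng.uniform(0, 10, size=(80, 2))
true_pose = np.array([1.5, -0.8, 0.4])
c, s = np.cos(-true_pose[2]), np.sin(-true_pose[2])
R_inv = np.array([[c, -s], [s, c]])
local_pts = (map_pts - true_pose[:2]) @ R_inv.T

prior = true_pose + np.array([0.2, -0.2, 0.08])  # rough initial guess
est_pose, est_resid = relocalize(local_pts, map_pts, prior)
print("estimated pose:", est_pose, "residual:", est_resid)
```

Because particles whose candidate poses fall inside ICP's convergence basin align almost exactly with the map, the best-weighted particle recovers the true pose; particles trapped in local minima receive low importance weights, which is the role the PF plays in the full method.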