In this paper, we present an approach for aligning point clouds collected by the Microsoft Kinect RGB-D sensor, using a MEMS IMU and a coarse 3D model derived from a photographed evacuation plan. In this approach, the point clouds are aligned based on the sensor pose, which is computed from an analysis of the user's track, the normal vectors of the ground points, and information extracted from the coarse 3D model. The user's positions are derived from a foot-mounted MEMS IMU using zero-velocity updates, combined with information extracted from the coarse 3D model. We then estimate the accuracy of the point cloud alignment achieved with this approach and discuss applications of the method in the indoor modeling of buildings.
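To illustrate how the normal vectors of the ground points can constrain the sensor orientation, the following Python sketch fits a ground plane by PCA and builds the rotation that levels the point cloud. This is only a minimal sketch under assumed inputs (a `cloud` array and a boolean `ground_mask` of ground points); the function names and the levelling step are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: levelling a Kinect point cloud from ground-point normals.
# Names and inputs are assumptions for illustration, not the authors' code.
import numpy as np

def ground_normal(ground_points: np.ndarray) -> np.ndarray:
    """Estimate the dominant ground-plane normal by PCA on the ground points (N x 3)."""
    centered = ground_points - ground_points.mean(axis=0)
    # Right singular vector with the smallest singular value = plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    # Orient the normal upwards (towards +z).
    return normal if normal[2] > 0 else -normal

def rotation_aligning(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Rotation matrix taking unit vector a onto unit vector b (Rodrigues' formula)."""
    v = np.cross(a, b)
    c = np.dot(a, b)
    k = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    # Assumes a and b are not antiparallel (c != -1).
    return np.eye(3) + k + k @ k / (1.0 + c)

# Usage: rotate the cloud so the estimated ground normal matches the world up axis.
# n = ground_normal(cloud[ground_mask])
# R = rotation_aligning(n, np.array([0.0, 0.0, 1.0]))
# levelled = cloud @ R.T
```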
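Similarly, a minimal sketch of zero-velocity updates for a foot-mounted IMU is given below. The simple stance detector, the thresholds, and the plain integration loop are assumptions for illustration only; a practical system would combine ZUPT with a full strapdown inertial navigation filter.

```python
# Hypothetical sketch of zero-velocity updates (ZUPT) for a foot-mounted IMU.
# Thresholds and the stance detector are illustrative assumptions.
import numpy as np

GRAVITY = 9.81  # m/s^2

def detect_stance(acc: np.ndarray, gyro: np.ndarray,
                  acc_tol: float = 0.4, gyro_tol: float = 0.5) -> np.ndarray:
    """Flag samples where the foot is (almost) still: specific force close to gravity
    and low angular rate. acc (m/s^2) and gyro (rad/s) are N x 3 arrays."""
    acc_norm = np.linalg.norm(acc, axis=1)
    gyro_norm = np.linalg.norm(gyro, axis=1)
    return (np.abs(acc_norm - GRAVITY) < acc_tol) & (gyro_norm < gyro_tol)

def integrate_with_zupt(acc_nav: np.ndarray, stance: np.ndarray, dt: float) -> np.ndarray:
    """Integrate navigation-frame acceleration (gravity already removed) into positions,
    resetting the velocity to zero whenever a stance phase is detected."""
    vel = np.zeros(3)
    pos = np.zeros(3)
    track = []
    for a, still in zip(acc_nav, stance):
        vel = np.zeros(3) if still else vel + a * dt   # zero-velocity update
        pos = pos + vel * dt
        track.append(pos.copy())
    return np.asarray(track)
```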