This paper deals with moving object detection (MOD) of single and multiple moving objects from unmanned aerial vehicles (UAVs). The proposed technique aims to overcome two limitations of traditional pairwise image registration-based MOD approaches. The first limitation concerns how potential objects are detected by finding corresponding regions between two consecutive frames: the commonly used gray-level distance-based similarity measures may not adequately capture the dynamic spatio-temporal differences between the camera and the moving objects. The second limitation concerns object occlusion: when only frame pairs are considered, an object that disappears between two frames may in fact be occluded and reappear in a later frame, yet it goes undetected. This work addresses both issues by first converting each frame into a graph representation whose nodes are segmented superpixel regions. Object detection can then be treated as a multi-graph matching task, allowing correspondences to be tracked more reliably across frames without being limited to frame pairs. Building upon this, all detected objects and candidate objects are re-analyzed, and a graph-coloring algorithm performs occlusion detection by considering multiple frames. The proposed framework was evaluated on a public dataset and a self-captured dataset, with precision and recall used to validate overall MOD performance. The approach is also compared with support vector machine (SVM), linear SVM, and Canny edge detector-based detection algorithms. Experimental results are promising, with precision and recall of 94% and 89%, respectively.
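The core idea can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes each frame's superpixel regions are reduced to a single gray-level feature, matches regions greedily across frames, and flags a region that is present in an early and a late frame but absent in between as occluded rather than lost (the role played by the multi-graph matching and graph-coloring steps in the paper). All names and thresholds here are hypothetical.

```python
# Illustrative sketch only, not the paper's method. A frame is modeled as a
# dict mapping a region id to a mean gray-level feature; real superpixel
# graphs would carry richer node and edge attributes.

def match_regions(frame_a, frame_b, tol=10.0):
    """Greedily match region ids between two frames by feature similarity."""
    matches, used = {}, set()
    for rid, feat in frame_a.items():
        best, best_d = None, tol
        for rid_b, feat_b in frame_b.items():
            d = abs(feat - feat_b)
            if rid_b not in used and d <= best_d:
                best, best_d = rid_b, d
        if best is not None:
            matches[rid] = best
            used.add(best)
    return matches

def find_occlusions(frames, tol=10.0):
    """Region ids present in the first and last frame but missing in some
    intermediate frame are reported as occluded, not as lost objects."""
    occluded = set()
    for rid, feat in frames[0].items():
        present = [any(abs(f - feat) <= tol for f in frame.values())
                   for frame in frames]
        if present[0] and present[-1] and not all(present):
            occluded.add(rid)
    return occluded

# Three toy frames: "car" vanishes in frame 1 (occluded) and reappears in
# frame 2, while "tree" persists throughout.
frames = [
    {"car": 120.0, "tree": 60.0},
    {"tree": 61.0},
    {"car": 122.0, "tree": 59.0},
]
print(match_regions(frames[0], frames[2]))  # region correspondences
print(find_occlusions(frames))              # -> {'car'}
```

A pairwise approach comparing only frames 0 and 1 would report the car as gone; reasoning over all three frames recovers it as occluded, which is the motivation for moving beyond frame pairs.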