The detection of temporal changes or new activity in large areas of satellite imagery has been of great interest in many applications, ranging from meteorology and environmental sensing to defense and disaster monitoring. Automated change detection algorithms play a significant role in analyzing large volumes of satellite data, a task that would otherwise be laborious to perform manually. This paper evaluates various change detection methods for detecting both structured and unstructured changes in multi-spectral satellite imagery, comparing algorithm performance on both real and simulated data. The real-world dataset (WorldView-2) required accurate geometric registration and manual ground-truth labelling, whereas the simulated satellite imagery (generated with the RIT DIRSIG tool) has the advantage of prompt availability and included ground truth. A variety of structural spatial and spectral changes were incorporated into the simulated images according to the required complexity, and the ground truth was generated together with the imagery. An array of algorithms was evaluated and compared, including conventional methods such as differencing, ratioing, and image transformation techniques; statistical methods such as chronochrome and covariance equalization; linear/non-linear distribution-based methods such as point density, simplex volume/complexity, and mean-shift/outlier metrics; and non-linear graph-based methods. Receiver Operating Characteristic (ROC) curves and Area Under the Curve (AUC) metrics were used to evaluate the algorithms. Finally, the performance of the algorithms on simulated versus real satellite images is discussed, which may inform further applications of simulated imagery, such as training deep-learning networks when real imagery is not readily available.
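To make the evaluation pipeline concrete, here is a minimal sketch of the simplest method named above, multi-band image differencing, scored with a ROC/AUC threshold sweep. The function names and synthetic setup are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def difference_change_map(img_t1, img_t2):
    # Per-pixel change score for multi-spectral image differencing:
    # the Euclidean norm of the spectral difference vector (last axis = bands).
    diff = img_t2.astype(float) - img_t1.astype(float)
    return np.linalg.norm(diff, axis=-1)

def roc_auc(scores, ground_truth):
    # Sweep a detection threshold over the scores (sorted descending)
    # to trace the ROC curve, then integrate with the trapezoidal rule.
    s = scores.ravel()
    gt = ground_truth.ravel().astype(bool)
    order = np.argsort(-s)
    hits = gt[order]
    tpr = np.concatenate(([0.0], np.cumsum(hits) / hits.sum()))
    fpr = np.concatenate(([0.0], np.cumsum(~hits) / (~hits).sum()))
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0)
    return fpr, tpr, auc

# Hypothetical usage on a synthetic 4-band image pair with one changed patch:
rng = np.random.default_rng(0)
t1 = rng.normal(size=(32, 32, 4))
t2 = t1 + rng.normal(scale=0.1, size=(32, 32, 4))
t2[8:16, 8:16] += 3.0                      # inject a structured change
gt = np.zeros((32, 32), dtype=bool)
gt[8:16, 8:16] = True
fpr, tpr, auc = roc_auc(difference_change_map(t1, t2), gt)
```

The more sophisticated methods listed in the abstract (e.g., chronochrome, covariance equalization) replace the raw difference with a statistically normalized residual, but the ROC/AUC scoring stage is shared across all of them.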