This study adapts a variety of techniques derived from multispectral image classification to find objects amid cluttered backgrounds in hyperspectral imagery. The algorithms are compared quantitatively against a standard object search, the matched filter (MF), and a recently developed object detector, the adaptive cosine estimator (ACE). These object searches calculate the Mahalanobis distance between the average object spectral signature and the test pixel spectrum, which requires the computation of a covariance matrix. The covariance matrix is generated either from the entire image (whitened Euclidean distance, WED) or from pixels associated with the object (maximum likelihood classifier, MLC). The latter computation requires a relatively large number of pixels to produce a non-singular, accurate covariance matrix. To address this, the object covariance matrix estimate is regularized by optimally mixing (via likelihood maximization) diagonal, object, and entire-image covariance matrices; this approximation is called the regularized maximum likelihood classifier (RMLC). The MF, ACE, WED, MLC, and RMLC object searches were applied to visible/near-IR data collected from forest and desert environments. Objects were searched for using signatures and covariance matrices taken directly from the scene, as well as statistically transformed signatures and covariance matrices from another time. WED, ACE, and RMLC reduced the number of false alarms substantially (by a factor of 10 to 1000) relative to MF searches for the two independent data collects. Regularizing both in-scene and transformed covariance matrices substantially reduced false alarms relative to using unprocessed covariance matrices. This study adds simple, high-performing algorithms to the object search arsenal.
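The detector statistics and the covariance mixing described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the fixed mixing weights `alpha` and `beta`, and the simple convex combination are assumptions (the study selects the mixing weights by likelihood maximization).

```python
import numpy as np

def regularized_covariance(object_pixels, image_pixels, alpha, beta):
    """Shrinkage-style estimate in the spirit of RMLC: a convex mix of the
    object covariance, the whole-image covariance, and a diagonal matrix.
    alpha/beta are hypothetical fixed weights; the paper instead chooses
    the mixture by likelihood maximization."""
    S_obj = np.cov(object_pixels, rowvar=False)   # object-pixel covariance (MLC)
    S_img = np.cov(image_pixels, rowvar=False)    # whole-image covariance (WED)
    D = np.diag(np.diag(S_obj))                   # diagonal stabilizer
    gamma = 1.0 - alpha - beta
    return alpha * S_obj + beta * S_img + gamma * D

def matched_filter(x, mu, cov):
    """MF score of pixel spectrum x against mean object signature mu,
    normalized so that matched_filter(mu, mu, cov) == 1."""
    w = np.linalg.solve(cov, mu)
    return (w @ x) / (mu @ w)

def ace(x, mu, cov):
    """Adaptive cosine estimator: squared cosine between x and mu in the
    whitened (Mahalanobis) space defined by cov; lies in [0, 1]."""
    ic = np.linalg.inv(cov)
    return (mu @ ic @ x) ** 2 / ((mu @ ic @ mu) * (x @ ic @ x))
```

With synthetic object and background pixels, `regularized_covariance` stays non-singular even when few object pixels are available, which is the motivation for regularizing the MLC covariance.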