The ability of certain performance metrics to quantify how well a target recognition system under test (SUT) correctly identifies targets and nontargets is investigated. The SUT, which may employ optical, microwave, or other inputs, assigns each input a score between zero and one indicating the predicted probability that it is a target. Sampled target and nontarget SUT scores are generated from representative sets of beta probability densities. Two performance metrics are analyzed: the area under the receiver operating characteristic (AURC) and the confidence error (CE). The AURC quantifies how well the target and nontarget score distributions are separated, and the CE quantifies the statistical accuracy of each assigned score. Both metrics were computed for many representative sets of beta-distributed scores and compared using continuous methods as well as discrete (sampling) methods. The continuous and discrete results agree closely for the AURC. While the continuous and discrete CE are also similar, the various discrete CE approaches differ when bins of different sizes are used. An alternative weighted CE calculation is identified that uses maximum likelihood estimation of the density parameters, enabling sampled data to be processed with continuous methods.
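The continuous-versus-discrete comparison described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the beta shape parameters, sample size, and variable names are hypothetical. The continuous AURC is the probability that a target score exceeds a nontarget score, computed by numerical integration of the beta densities; the discrete AURC is the corresponding pairwise sample fraction (a Mann-Whitney statistic); and the final step shows maximum likelihood recovery of beta parameters from sampled scores, the step that lets discrete data feed the continuous machinery.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

rng = np.random.default_rng(0)

# Hypothetical beta shape parameters (for illustration only):
# nontarget scores cluster near 0, target scores near 1.
a0, b0 = 2.0, 5.0   # nontarget density
a1, b1 = 5.0, 2.0   # target density

# Continuous AURC: P(target score > nontarget score)
# = integral over t of f1(t) * F0(t) dt on [0, 1]
f1 = stats.beta(a1, b1).pdf
F0 = stats.beta(a0, b0).cdf
aurc_cont, _ = quad(lambda t: f1(t) * F0(t), 0.0, 1.0)

# Discrete (sampled) AURC: fraction of (target, nontarget)
# pairs where the target score is larger
n = 3000
s0 = rng.beta(a0, b0, n)   # sampled nontarget scores
s1 = rng.beta(a1, b1, n)   # sampled target scores
aurc_disc = (s1[:, None] > s0[None, :]).mean()

# MLE fit of beta parameters from the sampled target scores,
# so the continuous formulas can be applied to discrete data
a1_hat, b1_hat, _, _ = stats.beta.fit(s1, floc=0, fscale=1)
```

With well-separated densities such as these, the continuous and sampled AURC values agree to within sampling error, mirroring the close agreement reported for the AURC.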