Target identification decisions must be `positive'--accurate with high confidence. To achieve this, a classifier must build a model (train) with some data and then generalize its decisions to new data sets. The ability to accurately estimate the true classifier error (or predict classification performance) is contingent on the amount of data available: the more data, the better the error estimate and the more confident the resulting decisions. Ideally, infinite data would be available for modeling and assessing classifier performance; however, this is rarely the case. This paper investigates techniques for improving audio target identification accuracy and confidence with a Multilayer Perceptron (MLP). The first technique, bagging, is a combination of bootstrapping and (classifier) aggregation; the second, decision fusion, combines the decisions of multiple classifiers. The `bagged' identification performance for a subset of the Rome Laboratory Greenflag database is compared to the MLP performance without bagging and to that of an MLP whose decisions are combined with those of our other classifiers. Both techniques improved identification accuracy, although bagging did so only slightly. More importantly, the confidence of the identification decisions was significantly improved by the pooling of evidence inherent in both the bagging and decision fusion processes.