An outstanding problem in the study of adaptive learning is overspecialization of the learning system, and its consequent inability to handle new data correctly. A means of addressing this difficulty is described here. When used in conjunction with standard training procedures such as backpropagation, it identifies the level of corruption of the training sample, and thus provides a `best fit' to the entire domain of interest, rather than to the training sample alone. This is accomplished by a combination of simulated annealing, bootstrap estimation, and analysis methods derived from statistical mechanics. Its advantage is that no data need be reserved for an independent test set, so all available samples are used for training. A modified generalization error, defined through a thermalization parameter on the training set, provides a measure of the sample space consistent with the network function. A criterion for optimal match between network and sample set is obtained from the requirement that generalization error and training error be consistent. Numerical results are presented for examples that illustrate several distinct forms of data corruption. A quantity analogous to the specific heat in thermodynamic systems is found to exhibit anomalies at values of training error near the onset of overtraining.
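The specific-heat analogy in the final sentence can be made concrete with a minimal toy, not taken from the paper itself: for a Boltzmann distribution over a discrete set of error levels, the quantity C(β) = β² Var(E) is non-monotonic and peaks at an intermediate inverse temperature (a Schottky-type anomaly). The two-level error spectrum and the grid of β values below are illustrative assumptions, standing in for the paper's actual network ensemble.

```python
import math

def specific_heat(beta, energies):
    """Specific-heat analogue C(beta) = beta^2 * Var(E) for a Boltzmann
    distribution over discrete error levels.  The levels stand in for
    per-pattern training errors; this toy is illustrative only."""
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)                       # partition function
    probs = [w / z for w in weights]       # Boltzmann probabilities
    mean_e = sum(p * e for p, e in zip(probs, energies))
    var_e = sum(p * (e - mean_e) ** 2 for p, e in zip(probs, energies))
    return beta ** 2 * var_e

# Two-level spectrum (e.g. a 'clean' vs a 'corrupted' pattern, gap = 1):
levels = [0.0, 1.0]
betas = [0.1 * k for k in range(1, 80)]
heats = [specific_heat(b, levels) for b in betas]

# C(beta) peaks at an intermediate inverse temperature rather than
# growing or decaying monotonically -- the anomaly signature.
peak_beta = betas[max(range(len(heats)), key=heats.__getitem__)]
print(f"specific-heat peak at beta ~ {peak_beta:.1f}")
```

For this two-level spectrum the peak sits near β ≈ 2.4, where 2/β = tanh(β/2); in the paper's setting the analogous anomaly appears near the onset of overtraining rather than at a fixed temperature.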