In many areas of practical interest, such as medical decision making, the data available for training and testing neural networks are severely limited in number, corrupted by noise, and often highly correlated. In this study we examine these factors by investigating network performance on a simulated Gaussian data set with known first- and second-order statistics. Following the work of Wagner et al. for statistical (likelihood-ratio) classifiers, we study how the addition of noisy/correlated features affects the performance of neural network classifiers. Results are similar to those of the previous study, demonstrating that for small data sets, additional noisy/correlated features in fact degrade network performance. In addition, statistical resampling techniques, including the jackknife, the Fukunaga-Hayes group jackknife, and the bootstrap, are examined as means of estimating performance variation and removing small-sample bias, and are found to offer significant advantages.
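As a minimal illustration of two of the resampling techniques mentioned above, the sketch below shows generic leave-one-out jackknife (bias-corrected estimate and standard error) and bootstrap (standard error) estimators for an arbitrary performance statistic. This is not the paper's actual procedure: the `statistic` callable, sample sizes, and data are placeholders, and the group-jackknife variant is not shown.

```python
import numpy as np

def jackknife_estimate(data, statistic):
    """Leave-one-out jackknife: bias-corrected estimate and standard error."""
    n = len(data)
    theta_full = statistic(data)
    # Recompute the statistic with each observation left out in turn
    theta_loo = np.array([statistic(np.delete(data, i)) for i in range(n)])
    theta_bar = theta_loo.mean()
    bias = (n - 1) * (theta_bar - theta_full)
    se = np.sqrt((n - 1) / n * np.sum((theta_loo - theta_bar) ** 2))
    return theta_full - bias, se

def bootstrap_se(data, statistic, n_boot=2000, seed=None):
    """Bootstrap standard error: resample with replacement, recompute statistic."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = np.array([statistic(data[rng.integers(0, n, n)]) for _ in range(n_boot)])
    return reps.std(ddof=1)
```

For the sample mean, the jackknife standard error reduces analytically to the familiar s/sqrt(n), which provides a quick sanity check of the implementation; for a nonlinear statistic such as a classifier's area under the ROC curve, the bias correction becomes nontrivial.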