Neural networks (NNs) represent a class of systems that do not fit into the current paradigms of software development and certification. Instead of being programmed, an NN is "taught" by a learning algorithm using a set of data. Because the result of this adaptation is often non-deterministic, the NN is frequently treated as a "black box" whose response may not be predictable. Testing the NN with data similar to the training set is one of the few methods used to verify that the network has adequately learned the input domain. In most instances, such traditional testing techniques prove adequate for the acceptance of a neural network system. In more complex, safety- and mission-critical systems, however, the standard NN train-test approach is insufficient to provide a reliable basis for certification. Verifying correct operation of NNs within NASA projects, such as autonomous mission control agents and adaptive flight controllers, and within nuclear engineering applications, such as safety assessors and reactor controllers, requires an approach as rigorous as those applied to conventionally programmed software. This verification and validation (V&V) challenge is further compounded by adaptive neural network systems: those that modify themselves, or "learn," during operation. Such systems continue to evolve during operation, for better or for worse; traditional software assurance methods fail to account for systems that change after deployment. Several experimental NN V&V approaches are beginning to emerge, but no single approach has established itself as a dominant technique. This paper describes several of these current trends and assesses their compatibility with traditional V&V techniques.
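The standard train-then-test approach described above can be illustrated with a minimal sketch. This is a hypothetical example, not drawn from the paper: a single perceptron (chosen here only for brevity) is trained on samples from a simple input domain, then "verified" by measuring its accuracy on held-out data drawn from the same domain. The target concept, learning rate, and sample sizes are all illustrative assumptions.

```python
# Hypothetical sketch of the traditional NN train-test verification approach:
# train on sampled data, then check accuracy on similar held-out data.
import random

random.seed(0)

def target(x, y):
    # Assumed ground-truth concept the network should learn:
    # a simple linear decision boundary.
    return 1 if 2 * x - y + 0.5 > 0 else 0

# Draw training and held-out test sets from the same input domain.
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(400)]
train, test = data[:300], data[300:]

# Single perceptron: weights w and bias b, updated with the perceptron rule.
w = [0.0, 0.0]
b = 0.0
for _ in range(50):  # training epochs
    for x, y in train:
        pred = 1 if w[0] * x + w[1] * y + b > 0 else 0
        err = target(x, y) - pred  # -1, 0, or +1
        w[0] += 0.1 * err * x
        w[1] += 0.1 * err * y
        b += 0.1 * err

# "Verification" in the traditional sense: accuracy on similar, unseen data.
correct = sum(
    (1 if w[0] * x + w[1] * y + b > 0 else 0) == target(x, y)
    for x, y in test
)
accuracy = correct / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

High held-out accuracy is exactly the acceptance evidence the traditional approach produces; as the paper argues, for safety- and mission-critical or adaptive systems this sampling-based evidence alone is insufficient, since it says nothing about inputs outside the sampled distribution or about behavior after the network continues to learn in operation.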