Test & Evaluation, Validation & Verification

Trust of aerial unmanned systems: Unmanned systems are still a relatively new concept to most of the civilian population. As a result, there is a natural fear of a new and unproven technology, with concerns about safety. This in turn makes it difficult for the DOD to obtain approvals for proper test and evaluation of new systems or, in some cases, support for resourcing the acquisition of a new system.

In the future, UASs will be deployed on a timeline of months instead of years. Systems being developed in industry and academia have utility today for a warfighter facing enormous challenges. Answering the question of how to test these systems in parallel with their development may require moving beyond the traditional test focus toward a test strategy that covers the entire acquisition cycle, cradle to grave. The challenges of testing UASs are shifting from simple system test toward the world of complex systems engineering.

Yet, intelligent machines in the form of UASs are now rapidly finding their way into the hands of the warfighter and are envisioned to provide striking new tactical capabilities in the near future in a variety of areas, including mission assurance; command and control; and intelligence, surveillance, and reconnaissance. Test and certification techniques appropriate for autonomous systems may be dramatically different from those used for manned platforms: the projected exponential growth in software lines of code (SLOC) and the nondeterministic nature of many algorithms will make exhaustive testing prohibitively costly. In lieu of this brute-force approach, timely and efficient certification (and recertification) of intelligent and autonomous control systems will require analytical tools that work with realistic assumptions, including approaches to bound the uncertainty caused by learning/adaptation or other complex nonlinearities that may make behavior difficult to predict.
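
To make the scale of the brute-force problem concrete, the following sketch illustrates one simple way to bound uncertainty statistically from simulated mission trials. It is a minimal illustration, not a method from the source: the trial function, failure model, and confidence level are hypothetical placeholders. It also shows why sampling alone falls short of certification-level assurance; with zero observed failures, the 95% upper bound on the per-trial failure rate is roughly 3/N, so demonstrating aviation-like failure rates would require tens of millions of failure-free trials.

# Illustrative sketch (assumptions, not from the source): a statistical upper
# bound on the probability of a critical failure, estimated from simulated
# mission trials, as one crude example of "bounding uncertainty" by analysis
# rather than exhaustive test.

import math
import random

def run_mission_trial(seed: int) -> bool:
    """Hypothetical stand-in for one closed-loop simulation of the autonomous
    system; returns True if the trial completes without a critical failure."""
    rng = random.Random(seed)
    return rng.random() > 1e-6  # placeholder failure model, not real data

def upper_bound_failure_rate(trials: int, failures: int, confidence: float = 0.95) -> float:
    """One-sided upper confidence bound on the per-trial failure probability.
    With zero observed failures this is the classical "rule of three":
    p_upper ~= -ln(1 - confidence) / trials ~= 3 / trials at 95% confidence."""
    if failures == 0:
        return -math.log(1.0 - confidence) / trials
    # Conservative Hoeffding-style bound when failures were observed.
    p_hat = failures / trials
    return p_hat + math.sqrt(math.log(1.0 / (1.0 - confidence)) / (2.0 * trials))

if __name__ == "__main__":
    n_trials = 100_000
    n_failures = sum(0 if run_mission_trial(s) else 1 for s in range(n_trials))
    bound = upper_bound_failure_rate(n_trials, n_failures)
    print(f"{n_failures} failures in {n_trials} trials; "
          f"95% upper bound on failure rate: {bound:.2e}")
    # Even a failure-free run of 100,000 trials only bounds the rate near 3e-5,
    # orders of magnitude short of the assurance levels certification demands.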

Test and certification will need to prove not just safety but also the level of competence at mission tasks. This will require clearly defined metrics (for stability, robustness, performance, and controllability, for example) and the development of new tools for software verifiability and certification. Over time, machine learning will become an important aspect of autonomous system performance and will pose extreme challenges to the test and certification of systems.

As a corollary to the above views, the test and evaluation community needs to accept nondeterministic performance and decision making. Unmanned systems will operate in highly dynamic, unstructured environments for which there are no computationally tractable approaches to comprehensively validate performance. Formal methods for finite-state systems based on abstraction and model checking do not extend to such systems, probabilistic or statistical tests do not provide the needed levels of assurance, and the set of possible inputs is far too large. Both run-time and quantum verification and validation (V&V) approaches may prove to be viable alternatives. Run-time approaches insert a monitor/checker into the loop to check the system state against acceptable limits during operation and then switch to a simpler backup controller, one verifiable by traditional finite-state methods, if the state exceeds those limits.
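
The run-time approach described above can be made concrete with a minimal sketch. The following is illustrative only: the state variables, operating limits, and both controllers are hypothetical placeholders, and a fielded run-time assurance architecture would be considerably more involved.

# Minimal run-time assurance sketch (assumptions, not from the source):
# a monitor checks the vehicle state against acceptable limits each control
# cycle and latches onto a simpler, separately verifiable backup controller
# when the limits are exceeded, as in the run-time V&V approach above.

from dataclasses import dataclass

@dataclass
class State:
    altitude_m: float
    airspeed_mps: float

# Hypothetical safe operating envelope used by the monitor.
ALT_LIMITS_M = (100.0, 5000.0)
SPEED_LIMITS_MPS = (20.0, 90.0)

def within_limits(s: State) -> bool:
    """Run-time check: is the current state inside the certified envelope?"""
    return (ALT_LIMITS_M[0] <= s.altitude_m <= ALT_LIMITS_M[1]
            and SPEED_LIMITS_MPS[0] <= s.airspeed_mps <= SPEED_LIMITS_MPS[1])

def complex_controller(s: State) -> float:
    """Stand-in for the advanced/adaptive controller whose full behavior is
    difficult to verify exhaustively (e.g., a learned or adaptive policy)."""
    return 0.3 * (3000.0 - s.altitude_m) / 1000.0  # placeholder pitch command

def backup_controller(s: State) -> float:
    """Stand-in for a simple controller verifiable by traditional
    finite-state methods (e.g., climb back toward a safe altitude band)."""
    return 0.1 if s.altitude_m < 500.0 else -0.1

def control_step(s: State, reverted: bool) -> tuple[float, bool]:
    """One monitor cycle: once limits are exceeded, keep the verified backup
    controller in command for the remainder of the mission."""
    if reverted or not within_limits(s):
        return backup_controller(s), True
    return complex_controller(s), False

if __name__ == "__main__":
    state = State(altitude_m=95.0, airspeed_mps=60.0)  # below the altitude floor
    command, reverted = control_step(state, reverted=False)
    print(f"command={command:+.2f}, backup controller engaged: {reverted}")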

Achieving gains from the use of autonomous systems will require developing new methods to establish “certifiable trust” in autonomy through V&V of the near-infinite-state systems that result from high levels of adaptability. The lack of suitable V&V methods today prevents all but relatively low levels of autonomy from being certified for use.

A new paradigm of validation is needed, one that requires more fieldwork and evolution/maturation and that redefines “certification.” While developing trust is important, it may not be necessary, or ultimately in our interest, to formally prove that systems will behave a certain way across a near-infinite state space. In fact, hidden assumptions in one’s general approach are more likely to prove the greatest source of problems than the explicitly stated coverage. A balanced risk-reward analysis is needed to determine the extent to which a system’s performance must be proven. Moreover, S&T investments should emphasize continuous contact, continuous testing, and continuous evolution, rather than intermittent stops and starts.


Multimedia

The PowerPoint, podcast, and video on this page address the goals and technical challenges of developing the T&E and V&V science and tools that will enable more rapid and cost-effective fielding of autonomous technology.