Evaluation of New Assurance Tools for Airborne Machine Learning-Based Functions
C. Liu, H. Herencia-Zapana, S. Hasan, A. Tahat, I. Amundson, D. Cofer
Digital Avionics Systems Conference, 2024
As part of the DARPA Assured Autonomy program, our team has developed or evaluated a number of technologies to address gaps in traditional hardware and software assurance processes; these gaps make it difficult or impossible to demonstrate the correctness and safety of machine learning (ML) components. The technologies include new approaches for testing and completeness metrics, formal analysis of neural networks, input domain shift assessment, and run-time monitoring and enforcement architectures. Although many of these tools and methods were successfully applied to demonstration platforms, most have not been evaluated on real-world product development efforts in a certification context. In this paper, we describe our evaluation of these new assurance methods and tools applied to ML-based systems that will soon be undergoing certification.