Formal Methods use for Learning Assurance (ForMuLA)

M. Baleani, A. Clavière, D. Cofer, E. DeWind, L. Di Guglielmo, O. Ferrante, G. Gentile, D. Kirov, D. Larsen, L. Mangeruca, S. Fulvio Rollini, G. Cima, R. Schneider, H. Semde, G. Soudain

EASA Innovation Partnership Contract, Final Report, April 2023

The aim of this report is to present the outcome of the collaboration between EASA and Collins Aerospace on an Innovation Partnership Contract (IPC) that investigated the use of formal methods as part of the learning assurance building block of the EASA AI Roadmap. The project ran from Oct 2021 to Mar 2023. The IPC project, titled "Formal Methods use for Learning Assurance" (ForMuLA), focused on identifying opportunities for the adoption of formal methods techniques in the design assurance process of machine learning-enabled systems. This resulted in the following key achievements:
  • Proposed use of formal methods as anticipated means of compliance for a set of key certification objectives from the EASA Concept Paper for Level 1 and 2 Machine Learning Applications. This supported the update of definitions in the concept paper and the clarification of objective LM-11 on learning algorithm and trained model stability, which has been split into objectives LM-11 and LM-12 in the transition to the new version of the concept paper.
  • Detailed discussion of relevant formal methods (FM) technologies and supporting statistical methods, and their possible role in the development and in the validation and verification (V&V) of machine learning-enabled systems. Emphasis has been placed on innovative FM applications specific to the robustness assessment of machine learning models.
  • Practical demonstration of the use of formal methods on an industrial use case: a deep learning-based estimator for the remaining useful life of mechanical bearings in airborne equipment, whose output is used for on-ground maintenance applications. The demonstrations provided concrete evidence of how FM and supporting statistical techniques can be used as part of the verification activities to address data quality assessment, ML stability, robustness, and intended behavior verification.
The considerations summarized in the report apply to machine learning in general, but particular emphasis has been placed on specific challenges related to neural networks. Discussions of formal methods applications are purposefully kept generic: this report does not recommend specific methods or tools, but is rather intended to motivate opportunities from a theoretical perspective. Where applicable, references are made to concrete methods and tools.