Explainable Assurance through Compositional Verification with Cognitive Models
P. Ganeriwala, C. Chambers, S. Bhattacharyya, J. Babar
4th International Workshop on Explainability of Real-time Systems and their Analysis (ERSA), December 2025
Cyber-physical systems involving human operators are increasingly deployed in safety- and security-critical domains. It has therefore become essential to model human operator–system interactions and rigorously analyze whether operators may be exposed to harm. In these contexts, verifying correctness alone is insufficient; the outcomes of such analyses must be accompanied by explanations that are understandable and actionable. We present an approach for architectural modeling and compositional verification that integrates human cognitive models with system models to analyze operator–system interaction. The verification process produces explanation-ready artifacts, such as counterexample traces and proof obligations, that highlight why a property holds or fails and what this means for the human operator. These artifacts bridge formal verification results and human-understandable explanations, supporting safety assurance and accountability. We demonstrate the approach through case studies in mixed-reality and increasingly autonomous systems, showing how verification outputs yield understandable and actionable explanations.