AGREE-Dog Copilot: A Neuro-Symbolic Approach to Enhanced Model-Based Systems Engineering
A. Tahat, I. Amundson, D. Hardin, D. Cofer
AISoLA, November 2025
Formal verification tools such as model checkers have long demonstrated their capability to ensure that mission-critical properties are satisfied, yet their adoption in the aerospace and defense industries remains limited. Surveys consistently identify difficulty in interpreting analysis results, especially counterexamples, as a primary barrier. Previously, our team developed AGREE, an assume-guarantee compositional reasoning tool for architectural models, which generates detailed but often difficult-to-interpret counterexamples. In this paper, we introduce AGREE-Dog, an open-source generative AI copilot integrated into the OSATE IDE to enhance explainable compositional reasoning with AGREE and AADL. AGREE-Dog automates 16 DevOps and ProofOps steps, using a novel context-selection and memory-management system to efficiently manage evolving artifacts and historical interactions. We also introduce structural and temporal metrics to evaluate the typically overlooked human contributions in generative AI-supported workflows. Evaluations using 13 UAV fault-injection scenarios demonstrate a significant reduction in manual effort (less than 0.1% of tokens authored by users), rapid convergence of counterexample repairs (84.6% resolved in a single iteration, rising to about 92% after two iterations and 100% within three), and low latency (average LLM response time under 22 seconds, with negligible computational overhead from AGREE-Dog itself). We also discuss limitations and future work. These promising results motivate further exploration of explainable model-based systems engineering (MBSE).