N. Narayan, P. Ganeriwala, R. Jones, M. Matessa, S. Bhattacharyya, J. Davis, H. Purohit, S. F. Rollini
IEEE International Systems Conference (SysCon), 2023
Autonomous agents are expected to handle emerging situations intelligently, interacting with humans appropriately while executing their operations. This is possible today through the integration of advanced technologies such as machine learning, but these complex algorithms pose a challenge to verification and thus to the eventual certification of the autonomous agent. In the discussed approach, we illustrate how safety properties of a learning-enabled, increasingly autonomous agent can be formally verified early in the design phase. We demonstrate this methodology by designing a learning-enabled, increasingly autonomous agent in the Soar cognitive architecture. The agent combines symbolic decision logic with numeric decision preferences that are tuned by reinforcement learning to produce post-learning decision knowledge. The agent is then automatically translated into nuXmv, and the safety properties are verified over the resulting model.
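The combination described in the abstract, symbolic decision logic whose numeric preferences are tuned by reinforcement learning, can be sketched with a minimal tabular Q-learning loop. The states, operators, and reward values below are illustrative assumptions, not taken from the paper; they loosely mirror Soar's mechanism of attaching RL-tuned numeric preferences to symbolic operators.

```python
import random

# Hypothetical symbolic states and the operators proposed in each
# (names are illustrative assumptions, not from the paper).
STATES = ["cruise", "traffic-alert"]
OPERATORS = {"cruise": ["maintain", "notify-pilot"],
             "traffic-alert": ["notify-pilot", "maneuver"]}

# Numeric decision preferences, one per (state, operator) pair.
prefs = {(s, o): 0.0 for s in STATES for o in OPERATORS[s]}

def reward(state, op):
    # Illustrative reward signal: alerting the pilot during a
    # traffic alert is the desired behavior.
    return 1.0 if (state, op) == ("traffic-alert", "notify-pilot") else 0.0

def select(state, epsilon=0.1):
    # Epsilon-greedy choice among the symbolic operators for this state.
    ops = OPERATORS[state]
    if random.random() < epsilon:
        return random.choice(ops)
    return max(ops, key=lambda o: prefs[(state, o)])

def train(episodes=500, alpha=0.1):
    for _ in range(episodes):
        state = random.choice(STATES)
        op = select(state)
        # One-step update: nudge the numeric preference toward the reward.
        prefs[(state, op)] += alpha * (reward(state, op) - prefs[(state, op)])

random.seed(0)
train()
# Post-learning decision knowledge: the tuned preferences now favor
# notify-pilot whenever a traffic alert is active.
print(select("traffic-alert", epsilon=0.0))
```

The resulting state-to-operator mapping is a finite decision table, which is what makes the post-learning agent amenable to translation into a model checker such as nuXmv, where safety properties can be checked exhaustively.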