Enhancing Autonomy with Trusted Cognitive Modeling

S. Bhattacharyya, J. Davis, T. Vogl, M. Fix, A. McLean, M. Matessa, L. Smith-Velazquez

AUVSI Unmanned Systems, May 2015

Autonomous Systems (AS) are increasingly deployed, with varying degrees of autonomy, in safety-critical domains such as aviation, medicine, and the military. Autonomy involves the implementation of adaptive algorithms (artificial intelligence and/or adaptive control technologies). Advancing autonomy means transferring more of the decision-making to the AS, and for this transition to occur there must be significant trust in the reliability of the AS.

Exhaustive analysis of all possible behaviors exhibited by an autonomous system is intractable. However, methods can be developed to guarantee those behaviors of the intelligent agent that are relevant to the application. One such approach, described in this paper, is to translate models from a cognitive architecture into a formal modeling environment. This enables developers to continue to use tools well suited to cognitive modeling, such as ACT-R or Soar, while gaining trust through the guarantees provided by formal verification. Several challenges must be addressed in this process: capturing the interactions within the cognitive engine, maintaining the integrity of those interactions as defined by the cognitive architecture, and performing compositional analysis of the rules.
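To make the translation idea concrete, the following sketch (a simplified illustration, not the authors' toolchain) shows how a single condition-action production rule, the form used by architectures such as ACT-R and Soar, can be recast as a guarded transition over an explicit state space, the representation a model checker can explore exhaustively. The rule, slot names, and threshold below are hypothetical.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State = Dict[str, object]  # a working-memory snapshot: slot -> value

@dataclass
class ProductionRule:
    name: str
    condition: Callable[[State], bool]  # guard over working memory
    action: Callable[[State], State]    # returns the updated memory

def enabled(rules: List[ProductionRule], state: State) -> List[ProductionRule]:
    """Rules whose conditions match the current state (the conflict set)."""
    return [r for r in rules if r.condition(state)]

def successors(rules: List[ProductionRule],
               state: State) -> List[Tuple[str, State]]:
    """One verification step: every enabled rule yields a successor state,
    so exhaustive exploration covers all possible firing orders."""
    return [(r.name, r.action(dict(state))) for r in enabled(rules, state)]

# Hypothetical rule for an unmanned-aircraft agent:
# if an intruder is within 500 units, initiate an avoidance maneuver.
avoid = ProductionRule(
    name="avoid-intruder",
    condition=lambda s: s["intruder_range"] < 500,
    action=lambda s: {**s, "maneuver": "climb"},
)

print(successors([avoid], {"intruder_range": 300, "maneuver": "none"}))

Enumerating every enabled rule at each state, rather than committing to a single conflict-resolution choice as a cognitive engine would at runtime, is what allows a formal tool to check properties over all possible firing orders.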