Overview
Multi-Agent Systems (MAS) is the study of interactions among multiple agents in a shared environment. Communication for cooperation is a fundamental mechanism for sharing information in partially observable environments. Cooperative Multi-Agent Reinforcement Learning (CoMARL) is a learning framework in which agent policies are learned either with explicit cooperative mechanisms or such that they exhibit cooperative behavior. Prior work has shown that CoMARL agents can learn to communicate; however, non-cooperative agents that gain access to a cooperative team's communication channel have been shown to learn adversarial communication messages, sabotaging the cooperative team's performance, particularly when objectives depend on finite resources. To address this issue, we propose a technique that leverages local formulations of Theory of Mind (ToM) to distinguish exhibited cooperative behavior from non-cooperative behavior before accepting messages from any agent. We demonstrate the efficacy and feasibility of the proposed technique through empirical evaluations on a centralized training, decentralized execution (CTDE) CoMARL benchmark. Furthermore, while we propose our explicit ToM defense for test-time use, we emphasize that ToM is a construct for designing a cognitive defense rather than being the objective of the defense itself.
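The gating idea above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the action-matching measure of cooperativeness, and the acceptance threshold are all illustrative assumptions. The sketch compares a sender's observed actions against the actions a local model of a cooperative agent would have taken, and accepts the sender's messages only if the behaviors agree often enough.

```python
# Hypothetical sketch of a ToM-style message gate; names, the
# action-matching score, and the threshold are illustrative assumptions.

def cooperation_likelihood(observed_actions, predicted_actions):
    """Fraction of the sender's observed actions that match what a local
    model of a cooperative agent would have done in the same states."""
    matches = sum(o == p for o, p in zip(observed_actions, predicted_actions))
    return matches / len(observed_actions)

def accept_message(observed_actions, predicted_actions, threshold=0.6):
    """Gate applied at test time: accept the sender's message only if its
    recent behavior appears sufficiently cooperative."""
    return cooperation_likelihood(observed_actions, predicted_actions) >= threshold

# A sender whose behavior mostly matches the cooperative model is accepted.
print(accept_message([0, 1, 1, 2], [0, 1, 1, 0]))  # 3/4 matches -> True
# A sender whose behavior rarely matches the model is rejected.
print(accept_message([2, 2, 0, 2], [0, 1, 1, 0]))  # 0/4 matches -> False
```

In practice the local cooperative model would itself be learned, and the score would operate on trajectories rather than raw action matches, but the gate-before-accept structure is the same.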
Team
Advisor: Vahid Behzadan
Publications
- Nancirose Piazza and Vahid Behzadan. 2023. "A Theory of Mind Approach as Test-Time Mitigation Against Emergent Adversarial Communication: Extended Abstract." In Proc. of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023), London, United Kingdom, May 29 – June 2, 2023, IFAAMAS.
- Nancirose Piazza and Vahid Behzadan. 2023. "A Theory of Mind Approach as Test-Time Mitigation Against Emergent Adversarial Communication." arXiv preprint.