Any-Play: an Intrinsic Augmentation for Zero-Shot Coordination

Keane Lucas, Ross E. Allen. In International Conference on Autonomous Agents and Multi-Agent Systems 2022. [BibTeX]

Cooperative artificial intelligence with human or superhuman proficiency in collaborative tasks stands at the frontier of machine learning research. Prior work has tended to evaluate cooperative AI performance under the restrictive paradigms of self-play (teams composed of agents trained together) and cross-play (teams of agents trained independently but using the same algorithm). Recent work has indicated that AI optimized for these narrow settings may make for undesirable collaborators in the real world. We formalize an alternative criterion for evaluating cooperative AI, referred to as inter-algorithm cross-play, where agents are evaluated on teaming performance with all other agents within an experiment pool, with no assumption of algorithmic similarity between agents. We show that existing state-of-the-art cooperative AI algorithms, such as Other-Play and Off-Belief Learning, under-perform in this paradigm. We propose the Any-Play learning augmentation—a multi-agent extension of diversity-based intrinsic rewards for zero-shot coordination (ZSC)—for generalizing self-play-based algorithms to the inter-algorithm cross-play setting. We apply the Any-Play learning augmentation to the Simplified Action Decoder (SAD) and demonstrate state-of-the-art performance in the collaborative card game Hanabi.
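As a rough illustration of the inter-algorithm cross-play criterion described above, the sketch below averages teaming performance over every pair of agents in a pool, with no assumption that partners share a training algorithm. All names here (`play_episode`, the agent dictionaries, the `skill` field) are hypothetical placeholders, not the paper's implementation; a real evaluation would roll out full cooperative episodes (e.g. games of Hanabi) instead.

```python
# Hypothetical sketch of inter-algorithm cross-play evaluation: team every
# agent with every other agent in the pool and report the mean team score.
from itertools import combinations
from statistics import mean

def play_episode(agent_a, agent_b):
    # Placeholder for a cooperative rollout; a real version would run the
    # environment (e.g. Hanabi) and return the achieved team score.
    return agent_a["skill"] * agent_b["skill"]

def inter_algorithm_cross_play(pool):
    """Mean teaming score over all distinct agent pairs in the pool."""
    return mean(play_episode(a, b) for a, b in combinations(pool, 2))

# Agents trained by different algorithms; "skill" is an illustrative stand-in.
pool = [
    {"algo": "SAD", "skill": 0.8},
    {"algo": "Other-Play", "skill": 0.7},
    {"algo": "OBL", "skill": 0.9},
]
score = inter_algorithm_cross_play(pool)
```

Note that, unlike self-play or same-algorithm cross-play, the pairing loop crosses algorithm boundaries, which is what makes the metric sensitive to agents that only coordinate well with partners like themselves.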

This project is the result of a summer 2021 internship at MIT Lincoln Laboratory as part of its AI Technology Group.

Please check out the paper, code, video, and a news article MIT wrote about it!