Overview
Since the emergence of deep reinforcement learning (DRL) algorithms, there has been a surge of interest from both research and industry in the promising potential of this paradigm. Current and envisioned applications of DRL range from autonomous navigation and robotics to control of critical infrastructure, air traffic control, defense technologies, and cybersecurity. Despite the extensive opportunities and benefits of DRL algorithms, the security risks and challenges associated with them remain largely unexplored. Recent studies have highlighted the vulnerability of DRL algorithms to small perturbations in their state observations, which adversaries can exploit to manipulate the behavior and performance of DRL agents. This project aims to advance the current state of the art in three distinct but interconnected areas:
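The observation-perturbation vulnerability described above can be illustrated with a minimal sketch (illustrative only, not code from RLAttack): an FGSM-style attack on a toy linear softmax policy, where the policy weights `W`, the function `fgsm_observation_attack`, and the perturbation budget `epsilon` are all assumptions introduced here for demonstration.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over action logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_observation_attack(W, obs, epsilon):
    """FGSM-style perturbation of a state observation.

    Assumes a white-box toy policy pi(a|s) = softmax(W @ s) and nudges the
    observation in the direction that increases the cross-entropy loss of
    the action the agent would otherwise take, staying within an
    L-infinity budget of epsilon.
    """
    logits = W @ obs
    probs = softmax(logits)
    a = int(np.argmax(probs))          # action the unperturbed agent picks
    onehot = np.zeros_like(probs)
    onehot[a] = 1.0
    grad = W.T @ (probs - onehot)      # d(cross-entropy)/d(obs) for action a
    return obs + epsilon * np.sign(grad)
```

In this white-box sketch the attacker needs the policy gradient with respect to the observation; the perturbation lowers the probability of the agent's intended action while remaining bounded by `epsilon` per observation dimension.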
Affiliate Research Groups:
Tools and Datasets:
RLAttack: Framework for experimental analysis of adversarial example attacks on policy learning in Deep RL.
Publications: