Presenter: Nancirose Piazza (PhD Student – SAIL Lab)
Time: Friday 2/19, 2pm – 3pm ET
Recording: https://youtu.be/aF6yX9AfnnI
Abstract:
Deep Reinforcement Learning (DRL) has become an appealing solution to algorithmic trading, such as high-frequency trading of stocks and cryptocurrencies. However, DRL agents have been shown to be susceptible to adversarial attacks. It follows that algorithmic trading DRL agents may also be compromised by such adversarial techniques, leading to policy manipulation and policy imitation. In this work, we develop a threat model for deep trading policies and discuss the use of passive and active test-time attacks. Furthermore, we demonstrate and evaluate the effectiveness and transferability of test-time attacks against benchmark and real-world DQN trading agents.
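To illustrate the kind of passive test-time attack the abstract refers to, the sketch below applies an FGSM-style perturbation to a toy linear Q-network standing in for a trained DQN trading agent. The weights, observation, and epsilon are illustrative assumptions, not values from the work being presented; the point is only that a small, bounded change to the observation can flip the agent's greedy action.

```python
import numpy as np

# Toy linear Q-network: Q(s) = W @ s. This is a stand-in for a trained
# DQN trading agent; the weights are illustrative, not from the talk.
W = np.array([[1.0, 0.0],   # row 0: weights for action 0
              [0.0, 1.0]])  # row 1: weights for action 1

def q_values(s):
    return W @ s

def fgsm_attack(s, eps):
    """FGSM-style test-time perturbation: nudge the observation in the
    direction that decreases the Q-value of the agent's greedy action.
    For a linear Q-network, the gradient of Q[a] w.r.t. s is W[a]."""
    a = int(np.argmax(q_values(s)))
    grad = W[a]                       # dQ[a]/ds
    return s - eps * np.sign(grad)    # bounded L-infinity perturbation

s = np.array([0.6, 0.5])              # hypothetical market observation
clean_action = int(np.argmax(q_values(s)))
adv_action = int(np.argmax(q_values(fgsm_attack(s, eps=0.2))))
print(clean_action, adv_action)       # the perturbation flips 0 -> 1
```

With a real DQN the gradient would be computed through the network by automatic differentiation, but the attack structure is the same: a small observation perturbation, imperceptible at the data level, that manipulates the resulting trading policy.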
Bio:
Nancirose Piazza is a PhD student in the University of New Haven’s Engineering and Applied Science program. She holds a Bachelor of Science in Mathematics and a Master of Science in Data Science. She is a researcher at the Secured and Assured Intelligence Learning Lab. Her current research focuses on the vulnerabilities of trading Deep Reinforcement Learning agents to adversarial attacks, developing a domain-specific threat model for trading agents, and theoretical analysis of a recent mitigation algorithm called Constrained Randomization of Policy.