ADVERSARIAL ATTACKS ON DEEP TRADING ALGORITHMS

Overview

The Markov Decision Process formalism has allowed Reinforcement Learning to become a realistic method for automating complex tasks. Combined with Neural Networks' ability to extract features from high-dimensional data, Deep Reinforcement Learning has become an appealing approach to trading automation. Such systems fall under the umbrella of High Frequency Trading, the automation of transactions within very short time intervals. However, Neural Networks are known to be susceptible to adversarial attacks, and recent research has pursued threat modeling and the evaluation of Neural Network robustness. It follows that Deep Reinforcement Learning agents are susceptible to the same classes of attacks and that their robustness should likewise be evaluated through adversarial attacks. This project demonstrates the compromise of trading Deep Reinforcement Learning agents through Deep Reinforcement Learning threat modeling. Accordingly, the deployment of trading agents in practice should be informed by existing threat models for Deep Reinforcement Learning agents and should account for the robustness and resilience of such agents against adversarial attacks. Additional mitigation methods may be the subject of further research.
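To make the attack surface concrete, the following is a minimal sketch of a fast-gradient-sign (FGSM-style) perturbation of a trading agent's observed market state, in the spirit of known attacks on Deep Reinforcement Learning policies. It is illustrative only, not the method of the publication below; the policy network, its dimensions, and the buy/hold/sell action set are hypothetical.

    # Minimal sketch: FGSM-style attack on a hypothetical DRL trading policy.
    # The attacker nudges the observed market state within a small L-infinity
    # budget to flip the policy's chosen action.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Hypothetical policy: 10 market features -> 3 action logits (buy, hold, sell).
    policy = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))

    state = torch.randn(1, 10, requires_grad=True)  # observed market state
    logits = policy(state)
    action = logits.argmax(dim=1)                   # the agent's chosen action

    # Ascend the gradient of the cross-entropy loss w.r.t. the state, pushing
    # the policy away from the action it would otherwise have taken.
    loss = nn.functional.cross_entropy(logits, action)
    loss.backward()

    epsilon = 0.05  # perturbation budget (L-infinity norm)
    adversarial_state = (state + epsilon * state.grad.sign()).detach()

    adv_action = policy(adversarial_state).argmax(dim=1)
    print(f"original action: {action.item()}, adversarial action: {adv_action.item()}")

A perturbation this small is typically imperceptible in noisy market features, which is what makes such threat models relevant to deployed trading agents.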

Current Team:

Nancirose Piazza
Yaser Faghan
Ali Fathi (Director of AI Model Risk at RBC Enterprise Model Risk Management Group)
PI: Vahid Behzadan

Affiliated Research Group:

Enterprise Model Risk Management Group, Royal Bank of Canada (RBC)

Tools and Datasets:

Publications:

  1. Faghan, Y., Piazza, N., Behzadan, V., & Fathi, A. (2020). Adversarial Attacks on Deep Algorithmic Trading Policies. arXiv preprint arXiv:2010.11388. [Accepted at CAMLIS 2021]
