ADVERSARIAL ATTACK ON EEG-BASED BCI DEVICES

Overview

The proposed project addresses a gap in the security analysis of BCIs by investigating the susceptibility of Motor Imagery (MI) BCIs to perturbations in the sensory stimuli observed by the BCI user. Inspired by adversarial example attacks against machine learning models, we hypothesize that the integration of cognitive, measurement, and machine learning components in EEG-based BCIs may also be vulnerable to Adversarial Stimuli, defined as minor perturbations induced directly at the sensory level. Based on preliminary experimental validation of this hypothesis, this project investigates the robustness of BCI devices in four inter-related thrusts:

  1. Investigating the susceptibility of EEG-based MI BCIs to three modes of adversarial stimuli (i.e., visual, auditory, and tactile);
  2. Exploring and developing tools, techniques, and frameworks for evaluating the robustness and resilience of MI BCIs to adversarial stimuli (a minimal sketch of such an evaluation follows this list);
  3. Investigating the functional separability of adversarially induced potentials from the intended MI signals in EEG-based MI BCIs;
  4. Investigating the utility of adversarial training and variants of Error-Related Potentials (ErrP) in mitigating the impact of adversarial stimuli.
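
As an illustration of the kind of robustness evaluation pursued in thrust 2, the sketch below simulates how an additive, stimulus-evoked perturbation could degrade an MI decoder. Everything here is an assumption for illustration: the EEG data are synthetic, the decoder is a simple band-power + LDA pipeline, and the damped 12 Hz burst is only a stand-in for an adversarially evoked potential; it does not reflect the project's actual experimental setup or codebase.

```python
"""Illustrative sketch (not the project's codebase): measures how a
motor-imagery classifier's accuracy degrades when a synthetic evoked
potential -- standing in for an adversarially induced sensory response --
is added to test epochs."""
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
fs, n_ch, n_samp = 250, 8, 500           # 2-second epochs at 250 Hz
n_trials = 200                           # trials per class

def make_epochs(freq, n):
    """Synthetic MI-like epochs: class-specific oscillation + noise."""
    t = np.arange(n_samp) / fs
    sig = np.sin(2 * np.pi * freq * t)   # class-dependent rhythm
    x = rng.normal(0.0, 1.0, (n, n_ch, n_samp))
    x[:, :4, :] += 0.8 * sig             # stronger over first 4 channels
    return x

# Two MI classes, crudely modeled as 10 Hz (mu) vs. 22 Hz (beta) rhythms.
X = np.concatenate([make_epochs(10, n_trials), make_epochs(22, n_trials)])
y = np.concatenate([np.zeros(n_trials), np.ones(n_trials)])

def bandpower_features(x):
    """Log band power in mu (8-13 Hz) and beta (18-26 Hz) per channel."""
    freqs = np.fft.rfftfreq(n_samp, 1 / fs)
    psd = np.abs(np.fft.rfft(x, axis=-1)) ** 2
    feats = []
    for lo, hi in [(8, 13), (18, 26)]:
        band = (freqs >= lo) & (freqs <= hi)
        feats.append(np.log(psd[..., band].mean(axis=-1)))
    return np.concatenate(feats, axis=-1)

# Train/test split and a simple LDA decoder.
idx = rng.permutation(len(y))
tr, te = idx[: len(y) // 2], idx[len(y) // 2 :]
clf = LinearDiscriminantAnalysis().fit(bandpower_features(X[tr]), y[tr])

def evoked_burst(amp):
    """Hypothetical adversarially evoked response: damped 12 Hz burst."""
    t = np.arange(n_samp) / fs
    return amp * np.exp(-3 * t) * np.sin(2 * np.pi * 12 * t)

# Sweep the perturbation amplitude and report the resulting accuracy.
for amp in [0.0, 0.5, 1.0, 2.0]:
    X_adv = X[te] + evoked_burst(amp)    # broadcast over all channels
    acc = clf.score(bandpower_features(X_adv), y[te])
    print(f"perturbation amplitude {amp:.1f}: accuracy {acc:.2f}")
```

Under this toy model, accuracy tends to fall as the burst amplitude grows, since the injected mu-band power masks the class-discriminative rhythms; a real evaluation would replace the synthetic burst with EEG recorded under actual perturbed sensory events.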

Current Team Members:

Bibek Upadhayay

Past Team Members:

Christopher Howard
Karrie LeDuc-Santoro

Principal Investigator:

Vahid Behzadan

Affiliate Organizations:

Office of Naval Research (ONR)

Tools and Datasets:

Code: GitHub

TL;DR: Adversarial Stimuli: Attacking Brain-Computer Interfaces via Perturbed Sensory Events.

Pre-print: arXiv

Publications: Adversarial Stimuli: Attacking Brain-Computer Interfaces via Perturbed Sensory Events.
