Overview
This project addresses this gap by investigating the susceptibility of Motor Imagery (MI) BCIs to perturbations of the sensory stimuli perceived by the BCI user. Inspired by adversarial example attacks against machine learning models, we hypothesize that the integration of cognitive, measurement, and machine learning components in EEG-based BCIs may render them vulnerable to Adversarial Stimuli, defined as minor perturbations induced directly at the sensory level. Building on preliminary experimental validation of this hypothesis, the project investigates the robustness of BCI devices across three inter-related thrusts.
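For context, the classical adversarial-example construction that inspires this hypothesis can be sketched as follows. The snippet below applies the Fast Gradient Sign Method (FGSM) to a toy EEG classifier; the model architecture, input dimensions, and epsilon value are illustrative assumptions and do not describe the project's actual pipeline, in which the perturbation targets the sensory stimulus rather than the recorded signal.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEEGClassifier(nn.Module):
    """Hypothetical stand-in for an MI-BCI classifier (not the project's model)."""
    def __init__(self, n_channels=8, n_samples=256, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_channels * n_samples, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def fgsm(model, x, y, epsilon=0.01):
    """Fast Gradient Sign Method: add a small, bounded perturbation that
    increases the classifier's loss on the true label y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

if __name__ == "__main__":
    model = ToyEEGClassifier()
    x = torch.randn(1, 8, 256)   # one synthetic EEG window (batch, channels, samples)
    y = torch.tensor([0])        # assumed true class, e.g. "left-hand" motor imagery
    x_adv = fgsm(model, x, y)
    print("clean prediction:      ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
    print("max |perturbation|:    ", (x_adv - x).abs().max().item())
```

In the Adversarial Stimuli setting, an analogous perturbation would instead be realized in the stimulus presented to the user (for example, a small change in a visual cue) rather than injected into the EEG signal directly.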
Current Team Members:
PI: Vahid Behzadan
Past Team Members:
Christopher Howard
Karrie LeDuc-Santoro
Affiliate Organizations:
Office of Naval Research (ONR)
Tools and Datasets:
Code: GitHub
Publications:
Adversarial Stimuli: Attacking Brain-Computer Interfaces via Perturbed Sensory Events. Pre-print: arXiv.