ADVERSARIAL MANIPULATION OF AUTOMATED OSINT SOLUTIONS

Overview

Deep learning models are increasingly used to automatically detect and classify Cyber Threat Intelligence (CTI). However, these models are susceptible to manipulation through slight alterations to their input samples. This weakness allows malicious actors to evade automated CTI monitoring and, where the CTI system feeds a NextGen security solution, to compromise that solution as well.

To address this issue, our project aims to improve the robustness of deep learning models trained to detect and classify CTI data obtained from Open-Source Intelligence (OSINT). We explore techniques to harden these models against such attacks and evaluate those techniques in both offensive and defensive modes by generating adversarial examples with transformer-based language models; a sketch of one such attack appears below.
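
To make the attack side concrete, the following is a minimal sketch of probing a text-based CTI classifier with a word-substitution attack using the open-source TextAttack library. The checkpoint name "our-org/cti-bert-classifier", the label convention, and the sample sentence are hypothetical placeholders for illustration, not artifacts of this project.

    # Sketch: attacking a fine-tuned CTI text classifier with the
    # TextFooler word-substitution recipe from the TextAttack library.
    from textattack import Attacker, AttackArgs
    from textattack.attack_recipes import TextFoolerJin2019
    from textattack.datasets import Dataset
    from textattack.models.wrappers import HuggingFaceModelWrapper
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    checkpoint = "our-org/cti-bert-classifier"  # hypothetical fine-tuned model
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    wrapper = HuggingFaceModelWrapper(model, tokenizer)

    # TextFooler performs greedy synonym swaps constrained by embedding
    # similarity, so the perturbed text keeps its original meaning while
    # flipping the classifier's prediction.
    attack = TextFoolerJin2019.build(wrapper)

    # One (text, label) pair; label 1 = "contains CTI" in this sketch.
    dataset = Dataset(
        [("New ransomware strain exfiltrates credentials over DNS.", 1)]
    )

    attacker = Attacker(attack, dataset, AttackArgs(num_examples=1))
    results = attacker.attack_dataset()  # prints original vs. perturbed text

The same perturbed samples can then be folded back into training data (adversarial training) when evaluating the defensive mode.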

Overall, our goal is to enhance the security of automated CTI monitors and NextGen security solutions, reducing the risk of successful cyber attacks.

PI: Vahid Behzadan, Ph.D.

Current Team Members:
Bibek Upadhayay

Past Team Members:
Rachel Blumenthal (Yale)
Keelan Carey

Affiliate Organizations:

Office of Naval Research (ONR)

Tools and Datasets:

Code and Dataset: GitHub

Publications:

N/A (forthcoming)