Overview
We investigate the paradigm of adversarial attacks that target the emergent dynamics of Complex Adaptive Smart Cities (CASCs). To facilitate the analysis of such attacks, we develop quantitative definitions and metrics for attack, vulnerability, and resilience in the context of CASC security. Furthermore, we propose multiple schemes for classifying attack surfaces and vectors in CASCs, complemented with examples of practical attacks. Building on this foundation, we propose a reinforcement learning-based framework for the simulation and analysis of such attacks on CASCs, and demonstrate its performance through two real-world case studies targeting power grids and traffic management systems. We also remark on future research directions in the analysis and design of secure smart cities and complex adaptive systems.
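To give a concrete sense of what such an RL-based attack simulation looks like, the following is a minimal, illustrative sketch: an adversarial agent perturbs per-node loads in a toy power-grid model and is rewarded for degrading a simple resilience metric. The environment dynamics, reward, and metric here are hypothetical stand-ins chosen for illustration, not the framework's actual code or case-study models.

```python
import numpy as np

class ToyGridAttackEnv:
    """Toy CASC attack environment: the adversary picks a node to overload,
    and 'resilience' is the fraction of nodes still within capacity."""

    def __init__(self, n_nodes=5, capacity=1.0):
        self.n_nodes, self.capacity = n_nodes, capacity
        self.reset()

    def reset(self):
        self.load = np.full(self.n_nodes, 0.5)  # nominal per-node load
        return self.load.copy()

    def step(self, action):
        self.load[action] += 0.2  # attacker injects demand at one node
        self.load = np.clip(self.load - 0.05, 0.0, None)  # operator sheds load
        failed = self.load > self.capacity
        resilience = 1.0 - failed.mean()  # fraction of nodes still operating
        reward = 1.0 - resilience         # attacker is rewarded for failures
        done = bool(failed.all())         # episode ends on total collapse
        return self.load.copy(), reward, done

env = ToyGridAttackEnv()
obs = env.reset()
for t in range(50):
    action = np.random.randint(env.n_nodes)  # stand-in for a learned policy
    obs, reward, done = env.step(action)
    if done:
        break
```

In the full framework, the random action above would be replaced by a trained reinforcement learning policy, and the toy load dynamics by a high-fidelity simulator of the targeted system (e.g., a power grid or traffic network).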
Current Team Members
James Minton
PI: Vahid Behzadan
Affiliate Research Groups
Tools and Datasets
TrolleyMod v1.0: An Open-Source Simulation and Data-Collection Platform for Ethical Decision Making in Autonomous Vehicles: TrolleyMod is an open-source platform based on the CARLA simulator for the collection of ethical decision-making data for autonomous vehicles. This platform is designed to facilitate experiments aiming to observe and record human decisions and actions in high-fidelity simulations of ethical dilemmas that arise in the context of driving. Targeting experiments in the class of trolley problems, TrolleyMod provides a seamless approach to creating new experimental settings and environments with the realistic physics engine and the high-quality graphical capabilities of CARLA and the Unreal Engine. TrolleyMod also provides a straightforward interface between the CARLA environment and Python to enable the implementation of custom controllers, such as deep reinforcement learning agents. The results of such experiments can be used for sociological analyses, as well as for the training and tuning of value-aligned autonomous vehicles based on social values inferred from observations.
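As a rough illustration of how a custom controller plugs into CARLA from Python, the sketch below connects to a running CARLA server, spawns a vehicle, and drives it with a stub control loop. It uses CARLA's standard Python client API rather than TrolleyMod's own wrappers (whose exact interface is not shown here); the host, port, blueprint, and control values are assumptions for the example.

```python
import carla

# Assumes a CARLA server is already running on localhost:2000.
client = carla.Client("localhost", 2000)
client.set_timeout(5.0)
world = client.get_world()

# Spawn an ego vehicle at the first available spawn point.
blueprint = world.get_blueprint_library().filter("vehicle.tesla.model3")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(blueprint, spawn_point)

try:
    for _ in range(200):
        # A custom controller (e.g., a deep RL policy) would compute
        # throttle/steer from observations here; constants serve as a stub.
        vehicle.apply_control(carla.VehicleControl(throttle=0.4, steer=0.0))
        world.wait_for_tick()
finally:
    vehicle.destroy()
```

A deep reinforcement learning agent would replace the constant control values with actions computed from sensor observations at each tick.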
Publications