TLDR: Researchers developed an open-source simulation framework to test autonomous vehicles against adversarial attacks on both their perception (LiDAR data) and communication (V2X, GPS) systems. The framework integrates multiple simulators (CARLA, SUMO, Artery) and ROS 2, allowing for high-fidelity modeling of physical environments, traffic, and V2X networks. It supports attack types such as point perturbation, detachment, and attachment for LiDAR, alongside V2X message manipulation, Sybil attacks, and GPS spoofing. Evaluations showed significant degradation in a 3D object detector’s performance under these simulated attacks, highlighting the framework’s effectiveness in identifying AV vulnerabilities.
Autonomous vehicles (AVs) are poised to transform transportation, offering promises of enhanced safety and efficiency. However, their reliance on sophisticated perception and communication systems also makes them susceptible to adversarial attacks. These attacks can subtly manipulate sensor inputs or communication signals, potentially leading to dangerous failures and reduced reliability.
Real-world testing of such extreme scenarios, including intentional attacks, is often expensive, time-consuming, and carries significant safety and legal risks. This is where high-fidelity simulators become crucial. They provide a safe and scalable environment for rigorously evaluating AV performance under controlled adversarial conditions, helping to improve their robustness before they hit the roads.
A New Framework for Adversarial Attack Simulation
A new open-source integrated simulation framework has been developed to address the need for comprehensive testing of AVs against multi-domain adversarial scenarios. This framework is designed to generate adversarial attacks targeting both the perception and communication layers of autonomous vehicles. Unlike existing solutions, which often target a single class of attacks or lack end-to-end integration, this new framework offers a comprehensive, modular, and extensible environment.
The framework provides high-fidelity modeling of physical environments, traffic dynamics, and vehicle-to-everything (V2X) networking. It orchestrates these complex components through a unified core, synchronizing multiple simulators based on a single configuration file. This allows for the creation of diverse perception-level attacks on LiDAR sensor data, as well as communication-level threats like V2X message manipulation and GPS spoofing. Importantly, its integration with ROS 2 (Robot Operating System 2) ensures compatibility with third-party AV software stacks.
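To make the single-configuration-file idea concrete, here is a minimal sketch of what such a scenario description might look like, expressed as a Python dict. All key names (`map`, `traffic`, `v2x`, `attacks`, etc.) are illustrative assumptions, not the framework's actual schema.

```python
# Hypothetical scenario description driving all simulators from one place.
# Key names are illustrative assumptions, not the framework's real schema.
scenario = {
    "map": "Town03",                       # CARLA map to render
    "duration_s": 120,                     # simulated time in seconds
    "traffic": {"vehicles": 50, "sumo_step_s": 0.1},
    "v2x": {"stack": "artery", "cam_interval_ms": 100},
    "attacks": [
        {"type": "point_perturbation", "target": "ego_lidar", "epsilon": 0.05},
        {"type": "sybil", "ghost_vehicles": 3},
    ],
}

def validate_scenario(cfg: dict) -> bool:
    """Check that the minimal required sections are present."""
    required = {"map", "duration_s", "traffic", "v2x", "attacks"}
    return required.issubset(cfg) and all("type" in a for a in cfg["attacks"])
```

A single file like this is what lets the unified core configure CARLA, SUMO, and Artery consistently instead of maintaining three separate, potentially drifting configurations.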
How the Framework Works
The system integrates three main simulators: CARLA, SUMO, and Artery/OMNeT++. CARLA is responsible for rendering the physical environment and simulating realistic sensor data such as LiDAR, cameras, and radar. SUMO handles traffic simulation and acts as a central controller for synchronization. Artery, built on OMNeT++, simulates V2X communication using standard protocols, tracking vehicle mobility and publishing Cooperative Awareness Messages (CAMs) to ROS 2.
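The synchronization described above can be sketched as a lockstep loop that advances every simulator by the same time step so their clocks never diverge. The stub adapters below stand in for the real CARLA, SUMO, and Artery/OMNeT++ interfaces; names and the loop structure are assumptions for illustration.

```python
# Illustrative lockstep co-simulation loop. StubSimulator stands in for the
# real CARLA/SUMO/Artery adapters; the actual framework drives each through
# its own API, with SUMO acting as the central controller.
class StubSimulator:
    def __init__(self, name):
        self.name = name
        self.ticks = 0

    def step(self, dt):
        # Advance this simulator's clock by one synchronized step.
        self.ticks += 1

def run_lockstep(simulators, dt=0.1, duration=1.0):
    """Advance all simulators in lockstep so no clock drifts ahead."""
    steps = round(duration / dt)
    for _ in range(steps):
        for sim in simulators:
            sim.step(dt)
    return steps

sims = [StubSimulator(n) for n in ("carla", "sumo", "artery")]
n_steps = run_lockstep(sims, dt=0.1, duration=1.0)
```

The key property is that every simulator has processed the same number of steps at every point in the run, which is what keeps rendered sensor data, traffic state, and V2X messages mutually consistent.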
At its core, the framework has a Simulation Logic Core (SLC) module. This module is responsible for initializing, orchestrating, and recording each simulation session. It includes a scenario-based data generation module that configures simulators from a single scenario description file, an orchestration module for synchronizing and controlling the simulators, and an attack generation module. The attack generation module is capable of injecting adversarial point cloud data for 3D perception attacks and communication-level attacks targeting the V2X layer.
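One way to picture the attack generation module is as a plugin-style registry that intercepts data on its way from the simulators to the AV stack. The sketch below is a hypothetical design under that assumption; the class, channel names, and hook signatures are not taken from the framework itself.

```python
# Hypothetical plugin-style attack registry, sketching how an attack
# generation module might intercept simulator outputs before they reach
# the AV software stack. Names and signatures are assumptions.
class AttackRegistry:
    def __init__(self):
        self._hooks = {"lidar": [], "cam": []}   # per-channel attack hooks

    def register(self, channel, attack_fn):
        self._hooks[channel].append(attack_fn)

    def apply(self, channel, data):
        # Run every registered attack on the outgoing data, in order.
        for attack in self._hooks[channel]:
            data = attack(data)
        return data

registry = AttackRegistry()
# Example hook: drop every second LiDAR point (a crude detachment-style attack).
registry.register("lidar", lambda points: points[::2])
attacked = registry.apply("lidar", [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0),
                                    (2.0, 2.0, 2.0), (3.0, 3.0, 3.0)])
```

A design like this keeps attacks pluggable: new perception- or communication-level attacks can be added without touching the orchestration or data-generation modules.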
Types of Attacks Supported
The framework supports a range of adversarial attacks:
- Perception-Level Attacks: These attacks directly manipulate the digital input to LiDAR-based 3D object detectors without altering the physical environment.
  - Adversarial Point Perturbation: Introduces small, structured modifications to the 3D coordinates of points in a point cloud to deceive detectors.
  - Adversarial Point Detachment: Selectively removes critical points from the point cloud to disrupt detector performance, exploiting the sparsity of LiDAR data.
  - Adversarial Point Attachment: Adds a small number of deliberately placed synthetic points to the input point cloud to degrade detector performance.
- Communication-Level Attacks: These target the V2X communication layer, which is crucial for cooperative AVs.
  - V2X Message Manipulation: Malicious actors can forge, alter, or replay CAMs. Examples include fake position or speed attacks, where falsified coordinates or velocities are broadcast.
  - Sybil Attacks: A single attacker impersonates multiple distinct vehicles by sending out multiple CAMs with different vehicle IDs and locations, creating “ghost” vehicles.
  - GPS Spoofing Attacks: Deceives a vehicle’s navigation system by broadcasting counterfeit GPS signals, leading to incorrect localization and potentially dangerous maneuvers. This includes Random Bias Attacks (injecting random constant errors) and Position Altering Attacks (falsifying positions to significantly incorrect values).
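The three perception-level primitives can be illustrated with toy, self-contained versions operating on a point cloud represented as a list of `(x, y, z)` tuples. Note the simplification: real attacks choose which points to move, drop, or add adversarially (for example, via detector gradients or saliency), whereas here the choices are random or fixed purely to show the mechanics.

```python
import random

# Toy versions of the three LiDAR attack primitives. Real attacks select
# points and offsets adversarially; here the choices are random/fixed
# purely to illustrate the mechanics of each primitive.

def perturb(points, epsilon=0.05, rng=None):
    """Point perturbation: shift each coordinate by at most epsilon."""
    rng = rng or random.Random(0)
    return [tuple(c + rng.uniform(-epsilon, epsilon) for c in p) for p in points]

def detach(points, indices):
    """Point detachment: remove the points a real attack deems most salient."""
    drop = set(indices)
    return [p for i, p in enumerate(points) if i not in drop]

def attach(points, synthetic):
    """Point attachment: add a few deliberately placed synthetic points."""
    return points + list(synthetic)

cloud = [(1.0, 2.0, 0.5), (1.2, 2.1, 0.4), (5.0, 0.0, 0.3)]
perturbed = perturb(cloud, epsilon=0.05)
detached = detach(cloud, indices=[1])
attached = attach(cloud, [(0.0, 0.0, 2.0)])
```

Even in this toy form, the asymmetry the paper exploits is visible: perturbation keeps the cloud the same size and barely changes it geometrically, while detachment and attachment change the point count, trading perceptibility against attack strength.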
Evaluating Attack Effectiveness
To evaluate the impact of these attacks, the researchers used metrics such as the mAP ratio (mean Average Precision ratio) to assess detector robustness and Chamfer Distance (CD) to quantify the perceptibility of the adversarial examples. A lower mAP ratio indicates a stronger attack, while a lower CD suggests the attack is less noticeable.
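Both metrics are straightforward to compute. The sketch below uses the common symmetric average-of-nearest-neighbour form of Chamfer Distance on squared Euclidean distances; the paper's exact variant may differ in normalization or the use of square roots.

```python
# Plain-Python sketches of the two evaluation metrics. The Chamfer Distance
# here is the common symmetric average-of-nearest-neighbour form over squared
# distances; the paper's exact variant may differ slightly.

def chamfer_distance(A, B):
    """Symmetric Chamfer Distance between two 3D point sets."""
    def sq(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    d_ab = sum(min(sq(p, q) for q in B) for p in A) / len(A)
    d_ba = sum(min(sq(q, p) for p in A) for q in B) / len(B)
    return d_ab + d_ba

def map_ratio(map_attacked, map_clean):
    """mAP ratio: attacked-over-clean mean Average Precision.

    Lower means a stronger attack; 1.0 means the attack had no effect."""
    return map_attacked / map_clean

identical = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
cd_zero = chamfer_distance(identical, identical)   # identical clouds -> 0.0
ratio = map_ratio(0.30, 0.60)                      # detector at half strength
```

Together the two metrics capture the attacker's trade-off: an ideal adversarial point cloud drives the mAP ratio toward zero while keeping the Chamfer Distance to the original cloud small.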
The framework was demonstrated by evaluating its impact on SECOND, a state-of-the-art 3D object detector. Results showed that the framework successfully generated adversarial LiDAR data that closely resembled original point clouds while significantly reducing the detector’s performance. Point perturbation attacks, for instance, caused the greatest performance degradation with moderate perceptual distortion. Even removing a small fraction of salient points could lead to complete detection failure.
Conclusion and Future Directions
This open-source integrated simulation framework provides a powerful tool for generating and studying adversarial attacks on autonomous vehicles. By supporting coordinated attacks across both perception and communication surfaces, it enables the study of complex, multi-domain adversarial scenarios. The framework’s modular and pluggable architecture, along with ROS 2 compatibility, makes it suitable for integration with real-world AV software stacks.
Future work aims to extend support for 2D attacks and develop a perception evaluation platform that can compute vulnerability scores and suggest mitigation strategies. For more technical details, refer to the full research paper.


