
LORETTA: Unveiling a Stealthy Poisoning Attack on Dynamic Graph Neural Networks

TLDR: LORETTA is a novel, low-resource adversarial framework that poisons Continuous-Time Dynamic Graphs to degrade Temporal Graph Neural Network (TGNN) performance. It operates in two phases: sparsifying the graph by removing high-impact edges and then replacing them with strategically crafted adversarial negative edges, all while adhering to unnoticeability constraints. LORETTA significantly degrades TGNN performance, outperforms existing attack baselines, and remains undetectable to anomaly detection systems and robust against adversarial defenses, highlighting critical vulnerabilities in current TGNN models.

Temporal Graph Neural Networks (TGNNs) are becoming increasingly vital in many high-stakes applications, from predicting financial trends and powering recommendation systems to detecting fraud. These advanced models are designed to understand and learn from dynamic graphs, which represent relationships that evolve over time, such as social media interactions or transportation patterns. However, their growing importance also highlights a critical vulnerability: their susceptibility to ‘poisoning attacks’.

Poisoning attacks involve subtly altering the training data to corrupt the model’s learning process, ultimately degrading its performance. While such attacks have been studied for static graphs, applying them to dynamic graphs, especially Continuous-Time Dynamic Graphs (CTDGs) which model interactions as a continuous stream, presents unique challenges. Existing methods often require extensive computational resources or unrealistic access to the entire dataset, making them impractical for real-world scenarios.

Introducing LORETTA: A New Approach to Adversarial Attacks

A new framework called LORETTA (Low Resource Two-phase Temporal Attack) has been introduced to address these limitations. LORETTA is a novel adversarial framework specifically designed for CTDGs. Its primary goal is to significantly degrade the performance of TGNNs without requiring expensive surrogate models or full access to the dataset, operating instead with only the training portion of the data.

LORETTA employs a clever two-stage strategy:

1. Sparsification: In this initial phase, LORETTA identifies and removes high-impact edges from the graph. It does this by leveraging one of 16 different temporal importance metrics, effectively weakening the graph’s structure in critical areas.

2. Replacement: After removing these crucial edges, LORETTA strategically inserts an equal number of ‘adversarial negative’ edges. This is done using a novel degree-preserving negative sampling algorithm, ensuring that the new edges mimic natural temporal patterns and node activity, making the attack difficult to detect.
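The two-stage strategy above can be sketched in a few lines of Python. The edge-scoring function and the rewiring rule below are illustrative stand-ins, not the paper's method: LORETTA chooses among 16 temporal importance metrics and uses a dedicated degree-preserving negative sampler, neither of which is reproduced here. Edges are assumed to be `(source, destination, timestamp)` tuples.

```python
import random
from collections import Counter

def temporal_importance(edges):
    """Score each edge by recency times endpoint activity -- a simple
    proxy for a temporal importance metric (an assumption, not one of
    the paper's 16 metrics)."""
    activity = Counter()
    for u, v, _ in edges:
        activity[u] += 1
        activity[v] += 1
    t_max = max(t for _, _, t in edges)
    return {e: (e[2] / t_max) * (activity[e[0]] + activity[e[1]])
            for e in edges}

def loretta_sketch(edges, budget, seed=0):
    """Phase 1: remove the `budget` highest-scoring edges.
    Phase 2: insert an equal number of adversarial negative edges that
    reuse each removed edge's source node and timestamp -- a rough
    degree-preserving rewiring, not the paper's exact sampler."""
    rng = random.Random(seed)
    scores = temporal_importance(edges)
    removed = sorted(edges, key=scores.get, reverse=True)[:budget]
    kept = [e for e in edges if e not in removed]
    existing = {(u, v) for u, v, _ in edges}
    inserted = []
    for u, _, t in removed:
        # Rewire the source to another active node, avoiding pairs that
        # already interact, so the insertion is a genuine negative edge.
        candidates = [w for w, _, _ in kept
                      if w != u and (u, w) not in existing]
        if candidates:
            w = rng.choice(candidates)
            inserted.append((u, w, t))
            existing.add((u, w))
    return kept + inserted
```

Because inserted edges reuse the timestamps of the edges they replace, the poisoned stream keeps the original time span and (when candidates exist) the original edge count, which is what makes this style of attack hard to spot.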

This plug-and-play design allows LORETTA to operate efficiently while adhering to realistic ‘unnoticeability constraints’. These constraints keep the attack stealthy by limiting the total number of modifications and by ensuring that inserted edges carry plausible timestamps, connect recently active nodes, and preserve the original graph’s degree distribution.
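The four constraints are concrete enough to check mechanically. A minimal checker might look like the following; the modification budget and the "recently active" window are illustrative parameters, not values taken from the paper.

```python
from collections import Counter

def is_unnoticeable(original, poisoned, budget, recent_window=2.0):
    """Verify the four stealth constraints on a poisoned edge stream:
    modification budget, plausible timestamps, recently active
    endpoints, and a preserved degree distribution.
    Edges are (u, v, t) tuples."""
    orig, pois = set(original), set(poisoned)
    # 1. Total modifications (removals plus insertions) within budget.
    if len(orig - pois) + len(pois - orig) > budget:
        return False
    t_min = min(t for _, _, t in original)
    t_max = max(t for _, _, t in original)
    for u, v, t in pois - orig:
        # 2. Plausible timestamp: inside the observed time span.
        if not t_min <= t <= t_max:
            return False
        # 3. Both endpoints active within `recent_window` before t.
        for node in (u, v):
            if not any(node in (a, b) and t - recent_window <= s <= t
                       for a, b, s in original):
                return False
    # 4. Degree distribution preserved (compared as a sorted multiset).
    def degree_multiset(edges):
        deg = Counter()
        for a, b, _ in edges:
            deg[a] += 1
            deg[b] += 1
        return sorted(deg.values())
    return degree_multiset(original) == degree_multiset(poisoned)
```

A defender could run the same checks in reverse, which is exactly why an attack that satisfies all four is so difficult for anomaly detectors to flag.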


Demonstrated Effectiveness and Robustness

The impact of LORETTA is substantial. Across four widely used benchmark datasets (MOOC, Wikipedia, UCI, and Enron) and four state-of-the-art (SotA) TGNN models, LORETTA degraded performance by an average of 29.47%, reaching up to 42.0% degradation on MOOC, 31.5% on Wikipedia, 28.8% on UCI, and 15.6% on Enron.

Beyond its potent attack capabilities, LORETTA also demonstrated remarkable stealth and resilience:

  • It consistently outperformed 11 other attack baselines.
  • It remained undetectable to four leading anomaly detection systems, which are designed to spot unusual patterns in data streams.
  • It proved robust against four SotA adversarial defense training methods, meaning models trained with these defenses still suffered significant performance drops.

A key advantage of LORETTA over previous methods like T-SPEAR is its efficiency. T-SPEAR relies on computationally expensive surrogate models and assumes full access to the dataset, which is often not the case for an attacker. LORETTA, by contrast, is a low-resource framework that operates without surrogate models and only requires access to the training data, making it a more practical and realistic threat model.

This research highlights the critical need for more robust defense mechanisms for TGNNs, especially as these models are increasingly deployed in sensitive applications. By understanding the vulnerabilities exposed by LORETTA, researchers can work towards building more secure and resilient temporal graph learning systems. You can read the full research paper here.

Dev Sundaram
Dev Sundaram is an investigative tech journalist with a nose for exclusives and leaks. With stints in cybersecurity and enterprise AI reporting, Dev thrives on breaking big stories—product launches, funding rounds, regulatory shifts—and giving them context. He believes journalism should push the AI industry toward transparency and accountability, especially as Generative AI becomes mainstream. You can reach him at: [email protected]
