TLDR: The research paper introduces EVP, an Event-aware Prompt learning framework for dynamic graphs. EVP addresses the limitation of existing methods by explicitly leveraging historical event knowledge. It extracts, adapts, and aggregates historical events using event prompts, dynamic prompts, and a time decay function. Designed as a plug-in, EVP consistently improves the performance of traditional Dynamic Graph Neural Networks, dynamic graph pre-training, and prompt learning methods on tasks like temporal link prediction and node classification, demonstrating its effectiveness in utilizing historical event information.
In our increasingly interconnected world, many real-world systems can be represented as dynamic graphs, where interactions between entities evolve over time. Think of social networks where friendships form and dissolve, or e-commerce platforms where users make purchases and leave reviews. Understanding these evolving relationships is crucial for various applications, from predicting future trends to recommending content.
Traditional methods for dynamic graph learning, such as Dynamic Graph Neural Networks (DGNNs), have been popular. More recently, prompt learning techniques have emerged, offering a way to adapt pre-trained models to specific downstream tasks more efficiently. However, a significant challenge with existing prompt learning methods for dynamic graphs is their tendency to focus primarily on the relationship between nodes and time, often overlooking the rich information embedded in historical events.
A new research paper, titled “Event-Aware Prompt Learning for Dynamic Graphs” by Xingtong Yu, Ruijuan Liang, Xinming Zhang, and Yuan Fang, introduces a novel framework called EVP. EVP stands for Event-aware dynamic graph Prompt learning, and it’s designed to address this gap by explicitly leveraging historical event knowledge to enhance dynamic graph learning methods.
What is EVP and How Does It Work?
EVP is designed as a flexible plug-in that can be integrated with existing dynamic graph learning techniques. Its core idea revolves around understanding and utilizing the sequence of past events that shape a node’s behavior and interactions. The framework operates in three main stages:
First, **Event Extraction**: For each node in the dynamic graph, EVP identifies and extracts a series of its recent historical events. An event could be anything from a user posting a blog to listening to a new music genre. This collection of past interactions forms a crucial historical context for the node.
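As a rough illustration only (the paper's implementation is not shown here), extracting a node's most recent historical events might look like the sketch below, where the event-tuple layout and function name are hypothetical:

```python
# Hypothetical sketch of per-node event extraction (not the authors' code).
# An "event" here is assumed to be a (node, action, timestamp) tuple.

def extract_recent_events(events, node, k):
    """Return the k most recent historical events involving `node`."""
    node_events = [e for e in events if e[0] == node]
    node_events.sort(key=lambda e: e[2])  # oldest -> newest
    return node_events[-k:]

events = [
    ("u1", "post_blog", 1.0),
    ("u1", "listen_jazz", 3.0),
    ("u2", "post_blog", 2.0),
    ("u1", "listen_rock", 5.0),
]
recent = extract_recent_events(events, "u1", 2)
```

In practice each extracted event would carry a learned embedding rather than a raw tuple, but the per-node, most-recent-k selection is the essential idea.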
Second, **Event Adaptation**: Once historical events are extracted, EVP doesn’t just use them as they are. It introduces an event adaptation mechanism. This involves generating special “event prompts” that modify the representations (embeddings) of these historical events. The goal is to align the fine-grained characteristics of each event with the specific requirements of the downstream task, whether it’s predicting future links or classifying nodes.
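The adaptation step can be pictured as applying a learnable prompt vector to each event embedding. The sketch below uses element-wise modulation, which is one common prompt design but an assumption here, not necessarily the paper's exact operator:

```python
# Hypothetical sketch: adapt an event embedding with an event prompt.
# Element-wise multiplication is an assumed prompt design; in a real
# system the prompt vector would be learned for the downstream task.

def adapt_event(event_embedding, event_prompt):
    """Modulate an event embedding with a (learnable) prompt vector."""
    assert len(event_embedding) == len(event_prompt)
    return [e * p for e, p in zip(event_embedding, event_prompt)]

emb = [0.5, -1.0, 2.0]      # toy event embedding
prompt = [1.0, 0.5, 0.25]   # toy event prompt (learned in practice)
adapted = adapt_event(emb, prompt)
```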
Third, **Event Aggregation**: Historical events don’t all have the same importance. More recent events might be more relevant, but older events could reveal long-term patterns. EVP tackles this with an event aggregation mechanism. It uses a “dynamic prompt” to adaptively weigh the importance of different historical events. Additionally, a “time decay function” is applied, giving more emphasis to recent events while still considering the influence of earlier ones. This ensures that the aggregated historical knowledge is both comprehensive and relevant to the current context.
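The aggregation step described above can be sketched as attention-style weighting against a dynamic prompt, combined with exponential time decay. Everything below (the scoring function, the decay form, the parameter names) is an illustrative assumption rather than the paper's specification:

```python
import math

# Hypothetical sketch of event aggregation (not the authors' code):
# relevance scores from a "dynamic prompt" are combined with an
# exponential time decay, normalized, and used to average event embeddings.

def aggregate_events(event_embs, timestamps, dynamic_prompt, now, decay=0.5):
    # relevance of each event w.r.t. the dynamic prompt (dot product)
    scores = [sum(e * p for e, p in zip(emb, dynamic_prompt))
              for emb in event_embs]
    # exponential time decay: recent events retain more weight
    decays = [math.exp(-decay * (now - t)) for t in timestamps]
    weights = [math.exp(s) * d for s, d in zip(scores, decays)]
    total = sum(weights)
    weights = [w / total for w in weights]
    dim = len(event_embs[0])
    return [sum(w * emb[i] for w, emb in zip(weights, event_embs))
            for i in range(dim)]

embs = [[1.0, 0.0], [0.0, 1.0]]          # two toy event embeddings
agg = aggregate_events(embs, timestamps=[1.0, 2.0],
                       dynamic_prompt=[1.0, 1.0], now=3.0)
```

With equal relevance scores, the more recent event (timestamp 2.0) receives the larger weight, which is exactly the behavior the time decay function is meant to produce.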
EVP as a Plug-in
One of EVP’s key strengths is its versatility. It can seamlessly integrate with various dynamic graph learning methods:
- **Traditional DGNNs:** EVP can use a DGNN directly as a backbone for pre-training and then enhance its downstream adaptation.
- **Dynamic Graph Pre-training Methods:** For models that first pre-train on general graph properties and then fine-tune, EVP can inject historical event knowledge into the pre-trained node embeddings before the fine-tuning phase.
- **Dynamic Graph Prompt Learning Methods:** Even existing prompt learning approaches can benefit. EVP can be applied after their initial prompt-adjusted features are generated, further enriching the node embeddings with event-aware insights.
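Conceptually, the plug-in pattern in all three cases amounts to fusing a backbone's node embedding with the aggregated event embedding before the downstream head. The sketch below uses simple addition as the fusion step and stand-in functions throughout; both are illustrative assumptions, not the paper's design:

```python
# Hypothetical sketch of EVP as a plug-in: any backbone (a DGNN, a
# pre-trained encoder, or a prompt-learning model) produces a node
# embedding, which is then enriched with aggregated event knowledge.
# Simple element-wise addition is an assumed fusion choice.

def backbone_embed(node):
    """Stand-in for any dynamic graph backbone's node encoder."""
    return [0.1, 0.2, 0.3]

def event_knowledge(node):
    """Stand-in for the extracted/adapted/aggregated event embedding."""
    return [0.05, -0.1, 0.0]

def evp_enrich(node):
    base = backbone_embed(node)
    evt = event_knowledge(node)
    return [b + e for b, e in zip(base, evt)]

z = evp_enrich("u1")  # event-aware node embedding for the downstream task
```

Because the enrichment happens at the embedding level, the backbone itself needs no architectural changes, which is what makes the plug-in integration broadly applicable.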
Experimental Validation
The researchers conducted extensive experiments on four public datasets: Wikipedia, Reddit, MOOC, and Genre. They evaluated EVP’s performance on two common dynamic graph tasks: temporal link prediction (predicting future connections) and temporal node classification (categorizing nodes over time). The results consistently showed that EVP outperforms state-of-the-art methods across both tasks and in both transductive (nodes seen during training) and inductive (new nodes at test time) settings.
Ablation studies, which involved testing EVP with certain components removed, confirmed the importance of both the event adaptation and event aggregation mechanisms. The dynamic prompt and time decay function were found to be crucial for effectively leveraging historical event knowledge.
In conclusion, EVP offers a significant step forward in dynamic graph learning by effectively incorporating historical event knowledge. Its plug-in nature makes it a valuable enhancement for a wide range of existing methods, promising improved performance in understanding and predicting evolving graph structures. The full research paper provides further technical details.