TLDR: A new model called Deep Graph Neural Point Process (DGNPP) improves predictions in networks where interactions change over time. Unlike previous methods that ignored network structure, DGNPP uses two main components: a Node Aggregation Layer to capture static network connections and a Self Attentive Layer to track dynamic changes. This approach allows DGNPP to more accurately predict both what event will happen next and when it will occur, while also being more computationally efficient than existing models.
Temporal Interactive Networks (TINs) are dynamic systems in which interactions between entities such as people, devices, or systems change over time. Unlike static networks, TINs are crucial for understanding evolving relationships in real-world applications, including recommendation systems, user profiling, and social networks. However, most existing learning approaches overlook a TIN's underlying topological structure, treating it as a simple multi-sequence prediction problem and failing to leverage the intricate connections within the network.
Addressing this limitation, researchers have introduced a novel model called the Deep Graph Neural Point Process (DGNPP). This innovative model integrates the network’s topological structure into the process of dynamically updating embeddings over time, leading to more precise predictions. The DGNPP model is built upon two core components: the Node Aggregation Layer and the Self Attentive Layer.
Node Aggregation for Static Structure
The Node Aggregation Layer (NAL) is designed to capture the static topological information of the network. It generates stable, unchanging representations (embeddings) for users and items by considering their connections within snapshots of the network at specific times. Instead of complex transformations, this layer simplifies the process by summing the weights of neighboring nodes, making it efficient while still effectively representing the network’s structure.
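To make the neighbor-summing idea concrete, here is a minimal sketch of one aggregation step. The paper's exact formulation is not given in the text above, so the function name, the self-connection term, and the toy snapshot are all illustrative assumptions; the key point it demonstrates is that the static embedding is a weighted sum over neighbors rather than a learned transformation.

```python
import numpy as np

def aggregate_static_embedding(node, adj_weights, base_embeddings):
    """Sketch of a Node Aggregation Layer step (assumed form): the static
    embedding of `node` is a weighted sum of its neighbours' base
    embeddings in one network snapshot, with no complex transformation."""
    neighbors = np.nonzero(adj_weights[node])[0]
    if neighbors.size == 0:
        return base_embeddings[node].copy()
    weights = adj_weights[node, neighbors]         # edge weights to neighbours
    agg = weights @ base_embeddings[neighbors]     # weighted neighbour sum
    return base_embeddings[node] + agg             # keep the node's own signal

# Toy snapshot: 3 nodes with 2-d base embeddings
adj = np.array([[0., 1., 2.],
                [1., 0., 0.],
                [2., 0., 0.]])
emb = np.array([[1., 0.],
                [0., 1.],
                [1., 1.]])
print(aggregate_static_embedding(0, adj, emb))  # → [3. 3.]
```

Because the step is just a sparse weighted sum, it costs one pass over each node's edges per snapshot, which is consistent with the efficiency claim above.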
Self-Attention for Dynamic Evolution
Complementing the NAL, the Self Attentive Layer (SAL) is responsible for generating dynamic embeddings that evolve continuously over time. This layer uses a self-attention mechanism to capture long-range dependencies within the sequence of interactions, overcoming limitations of previous methods like RNNs and LSTMs in handling long sequences. The SAL consists of an Attentive Interaction Layer, which incorporates temporal and node information to create time-related interaction embeddings, and a Temporal Shift Layer, which models the passage of time to update dynamic embeddings.
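The two SAL sub-layers can be sketched as follows. The time encoding, the exponential-decay form of the temporal shift, and all names here are assumptions for illustration, not the paper's exact equations; what the sketch shows is the structural idea: causal self-attention lets each interaction attend to all earlier ones (unlike an RNN's step-by-step recurrence), and the temporal shift then evolves the latest embedding as time passes with no new events.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_interactions(seq_emb, times):
    """Sketch of the Attentive Interaction Layer (assumed form): add a
    sinusoidal time encoding, then run causal self-attention so each
    interaction can attend to every earlier one in the sequence."""
    d = seq_emb.shape[1]
    time_enc = np.sin(times[:, None] / 10.0 ** (2 * np.arange(d) / d))
    h = seq_emb + time_enc                        # time-aware interaction embeddings
    scores = h @ h.T / np.sqrt(d)                 # attention logits
    mask = np.triu(np.ones_like(scores), k=1)     # block attention to future events
    scores = np.where(mask == 1, -1e9, scores)
    return softmax(scores, axis=-1) @ h

def temporal_shift(dyn_emb, dt, decay=0.1):
    """Sketch of a Temporal Shift step (assumed decay form): attenuate the
    dynamic embedding as time elapses since the last observed interaction."""
    return dyn_emb * np.exp(-decay * dt)

rng = np.random.default_rng(0)
seq = rng.normal(size=(4, 8))                     # 4 past interactions, 8-d embeddings
t = np.array([0.0, 1.0, 2.5, 4.0])                # their timestamps
dyn = attentive_interactions(seq, t)[-1]          # latest dynamic embedding
dyn_later = temporal_shift(dyn, dt=2.0)           # embedding 2 time units later
```

The causal mask is what lets the layer capture long-range dependencies in one parallel step, which is the advantage over RNNs and LSTMs noted above.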
Predicting Events and Times
The DGNPP model combines these static and dynamic embeddings into an ‘intensity function,’ which is then used to predict both the type of the next event and its precise occurrence time. The model is optimized using maximum likelihood estimation to maximize the probability of observed interaction sequences. For event prediction, DGNPP identifies the most likely next interaction. For time prediction, it estimates when a specific user-item interaction is most likely to occur.
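The prediction step can be sketched as below. The softplus form of the intensity and the dot-product combination of static and dynamic embeddings are assumptions (the text only says the two embeddings are combined into an intensity function); the negative log-likelihood follows the standard temporal point process objective: sum of log-intensities at observed events minus the integral of the intensity over the observation window.

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def intensity(static_emb, dynamic_emb):
    """Sketch of the intensity function (assumed form): combine static and
    dynamic embeddings into a strictly positive event rate."""
    return softplus(np.dot(static_emb, dynamic_emb))

def neg_log_likelihood(event_intensities, intensity_integral):
    """Standard point-process MLE objective, written as a loss:
    -(sum of log λ at observed events - ∫ λ over the window)."""
    return -(np.sum(np.log(event_intensities)) - intensity_integral)

# Event prediction: score candidate items against the current dynamic state
rng = np.random.default_rng(1)
cand_static = rng.normal(size=(5, 8))             # static embeddings of 5 candidates
dyn = rng.normal(size=8)                          # current dynamic embedding
lams = np.array([intensity(s, dyn) for s in cand_static])
next_item = int(np.argmax(lams))                  # most likely next interaction
```

For time prediction, the expected occurrence time would be obtained from the same intensity via the point-process density, typically by numerical integration; that step is omitted here.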
Superior Performance and Efficiency
Extensive experiments conducted on three public datasets—Reddit, Wikipedia, and Last.FM—demonstrate DGNPP’s superior performance. In event prediction tasks, DGNPP consistently outperforms competitive baseline models, showing significant improvements in metrics like Recall@10 and MRR. For instance, on the Last.FM dataset, it achieved a 16.0% lead in Recall@10 and 8.1% in MRR over baselines. In time prediction tasks, DGNPP also showed remarkable accuracy, with significantly lower RMSE values compared to other models. Beyond accuracy, DGNPP is also highly efficient, exhibiting reduced runtime on all tested datasets, making it suitable for large-scale data. Ablation studies further confirmed the critical roles of both the Node Aggregation Layer and the Self Attentive Layer in the model’s effectiveness.
In conclusion, the Deep Graph Neural Point Process offers a highly efficient and effective solution for learning temporal interactive networks. By intelligently combining static topological information with dynamic temporal changes, DGNPP significantly advances the accuracy and efficiency of predicting future events and their timings in complex evolving networks. For more details, you can refer to the full research paper: Deep Graph Neural Point Process For Learning Temporal Interactive Networks.


