TLDR: RED-Rec is an LLM-enhanced recommendation system that unifies user interest modeling across diverse online scenarios like feeds, search, and ads. It uses a hierarchical architecture with a 2-D dense mixing policy and scenario-aware querying to capture holistic user interests. Validated on Xiaohongshu with hundreds of millions of users, RED-Rec showed significant performance gains in recommendations and ad targeting, demonstrating the effectiveness of cross-scenario unified modeling at scale.
In the vast and dynamic world of User-Generated Content (UGC) platforms, understanding what users truly want is a monumental challenge. Users interact in many different ways – they browse their feeds, search for specific content, and engage with advertisements. Traditionally, recommendation systems on these platforms have treated each of these interactions as separate, operating in isolated “silos.” This fragmented approach often misses the bigger picture of a user’s interests and makes it hard to apply advanced technologies like Large Language Models (LLMs) at massive scale.
A new research paper introduces RED-Rec, a groundbreaking recommender engine designed to unify user interest modeling across these diverse scenarios. Developed by researchers including Manjie Xu, Xin Jia, and Cheng Chen, RED-Rec aims to provide a more holistic understanding of user preferences by synthesizing actions from multiple behavioral contexts. This allows for a comprehensive model of both users and the content they interact with.
Addressing Fragmented User Understanding
Imagine a user who loves sustainable living. They might search for “eco-friendly packaging,” like posts about environmental advocacy in their feed, and click on ads for solar panels. All these actions point to the same core interest, but traditional systems would often treat them as unrelated. This leads to a fragmented view of the user and inconsistent recommendations.
RED-Rec tackles this by recognizing that user interests are consistent across different scenarios. It uses an LLM-powered architecture to create nuanced, multi-faceted representations of user interests. This means the system can understand the underlying passion for sustainable living, regardless of whether it’s expressed through a search query, a feed interaction, or an ad click.
How RED-Rec Works
The core of RED-Rec is its hierarchical two-tower architecture. It uses LLMs to encode both items (content) and users, creating rich semantic representations. For items, it processes both text (titles, descriptions, tags) and visual content. For users, it aggregates interactions from homefeed browsing, search, and advertisements into a combined sequence.
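The two-tower idea can be illustrated with a minimal numpy sketch. This is a toy stand-in, not the paper's implementation: the random projection matrices play the role of the LLM encoders, and the function names (`encode_item`, `encode_user`) are hypothetical.

```python
import numpy as np

DIM = 8  # toy embedding dimension
rng = np.random.default_rng(0)
W_ITEM = rng.standard_normal((DIM, 2 * DIM))  # stand-in for the item-side LLM encoder
W_USER = rng.standard_normal((DIM, DIM))      # stand-in for the user-side encoder

def encode_item(text_vec, visual_vec):
    """Item tower: fuse textual (titles, tags) and visual features
    into one semantic item embedding."""
    fused = np.concatenate([text_vec, visual_vec])  # (2*DIM,)
    emb = W_ITEM @ fused
    return emb / np.linalg.norm(emb)                # unit-norm for dot-product retrieval

def encode_user(history):
    """User tower: pool item embeddings from feed, search, and ad
    interactions that were merged into one chronological sequence."""
    pooled = history.mean(axis=0)                   # mean-pool the combined sequence
    emb = W_USER @ pooled
    return emb / np.linalg.norm(emb)

# Toy usage: three interactions drawn from different scenarios.
items = [encode_item(rng.standard_normal(DIM), rng.standard_normal(DIM))
         for _ in range(3)]
user = encode_user(np.stack(items))
score = float(user @ items[0])  # relevance = dot product between the two towers
```

Because both towers emit unit-norm vectors, scoring reduces to a dot product, which is what makes large-scale nearest-neighbor retrieval practical at serving time.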
A key innovation is the “2-D dense mixing policy.” This strategy intelligently combines behavioral signals across different scenarios and over time. It ensures that even infrequent but valuable user signals, like a rare search query, are not lost amidst thousands of feed interactions. This balanced approach helps capture cross-scenario user intent patterns effectively.
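The balancing intuition behind the mixing step can be sketched as follows. The quota-per-scenario logic here is a hypothetical simplification of the paper's 2-D dense mixing policy, written only to show how a lone search query survives next to thousands of feed events.

```python
def mix_2d(events_by_scenario, budget=8):
    """Toy cross-scenario mixing: reserve a slot quota per scenario so
    sparse signals (e.g. a single search query) are not drowned out by
    abundant feed events, then order the kept events by time."""
    quota = max(1, budget // len(events_by_scenario))  # every scenario keeps >= 1 event
    kept = []
    for scenario, events in events_by_scenario.items():
        recent = sorted(events, key=lambda e: e["ts"])[-quota:]  # newest per scenario
        kept.extend(recent)
    return sorted(kept, key=lambda e: e["ts"])[-budget:]  # final time-ordered sequence

history = {
    "feed":   [{"ts": t, "item": f"feed_{t}"} for t in range(100)],  # 100 feed events
    "search": [{"ts": 42, "item": "eco-friendly packaging"}],        # one rare query
    "ads":    [{"ts": 7, "item": "solar_panel_ad"}],                 # one ad click
}
seq = mix_2d(history, budget=6)
```

Without the per-scenario quota, a recency-only cut of the last 6 events would contain feed interactions exclusively; with it, the rare search and ad signals stay in the sequence.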
Furthermore, RED-Rec employs a “scenario-aware interest querying” mechanism. Instead of a single user profile, it generates multiple interest-specific representations. This allows the system to express fine-grained, context-specific preferences during serving, meaning recommendations are tailored not just to the user, but also to the specific scenario they are currently in.
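One common way to realize such a mechanism is attention with per-scenario query vectors; the sketch below assumes that design (the `SCENARIO_QUERIES` table and `query_interests` helper are illustrative, not taken from the paper).

```python
import numpy as np

DIM = 8
rng = np.random.default_rng(1)
# One query vector per serving scenario (hypothetical learnable parameters).
SCENARIO_QUERIES = {s: rng.standard_normal(DIM) for s in ("feed", "search", "ads")}

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def query_interests(history_emb, scenario):
    """Attend over the user's interaction embeddings with the query for
    the current scenario, yielding a context-specific user vector."""
    q = SCENARIO_QUERIES[scenario]
    weights = softmax(history_emb @ q)  # attention weights over past interactions
    return weights @ history_emb        # weighted sum = scenario-conditioned profile

history = rng.standard_normal((5, DIM))  # 5 past interaction embeddings (toy)
feed_vec = query_interests(history, "feed")
search_vec = query_interests(history, "search")
```

The same interaction history thus produces different user vectors depending on the scenario being served, which is the "multiple interest-specific representations" idea in miniature.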
Real-World Impact and Dataset
The researchers validated RED-Rec through extensive online A/B testing on hundreds of millions of users at Xiaohongshu, a leading UGC platform. The results showed substantial performance gains in content recommendation and advertisement targeting. For instance, it led to a 0.8864% increase in total Advertiser Value (ADVV) and a 0.3401% boost in overall Feed Ad Spend (Cost), demonstrating significant business value.
To further support research in this area, the team also introduced a new million-scale sequential recommendation dataset called RED-MMU (RedNote’s Multi-Scenario Multimodal User Behaviors). This dataset, derived from anonymized user behavior on Xiaohongshu, provides diverse behavioral contexts, rich engagement patterns, and industrial-scale coverage, making it invaluable for evaluating unified and scenario-specific models.
The successful deployment of RED-Rec, now serving approximately 160 million daily users on Xiaohongshu, highlights the practical viability of LLM-based cross-scenario recommendation at an unprecedented scale. This work marks a significant step towards deeper personalization and more meaningful user engagement on large-scale UGC platforms. You can read the full research paper here.