TLDR: ARC-Encoder is a novel method that compresses text into continuous representations for Large Language Models (LLMs), drastically reducing inference costs and extending context windows without modifying the LLM’s architecture. This ‘plug-and-play’ approach achieves state-of-the-art performance across various tasks, including question answering, translation, and summarization, and can adapt a single encoder to multiple decoders. It also enables efficient storage of compressed contexts, paving the way for more scalable and flexible LLM applications.
Large Language Models (LLMs) have become incredibly powerful, but their increasing use in complex applications like retrieval-augmented generation (RAG) and chain-of-thought reasoning often requires them to process very long texts. This leads to significant computational costs during inference and can even push against the LLM’s context window limits, potentially degrading performance.
Traditional solutions for compressing these long contexts often involve fine-tuning the LLM itself or even altering its core architecture. While effective, these approaches risk degrading the model's general abilities on everything outside the specific compression task.
Introducing ARC-Encoder: A Plug-and-Play Solution
A new research paper, "ARC-Encoder: Learning Compressed Text Representations for Large Language Models" by Hippolyte Pilchen, Edouard Grave, and Patrick Pérez from Kyutai, introduces an innovative alternative: the Adaptable text Representations Compressor, or ARC-Encoder. This method compresses the input context into a more compact, continuous representation that can be directly used by decoder-only LLMs, effectively replacing the original token embeddings.
The key breakthrough of ARC-Encoder is its ability to achieve significant compression without requiring any modifications or fine-tuning of the target LLM. This makes it a truly “plug-and-play” solution, preserving the LLM’s original capabilities while boosting efficiency.
How ARC-Encoder Works
The ARC-Encoder system consists of a text encoder and a small MLP (multi-layer perceptron) projector, which work in conjunction with a frozen (unchanged) target decoder LLM. The encoder, built on an LLM transformer backbone, reduces the length of the text sequence by a fixed pooling factor, typically yielding 4 or 8 times fewer representations than there are original text tokens.
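To make this concrete, here is a minimal PyTorch-style sketch of how the three pieces fit together. Everything in it (class and argument names, the MLP shape, the HuggingFace-style `inputs_embeds` decoder interface) is an illustrative assumption, not the paper's actual code:

```python
import torch
import torch.nn as nn

class ARCEncoderPipeline(nn.Module):
    """Illustrative wiring: trainable encoder + MLP projector, frozen decoder."""

    def __init__(self, encoder, decoder, enc_dim: int, dec_dim: int):
        super().__init__()
        self.encoder = encoder                      # trainable LLM-style backbone
        self.projector = nn.Sequential(             # small trainable MLP
            nn.Linear(enc_dim, dec_dim),
            nn.GELU(),
            nn.Linear(dec_dim, dec_dim),
        )
        self.decoder = decoder                      # target LLM, kept frozen
        for p in self.decoder.parameters():
            p.requires_grad = False

    def forward(self, context_ids, prompt_embeds):
        pooled = self.encoder(context_ids)          # (B, N/r, enc_dim); pooling explained below
        compressed = self.projector(pooled)         # (B, N/r, dec_dim)
        # Compressed representations stand in for the context's token embeddings.
        inputs = torch.cat([compressed, prompt_embeds], dim=1)
        return self.decoder(inputs_embeds=inputs)   # assumed decoder interface
```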
This compression is achieved through a clever “pooling” mechanism. Instead of fixed-size memory tokens, ARC-Encoder averages consecutive queries within the last self-attention module of the encoder. This process effectively condenses the information from multiple tokens into a single, richer continuous representation, while the keys and values in the attention mechanism remain unchanged.
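For intuition, a single-head simplification of this query pooling might look like the following; the function name and pooling factor `r` are illustrative, and the real encoder is multi-head with pooling applied only in its last attention module:

```python
import torch

def pooled_attention(q, k, v, r: int = 4):
    """Average consecutive groups of r queries; keys/values stay full-length.

    q, k, v: (batch, seq_len, dim) projections from a self-attention layer
    (single-head simplification). Returns seq_len / r outputs, each of which
    still attends over all keys and values.
    """
    b, n, d = q.shape
    assert n % r == 0, "sketch assumes seq_len divisible by the pooling factor"
    q_pooled = q.reshape(b, n // r, r, d).mean(dim=2)       # (b, n/r, d)
    scores = q_pooled @ k.transpose(1, 2) / d ** 0.5        # (b, n/r, n)
    return torch.softmax(scores, dim=-1) @ v                # (b, n/r, d)
```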
Training for Performance and Adaptability
The training of ARC-Encoder involves a systematic approach:
- Base Pretraining: The model is pretrained using two alternating tasks: reconstruction (where the decoder tries to replicate the original text from compressed tokens) and continuation (where the decoder continues a text sequence after a compressed segment). Special learned tokens are used to guide these tasks.
- Fine-Tuning: After pretraining, ARC-Encoder can be fine-tuned for specific downstream tasks like question answering, summarization, or translation. Crucially, this fine-tuning only affects the encoder and MLP, leaving the decoder LLM untouched.
- Multi-Decoder Training: One of ARC-Encoder's most impressive features is its ability to work with multiple decoders simultaneously. A single shared encoder can be adapted to different LLMs by specializing only the small MLP projector and a few special tokens, which account for less than 1% of the encoder's parameters. This allows for a highly flexible and portable compression solution (see the parameter-layout sketch after this list).
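To picture which parameters are shared and which are decoder-specific, here is a hypothetical layout; the class, names, and dimensions are assumptions for illustration, not the paper's implementation:

```python
import torch
import torch.nn as nn

class MultiDecoderARC(nn.Module):
    """Hypothetical parameter layout for the multi-decoder setup above.

    One encoder is shared; each target decoder gets only its own small MLP
    projector and a handful of special token embeddings (the parts the paper
    reports as under 1% of the encoder's parameters).
    """

    def __init__(self, shared_encoder, enc_dim: int, decoder_dims: dict, n_special: int = 4):
        super().__init__()
        self.encoder = shared_encoder               # shared across all decoders
        self.projectors = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(enc_dim, d), nn.GELU(), nn.Linear(d, d))
            for name, d in decoder_dims.items()
        })
        self.special_tokens = nn.ParameterDict({    # per-decoder guide tokens,
            name: nn.Parameter(0.02 * torch.randn(n_special, d))
            for name, d in decoder_dims.items()     # prepended at the decoder input
        })

    def compress_for(self, name: str, context_ids):
        hidden = self.encoder(context_ids)          # shared computation
        return self.projectors[name](hidden)        # decoder-specific mapping

# e.g. MultiDecoderARC(encoder, 4096, {"llama3_8b": 4096, "mistral_7b": 4096})
```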
Impressive Results Across the Board
The researchers evaluated ARC-Encoder across a variety of LLM usage scenarios, including in-context learning and context window extension, using both instruct and base decoders like Llama3.1 8B and Mistral 7B. The results are compelling:
- ARC-Encoder achieved state-of-the-art performance on several benchmarks for question answering (e.g., HotpotQA, SQuAD), translation (FLORES), and summarization (CNN-DailyMail).
- It significantly improved computational efficiency during inference: with a 4x pooling factor, accuracy was nearly on par with an LLM operating on the full text, while prefill FLOPs were substantially reduced.
- For long-context understanding, ARC-Encoder successfully extended the effective context window of models like Llama2 7B Chat, allowing it to process up to 8 times more input than its original limit without any internal modifications to the LLM.
- The compressed representations of large corpora like Wikipedia can be stored efficiently, making it feasible to pre-compute and cache compressed contexts for even greater speed-ups in RAG systems (a sketch of this workflow follows the list).
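As a rough sketch of that pre-compute-and-cache workflow, with `arc_encode` standing in for an assumed ARC-Encoder interface (not a real API from the paper):

```python
import numpy as np

def precompute_cache(passages: dict, arc_encode, path: str = "arc_cache.npz"):
    """Compress each passage once offline and persist the representations.

    `arc_encode` is assumed to return a (num_compressed, dec_dim) array
    per passage string; passage ids are used as archive keys.
    """
    cache = {pid: np.asarray(arc_encode(text)) for pid, text in passages.items()}
    np.savez(path, **cache)

def embeddings_for_query(path: str, retrieved_ids: list):
    """At query time, load and concatenate compressed contexts for the
    retrieved passages; these replace token embeddings at the decoder input."""
    store = np.load(path)
    return np.concatenate([store[pid] for pid in retrieved_ids], axis=0)
```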
The ARC-Encoder represents a significant step forward in making LLMs more efficient and adaptable, offering a flexible and powerful solution for managing long contexts without compromising the integrity or general abilities of the underlying language models. For more details, you can read the full research paper here.