TLDR: Latent Self-Consistency (LSC) is a new method that helps Large Language Models (LLMs) produce more consistent and reliable answers for both short and long questions. It uses small, learned “summary tokens” to efficiently capture the meaning of each response, outperforming previous methods like Self-Consistency (SC), Universal Self-Consistency (USC), and Weighted Unigram Consistency Score (WUCS) in accuracy and consistency, while adding less than 1% to inference time and negligible memory overhead.
Large Language Models, or LLMs, have become incredibly powerful tools for many tasks, from writing code to answering complex questions. However, a common challenge with these models is their tendency to produce slightly different or even inconsistent answers when asked the same thing repeatedly. This variability can make them less reliable for important applications.
To tackle this, researchers developed methods like Self-Consistency (SC), which works by generating several possible answers and then picking the one that appears most often. This is great for short, exact answers, like in math problems or multiple-choice questions. But for longer, more creative responses, like summaries or detailed explanations, SC struggles because semantically similar answers may be phrased very differently, making exact string matching ineffective.
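SC's core voting step fits in a few lines. This is a minimal sketch, not code from the paper; `self_consistency` is an illustrative name:

```python
from collections import Counter

def self_consistency(answers):
    """Majority vote over sampled completions: return the most frequent answer."""
    counts = Counter(a.strip() for a in answers)
    best, _ = counts.most_common(1)[0]
    return best

# Five sampled answers to the same math question; majority vote picks "42".
print(self_consistency(["42", "41", "42", "42 ", "7"]))  # -> 42
```

The `strip()` call handles trivial whitespace differences, but nothing more: paraphrases of the same idea would still count as distinct answers, which is exactly the weakness described above.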
Other methods, such as Universal Self-Consistency (USC) and Weighted Unigram Consistency Score (WUCS), tried to extend this idea to longer answers. However, USC can be slow and use a lot of memory because it asks the LLM itself to judge the consistency of all generated responses. WUCS is faster but often loses accuracy on short answers and doesn’t always grasp the deeper meaning of complex reasoning.
This is where a new approach called Latent Self-Consistency (LSC) comes in. LSC aims to combine the best features of these methods, offering high accuracy for both short and long answers with very little extra computational cost. It introduces a clever way to understand the “semantic essence” of each response.
How LSC Works
Imagine an LLM generates several different answers to a question. LSC then appends a small set of special, trainable "summary tokens" to the end of each answer. These tokens act like tiny, learned tags trained to capture the main meaning of the entire response. Instead of re-processing the whole answer, LSC only needs a quick extra pass to read off the representations of these summary tokens, reusing the model's cached computation from generation. This makes it highly efficient, adding less than 1% to the inference time and almost no extra memory.
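As a toy stand-in for this step, the sketch below pools the hidden states at the last few token positions of a response into a single unit-length embedding. In the real method those positions are dedicated trainable summary tokens inside the LLM; here the hidden states are just random NumPy arrays, and `response_embedding` is a hypothetical name:

```python
import numpy as np

rng = np.random.default_rng(0)

def response_embedding(hidden_states, num_summary_tokens=4):
    """Pool the hidden states of the final few token positions (the stand-ins
    for LSC's learned summary tokens) into one unit-normalized vector."""
    summary = hidden_states[-num_summary_tokens:]   # (k, d) summary-token states
    v = summary.mean(axis=0)                        # pool into a single vector
    return v / np.linalg.norm(v)                    # normalize for cosine similarity

# Fake hidden states for a 20-token response with model dimension 8.
h = rng.normal(size=(20, 8))
emb = response_embedding(h)
print(emb.shape)  # (8,)
```

Each generated response is thus reduced to one small vector, so comparing responses no longer requires matching their surface text.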
Once LSC has these compact “meaning embeddings” for each answer, it compares them using cosine similarity to see how semantically close they are. It then uses a smart selection process, including an “exponentially-weighted mean selection” to prioritize answers that are strongly similar to many others, and a “dynamic Top-K boundary detection” to filter out noisy or outlier responses. This ensures that the selected answer truly represents the most consistent and semantically aligned response.
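A simplified version of that selection step can be sketched as follows. It scores each response by an exponentially weighted mean of its cosine similarity to the others and returns the highest-scoring one; the temperature `beta` is an assumed illustrative value, and the dynamic Top-K outlier filtering described above is omitted for brevity:

```python
import numpy as np

def select_response(embeddings, beta=5.0):
    """Pick the response whose embedding agrees most with the others,
    using an exponentially weighted mean of pairwise cosine similarities."""
    E = np.asarray(embeddings, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)  # unit vectors
    sims = E @ E.T                                    # pairwise cosine similarity
    np.fill_diagonal(sims, 0.0)                       # ignore self-similarity
    weights = np.exp(beta * sims)                     # emphasize strong agreement
    scores = (weights * sims).sum(axis=1) / weights.sum(axis=1)
    return int(np.argmax(scores))

# Three near-duplicate embeddings and one outlier: a response from the
# majority cluster (index 0, 1, or 2) is selected, never the outlier.
embs = [[1, 0], [0.98, 0.2], [0.99, 0.1], [-1, 0.05]]
print(select_response(embs))
```

The exponential weighting makes strong mutual agreement count for much more than weak agreement, which is what pulls the choice toward the tightest semantic cluster.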
Key Advantages
LSC offers several significant benefits:
- It provides a universal way to select the best answer, working reliably across both short-answer tasks (like math problems) and long-answer tasks (like code generation or summarization) without needing different settings for each.
- It achieves this semantic understanding very efficiently, making it practical for real-time use.
- LSC also gives reliable confidence estimates, meaning that when it’s highly confident in an answer, that answer is usually very accurate.
- Its dynamic noise reduction helps it focus on the most coherent group of answers, improving accuracy.
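One natural confidence proxy in this setting, sketched below under the assumption that agreement signals correctness (the function and its rescaling are illustrative, not the paper's formula), is the chosen response's average cosine similarity to the other samples:

```python
import numpy as np

def consistency_confidence(embeddings, chosen):
    """Hypothetical confidence proxy: the chosen response's mean cosine
    similarity to the other responses, rescaled from [-1, 1] to [0, 1]."""
    E = np.asarray(embeddings, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    sims = E @ E.T
    others = np.delete(sims[chosen], chosen)  # similarities to the other responses
    return float((others.mean() + 1.0) / 2.0)

# A tight cluster of embeddings yields a confidence close to 1.
embs = [[1, 0], [0.99, 0.1], [0.98, 0.15]]
print(round(consistency_confidence(embs, 0), 3))
```

When the sampled responses scatter semantically, this score drops, which matches the intuition that low agreement should mean low confidence.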
Performance and Efficiency
Across a wide range of benchmarks, including mathematical reasoning, factual knowledge, code generation, and summarization, LSC consistently outperformed existing methods like SC, USC, and WUCS. For instance, it matched or surpassed SC on short-answer tasks and achieved the highest scores on long-answer tasks. Crucially, it did this with minimal computational overhead (less than 1% additional inference time and negligible memory usage), unlike USC, which can add roughly 10% more time and 15% more memory.
The research also highlighted LSC’s near-perfect “consistency” in identifying the true majority answer, averaging 97.8% on short-answer tasks and 92.4% on long-answer tasks. This means LSC is exceptionally good at finding the most agreed-upon response among many generated options, validating the core idea that majority agreement often correlates with correctness.
Conclusion
Latent Self-Consistency (LSC) represents a significant step forward in making Large Language Models more reliable and consistent. By using learnable summary tokens to capture the semantic essence of responses, LSC offers a practical and efficient solution that works across diverse answer formats, from brief numerical answers to detailed explanations and code. This innovation paves the way for more trustworthy and versatile AI applications. For more in-depth details, you can refer to the original research paper.