TLDR: This paper introduces new voting rules and algorithms for “linear social choice” that significantly reduce “distortion” – the suboptimality arising when decisions are based on preference rankings instead of true underlying utilities. Unlike previous methods, these new rules achieve distortion bounds that depend only on the dimension of candidate features, not the number of candidates or voters, making them highly effective for large-scale AI applications like recommendation systems and value alignment. The study proposes deterministic (Max Coordinate Plurality) and randomized (Linear/Pure Stable Lotteries) rules, along with instance-optimal algorithms, demonstrating their superior performance empirically.
A new research paper delves into the complexities of social choice theory, specifically addressing how voting systems perform when voters have underlying preferences that can be represented mathematically. Titled “Optimized Distortion in Linear Social Choice,” the study by Luise Ge, Gregory Kehne, and Yevgeniy Vorobeychik from Washington University in St. Louis introduces novel approaches to minimize “distortion” in these systems.
Social choice theory traditionally focuses on selecting a winner based on voters’ ranked preferences. However, voters often have deeper “utilities” or values for each option, which rankings alone might not fully capture. When a voting rule relies solely on these rankings, it can lead to outcomes that are not ideal in terms of overall societal well-being, or “utilitarian social welfare.” This gap between the ideal outcome and the actual outcome is measured by “distortion.”
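To make the definition concrete, here is a minimal toy sketch (with made-up utility numbers, not data from the paper) of how distortion is computed: a ranking-based rule such as plurality sees only each voter's top choice, and distortion is the ratio between the best achievable utilitarian welfare and the welfare of the alternative the rule actually selects.

```python
import numpy as np

# Hypothetical toy example: 3 voters, 2 candidates, with true utilities.
# A ranking-based rule (here, plurality) sees only each voter's ordering,
# not the utility values themselves.
utilities = np.array([
    [1.0, 0.9],   # voter 1 slightly prefers candidate 0
    [1.0, 0.9],   # voter 2 slightly prefers candidate 0
    [0.0, 5.0],   # voter 3 strongly prefers candidate 1
])

# Utilitarian social welfare of each candidate: sum of voter utilities.
welfare = utilities.sum(axis=0)

# Plurality winner, computed from top choices alone.
top_choices = utilities.argmax(axis=1)
plurality_winner = int(np.bincount(top_choices, minlength=2).argmax())

# Distortion: optimal welfare divided by the welfare of the rule's winner.
dist = welfare.max() / welfare[plurality_winner]
```

Here plurality elects candidate 0 (two first-place votes), even though candidate 1 has much higher total welfare, so the rule incurs a distortion greater than 1.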
The concept of distortion has been a key tool for evaluating and designing voting rules, especially when there’s limited information about voters’ true preferences. Previous research often found that distortion bounds depended heavily on the number of candidates or voters, which becomes problematic in modern applications where the number of options can be astronomically large.
A New Perspective: Linear Utility Functions
This paper breaks new ground by undertaking the first study of distortion specifically for “linear utility functions.” In many contemporary settings, such as recommendation systems, value alignment in AI (Reinforcement Learning from Human Feedback – RLHF), or multi-objective decision making, alternatives can be naturally represented as vectors. A voter’s utility for an option then becomes a simple mathematical function (an inner product) of their preference vector and the option’s embedding vector. This “linear utility model” is a more structured and realistic assumption for these domains.
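The linear utility model described above can be sketched in a few lines. This is an illustrative setup with randomly generated embeddings (not the paper's code): each candidate is a vector in d dimensions, each voter holds a preference vector, utility is the inner product, and a voting rule observes only the rankings those utilities induce.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_voters, n_candidates = 4, 5, 100

# Candidate embeddings and voter preference vectors (illustrative random data).
candidates = rng.normal(size=(n_candidates, d))
voters = rng.normal(size=(n_voters, d))

# Linear utility: inner product of each voter's vector with each
# candidate's embedding.
utilities = voters @ candidates.T          # shape (n_voters, n_candidates)

# A ranking-based rule sees only these orderings, best candidate first.
rankings = np.argsort(-utilities, axis=1)  # shape (n_voters, n_candidates)
```

Note that the number of candidates (100 here) can grow arbitrarily while the dimension d stays fixed, which is exactly the regime where the paper's dimension-dependent distortion bounds pay off.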
A significant advantage of this linear utility model is that the distortion bounds derived in this study depend only on the “dimension” of the candidate embedding – essentially, how many features describe each option. Crucially, these bounds are independent of the total number of candidates or voters. This is a major step forward, as it means the rules can perform well even when there are an immense number of potential choices, a common scenario in AI applications.
Introducing Novel Voting Rules
The researchers introduce several new voting rules designed to minimize distortion under this linear utility framework:
- Maximum Coordinate Plurality (MCP) Rule: This is a deterministic rule. It first identifies a small subset of candidates that are “maximal” in each dimension (e.g., the best option for each specific feature). Then, it selects a plurality winner from this reduced set. The paper proves that MCP achieves a distortion bound proportional to the cube of the dimension (O(d^3)).
- Linear Stable Lotteries Rule (LSLR): For situations where the exact vector representations (embeddings) of candidates are known, this randomized rule offers a significant improvement. It adapts an existing “stable lottery” concept and achieves an asymptotically optimal distortion bound proportional to the square root of the dimension (O(√d)).
- Pure Stable Lotteries Rule (PSLR): Even when candidate embeddings are unknown, a version of the stable lotteries rule can still be applied. This randomized rule selects candidates from suitably sized, randomly chosen committees and achieves a distortion bound proportional to the dimension (O(d)).
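A rough sketch of the Maximum Coordinate Plurality idea, as summarized above, is shown below. This is an interpretation of the article's description, not the paper's actual algorithm, and its details (e.g., tie handling) may differ: shortlist the candidate that is maximal in each embedding dimension, then run plurality among the shortlisted candidates using voters' induced preferences.

```python
import numpy as np

def max_coordinate_plurality(candidates: np.ndarray,
                             utilities: np.ndarray) -> int:
    """Illustrative MCP-style rule (assumed reconstruction, not the paper's).

    candidates: (n_candidates, d) embedding matrix.
    utilities:  (n_voters, n_candidates) linear utilities; only the
                rankings they induce are actually used below.
    """
    # Shortlist: one maximal candidate per embedding dimension.
    shortlist = np.unique(candidates.argmax(axis=0))
    # Each voter's top-ranked candidate within the shortlist.
    favorites = shortlist[utilities[:, shortlist].argmax(axis=1)]
    # Plurality winner among shortlisted candidates.
    counts = np.bincount(favorites, minlength=candidates.shape[0])
    return int(counts.argmax())
```

For example, with candidates [3, 0], [0, 3], [2, 2], [1, 1] in two dimensions, the shortlist contains only the first two (each maximal in one coordinate), and the plurality vote is decided among them regardless of how many other candidates exist.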
Beyond these theoretical guarantees, the paper also introduces practical, polynomial-time algorithms that can compute “instance-optimal” candidates. These algorithms are designed to minimize distortion for a specific collection of candidates and votes, offering a tailored solution for real-world scenarios.
Real-World Impact and Surprising Findings
The effectiveness of these new rules and algorithms was empirically evaluated using two real-world datasets: MovieLens for recommendation systems and an abortion opinion survey utilizing language model embeddings. The instance-optimal algorithms consistently outperformed several standard voting rules.
One particularly surprising empirical finding was that, although the theoretical lower bounds grow with dimension, the worst-case distortion observed in the experiments actually decreased as dimension increased. This suggests that as candidate representations become richer (higher-dimensional), relying on preference rankings instead of full utility information becomes empirically less costly.
This research marks a significant advancement in social choice theory, particularly for its relevance to AI and representation-based models. By focusing on linear utility functions and achieving distortion bounds that depend on the embedding dimension rather than the number of candidates, the study provides robust and efficient methods for making collective decisions in complex, data-rich environments.


