TL;DR: This research paper introduces a data-driven framework to evaluate how frequently multi-winner voting rules violate desirable properties (axioms) in practice, moving beyond traditional worst-case analysis. It demonstrates that neural networks can be trained to act as voting rules, outperforming existing methods in minimizing axiom violations across diverse voter preferences. The findings suggest that data-driven approaches can significantly inform the design of new, more effective voting systems.
In the world of decision-making, especially when it comes to selecting groups or committees, we often rely on voting rules. These rules aggregate individual preferences into a collective choice. Traditionally, researchers have focused on whether a voting rule perfectly satisfies certain desirable properties, known as axioms, under all possible circumstances. This approach yields a binary answer: a rule either satisfies an axiom or it doesn’t, with the verdict typically driven by worst-case scenarios.
However, a recent research paper, “What Voting Rules Actually Do: A Data-Driven Analysis of Multi-Winner Voting” by Joshua Caiata, Ben Armstrong, and Kate Larson, introduces a fresh perspective. Instead of looking only at worst-case scenarios, their work proposes a data-driven framework for understanding how frequently voting rules actually violate these axioms in practice, across a wide range of voter preferences. This shift from a binary ‘yes/no’ to a more nuanced ‘how often’ provides a richer understanding of how different voting rules behave in real-world situations.
A New Lens for Evaluation
The core of this new framework lies in two key measures: the Axiom Violation Rate (AVR) and Rule Differences. The AVR quantifies how often a voting rule fails to satisfy a specific axiom when applied to various preference profiles. This allows for a more fine-grained evaluation, revealing that a rule might theoretically violate an axiom but rarely do so in practice. Rule Differences, on the other hand, measure the overlap between the committees selected by different voting rules, indicating how similar their outcomes are, regardless of their internal mechanisms.
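As a rough illustration (not the paper’s actual implementation), the two measures can be sketched as follows, assuming each voting rule maps a preference profile to a committee represented as a set of candidates, and that `axiom_check` is a hypothetical function reporting whether a given committee violates an axiom on a given profile:

```python
def axiom_violation_rate(rule, axiom_check, profiles):
    """Fraction of profiles on which the rule's chosen committee violates the axiom."""
    violations = sum(1 for p in profiles if axiom_check(p, rule(p)))
    return violations / len(profiles)

def rule_difference(rule_a, rule_b, profiles, k):
    """Average fraction of the k committee seats on which two rules disagree."""
    total = 0.0
    for p in profiles:
        overlap = len(rule_a(p) & rule_b(p))  # committees are sets of candidates
        total += 1 - overlap / k
    return total / len(profiles)
```

A low `rule_difference` means two rules tend to pick nearly the same committees even if their internal mechanics differ, which is exactly the kind of behavioral similarity the framework is designed to surface.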
The researchers applied this framework to analyze the relationship between multi-winner voting rules and their axiomatic performance under several common preference distributions. These distributions model different ways voters might express their preferences, from highly structured (like everyone agreeing) to completely random.
Learning Better Voting Rules with AI
Perhaps one of the most exciting aspects of this research is the exploration of neural networks as a new type of voting rule. The paper demonstrates that neural networks, specifically multi-layer perceptrons, can be trained to select committees that minimize axiom violations. These “learned rules” were shown to outperform many traditional voting rules in reducing the frequency of axiom breaches. This suggests that data-driven approaches, particularly machine learning, can inform the design of entirely new and potentially more effective voting systems.
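The basic idea of a neural network acting as a voting rule can be sketched as below. This is a simplified assumption about the setup, not the paper’s architecture: a multi-layer perceptron scores each candidate from a flattened encoding of the preference profile, and the top-k scored candidates form the committee.

```python
import numpy as np

def mlp_committee(profile, weights, k):
    """Score candidates with a tiny MLP and return the top-k as the committee.

    profile: flattened preference-profile features, shape (d,)
    weights: (W1, b1, W2, b2), where W2 maps hidden units to one score per candidate
    """
    W1, b1, W2, b2 = weights
    hidden = np.maximum(0, profile @ W1 + b1)   # ReLU hidden layer
    scores = hidden @ W2 + b2                   # one score per candidate
    return set(np.argsort(scores)[-k:])         # k highest-scoring candidates
```

Training would then adjust the weights so that the selected committees minimize empirical axiom violations, which is the objective the paper optimizes.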
The study made several significant contributions. Firstly, it introduced a quantifiable, data-driven measure for axiomatic violations. Secondly, it explored how voter preference distributions influence the axiomatic properties of committees chosen by common multi-winner rules, highlighting their sensitivity to the underlying voter population. Thirdly, it empirically investigated how different multi-winner voting rules vary in their committee selections and axiom violation frequencies. Finally, and crucially, it demonstrated that machine learning can discover novel multi-winner voting rules that perform better under this new evaluation framework.
Key Findings and Implications
The research yielded several interesting observations. For instance, rules designed to elect individually popular alternatives surprisingly performed well on axioms related to proportionality, suggesting that identifying individually liked alternatives may be an easier path to achieving strong proportional properties. The study also found that certain axioms, such as Condorcet Winner, Dummett’s Condition, and Local Stability, are consistently harder to satisfy across all rules, indicating inherent challenges in these areas.
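To make the Condorcet Winner axiom concrete: a Condorcet winner is a candidate who beats every other candidate in pairwise majority comparisons, and the axiom (roughly, in its multi-winner form) asks that such a candidate appear in the elected committee; the paper’s precise formulation may differ. A minimal check for a Condorcet winner over ranked ballots:

```python
def condorcet_winner(ballots):
    """Return the candidate who beats every other candidate in pairwise
    majority comparisons, or None if no such candidate exists.

    ballots: list of rankings, each a list of candidates, most-preferred first.
    """
    candidates = set(ballots[0])
    for c in candidates:
        beats_all = True
        for d in candidates - {c}:
            # c beats d if a strict majority of voters rank c above d
            wins = sum(1 for b in ballots if b.index(c) < b.index(d))
            if wins * 2 <= len(ballots):
                beats_all = False
                break
        if beats_all:
            return c
    return None
```

Note that no Condorcet winner may exist at all (preference cycles are possible), which is part of why this family of axioms is hard for any rule to satisfy consistently.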
The learned rules, especially those trained on all axioms (FNN-all), showed remarkably low axiom violation rates, often outperforming existing rules. Even when trained on a smaller set of “root axioms,” the learned rules (FNN-root) still performed better than most traditional methods. This indicates that providing more axiomatic feedback during the learning process can lead to even stronger performance.
The paper also touched upon the application of these learned rules to real-world data from PrefLib, an online repository of human preference data. While the training data didn’t perfectly match the real-world distributions, the learned rules still generalized well, demonstrating their potential applicability.
Furthermore, the researchers explored optimizing traditional positional scoring rules, like the Borda rule, using simulated annealing. They found that even with limited optimization, it was possible to discover score vectors that consistently outperformed the standard Borda rule in minimizing axiom violations, hinting at new classes of interpretable voting rules.
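The annealing search can be sketched as follows. This is a generic simulated-annealing loop under assumed details (Gaussian perturbations, geometric cooling); in the paper’s setting, the `objective` would be an empirical axiom-violation count for the scoring rule induced by the vector:

```python
import math
import random

def anneal_score_vector(objective, m, steps=1000, temp=1.0, cooling=0.995, seed=0):
    """Simulated annealing over positional score vectors of length m.

    Starts from the Borda vector (m-1, m-2, ..., 0) and perturbs one
    position at a time, accepting worse vectors with probability
    exp(-delta / temperature) so the search can escape local optima.
    objective: callable mapping a score vector to a cost to minimize.
    """
    rng = random.Random(seed)
    current = [float(m - 1 - i) for i in range(m)]  # Borda starting point
    cost = objective(current)
    best, best_cost = list(current), cost
    for _ in range(steps):
        candidate = list(current)
        i = rng.randrange(m)
        candidate[i] = max(0.0, candidate[i] + rng.gauss(0, 0.5))
        new_cost = objective(candidate)
        if new_cost < cost or rng.random() < math.exp((cost - new_cost) / temp):
            current, cost = candidate, new_cost
            if cost < best_cost:
                best, best_cost = list(current), cost
        temp *= cooling
    return best, best_cost
```

Because the output is still just a score vector, any improvement found this way remains as interpretable as the Borda rule itself, which is the appeal of this line of the paper’s results.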
This work underscores the power of combining machine learning with social choice theory, moving beyond theoretical worst-case analyses to practical, data-driven evaluations. It opens up exciting avenues for designing more robust and fair voting systems in the future. For more details, you can read the full paper here.


