TLDR: This research paper introduces two innovative models, the ‘minimalist’ and ‘generalist’ receivers, designed to overcome a long-standing challenge in AI communication: the inability of agents in signaling games to genuinely interpret compositional messages. By allowing receivers to learn from individual message components rather than just whole messages, these models demonstrate how AI can achieve true compositional understanding, retaining partial information even when parts of a message are unknown or replaced. The paper argues that the limitations of previous models stemmed from overly simplistic receiver designs and that more sophisticated learning architectures are crucial for understanding the evolution of complex communication.
In the fascinating world of artificial intelligence and communication, researchers often use ‘signaling games’ to understand how arbitrary signals can acquire shared meanings between different agents. These games involve a sender who knows the state of the world and sends a message, and a receiver who then performs an action based on that message. Both agents are rewarded if the receiver’s action matches the world state.
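This basic setup can be sketched in a few lines of code. The following is a minimal illustration of a two-state Lewis signaling game with simple urn-style (reinforcement) learning; the state/message/act sizes, reward value, and round count are illustrative choices, not details from the paper.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

STATES = [0, 1]
MESSAGES = ['a', 'b']
ACTS = [0, 1]

# Urns: the sender maps states to messages, the receiver maps messages to acts.
sender_urns = {s: {m: 1.0 for m in MESSAGES} for s in STATES}
receiver_urns = {m: {a: 1.0 for a in ACTS} for m in MESSAGES}

def draw(urn):
    """Sample a key with probability proportional to its weight."""
    r = random.uniform(0, sum(urn.values()))
    for key, weight in urn.items():
        r -= weight
        if r <= 0:
            return key
    return key

for _ in range(5000):
    state = random.choice(STATES)
    message = draw(sender_urns[state])
    act = draw(receiver_urns[message])
    if act == state:  # both agents share the reward on success
        sender_urns[state][message] += 1.0
        receiver_urns[message][act] += 1.0

# After many rewarded rounds, the arbitrary signals typically settle into
# stable shared meanings: the receiver's act almost always matches the state.
hits = sum(draw(receiver_urns[draw(sender_urns[s])]) == s
           for s in STATES for _ in range(100))
print(f'post-training accuracy: {hits / 200:.2f}')
```

In this two-state toy version, reinforcement learning reliably converges to a signaling system; the interesting problems arise once messages become compound, as described next.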
However, a significant challenge arises when trying to model complex communication systems, like human language, which are inherently compositional. This means that the meaning of a complex expression is determined by its constituent parts and how they are combined. Traditional signaling game models have struggled with this, leading to what is known as the ‘puzzle of compositional understanding’.
The core issue is that even when senders transmit messages that are structurally compositional, the receivers in these models often fail to interpret them in a truly compositional way. Instead, they treat each complex message as a single, atomic unit. A clear example of this problem is seen in a 4x4x4 two-sender signaling game. Imagine two senders describing a professor’s clothing: one sender indicates ‘dress’ or ‘suit’, and the other indicates ‘red’ or ‘blue’. A receiver should ideally learn that ‘dress’ refers to a type of garment and ‘red’ to a color, and combine these meanings to understand ‘red dress’.
But in conventional models, if the sender suddenly replaces ‘red’ with an unknown signal, say ‘rouge’ (French for red), the receiver loses all understanding of the combined message ‘dress and rouge’. Crucially, it also loses the information about ‘dress’ that it previously understood. This suggests the receiver was not interpreting ‘dress’ compositionally, but rather as an inseparable part of ‘red dress’. With genuine compositional understanding, the receiver would still understand ‘dress’ even though ‘rouge’ is unknown, losing only part of the information.
David Peter Wallis Freeborn, in his research paper “Compositional Understanding in Signaling Games”, argues that this puzzle can be solved by designing more sophisticated receivers. He proposes two novel models: the minimalist receiver and the generalist receiver.
The Minimalist Approach
The minimalist receiver learns by reinforcing actions based on each *atomic* message component received, rather than just the full combined message. For instance, in our clothing example, the receiver would have separate ‘urns’ (representing learned associations) for ‘dress’, ‘suit’, ‘red’, and ‘blue’. When ‘red dress’ is received and rewarded, both the ‘red’ urn and the ‘dress’ urn are reinforced for the correct action. To ensure the receiver makes the correct choice when combining these atomic meanings, a special ‘activation function’ (like a tempered softmax function) is used. This function helps the receiver prioritize the most strongly reinforced action. This approach can be thought of as a simple feed-forward neural network, where atomic messages are inputs and actions are outputs.
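A hedged sketch of this idea follows: one urn per atomic signal, reinforcement of every component on success, and a tempered softmax to pick the act with the strongest combined support. The act names, temperature value, and training schedule are illustrative assumptions, not the paper's specification.

```python
import math
import random

random.seed(1)  # fixed seed for a reproducible toy run

ACTS = ['red-dress', 'red-suit', 'blue-dress', 'blue-suit']
ATOMS = ['red', 'blue', 'dress', 'suit']

# One urn per *atomic* signal, holding weights over the receiver's acts.
urns = {atom: {a: 1.0 for a in ACTS} for atom in ATOMS}

def choose(message, temperature=0.2):
    """Sum each known atom's weights, then sample from a tempered softmax.
    A low temperature sharpens the choice toward the top-scoring act."""
    scores = {a: sum(urns[atom][a] for atom in message if atom in urns)
              for a in ACTS}
    top = max(scores.values())
    weights = {a: math.exp((s - top) / temperature) for a, s in scores.items()}
    r = random.uniform(0, sum(weights.values()))
    for act, w in weights.items():
        r -= w
        if r <= 0:
            return act
    return act

def reinforce(message, act):
    # Credit every atomic component's urn, not the whole compound message.
    for atom in message:
        if atom in urns:
            urns[atom][act] += 1.0

for _ in range(3000):
    colour = random.choice(['red', 'blue'])
    garment = random.choice(['dress', 'suit'])
    message, target = (colour, garment), f'{colour}-{garment}'
    act = choose(message)
    if act == target:
        reinforce(message, act)

# An unknown atom ('rouge') contributes nothing to the scores, but the
# 'dress' urn still does, so the receiver keeps favouring dress acts:
# only partial information is lost.
print(choose(('rouge', 'dress')))
```

Because the ‘dress’ urn is only ever reinforced for dress acts, the message ‘rouge dress’ still steers the receiver toward a dress act, which is exactly the partial-information behavior the whole-message receiver lacks.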
The Generalist Approach
The generalist receiver takes a broader approach, learning about *all* message components and their combinations, effectively building a full joint probability distribution of messages and actions. This means the receiver learns not only about ‘red dress’ but also about ‘red’ on its own, ‘dress’ on its own, and even correlations between ‘red’ and ‘dress’. When a new, unknown signal is introduced (like ‘rouge’), the generalist model can be designed to either ‘information-erase’ (treating the new signal as completely unknown and losing information from other components) or ‘information-preserve’ (assuming the new signal is statistically independent of known components, thus retaining partial information from the understood parts). The information-preserving method allows the receiver to maintain understanding of the ‘dress’ component even if ‘rouge’ is new.
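The information-preserving behavior can be sketched as follows: the receiver accumulates joint counts over full messages and acts, and when asked to interpret a message it conditions only on the components it has seen before, marginalizing out unseen ones (the independence assumption). The function names and the assumption of an already-trained sender are simplifications for illustration.

```python
import random
from collections import defaultdict

random.seed(2)

ACTS = ['red-dress', 'red-suit', 'blue-dress', 'blue-suit']

# Joint counts over (colour, garment, act): the receiver learns about whole
# messages *and* their components, since marginals can be read off the joint.
counts = defaultdict(float)

def seen(atom):
    """Has this atomic signal ever appeared in a rewarded message?"""
    return any(atom in key[:2] for key in counts)

def act_distribution(colour, garment):
    """P(act | message), information-preserving: an unseen component is
    assumed independent of the known ones and marginalised out."""
    colour_known, garment_known = seen(colour), seen(garment)
    totals = {a: 0.0 for a in ACTS}
    for (c, g, a), n in counts.items():
        if (c == colour or not colour_known) and (g == garment or not garment_known):
            totals[a] += n
    z = sum(totals.values())
    if z == 0:  # nothing known at all: fall back to a uniform distribution
        return {a: 1 / len(ACTS) for a in ACTS}
    return {a: n / z for a, n in totals.items()}

# Train against an already-successful sender (an assumption for brevity).
for _ in range(2000):
    c = random.choice(['red', 'blue'])
    g = random.choice(['dress', 'suit'])
    counts[(c, g, f'{c}-{g}')] += 1.0

dist = act_distribution('rouge', 'dress')
# 'rouge' is unseen, so only the 'dress' component conditions the answer:
# the two dress acts share all the probability mass, the suit acts get none.
```

An information-erasing variant would instead fall back to the uniform distribution whenever any component is unknown, discarding what the receiver knows about ‘dress’ along with ‘rouge’.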
Freeborn argues that these models are simpler and more natural generalizations of traditional signaling games. They directly address the problem by allowing receivers to consider the component information available from atomic messages, rather than being restricted to only the entire message. While the final action still depends on the full message, the way the receiver processes and interprets the atomic messages to reach that conclusion is genuinely compositional.
These new models offer promising avenues for future research into the emergence of communication and language, suggesting that more sophisticated AI agents are key to unlocking a deeper understanding of compositional meaning in artificial systems.