TLDR: The paper introduces the Measure Theory of Semantic Information (MTSI) by George M. Coghill, a novel framework designed to accurately quantify semantic information. It critiques previous attempts, particularly Floridi’s Theory of Strongly Semantic Information (TSSI), for failing to resolve the “Bar-Hillel–Carnap paradox,” where contradictions are deemed maximally informative. MTSI, based on a unit circle analogy, successfully assigns zero informativeness to both contradictions and tautologies, providing a more intuitive and robust measure of meaning in information.
Understanding and measuring information, especially its meaning or ‘semantic’ content, has been a long-standing challenge in philosophy and information theory. While early work by figures like Shannon focused on the technical aspects of information transmission (how much data can be sent), the deeper question of what makes information meaningful, and how to quantify that meaning, remained open.
A classic attempt to quantify semantic information came from Bar-Hillel and Carnap. Their theory proposed that the informativeness of a statement was inversely related to its probability. In simple terms, the less likely something is, the more informative it becomes. While this seems intuitive for many cases, it led to a peculiar outcome: a contradiction (a statement that is always false, like ‘it is raining and it is not raining’) was considered maximally informative. This counter-intuitive result became known as the Bar-Hillel–Carnap paradox.
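The classical definitions can be sketched directly. Bar-Hillel and Carnap's content measure is standardly given as cont(s) = 1 − p(s), with an additive variant inf(s) = −log₂ p(s); a minimal sketch shows how a contradiction (p = 0) comes out maximally informative, which is the paradox:

```python
import math

def cont(p):
    """Bar-Hillel-Carnap content measure: cont(s) = 1 - p(s)."""
    return 1 - p

def inf(p):
    """Additive variant: inf(s) = -log2 p(s); infinite when p = 0."""
    return float('inf') if p == 0 else -math.log2(p)

# Tautology: always true, p = 1 -> zero informativeness (intuitive).
assert cont(1.0) == 0.0 and inf(1.0) == 0.0

# Contradiction: always false, p = 0 -> MAXIMAL informativeness
# (cont = 1, inf = infinity): the Bar-Hillel-Carnap paradox.
assert cont(0.0) == 1.0 and math.isinf(inf(0.0))
```

The formulas behave sensibly for ordinary contingent statements (rarer claims carry more content), but the limiting case of a necessarily false statement breaks the intuition the measure was meant to capture.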
In response to this paradox, Luciano Floridi developed his ‘Theory of Strongly Semantic Information’ (TSSI). Floridi’s approach aimed to resolve the paradox by insisting that for something to count as information, it must be true. He introduced a new way to measure informativeness based on the ‘discrepancy’ from the actual state of affairs, using a quadratic relationship. However, as critiqued by George M. Coghill in his recent paper, Floridi’s TSSI, despite its advancements, did not fully succeed in its primary goal. It still faced issues where contradictions did not consistently yield zero informativeness, and its measurement metrics were inconsistent across different types of information.
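Floridi's quadratic relationship is commonly presented as a degree of informativeness ι(σ) = 1 − ϑ(σ)², where ϑ ∈ [−1, 1] is the degree of semantic discrepancy from the actual state of affairs (negative toward falsehood, positive toward vacuity). A minimal sketch of that formula, under that standard reading:

```python
def floridi_informativeness(theta):
    """Degree of informativeness in TSSI: iota = 1 - theta^2,
    where theta in [-1, 1] is the semantic discrepancy from the
    actual situation (assumed convention: -1 = maximal inaccuracy,
    0 = fully accurate and precise, +1 = maximal vacuity)."""
    if not -1 <= theta <= 1:
        raise ValueError("discrepancy must lie in [-1, 1]")
    return 1 - theta ** 2

assert floridi_informativeness(0.0) == 1.0   # fully accurate message
assert floridi_informativeness(1.0) == 0.0   # tautology: pure vacuity
assert floridi_informativeness(-1.0) == 0.0  # maximal discrepancy
```

In this simple one-dimensional form the endpoints do score zero; Coghill's critique, as summarized here, concerns cases where contradictions fail to score zero consistently and where the metric is applied inconsistently across kinds of information.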
George M. Coghill, in his paper “Towards a Measure Theory of Semantic Information”, proposes a novel solution: the ‘Measure Theory of Semantic Information’ (MTSI). Drawing inspiration from the unit circle used in fields like signal theory and, by analogy, von Neumann’s quantum probability, Coghill introduces a new information space. Instead of a single linear scale, MTSI uses a multi-dimensional approach where ‘true’ and ‘false’ aspects of information are represented on orthogonal axes.
How MTSI Resolves the Paradox
The core innovation of MTSI is its ability to assign zero informativeness to both contradictions and tautologies (statements that are always true, like ‘it is raining or it is not raining’). This aligns with our intuition that such statements, while grammatically correct, convey no useful information. Furthermore, MTSI demonstrates that contradictory messages (like ‘x is true’ and ‘x is false’) can be equally informative, a concept illustrated through a classic riddle involving truth-telling and lying guards. The theory posits that the informativeness of a message is derived from its ‘distance’ from a state of complete misinformation or complete vacuity, within this new geometric space.
By squaring the metric values, similar to how wave amplitudes are squared to get probabilities in quantum mechanics, MTSI transforms distances into ‘measure values’ for vacuity (being uninformed), misinformation, and informativeness. This results in a balanced equation where the sum of informativeness, misinformation, and vacuity always equals one, providing a robust and coherent framework.
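The arithmetic behind this balance can be illustrated with a toy model. Assuming (purely for illustration, not as the paper's exact construction) that a message is represented as a unit vector with components along orthogonal informativeness, misinformation, and vacuity axes, squaring each component, just as wave amplitudes are squared in quantum probability, yields three measure values that necessarily sum to one:

```python
import math

def mtsi_measures(theta, phi):
    """Toy unit-vector model (illustrative, not Coghill's exact formalism):
    parameterize a point on the unit sphere, treat its components as
    'amplitudes' along informativeness, misinformation, and vacuity axes,
    and square them to get measure values."""
    informativeness = math.cos(theta) * math.cos(phi)
    misinformation = math.cos(theta) * math.sin(phi)
    vacuity = math.sin(theta)
    return informativeness ** 2, misinformation ** 2, vacuity ** 2

i, m, v = mtsi_measures(0.3, 0.8)
# The Pythagorean identity guarantees the balance equation:
assert abs(i + m + v - 1.0) < 1e-12
```

Whatever the angles, the squared components exhaust the unit total, which is exactly the coherence property the article attributes to MTSI: being informed, being misinformed, and being uninformed partition a fixed whole.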
Implications and Future Directions
MTSI successfully addresses the Bar-Hillel–Carnap paradox and meets the criteria Floridi set for an adequate theory of semantic information. It offers a more general and intuitive way to quantify meaning, ensuring that only truthful and non-trivial statements are considered informative. This new framework opens doors for further research into how semantic information relates to broader philosophical concepts, such as truthlikeness and different levels of abstraction in information analysis, potentially integrating with other comprehensive information theories.


