TL;DR: Researchers developed MADD, a multi-agent simulation framework, to model how disinformation spreads and how effectively social bots can correct it. MADD incorporates realistic network structures and detailed user and bot behaviors. Experiments show that early, fact-based interventions by legitimate bots are most effective at curbing disinformation and strengthening users’ ability to discern truth, while late interventions have limited impact and can even backfire. The study highlights the complexity of political misinformation and the critical importance of timely corrective action.
In today’s digital age, social media platforms are complex ecosystems where humans interact with automated accounts known as social bots. These bots play a significant role in both spreading and correcting false information, so understanding their influence is crucial for managing online risks and improving information governance. However, existing research often simplifies user and network behaviors, overlooks the dynamic nature of bots, and lacks quantitative ways to evaluate correction strategies.
To address these challenges, researchers have proposed a new framework called MADD, which stands for Multi-Agent-based framework for Disinformation Dissemination. MADD aims to create a more realistic simulation of how disinformation spreads and how it can be corrected by social bots.
Building a Realistic Online World
MADD constructs a propagation network that closely mimics real-world social networks. It integrates two key models: the Barabási–Albert Model, which accounts for the ‘scale-free’ nature of networks (where a few nodes have many connections, like influential users), and the Stochastic Block Model, which captures community structures (dense connections within groups, sparse between them). This allows MADD to simulate how information flows within and across different online communities.
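The article does not include the construction code, but one plausible way to combine the two models is to overlay a Barabási–Albert graph on a stochastic block model built over the same node set. The networkx sketch below does exactly that; the overlay step is our assumption, not necessarily MADD’s exact procedure:

```python
import networkx as nx

# Stochastic Block Model: dense edges within communities, sparse across them.
sizes = [100, 100, 100]                        # three communities
probs = [[0.05,  0.002, 0.002],
         [0.002, 0.05,  0.002],
         [0.002, 0.002, 0.05]]
g = nx.stochastic_block_model(sizes, probs, seed=42)

# Barabási–Albert overlay on the same node set: a few nodes accumulate
# many links, producing the heavy-tailed 'scale-free' degree distribution.
ba = nx.barabasi_albert_graph(n=sum(sizes), m=2, seed=42)
g.add_edges_from(ba.edges())

# Sanity check: hubs should now dominate the top of the degree list.
top = sorted((d for _, d in g.degree()), reverse=True)[:5]
print("top-5 degrees:", top)
```

The block matrix keeps information flowing densely within communities and only sparsely between them, while the overlay gives a handful of influential hub nodes outsized reach.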
The framework also designs detailed attributes for its agents, which include regular users, malicious bots (MBots) that spread disinformation, and legitimate bots (LBots) that correct it. These attributes are based on real-world user data and include factors like a user’s interest in specific communities, their ‘trust threshold’ (how likely they are to believe information), their ‘dissemination tendency’ (how likely they are to share), their ‘social influence’ (how many followers they have), and their ‘activation time’ (when they are active online).
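As a rough illustration, these attributes can be gathered into a simple agent record. This is a minimal sketch; the field names are our own shorthand, not MADD’s identifiers, and we read the trust threshold as the minimum plausibility an agent requires before believing content:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    agent_id: int
    role: str                       # "user", "mbot" (malicious), or "lbot" (legitimate)
    community_interests: dict       # community name -> interest weight in [0, 1]
    trust_threshold: float          # minimum plausibility needed to believe content
    dissemination_tendency: float   # probability of sharing believed content
    social_influence: int           # follower count (reach of each share)
    activation_times: list = field(default_factory=list)  # hours the agent is online
    state: str = "susceptible"      # susceptible / exposed / infected / uninfected
```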
Disinformation itself is modeled by its topic and ‘plausibility’ (how believable it seems). The simulation also considers factors like the ratio of malicious and legitimate bots and how frequently they are active. User interactions, such as reposts and quotes, are used to track how information spreads.
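With those attributes in place, a single exposure event can be sketched as below. The believe-then-share rule (plausibility must clear the trust threshold, then sharing is a weighted coin flip on dissemination tendency) and the state bookkeeping are our interpretation of the description above, not the paper’s exact update:

```python
import random

def expose(agent, message) -> bool:
    """Resolve one exposure to a message like
    {"topic": "Politics", "plausibility": 0.7}.
    Returns True if the agent reposts/quotes it to its followers."""
    if agent.community_interests.get(message["topic"], 0.0) == 0.0:
        return False                 # off-topic: the agent never engages
    agent.state = "exposed"
    if message["plausibility"] >= agent.trust_threshold:
        agent.state = "infected"     # believes the disinformation
        return random.random() < agent.dissemination_tendency
    agent.state = "uninfected"       # exposed but unconvinced
    return False
```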
Strategies for Correction
MADD evaluates two primary correction strategies: fact-based and narrative-based. Fact-based correction directly refutes disinformation with accurate data and scientific evidence, similar to professional fact-checking. Narrative-based correction, on the other hand, uses emotionally engaging stories or eyewitness accounts to convey the truth, aiming to weaken the emotional appeal of false information.
The timing of intervention is also critical. MADD explores three intervention stages (early, mid, and late) to identify when corrective actions are most effective.
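A correction run then amounts to a choice of strategy, timing, and bot ratio. The sketch below shows one way such a configuration could be encoded; the stage-to-timestep mapping and the example bot ratio are illustrative assumptions, not values from the paper:

```python
from dataclasses import dataclass

@dataclass
class CorrectionConfig:
    strategy: str       # "fact" (evidence-driven) or "narrative" (story-driven)
    stage: str          # "early", "mid", or "late"
    lbot_ratio: float   # legitimate bots as a fraction of all agents

def intervention_step(stage: str, horizon: int) -> int:
    """Map a named stage to the timestep at which LBots switch on.
    The 10% / 50% / 90% split is an illustrative assumption."""
    return {"early": horizon // 10,
            "mid": horizon // 2,
            "late": int(horizon * 0.9)}[stage]

# Example: the setting the experiments found most effective,
# with a hypothetical bot ratio.
config = CorrectionConfig(strategy="fact", stage="early", lbot_ratio=0.05)
start = intervention_step(config.stage, horizon=1000)
```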
Measuring the Impact
The impact of disinformation and correction is assessed at both the individual and group levels. At the individual level, MADD tracks changes in a user’s ‘trust threshold’ over time, showing how repeated exposure to false or corrective information affects their ability to discern truth. At the group level, it measures the proportion of users who are susceptible (unexposed), exposed (have encountered disinformation), infected (believe and spread it), or uninfected (were exposed but remain unconvinced, and may spread corrections).
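Both levels of measurement are straightforward to compute on top of the earlier sketches. In the version below, the group tally reuses the `state` field set by `expose()`, and the linear trust-threshold update is our own simplification of the individual-level dynamics:

```python
from collections import Counter

def group_state_shares(agents) -> dict:
    """Group level: fraction of agents in each of the four states.
    The `state` field is maintained by expose() in the earlier sketch."""
    counts = Counter(a.state for a in agents)
    return {s: counts.get(s, 0) / len(agents)
            for s in ("susceptible", "exposed", "infected", "uninfected")}

def update_trust(agent, saw_correction: bool, step: float = 0.02) -> None:
    """Individual level: corrective exposure nudges the trust threshold up
    (the agent becomes harder to fool); believed disinformation nudges it
    down. The linear step size is an illustrative assumption."""
    delta = step if saw_correction else -step
    agent.trust_threshold = min(1.0, max(0.0, agent.trust_threshold + delta))
```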
Key Findings from Simulations
The experiments conducted using MADD showed several important insights:
- The framework’s user attributes and network configurations align well with real social networks, and its simulations of disinformation spread are consistent with existing empirical studies.
- Disinformation tends to spread rapidly within communities but more slowly across them, due to the inherent community structure.
- Early intervention is significantly more effective than mid-stage or late intervention in curbing disinformation spread. In some cases, late interventions can even have negative impacts, potentially strengthening existing false beliefs due to ‘echo chamber’ effects.
- Fact-based correction generally proves more effective than narrative-based correction, especially in communities like ‘Business’, ‘Politics’, and ‘Technology’. However, in ‘Entertainment’, neither strategy had a significant impact, possibly due to the subjective nature of the content.
- The ‘Politics’ community showed unique and fluctuating patterns of disinformation spread, highlighting the challenges of correcting political misinformation.
- Even without external intervention, some spontaneous debunking occurs within the network, but its impact is limited.
- Increasing the ratio of legitimate bots leads to a faster decline in the infection rate of disinformation.
- Long-term simulations confirmed that early fact-based interventions can significantly reduce the influence of disinformation and enhance users’ ability to identify it.
While MADD offers valuable insights, the researchers acknowledge limitations such as computational resource constraints, which limited the duration and scale of some simulations. Future work could explore larger networks and longer simulation periods.
This research provides a robust framework for understanding the complex dynamics of disinformation and the crucial role social bots play in both its spread and correction. For full details, see the original research paper.


