
AI Models Enhance Online Political Discourse Quality, Study Reveals

TLDR: A recent study published in Science Advances demonstrates that AI-powered large language models (LLMs) can significantly improve the quality of online political discussions. By generating polite, evidence-based counterarguments, these AI systems nearly doubled the chances of high-quality conversations and substantially increased participants’ openness to alternative viewpoints, without altering their core political ideologies.

Researchers have successfully trained an artificial intelligence model to respond to online political posts, leading to a notable improvement in the quality of digital discourse. The findings, published in the journal Science Advances on July 27, 2025, indicate that AI-powered large language models (LLMs) can foster more constructive online interactions.

The study involved nearly 3,000 participants from the United States and the United Kingdom. Participants were asked to express their views on a politically charged issue, and their responses were then countered by ChatGPT, acting as a ‘fictitious social media user.’ This AI system was specifically trained to provide polite, evidence-based counterarguments tailored to the original post’s position and reasoning.

The results were compelling: the AI-generated, evidence-based replies nearly doubled the chances of a high-quality online conversation. Furthermore, they substantially increased participants’ openness to alternative viewpoints. Specifically, evidence-based counterarguments increased the likelihood of a high-quality response by six percentage points, willingness to compromise by five percentage points, and respectfulness by nine percentage points. It is crucial to note, however, that while participants became more receptive to different perspectives, their fundamental political ideologies remained unchanged.

Gregory Eady, an associate professor of political science and data science at the University of Copenhagen, Denmark, and an author of the study, highlighted the potential of LLMs to offer ‘light-touch suggestions.’ These could include alerting social media users to disrespectful tones in their posts. Eady suggested that such AI systems could be integrated into school curricula to teach young people best practices for discussing contentious topics online.


Despite the promising results, experts caution against relying entirely on AI systems to regulate online discourse. AI-powered models have faced scrutiny for inherent biases, including political and racial biases, and for operating as ‘black boxes’ whose internal processes cannot be traced. Experts like Kapoor, who was not involved in the study, note that the study relied on human ratings of responses, and that factors such as context, culture, and timing must be considered, especially in diverse nations like India, before any widespread implementation.

Meera Iyer (https://blogs.edgentiq.com)
Meera Iyer is an AI news editor who blends journalistic rigor with storytelling elegance. Formerly a content strategist at a leading tech firm, Meera now tracks the pulse of India's Generative AI scene, from policy updates to academic breakthroughs. She is particularly focused on bringing nuanced, balanced perspectives to the fast-evolving world of AI-powered tools and media. You can reach her at: [email protected]
