TL;DR: A study investigated ChatGPT’s persuasive ability on ethically sensitive topics. It found that while ChatGPT constructs coherent, consistently structured arguments, its linguistic richness is limited and its persuasive efficacy is constrained, especially on ethical issues. Participants often acknowledged the benefits it argued for, yet their ethical concerns persisted or intensified, suggesting human resistance to AI persuasion on such topics.
In an era where artificial intelligence increasingly shapes our digital landscape, understanding how AI-generated content influences human thought is paramount. A recent study delves into this very question, examining the persuasive capabilities of Large Language Models (LLMs) like ChatGPT, particularly when discussing topics with ethical dimensions.
The research, titled “How Persuasive Could LLMs Be? A First Study Combining Linguistic-Rhetorical Analysis and User Experiments”, was conducted by Daniel Raffini, Agnese Macori, Lorenzo Porcaro, Tiziana Catarci, and Marco Angelini. Their work combines a linguistic and rhetorical analysis of AI-generated texts with a user experiment to gauge the impact on human readers.
The Study’s Approach
The researchers engaged 62 participants in a pre- and post-interaction survey. Participants were split into two groups, each assigned a different topic to discuss with ChatGPT: incentivizing the use of robots for elderly care, or the impact of a 4-day working week on productivity. These topics were chosen for their general interest and their subtle yet complex ethical implications, with the aim of minimizing pre-existing biases.
Before interacting with ChatGPT, participants completed a survey establishing their initial opinions and knowledge. They then used the free version of ChatGPT-4, entering a standardized prompt that asked it to generate an argumentative text of about 3,000 characters supporting a specific opinion on their assigned topic. A post-interaction survey then assessed any changes in opinion or knowledge.
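The paper does not publish its analysis scripts, but a pre/post design like this lends itself to a simple paired comparison. Below is a minimal Python sketch, with invented Likert ratings and variable names of our own choosing, of how such an opinion shift might be quantified; the Wilcoxon signed-rank test is a common choice for paired ordinal data, not necessarily the authors’ method.

```python
# A minimal sketch (not the authors' code) of quantifying a pre/post
# opinion shift. Data and variable names are invented for illustration.
from statistics import mean
from scipy.stats import wilcoxon  # paired, non-parametric test suited to Likert ratings

# Hypothetical 1-5 Likert ratings from one group, collected before and
# after reading the ChatGPT-generated argumentative text.
pre  = [3, 4, 2, 3, 4, 3, 2, 4, 3, 2]
post = [4, 5, 3, 4, 5, 4, 3, 3, 4, 3]

shifts = [b - a for a, b in zip(pre, post)]
print(f"mean opinion shift: {mean(shifts):+.2f}")  # positive = moved toward the argued thesis

stat, p = wilcoxon(pre, post)  # do the paired ratings differ systematically?
print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p:.3f}")
```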
AI’s Argumentative Style
The linguistic and rhetorical analysis of the 62 generated texts revealed a consistent pattern. ChatGPT demonstrated a clear ability to construct coherent argumentative texts, often adhering to a fixed six-paragraph structure: an introduction stating a thesis, two paragraphs supporting it, two paragraphs addressing and refuting potential objections, and a concluding paragraph reinforcing the main claim. Even counterarguments were strategically managed to ultimately support the initial thesis.
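One way to make this pattern concrete is to write the template down as data that each generated text can be matched against. The sketch below is hypothetical; the role labels paraphrase the structure the paper describes, and the helper function is our own invention.

```python
# Illustrative encoding (not from the paper) of the fixed six-paragraph
# structure the analysis reports, usable to spot-check generated texts.
ARGUMENT_TEMPLATE = [
    "introduction stating the thesis",
    "first argument supporting the thesis",
    "second argument supporting the thesis",
    "objection raised and refuted",
    "further objection raised and refuted",
    "conclusion reinforcing the thesis",
]

def label_paragraphs(text: str) -> list[tuple[str, str]]:
    """Pair each blank-line-separated paragraph with its expected rhetorical role."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    if len(paragraphs) != len(ARGUMENT_TEMPLATE):
        raise ValueError(
            f"expected {len(ARGUMENT_TEMPLATE)} paragraphs, got {len(paragraphs)}"
        )
    return list(zip(ARGUMENT_TEMPLATE, paragraphs))
```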
However, the study also noted limited linguistic richness and a reliance on formulaic expressions. The language tended to be neutral and linear, with few rhetorical figures beyond common metaphors and antithesis. While clear and direct, this style often produced a discourse that felt somewhat superficial, especially when navigating ethical complexities. When confronted with objections, for instance, ChatGPT sometimes struggled to mount robust counter-arguments, instead offering compromises that did not fully overturn the criticism.
User Perceptions and Persuasion
The user study revealed nuanced findings regarding ChatGPT’s persuasive efficacy. For both topics, initial opinions were generally positive. After reading the AI-generated texts, participants showed a slight positive shift in opinion, with variations depending on the topic. For the 4-day working week, the perception of productivity benefits notably increased, and concerns generally decreased or remained unchanged.
However, for robots in elderly care, a different pattern emerged. While participants often acknowledged the benefits ChatGPT highlighted, their ethical concerns tended to persist or even intensify after the interaction. This suggests a human resistance to persuasion when ethical issues are at stake, especially if the AI’s arguments appear to downplay or circumvent those concerns. The study posits that a heavily rhetorical register, combined with the circumvention of ethical issues, may make AI texts less persuasive and erode trust in AI systems.
Implications for AI and Human Interaction
The findings underscore that while LLMs can generate structurally sound argumentative texts, their persuasive power is not uniform. The study highlights a utilitarian tendency in the AI’s argumentation, in which ethical considerations may be sidelined to strengthen a particular stance. In sensitive domains this can prove ineffective: human readers showed they could hold on to their ethical concerns even while recognizing the benefits presented.
This research provides valuable insights for the development and deployment of AI systems, especially in contexts where influencing public opinion is a factor. It suggests that future LLMs might need to evolve beyond formulaic persuasion, incorporating a deeper understanding and more nuanced engagement with ethical complexities to build greater trust and achieve more genuine persuasive outcomes.