
Unpacking Generative Propaganda: A New Look at AI’s Impact on Public Opinion

TLDR: A research paper on “Generative Propaganda” examines how generative AI is used to influence public opinion, focusing on Taiwan and India. It finds that deepfakes were less prevalent in the 2024 elections than feared; instead, AI is used broadly for persuasion and distortion, often with its use made deliberately obvious to reduce legal and reputational risk. The technology offers three key efficiency gains: reduced detectability, multimodality, and multilingualism. Social factors such as legal exposure and reputational cost, rather than technical limitations alone, significantly constrain internal actors. The paper recommends a socio-technical defense strategy: precise identification of AI use, visible watermarks, scaled detection, cross-lingual monitoring, media literacy, user verification, and improved reputation systems.

A recent working paper titled “Generative Propaganda” delves into how generative artificial intelligence (AI) is being used to influence public opinion, moving beyond the common focus on “deepfakes.” Authored by Madeleine I. G. Daepp, Alejandro Cuevas, Robert Osazuwa Ness, Vickie Yu-Ping Wang, Bharat Kumar Nayak, Dibyendu Mishra, Ti-Chung Cheng, Shaily Desai, and Joyojeet Pal, the research offers a nuanced view of AI’s role in political communication, particularly in Taiwan and India.

The paper highlights that while there was widespread concern about a “deepfake deluge” during the 2024 election year, the actual observed use of deceptive AI-generated content was less frequent than anticipated. Instead, the study found that generative AI is employed in a much broader range of applications, often with the intent to persuade or distort narratives rather than solely to deceive.

Understanding Generative AI’s Diverse Uses

To categorize the ways AI is used, the researchers developed a taxonomy along two axes: whether AI’s use is “obvious” or “hidden,” and whether the depiction is “promotional” or “derogatory.” Crossing the two axes yields four categories, listed below (a minimal code sketch of the mapping follows the list).

  • Soft Fakes (Promotional / Obvious): These are humorous or laudatory AI-generated representations, often used by influencers. An example includes portraying a political leader with religious iconography or superimposed onto a dancing video, where the AI use is clearly visible.
  • Auth Fakes (Promotional / Hidden): This refers to authorized AI-generated portrayals of candidates, such as holiday wishes delivered in a candidate’s voice or speeches dubbed into different languages. The AI use here is intended to be less obvious, serving to enhance communication.
  • Deep Roasts (Derogatory / Obvious): These are satirical representations designed to mock or denigrate candidates, like face-swaps of opposition leaders into comedic scenes or filters that make a candidate appear angry. The intent is clearly humorous or critical, with AI use being apparent.
  • Deep Fakes (Derogatory / Hidden): This category is reserved for the traditional understanding of deepfakes—serious, derogatory representations of events or activities that did not actually occur, where the AI use is concealed to deceive. The study found these to be a smaller subset of overall AI misuse.
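As a minimal illustration of the two-by-two structure (the enum and function names here are invented for this sketch, not taken from the paper), the mapping can be written as:

```python
from enum import Enum

# Illustrative axis names for the paper's two dimensions.
class Visibility(Enum):
    OBVIOUS = "obvious"
    HIDDEN = "hidden"

class Depiction(Enum):
    PROMOTIONAL = "promotional"
    DEROGATORY = "derogatory"

# The four categories from the paper, keyed by the two axes.
TAXONOMY = {
    (Depiction.PROMOTIONAL, Visibility.OBVIOUS): "Soft Fake",
    (Depiction.PROMOTIONAL, Visibility.HIDDEN): "Auth Fake",
    (Depiction.DEROGATORY, Visibility.OBVIOUS): "Deep Roast",
    (Depiction.DEROGATORY, Visibility.HIDDEN): "Deep Fake",
}

def classify(depiction: Depiction, visibility: Visibility) -> str:
    """Return the taxonomy label for a (depiction, visibility) pair."""
    return TAXONOMY[(depiction, visibility)]

# Only the hidden-and-derogatory quadrant is a deepfake in the strict sense.
print(classify(Depiction.DEROGATORY, Visibility.HIDDEN))  # Deep Fake
```

The point of the structure is that only one of the four quadrants corresponds to the classic deepfake, which helps explain why deepfake-focused defenses miss most of the observed uses.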

Beyond these representational uses, the paper also identifies other significant applications:

  • AIPasta: The use of generative AI to create artificial variations in social media posts, making them appear more organic and harder for human and algorithmic detectors to flag as spam.
  • Precision Propaganda: AI’s ability to micro-target and tailor messages, such as videos, to specific audiences based on demographics like caste or occupation.
  • AI Slop: Low-quality or absurd AI-generated content that proliferates online.

Motivations and Efficiency Gains

The primary motivations for using AI, especially among creators in India, were persuasion and distortion. Creators often made AI’s use obvious to reduce legal and reputational risks, finding that transparency did not necessarily compromise the content’s persuasive power. In Taiwan, distortion—the amplification of certain narratives and distraction from others—was a significant concern, with AI enabling rapid and scaled proliferation of online narratives.

In service of these motivations, generative AI offers three crucial efficiency gains:

  • Reduced Detectability: AIPasta, for instance, evades detection by introducing artificial variation into posts, making it harder for fact-checkers and algorithms to identify coordinated campaigns (the toy sketch after this list shows why exact-match detection misses such variants).
  • Multimodality: AI enables actors to easily expand operations across different media, turning text or images into videos. This is particularly impactful given the rise of short-form video platforms, which are harder for defenders to monitor.
  • Multilingualism: AI significantly enhances the ability of creators to operate across different languages, allowing small teams to scale their influence to new regions and overcome linguistic barriers that previously served as a defense mechanism.
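To see why artificial variation defeats naive detection, consider this toy Python sketch (the example posts and the word-overlap scoring are invented for illustration and are not the paper's method): exact deduplication treats paraphrased variants as unrelated, while even a simple lexical-overlap score still groups the coordinated cluster.

```python
from itertools import combinations

# Hypothetical AIPasta-style variants of one message, plus one unrelated post.
posts = [
    "The new policy will ruin small businesses across the country.",
    "Small businesses nationwide will be destroyed by this new policy.",
    "This policy spells disaster for the nation's small businesses.",
    "Local team wins the championship after a dramatic final match.",
]

def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two posts (0.0 to 1.0)."""
    ta = set(a.lower().rstrip(".").split())
    tb = set(b.lower().rstrip(".").split())
    return len(ta & tb) / len(ta | tb)

# Exact-match deduplication sees four distinct, seemingly unrelated posts.
print(len(set(posts)))  # 4

# Fuzzy scoring separates the coordinated variants from the unrelated post.
for (i, a), (j, b) in combinations(enumerate(posts), 2):
    print(f"posts {i} vs {j}: {jaccard(a, b):.2f}")
```

Production systems would use embeddings or stylometric features rather than raw word overlap, but the asymmetry is the same: variation that AI generates cheaply must be matched by fuzzier, scaled-up detection on the defender's side.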

Threats and Constraints

The research identifies various threat actors, including nation-states, political campaigns, troll groups, influencers, and content farms. It also examines the constraints these actors face. Legal risks, such as defamation charges or prison time for deepfake distribution, were found to be significant deterrents for internal actors with traceable public profiles (like local influencers and political campaigns). Reputational economics also played a role, as influencers feared losing their accrued value if discovered spreading deceptive content.

Social context is another vital constraint. Defenders often rely on their geopolitical knowledge and understanding of public reaction to discern fake content, rather than solely technical detection tools. However, external actors or those with disposable online identities (like many troll groups and content farms) are less affected by these social constraints.

Interestingly, the paper argues that existing theories often overemphasize technical limitations (model capabilities, guardrails, distribution bottlenecks) as primary constraints. While these can affect participatory actors, technically savvy or well-funded actors can often bypass them using open-source models or grey markets for distribution.


Recommendations for a Robust Defense

The paper concludes with several implications for security researchers and policymakers, emphasizing the need to look beyond deepfakes and address the broader scope of generative propaganda. Key recommendations include:

  • Identify AI’s Use Precisely: Differentiate adversarial deepfakes (hidden, derogatory) from other uses to avoid false positives and focus interventions effectively.
  • Incorporate Visible/Audible Watermarks: While not foolproof, default watermarks can indicate deceptive intent if removed and raise public AI literacy.
  • Scale and Contextualize AI Detection: Develop better tools to detect large-scale AI-generated content, especially AIPasta, and integrate social context into detection.
  • Leverage Local Data for Alignment: Incorporate fact-checker reports and crowd-sourced data into AI model development to improve guardrails across diverse languages and cultures.
  • Develop Cross-Platform and Cross-Lingual Monitoring: Create innovative methods for researchers to access and analyze data across platforms and languages, matching the adversaries’ efficiency gains.
  • Create Multilingual and Multimodal Media Literacy Content: Empower audiences by providing educational content across various platforms and languages, teaching not just literacy but also competency in digital public spheres.
  • Piggyback on Existing Systems for Verifying Users: Enhance user traceability through methods like payment account linking, which can bolster legal enforceability.
  • Improve Reputation Indicators: Develop robust reputation systems for social media accounts, especially for those operated by troll groups and content farms, to constrain the distribution of generative propaganda.
  • Empower Defenders with Certified Content: Provide tools like cryptographic binding of images with metadata (e.g., the C2PA standard) to verify the provenance of legitimate content, preempting distortion with accurate records (a simplified signing sketch follows this list).
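As a rough sketch of the last item (using the Python `cryptography` package; this simplified digest-and-signature scheme is a stand-in for, not an implementation of, the C2PA manifest format), cryptographic binding can work as follows:

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def digest(image_bytes: bytes, metadata: dict) -> bytes:
    """Hash the content together with its metadata so neither can change alone."""
    canonical = json.dumps(metadata, sort_keys=True).encode()
    return hashlib.sha256(image_bytes + canonical).digest()

# Publisher side: sign the combined digest at capture or publication time.
image_bytes = b"...raw image data..."  # placeholder for a real file's bytes
metadata = {"creator": "Example Newsroom", "captured": "2024-05-01T12:00:00Z"}

private_key = ed25519.Ed25519PrivateKey.generate()
signature = private_key.sign(digest(image_bytes, metadata))
public_key = private_key.public_key()

# Verifier side: recompute the digest and check it against the signature.
try:
    public_key.verify(signature, digest(image_bytes, metadata))
    print("Provenance verified: content and metadata are unmodified.")
except InvalidSignature:
    print("Verification failed: content or metadata was altered.")
```

Any edit to the image bytes or the metadata changes the digest, so the signature check fails; the real C2PA standard additionally embeds the signed manifest in the file and chains edit history, but the binding principle is the same.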

This comprehensive study underscores that effective defenses against generative propaganda require a socio-technical approach, recognizing that human factors and social contexts are as crucial as technological advancements. For more details, you can read the full paper here.

Karthik Mehta (https://blogs.edgentiq.com)
Karthik Mehta is a data journalist known for his data-rich, insightful coverage of AI news and developments. Armed with a degree in Data Science from IIT Bombay and years of newsroom experience, Karthik merges storytelling with metrics to surface deeper narratives in AI-related events. His writing cuts through hype, revealing the real-world impact of Generative AI on industries, policy, and society. You can reach him at: [email protected]
