
AI Models Show Strong Potential for Automating Metaphor Identification in Texts

TLDR: A new research paper explores how Large Language Models (LLMs) can automate metaphor identification in full texts, comparing Retrieval-Augmented Generation (RAG), Prompt Engineering (zero-shot, few-shot, chain-of-thought), and Fine-tuning. The study found that fine-tuning achieved the highest accuracy, with chain-of-thought prompting also performing strongly, especially with advanced closed-source LLMs. The analysis of discrepancies between human and AI annotations revealed that LLM errors often mirror known challenges in metaphor theory, suggesting LLMs can not only automate the task but also help refine theoretical understandings of metaphor. This work paves the way for more scalable and robust metaphor analysis.

Metaphors are everywhere in our daily conversations and writing, offering a window into how we think, feel, and express our beliefs. Analyzing metaphors at scale, however, has always been difficult: it requires people to identify them manually, a time-consuming and often subjective process. This challenge limits how much metaphor analysis can be done and how widely its findings can be applied.

Recent advancements in Artificial Intelligence, particularly with Large Language Models (LLMs), offer a promising solution to automate this complex task. A new study, titled Metaphor Identification Using Large Language Models: A Comparison of RAG, Prompt Engineering, and Fine-Tuning, explores the potential of these advanced AI systems to identify metaphors in complete texts, moving beyond the simpler, single-word classifications of previous research.

The researchers, Matteo Fuoli, Weihang Huang, Jeannette Littlemore, Sarah Turner, and Ellen Wilding from the University of Birmingham and Coventry University, investigated three main approaches to deploying LLMs for metaphor identification:

Retrieval-Augmented Generation (RAG)

In this method, the LLM is given a detailed guide (a ‘codebook’) with rules and examples of metaphors. The model then uses this external knowledge to annotate texts, mimicking how a human annotator would use a guide.
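The paper itself does not include code, but a rough sketch of what a RAG-style annotator might look like is shown below, assuming a hypothetical codebook.txt of identification rules and the OpenAI Python client; the simple embedding lookup here stands in for whatever retrieval pipeline the authors actually used.

```python
# Hypothetical RAG sketch: retrieve relevant codebook rules, then ask the LLM
# to annotate a sentence. File names and model choices are illustrative only.
from openai import OpenAI
import numpy as np

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# 1. Load the codebook and split it into retrievable rule chunks.
with open("codebook.txt", encoding="utf-8") as f:
    rules = [chunk.strip() for chunk in f.read().split("\n\n") if chunk.strip()]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

rule_vectors = embed(rules)

def annotate(sentence, k=3):
    # 2. Retrieve the k codebook rules most similar to the sentence.
    q = embed([sentence])[0]
    sims = rule_vectors @ q / (np.linalg.norm(rule_vectors, axis=1) * np.linalg.norm(q))
    context = "\n\n".join(rules[i] for i in sims.argsort()[-k:][::-1])

    # 3. Ask the model to mark metaphorically used words, guided by those rules.
    prompt = (
        f"Codebook excerpts:\n{context}\n\n"
        f"Using these rules, list every metaphorically used word in:\n\"{sentence}\""
    )
    resp = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

print(annotate("The plot limps along until the final act explodes into life."))
```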

Prompt Engineering

This approach involves carefully crafting verbal instructions, or ‘prompts,’ to guide the LLM. The study tested several strategies within prompt engineering:

  • Zero-shot: The model receives only the task description and relies on its existing knowledge.
  • Few-shot: The model is given a few examples of correctly annotated sentences to learn from before tackling new texts.
  • Chain-of-thought: Similar to few-shot, but with added explanations of the reasoning behind each example’s annotation, helping the model understand the decision-making process.

The study also explored how the number of examples (four or eight) and the mix of conventional versus creative metaphors in those examples affected performance.
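To make the differences between these strategies concrete, the sketch below assembles illustrative zero-shot, few-shot, and chain-of-thought prompts; the task wording, examples, and reasoning are placeholders, not the prompts used in the study.

```python
# Illustrative prompt templates for the three strategies; the exact wording,
# examples, and reasoning are placeholders, not the study's actual prompts.
TASK = ("Identify every word used metaphorically in the sentence below, "
        "following the annotation guidelines.")

EXAMPLES = [
    ("The review tore the film apart.",
     "tore ... apart",
     "The basic meaning of 'tear apart' is physical destruction; here it "
     "describes harsh criticism, so the contrast makes it metaphorical."),
    ("The actor walked onto the stage.",
     "(none)",
     "'Walked' is used in its basic, literal sense of physical movement."),
]

def zero_shot(sentence):
    return f"{TASK}\n\nSentence: {sentence}\nMetaphorical words:"

def few_shot(sentence):
    shots = "\n".join(f"Sentence: {s}\nMetaphorical words: {a}"
                      for s, a, _ in EXAMPLES)
    return f"{TASK}\n\n{shots}\n\nSentence: {sentence}\nMetaphorical words:"

def chain_of_thought(sentence):
    # Same as few-shot, but each example also spells out the reasoning.
    shots = "\n".join(f"Sentence: {s}\nReasoning: {r}\nMetaphorical words: {a}"
                      for s, a, r in EXAMPLES)
    return f"{TASK}\n\n{shots}\n\nSentence: {sentence}\nReasoning:"

print(chain_of_thought("The plot limps along in the second half."))
```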


Fine-tuning

This method involves further training the LLM on a specific dataset of hand-coded texts. Unlike prompt engineering, fine-tuning actually modifies the model’s internal parameters, allowing it to optimize its performance for the metaphor identification task based on direct exposure to labeled data.
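As an illustration of how this differs from prompting, the sketch below builds a small training file of hand-coded sentences and submits a fine-tuning job through the OpenAI API; the file names, example data, and base model are assumptions rather than details from the paper.

```python
# Minimal sketch of supervised fine-tuning on hand-annotated examples via the
# OpenAI fine-tuning API; data, file names, and the base model are assumptions.
import json
from openai import OpenAI

client = OpenAI()

# 1. Convert hand-coded sentences into chat-formatted training records.
#    Two records are shown for illustration; a real job needs many more.
annotated_data = [
    ("The plot limps along in the second half.", "limps"),
    ("She watched the film twice.", "(none)"),
]
with open("train.jsonl", "w", encoding="utf-8") as f:
    for sentence, labels in annotated_data:
        record = {"messages": [
            {"role": "user", "content": f"List the metaphorically used words in: {sentence}"},
            {"role": "assistant", "content": labels},
        ]}
        f.write(json.dumps(record) + "\n")

# 2. Upload the data and launch a fine-tuning job; unlike prompting, this
#    updates the model's weights based on the labeled examples.
training_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # assumed base model, not the one used in the study
)
print(job.id)
```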

The study applied these methods to a corpus of IMDb film reviews that had been manually annotated by human experts. The performance was measured using precision, recall, and F1 scores, which assess how accurately the models identified metaphors compared to the human annotations.
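For readers less familiar with these metrics, the short sketch below shows how precision, recall, and F1 reduce to counts of correct hits, false alarms, and misses when a model's word-level labels are compared against the human gold standard (the data here is purely illustrative).

```python
# Precision, recall, and F1 computed from word-level metaphor labels,
# comparing model output against human annotations (illustrative data only).
gold  = {"limps", "explodes"}          # words the human annotators marked
model = {"limps", "final"}             # words the model marked

tp = len(gold & model)                 # correctly identified metaphors
fp = len(model - gold)                 # words wrongly flagged as metaphors
fn = len(gold - model)                 # metaphors the model missed

precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)
print(f"P={precision:.2f} R={recall:.2f} F1={f1:.2f}")
```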

The results showed that fine-tuning achieved the highest accuracy, with a median F1 score of 0.79. Prompt engineering followed, and RAG came in third. Notably, advanced closed-source LLMs consistently outperformed open-source models and were more responsive to the different methods. Within prompt engineering, the chain-of-thought strategy, especially with best-in-class reasoning LLMs, yielded the best results, demonstrating that providing examples and explicit reasoning steps significantly improves performance.

An interesting aspect of the study was the qualitative analysis of discrepancies between human and LLM annotations. The researchers found that most errors made by the LLMs were not random but systematic, often reflecting well-known challenges and ‘grey areas’ in metaphor theory that even human experts grapple with. These included difficulties with highly conventional metaphors, personification, explicit comparisons (like similes), grammatical cues for metaphoricity, distinguishing between source and target domains, phrase-level meanings, and ‘twice-true’ metaphors that are both literal and metaphorical in context.

This suggests that LLMs can serve not just as tools for automation but also as a ‘testbed’ for refining our theoretical understanding of metaphor and improving annotation protocols. The study concludes that LLMs offer a viable path to at least partially automate metaphor identification, making large-scale metaphor analysis more feasible and its findings more generalizable. This could free up researchers to focus on higher-level interpretation and theory-building, transforming the field of metaphor studies.

Karthik Mehta (https://blogs.edgentiq.com)
Karthik Mehta is a data journalist known for his data-rich, insightful coverage of AI news and developments. Armed with a degree in Data Science from IIT Bombay and years of newsroom experience, Karthik merges storytelling with metrics to surface deeper narratives in AI-related events. His writing cuts through hype, revealing the real-world impact of Generative AI on industries, policy, and society. You can reach him at: [email protected]
