TLDR: MORABLES is a new human-verified benchmark that uses fables to evaluate the abstract moral reasoning of large language models (LLMs). It reveals that while larger LLMs perform better on multiple-choice moral inference, they often rely on superficial patterns, are vulnerable to adversarial attacks, and contradict their own answers, suggesting that model scale, not true reasoning, drives performance. The benchmark highlights challenges in LLM moral understanding and the limitations of current evaluation metrics.
As large language models (LLMs) continue to advance, their ability to handle complex abstract reasoning and inference is becoming a critical area of evaluation. Moving beyond basic reading comprehension, researchers are building benchmarks that probe deeper understanding, particularly in domains rich in narrative and moral nuance.
A new benchmark called MORABLES has been introduced to assess the abstract moral reasoning capabilities of LLMs using fables and short stories from historical literature. This human-verified benchmark is structured around multiple-choice questions that specifically target moral inference. The questions feature carefully designed incorrect options, known as distractors, which challenge models to go beyond simple text extraction and engage in genuine moral reasoning.
To further test the robustness of these models, MORABLES includes adversarial variants. These variants are designed to expose vulnerabilities and shortcuts LLMs might take, such as relying on data contamination or superficial patterns rather than true understanding. The benchmark comprises 709 short stories and fables, primarily from Western literary tradition, each paired with its original moral. The main task requires models to select the correct moral from five candidates.
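The paper's evaluation harness is not reproduced here, but the task format is straightforward to sketch. Below is a minimal, hypothetical representation of a MORABLES-style item and an accuracy loop; `FableItem`, `ask_model`, and the prompt wording are assumptions made for illustration, not artifacts of the benchmark.

```python
from dataclasses import dataclass

@dataclass
class FableItem:
    story: str           # the fable text
    options: list[str]   # five candidate morals, exactly one correct
    answer_idx: int      # index of the gold moral

def evaluate(items, ask_model):
    """ask_model(prompt) -> 0-based index of the chosen option;
    it stands in for any LLM API call (an assumption, not the paper's code)."""
    correct = 0
    for item in items:
        prompt = (
            f"{item.story}\n\nWhich moral best fits this fable?\n"
            + "\n".join(f"{i + 1}. {opt}" for i, opt in enumerate(item.options))
        )
        correct += int(ask_model(prompt) == item.answer_idx)
    return correct / len(items)
```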
The findings from MORABLES indicate that while larger LLMs generally outperform smaller ones, they remain susceptible to adversarial manipulation. This suggests that even the most advanced models often depend on superficial patterns rather than deep, genuine moral understanding. A striking observation is the brittleness of these models: the best performers contradict their own answers in approximately 20% of cases, depending on how the moral choice is framed. Interestingly, models enhanced with explicit reasoning capabilities do not close this gap, suggesting that sheer model scale, rather than reasoning ability, is the primary driver of performance on this task.
The study also explored alternative evaluation procedures, including a True/False (TF) variant and a ‘None of the other options’ (NOTO) variant. The TF variant revealed that models tend to over-predict the ‘True’ class, showing high recall (identifying the correct moral) but low precision (mistakenly accepting distractors). The NOTO variant exposed a strong reluctance to select ‘None of the others’: models often chose an incorrect moral even when they had previously rejected it in the TF setting. This behavior points either to sycophancy, with models preferring a wrong answer over no answer, or to strong memory-based associations.
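In TF terms, over-predicting ‘True’ inflates false positives while keeping false negatives low. A minimal sketch of how precision and recall would be computed in this setting is shown below, assuming each TF question reduces to a pair of booleans (the model's verdict and whether the candidate moral is the gold one); the function name and data layout are illustrative, not taken from the paper.

```python
def tf_precision_recall(judgments):
    """judgments: list of (model_said_true, is_gold_moral) boolean pairs,
    one per (story, candidate moral) question in the TF variant."""
    tp = sum(1 for said, gold in judgments if said and gold)
    fp = sum(1 for said, gold in judgments if said and not gold)
    fn = sum(1 for said, gold in judgments if not said and gold)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# A model that answers 'True' to almost everything drives fp up:
# recall stays near 1.0 while precision collapses.
```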
Adversarial modifications, such as character swaps, trait injections, and the insertion of tautologies (short, self-contained sentences that add no new information), significantly degraded model performance. Tautologies proved to be the most influential modification, especially when appended to the end of a story. Both GPT-4o and Llama 3.3 70B showed noticeable performance drops, suggesting that models rely on positional biases, attending disproportionately to the initial or final tokens of a narrative.
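As a rough illustration of the tautology attack, the sketch below appends a content-free sentence to a story; the sample tautologies are invented for this example and are not the paper's actual strings.

```python
import random

# Invented tautologies: true by construction, adding no new information.
TAUTOLOGIES = [
    "What happened is what happened.",
    "Things are the way they are.",
    "A story is a story, after all.",
]

def append_tautology(story: str, rng=random) -> str:
    """Adversarial variant: attach a content-free sentence to the end of the
    fable, the position where the reported performance drops were largest."""
    return story.rstrip() + " " + rng.choice(TAUTOLOGIES)
```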
In addition to multiple-choice questions, the research also evaluated the quality of LLM-generated free-text morals. While models like GPT-4o, Claude 3.5, and Llama 3.3 70B produced morals that generally aligned with the fables, there was considerable variability. Automated semantic similarity scores, such as BERTScore, showed only a weak correlation with human judgments of moral alignment, underscoring the complexity and multi-faceted nature of moral inference.
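For context, BERTScore is available as the open-source `bert-score` package; the snippet below shows how such a similarity score is typically computed between a generated moral and a gold one. It is a generic illustration of the metric, not the paper's exact pipeline, and the example sentences are made up.

```python
from bert_score import score  # pip install bert-score

generated = ["Pride comes before a fall."]              # hypothetical model-written moral
reference = ["Arrogance leads to one's own downfall."]  # hypothetical gold moral

# Returns precision, recall, and F1 tensors; F1 is the usual headline number.
P, R, F1 = score(generated, reference, lang="en")
print(f"BERTScore F1: {F1.item():.3f}")
```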
The MORABLES benchmark highlights important challenges for future research in LLM development. It suggests that current LLMs, despite their scale, still lack robust abstract moral reasoning. Future work aims to expand the dataset with fables from diverse cultures in order to study cultural biases and differing ethical perspectives, and to use the holistic nature of moral lessons as a testbed for explainability research. Further details are available in the original research paper.


