Unpacking How AI Models Reason About Obligations and Permissions

TLDR: A new research paper evaluates the ability of Large Language Models (LLMs) to perform normative reasoning, which involves concepts like obligation and permission. The study compares LLMs’ performance on normative and epistemic (knowledge-based) reasoning tasks, using a new dataset that covers formal logical patterns as well as non-formal cognitive factors. Key findings indicate that LLMs show inconsistencies in basic normative inferences, exhibit human-like cognitive biases (content effects), and struggle particularly with reasoning involving negation. The research highlights challenges in achieving logical consistency in LLMs’ normative reasoning and offers insights for improving their reliability in ethical and social contexts.

Large Language Models (LLMs) have shown impressive capabilities in various reasoning tasks, but their understanding of ‘normative reasoning’ – the kind that involves concepts like obligation and permission – has been less explored. A recent study by researchers from Keio University and the University of Tokyo delves into this crucial area, evaluating how well LLMs handle these complex moral and social inferences.

Normative reasoning is vital for LLMs, especially as they are deployed in roles requiring adherence to social, ethical, and legal principles. This research highlights that while previous studies often focused on the cultural and social aspects influencing LLM behavior, the logical and formal side of normative reasoning in these models remained largely unexamined.

Evaluating LLMs on Normative Logic

The researchers systematically assessed LLMs’ normative reasoning by comparing it with ‘epistemic reasoning,’ which deals with knowledge and beliefs. Both types of reasoning share similar formal structures, making them ideal for a comparative benchmark. To achieve this, a new dataset was introduced, covering a wide array of formal reasoning patterns in both normative and epistemic domains. This dataset also incorporated ‘non-formal cognitive factors’ – elements that influence human reasoning, such as content effects.

The evaluation focused on two main types of normative reasoning:

  • Deontic Logic Reasoning: This involves single-premise logical inferences, testing basic understanding of modal concepts like obligation (e.g., “It is obligatory that A”) and permission (e.g., “It is permissible that A”). It also examined how LLMs handle well-known paradoxes in deontic logic, such as Ross’s paradox and the Free Choice paradox, which often reveal a tension between strict logical validity and human intuition (the core schemas are sketched just after this list).
  • Syllogistic Reasoning: This involves multi-premise logical inferences, incorporating normative rules and generalizations. These syllogisms can be categorical (universally quantified statements) or hypothetical (if-then conditional statements). The study also looked at how the presence or absence of negation impacts the difficulty of these inferences.
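
For readers who want the formal picture, here is a minimal sketch of the standard deontic logic (SDL) schemas behind the paradoxes named above. The notation (O for obligation, P for permission) is the standard one; the paper’s exact formalization may differ.

```latex
% Standard deontic logic (SDL) schemas referenced in the article.
% O = "it is obligatory that", P = "it is permissible that".
\begin{align*}
  &\text{Obligation implies permission:} && O(A) \rightarrow P(A)\\
  &\text{Ross's paradox (valid in SDL, counterintuitive):} && O(A) \vdash O(A \lor B)\\
  &\text{Free Choice (intuitive, but invalid in SDL):} && P(A \lor B) \not\vdash P(A) \land P(B)
\end{align*}
```

Ross’s paradox is the observation that “You must post the letter” logically entails “You must post the letter or burn it” in SDL, even though humans reject the inference; Free Choice is the mirror case, where the intuitive reading outruns what the logic licenses.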

Key Findings and Human-like Biases

The study revealed several significant insights into LLMs’ normative reasoning:

  • Inconsistencies in Basic Reasoning: Even top-performing models showed inconsistencies in fundamental normative inferences. For instance, they struggled with the inference from obligation to permission (e.g., from “You must take care of your health” to “You can choose to take care of your health”), often interpreting “can choose to” as an option rather than a statement of permission. This suggests a challenge in achieving logical consistency.
  • Cognitive Biases: LLMs exhibited cognitive biases similar to those observed in humans. They were influenced by ‘content effects,’ meaning their reasoning was affected by whether the premise and conclusion aligned with common sense, contradicted it, or were nonsensical. Models generally performed better on problems with ‘congruent’ (common-sense) content (a sketch of how such items can be varied follows this list).
  • Negation as a Challenge: Reasoning patterns involving negation proved particularly difficult for the models, corroborating findings from previous studies on LLMs’ struggles with negation.
  • Domain Specificity Varies: While cognitive science often suggests that normative reasoning is easier for humans than epistemic reasoning, LLMs did not consistently follow this pattern. Their relative performance in the two domains varied depending on the specific task.
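
To make the content-effect finding concrete, here is a minimal, hypothetical sketch of how a single inference pattern might be paired with congruent, incongruent, and nonsense content. The templates and sentences below are illustrative only, not drawn from the paper’s dataset.

```python
# Illustrative sketch (not the paper's actual dataset): one deontic
# inference pattern crossed with three content conditions, used to
# probe whether accuracy depends on content rather than logical form.

PATTERN = ("Premise: It is obligatory that {A}.\n"
           "Conclusion: It is permissible that {A}.\n"
           "Does the conclusion follow? Answer Valid or Invalid.")

variants = {
    "congruent":   "drivers stop at red lights",   # matches common sense
    "incongruent": "drivers ignore red lights",    # contradicts common sense
    "nonsense":    "blickets frumble the dax",     # no real-world content
}

for label, content in variants.items():
    item = PATTERN.format(A=content)
    print(f"[{label}]\n{item}\n")
    # A logically consistent reasoner should answer "Valid" in all three
    # conditions; a content effect shows up as accuracy varying by label.
```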

Prompting Strategies and Future Directions

The researchers also experimented with different prompting strategies: Zero-Shot (no examples), Few-Shot (with examples), and Chain-of-Thought (CoT, step-by-step reasoning). Few-Shot prompting generally improved performance, likely due to models leveraging syntactic similarities. However, Chain-of-Thought prompting often yielded minimal improvement or even negative effects, sometimes introducing errors in intermediate reasoning steps.
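
As a rough illustration of how these three regimes differ at the prompt level, here is a minimal sketch. The wording and example items are hypothetical, not the paper’s actual prompts; the “Let’s think step by step” cue is a common CoT convention rather than a quote from the study.

```python
# Illustrative sketch of zero-shot, few-shot, and chain-of-thought
# prompting for a single normative-inference item.

QUESTION = (
    "Premise: It is obligatory that the report is filed.\n"
    "Conclusion: It is permissible that the report is filed.\n"
    "Does the conclusion follow from the premise? Answer Valid or Invalid."
)

# Zero-shot: the bare question, no demonstrations.
zero_shot = QUESTION

# Few-shot: one worked example precedes the test item.
few_shot = (
    "Premise: It is obligatory that taxes are paid.\n"
    "Conclusion: It is permissible that taxes are paid.\n"
    "Answer: Valid\n\n" + QUESTION
)

# Chain-of-thought: the question plus a cue to reason step by step.
chain_of_thought = QUESTION + "\nLet's think step by step."

for name, prompt in [("zero-shot", zero_shot),
                     ("few-shot", few_shot),
                     ("chain-of-thought", chain_of_thought)]:
    print(f"--- {name} ---\n{prompt}\n")
```

Note how the few-shot exemplar shares its surface syntax with the test item; this is consistent with the paper’s suggestion that few-shot gains come partly from models exploiting syntactic similarity rather than deeper normative understanding.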

These findings underscore the challenges in ensuring logical consistency in LLMs’ normative reasoning and offer valuable insights for enhancing their reliability. The research emphasizes the need for further improvements to make LLMs more robust in handling the complexities of ethical and social decision-making. For more details, you can read the full paper here.

Rhea Bhattacharya (https://blogs.edgentiq.com)
Rhea Bhattacharya is an AI correspondent with a keen eye for cultural, social, and ethical trends in Generative AI. With a background in sociology and digital ethics, she delivers high-context stories that explore the intersection of AI with everyday life, governance, and global equity. Her news coverage is analytical, human-centric, and always ahead of the curve. You can reach her at: [email protected]
