TLDR: A new study introduces the “Group Query Attack” (GQA), where multiple questions are given to Large Language Models (LLMs) simultaneously. The research found that GQA significantly degrades the performance of fine-tuned LLMs, can trigger potential backdoors, and is particularly effective in reducing accuracy for reasoning tasks like mathematical problems and code generation in both pre-trained and aligned models. This highlights critical vulnerabilities in LLM robustness and security.
Large Language Models (LLMs) have become an integral part of our daily lives, powering everything from AI chatbots to code generation tools. As these models become more widespread, understanding their limitations and potential failure modes is crucial. A recent research paper, An Investigation on Group Query Hallucination Attacks, delves into a specific type of interaction that can significantly impact LLM performance and even trigger hidden risks.
The paper, authored by Kehao Miao from the University of Science and Technology of China and Xiaolong Jin from Purdue University, introduces a novel concept called the “Group Query Attack” (GQA). This attack simulates a common real-world scenario where users pose multiple questions or requests within a single conversation or input to an LLM. Instead of asking one question at a time, an impatient user might, for instance, provide several multiple-choice questions or a series of coding problems simultaneously.
How Group Query Attack Works
The core idea behind GQA is to observe how the accumulated context from these consecutive prompts influences the LLM’s outputs. The researchers define the “Query Group Size” (QGS) as the number of queries packed into a single input, and they evaluate only the response to the first query within each group so that results remain reliable and comparable across group sizes.
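To make the setup concrete, here is a minimal Python sketch of how several independent questions might be concatenated into one grouped prompt. The example query texts, the separator, and the `build_group_query` helper are illustrative assumptions, not the paper’s exact prompt format.

```python
# Minimal sketch of constructing a group query.
# The example questions, separator, and helper name are assumptions for
# illustration, not the exact prompt format used in the paper.
queries = [
    "Q1: What is 17 * 23? Answer with the number only.",
    "Q2: Which planet is the largest? (A) Mars (B) Jupiter (C) Venus (D) Earth",
    "Q3: Write a Python function that reverses a string.",
]

def build_group_query(queries: list[str]) -> str:
    """Join several independent queries into a single input.
    The Query Group Size (QGS) is simply len(queries)."""
    return "\n\n".join(queries)

prompt = build_group_query(queries[:2])  # QGS = 2
print(prompt)
# The study scores only the model's answer to the first query in the group,
# keeping results comparable across different group sizes.
```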
Key Findings from the Research
The study yielded several significant insights into how LLMs behave under GQA:
- Degradation in Fine-Tuned Models: For LLMs fine-tuned on specific tasks, such as multiple-choice questions, the Group Query Attack caused a significant drop in accuracy. When the Query Group Size increased from one to two, many fine-tuned models degraded substantially, often defaulting to the same incorrect answer option.
- Triggering Potential Backdoors: A more concerning finding is GQA’s ability to trigger potential backdoors in LLMs. When models were fine-tuned on datasets intentionally embedded with backdoors, their performance on single queries remained normal; however, when presented with a group of two queries (QGS = 2), these models consistently output a predetermined malicious response, often option ‘A’. This highlights a serious security risk.
- Impact on Reasoning Tasks: GQA proved particularly effective at degrading performance on tasks requiring complex reasoning, such as mathematical reasoning and code generation, in both pre-trained and aligned (instruction-tuned) models. As the number of grouped queries increased, performance on these tasks noticeably declined; the researchers attribute this to a cumulative degradation effect as the generated text progresses.
- Limited Impact on Other Tasks: Interestingly, GQA had a limited impact on multiple-choice questions (for models not specifically fine-tuned on them) and on translation tasks, suggesting that the nature of the task influences how susceptible an LLM is to grouped queries.
Why This Research Matters
This investigation underscores the critical need for LLMs to maintain robustness and stability when faced with varying inputs, especially in common user interaction scenarios. The findings provide valuable insights for improving the “prompt invariance” of LLMs – their ability to produce consistent and meaningful outputs regardless of how semantically equivalent prompts are phrased. Understanding these failure modes is essential for the secure and reliable deployment of LLMs in real-world applications.
The researchers hope their work will contribute to making LLMs more resilient and trustworthy, particularly as they become more integrated into our daily lives.