TLDR: A new study evaluates state-of-the-art LLMs’ ability to imitate individual writing styles from a few examples. Measuring authorship attribution, authorship verification, style matching, and AI detection across diverse domains (news, email, forums, blogs) and 400+ authors, the study found that LLMs struggle with nuanced, informal styles, often producing generic, detectable AI content. Increasing the number of examples offered limited gains, highlighting a significant gap in personalized LLM adaptation.
Large language models (LLMs) are becoming increasingly common in our daily writing, from drafting emails to rephrasing sentences. However, a recent study delves into a critical question: can these powerful AI tools truly capture and imitate an individual’s unique writing style from just a few examples?
Personal writing style is often described as a “linguistic fingerprint”—a subtle, subconscious reflection of how someone expresses themselves. Unlike explicit instructions or configurable settings like “tone” or “formality,” this implicit style is nuanced and hard to define. While LLMs have shown impressive capabilities in mimicking the styles of public figures or fictional characters with vast amounts of data, replicating the unique voice of an everyday author with only a handful of casual interactions has remained largely unexplored.
Evaluating the AI’s Imitation Skills
Researchers at Stony Brook University, The Pennsylvania State University, and Bosch Center for AI conducted a comprehensive evaluation to assess this challenge. Their study focused on how well state-of-the-art LLMs could mimic personal writing styles using “in-context learning”—meaning the models were given a few examples of an author’s previous work and a summary of new content to generate, without any explicit instructions on style.
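The in-context setup described above can be pictured as a simple few-shot prompt: the model sees only an author’s prior writing samples and a summary of the new content, with no explicit style instructions. The following sketch is purely illustrative; the function name and prompt wording are assumptions, not the researchers’ actual prompt.

```python
# Illustrative sketch of an in-context style-imitation prompt.
# The prompt contains only (1) prior writing samples by the author and
# (2) a content summary -- no explicit instructions about style.
# build_style_prompt is a hypothetical helper, not from the paper.

def build_style_prompt(examples: list[str], summary: str) -> str:
    """Assemble a few-shot prompt from an author's past texts plus a
    summary of the new content to generate."""
    shots = "\n\n".join(
        f"Example {i + 1}:\n{text}" for i, text in enumerate(examples)
    )
    return (
        "Below are writing samples by the same author.\n\n"
        f"{shots}\n\n"
        "Write a new text by this author covering the following content:\n"
        f"{summary}"
    )

prompt = build_style_prompt(
    [
        "Honestly, the commute today was a nightmare...",
        "Quick update: the garden survived the frost!",
    ],
    "A short blog post about trying a new coffee shop.",
)
```

The key point the study tests is exactly this minimalism: any stylistic signal must be inferred implicitly from the examples rather than from instructions.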
To robustly measure style imitation, the team introduced an ensemble of four complementary metrics:
- Authorship Attribution (AA): Could a model correctly identify the original author of an AI-generated text?
- Authorship Verification (AV): Could a verification model determine whether an AI-generated text and a human-written text came from the same author?
- Style Matching: How similar were the stylistic features of AI-generated text to the author’s overall style model?
- AI Detection: Did specialized detectors classify the AI-generated text as human-written or as machine-generated?
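To make the Style Matching idea concrete, one common approach is to compare texts via surface stylometric features. The toy sketch below uses three simple features and cosine similarity; the paper’s actual feature set and similarity measure are not specified here, so everything in this snippet is an illustrative assumption.

```python
import math
import string

def style_features(text: str) -> list[float]:
    """Toy stylometric profile: average word length, type-token ratio,
    and punctuation rate. Real style-matching systems use far richer
    feature sets; this only illustrates the idea."""
    words = text.split()
    avg_len = sum(len(w) for w in words) / max(len(words), 1)
    ttr = len({w.lower() for w in words}) / max(len(words), 1)
    punct = sum(c in string.punctuation for c in text) / max(len(text), 1)
    return [avg_len, ttr, punct]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

similarity = cosine(
    style_features("Well, that was... unexpected! Great day, though."),
    style_features("Honestly? Best trip ever. So much fun, truly."),
)
```

A higher similarity between an author’s human-written profile and the AI-generated text would indicate closer stylistic alignment under this kind of metric.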
The evaluation was extensive, involving over 40,000 generations per model across diverse writing domains like news articles, emails, online forums, and personal blogs. The dataset included writing samples from more than 400 real-world authors, providing a rich and varied testbed.
Key Findings: A Mixed Bag for Personalization
The results revealed a significant gap in LLMs’ ability to truly personalize writing. While LLMs showed some success in approximating user styles in more structured formats, such as news articles and emails, they consistently struggled with the nuanced, informal writing found in blogs and online forums. In these less formal domains, the generated outputs often defaulted to a generic, average tone and were still easily detectable as AI-written content.
Interestingly, the study found that simply increasing the number of writing examples provided to the LLM offered only limited improvements in stylistic alignment. This suggests that the problem isn’t just about the quantity of data, but rather the inherent difficulty for current models to grasp and reproduce the subtle, implicit patterns that define an individual’s unique voice.
The researchers also performed follow-up studies, exploring different prompting strategies. They looked at selecting examples based on content similarity or length alignment, and even including a small snippet of the original text. While some strategies offered modest gains in certain metrics, no single approach consistently excelled across all evaluation criteria, highlighting the complexity of effective personalization.
The Road Ahead for Personalized AI Writing
This research underscores a fundamental limitation in current LLM adaptation for personalized writing. Despite their impressive fluency, LLMs still struggle to move beyond a generic style to truly imitate the implicit linguistic fingerprints of everyday authors. The findings highlight a clear need for improved techniques that can support more nuanced, style-consistent generation, especially for informal and diverse writing contexts.
For those interested in the full details of this comprehensive study, the research paper is available here: Catch Me If You Can? Not Yet: LLMs Still Struggle to Imitate the Implicit Writing Styles of Everyday Authors.