TLDR: A new research paper argues that while AI has advanced quantitative science, qualitative research has been neglected, forcing researchers to use inadequate general-purpose AI tools. The authors propose developing dedicated “safe qualitative AI” systems that prioritize transparency, reproducibility, and privacy, and are designed from the ground up to support the interpretive, context-sensitive nature of qualitative inquiry, rather than replacing human judgment or amplifying existing biases.
Artificial intelligence (AI) is rapidly transforming scientific research, but a new paper highlights a significant oversight: qualitative research has largely been left behind. While AI has made substantial strides in quantitative methods, the nuanced, interpretive dimensions crucial for understanding meaning and human experience remain poorly integrated. Researchers in qualitative fields often find themselves relying on general-purpose AI tools like ChatGPT, despite acknowledging their inherent limitations in terms of bias, opacity, irreproducibility, and privacy concerns.
The paper, titled “Not Everything That Counts Can Be Counted: A Case for Safe Qualitative AI” by Stine L. Beltoft and Lukas Galke from the University of Southern Denmark, argues for a fundamental shift. They advocate for the development of dedicated qualitative AI systems, built from the ground up to support interpretive research. Such systems, they contend, must prioritize transparency, reproducibility, and privacy to truly serve the unique demands of qualitative inquiry.
The Unique Value of Qualitative Research
Qualitative research is essential for exploring meaning, context, and complexity – aspects that often cannot be captured by numbers alone. It delves into how people understand and construct their realities, focusing on narratives, contradictions, and ambiguities. This approach is vital for addressing critical contemporary issues like mental health, social change, identity, and political opinion, where nuance and plurality are paramount. Unlike quantitative research, which seeks objective truths and predictive patterns, qualitative inquiry aims to produce transferable insights that are situated, interpretive, and often provisional.
Current AI Tools and Their Shortcomings
While a few specialized AI tools are emerging for qualitative tasks, such as Interviewbot for semi-structured interviews, Cody for coding, The Ethnobot for contextual anecdotes, and JourneyBot for sense-making, most remain prototypes or add-ons. Even mainstream qualitative analysis suites like MAXQDA and NVivo are only beginning to integrate AI features. These existing tools often operate on English-language data, struggle with multilingual input, and are typically locked within proprietary ecosystems, limiting broader scrutiny and collaboration. More critically, general-purpose AI models like ChatGPT pose significant ethical challenges: their training data and internal routing are opaque, their outputs are hard to reproduce because the underlying models change over time and are non-deterministic, and their cloud-based processing of sensitive data raises privacy risks.
The Paradox of AI Use in Qualitative Research
Despite these well-documented concerns, many qualitative researchers are already incorporating generative AI into their workflows for tasks like interview summarization, preliminary coding, and literature synthesis. This creates a paradox: researchers adopt AI out of necessity or convenience, even as they question its legitimacy and dutifully acknowledge its limitations in their papers. The tension points to a critical infrastructure gap: researchers turn to inadequate tools because purpose-built alternatives do not exist.
Addressing Bias and Epistemic Loss
The paper also raises concerns about structural inequalities and epistemic loss. Large language models, trained on vast internet corpora, tend to amplify existing biases and dominant narratives, potentially erasing the perspectives of marginalized or underrepresented groups. When AI-generated outputs replace lived experiences, there’s a risk of creating hollow, flattened representations of reality that obscure power dynamics rather than exposing them. Furthermore, using AI to simulate participant responses, while seemingly efficient, fundamentally misunderstands qualitative research. Language models lack lived experience, emotional investment, or socio-cultural embeddedness, making their outputs simulations without genuine grounding.
Designing for a Better Future
To move forward, Beltoft and Galke propose a reorientation towards designing AI systems that support, scaffold, and extend interpretive processes, rather than replacing them. Key design principles include:
- Context-sensitivity: Understanding who is speaking, when, and in what setting.
- Temporal awareness: Recognizing how attitudes and conditions change over time.
- Human-in-the-loop: Ensuring human interpretation, ethical judgment, and domain expertise remain central.
- Seamless extension: AI should feel like a natural extension of the researcher’s own process, augmenting it rather than complicating it.
- Non-reductive reasoning: Embracing ambiguity, contradiction, and complexity as inherent features of data.
Crucially, these systems must be built with transparency, reproducibility, and privacy protection as core technical foundations. This means explainable AI, consistent results with the same parameters, and local processing of sensitive data, moving away from cloud-based, black-box commercial tools. Such an approach could foster interdisciplinary collaboration and mixed-methods integration, allowing researchers to seamlessly connect statistical patterns with narrative themes.
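The reproducibility requirement described above, identical results from identical parameters and data, can be made concrete with a small provenance log. The sketch below is purely illustrative (the paper prescribes principles, not an implementation, and the model name and helper are hypothetical): it records a pinned model version, the decoding parameters, and a hash of the input text, so a later run can be verified against the same settings without the sensitive transcript ever leaving the researcher's machine.

```python
import hashlib
import json

def provenance_record(model_id: str, params: dict, document: str) -> dict:
    """Build a reproducibility record for one AI-assisted analysis step.

    Stores the exact model version, the decoding parameters, and a hash of
    the input text, so a future run can confirm it used the same data and
    settings, without persisting the sensitive text itself.
    """
    doc_hash = hashlib.sha256(document.encode("utf-8")).hexdigest()
    record = {
        "model_id": model_id,                    # pinned local model version
        "params": dict(sorted(params.items())),  # canonical parameter order
        "input_sha256": doc_hash,                # data fingerprint, not the data
    }
    # Fingerprint of the whole record: same settings + same data => same run_id
    record["run_id"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()[:16]
    return record

# Parameter order does not matter: both calls yield the same run_id.
rec1 = provenance_record("local-llm-v1.2", {"temperature": 0.0, "seed": 42}, "interview transcript ...")
rec2 = provenance_record("local-llm-v1.2", {"seed": 42, "temperature": 0.0}, "interview transcript ...")
assert rec1["run_id"] == rec2["run_id"]
```

Logging hashes rather than raw text is one simple way such a system could combine the paper's reproducibility and privacy demands: the audit trail is shareable even when the interview data is not.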
The paper concludes that qualitative research is a vital pillar of scientific understanding, revealing meaning, complexity, and often-excluded voices. To ensure AI serves science in its fullest sense, it must be built not only for efficiency but also for empathy, ambiguity, and ethical depth. Supporting qualitative research through thoughtful AI design is not just a methodological improvement; it is a commitment to better, fairer, and more human-centered knowledge.


