TL;DR: The paper redefines AI as any techno-social relationship in which technology appears to perform human cognitive labor. It categorizes these relationships into harmful displacement, beneficial enhancement, and neutral replacement. The author argues that all AI inherently involves human cognition, often hidden, and calls for centering human understanding and rejecting AI hype to foster a more critical and responsible approach to AI development.
The concept of ‘Human-Centred AI’ (HCAI) is widely discussed, aiming to put human experience and behavior at the core of artificial intelligence development. However, a recent research paper by Olivia Guest, titled What Does ‘Human-Centred AI’ Mean?, challenges conventional understandings, arguing that AI is inherently human-centered, not because it mimics human behavior, but because it fundamentally involves human cognition in its operation.
The paper proposes a fresh perspective: AI should be understood as a relationship between technology and humans, where machines or algorithms appear to take over some aspect of human cognitive work. This isn’t just about advanced systems; it extends to simpler tools that offload mental tasks, like an abacus or a calculator.
Understanding AI Relationships: Displacement, Enhancement, and Replacement
To clarify this human-technology relationship, the paper introduces a framework categorizing AI interactions into three types:
- Displacement: This is a harmful relationship where AI replaces human cognitive labor, often leading to a loss of skills (deskilling) and obscuring the human effort still involved. An example is how digital computers displaced human “computers” (people who performed calculations), or how modern large language models might displace essay writing or artistic creation. The paper highlights that much of the “intelligence” in these systems often relies on hidden human labor, such as data labeling by low-paid workers.
- Enhancement: This is a beneficial relationship where AI supports and improves human skills, leading to new capabilities (reskilling). The abacus, for instance, doesn’t replace mental arithmetic but enhances it, helping users develop stronger numerical skills. Similarly, a camera enhances human vision by allowing us to capture and revisit moments.
- Replacement: This type of relationship is considered neutral. Here, AI takes over a task that humans previously performed, but without significantly harming or benefiting existing human skills. A calculator, for an adult who already knows arithmetic, simply provides a faster, more reliable way to perform calculations without necessarily improving or degrading their core math skills. An alarm clock replacing a “knocker-upper” (a person hired to wake others) is another example.
The Hidden Human in the Machine
A central argument is that all AI, regardless of its complexity, involves human cognition. This can be through the user’s interaction, the “human-in-the-loop” (people who monitor or refine AI outputs), or even as an inspiration for the AI’s design. The paper critiques the “technological veil” that often hides the human cognitive labor embedded within AI systems, leading to a fetishization of the technology itself.
The author argues against the common narrative of AI as a unique, rapidly evolving phenomenon detached from history. Instead, by recognizing AI’s long history—from ancient mechanisms like the Antikythera mechanism to modern neural networks—we can better understand its true nature and avoid being swayed by hype and exaggerated claims. The paper emphasizes that simply increasing the complexity or scale of an AI system does not fundamentally change its nature; it remains an engineered system following human instructions.
Reclaiming Human-Centred AI
To truly center humans in AI, the paper suggests several crucial shifts:
- Recognize and address displacement AI relationships that cause harm.
- Understand that “human-aligned values” in AI systems are not automatic but must be actively upheld through ongoing effort.
- Reject the idea that AI’s performance on benchmarks indicates human-like intelligence, as correlations do not equate to understanding or true cognition.
- Be wary of treating AI systems as if they have human-like psychology, as this can distort scientific understanding.
Ultimately, the paper calls for a de-fetishization of AI, urging us to look beyond the machine and acknowledge the indispensable human element—the “ghost in the machine”—that makes AI possible and gives it meaning. By doing so, we can foster a more critical and responsible engagement with artificial intelligence, ensuring it serves humanity rather than displacing or distorting it.