TLDR: A new study reveals a significant ‘gap’ between human and LLM-generated tasks. Humans are driven by personal values and embodied experience, leading to diverse, social, and physical goals. LLMs, even when given psychological profiles, show a bias towards abstract, mentally demanding tasks, with less social and physical grounding. Paradoxically, LLM tasks were rated as more novel and fun, highlighting a form of ‘ungrounded creativity.’ The research suggests that truly human-aligned AI needs to incorporate intrinsic motivation and physical grounding.
Large Language Models (LLMs) have made incredible strides in simulating human-like behavior, from generating plans to engaging in social interactions within virtual worlds. However, a recent study titled Mind the Gap: The Divergence Between Human and LLM-Generated Tasks delves into a crucial question: do these models truly operate on the same cognitive principles as humans, or are they merely sophisticated mimics of surface patterns?
The research, conducted by Yi-Long Lu, Jiajun Song, Chunhui Zhang, and Wei Wang, highlights a significant divergence in how humans and LLMs generate tasks. Humans are constantly driven by internal motivations, such as personal values and cognitive styles, which shape their goals and actions. These motivations are deeply intertwined with our embodied experience – how we interact with the physical and social world.
To investigate this, the researchers designed a task-generation experiment comparing human responses with those of an LLM agent (GPT-4o). The experiment involved presenting both humans and the LLM with various environmental and social contexts, asking them to generate tasks using a list of common room items. Crucially, in one condition, the LLM was even provided with explicit psychological profiles of human participants, including their personal values and cognitive styles, to see if it could replicate human-like behavior.
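To make the setup concrete, here is a minimal sketch of how a profile-conditioned task-generation prompt might be assembled. The profile fields, context wording, and item list below are illustrative assumptions, not the authors' actual materials, and the resulting string would be sent to an LLM such as GPT-4o.

```python
# Hypothetical sketch of a profile-conditioned task-generation prompt.
# Field names, context text, and items are illustrative assumptions,
# not the study's actual experimental materials.

def build_task_prompt(profile: dict, context: str, items: list[str]) -> str:
    """Compose a task-generation prompt that embeds a psychological profile."""
    profile_lines = "\n".join(f"- {key}: {value}" for key, value in profile.items())
    item_list = ", ".join(items)
    return (
        "You are role-playing a person with the following psychological profile:\n"
        f"{profile_lines}\n\n"
        f"Context: {context}\n"
        f"Available items: {item_list}\n\n"
        "Propose one task this person would choose to do, in one sentence."
    )

# Example usage with invented profile values:
prompt = build_task_prompt(
    profile={"Openness to Change": "high", "Thinking style": "intuitive"},
    context="A living room; another person is present.",
    items=["sofa", "guitar", "broom", "notebook"],
)
print(prompt)
```

The point of the sketch is simply that the LLM condition differs from the baseline only by the injected profile block, which is what lets the study test whether explicit profiles are enough to reproduce human-like goal generation.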
The Human Signature: Value-Driven and Environmentally Sensitive
The study first established a baseline for human task generation. It found that human goal-setting is systematically influenced by stable personal values, particularly ‘Openness to Change,’ which predicted higher ratings for task novelty and fun. Furthermore, human creativity adapted to environmental complexity; individuals with a systematic thinking style thrived in simpler environments, while intuitive thinkers performed better in complex ones. This demonstrates that human task generation is deeply rooted in internal psychological drivers and flexible adaptation to surroundings.
The LLM Gap: Disembodied and Asocial Simulation
When comparing human outputs to the LLM’s, a clear gap emerged. The LLM, even when given psychological profiles, failed to exhibit core behavioral signatures of human goal generation. There were systematic differences in the content and structure of the tasks produced:
- Thematic Bias: Humans generated a balanced mix of activities, including physical and social tasks. In stark contrast, LLMs showed a strong bias towards abstract tasks, such as music and writing. ‘Mental & Artistic’ tasks accounted for a vast majority of the LLM’s output, while ‘Relaxation & Household Activities’ were almost entirely neglected.
- Limited Social Interaction: In scenarios where a ‘person’ was present, humans generated significantly more multiplayer tasks (58%) than the LLM (24-29%). This suggests a lower propensity for social engagement in LLM-generated goals.
- Disembodiment: LLM-generated tasks were perceived as significantly more mentally demanding and less physically demanding than human tasks. They required greater ‘brain engagement’ but less involvement of the arms and lower body, reflecting a lack of grounding in physical interaction and sensorimotor knowledge.
The Paradox of Ungrounded Creativity
Interestingly, despite these fundamental disconnects, the LLM’s tasks were rated as more novel and more fun than those created by humans. This highlights a paradox: LLMs excel at combinatorial creativity, drawing on their vast training data to produce imaginative textual descriptions. This ‘ungrounded’ creativity, unconstrained by physical feasibility or intrinsic human values, might be amplified by current alignment methods like Reinforcement Learning from Human Feedback (RLHF), which optimize for outputs that human raters find immediately engaging, potentially filtering out more mundane but essential human activities like resting or cleaning.
Implications for AI and Cognitive Science
The findings suggest that creating truly human-like autonomous agents requires more than just scaling up language models. Addressing the ‘embodiment gap’ and ‘motivation gap’ will likely necessitate integrating world models, intrinsic reward systems, or sensorimotor learning into AI design. For cognitive science, this task-based paradigm offers a valuable tool for studying goal-setting in naturalistic contexts and making direct comparisons between human cognition and machine outputs.
While the study acknowledges limitations, such as its text-based paradigm and the use of a single LLM architecture, it provides compelling evidence that current LLMs, despite their impressive linguistic proficiency, still lack the deep cognitive mechanisms that drive value-driven, embodied human goal generation. Bridging this gap is crucial for developing more human-aligned artificial intelligence.