TLDR: A study on AI-based decision support systems found that users often don’t read AI explanations in detail, especially when the AI agrees with their initial decision. Despite this, their accuracy improved with AI assistance. The research explored factors influencing engagement time and mind-changing behavior, noting that more complex explanations (text, bar charts) led to longer engagement and higher likelihood of switching decisions. Prior AI experience made users more skeptical, while those without experience were more easily swayed, sometimes leading to overtrust. The findings suggest a need to rethink how AI explanations are designed and evaluated to truly aid human-AI collaboration.
In the evolving landscape of artificial intelligence, AI-based decision support systems (DSS) are increasingly common, designed to assist humans in making informed choices. A core belief behind these systems is that providing explanations for AI suggestions can empower users to discern when to trust the AI and when to question it, thereby preventing errors and biased decisions. However, a recent study challenges this fundamental assumption, revealing that users often engage with AI explanations far less deeply than anticipated.
Researchers Laura Spillner, Rachel Ringe, Robert Porzel, and Rainer Malaka from the University of Bremen embarked on a quest to understand how users interact with AI explanations in DSS. Their initial hypothesis was that participants in their studies would meticulously read and consider all AI explanations. To their surprise, the data collected from an online study indicated a different reality: participants frequently spent minimal time on explanations and did not always scrutinize them in detail.
The study involved participants making binary decisions, such as predicting the academic outcome of university students (whether they would complete their studies or drop out). The researchers employed a “two-step workflow” where participants first made an initial decision based on provided data, then saw the AI’s suggestion along with an explanation, and finally submitted their definitive choice. Various explanation formats were tested: no explanation, highlighting of relevant data, bar charts illustrating feature importance, and full text descriptions of feature importance.
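To make the setup concrete, here is a minimal sketch of that two-step workflow in Python. The paper itself does not publish code, so the class and function names below (Trial, two_step_trial, participant.decide, ai_model.predict) are hypothetical; the sketch only illustrates the order of events and the four explanation conditions described above.

```python
from dataclasses import dataclass
from enum import Enum

class Explanation(Enum):
    NONE = "no explanation"
    HIGHLIGHT = "highlighting of relevant data"
    BAR_CHART = "bar chart of feature importance"
    TEXT = "full-text description of feature importance"

@dataclass
class Trial:
    initial_decision: str      # participant's first call ("complete" / "drop out")
    ai_suggestion: str         # the AI's prediction for the same student
    explanation: Explanation   # which explanation format was shown
    final_decision: str        # participant's answer after seeing the AI

def two_step_trial(student_data, ai_model, participant, explanation: Explanation) -> Trial:
    """One trial of the two-step workflow (hypothetical interfaces)."""
    # Step 1: participant decides from the provided data alone.
    initial = participant.decide(student_data)
    # Step 2: the AI's suggestion is shown with an explanation in one of four formats.
    suggestion = ai_model.predict(student_data)
    participant.show(suggestion, explanation)
    # Step 3: participant submits the definitive choice.
    final = participant.decide(student_data, seen_ai=True)
    return Trial(initial, suggestion, explanation, final)
```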
A particularly striking finding emerged from an attention test embedded within the explanations. A significant majority of participants (87%) failed this test, suggesting they did not read the explanations thoroughly. Yet, despite this lack of detailed engagement, participants performed remarkably well on the tasks, improving their accuracy when collaborating with the AI. This led the researchers to hypothesize that users might only engage deeply with explanations when the AI’s suggestion contradicts their initial intuition, seeking new information in such cases.
The exploratory analysis delved into factors influencing how much time participants spent on AI explanations and whether they changed their minds. It was found that participants spent considerably less time on explanations when the AI’s suggestion aligned with their initial decision. The type of explanation also played a role: text and bar chart explanations, being more complex, naturally led to longer deliberation times compared to simple highlighting or no explanation at all.
Regarding the likelihood of changing one’s mind, the study revealed that text and bar chart explanations were more effective in convincing participants to switch their initial decision, especially when the AI disagreed with them. Interestingly, participants who had prior experience with AI tended to be more skeptical and less likely to be swayed by explanations, particularly text-based ones. Conversely, those without prior AI experience were more easily convinced, sometimes leading to “overtrust” – trusting the AI even when it was incorrect.
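A rough way to picture this exploratory analysis is as a grouped summary over per-trial logs: mean time spent on the explanation, split by whether the AI agreed with the initial decision and by explanation format, plus the switch rate on disagreement trials. The sketch below uses pandas with made-up column names and toy values; it mirrors the kind of breakdown described, not the paper's actual data or statistics.

```python
import pandas as pd

# Hypothetical per-trial log; column names and values are illustrative, not from the paper.
trials = pd.DataFrame({
    "explanation": ["text", "bar_chart", "highlight", "none", "text", "bar_chart"],
    "ai_agrees":   [True,   False,       True,        False,  False,  True],
    "time_on_explanation_s": [4.1, 11.8, 2.9, 1.2, 14.5, 3.6],
    "changed_mind": [False, True, False, False, True, False],
})

# Engagement time by agreement and explanation format.
time_summary = trials.groupby(["ai_agrees", "explanation"])["time_on_explanation_s"].mean()

# Mind-changing is only meaningful when the AI disagreed with the initial decision.
switch_rate = (
    trials[~trials["ai_agrees"]]
    .groupby("explanation")["changed_mind"]
    .mean()
)

print(time_summary)
print(switch_rate)
```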
While the AI collaboration generally improved overall accuracy, the explanations themselves did not consistently lead to ideal outcomes in terms of fostering warranted trust (trusting the AI when it’s right) while preventing overtrust (trusting the AI when it’s wrong). For participants without prior AI experience, text explanations were highly persuasive, leading to both warranted trust and overtrust at similar rates. This suggests they struggled to differentiate between correct and incorrect AI suggestions based on the explanation provided.
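In the article's vocabulary, these outcomes can be told apart mechanically once the ground truth is known: following the AI splits into warranted trust or overtrust depending on whether the AI was right. The helper below is an illustrative labelling of a single trial under that reading; the category names follow the article's wording, not necessarily the paper's formal metrics.

```python
def trust_outcome(final: str, ai_suggestion: str, ground_truth: str) -> str:
    """Label one trial's outcome (illustrative reading of the article's terms)."""
    followed_ai = (final == ai_suggestion)
    ai_correct = (ai_suggestion == ground_truth)
    if followed_ai and ai_correct:
        return "warranted trust"       # trusted the AI when it was right
    if followed_ai and not ai_correct:
        return "overtrust"             # trusted the AI when it was wrong
    if not followed_ai and not ai_correct:
        return "warranted distrust"    # rejected a wrong suggestion
    return "unwarranted distrust"      # rejected a correct suggestion
```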
The findings from this research, detailed in the paper “Can AI Explanations Make You Change Your Mind?”, raise crucial questions for the future of explainable AI. They highlight the need to reconsider how user engagement with explanations is measured and how DSS are designed. Future research may explore alternative methods of presenting AI insights, such as allowing users to choose explanations or designing collaborative decision workflows that don’t force an initial choice, to better align with actual user behavior and optimize human-AI collaboration.


