Abstract
Several strands of research have aimed to bridge the gap between artificial intelligence (AI) and human decision-makers in AI-assisted decision-making, where humans are the consumers of AI model predictions and the ultimate decision-makers in high-stakes applications. However, people's perception and understanding are often distorted by cognitive biases, such as confirmation bias, anchoring bias, and availability bias, to name a few. In this work, we use knowledge from the field of cognitive science to account for cognitive biases in the human-AI collaborative decision-making setting and mitigate their negative effects on collaborative performance. To this end, we mathematically model cognitive biases and provide a general framework through which researchers and practitioners can understand the interplay between cognitive biases and human-AI accuracy. We then focus specifically on anchoring bias, a bias commonly encountered in human-AI collaboration. We implement a time-based de-anchoring strategy and conduct our first user experiment, which validates its effectiveness in human-AI collaborative decision-making. Building on this result, we design a time allocation strategy for a resource-constrained setting that achieves optimal human-AI collaboration under some assumptions. We then conduct a second user experiment, which shows that our time allocation strategy with explanations can effectively de-anchor the human and improve collaborative performance when the AI model has low confidence and is incorrect.
Because they work on AI-assisted decision-making rather than deliberation itself, they do not provide a definition of deliberation.
They observe that when the AI provides a decision (thereby creating an “anchor”), it induces anchoring bias in the human, who then limits exploration of alternative hypotheses.
According to Tversky and Kahneman, anchoring bias manifests through the anchoring-and-adjustment heuristic: when asked a question and presented with any anchor, people adjust insufficiently away from the anchor.
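One way to picture the heuristic is as a judgment that moves only part of the way from the anchor toward the person's own independent estimate. The snippet below is a minimal sketch of that idea with a hypothetical `adjustment` fraction; it is only illustrative and is not the paper's mathematical model of anchoring bias.

```python
def anchored_judgment(anchor, independent_estimate, adjustment=0.4):
    """Toy anchoring-and-adjustment model (illustrative, not the paper's model).

    The judgment starts at the anchor and moves a fraction `adjustment` of the
    way toward the person's independent estimate; because adjustment < 1, the
    final answer remains biased toward the anchor.
    """
    return anchor + adjustment * (independent_estimate - anchor)

# The AI's prediction (0.9) anchors a human whose own estimate would be 0.3.
print(anchored_judgment(anchor=0.9, independent_estimate=0.3))  # 0.66, pulled toward 0.9
```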
They find that time is a useful resource for adjusting sufficiently away from the anchor. However, granting unlimited time ignores that time itself is a limited resource, so they formulate a time allocation problem that factors in the effects of anchoring bias. In the second user study, which tests the resulting allocation policy, they conclude that it did indeed help participants de-anchor. The policy essentially varies the amount of time allotted per instance according to the AI's confidence (see the sketch below).
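The following is a minimal sketch of confidence-based time allocation under a fixed budget: each instance gets a minimum amount of time, and the remainder is split in proportion to the AI's uncertainty, so low-confidence predictions receive more deliberation time for de-anchoring. The proportional rule and the parameter names are assumptions for illustration; the paper's actual policy comes out of its own optimization problem.

```python
def allocate_time(confidences, total_budget, min_time=1.0):
    """Illustrative confidence-based time allocation (a sketch, not the paper's policy).

    Every instance receives at least `min_time`; the remaining budget is divided
    in proportion to the AI's uncertainty (1 - confidence).
    """
    n = len(confidences)
    remaining = total_budget - n * min_time
    if remaining < 0:
        raise ValueError("Budget too small to give every instance the minimum time.")
    uncertainties = [1.0 - c for c in confidences]
    total_uncertainty = sum(uncertainties) or 1.0  # avoid division by zero
    return [min_time + remaining * u / total_uncertainty for u in uncertainties]

# Example: three AI predictions with varying confidence and a 30-second budget;
# the least confident prediction (0.40) gets the most deliberation time.
print(allocate_time([0.95, 0.60, 0.40], total_budget=30.0))
```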