The authors work on AI-assisted decision making; they do not provide a formal definition of deliberation.

They observe that when the AI presents its decision first (thereby creating an "anchor"), it induces anchoring bias: users limit their exploration of alternative hypotheses.

According to Tversky and Kahneman, anchoring bias manifests through the anchoring-and-adjustment heuristic: when asked a question and presented with any anchor, people adjust away from the anchor insufficiently.

They find that time is a useful resource for adjusting sufficiently away from the anchor. However, granting unlimited time ignores the limited availability of time in practice, so they formulate a time-allocation problem that factors in the effects of anchoring bias. The resulting policy essentially varies the amount of time allotted to each decision according to the AI's confidence. In a second user study testing this allocation policy, they conclude that it did indeed help participants de-anchor.
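The core idea of varying time with AI confidence could be sketched roughly as follows. This is a hypothetical illustration, not the paper's actual policy (which is derived from an optimization problem); the inverse-confidence weighting and the function name `allocate_time` are assumptions made here for clarity.

```python
def allocate_time(confidences, total_budget):
    """Split a fixed time budget across decision trials, giving more
    deliberation time to trials where the AI's confidence is low.

    Hypothetical sketch: weights each trial by (1 - confidence), so
    low-confidence trials (where anchoring on the AI is riskier)
    receive a larger share of the budget.
    """
    weights = [1.0 - c for c in confidences]
    total = sum(weights)
    if total == 0:
        # All confidences are 1.0: split the budget evenly.
        return [total_budget / len(confidences)] * len(confidences)
    return [total_budget * w / total for w in weights]

# Three trials with AI confidences 0.9, 0.6, 0.3 and a 60-second budget:
allocs = allocate_time([0.9, 0.6, 0.3], total_budget=60)
```

The least-confident trial receives the most time, consistent with the intuition that participants need more room to adjust away from an anchor the AI is more likely to have set incorrectly.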