This paper is not primarily about decision-making itself; rather, it uses decision-making tasks as a testbed for evaluating how helpful XAI techniques are at explaining model outputs.

From prior literature, they identify three properties that ideal AI explanations should have:

  1. Improve people’s understanding of the AI model
  2. Help people recognize the model’s uncertainty
  3. Support people’s calibrated trust in the model

They then evaluate four types of XAI methods and test whether each satisfies these three properties on two decision-making tasks.

They find:

  1. The effects of AI explanations differ substantially across decision-making tasks in which people have different levels of domain expertise
  2. On tasks where people have little domain expertise, many AI explanations satisfy none of the desirable properties
  3. On the task where people have high domain expertise, feature contribution explanations satisfied more of the desiderata, while counterfactual explanations did not improve calibrated trust
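
For concreteness, a feature contribution explanation assigns each input feature a signed score indicating how much it pushed the model’s output up or down for a given instance, whereas a counterfactual explanation instead reports a minimally changed input that would flip the prediction. The sketch below is only an assumed illustration of the feature-contribution idea using a linear model (where coefficient × feature value is a natural contribution score); the dataset, model, and explanation method here are stand-ins, not the ones used in the paper.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Assumed setup: any tabular classification task stands in for the
# paper's decision-making tasks; dataset and model are illustrative only.
data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
y = data.target

model = LogisticRegression(max_iter=1000).fit(X, y)

def feature_contributions(model, x):
    """Per-feature contributions to the logit for one instance.

    For a linear model, logit(x) = intercept + sum_i coef_i * x_i,
    so coef_i * x_i serves as a simple feature contribution score.
    """
    return model.coef_[0] * x

# Explain the prediction for a single instance: show the five
# features with the largest absolute contribution.
x = X[0]
contribs = feature_contributions(model, x)
top = np.argsort(-np.abs(contribs))[:5]
for i in top:
    print(f"{data.feature_names[i]:>25s}: {contribs[i]:+.3f}")
```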