This paper is not primarily about decision making itself; rather, it uses decision-making tasks as a testbed for evaluating how helpful XAI techniques are in explaining model outputs.
From prior literature, the authors identify three properties of an ideal AI explanation:
- Improve people’s understanding of the AI model
- Help people recognize the model's uncertainty
- Support people’s calibrated trust in the model
They then evaluate four types of XAI methods on two decision-making tasks, testing whether each method satisfies the three properties.
They find:
- The effects of AI explanations differ substantially depending on how much domain expertise people have in the decision-making task
- For tasks in which people have little domain expertise, many AI explanations satisfy none of the desirable properties
- For tasks in which people have high domain expertise, feature contribution explanations satisfied more of the desiderata, while counterfactual explanations did not improve calibrated trust (a toy sketch contrasting these two explanation types follows below)
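
For context on the last finding, here is a minimal sketch (not from the paper, and not the authors' implementation) of how a feature contribution explanation and a counterfactual explanation might each be produced for a toy linear model; the data, model, and search procedure are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy binary-classification data (hypothetical; not the paper's tasks).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

x = X[0]  # instance to explain
pred = model.predict(x.reshape(1, -1))[0]

# Feature contribution: for a linear model, coefficient * feature value
# approximates how much each feature pushes the decision score.
contributions = model.coef_[0] * x
print("feature contributions:", contributions)

# Counterfactual: the smallest single-feature change that flips the
# prediction (a greedy one-feature sweep, purely illustrative).
best = None  # (feature index, delta)
for i in range(x.shape[0]):
    for delta in sorted(np.linspace(-3, 3, 121), key=abs):
        x_cf = x.copy()
        x_cf[i] += delta
        if model.predict(x_cf.reshape(1, -1))[0] != pred:
            if best is None or abs(delta) < abs(best[1]):
                best = (i, delta)
            break  # deltas are sorted by magnitude, so stop at first flip

if best is not None:
    print("counterfactual: change feature %d by %+.2f" % best)
else:
    print("no counterfactual found within the search range")
```

The sketch only highlights the difference in what each explanation communicates: contributions describe how the current prediction was assembled, while a counterfactual describes what would have to change to alter it.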