Abstract
This paper introduces ADEPT, a system using Large Language Model (LLM) personas to simulate multi-perspective ethical debates. ADEPT assembles panels of ‘AI personas’, each embodying a distinct ethical framework or stakeholder perspective (such as a deontologist, a consequentialist, or a disability rights advocate), to deliberate on complex moral issues. Its application is demonstrated through a scenario, inspired by real-world challenges in allocating scarce medical resources, about prioritizing patients for a limited number of ventilators. Two debates, each with six LLM personas, were conducted; they differed only in the moral viewpoints represented: one panel included a Catholic bioethicist and a care theorist, while the other substituted a rule-based Kantian philosopher and a legal adviser. Both panels ultimately favoured the same policy: a lottery system weighted for clinical need and fairness, crucially avoiding the withdrawal of ventilators for reallocation. However, each panel reached that conclusion through different lines of argument, and their voting coalitions shifted once duty- and rights-based voices were present. Examination of the debate transcripts shows that the altered membership redirected attention toward moral injury, legal risk and public trust, which in turn changed four continuing personas’ final positions. The work offers three contributions: (i) a transparent, replicable workflow for running and analysing multi-agent AI debates in bioethics; (ii) evidence that the moral perspectives included in such panels can materially change the outcome even when the factual inputs remain constant; and (iii) an analysis of the implications and future directions for such AI-mediated approaches to ethical deliberation and policy.
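To make the deliberation workflow concrete, here is a minimal sketch of how such a persona debate could be orchestrated. The abstract does not show the authors' implementation, so everything below is an assumption: the names (Persona, call_llm, run_debate), the fixed-round turn-taking, and the final-vote prompt are all hypothetical, and the LLM call is left as a stub to be wired to whatever chat API you use.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    name: str           # e.g. "consequentialist" or "disability rights advocate"
    system_prompt: str  # instructs the LLM to argue from this framework/perspective

def call_llm(system_prompt: str, transcript: str) -> str:
    """Stand-in for a chat-model call; replace with your provider's client."""
    raise NotImplementedError("wire up an LLM client here")

def run_debate(panel: list[Persona], scenario: str, rounds: int = 3) -> dict[str, str]:
    """Each persona speaks in turn for a fixed number of rounds, then states a final position."""
    transcript = f"Scenario: {scenario}\n"
    for r in range(rounds):
        for p in panel:
            turn = call_llm(p.system_prompt, transcript)
            transcript += f"[round {r + 1}] {p.name}: {turn}\n"
    # Collect each persona's final policy choice given the full debate transcript.
    return {
        p.name: call_llm(p.system_prompt, transcript + "\nState your final policy choice.")
        for p in panel
    }
```

Running this loop twice with panels that differ in membership, then comparing the transcripts and final positions, would reproduce the paper's comparison in outline.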
I used to want to do a study just like this. The author doesn’t seem to be claiming that the agents mimic humans, but in hindsight I see that ethical frameworks like deontology, consequentialism… aren’t really the kind of frameworks humans actually “stick to” within a conversation. We’re more complicated than that.
Each agent takes a distinct ethical framework or stakeholder perspective and deliberates on complex moral issues.
Two debates, each with six LLM personas, were conducted; they differed only in the moral viewpoints represented (sketched below).
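As a small illustration of that design (again an assumption, since the abstract names only the two swapped seats and says four personas continued across both debates), the two panels could be expressed as:

```python
# The four continuing personas are not named in the abstract, so placeholders stand in.
shared_seats = [f"continuing persona {i}" for i in range(1, 5)]

panel_a = shared_seats + ["Catholic bioethicist", "care theorist"]
panel_b = shared_seats + ["rule-based Kantian philosopher", "legal adviser"]
```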
Both panels favored the same policy, but each reached that conclusion through different lines of argument, and the voting coalitions shifted once duty- and rights-based voices were present.
They claim: “evidence that the moral perspectives included in such panels can materially change the outcome even when the factual inputs remain constant.”