I used to want to do a study just like this. The author isn't claiming the agents are supposed to mimic humans, but in hindsight I see that ethical frameworks like deontology or consequentialism aren't really the kind of frameworks humans actually "stick to" within a conversation. We're more complicated than that.

Each agent takes on a distinct ethical framework or stakeholder perspective and deliberates on complex moral issues.

Two debates were conducted, each with six LLM personas; the panels differed only in the moral viewpoints represented.
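A minimal sketch of how such a two-panel setup might be wired up, not the paper's actual code: the persona lists, round count, and the `ask_model` stub are my assumptions, and `ask_model` would wrap whichever LLM client is actually used.

```python
from collections import Counter

PANEL_A = [  # hypothetical panel without duty-/rights-based voices
    "utilitarian", "rule consequentialist", "welfare economist",
    "public health official", "risk analyst", "pragmatist",
]
PANEL_B = [  # hypothetical panel adding duty- and rights-based voices
    "utilitarian", "Kantian deontologist", "rights theorist",
    "virtue ethicist", "care ethicist", "contractualist",
]

def ask_model(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a real LLM call; plug in your own client here."""
    raise NotImplementedError

def run_debate(personas: list[str], issue: str, rounds: int = 3) -> Counter:
    transcript: list[str] = []
    for _ in range(rounds):
        for persona in personas:
            reply = ask_model(
                f"You are a {persona}. Argue from that moral framework only.",
                f"Issue: {issue}\nDebate so far:\n" + "\n".join(transcript),
            )
            transcript.append(f"{persona}: {reply}")
    # Final vote: each persona names the single policy option it supports.
    votes = Counter(
        ask_model(
            f"You are a {persona}. Answer with one policy option only.",
            f"Issue: {issue}\nDebate:\n" + "\n".join(transcript) + "\nYour vote:",
        ).strip()
        for persona in personas
    )
    return votes
```

Comparing `run_debate(PANEL_A, issue)` against `run_debate(PANEL_B, issue)` on the same issue text is the kind of controlled contrast the paper describes: identical factual inputs, different moral viewpoints on the panel.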

Both panels favored the same policy, but each reached that conclusion through different lines of argument, and voting coalitions shifted once duty- and rights-based voices were present.

The authors claim this is "evidence that moral perspectives included in such panels can materially change the outcome even when the factual inputs remain constant."