As in other agent papers, these personas are not based on real people. Rather, they are guesses, informed by contextual information about the topic and the assembly setting, at which three stakeholder personas would be affected by the assembly's decisions.
They use a citizens' assembly as an operationalized deliberative forum, where participants learn about a topic, hear from experts, deliberate, and ultimately formulate recommendations (I like this as a product).
They have the LLM listen to the assembly, generate three relevant personas, use those personas to surface points of disagreement and highlight missing perspectives in the discussion, and then survey how the participants felt post-deliberation.
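The pipeline above can be sketched roughly as follows. This is a minimal illustration under my own assumptions, not the authors' implementation: function names are hypothetical, and `llm` is a stub standing in for whatever model call the paper actually uses.

```python
def llm(prompt: str) -> str:
    """Stub for a real LLM call; returns a placeholder response."""
    return f"[model response to: {prompt[:40]}...]"

def generate_personas(transcript: str, n: int = 3) -> list[str]:
    """Step 1 (hypothetical): derive n stakeholder personas from the transcript."""
    return [
        llm(f"Persona {i + 1} affected by this assembly's decisions:\n{transcript}")
        for i in range(n)
    ]

def surface_perspectives(transcript: str, personas: list[str]) -> dict[str, dict[str, str]]:
    """Step 2 (hypothetical): for each persona, surface points of
    disagreement and perspectives missing from the discussion."""
    return {
        persona: {
            "disagreements": llm(f"As {persona}, where do you disagree?\n{transcript}"),
            "missing": llm(f"As {persona}, what perspective is missing?\n{transcript}"),
        }
        for persona in personas
    }

transcript = "Participants discussed a local housing policy proposal..."
personas = generate_personas(transcript)
report = surface_perspectives(transcript, personas)
```

The point of the sketch is the shape of the loop, not the prompts: personas are generated once from the transcript, then each persona is reused as a lens over the same discussion.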
They found that participants considered perspectives they had not previously considered, but noted that the responses were sometimes overly general, and raised concerns about overreliance on AI for perspective-taking.
The facilitators suggested that explicitly framing the tool as role-playing, or as a mechanism for surfacing overlooked perspectives rather than replacing people, could help clarify its purpose.