Abstract
Artificial Intelligence (AI) has recently emerged as a central issue for deliberative democracy, both as a subject of deliberation and as a way to improve processes of democratic deliberation. Deliberative democracy scholars and practitioners are already working on the potential of AI-based tools to ‘improve’, ‘enhance’, ‘augment’ or make deliberation more efficient. The present article studies the case of the Habermas Machine (HM), an AI-based tool designed by Google DeepMind to generate agreements among deliberative groups of people with opposing political positions. This article explores the choice of norms and values embedded in the design of the HM and the normative implications of such choices for deliberative democracy. Therefore, the research question guiding this article is: ‘What are the norms and values embedded in the design of the Habermas Machine, and how do they relate to the normative requirements of deliberative democracy?’ The article explores this by addressing how the machine conceives 1) the values for a good deliberation, 2) the space for deliberative politics, 3) the role of humans/participants in deliberation, and 4) the goal of deliberation. The HM has a very narrow conception of deliberative democracy embedded in its design, which 1) overemphasises the desirability of agreement in deliberative processes, 2) limits participation to an individual and private space, and 3) segments and compartmentalises the deliberative process and reduces human participation to the generation and evaluation of political opinions. Instead of empowering and supporting human participation, the normative conception of deliberative democracy embedded in the design of the HM narrows the realm of the political, simplifies the deliberative process, and relegates humans to a secondary role. 
Building on this specific tool, the article aims to promote and contribute to the conversation between political theory and the work of developers of AI-based tools designed for deliberative purposes. The objective is not simply to guide or orient the design of the technological artefacts, but to evaluate their contributions, limitations, and impacts on the basis of the normative standards on which they are grounded.
Critique of tesslerAICanHelp2024.
Criticizes the HM for having a narrow conception of deliberation:
- Overemphasizes the desirability of agreement in deliberative processes
	- The goal of the HM is necessary agreement.
	- This misunderstands Habermas—his orientation toward agreement is an ideal, not a necessary end.
- Limits participation to an individual and private space
	- No argumentative interaction
	- Maximizes the existing preferences of the group, treating preference-formation as external to the deliberative process
- Segments and compartmentalizes the deliberative process, reducing human participation to the generation and evaluation of political opinions
	- Human tasks are limited to producing written opinions and ranking machine-generated statements
	- The AI limits human intervention to overly simple tasks
	- People are relegated to a secondary role, as political patients rather than political agents
Authors suggest that the goal of such tech should be to reinforce democracy and to empower/support human participation (this resonates with my story idea!)
Argues that use and design should be oriented toward:
- Reinforcing popular authorization and the democratic legitimacy of the public sphere
- Helping citizens navigate its plurality and complexity