Discussion of AI and political persuasion
This is a synthetic workshop discussion of two new papers:
Hackenberg et al., "The levers of political persuasion with conversational artificial intelligence," Science 390, 1016 (2025), https://www.science.org/doi/10.1126/science.aea3884.
Lin et al., "Persuading voters using human–artificial intelligence dialogues," Nature (2025), https://doi.org/10.1038/s41586-025-09771-9.
The discussion is a synthetic academic workshop generated with enTalkenator, using an AI-generated adaptation of the "workshop hot bench" template, and authored by Claude Opus 4.5.
Abstract for Hackenberg et al.: “There are widespread fears that conversational artificial intelligence (AI) could soon exert unprecedented influence over human beliefs. In this work, in three large-scale experiments (N = 76,977 participants), we deployed 19 large language models (LLMs)—including some post-trained explicitly for persuasion—to evaluate their persuasiveness on 707 political issues. We then checked the factual accuracy of 466,769 resulting LLM claims. We show that the persuasive power of current and near-future AI is likely to stem more from post-training and prompting methods—which boosted persuasiveness by as much as 51 and 27%, respectively—than from personalization or increasing model scale, which had smaller effects. We further show that these methods increased persuasion by exploiting LLMs’ ability to rapidly access and strategically deploy information and that, notably, where they increased AI persuasiveness, they also systematically decreased factual accuracy.”
Abstract for Lin et al.: “There is great public concern about the potential use of generative artificial intelligence (AI) for political persuasion and the resulting impacts on elections and democracy. We inform these concerns using pre-registered experiments to assess the ability of large language models to influence voter attitudes. In the context of the 2024 US presidential election, the 2025 Canadian federal election and the 2025 Polish presidential election, we assigned participants randomly to have a conversation with an AI model that advocated for one of the top two candidates. We observed significant treatment effects on candidate preference that are larger than typically observed from traditional video advertisements. We also document large persuasion effects on Massachusetts residents’ support for a ballot measure legalizing psychedelics. Examining the persuasion strategies used by the models indicates that they persuade with relevant facts and evidence, rather than using sophisticated psychological persuasion techniques. Not all facts and evidence presented, however, were accurate; across all three countries, the AI models advocating for candidates on the political right made more inaccurate claims. Together, these findings highlight the potential for AI to influence voters and the important role it might play in future elections.”