
Bots And Fake Accounts In Elections: The State Of Research in 2026


Should bots be taken seriously in discussions about elections? In short: Yes – but that was not always the case.


Primitive Twitter bots of 2017 were mostly noise. That has changed dramatically. Today's generation of AI-powered agents works fundamentally differently: they conduct personalized, one-on-one conversations that voters cannot identify as machine-generated, and even fake accounts have become disturbingly authentic.


“A single short conversation with an AI chatbot shifted the voting intentions of opposition voters in Canada and Poland by around 10 percentage points.” — Lin et al., Nature 648, 2025

This article is aimed at communications professionals, authorities, and political organizations who want to know what current research says about the real influence of bots and fake accounts on elections and referendums – and what this means for direct democracy in Switzerland.


Elections under pressure: How bots and fake accounts distort democratic voting

Are bots and fake accounts really effective in elections?


TL;DR: Yes – and the effect of AI-powered bots is roughly four times that of traditional election advertising.


Evidence:


  • A pre-registered study by Cornell University (Lin et al., Nature, Dec. 2025) had over 2,300 US citizens conduct brief conversations with a politically oriented AI chatbot two months before the 2024 presidential election. The result: The pro-Harris bot shifted potential Trump voters by 3.9 points on a 100-point scale. According to the researchers, this is four times the effect of traditional campaign ads, as measured by TV spots from 2016 and 2020.

  • In the same research project, chatbots shifted the voting intentions of opposition voters in Canada and Poland by approximately 10 percentage points, in experiments involving over 3,600 participants (Lin et al., Nature, 2025).

  • The effect arises not from psychological manipulation, but from the sheer volume of factual claims that an LLM produces in a short time. If the model is prevented from drawing on facts, its persuasive power collapses dramatically.

  • A study with almost 77,000 participants from Great Britain, covering more than 700 political topics (Rand et al., Science, 2025), confirms that these effects scale across cultural contexts.


Why is the argument "too many bots, nobody takes them seriously" outdated?


TL;DR: The quantity argument applied to primitive spam bots of 2017 – not to personalized LLM agents of 2025.


Evidence:

  • Keller & Klinger (2019, Political Communication, University of Zurich) analyzed the Twitter followers of all seven German Bundestag parties before and during the 2017 federal election: Bots made up 7.1% (before) to 9.9% (during) of the followers, but the most active bots hardly spread any German-language political hashtags. Direct agenda-setting effect: low.

  • This finding, however, describes a specific class of bot – follower inflation and hashtag amplification – that was visible and recognizable, but largely inconsequential. Even this needs a more critical look today: fake followers can cause platforms to downrank an account (they act as a negative quality signal, for example on Instagram), and an attacker could exploit this deliberately.

  • Today's LLM-based agents operate differently: They conduct coherent, responsive one-on-one conversations, adapt their argumentation to the conversation partner, and in tests are mostly not identified as bots by human participants.

  • The crucial difference: Earlier bots were effective through sheer volume (the number of posts). New agents are effective through quality (individual persuasion). This shift structurally invalidates the quantity argument.



Who is particularly vulnerable to disinformation through fake accounts?


TL;DR: Ironically, politically engaged citizens who consider themselves well-informed are particularly vulnerable to persuasion effects.

Evidence:

  • Kartal & Tyran (2022, American Economic Review) show in laboratory experiments that overconfidence – i.e., exaggerated confidence in one's own knowledge – significantly increases the effect of misinformation on collective decision quality.

  • According to Kartal & Tyran, the effect at the group level (the election result) is significantly stronger than at the individual level: Even a moderate spread of fake news can noticeably distort the collective result (a simulation sketch follows this list).

  • Voters who consider themselves well-informed are less likely to verify sources – and are therefore more susceptible to seemingly fact-based chatbot arguments.

  • People who frequently use Twitter (now X) for political information are demonstrably more engaged with partisan bot content (Keller & Klinger 2019).
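To make the individual-versus-collective distinction concrete, the following minimal simulation sketch compares the two levels. The electorate size, baseline support, and per-voter flip probability are illustrative assumptions, not parameters taken from Kartal & Tyran (2022).

```python
import random

# Illustrative only: a small individual-level persuasion effect can overturn a close
# collective result most of the time. All parameters are assumptions for demonstration,
# not values from Kartal & Tyran (2022).

N_VOTERS = 20_000       # size of the simulated electorate
BASE_SUPPORT = 0.505    # 50.5% would vote "yes" without interference
FLIP_PROB = 0.02        # each "yes" voter is flipped by misinformation with 2% probability
N_RUNS = 200

def run(rng: random.Random) -> tuple[int, bool]:
    """Return (number of flipped voters, whether the 'yes' majority was overturned)."""
    yes = sum(rng.random() < BASE_SUPPORT for _ in range(N_VOTERS))
    flipped = sum(rng.random() < FLIP_PROB for _ in range(yes))
    overturned = (yes - flipped) <= N_VOTERS / 2
    return flipped, overturned

rng = random.Random(1)
results = [run(rng) for _ in range(N_RUNS)]
avg_flipped = sum(f for f, _ in results) / (N_RUNS * N_VOTERS)
overturn_rate = sum(o for _, o in results) / N_RUNS
print(f"Individual level: {avg_flipped:.1%} of voters changed their vote on average")
print(f"Collective level: the 'yes' majority was overturned in {overturn_rate:.0%} of runs")
```

With these assumptions, only about 1% of voters change their vote, yet the narrow majority flips in the large majority of runs – the aggregation effect the study describes.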



How relevant are bots, especially for Swiss referendums?


TL;DR: Direct democracy and close results make Switzerland particularly vulnerable – and social media are rapidly gaining importance among young voters.


"The effect of political AI chatbots on voting intentions is four times greater than the effect of traditional TV election advertising." — Cornell University, 2025

Evidence:

  • A study by the University of Zurich as part of the DDS-21 project (Fischer et al., 2025) analyzed the vote on the environmental responsibility initiative of February 9, 2025: While only a minority overall used social media to form their opinions, around a third of the youngest age group specifically sought information via social platforms. Only about 10% of voters did not use social media at all.

  • The UZH researchers explicitly warn: "The supposedly small influence of social media should not be underestimated" – as media habits shift across generations and today's young cohort will form the core of the voting population in the future.

  • Swiss referendums are regularly decided by narrow margins (examples: RASA 2016 with 41.1% yes, Transparency Initiative 2021 with 56.4% yes, AHV 21 2022 with 50.5% yes). A demonstrated chatbot effect of 3.9 to 10 percentage points is of the same order as these margins (a back-of-the-envelope sketch follows this list).

  • Election campaigns in Switzerland are also subject to budget constraints – easily deployable bot infrastructure can be a more cost-effective campaign channel than traditional advertising.
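Purely as a back-of-the-envelope illustration of why narrow margins matter: if one treats the measured per-voter effect as a one-to-one vote-share shift within the reached group – a simplifying assumption the cited studies do not make, since they measure shifts on attitude scales – the share of the electorate a campaign would need to reach to overturn a result is easy to estimate.

```python
# Back-of-the-envelope estimate: what share of the electorate must a persuasion campaign
# reach to pull a winning "yes" side below 50%, if every reached voter shifts by
# `effect_pp` percentage points? Simplifying assumption: attitude shifts translate 1:1
# into vote share, which the cited studies do not claim. Figures are illustrative.

def required_reach(final_yes_pct: float, effect_pp: float) -> float:
    """Fraction of the electorate that must be reached to pull 'yes' below 50%."""
    margin_above_50 = final_yes_pct - 50.0
    return margin_above_50 / effect_pp

for effect in (3.9, 10.0):
    reach = required_reach(final_yes_pct=50.5, effect_pp=effect)  # e.g. AHV 21: 50.5% yes
    print(f"Effect of {effect} pp per reached voter -> reach about {reach:.0%} of the electorate")
```

Under these assumptions, an effect in the range measured by Lin et al. would require reaching only roughly 5–13% of voters to tip a result as close as AHV 21.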



What five indicators show whether a vote is particularly vulnerable to bots?


TL;DR: Five factors measurably increase the risk of coordinated bot campaigns around a vote.


Evidence:

| Rank | Indicator | Why relevant |
| --- | --- | --- |
| 1 | High potential for societal polarization | Emotional topics drive more organic amplification of bot content. |
| 2 | A close result is expected | The smaller the margin, the greater the potential leverage of small shifts in attitude. |
| 3 | The young target group is strongly represented | Heavier social media use increases exposure to bot content. |
| 4 | International interests | Foreign state or economic actors have a motive to influence the outcome. |
| 5 | Low public media literacy on the topic | Low awareness of AI-generated content increases vulnerability. |
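
As an illustration of how an organization might operationalize this checklist, the sketch below turns the five indicators into a rough pre-vote risk score. The weights and the alert threshold are hypothetical assumptions, not values from the cited research.

```python
# Hypothetical pre-vote bot-risk checklist based on the five indicators above.
# Weights and the alert threshold are illustrative assumptions, not research findings.

INDICATORS = {
    "high_polarization_potential": 3,  # emotional topics drive organic amplification
    "close_result_expected": 3,        # small margins maximize the leverage of small shifts
    "young_target_group": 2,           # heavier social media use means more exposure
    "international_interests": 2,      # foreign actors with a motive to influence the outcome
    "low_media_literacy": 1,           # little awareness of AI-generated content
}

def risk_score(present: set[str]) -> int:
    """Sum the weights of all indicators judged to apply to a given vote."""
    return sum(weight for name, weight in INDICATORS.items() if name in present)

# Example assessment for a hypothetical referendum.
observed = {"high_polarization_potential", "close_result_expected", "young_target_group"}
score = risk_score(observed)
print(f"Risk score: {score}/{sum(INDICATORS.values())} -> "
      f"{'elevated' if score >= 6 else 'baseline'} monitoring recommended")
```

The weighting is deliberately crude; the point is simply to force an explicit pre-vote assessment rather than an ad-hoc reaction once a campaign is already underway.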



How do primitive bots and AI agents differ in elections — and what are the implications?


TL;DR: Primitive bots are a noise problem; AI agents are a persuasion problem. This requires fundamentally different counter-strategies.


Evidence:

| Dimension | Primitive bots (2015–2021) | AI agents (2023–present) |
| --- | --- | --- |
| Mode of action | Follower inflation, hashtag amplification | Personalized one-to-one communication |
| Recognizability | Often identifiable (language, posting frequency) | Mostly not identifiable as a bot |
| Effect size on voting intention | Low to moderate | 3.9 to 10 percentage points (measured) |
| Required resources | Low (scripts, API) | Moderate (API costs are dropping significantly; scalable) |
| Counter-strategies | Bot detection, blocking | Media literacy, verification requirements, transparency, solid trust building |


  • Mont'Alverne et al. (Public Opinion Quarterly, 2024) show, based on 42 million clicks during the 2022 Brazilian presidential election, that the use of legacy media protects against belief in election disinformation; digital platforms alone do not.

  • This means that credible, trust-based information sources are the strongest structural counterweight to bot campaigns – not just technical blocking measures.



FAQ


Can bots decide a Swiss referendum on their own? No, but they can shift results by several percentage points, which is crucial in close votes. Proving direct causality for specific voting results is methodologically difficult.


Is the problem limited to social media? Increasingly not. Messenger services like WhatsApp, direct advertising formats, and, soon, voice interactions are new channels; researchers are observing cross-platform coordination.


What legally distinguishes political advertising from bot campaigns? In Switzerland, there is currently no explicit legal regulation for AI-generated political content and fake accounts. Transparency requirements for political advertising are under development.


Are small parties or NGOs also at risk – or only large parties? Small actors with polarizing issues are particularly vulnerable, as counter-campaigns are more expensive and media literacy in their environment is often lower.


Do bot detection tools help? They improve the situation, but they don't offer complete protection: Current LLMs convincingly mimic human writing behavior. The combination of technical detection and media literacy measures is the more robust strategy.


Can the effect of bots be empirically measured? Yes, with pre-registered experiments – as Lin et al. and Rand et al. (2025) have shown. The challenge lies in external validity: laboratory conditions do not fully correspond to real voting situations.


What can my organization do specifically? Three priorities: (1) Assess target group exposure on social platforms, (2) Integrate media literacy measures into the communication strategy, (3) Provide verifiable source information for own content.



Methods & Data Appendix

| Study | Method | N | Country | Published |
| --- | --- | --- | --- | --- |
| Lin et al. (2025) | Pre-registered RCT, chatbot experiment | 2,300 (US), 1,530 (CA), 2,118 (PL) | USA, Canada, Poland | Nature 648, Dec. 2025 |
| Rand et al. (2025) | Crossover experiment, 700+ political topics | ~77,000 | UK | Science, Dec. 2025 |
| Keller & Klinger (2019) | Bot detection (Botometer), Twitter follower analysis | 638,674 / 838,026 accounts | Germany | Political Communication 36 |
| Kartal & Tyran (2022) | Laboratory experiment + theoretical model | n/a (laboratory sample) | Austria | American Economic Review 112(10) |
| Mont'Alverne et al. (2024) | 42 million clicks + 4-wave panel survey | 2,200 internet users | Brazil | Public Opinion Quarterly 88 |
| Fischer et al. / DDS-21 (2025) | Post-vote survey (UVI) | Swiss voters | Switzerland | UZH, June 2025 |




Is your organization prepared for bot campaigns ahead of the next election?



