Disinformation Intelligence: Safeguarding Democracy and Society
- Steffen Konrath

- Aug 29
How to Expose Coordinated Networks in Politics, Elections, and Public Debate

What is Disinformation Intelligence?
TL;DR: It uncovers coordinated bot networks and fake accounts manipulating democratic discourse, elections, and public trust. #what
Evidence:
1,000 fake accounts influenced Germany’s 2025 federal election, boosting the AfD and discrediting its rivals.
A mass campaign against Frauke Brosius-Gersdorf amplified abortion narratives until her judicial nomination collapsed.
Many fake accounts are “long-term assets” — 47% had been active for over a year before the election.
Who needs this service?
TL;DR: Governments, NGOs, journalists, and security teams must separate authentic civic debate from manufactured manipulation. #who
Evidence:
Election commissions & regulators: monitor foreign interference and bot farms.
Law enforcement & OSINT: track extremist narratives and protest coordination.
Investigative journalists & NGOs: reveal hidden actors driving online outrage.
What are the hallmarks of a disinformation network?
TL;DR: Even when posts look authentic, coordinated patterns reveal manipulation. #criteria
Evidence:
Recently created accounts (13% of German election bots were less than 30 days old).
Recycled avatars & AI-generated images.
Identical bursts of content.
Narrative hijacking: pushing hashtags (#AfD, #RemigrationJETZT) until they dominate the conversation.
Exploiting polarising issues (abortion, migration, corruption).
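The hallmarks above can be approximated with simple heuristics. The sketch below is illustrative only (the field names, thresholds, and signal labels are assumptions, not a description of any production detection system): it flags accounts that are newly created, share avatars, or post identical content in tight bursts.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def flag_suspicious(accounts, posts, now):
    """Score accounts against three hallmark heuristics (illustrative).

    accounts: list of dicts with 'id', 'created_at' (datetime), 'avatar_hash'
    posts:    list of dicts with 'account_id', 'text', 'timestamp' (datetime)
    Returns a dict mapping account id -> list of triggered signals.
    """
    signals = defaultdict(list)

    # Hallmark 1: recently created accounts (<30 days old)
    for acc in accounts:
        if now - acc["created_at"] < timedelta(days=30):
            signals[acc["id"]].append("new_account")

    # Hallmark 2: recycled avatars (same image hash across accounts)
    by_avatar = defaultdict(list)
    for acc in accounts:
        by_avatar[acc["avatar_hash"]].append(acc["id"])
    for ids in by_avatar.values():
        if len(ids) > 1:
            for acc_id in ids:
                signals[acc_id].append("shared_avatar")

    # Hallmark 3: identical content posted in a tight burst (<10 minutes)
    by_text = defaultdict(list)
    for p in posts:
        by_text[p["text"].strip().lower()].append(p)
    for same in by_text.values():
        times = sorted(p["timestamp"] for p in same)
        if len(same) > 1 and times[-1] - times[0] < timedelta(minutes=10):
            for p in same:
                if "burst_duplicate" not in signals[p["account_id"]]:
                    signals[p["account_id"]].append("burst_duplicate")

    return dict(signals)
```

In practice these signals are combined with cross-platform data and network analysis; no single heuristic is proof of coordination on its own.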
Why does this matter for democracy and public safety?
TL;DR: Manipulated narratives can shift elections, block judicial nominations, and erode trust in institutions. #why
Evidence:
German election 2025: fake engagement manufactured AfD popularity, misleading undecided voters.
Brosius-Gersdorf case: coordinated outrage directly derailed a constitutional court nomination.
Long-term fake accounts show strategic investment in undermining trust.
Build vs. Buy: Can institutions monitor this internally?
TL;DR: In-house monitoring is too slow; specialized platforms detect manipulation in real time. #build
Evidence:
Cross-platform scanning is rarely feasible internally.
SaaS/API solutions detect fake accounts, coordinated clusters, and AI-generated content.
Time to value: alerts in hours vs. months.
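As a sketch of what API-based integration can look like, an alert poller might be only a few lines. The endpoint, parameters, and response shape below are entirely hypothetical, for illustration only, not a real vendor API:

```python
import json
import urllib.request

# Hypothetical endpoint -- illustrative only, not a real vendor API.
ALERTS_URL = "https://api.example.com/v1/alerts?severity=high"

def filter_large_clusters(payload, min_accounts=50):
    """Keep only alerts about coordinated clusters above a size threshold.

    Assumed payload shape: {"alerts": [{"cluster_id": ..., "accounts": N}]}
    """
    return [
        alert for alert in payload.get("alerts", [])
        if alert.get("accounts", 0) >= min_accounts
    ]

def fetch_alerts(token):
    """Poll the (hypothetical) detection API and return large-cluster alerts."""
    req = urllib.request.Request(
        ALERTS_URL, headers={"Authorization": f"Bearer {token}"}
    )
    with urllib.request.urlopen(req) as resp:
        return filter_large_clusters(json.load(resp))
```

The point is the integration pattern, wiring detection alerts into existing dashboards and escalation workflows, rather than any specific vendor interface.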
What does a 90-day rollout look like?
TL;DR: Monitor → Stress-test → Embed playbooks. #rollout
Evidence:
0–30 days: integrate monitoring dashboards; set thresholds.
31–60 days: simulate disinformation scenarios (elections, court nominations, protests).
61–90 days: codify playbooks for election boards, institutions, and media partners.
FAQ
Are state actors always involved? → Not always, but the patterns observed in Germany match known coordinated networks.
Which platforms are most exploited? → X, TikTok, and fringe forums.
Does it replace analysts? → No, it augments them with faster insights.
Can NGOs and journalists use it? → Yes, to reveal who is behind campaigns.
Is all controversy fake? → No, but fake accounts amplify divisive issues until they dominate.
👉 Book a demo to see how we expose disinformation networks before they destabilise elections or derail public trust.