
Disinformation? The DeepSeek Hype Was All Made Up

Updated: Sep 1

From AI Darling to Disinformation Playbook: How Fake Accounts Manufactured a Market Frenzy, and What DeepSeek’s Hype Teaches Leaders


DeepSeek, a new Chinese AI model, stormed app charts and briefly sent shockwaves through global markets. However, our disinformation research team found that much of this excitement was artificial — driven by thousands of fake profiles working in tandem. Their behavior bore the hallmarks of state-linked bot networks, amplifying hype and distorting market perception. For tech leaders and investors, the warning is clear: without the ability to separate genuine adoption from manufactured buzz, decisions risk being guided by disinformation rather than reality.

DeepSeek Disinformation Campaign: Diagram showing two scenarios: on the left, fake profiles liking and commenting on each other’s posts; on the right, fake profiles inserting themselves into authentic user discussions to gain trust.
DeepSeek Disinformation Campaign: Fake profiles used two main methods: amplifying each other to simulate popularity, and blending into authentic conversations to appear credible.

What happened with the DeepSeek launch?


TL;DR: DeepSeek’s record-breaking hype wasn’t organic — it was powered by thousands of coordinated fake accounts. #what-happened


Evidence:

  • Our disinformation research team analysed 41,864 profiles discussing DeepSeek.

  • 3,388 were fake accounts — most of them active on X, accounting for about 15% of all engagement on the platform, which is double the usual baseline.

  • At peak activity, these accounts generated 2,158 posts on X in a single day.

  • The orchestrated amplification made DeepSeek trend across platforms and influence market narratives.

  • 44.7% of those fake profiles were created in 2024, aligning with the timing of DeepSeek’s launch.


How did fake accounts manufacture excitement and drive disinformation around DeepSeek?


TL;DR: They acted in sync, boosting each other and hijacking authentic conversations to create an illusion of momentum. #how-manufactured


Screenshot of coordinated fake accounts using the DeepSeek hashtag to promote scams and a rival AI tool, piggybacking on manufactured hype.
Fake profiles hijacked the DeepSeek hashtag to push scams and even promote competing AI platforms — exploiting hype for multiple agendas.

Evidence:

  • Mutual amplification: Fake profiles commented and liked each other’s posts to simulate popularity.

  • Piggybacking on trending posts: They flooded viral tweets (e.g., @FanTV_official, 480K+ views) with DeepSeek praise.

  • Integration with real users: Bots joined genuine conversations, tricking users into engaging with disinformation.

  • Synchronized timing: Bursts of identical content at the same moment maximised visibility.
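The last two tactics leave a measurable footprint: many distinct accounts posting near-identical text within the same narrow time window. A minimal detection sketch (field names like `text`, `ts`, and `author` are illustrative assumptions, not a real platform schema):

```python
from collections import defaultdict

def find_coordinated_bursts(posts, window_s=60, min_accounts=5):
    """Bucket posts by normalised text and time window; flag buckets
    where several distinct accounts posted identical content together.
    `posts` is a list of dicts with hypothetical keys: text, ts, author."""
    buckets = defaultdict(set)
    for p in posts:
        # Normalise whitespace and case so trivially tweaked copies still match.
        key = (" ".join(p["text"].lower().split()), p["ts"] // window_s)
        buckets[key].add(p["author"])
    # Keep only buckets with enough distinct authors to suggest coordination.
    return [(text, sorted(authors))
            for (text, _), authors in buckets.items()
            if len(authors) >= min_accounts]
```

Real systems use fuzzier text matching and sliding windows, but even this coarse bucket test surfaces the "identical content at the same moment" pattern described above.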



What are the hallmarks of a coordinated bot network?


TL;DR: Recent creation dates, avatar recycling, identical posts, and synchronous activity are tell-tale signs. #criteria


Screenshot showing multiple fake accounts posting identical content with recycled profile photos, illustrating coordinated bot activity.
Fake profiles exhibited classic bot behavior: identical, praise-filled posts, generic avatars, and synchronized timing.

Evidence:

  • Avatar recycling: Many used generic stock photos, often of Chinese women.

  • Recent creation: 44.7% of accounts were created in 2024, coinciding with DeepSeek’s rise.

  • Copy-pasting: Identical praise-filled comments posted en masse.

  • Simultaneous posting: Coordinated bursts created artificial virality.

  • So what: These patterns match known behaviour of Chinese bot networks.
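The hallmarks above can be combined into a simple scoring heuristic. This is a toy sketch, not our production model; the account fields (`created_year`, `avatar_is_stock`, `duplicate_post_ratio`, `burst_posting`) are hypothetical names for signals an upstream pipeline would have to supply:

```python
def bot_score(account, current_year=2025):
    """Count how many of the four hallmarks an account exhibits.
    All field names are illustrative assumptions."""
    score = 0
    if account["created_year"] >= current_year - 1:   # recently created
        score += 1
    if account["avatar_is_stock"]:                    # recycled/generic avatar
        score += 1
    if account["duplicate_post_ratio"] > 0.5:         # mostly copy-pasted posts
        score += 1
    if account["burst_posting"]:                      # posts in synchronized bursts
        score += 1
    return score

def likely_fake(account, threshold=3):
    """Flag accounts showing at least `threshold` of the four hallmarks."""
    return bot_score(account) >= threshold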



Why does manufactured hype matter for markets and brands?

TL;DR: Artificial excitement can sway investment, distort perception, and mislead decision-makers. #why-matters


Evidence:

  • Market impact: DeepSeek’s hype briefly moved US markets, wiping out billions in valuation as investors scrambled.

  • Reputation distortion: Fake sentiment shaped the AI arms-race narrative in China’s favour.

  • Precedent: Similar tactics have been used to influence elections and protests — now redirected toward tech adoption.

  • Risk: Investors, boards, and communications teams risk making strategic decisions based on false signals.



Build vs. Buy: Can teams monitor this themselves?


TL;DR: Building internal detection is impractical; specialised tools are faster, broader, and more reliable. #build-vs-buy


Evidence:

  • Build: Requires data pipelines across platforms, AI expertise, and 24/7 monitoring.

  • Buy: Platforms like ours are pre-trained to identify fake accounts, coordinated behaviour, and narrative manipulation.

  • Time to value: Tools deliver insights in hours vs. months.

  • Cost of delay: One misinformation-fuelled hype cycle can move billions before teams react.



How should organisations roll out disinformation detection in 90 days?


TL;DR: Start with monitoring, then stress-test with simulations, and embed playbooks for rapid response. #rollout


Evidence:

  • 0–30 days: Connect monitoring dashboards to social platforms; set thresholds for unusual spikes.

  • 31–60 days: Run simulations of bot-driven hype or backlash; align comms and risk teams.

  • 61–90 days: Develop and codify playbooks for investor messaging, market communications, and board reporting.

  • Roles: Comms & IR (messaging), Strategy (risk), Insights/Research (detection).
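For the 0–30 day step, "thresholds for unusual spikes" can start as something as simple as a trailing-window anomaly check on daily mention counts. A minimal sketch, assuming you already export a daily mention series from your monitoring dashboard:

```python
import statistics

def spike_alert(daily_mentions, window=7, k=3.0):
    """Flag days where mention volume exceeds mean + k * stdev of the
    trailing `window` days — a crude 'unusual spike' threshold."""
    alerts = []
    for i in range(window, len(daily_mentions)):
        hist = daily_mentions[i - window:i]
        mu = statistics.fmean(hist)
        sd = statistics.pstdev(hist) or 1.0  # avoid zero stdev on flat series
        if daily_mentions[i] > mu + k * sd:
            alerts.append(i)
    return alerts
```

A flagged day is a trigger for analysts, not a verdict: the 31–60 day simulations are where teams learn to tell a bot-driven spike from a genuine news cycle.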



FAQ


  1. Is all AI hype fake? No — but fake accounts can amplify genuine interest.

  2. How many fakes are “normal”? Typically 5–7%; DeepSeek hit 15%.

  3. Which platforms are most exploited? X and TikTok, due to virality and weaker controls.

  4. Are state actors always involved? Not always — but DeepSeek’s patterns matched Chinese bot networks.

  5. Can small firms be targeted too? Yes — hype tactics are cheap to deploy at any scale.

  6. Do detection tools replace analysts? No — they augment analysts with faster, deeper signals.



Methods & Data


  • Source: Our disinformation research team analysed 41,864 profiles discussing DeepSeek between January 21 and February 4.

  • Findings: Identified 3,388 fake profiles, most of them active on X, where they drove about 15% of engagement and showed hallmarks of coordinated bot networks.

  • Approach: Sentiment analysis, network mapping, and account authenticity scoring.

  • TODO: evidence needed — proposed investor survey on how hype narratives influence decision-making.
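One piece of the network-mapping step can be illustrated concretely: the mutual-amplification pattern shows up as reciprocated like/reply edges between accounts. A simplified sketch (the `(source, target)` interaction tuples are an assumed input format, not our actual pipeline):

```python
from collections import defaultdict

def mutual_amplification_clusters(interactions, min_mutual=3):
    """interactions: iterable of (source, target) like/reply pairs.
    Returns accounts with at least `min_mutual` reciprocated edges —
    a crude proxy for the mutual-amplification pattern."""
    edges = set(interactions)
    mutual = defaultdict(int)
    for a, b in edges:
        # Count each reciprocated pair once per endpoint.
        if (b, a) in edges and a < b:
            mutual[a] += 1
            mutual[b] += 1
    return {acct for acct, n in mutual.items() if n >= min_mutual}
```

Organic users mostly produce one-way edges (a fan likes a brand's post); dense clusters of reciprocated edges are what distinguish a ring of accounts boosting each other.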




👉 Learn more about how our brand threat consulting and software services can help you identify coordinated fake accounts before they sway markets and reputations.

