
How Disinformation Hijacked American Eagle’s Sydney Sweeney Campaign — and What Brands Can Learn

American Eagle’s new campaign featuring actress Sydney Sweeney launched to glowing reactions and even a bump in the brand’s stock price. Within days, however, the conversation flipped: accusations of racism, coordinated calls for boycotts, and harmful narratives spread across TikTok, X, and Facebook.


Without early detection, brands risk losing control of their story before they even see the warning signs.

A brand threat analysis conducted with Cyabra’s software platform revealed that inauthentic accounts fueled much of the outrage, amplifying narratives that reached millions and creating a toxic environment for the brand. For CMOs and communications leaders, the lesson is clear — without real-time disinformation detection, even the strongest campaigns can be hijacked before teams have time to respond.


Line chart showing timeline of negative post sentiment around American Eagle’s Sydney Sweeney ad from July 23 to July 29, with screenshots of TikTok and X posts from fake accounts calling for boycotts.
Negative narratives toward American Eagle’s Sydney Sweeney campaign spiked 400% in under a week, with fake accounts fueling much of the backlash.


What exactly happened with American Eagle’s Sydney Sweeney campaign?


TL;DR: The campaign launched with praise but spiraled into accusations of racism and boycott calls within days, with fake accounts amplifying the controversy. #what-happened


A single slogan flipped from marketing hook to reputational hazard in less than 72 hours.

Evidence:

  • Initial approval: July 23–24 — positive reception and stock price increase showed strong early momentum.

  • Sentiment shift: By July 25, concerns over the slogan “Sydney Sweeney Has Great Jeans” began to spread.

  • Backlash surge: July 27–29 — accusations of Nazi undertones dominated; boycott narratives peaked.

  • Inauthentic accounts: 13–15% of TikTok commenters were fake, contributing >77,000 engagements.



What narratives drove the backlash?


TL;DR: Two narratives dominated: racism/extremism accusations and criticism of narrow beauty standards. #narratives


Two narratives — racism accusations and beauty exclusivity — accounted for millions of engagements and defined the campaign’s legacy.

Cluster map showing online accounts discussing American Eagle’s Sydney Sweeney ad. Two main clusters are labeled: racism and extremism accusations, and criticism of beauty standards.
Network diagram of accounts amplifying backlash, highlighting two dominant narratives: accusations of racism/extremism and criticism of exclusive beauty standards.

Evidence:

  • Narrative 1: “Genes not jeans” pun interpreted as racial purity messaging.

  • Narrative 2: The campaign reinforced outdated, exclusive beauty ideals.

  • Engagement scale: Posts echoing these narratives generated over 2.7 million engagements and a potential reach of 881 million users.

  • Polarization: While backlash grew, a counter-narrative praised the campaign’s “non-woke” stance, creating cultural division.




How did fake accounts amplify harmful sentiment?


TL;DR: A minority of fake profiles seeded and spread boycott calls, shaping perceptions far beyond their numbers. #fake-accounts


Fake accounts drove 77,000+ engagements, proving that coordinated inauthentic activity can punch far above its weight.

Infographic showing total profiles analyzed on TikTok (1,999), number of inauthentic profiles (272), and sample avatars used by fake accounts.
Out of 1,999 TikTok profiles analyzed, 272 were identified as fake, generating over 77,000 engagements that contributed to the boycott narrative.

Evidence:

  • Scope: 272 inauthentic TikTok profiles identified; responsible for much of the critical sentiment.

  • Coordination: Many used avatars or AI-generated images; they echoed the same language across threads.

  • Impact: Generated >77,000 engagements, driving visibility and credibility of backlash.

  • Tactic: Mimicked “authentic outrage” to sway undecided audiences.
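One of the coordination signals described above — many accounts echoing the same language across threads — can be approximated with a simple copypasta check. The sketch below is purely illustrative (account IDs and comment text are hypothetical, and real detection platforms combine many more account- and content-level signals):

```python
from collections import defaultdict

def find_copypasta(comments, min_accounts=3):
    """Group comments by normalized text and flag phrases posted
    verbatim by several distinct accounts -- one crude signal of
    coordinated inauthentic activity."""
    by_text = defaultdict(set)
    for account, text in comments:
        key = " ".join(text.lower().split())  # normalize case and whitespace
        by_text[key].add(account)
    return {text: accounts for text, accounts in by_text.items()
            if len(accounts) >= min_accounts}

# Hypothetical comment stream: (account_id, comment_text)
stream = [
    ("u1", "Boycott this brand NOW"),
    ("u2", "boycott this brand now"),
    ("u3", "Love the jeans, honestly"),
    ("u4", "Boycott this   brand now"),
]
flagged = find_copypasta(stream)
print(flagged)  # flags 'boycott this brand now', posted by 3 accounts
```

A heuristic like this catches only the bluntest copy-paste campaigns; near-duplicate phrasing, shared avatars, and posting cadence require richer features.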



What warning signs should brand teams monitor in real time?

TL;DR: Spikes in harmful sentiment, repeating narratives, and clusters of suspect profiles are early indicators of manipulation. #criteria


Evidence:

  • Negative narratives spike disproportionately to campaign activity = early red flag.

  • Sudden slogan reinterpretations that gain traction.

  • Unusual profile clusters with similar avatars or posting cadence.

  • Amplification from fringe influencers driving narratives into the mainstream.

  • Boycott calls spreading faster than product conversations.

  • Platform-specific anomalies (e.g., TikTok comment floods).

  • So what: Monitoring these six criteria helps teams intervene before backlash reaches its peak.
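The first red flag above — negative narratives spiking disproportionately to campaign activity — lends itself to a basic statistical alert. A minimal sketch, assuming a daily count of negative posts is already available (the counts and threshold below are hypothetical, not figures from the analysis):

```python
from statistics import mean, stdev

def spike_alert(daily_negative_counts, window=7, z_threshold=3.0):
    """Flag the latest day if negative-post volume is an outlier
    versus the trailing window (a simple z-score heuristic)."""
    history = daily_negative_counts[-(window + 1):-1]
    latest = daily_negative_counts[-1]
    if len(history) < 2:
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest > mu  # any rise off a flat baseline is notable
    return (latest - mu) / sigma > z_threshold

# Hypothetical daily counts of negative posts about a campaign;
# the final day jumps roughly fivefold, as in the July 25-29 surge.
counts = [120, 135, 110, 125, 130, 118, 122, 600]
print(spike_alert(counts))  # True
```

In practice the threshold would be tuned per brand and platform, and volume alerts would be paired with the other five criteria (narrative clustering, profile analysis, influencer tracking) rather than used alone.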



Build vs. Buy: Should brands develop detection internally or adopt specialised tools?


TL;DR: Specialist tools deliver faster, more reliable detection than internal builds for most consumer brands. #build-vs-buy


Evidence:

  • Build: Requires data access, AI expertise, and cross-platform monitoring infrastructure — rarely feasible at speed.

  • Buy: The Swiss Disinformation Research Center, evAI, and software vendor Cyabra partner to offer full-service brand-threat consulting and software, with pre-trained detection of fake accounts, narrative clustering, and sentiment shifts.

  • Time to value: External platforms can surface risks in hours vs. months of internal build.

  • Cost/risk trade-off: False negatives during a crisis are far costlier than software licensing.



How should a brand respond in the first 90 days of adopting detection tools?


TL;DR: Roll out in three phases: monitor, simulate, act. #rollout


Evidence:

  • 0–30 days: Integrate monitoring across campaigns; set alert thresholds.

  • 31–60 days: Run “war-game” simulations of disinformation scenarios; align PR/legal teams.

  • 61–90 days: Establish playbooks for public response, influencer engagement, and corrective narratives.

  • Owner roles: CMO (strategy), Comms Lead (execution), Data/Insights (monitoring).



FAQ

  1. Is all backlash disinformation? No — but fake accounts often intensify authentic concerns.

  2. Which platforms are the highest risk? TikTok and X, due to speed and virality.

  3. Can sentiment recover quickly? Yes, if brands act early with transparent communication.

  4. What about smaller brands? They are equally vulnerable; scale doesn’t prevent manipulation.

  5. How accurate are detection tools? High precision on fake profiles; continual improvement via training data.

  6. Do detection tools replace PR? No — they empower PR teams with faster intelligence.




👉 Discover how we can enable you to identify fake accounts, harmful narratives, and emerging threats before they escalate.

