
How K-Pop Is Used to Win Propaganda Wars

  • Writer: Brandon Abrams
  • Sep 28
  • 2 min read

Influence campaigns don’t begin the day influence is needed. They can start years earlier, building and mobilizing communities before eventually weaponizing them.



Malicious actors first cultivate trust among large audiences through networks of sophisticated bot accounts across social media, long before these accounts are used to spread political messages. By the time propaganda is deployed, the network already has reach and credibility among unsuspecting users who once followed pages devoted to a favorite boy band or sports team. To appear authentic and avoid raising red flags, these accounts often focus on niche topics that attract loyal fan bases—religion, travel, sports, celebrity updates, music, and more—frequently using generative AI to produce content at scale.


K-Pop and Wartime Propaganda


Brinker uncovered a clear example of this strategy in action during the recent conflict between Cambodia and Thailand.


Almost immediately after the border tensions escalated, we observed a surge in political propaganda from unusual sources. Structured political messaging came from accounts previously entirely dedicated to K-pop music and stars like BTS, BlackPink, 2NE1, and others with massive international fanbases.


An account dedicated to sharing content about Korean singer Baek A-yeon had been dormant since May 2023; on the day the Thailand/Cambodia border conflict started, the account became 100% dedicated to sharing political messaging

These accounts, which had spent months or years posting red carpet photos, concert clips, and fan edits, suddenly pivoted on July 24th. Overnight, their feeds were filled with nationalist content, inflammatory language, and AI-generated images designed to provoke emotional reactions among viewers.


What were the signs that the fan accounts were part of an influence campaign?


  • Accounts dedicated to K-Pop and other Asian celebrities and pop stars shifted suddenly and sharply to political messaging and propaganda mere hours after the fighting started.

  • Posts were often identical between users, always using the same hashtags in the same order.

  • AI-generated images were used repeatedly across different profiles.


A post shared by a fan page dedicated to Chinese actor and singer Xiaozhan; hundreds of other posts featuring the same text, order of hashtags, and image were detected
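The coordination signals above — identical text, identical hashtag ordering, many distinct accounts — lend themselves to simple automated checks. The sketch below is purely illustrative (not Brinker's actual detection pipeline); the post structure and threshold are assumptions for the example.

```python
from collections import defaultdict

def flag_coordinated_posts(posts, min_accounts=3):
    """Group posts by (normalized text, ordered hashtag tuple) and flag
    groups shared verbatim by many distinct accounts -- a common signal
    of scripted, coordinated posting rather than organic fandom."""
    groups = defaultdict(set)
    for post in posts:
        # Hashtag *order* matters: organic reposts tend to vary it,
        # while scripted campaigns repeat it exactly.
        key = (post["text"].strip().lower(), tuple(post["hashtags"]))
        groups[key].add(post["account"])
    return {key: accounts for key, accounts in groups.items()
            if len(accounts) >= min_accounts}

# Hypothetical sample data for illustration.
posts = [
    {"account": "fanpage_a", "text": "Justice now", "hashtags": ["#x", "#y"]},
    {"account": "fanpage_b", "text": "Justice now", "hashtags": ["#x", "#y"]},
    {"account": "fanpage_c", "text": "Justice now", "hashtags": ["#x", "#y"]},
    {"account": "fanpage_d", "text": "Great concert!", "hashtags": ["#y"]},
]
flagged = flag_coordinated_posts(posts)
```

In this toy run, only the three accounts posting the identical text with identical hashtag ordering are flagged; the lone organic-looking post is not.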

Prior to the conflict, there were no signs that these accounts had any connection to either Thailand or Cambodia. Most posted in English and targeted global K-pop fans, which made the sudden hyperfocus on a small regional conflict all the more surprising.


An example post made by a fan page of K-pop boy band “Enhyphen”; the post was shared by a “BTS” fan page

It’s Happening Everywhere, All the Time


This isn’t an isolated incident. Similar patterns have emerged in other regions, including a recent disinformation campaign surrounding the 2024 elections in Romania, in which social media pages originally dedicated to religious issues sharply pivoted to promoting pro-Russian political propaganda shortly before the elections.


Bad Actors Are Playing the Long Game


Narrative tracking intelligence and digital forensics tools are more important than ever. To overcome disinformation, you need to detect it early. Once the viral swell is in motion, it is hard to fight back — but a campaign can potentially be detected before it starts, by using historical data and AI-driven pattern recognition to identify the communities being built up for later use.
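One concrete pattern described above — a dormant fan account abruptly pivoting to political content on a single day — can be sketched as a simple day-over-day check on the share of political posts in an account's history. This is a minimal illustration, not a description of any real product's method; the dates and shares are invented.

```python
def detect_topic_pivot(daily_topic_share, threshold=0.5):
    """Return the first day on which an account's share of political
    posts jumps by at least `threshold` compared to the previous day.
    `daily_topic_share` is a list of (day, share) pairs in order."""
    for i in range(1, len(daily_topic_share)):
        day, share = daily_topic_share[i]
        prev_share = daily_topic_share[i - 1][1]
        if share - prev_share >= threshold:
            return day
    return None

# Hypothetical history mimicking the Baek A-yeon fan account's pivot:
# near-zero political content, then an overnight switch.
history = [("jul-21", 0.0), ("jul-22", 0.05), ("jul-23", 0.0), ("jul-24", 0.95)]
pivot = detect_topic_pivot(history)  # -> "jul-24"
```

Real systems would of course classify post topics with a model and smooth over longer baselines, but the underlying signal — an abrupt, wholesale change in what an account talks about — is the same.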


Don't get caught off guard.
Talk to us. 


Disinformation Threat Mitigation 

Brinker is an end-to-end disinformation threat mitigation platform that serves the public sector and major enterprises. It combats disinformation attacks and influence campaigns, using proprietary narrative intelligence technology, AI-enabled detection, and automated OSINT investigations.
