In recent years, news reports have been filled with headlines about the manipulation of public opinion. A special body of the North Atlantic Alliance carried out an experiment that clearly showed how this happens and how effectively social networks resist such manipulation.
The NATO Strategic Communications Centre of Excellence commissioned a study from an accredited specialist firm in Riga, Latvia. That firm, in turn, bought 337,768 social interactions from three Russian social media agencies. These included likes, views, and reposts of targeted posts on the five most popular social networks: Facebook, Instagram, Twitter, YouTube, and TikTok.
The targeted content consisted of posts by two U.S. senators: Iowa Republican Chuck Grassley and Connecticut Democrat Chris Murphy. Both “subjects” had verified accounts (with the familiar checkmark) on all of these networks, and both gave their consent to participate in the experiment. To avoid inadvertently interfering in geopolitical processes, the politicians posted harmless, apolitical content: photos of food and dogs.
After the experiment began, a flurry of social interactions hit the targeted posts and videos. The researchers knew these interactions were fake, but the social networks’ algorithms did not. In theory, popular platforms should have mechanisms that filter out such “likes”, views, and reposts from bots according to certain criteria. In practice, those mechanisms mostly failed.
Four months later, 98% of the fake social interactions remained in place. Even more troubling, 97% of the accounts that the researchers had flagged as “inauthentic” (for spam, misinformation, or suspicious activity) also survived. At the same time, judging by indirect signs, the social networks’ algorithms evaluated the targeted content on a par with everything else, paying no attention to the purchased traffic.
In other words, the researchers were able to cheaply and easily create the appearance of interesting content, which the platforms’ algorithms then promoted, even though all the social activity associated with it was fake. In the context of politics and important social issues, this means that platforms such as Facebook, Instagram, Twitter, YouTube, and TikTok take virtually no effective action to limit the manipulation of public opinion.
The cost of such manipulation is also striking: the Latvian research firm spent only $300 to buy more than 300,000 social interactions. An attacker can thus focus public attention on favorable posts relatively easily and cheaply, or create the appearance of active discussion of one topic to drown out another. Finally, it is even easier to fake the prominence of public figures: an inexperienced Internet user can hardly distinguish a “fake” account with hundreds of thousands of likes and reposts from that of a real politician.
A similar experiment was conducted in 2019. Comparing the current results with those, study lead author Sebastian Bay noted some positive changes. According to him, Twitter now responds to user complaints faster, and Facebook has created serious obstacles to registering fake accounts. The remaining platforms, however (YouTube, Instagram, and TikTok), look completely defenseless. As a result, Internet users enjoy very different levels of protection depending on which social network they use.
The results were reported by the Associated Press.