Monday, January 25, 2021

New research shows how easily public opinion about politicians, even U.S. senators, can be manipulated online

Aakash Molpariya

News reports in recent years have been full of headlines about the manipulation of public opinion. A special body of the North Atlantic Alliance carried out an experiment that clearly shows how this happens and how effectively social networks resist it.

The NATO Strategic Communications Centre of Excellence commissioned a study from an accredited specialist firm in Riga, Latvia. That firm, in turn, bought 337,768 social interactions from three Russian social media agencies. These included likes, views, and reposts of targeted posts on the five most popular social networks – Facebook, Instagram, Twitter, YouTube, and TikTok.

The targeted content consisted of posts by two U.S. senators: Iowa Republican Chuck Grassley and Connecticut Democrat Chris Murphy. Both "subjects" had verified accounts (the familiar check marks) on all five networks, and both gave their consent to participate in the experiment. To avoid inadvertently interfering in geopolitical processes, the politicians posted harmless, apolitical content: photos of food and dogs.

After the experiment began, a flurry of purchased social interactions hit the targeted posts and videos. The researchers knew the engagement was fake, but the social networks' algorithms did not. In theory, popular platforms should have mechanisms that filter out such likes, views, and reposts from bots according to certain criteria. In practice, in most cases they did not work.

Four months later, 98% of the fake social interactions remained in place. Even more troubling, 97% of the accounts that the researchers flagged as "inauthentic" (for spam, misinformation, or suspicious activity) also survived. At the same time, judging by indirect signs, the networks' algorithms ranked the targeted content on a par with everything else, paying no attention to the purchased traffic.
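To put those percentages into absolute numbers, here is a minimal sketch of the arithmetic, using only the figures reported in the study (the variable names are illustrative):

```python
# Figures reported by the study: 337,768 purchased interactions,
# of which 98% were still live four months later.
total_interactions = 337_768
share_remaining = 0.98

removed = round(total_interactions * (1 - share_remaining))
remaining = total_interactions - removed

print(f"Removed by the platforms: {removed}")    # ~6,755 interactions
print(f"Still live after 4 months: {remaining}") # ~331,013 interactions
```

In other words, out of more than a third of a million fake interactions, the five platforms together took down only a few thousand.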

In other words, the researchers were easily able to create the appearance of engaging content that the platforms' algorithms then promoted, even though all the social activity associated with it was fake. In the context of politics and important social topics, this means that platforms such as Facebook, Instagram, Twitter, YouTube, and TikTok do little in practice to limit the manipulation of public opinion.

The cost of such manipulation is also striking: the Latvian research firm spent only $300 to buy more than 300,000 social interactions. An attacker can relatively easily and cheaply focus public attention on favorable posts, or create the appearance of active discussion around one topic to drown out another. It is also far easier to fake the importance of public figures: an inexperienced internet user can hardly distinguish a "fake" account with hundreds of thousands of likes and reposts from a real politician's.
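The per-interaction price implied by those two figures can be worked out directly (a rough calculation from the article's reported numbers, not a figure from the study itself):

```python
# Reported figures: $300 spent for 337,768 interactions.
total_cost_usd = 300.0
interactions = 337_768

cost_per_interaction = total_cost_usd / interactions
print(f"${cost_per_interaction:.4f} per interaction")  # roughly $0.0009

# Equivalently, about how many interactions one dollar buys:
print(f"~{interactions / total_cost_usd:.0f} interactions per dollar")
```

That is, a single fake like, view, or repost cost well under a tenth of a cent.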

A similar experiment was conducted in 2019. Comparing the current results with those, study lead author Sebastian Bay noted some positive changes. According to him, Twitter began responding to user complaints faster, and Facebook created serious obstacles to registering "fake" accounts. The remaining platforms – YouTube, Instagram, and TikTok – however, looked completely defenseless. In effect, internet users enjoy very different levels of protection depending on which social network they use.

The results are reported by the Associated Press.

