Computational propaganda – a new threat to democracy – has doubled in scale in just two years

Source: Nieman Lab


Computational propaganda – the use of algorithms, automation, and big data to shape public life – is becoming a pervasive part of everyday life, according to the Oxford Internet Institute's research on computational propaganda. Governments and political parties worldwide are increasingly using this tool of manipulation and influence to sway elections in their favor or to suppress opponents in authoritarian regimes. Its use has more than doubled in the last two years: from 28 targeted countries in 2017 to 70 in 2019. Social media, once heralded as a force for freedom and democracy, has degraded into a space for spreading disinformation, inciting violence, and undermining the legitimacy of democratic institutions.

Co-authored by Professor Philip Howard, Director of the Oxford Internet Institute (OII), and Samantha Bradshaw, a researcher at the OII, this global investigation of disinformation is the only regular inventory of its kind examining the use of algorithms, automation, and big data by politicians to shape public life. The 2019 report shows a dramatic increase in the use of computational propaganda within just the last two years, with China becoming a global player in this field for the first time.

70 countries where computational propaganda was detected. Source: the report

Who orders and who executes computational propaganda?

Source: the report

In all 70 countries where computational propaganda was detected, either the government or political parties hire cyber troops tasked with manipulating public opinion online.

Government agencies using computational propaganda were detected in 44 countries, while political parties or politicians running for office with the help of computational propaganda were found in 45 countries. These numbers include the use of advertising to target voters with manipulated media, as in India, and instances of illegal micro-targeting, such as the firm Cambridge Analytica's work in the UK Brexit referendum.

Cyber troops use four types of fake accounts to spread computational propaganda – bot, human, cyborg, and stolen – but the first two are the most common. Bots are highly automated accounts designed to mimic human behavior online; a small team can manage thousands of them to achieve the required scale of presence on social media. Human-run accounts are even more common because they can engage in conversations by posting comments or tweets, or conduct effective private messaging.

Source: the report

Human-operated accounts were found in 60 of the 70 countries. They are particularly effective in countries with a cheap labor force, such as China or Russia, where thousands of citizens – students in particular – are hired to manage accounts.

Moreover, in countries like Vietnam or Tajikistan, state actors encourage cyber troops to use their real accounts rather than create fake ones. As social media companies become more aggressive in taking down accounts associated with cyber troop activity, disseminating pro-government propaganda, trolling political dissidents, or mass-reporting content becomes more effective through real accounts. The co-option of real accounts is predicted to become a more prominent strategy.

Cyborg accounts combine automation with human action, while stolen high-profile accounts are strategically used by cyber troops to censor speech by revoking the rightful owner's access. In authoritarian regimes, computational propaganda has become a tool of information control, used strategically in combination with surveillance, censorship, and threats of violence.

Types of fake accounts used in different countries. Source: the report

A business of computational propaganda

Computational propaganda remains big business. The researchers found large sums being spent on "PR" or strategic communication firms to work on campaigns in countries such as the Philippines, Guatemala, and Syria. The sums involved can reach multi-million-dollar contracts with global companies like Cambridge Analytica.

Some teams comprise a handful of people managing hundreds of fake accounts. In other countries – like China, Vietnam, or Venezuela – tens of thousands of people are hired by the state to actively shape opinions and police speech online.

The skills of computational propaganda are thus gaining value and spreading worldwide. For example, during investigations into cyber troop activity in Myanmar, evidence emerged that military officials had been trained by Russian operatives in how to use social media. Similarly, cyber troops in Sri Lanka received formal training in India. Leaked emails also showed the Information Network Agency in Ethiopia sending staff members to receive formal training in China.

Main strategies of computational propaganda

Source: the report

1. The creation of disinformation or manipulated media was the most popular strategy, found in 52 of the 70 countries. Cyber troops actively created content such as memes, videos, fake news websites, or manipulated media in order to mislead users.

2. Data-driven instruments form the second major group of strategies. Content created by cyber troops is targeted at specific communities or segments of users, including through targeted advertising and micro-targeting to maximize impact.

3. Trolling or harassment was used in 47 countries. Especially in authoritarian regimes, cyber troops censor speech and expression through the mass-reporting of content or accounts. Posts by activists, political dissidents, or journalists are often reported by a coordinated network of cyber troop accounts in order to manipulate the automated systems social media companies use to take down inappropriate content.

Trolling and the takedown of accounts or posts can happen alongside real-world violence, which can have a deep and chilling effect on the expression of fundamental human rights.

Who is the leader in computational propaganda?

It is hard to name a single country, although there are plenty where the use of computational propaganda is especially widespread. Among the most dangerous are authoritarian regimes that suppress not only activism on the internet but any alternative avenue of expression or protest.

The researchers distinguish minimal, low, medium, and high cyber troop capacity, depending on the team size and the budget a country spends on propaganda.

High cyber troop capacity involves large numbers of staff and large budgetary expenditures on psychological operations or information warfare. There may also be significant funds spent on research and development, as well as evidence of a multitude of techniques being used. These teams not only operate during elections but also involve full-time staff dedicated to shaping and controlling the information space. The table below lists the countries with the most developed computational propaganda.

The vast majority of countries have medium cyber troop capacity, which is still considerable.

Another important criterion of a country's cyber troop capacity is its attempts at foreign or global influence. Facebook and Twitter – which have begun publishing limited information about influence operations on their platforms – have taken action against cyber troops engaged in foreign influence operations in seven countries: China, India, Iran, Pakistan, Russia, Saudi Arabia, and Venezuela.

Although this measure does not capture the full extent of foreign influence operations, it allows us to begin building a picture of this highly secretive phenomenon.

China has become a major player in the global disinformation order. Until the 2019 protests in Hong Kong, most evidence of Chinese computational propaganda was found on domestic platforms such as Weibo, WeChat, and QQ. But China's new interest in aggressively using Facebook, Twitter, and YouTube should raise concerns for democracies worldwide. The Chinese government began employing global social media platforms to paint Hong Kong's democracy advocates as violent radicals with no popular appeal. The growing sophistication and use of global social networking platforms demonstrate that China is also turning to these technologies as a tool of geopolitical power and influence.

The scale of computational propaganda by country and agency. Source: the report

Computational propaganda has become a normal part of the digital public sphere. These techniques will continue to evolve as new technologies – including artificial intelligence, virtual reality, and the Internet of Things – are poised to fundamentally reshape society and politics. But since computational propaganda is a symptom of long-standing challenges to democracy, solutions must take into account citizens' need for access to high-quality information and their ability to come together to debate, discuss, deliberate, empathize, and make concessions. Are social media platforms really creating a space for public deliberation and democracy? Or are they amplifying content that keeps citizens addicted, disinformed, and angry?


