Copyright © 2021 Euromaidanpress.com

Why Facebook is more revealing on nudity than on Russian disinformation

Edited by: Yuri Zoria

Researchers, election observers, and governments alike struggle to measure the scope and impact of Russian disinformation campaigns. So far, sporadic press releases by online platforms constitute the bulk of first-hand evidence of these campaigns, as is the case with Facebook's latest takedown of Russian operations targeting African countries. More systematic access to this evidence will be key to tackling disinformation and election meddling.

Even though Facebook and the Russian government have their Brussels offices next door to each other, they appear to live worlds apart.

Last week, Russia’s state-operated news agency RIA Novosti proclaimed that “Russia does not interfere in internal affairs of African States.” Just two days later, Facebook announced the takedown of a major disinformation campaign targeting at least eight African countries. On its news blog, the company explicitly attributed these campaigns to “entities associated with Russian financier Yevgeniy Prigozhin,” popularly known as Putin’s chef, who has previously been indicted by the US Justice Department for involvement in Russia’s interference in the 2016 US election. In a sign of tactical evolution, Russian operatives also worked with locals in the African countries to set up Facebook accounts that were disguised as authentic to avoid detection.

But Facebook’s press release is interesting for another reason: for the first time on its news blog, the company explicitly refers to a Russian campaign as “foreign interference.”

So far, experts are divided about whether Russia’s disinformation campaigns constitute foreign interference. In line with international law, this would boil down to classifying the Kremlin’s operations as acts of coercion. Some say this is a stretch, given that Russian social media operations merely “impact people’s opinions, which may or may not have impacted subsequent votes.” Others are more hawkish in their assessment, noting that the Kremlin’s efforts indeed constitute coercion insofar as they are “purposively designed to exert control over a sovereign matter.”

Scholars of international law are not the only ones who struggle to assess the scope and impact of disinformation campaigns. By refusing to grant systematic access to public interest data, the leading online platforms currently monopolize the ability to assess whether elections may have been compromised by manipulative campaigns.

For the EU, for instance, election observation missions are a key tool for supporting democracy and promoting human rights around the world, including in African countries. Yet with these companies failing to systematically provide evidence of malicious activity on their platforms, it is virtually impossible to assess the degree to which the Kremlin’s operations may have distorted electoral processes in African countries, or violated national or international electoral laws. Sporadic updates on the news blogs of online platforms are an insufficient basis for democratic actors to do their job.

As The Guardian recently noted, less than 10% of Facebook users live in the US; to protect the remaining 90% from harm, the paper argued, the company should tailor its transparency and integrity policies to the respective social and political contexts. In countries that are targeted by Russian disinformation campaigns, this also implies reporting regularly and comprehensively on the extent of information operations across the different Facebook services.

Currently, Facebook systematically reports on violations of its adult nudity policy, but nowhere discloses the full volume and extent of foreign interference campaigns on its platform. This means that when platforms take down malicious networks emanating from Russia or Iran, for instance, they don’t do so on grounds of foreign interference, but based on other provisions of their terms of service, such as those relating to fake accounts or so-called “coordinated inauthentic behavior.”

Besides systematic self-reporting, researchers have also called for access to the accounts that platforms themselves have taken down and attributed to foreign actors. This would help researchers identify behavioral patterns and thus detect future disinformation campaigns faster.


