This is according to a study by the NATO Strategic Communications Centre of Excellence, whose authors note that new approaches to tackling disinformation are “flourishing” as online efforts to manipulate public opinion become an increasingly “pressing policy concern.”
The study, titled “Government Responses to Malicious Use of Social Media,” breaks down the new regulations into 10 categories: content takedowns by social media platforms, transparency of online ads, data protection, criminalization of disinformation, expanding the definition of illegal content, media literacy and watchdogs, journalistic controls, parliamentary inquiries, creation of cybersecurity units, and monitoring initiatives.
Ireland, Italy, and Australia, for instance, are among the countries that introduced criminal penalties for producing or sharing disinformation, or for organizing a bot campaign targeting a political issue.
Among other measures, Croatia recently funded a new media literacy initiative, the US Congress is investigating Russian interference in the 2016 US presidential election, and G7 countries are developing a Rapid Response Mechanism to fight disinformation and foreign interference in elections.
The authors point out, however, that most government initiatives so far have focused on regulating speech on social media rather than on addressing the deeper systemic problems that underlie attempts to influence public opinion online.
Some authoritarian governments, they say, have also co-opted the fight against disinformation to introduce legislation aimed at tightening their grip on the digital sphere and legitimizing censorship online.
Instead, the report urges policymakers to demand greater accountability and cooperation from social media platforms.
“A core issue is a lack of willingness of the social media platforms to engage in constructive dialogue as technology becomes more complex,” the authors note.
The report encourages governments to shift away from measures aimed at controlling online content and to work together to “develop global standards and best practices for data protection, algorithmic transparency, and ethical product design.”
The European Union has stepped up its own efforts to counter disinformation and in December presented an Action Plan aimed at tackling online disinformation in EU countries and beyond.
The Action Plan also aims to ensure that tech companies comply with the European Commission’s Code of Practice, a document that commits online platforms to increasing transparency around political advertising and reducing the number of fake accounts.
The platforms are required to report to the Commission on a monthly basis ahead of the European elections in May, and face regulatory action if they fail to meet their commitments.
Further reading:
- France adopts bill aiming to crack down on the “manipulation of information” during election periods
- French think tanks issue 50 recommendations to combat information manipulations
- Commanding the Trolls
- Russian online disinformation started targeting US in 2013
- Ukraine-related narratives dominate Russian propaganda – disinformation watchdogs
- How we defend ourselves against the disinformation virus: NYT series final episode
- How Hungary became a weapon of Russian disinformation
- Central and Eastern Europe in the fight with disinformation: How is Ukraine doing?
- You can’t fight disinformation in the EU without naming its main source – Russia