Social networks need clear content criteria against coronavirus

by ace

The season of hunting down tweets, posts and messages that misinform about the new coronavirus has begun. Nicolás Maduro suggested a homemade drink to fight covid-19? The post was deleted. Bolsonaro said "Thank God the medicine is there" while visiting a butcher shop in Sobradinho? The post was deleted. Regina Duarte caused confusion by publishing Anvisa's statement on hydroxychloroquine? The post received a warning that it was "partially false". In the midst of the pandemic, WhatsApp once again limited the forwarding of messages.

Times of crisis call for strong measures, right? But what has changed to make internet companies adopt stricter measures regarding the content that circulates on their platforms? And what lessons can we learn from this first wave of actions taken to combat covid-19-related misinformation?

President Bolsonaro usually describes social networks as an instrument of direct communication with the Brazilian people, one that bypasses the mediation of the press. The removal of his posts on Twitter, Facebook and Instagram shows that reality is quite different.

The internet has not eliminated intermediaries in communication. It has merely transformed them. The companies that manage social networks are, as the name reveals, "platforms" on which their users create and disseminate the most diverse content. They act as intermediaries of discourse and can, based on pre-established rules, manage how the texts, photos and videos published on the platform reach (or fail to reach) other users.

This raises the question of the pre-established rules. What rules are these? What do they stipulate? Most users of social networks, and of apps in general, have never read the "Terms of Use" of the platforms they use every single day. These documents act as a contract between the user and the company that manages the social network.

They stipulate the behaviors that can lead to the suspension or termination of accounts, in addition to the removal of posts. The problem is that the terms of these Terms are usually generic. How can one know whether a given post really violates the contract signed with the social network? And if the post is in fact removed, can we know exactly which contractual provision was violated?

This is where cases like the recent ones shed light on the reasoning of the companies that manage social networks. Every day, platforms remove content involving piracy, defamation, bullying and hate speech, for example, as violations of their Terms of Use. Sometimes they do so in compliance with a court order, but they also act directly on content that has been reported by other users or identified by the company itself.

When it is a judge who orders the removal, it is easy to know the reasons that led to the exclusion of the content: just read the decision. But when companies act directly on content allegedly in violation of the Terms of Use, we may fall into a scenario of extreme opacity.

Combating coronavirus-related misinformation is an important testing ground for how companies manage content removal on their platforms and for how these measures can become more transparent, predictable and auditable. Given the global crisis, several companies have updated their policies to face the flood of false, distorted and malicious information circulating on their platforms.

Removing a post by a president, as happened with Maduro and Bolsonaro, is not a simple decision. The reach of their accounts and the weight of the office make it debatable whether presidents and other authorities on social networks should be treated like any other user. Moreover, there was no shortage of comments on those same networks pointing out that leaders of other countries have already done worse, including threatening to bomb enemy nations. And those posts were not removed.

Something has changed in the attitude of these companies, which have begun to apply the conditions of their own Terms of Use more rigorously. But is this change here to stay? Or will we return to the same regime of opacity and subjectivity as soon as the coronavirus crisis ends?

If so, companies may squander a rare second chance to start over at a time when the whole world, out of necessity, looks to the internet not as the cause of all ills, the thing that corrupts children, shelters criminals and derails democracies.

Today it is the internet that, in times of social isolation, enables communication between family and friends. It is on the network that many companies have been able to transform the routine of employees who now work remotely. Public authorities themselves, in the most varied spheres, have migrated to remote sessions and meetings.

Social networks and messaging apps may be remembered as part of the solution, rather than part of the problem, if they get it right in fighting misinformation about covid-19 on their platforms. But that will only be possible if they apply their own rules with more transparency, consistency and predictability.

Facebook itself recently published a white paper called "Online Content Regulation", signed by its vice president for content policy, Monika Bickert. In it, the company suggests that one solution for the future of content management on platforms is a kind of "procedural accountability". According to the document, a regulatory model based on procedural accountability could include, at a minimum, requirements that companies publish their content moderation standards, provide tools for people to report content that violates those standards, and respond effectively to such reports. Other measures are also suggested, such as creating an appeals mechanism for decisions to remove or keep a post, in addition to publishing periodic reports on removal requests.

These measures are important because they help create a floor on which different platforms can develop different models of notification, appeal and transparency. But it is worth remembering that all of these mechanisms will only work if there is not just transparency but also consistency in their application. An important part of this procedural shift lies in companies' ability to deliver, at all times, what they themselves have promised to do.

A golden rule of contract practice is never to write into a contract something you doubt, from the outset, you will be able to fully comply with. However elegant the clause, a contract is not the place for grand but uncommitted statements. On the contrary, it is always important to ask whether what is promised can actually be delivered and demanded.

Consistency also involves asking what would happen if the terms of the contract applied to some and not to others. Take the example of an employment contract. What is the effect of firing an employee for violating company rules when it is known that many others have engaged in the same conduct? Selective punishment in such cases, besides rarely ending well in court, also spreads distrust about the criteria by which the rules are applied.

In the case of combating the coronavirus, it becomes even more important to understand the criteria for content moderation, especially when a topic of a scientific nature, such as the use of hydroxychloroquine in the treatment of covid-19 patients, takes on political contours. Only then will it be possible to assess the consistency of moderation systems, especially when other authorities have also shared texts and videos along the same lines. The content below, for example, was one of the positive comments about the drug retweeted by American President Donald Trump.

Hydroxychloroquine proving an effective treatment for coronavirus patien… https://t.co/L3imNmv1ua #OANN @PearsonSharp

– One America News (@OANN) April 2, 2020

Social networks do not need more people distrusting their criteria and alleging bias with each removal of content or accounts that affects this or that political-ideological spectrum. One way out of this state of perennial discomfort involves "procedural accountability": applying their own rules in a consistent, auditable (and perhaps collaborative) way, as some are already doing by outsourcing fact-checking. In times of such confusion and uncertainty in so many aspects of our lives, the remedy seems to involve, even for the dilemmas of social networks, more protocol and predictability.
