What is the aim of this Code of Conduct?
Each of the IT companies (Facebook, YouTube, Twitter, Microsoft, and now Instagram) that signed this Code of Conduct is committed to countering the spread of illegal hate speech online.
When they receive a request to remove content from their platform, the IT companies assess it against their rules and community guidelines and, where applicable, against national laws transposing EU law on combating racism and xenophobia. The aim of the Code is to ensure that requests to remove content are dealt with quickly. The companies have committed to reviewing the majority of these requests in less than 24 hours and to removing the content where warranted.
How does the Code of Conduct participate in the wider work of the Commission on illegal content?
The results of the Code of Conduct monitoring feed into the wider work of the Commission on how online platforms should be more proactive in the prevention, detection and removal of illegal content.
On 28 September 2017, the Commission adopted a Communication providing guidance to platforms on notice-and-action procedures to tackle illegal content online. The importance of countering illegal hate speech online and the need to continue working on the implementation of the Code of Conduct feature prominently in this guidance document.
On 9 January 2018, several European Commissioners met with representatives of online platforms to discuss the progress made in tackling the spread of illegal content online, including online terrorist propaganda and xenophobic, racist illegal hate speech as well as breaches of intellectual property rights (see joint-statement).
What is the definition of illegal hate speech?
Illegal hate speech is defined in EU law (the Framework Decision on combating certain forms and expressions of racism and xenophobia by means of criminal law) as the public incitement to violence or hatred directed against groups or individuals defined by reference to certain characteristics, including race, colour, religion, descent and national or ethnic origin.
Will the Code of Conduct lead to censorship?
No. The Code of Conduct aims to tackle online hate speech that is already illegal. The same rules apply online and offline: content that is illegal offline should not be allowed to remain online.
In the Code, both the IT Companies and the European Commission also stress the need to defend the right to freedom of expression. The Code cannot be used to make IT Companies take down content that does not count as illegal hate speech, or any type of speech that is protected by the right to freedom of expression set out in the EU Charter of Fundamental Rights.
In addition, a 2016 European survey found that 75% of those following or participating in online debates had come across abuse, threats or hate speech aimed at journalists. Nearly half of these people said that this deterred them from engaging in online discussions. These results show that illegal hate speech should be removed effectively from social media, as it can itself deter people from exercising their right to freedom of expression.
Isn't it for courts to decide what is illegal?
Yes, interpreting the law is and remains the responsibility of national courts.
At the same time, IT companies have to act in line with national laws, in particular those transposing the Framework Decision on combating racism and xenophobia and the 2000 e-Commerce Directive. When they receive a valid alert about content allegedly containing illegal hate speech, the IT companies assess it not only against their rules and community guidelines but, where necessary, against applicable national law (including law implementing EU law), in full compliance with the principle of freedom of expression.
Should one take down ‘I hate you'?
Offensive or controversial statements or content may well be legal. As the European Court of Human Rights has said, ‘freedom of expression ... is applicable not only to “information” or “ideas” that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population'.
In the Code, both the IT Companies and the European Commission also stress the need to defend the right to freedom of expression.
Assessing what could be illegal hate speech involves taking into account criteria such as the purpose and context of the expression. The expression ‘I hate you' would not, on its own, appear to qualify as illegal hate speech; it could do so only if combined with other statements, for example a threat of violence, that refer to race, colour, religion, descent or national or ethnic origin.
What prevents government abuse?
The Code of Conduct is a voluntary commitment made by Facebook, Twitter, YouTube and Microsoft; Instagram has also announced its intention to join the Code. It is not a legal document and does not give governments the right to take down content.
The Code cannot be used to make these IT Companies take down content that does not count as illegal hate speech, or any type of speech that is protected by the right to freedom of expression set out in the EU Charter of Fundamental Rights.
How did the Commission evaluate the implementation of the Code of Conduct?
The Code of Conduct is evaluated through a monitoring exercise set up in collaboration with a network of civil society organisations located in different EU countries. Using a commonly agreed methodology, these organisations test how the IT companies apply the Code of Conduct in practice. They do this by regularly sending the four IT companies requests to remove content from their online platforms. The organisations participating in the monitoring exercise record how their requests are handled: how long the IT companies take to assess a request, how they respond to it, and the feedback they provide.
How does the Commission work with the platforms?
The Code of Conduct is based on cooperation involving the European Commission, IT platforms, civil society organisations and national authorities. All stakeholders meet regularly under the umbrella of the High Level Group on combating racism and xenophobia to discuss challenges and progress. In addition to the regular monitoring exercises, the Commission engages in a constant dialogue with the platforms to encourage progress on all the commitments in the Code. Workshops and training sessions are also organised with companies and other relevant stakeholders. For instance, a training course held jointly with Google in Dublin in November 2017 focused on increasing the quality of notices submitted by trusted flaggers, to ensure a more effective response by the companies' content reviewers.