By: Amy Sumin
Regulating the spread of hate speech on social media has become a heated topic of concern, particularly following the case study of Facebook’s role in the Rohingya conflict. The refusal of Myanmar’s majority Buddhist population to recognise the existence and rights of the Muslim Rohingya has produced what the UN described as “the most persecuted minority in the world”. Social media giant Facebook was seen as an instrumental platform in spreading the hate speech that inflamed this divide and spurred on the conflict, raising two questions. What is Facebook’s role and responsibility to the global community in response? And how do we approach censoring “hate speech” while protecting Article 19 of the International Covenant on Civil and Political Rights (ICCPR), which covers freedom of expression and information?
Firstly, there is the problem of defining “hate speech”. Who distinguishes between what is considered ‘offensive, and to be prohibited’ and what is considered ‘offensive, but not required to be prohibited’, in line with upholding Article 19? The definition of “hate speech” varies slightly across human rights frameworks, and these do not completely align with Facebook’s own definition, which makes identifying it in practice difficult. For example, can media that spreads ‘misleading information’ or ‘manipulative news’ be considered ‘hate speech’? For Facebook, these limits of definition create an engineering problem: building a dataset of “objectionable speech” requires consistent labels. Language carries context and terminology that are harder to capture with standard engineering methods, which tend to be objective and quantitative.
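To illustrate the point, consider a minimal, hypothetical sketch of keyword-based moderation. The term list and example posts below are invented for illustration only and do not reflect any real Facebook system or dataset; the sketch simply shows why a purely quantitative rule cannot tell incitement apart from counter-speech.

```python
# Hypothetical sketch: keyword matching ignores linguistic context.
# The term list and posts are invented for illustration only.

FLAGGED_TERMS = {"vermin", "invaders"}  # assumed dehumanising terms

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any listed term, ignoring context."""
    words = {w.strip(".,!?'\"").lower() for w in post.split()}
    return bool(words & FLAGGED_TERMS)

posts = [
    "They are vermin and must be driven out",              # incitement
    "Calling people 'vermin' is dehumanising and wrong",   # counter-speech
]

for p in posts:
    print(naive_flag(p), "-", p)
# Both posts are flagged: a term list alone cannot separate incitement
# from reporting or counter-speech, which is why context-aware, labelled
# datasets (and an agreed definition) are needed in the first place.
```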
Multiple lawsuits arose against Facebook as the social media giant was thrust into the limelight as a significant platform for spreading hate speech in the Rohingya conflict. Prior to April 2017, hate speech ads circulated on the platform, and only four Facebook employees were able to moderate posts in the dialects of the Burmese language. Before then, Facebook accepted little accountability, and the issue was not a priority to resolve as doing so was not immediately profit-maximising. However, the lawsuits that recognised the platform as an unintentional weapon for spreading hate within this conflict set a new precedent. Facebook now employs over 100 content reviewers who are trained to better understand the local context, and reports that inappropriate posts are taken down within 48 hours. Furthermore, Facebook is in the process of creating an independent oversight board for content management.
This has set a precedent for dominant social media platforms such as Facebook, Instagram and Twitter to take responsibility for aligning with international law on mitigating hate speech. It includes reassessing and recrafting Facebook’s community standards as a “shared and inclusive space”, crafting rules carefully so the platform is not seen to be taking sides or removing content in an arbitrary manner. Past case studies can guide decisions about which posts are deemed “permissible”. However, there is still much to be learned from Facebook’s role in the Rohingya conflict.
However, the spread of hate speech in the Rohingya conflict is not limited to social media; combating it effectively also requires action through three main avenues: education, the media, and political institutions. Reform in education is seen as vital, as concerns have been raised that children as young as five or six years old are being separated into Buddhist and Muslim categories and taught differently, including from poetry containing discriminatory terms.
Secondly, the media at large must be regulated, as outlets have been accused of being a source of “fake news”: impersonating people and agencies, spreading misinformation and bias, drawing on an arbitrary variety of sources, and repeatedly exposing audiences to the same message, alongside censorship of the press and suppression of independent media, as documented by groups such as Reporters Without Borders. Lastly, further research and advocacy is needed to inform policy-making and reform at the level of political institutions. Overall, collaboration among stakeholders on the issue is encouraged, such as through the Digital Rights Forum.
We would love to hear your thoughts on the role of social media in relation to hate speech; please contact Amy via www.facebook.com/ElitePlusMagazine