Social media communication is controlled mostly by a few corporations that are in over their heads when it comes to handling misuse of their platforms.
The issues range from whether to ban an account for posting a certain meme to how to stop users from exploiting a platform to commit international human rights abuses. One thing is clear, though: Each platform has control over the content posted on its service, but the sheer volume makes effective moderation a daunting task.
Corporations such as Facebook and Google have the right to delete accounts, posts and videos that violate their terms of service. But more importantly, they sometimes have a responsibility to do so.
Freedom of speech is not absolute, even in the U.S. Some forms of speech, such as threats of bodily harm and death, are unprotected. The first image of misuse that may spring to mind is a troll posting immature messages, but that behavior pales in comparison to the worst-case scenario.
In Myanmar, hate speech spread through Facebook, along with other social and traditional media, has played a part in human rights abuses committed against Rohingya Muslims, according to the U.N.’s Aug. 27 fact-finding mission report on the country.
“Facebook has been a useful instrument for those seeking to spread hate, in a context where for most users Facebook is the Internet,” the U.N. report reads.
And while the situation in Myanmar is an extreme case, it shows how social media corporations have responsibilities far greater than they ever could have imagined. They’re thrust into a position where enforcement falls on them rather than on a government agency.
Facebook has specifically addressed the effect its service has had in Myanmar by developing more user-reporting tools as well as automated technology to detect hate speech in the country. While automation and algorithms can help, they cannot catch all of the content that should be removed.
Facebook also removed the accounts and pages of Myanmar military officials whom the U.N. reported had committed or enabled human rights abuses. There is no cure-all, however, as other factors are in play beyond the use of social media to spread hateful misinformation.
The internet and globalization have led a U.S.-based corporation to do its part to intervene in an international incident. This raises the question of whether such corporations should be regulated by international human rights law.
The spread of misinformation through social media is a problem everywhere. In the U.S., there are concerns about meddling in past and future elections, including the upcoming midterms in November. Social media corporations’ responsibilities keep evolving, but what ensures accountability?
Corporations are profit-driven entities, and all of these extra responsibilities are costs they probably wish they could avoid. Their current incentive is to avoid bad press, keep their stock prices up and not draw too much attention to the need for regulation.
And government regulation will come unless these corporations step up and effectively self-regulate. A company like Facebook already has the money, and it should have, or be able to develop, the means to tackle unexpected problems as they arise.
When they keep their houses in order, they do everyone a favor by keeping free speech from drifting toward government regulation.