Why we need more than transparency reports from social media


Social media giants Facebook and YouTube each have more than 2 billion users. With that many users come many violations. Reports recently released by both companies indicate just how much wrongdoing occurs. YouTube in particular has come under fire for its slow and murky appeals process, its lack of statistics and, at the same time, its failure to remove graphic and inappropriate content quickly enough.

In the second and third quarters of 2019, Facebook announced that it removed or labeled more than 54 million pieces of content it considered violent and graphic, along with 11.4 million posts that violated its rules on hate speech, 5.7 million uploads that breached its bullying and harassment policies and 18.5 million items determined to be child nudity or sexual exploitation. YouTube’s removal of content it considered ‘violations of children’s safety’ also spiked at the end of last year.

Facebook’s report also contained information on its efforts to police Instagram. Over the last six months, the platform removed 1.2 million photos or videos involving child nudity or exploitation and 3 million that violated its policies prohibiting the sale of illegal drugs.

What is glaringly obvious is that these numbers are growing, yet the gap between the volume of inappropriate content and the speed at which it is removed still needs work. Even a single incident can create chaos for content moderators. As an example, the Christchurch shooting, which was covered in the report, generated 4.5 million pieces of content that required removal between March 15 and September 30, 2019.

Facebook seems to be catching more of these problems via AI and automated systems, whereas YouTube remains under fire for its failure to remove inappropriate content quickly enough.

Guy Rosen, Facebook’s vice president of integrity, described Facebook’s progress:

“Starting in Q2 2019, thanks to continued progress in our systems’ abilities to correctly detect violations, we began removing some posts automatically, but only when content is either identical or near-identical to text or images previously removed by our content review team as violating our policy, or where content very closely matches common attacks that violate our policy. We only do this in select instances, and it has only been possible because our automated systems have been trained on hundreds of thousands, if not millions, of different examples of violating content and common attacks. In all other cases when our systems proactively detect potential hate speech, the content is still sent to our review teams to make a final determination. With these evolutions in our detection systems, our proactive rate has climbed to 80%, from 68% in our last report, and we’ve increased the volume of content we find and remove for violating our hate speech policy”.

Obviously, the quicker social media giants can detect hate speech, drug and weapon sales, child exploitation and other concerns, the more likely it is that they can alert the relevant law enforcement agencies in time to react effectively. For Facebook, this is the positive story conveyed in its latest report.

However, as usual with social media, there is a darker side to the story. This concerns how frequently governments compel Facebook to release user data, typically without the person in question being informed, and how often they shut down the service in a country entirely.

The U.S. government leads the way with 50,741 separate demands for user data, the highest number of requests of any country. Facebook complied with around 88% of these requests, and the company reported that two thirds of them came with a gag order preventing it from informing the user about the government’s request for their data.

The report also found that 15 countries disrupted Facebook service 67 times in the first half of the year, compared with nine countries disrupting it 53 times over the same period the previous year. In most cases, disrupting Facebook is seen as an attempt to quash anti-government dissent.

Facebook takes a more rigorous approach to transparency than its peers (Google, YouTube, etc.), as CEO Mark Zuckerberg pointed out in a press call addressing the report. These reports do highlight some of the work being done to keep users safe, but they also demonstrate how little recourse people have if they are accidentally caught up in the automated systems. The YouTube appeals process remains murky and unclear, and human language and social norms can change faster than machine learning systems can catch up with them. It is also worth noting that YouTube only recently began including appeals data in its reports, and even then only for the last quarter of 2019. Furthermore, the data it did release showed that 78% of appeals are declined.

If what you want from a social media platform is something resembling fairness and justice, then transparency reports are both informative and necessary, but still not sufficient. There have been calls for oversight boards or third-party intermediaries to intervene, as the average user has no way to hold a platform accountable when it does make a mistake.