Facebook strongly supports the enhancement of online safety laws in Australia, as part of our ongoing commitment to ensuring the safety of Australians online. It is consistent with the work we do in building safety into the design of our technology and working with local Australian partners to provide education and direct outreach to communities. In addition to this work, we also support new regulatory frameworks for online content that ensure companies make decisions about online speech that minimise harm while also respecting the fundamental right to free expression.
There is a shared commitment amongst all stakeholders to online safety, and especially the safety of young people online. This shared commitment sits within a broader debate, currently underway in Australia and around the world: what content should be allowed online, and how much discretion should be afforded to companies or governments to make those decisions? This means we need to carefully consider how this debate relates to new safety legislation.
We support the legislation as an opportunity to enhance the online safety of Australians and to establish a regime that holds companies to account for the commitments they make. There are many elements of the exposure draft legislation that we commend, in particular, the use of voluntary industry codes to set obligations for companies in addressing emerging policy concerns, rather than setting prescriptive requirements in law.
In recognition of the concern that private companies are often making decisions about important content issues, we have not waited for legislation but have provided ongoing transparency through our regular Community Standards Enforcement Reports. These enable us to be held to account for our efforts to address harmful content on our service: for example, in the fourth quarter of 2020, we removed 6.3 million pieces of bullying and harassment content, up from 3.5 million pieces of content in the previous quarter. This was driven, in part, by increasing our automation capabilities and improving our technology to detect and remove more English-language comments, which helped our proactive rate increase from 26.4% to 48.8%.
To ensure greater accountability for our content governance, we have also taken proactive, voluntary steps to establish an Oversight Board to make binding rulings on difficult and significant decisions about content on Facebook and Instagram. Together, new legislation and industry governance initiatives like these will improve accountability for content on digital platforms.
Whether content removal decisions are taken by government or industry, however, content regulatory frameworks should set clear and reasonable definitions of the otherwise legal content that warrants removal, and should also require transparency and accountability.
Today, Minister Paul Fletcher introduced the new Online Safety Bill into Parliament, setting out a new regulatory framework for online safety in Australia.
While we support the online safety legislation’s intent, there are three areas where the scheme seems to go further than the stated policy intent of the legislation and where we believe further public debate is helpful to ensure that the online safety legislation achieves its objectives:
- expansion of cyberbullying takedown schemes to private messaging;
- the low threshold proposed for the new adult cyberabuse scheme;
- and the need for greater transparency and accountability to ensure regulatory decision making aligns with the expectations of the community.
Regulating the private communications of Australians
Firstly, we do not believe it is appropriate to apply cyberbullying takedown schemes wholesale to private communications, like messaging and email.
The eSafety Commissioner and law enforcement already have powers to address the worst risks to online safety that can arise in private messages – like non-consensually shared intimate images and child sexual abuse material. And most private messaging services (including those provided by Facebook) provide tools and features to give users control over their safety in private messaging, like the ability to delete unwanted messages and block unwanted contacts.
The challenge with applying bullying regulations to private messaging is that human relationships can be very complex. Private messaging can involve interactions that are highly nuanced and context-dependent and could be misinterpreted as bullying, like a group of friends sharing an in-joke, or an argument between adults currently or formerly in a romantic relationship. It is not clear that government regulation of these types of conversations is required, given there are already measures to protect users when these conversations become abusive.
Potential to regulate political speech
Secondly, we believe the threshold set for the adult cyberabuse scheme could have broader ramifications than anticipated and could extend the eSafety Commissioner’s regulatory powers to legitimate political speech and debate.
There is also the very real risk that this low threshold, as it has in other legislation, would capture political speech: the heat of political debate may produce legitimate political comments that could be considered offensive. Because the speech of political officials is within the scope of the legislation, the regulator would have the discretion to police what politicians say to each other.
Given the highly ambiguous thresholds for regulatory action under the proposed legislation, such as ‘serious harm’ and ‘offensive’, there is a significant likelihood that the adult cyberabuse scheme becomes the de facto regulation for all speech online.
To address this concern, a more judicious threshold for the adult cyberabuse scheme would be to replace ‘offensive’ with ‘grossly offensive’ (in line with the New Zealand Harmful Digital Communications Act), which would more than adequately capture harmful bullying of adults without the same risks of overreach. Alternatively, the legislation could expressly require the eSafety Commissioner to consider the impact of exercising their discretion on the broader public interest and free speech.
Greater transparency and accountability for the regulator
Thirdly, the legislation grants a single regulator a considerable level of discretion and power over speech online. Clearer guidelines and greater checks and balances could help ensure this discretion is applied in ways that are consistent with the community’s expectations. Just as there should be transparency over the approach taken by digital platforms, there should also be transparency over decisions taken by regulators.
We look forward to continuing to work with the Government, the eSafety Commissioner and all of our safety partners to help keep Australians safe online and to contribute constructively to the ongoing debate about how to ensure that Australia’s online safety laws are robust for the current age.
You can read our full submission here.