Update on Meta’s work to support the 2022 Australian election

Josh Machin, Head of Public Policy – Australia – May 6, 2022

 

Meta has had a comprehensive strategy in place over the course of the Australian election campaign to combat misinformation, voter interference and potentially harmful content on our platforms. We shared details about our plans in a March 2022 blog post, and in numerous roundtables with journalists, government agencies, law enforcement and security agencies (including via the Australian Government’s election integrity assurance taskforce), and academics.

In particular, we appreciate the close working relationship with the Australian Electoral Commission, which has been referring content to us that it believes may violate Australian electoral law or represent voter interference. We are also continuing our collaboration with civil society organisations such as the Australian Strategic Policy Institute (ASPI) and First Draft.

This week, we and other tech companies received further questions from Australian academics – coordinated by a technology-focussed lobby group – regarding the steps we are taking in the months before the Australian election. We agree that it is important for digital platforms to be transparent and accountable to the Australian public. In that vein, we are publishing responses to the questions below.

We will provide further transparency about our efforts to combat misinformation in our next annual transparency report, required under the voluntary industry code on misinformation and disinformation, which is due in May 2022. Although the reporting period covers 2021, we recognise the level of interest in the steps we are taking to promote the integrity of the Australian election, so we will voluntarily include further information in that report. You can find our previous report here.

We remain committed to engaging with Australian academics and experts on these important policy issues.


1. How many dedicated human content moderators will be bolstering your AI-enabled system specifically for the Australian election?

In addition to the significant investments Meta has made in technology to proactively identify harmful content, we have more than 40,000 people who work on safety and security at Meta (about 15,000 of whom are dedicated content reviewers). As issues arise during the Australian election, we have the benefit of being able to draw not just on the cross-functional team dedicated to the Australian election but also on any members of our global safety and security teams as needed, depending on the expertise and skills required. Another benefit of investing so heavily in safety and security is 24/7 support: even while Australia sleeps, our safety and security teams in other time zones are able to review content that could be harmful and impact the Australian election.

2. What languages do these content moderators speak? (for instance, the top five spoken languages in Australian homes other than English are Cantonese, Mandarin, Italian, Arabic and Greek)

The benefit of drawing from our global investment in safety and security means that Meta supports content moderation in over 70 languages. We also have a global network of more than 80 third-party fact-checkers who cover more than 60 different languages to combat misinformation. We have run a consumer awareness campaign prior to the Australian election to help Australians spot misinformation and raise awareness of fact-checking. As well as running this campaign in English, we have translated campaign assets into Chinese, Vietnamese and Arabic (the top languages other than English spoken at home in Australia according to the Australian Bureau of Statistics).

In the lead-up to the election, we have received feedback from non-government experts that many non-English speaking diaspora communities in Australia may also use digital platforms from their home countries (see here for example). We continue to encourage policymakers to have regard to the risks of misinformation and disinformation that can occur in Australian communities on non-English platforms.

3. Where are the content moderators dedicated to the Australian election based? If not in Australia, can their security and integrity be ensured during this time of geo-political instability?

Our safety and security teams are located around the world, with centres of excellence at 20 sites, such as Singapore, Dublin and the United States, which ensures around-the-clock coverage.

4. How has your content moderation system taken into account the nuances of Australian English slang? (Using Australian slang is a common strategy for those seeking to evade detection by content moderation systems on social media.)

This is important not just in the context of the Australian election. We have dedicated teams with deep knowledge and expertise in Australia, including the use of language in the Australian context. These teams are trained to recognise Australian slang and other colloquialisms.

We always welcome input from local experts and civil society organisations about trends in possible abuse of our platform, and academics are welcome to raise examples with us if they believe there is new or emerging slang that is not being properly accounted for.

5. Who has been consulted in the development of election-related content moderation policies? How will you ensure these policies are adaptive and responsive to the events of the election?

Our election-related policies are global, and we have developed them through our involvement in over 200 elections around the world since 2017. In the months leading up to the Australian election campaign, we engaged extensively with Australian stakeholders about these policies, especially those developed since the 2019 Australian election, such as our voter interference policies. We also provided transparency in our last report under the voluntary industry code on misinformation and disinformation. To date, no stakeholder has raised concerns with us that these policies appear to be inadequate in the Australian context.

6. What AI-enabled content moderation systems will be deployed during the election, including image recognition technology? What are the error rates?

We provide transparency about our content enforcement approach in our global transparency centre.

7. What provisions have been made to protect communities from foreign interference?

Foreign interference in Australia can occur via a variety of means. In relation to our apps, we take extensive steps to identify and act against threats to the election, including signs of coordinated inauthentic behaviour, and we block millions of fake accounts every day so they cannot spread misinformation. These steps are outlined in detail in our last transparency report.

We also require all advertisers looking to run political, social issue and election-related ads to complete an authorisation process, confirming their identity and that they are located in Australia.

In particular, we continue to engage closely with Australian security agencies regarding the overall threat environment for foreign interference in Australia. We continue to monitor this closely over the final weeks of the election campaign.

8. What avenues are in place to enable civil society organisations to flag harmful and false content (beyond the reporting mechanism of the eSafety Commissioner)?

The community has a wide array of mechanisms available to report potentially harmful or false content to us. We have easy and simple in-app reporting for every piece of content. We also have close working relationships with Australian regulators, like the eSafety Commissioner and the Australian Electoral Commission, who are able to quickly and easily refer content to us. We take a ‘no closed door’ approach to content review, under which we will review any and all content sent to us by an Australian regulator.

We also have built direct relationships with expert civil society organisations who are able to refer content to us directly.

The industry association DIGI maintains a complaints mechanism for any instance of possible breaches of the industry code on misinformation and disinformation.

Finally, any member of the community is able to raise content on our services directly with our third-party fact-checkers for possible fact-checking.

9. In the United States you have shared data about removed coordinated inauthentic behaviour networks with independent researchers. Why have you not implemented this in Australia? (ideally all content and accounts removed under election-related policies should be stored and shared for post-election scrutiny)

The assertion in this question is not correct. In late 2020, we launched a pilot CrowdTangle-enabled research archive, through which we’ve shared over 100 recent coordinated inauthentic behaviour (CIB) takedowns with a small group of researchers who study and counter influence operations. The Australian Strategic Policy Institute (ASPI) is one of our initial five key partners for this archive globally, and is able to study and analyse the networks we’ve removed. We are pleased to see Australian representation among the small number of research organisations selected for the pilot.

10. During the last United States election you labelled posts that were believed to be state-controlled media outlets. Why have you not implemented this in Australia?

The assertion in this question is not correct. We label state-controlled media outlets in many countries around the world, including in Australia.

11. Given that over 20% of Australians speak a language other than English at home, what languages will the third-party fact checks be translated into?

We have a global network of more than 80 third-party fact-checkers who cover more than 60 different languages to combat misinformation.

Fact-checks are available in the language of the original content. A piece of content in a non-English language that arises in Australia is eligible to be fact-checked by other global partners who operate in that language.

12. What non-English language publications will third-party fact checks be provided to?

It’s not clear what is meant by this question. More information about the scope and eligibility of third-party fact-checking on our services is available at our Help Centre.

13. How will the speed of fact checking be measured during the election?

It’s not possible to give a timeframe for how long it takes a fact-checker to verify content after it is posted on Facebook. This is because content is flagged to fact-checkers in a variety of ways, and it is at the discretion of the independent fact-checkers which pieces of content they review. The amount of time it takes a fact-checker to verify a claim and undertake a fact-check can also vary, depending on the complexity of the claim they are reviewing.

Content is flagged to our third-party fact-checkers to review in several ways:

  • our third-party fact-checkers proactively identify the content themselves;
  • our technology identifies potential false stories for third-party fact-checkers to review (for example, when people on Facebook submit feedback about a story being false, or comment on an article expressing disbelief, these are signals that a story should be reviewed);
  • our similarity detection system helps us identify duplicates of debunked content beyond what our fact-checkers see directly.

What’s important is that once a story is debunked by our third-party fact-checkers, we make it less visible on Facebook and Instagram.

Artificial intelligence plays an important role in helping scale the efforts of our third-party fact-checkers. Once a fact-checker deems a piece of content false and marks it as such, a warning overlay appears almost immediately. After one fact-check on one piece of content, we kick off similarity detection, which helps us identify duplicates of the debunked story and reduce their distribution.

These new posts are then fed back into the machine learning model, which helps improve its accuracy and speed.
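
Meta has not published implementation details of this similarity detection system. Purely as an illustrative sketch of the general technique (matching text embeddings of new posts against claims already rated false), one simple approach might look like the following; the model, threshold, helper name and example claims are hypothetical assumptions, not a description of Meta’s actual system.

    # Illustrative sketch only: embedding-based similarity matching against
    # debunked claims. NOT Meta's system; the model, threshold and example
    # claims below are hypothetical assumptions.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    # Claims already rated false by fact-checkers (hypothetical examples).
    debunked_claims = [
        "You must mark your ballot in pencil or your vote will be changed",
        "You can vote twice if you are enrolled in two electorates",
    ]
    debunked_embeddings = model.encode(debunked_claims, convert_to_tensor=True)

    def matches_debunked(post_text: str, threshold: float = 0.8) -> bool:
        """Return True if a new post closely resembles a debunked claim."""
        post_embedding = model.encode(post_text, convert_to_tensor=True)
        similarity = util.cos_sim(post_embedding, debunked_embeddings)
        return bool(similarity.max() >= threshold)

    # A near-duplicate of a debunked story would be surfaced so the existing
    # fact-check label can be applied to it as well.
    print(matches_debunked("Vote in PENCIL and they will change your vote!"))

In a production setting, a match of this kind would be a signal for review and label propagation rather than an automatic decision, consistent with the review process described above.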

14. How will the reach of fact checking be measured during the election (i.e. how many people have viewed the content and their demographics)?

Once a piece of content is found to be false, we apply a warning label to it, so it is not possible to see the content without clicking past the warning label. Once a warning label is applied, 95% of people on Facebook choose not to click through. This dramatically reduces the number of people who see the content.

15. How will your response to fact checking be measured during the election? (i.e. content takedown, or reporting to the appropriate authority)

Our approach to combatting misinformation is comprehensive, and is broader than simply content takedowns or third-party fact-checking. Meta has led the industry in terms of transparency under the voluntary industry code on disinformation and misinformation, and we continue to explore what integrity data we may be able to make available after the election in the interests of transparency.

We have expanded our third-party fact-checking program in Australia to include RMIT FactLab, which joins our existing partners Agence France-Presse and Australian Associated Press. We have also provided one-off grants to all our fact-checkers to increase their capacity in the lead-up to the election.

16. During the last United States election, Facebook’s algorithm was adapted to reduce the distribution of sensational and misleading material, prioritising content from authoritative sources. Why has this measure not been implemented in Australia?

The assertion in the question is not correct. We provide transparency around how our ranking and recommendation algorithms work via our Content Distribution Guidelines and Recommendation Guidelines. As these policies outline, we reduce the distribution of sensational and misleading material at all times, not just in the lead-up to election campaigns.

We are also taking steps to promote authoritative information about the election, by providing prompts in the Feed of every Australian to direct them to the Australian Electoral Commission’s website.

17. During the last United States election, the distribution of live videos related to the election was limited. Why has this measure not been implemented in Australia?

While we learn lessons from each prior election, no two elections are the same. Working closely with elections authorities and trusted partners in each country, and evaluating the specific risks ahead of each election, we make determinations about which defenses are most appropriate. In the lead-up to each election, we monitor the threats on our platform and respond accordingly.

We have strong integrity measures to prevent abuse of products like Facebook Live at all times, not just for the Australian election. Our Community Standards and third-party fact-checking initiatives apply equally to livestreaming as to other types of content on our services.

18. Google has restricted the targeting for election ads in Australia. Has Meta considered this? If so, why has it not been implemented?

Digital platforms provide a range of different services, which may lead to different assessments about the best approach to integrity measures. At Meta, our approach is grounded in industry-leading transparency for political and social issue ads in Australia. We require these advertisers to go through an authorisation process, to add a disclaimer, and to agree to their ads appearing in the Ad Library for seven years after they run. We are committed to providing transparency of these ads that appear on our services.

19. During the last United States election, Meta implemented changes to ensure fewer people saw social issue, electoral and political ads that had a “paid for by” disclaimer. Why has this measure not been implemented in Australia?

The assertion in this question is not correct. In response to community feedback, we have been running tests in a number of countries to show political content lower in people’s Feeds. This applies to organic, non-paid content and was announced in the US after the last election.

In 2021, we announced a new control feature that allows people to have more control over the ads they see on Facebook. This feature gives people in Australia the choice to see fewer social issue, electoral and political ads with “Paid for by” disclaimers.

20. During the last United States election, the creation of new ads about social issues, elections or politics in the last few days of the election was blocked. Why has this measure not been implemented in Australia?

While we learn lessons from each prior election, no two elections are the same. Working closely with elections authorities and trusted partners in each country, and evaluating the specific risks ahead of each election, we make determinations about which defenses are most appropriate. In the lead-up to each election, we monitor the threats on our platform and respond accordingly.

Whether Australia’s blackout period for electoral ads should be extended to digital platforms is a choice for policymakers. We have consistently said over many years that we support extending this requirement to digital platforms.

21. What measures do you have in place to screen the placement of ads to ensure all political ads are properly identified and labelled by the advertiser?

We take a number of steps to detect ads that should be classified as political or social issue ads but have not been correctly categorised by the advertiser. We do not make information about these detection steps available, to avoid giving bad actors insight into how to evade our policies. This review process may include the specific components of an ad, such as images, video, text and targeting information, as well as an ad’s associated landing page or other destinations, among other information.

Australian regulators and civil society are welcome to refer ads to us that they believe should be categorised as political or social issue ads but do not have this disclaimer.

22. Beyond the Ad Library, will Meta be making available a comprehensive public archive of all sponsored political content, including targeting data and aggregated engagement statistics by target audiences (accessible by API)?

Meta already makes available a comprehensive public archive of all political and social issue ads on our services, via the Ad Library. The Ad Library provides industry-leading transparency, including metrics such as estimated audience size, impressions, and statistics about the target audience (such as location and gender). This data is also already available via an API to approved academics and experts.
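
For approved researchers, querying the Ad Library API is a standard HTTP request against the Graph API’s ads_archive endpoint. The following is a minimal sketch only; the API version, field selection and search term are illustrative assumptions, and the official Ad Library API documentation should be treated as authoritative.

    # Illustrative sketch of querying the Ad Library API for Australian
    # political and social issue ads. Requires approved researcher access
    # and a valid access token; the API version and fields below are
    # assumptions for illustration.
    import requests

    ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # issued after identity verification

    response = requests.get(
        "https://graph.facebook.com/v14.0/ads_archive",
        params={
            "access_token": ACCESS_TOKEN,
            "ad_type": "POLITICAL_AND_ISSUE_ADS",
            "ad_reached_countries": '["AU"]',
            "search_terms": "election",
            "fields": "page_name,ad_creative_bodies,ad_delivery_start_time,"
                      "impressions,spend,demographic_distribution",
        },
    )
    response.raise_for_status()

    # Each result includes the sponsoring page and aggregate metrics.
    for ad in response.json().get("data", []):
        print(ad.get("page_name"), ad.get("impressions"))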

23. What ‘break glass’ measures are on standby during the election?

We already maintain a series of systems to help protect the integrity of elections on our platforms. We continue to monitor threats on our platform and respond accordingly. At this stage, we have not been advised by law enforcement or intelligence agencies that the risk of real-world harm is high.

24. What type of event (in terms of reach and impact) would trigger the implementation of ‘break glass’ measures?

See above.
