Australian Expert joins Facebook’s Global Women’s Advisory Panel  

Mia Garlick, Director of Public Policy, Australia, New Zealand & Pacific Islands – July 27, 2021

Women come to Facebook to run thriving businesses, support each other through Groups and make donations to causes they are passionate about. However, as in society more broadly, it can also be a place where women experience a disproportionate level of harassment and abuse. It is important that we equip women to manage their online experience and stay connected, whilst also addressing ways we can minimise harm and help them stay safe online.

The way abuse and harassment manifest online varies from country to country; the net result globally, however, is that women have a less safe experience than men. One of our key priorities is to ensure safety concerns are addressed, and that women in Australia have equal access to all of the economic opportunity, education and social connection the internet provides.

As part of our ongoing work over many years to promote the safety of women, we’re delighted to announce Dr Asher Flynn as Australia’s representative on Facebook’s Global Women’s Safety Expert Advisory Group. Dr Flynn is an Associate Professor of Criminology at Monash University and the Vice President of the Australian and New Zealand Society of Criminology. Her work focuses on AI-facilitated abuse, deepfakes, gendered violence and image-based sexual abuse.

The Advisory Group, comprising 12 women from around the world, will hold quarterly meetings dedicated to advancing the safety of women online. We will work with these experts to assess our tools, seek their feedback on new policies and discuss new options Facebook can implement to empower women online.

Empowering women to feel safe online 

The formation of the Women’s Advisory Group follows the work we’ve been doing for many years on products, policies and initiatives to promote the safety of women who use our services, including launching a dedicated safety page for women on our Safety Centre hub.

We take a multi-faceted approach to making our platform a safer place for women. We develop policies to remove harmful content that can disproportionately target women, such as hate speech, misrepresentation, and bullying and harassment. We create new tools in specialised areas, such as the non-consensual sharing of intimate images, and we build partnerships to support women and prevent technology-facilitated abuse.

Tools women can use to feel safe online 

Over the last 18 months we have introduced new products and tools that women can use to control their experience on our services. Some of these include: 

Message Tools 

  • Filter messages: We’ve introduced the ability for people to filter messages containing offensive words, phrases and emojis, so they never have to see them. Women can use this tool to block or hide words they don’t want appearing on their posts or coming into their DMs.
  • Switch off DMs: All accounts on Instagram have the option to switch off DMs from people they don’t follow. Messenger also gives people the option to ignore a conversation and automatically move it out of their inbox.

Comment controls

  • Who can comment: Earlier this year we launched a new tool on Facebook that gives public figures, creators, and brands — and all Facebook Pages and profiles — the ability to control who can comment on their organic, public posts.
  • Comment filter: Women can add emojis, words or phrases they find offensive to their comment filter, and comments containing these terms will not appear under their posts. 
  • Comment warnings: We’ve expanded comment warnings to include an additional warning when people repeatedly attempt to post potentially offensive comments. 

Blocking profiles and accounts

  • Blocking New Accounts: On Instagram we’ve made it harder for someone who’s already been blocked to re-establish contact through a new account. With this feature, whenever a person decides to block someone on Instagram, they have the option to both block their account and new accounts that person may create.

Non-consensual sharing of intimate images (NCII)

We work to prevent people from misusing our services to threaten to expose intimate images or videos, whether for financial gain, to obtain further intimate content, or as part of other sextortion demands. In partnership with the Office of the eSafety Commissioner in Australia, we have launched a channel where people can report NCII to us.

We know that even the threat to share this imagery can be distressing, so we also remove content that threatens or promotes sexual violence or exploitation. We remove the photo or video and disable the account of the person who shared it. We then use photo-matching technologies to help stop future attempts to share the same content on Facebook, Instagram and Messenger. Using machine learning and artificial intelligence, we can now proactively detect near-nude images or videos that are shared without the user’s permission on our services.

Measures such as these mean we can find this content before anyone reports it. This is important because victims are often afraid of retribution and reluctant to report the content themselves, or are unaware the content has been shared at all.

We know that there will always be more to do, and our job is to keep investing to stay ahead of this challenge. Removing harmful content, investing in technology and tools to support women, and building resources and partnerships are only part of the solution to preventing technology-facilitated abuse.

We look forward to collaborating with our Women’s Safety Expert Advisors like Dr Flynn to create solutions for change.
