Helping to stop scams online

By Josh Machin, Head of Policy - Australia - 8 November 2022

As Australians continue to spend more time online to stay in touch, work and consume entertainment, scammers continue to find new and novel ways to gain access to people’s personal details. Now more than ever, Australians need to remain vigilant and make sure they are well equipped to identify a scam and protect themselves online.

Scammers are becoming increasingly sophisticated in their tactics to target people online. This is why it’s so important for Meta’s community to learn how to identify and avoid scams so that they can protect themselves online.  

We recommend the following tips to protect yourself from scams:

  • Set up two-factor authentication - it’s simple to do and adds a second layer of protection to your account by sending a notification whenever there’s an attempt to access your account (see the sketch after this list for how these one-time codes work). To find out more about setting up two-factor authentication (or 2FA) on Facebook, click here. For Instagram, click here.
  • Protect your personal information at all times - never share your personal information, such as your ID documents, payment login details or passwords.  
  • Look out for suspicious behaviour, links, emails or messages - if in doubt, don’t click or respond. Take action and report your suspicions via our Facebook or Instagram help centres, or report them to the police.
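
For readers curious about what happens under the hood, the one-time codes generated by most authenticator apps follow a public standard called TOTP (RFC 6238). The short Python sketch below is purely illustrative, not Meta’s implementation, and the secret shown is a made-up example (real apps receive theirs by scanning a QR code during setup):

    import base64, hmac, struct, time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        """Derive a time-based one-time password (RFC 6238)."""
        key = base64.b32decode(secret_b32)
        counter = int(time.time()) // interval  # current 30-second time step
        msg = struct.pack(">Q", counter)        # 8-byte big-endian counter
        digest = hmac.new(key, msg, "sha1").digest()
        offset = digest[-1] & 0x0F              # dynamic truncation (RFC 4226)
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    # Illustrative secret only; never share a real 2FA secret.
    print(totp("JBSWY3DPEHPK3PXP"))

Because each code is derived from a secret stored on your device plus the current time, it changes every 30 seconds, which is why a scammer who has stolen only your password still can’t get into your account.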

Here are tips on how to spot and avoid some of the most common scams across our services.

Romance scams

Romance scams are unfortunately common. Typically, the scammer will contact someone and claim to be looking for love or friendship. Once they have built trust, they slowly escalate their conversations and requests and begin asking for money or other favours. 

Remember:

  • Be cautious about who you communicate with online and don’t accept friend requests from people you don’t know. 
  • Never send personal details like your ID or money to someone you don’t know or trust.
  • And make sure you set up two-factor authentication on all your accounts.

Marketplace scams 

There are many benefits to shopping online, but we also need to be aware of the range of online shopping scams. The most common scams either claim to sell products the seller doesn’t have or sell low-quality products at top-quality prices. Scammers may even steal a real company’s marketing images, making it difficult to tell them apart from legitimate sellers.

Remember:     

  • Check if the Facebook profile appears new or incomplete, as this could be a sign that the account has been set up for scamming. 
  • Check reviews of online sellers to see what previous customers have said. 
  • Insist on meeting in a public space to view the product before completing any transaction. 
  • Don’t hand over money until you see the item for sale, and use payment options that include strong protections, like PayPal.          

Phishing scams

Phishing is when a scammer tries to trick people into sharing their personal information (like bank account numbers and passwords) in order to impersonate or defraud them. These messages can look very real, and some will even pretend to be from Meta, asking you to reset your password or appeal a fake ban on your account.

  • Be wary of emails or online messages that ask you to update or verify your details, or try to force you to act quickly (for example, by threatening you with the loss of your Facebook or Instagram account).
  • Facebook will never ask you for your password in an email or send you a password as an attachment.
  • Make sure to set up two-factor authentication on all of your accounts.

Family member in need scam 

Scammers can go to great lengths to trick people into sharing money or their personal information, like pretending to be a family member who needs help. They can pretend to be someone you know or send a message from an unknown number claiming to be a family member who has lost their phone or has been ‘locked out’ of their account.

If the situation seems strange, be wary. It might be a scam, and the account messaging you might have been compromised. Remember to:

  • STOP: Take a few minutes before you respond.
  • THINK: Does this request make sense? Are they asking for money? Remember that scammers prey on people’s kindness, trust, and willingness to help.
  • CALL: Verify that it really is a family member by calling them directly or asking them to share a voice message. Only when you’re 100% sure the request is from someone you know and trust should you consider acting on it. If it turns out to be untrue, report it to us.

Investment scam 

Investment scams are becoming increasingly common and sophisticated in response to the growing interest in shares and new digital currencies, and the accessibility of online investing. Investment scammers typically try to steal larger amounts of money by tricking people into thinking they’re investing in quick and easy ways to grow their wealth.

Remember:

  • Never share your bank or card details, or your ID, with someone you don’t know and trust.
  • Use unique passwords for different accounts and never share your passwords.
  • And make sure you set up two-factor authentication on all your accounts.

Impersonation scam 

Scammers may try to win your trust by impersonating a friend or family member, or a celebrity or business that you follow, to try and convince you of their story. This can result in them tricking you into sending them money or sharing your personal details.

Remember:

  • Look out for verified Facebook or Instagram accounts when entering competitions or sharing information with a brand that approaches you. Verified accounts carry a blue tick. 
  • Never share your bank or card details with someone you don’t know and trust.
  • And make sure you set up two-factor authentication on all your accounts.

Supporting a safe online experience for young people

March 18, 2022


Everyone has a right to feel safe and respected, and all of us, including companies like Meta, have a role to play in stopping bullying both online and offline. The National Day of Action Against Bullying and Violence is an opportunity for schools, students, caregivers, community groups, companies, and the government to come together to discuss effective solutions to bullying.

As one of the world’s largest technology companies, we know that we have a responsibility to keep young people safe. One form of abuse that has unfortunately always existed when people interact with each other is bullying and harassment. While this challenge isn’t unique to social media, we want to take steps to remove this type of content from our platforms and ensure that people are protected from harmful content wherever possible.

Our work to address bullying and harassment 

To that end, we’ve created rules that define what we will and won’t allow on our platforms. We are continually increasing our investment in new technologies and tools to prevent people from being exposed to bullying and harassing content in the first place, and to give them the ability to easily block and report it if they are.

When it comes to bullying and harassment, context and intent matter. Bullying and harassment are often very personal: they can show up in different ways for different people, from making repeated and unwanted contact to making public threats or threatening public figures. This is why we are constantly looking at how we can update our policies to respond to new issues or challenges that people may experience, and we don’t make these policy and product decisions alone. We use insights and feedback from more than 800 safety partners globally, including Australian safety organisations, experts, and academics.

In recent years, there has been an increase in abuse directed at public figures, particularly women. In 2017, we began consulting on what more we could do to better protect public figures from the abusive commentary that isn’t connected with public debate. Listening to feedback, we have made changes to our bullying and harassment policy to help protect people from mass harassment and intimidation, and now offer more protections to public figures, particularly women, people of colour and members of the LGBTQI community, who we know can face increased scrutiny. We also developed new tools to turn off comments on a post, limit who can comment, and prevent hateful messages from being sent, whether text or emojis.

Holding ourselves accountable 

How we spot, act on, and remove this content is core to keeping it off our platforms, and investing in safety and security helps us achieve this.

Last year, we spent USD$5 billion (around AU$7 billion) on safety and security. This allows us to use a combination of reports from our community, human review, and artificial intelligence technology to enforce our policies. We acknowledge it can sometimes be difficult for our systems to distinguish between a bullying comment and a light-hearted joke without knowing the people involved or the nuance of the situation, and using technology to proactively detect this content can be more challenging than for other types of violations. However, we are making progress.

Between October and December last year, we estimate that out of every 10,000 views of content on Facebook, 10 to 11 views were of bullying and harassment content (a prevalence of about 0.10% to 0.11%). This is lower than in the previous reporting period, but we know there is more to do.

We are working hard to reduce this type of content on our platforms, but we also want to equip our community with tools to protect themselves from potentially offensive content in ways that work best for them. To support people’s experience on our platforms, we’ve built tools into our apps to prevent, stop and report bullying and harassment online.

Indications show these tools are working. Last year we introduced a new warning tool on Instagram that detects when someone is about to post a comment that could include hateful or harmful language. Since the launch of this tool, 50% of people either changed what they were going to post or didn’t post anything at all.

We can't tackle bullying alone 

Bullying, harassment, and violence are long-standing social issues that predate the internet and social media, so it’s important that we continue to work with others to address these challenges as a whole of society. To that end, we partner with safety organisations like PROJECT ROCKIT, who serve on our Global Safety Advisory Board; we have partnered with them for many years and together have trained more than 25,000 young Aussie students in anti-bullying workshops to date.

We know there is more to do and we’ll keep making improvements but this is a challenge that goes beyond any individual company. It requires multi-stakeholder, cross-industry collaboration, and consistent rules for the internet that apply to all platforms.

Australia has been actively developing new regulations focused on digital platforms such as Meta. The issue is no longer that digital platforms are unregulated in Australia, or that the risks of the internet are unexamined - particularly in the area of safety. Instead, the main priority moving forward must be ensuring that existing regulations are effective in improving the safety of Australians online.

We believe that a collaborative industry-led process could assist with developing metrics, to better understand whether regulations and industry efforts, especially our own, have been genuinely effective. At Meta, we will continue to work with the Government and our industry peers to help make the new framework as effective as possible and to bring additional oversight and confidence to the industry’s work to keep users safe.

Providing a safe experience for young people online

Mia Garlick, Director of Public Policy, Australia, New Zealand & Pacific Islands – February 8, 2022


Digital transformation and the revolution of the internet have come with many advantages and advancements, but they have also introduced new challenges for internet users, heightening the need for new rules. Legislation has, unfortunately, not kept up. As millions of Australians continue to enjoy the benefits of a free and open internet, we’ve witnessed the good that comes with connecting people and, at times, the harm that can be caused when people misuse online platforms. There is no simple answer, and as such, this conversation has raised important questions about how to hold digital platforms accountable for keeping their users safe while protecting freedom of expression.

At Meta, we believe that businesses like ours should not be making decisions on our own, and so we are advocating for democratic governments to set new rules for the internet on areas like harmful content, privacy, data, and elections.

But we are not waiting for regulation – we are taking responsibility and investing more each year. We have tripled our safety and security team to 40,000 people, removed billions of fake accounts (often created to cause harm), and built better tools to give people control over the experiences and interactions they have on social platforms. As part of this work, we continue to evolve our policies and create new products to ensure they reflect emerging digital and societal trends. We don’t make these policy and product decisions alone; we use insights and feedback from more than 800 safety partners globally. New rules for the whole industry can make the internet safer while maintaining the social and economic benefits of such platforms for all Australians.

Playing it fair this Safer Internet Day 

Recognising there is much more to do, and aligning with the theme of Safer Internet Day in Australia, #PlayItFairOnline, led by the eSafety Commissioner, we continue to engage with external experts, policymakers, parents and caregivers, and the broader industry in developing effective solutions to combat online abuse.

Providing a safe experience for our community, in particular young people, continues to be our top priority. We recognise the importance of regularly updating our policies, enforcement, and tools so that we can support all Australians to play it fair online.

With respect to our policies, we recently made changes to our bullying and harassment policy to help protect people from mass harassment and intimidation, and now offer more protection to public figures, particularly women, people of colour and members of the LGBTQI community, who we know can face increased scrutiny.

We continue to invest significantly in proactive detection technology. For example, Meta removed 1.8 billion fake accounts in the last quarter alone and we proactively detected 99.8% of them, the vast majority within minutes of being created. We believe that, on platforms like Facebook and Instagram, where people make direct connections with each other, the community is safer when people stand behind their opinions and actions.

We also update the tools that are available to help people customise their experience on our apps. On Instagram, we offer warnings when our technology identifies language as potentially abusive and ask people to consider rewriting their comments. We know that this is effective because 50% of the time they do edit their comment – or they don’t post at all.

On Facebook, we’ve built tools to give people more control over who can comment on their public posts by choosing from a menu of options ranging from anyone who can see the post to only the people and Pages you tag. You can also block different degrees of profanity from appearing on your Facebook Page. We determine what to block by using the most commonly reported words and phrases marked offensive by the community.
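
As an illustration of the general idea only, and not Facebook’s actual system, a simple keyword-based comment filter can be sketched in a few lines of Python. The blocklist and the is_blocked function below are hypothetical stand-ins for the community-reported words and phrases described above:

    import re

    # Hypothetical blocklist; in practice it would be built from the words
    # and phrases most commonly reported as offensive by the community.
    BLOCKED_TERMS = {"spamword", "insult"}

    def is_blocked(comment: str) -> bool:
        """Return True if the comment contains any blocked word."""
        words = set(re.findall(r"[\w']+", comment.lower()))
        return any(term in words for term in BLOCKED_TERMS)

    comments = ["What a great post!", "you absolute insult"]
    print([c for c in comments if not is_blocked(c)])  # ['What a great post!']

A real system has to go further than simple word matching, since context and intent matter, but the basic shape is the same: comments containing terms on the list never appear under the post.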

Importantly, developing effective solutions to combat online abuse also means working with partners who can ensure that we better understand and are responsive to the communities that use our platforms. The Office of the Australian eSafety Commissioner and local organisations such as Orygen provide regular feedback on our policies, tools, and resources.

We are also grateful for PROJECT ROCKIT, who serve on our Global Safety Advisory Board, with whom we have partnered for many years and together trained more than 25,000 young Aussie students in anti-bullying workshops to date.

We look forward to continuing to build on our policies, products, and partnerships in the months and years ahead.

Facebook, IDCARE and Puppy Scam Awareness Australia raise scam awareness

Josh Machin, Head of Policy (Australia) – November 8, 2021


Over the past 18 months, we know Australians have been spending more time online to connect with family and friends, run their businesses and seek entertainment. Scammers know this too, and they have become even more sophisticated.

Scams can happen anywhere and any time: by phone, text, or online. They use tactics like adapting to real-time events (like tax time, the COVID-19 pandemic, or elections), and are very effective at imitating real organisations.

While some of these scam techniques are not unique to Facebook, we want to do more to help our community spot these commonplace scams on our platforms. To help get the word out, we’re partnering with a national identity and cyber support service, IDCARE, and the pet scam prevention organisation, Puppy Scam Awareness Australia, to use our platform to raise awareness about scams.

As part of Scamwatch’s Scams Awareness Week, we’re launching a new scams awareness campaign to help people identify the seven most common types of online scams, and provide tips on how to avoid them.

In addition to the campaign, we’ve supported IDCARE to promote its Cyber Resilience Outreach Centre (CROC) program across our platforms. The CROC program will take mobile cyber clinics to remote and rural locations in Australia, providing training on scams, cyber threats, and account protection. The program began in October and will deliver up to 50 clinics across Australia.

We are also supporting IDCARE and Puppy Scam Awareness Australia to promote their work across our platforms so that Australians know where to report scams and access support.

If you think you’ve identified a scam on Facebook or Instagram, you can report it by clicking the ‘Report post’ or ‘Report photo’ button. You can find more information on how to report a scam to Facebook.

If you have lost personal information to a scammer and are concerned, you can contact IDCARE at https://www.idcare.org/

If you have fallen victim to a puppy scam, you can contact Puppy Scam Awareness Australia at https://www.puppyscamawarenessaustralia.com.au/

If you think your Facebook account may have been hacked, please also visit our Hacked Wizard page. 

IDCARE.org provides a community support service for people responding to scams, identity theft, and cybercrime. IDCARE’s specialist services address device and personal information risks to build and promote confidence and resilience online.

Puppy Scams Awareness Australia (PSAA) is a dedicated organisation that raises awareness of the thousands of pet scamming syndicates that prey on pet shoppers. PSAA aims to help puppy shoppers spot a scammer and to help scammed victims through community support.

Facebook and First Draft launch “Don’t Be A Mis-influencer” campaign

Josh Machin, Head of Public Policy (Australia) – September 23, 2021


During the pandemic, we’ve seen many people discussing topics such as COVID-19 and vaccines, including public figures and creators. Many Australians are accessing credible and authoritative information about COVID-19 and vaccines via tools such as Facebook’s COVID-19 Information Centre. Given that creators may also have high reach and engagement, we want to support creators who want to share and amplify credible and authoritative information.

To help provide creators with tools and resources to spot misinformation and prevent its spread, today we are proud to announce that Facebook Australia is partnering with the misinformation prevention coalition First Draft on a new local campaign to address the spread of misinformation by digital creators.

The Don’t Be A Mis-Influencer campaign provides creators, celebrities, and other high-profile accounts with a ‘Protect Your Voice’ toolkit, offering tailored resources and instructive guides to help identify misinformation and prevent its spread.

We understand people follow a range of information sources and may receive information from creators. We’re constantly working to ensure credible information is promoted on our platforms, and creators can play a role in this too.

With the support of our targeted advertising and extensive partner relationships, the resources will be shared directly with Australian creators to reference when posting content. Creators will also be invited to post top tips from the toolkit to encourage followers not to post misinformation and help educate them on how best to counter it.

We know from listening to our community that people have been spending more time online since the pandemic began, not only to connect with family and friends but also to seek entertainment and inspiration from creators. During this time, people have turned to a range of information sources, and many rely on creators as one channel for their information.

That’s why we’ve supported First Draft to develop this toolkit, giving creators the tools to prevent the spread of misinformation on their own accounts and to help amplify that message to their followers.

Available now on the campaign site, the ‘Protect Your Voice’ toolkit provides creators with guidance and practical steps about:

  • Why people spread misinformation
  • How to engage in, or avoid, conversations about conspiracy theories
  • The types of deceptive content to look out for
  • How to verify the authenticity of content
  • How to combat misinformation.

We are also delighted that influencer Abbie Chatfield is one of the first creators to share the campaign.

First Draft and Facebook recognise that, for creators, engaging their audience on the topic of misinformation can be challenging. We will actively work with influencers to gather their feedback and suggestions on countering and eliminating misinformation on their channels.

For more information or to download the toolkit, visit First Draft’s Don’t Be A Mis-Influencer website.

Online safety during COVID – supporting young people to feel free to be online

Susanne Legena - CEO of Plan International Australia – August 19, 2021


With large cities in Australia moving in and out of lockdowns owing to COVID, most of us have been spending more of our lives online. There is no doubt that social media platforms have been a great comfort to many and have connected us with friends, family, and community.

The downside is that negative comments, trolling and online abuse are also on the rise. As well as opportunities to learn from and share with each other, social media also presents us with content that may negatively affect our well-being and people who are intent on trolling or harassing us. Young people – particularly girls – are often the target of online violence, which can silence their voices and cause real and lasting harm.

That’s why Plan International Australia’s young activists have come together to create a new guide to help young girls in particular take greater control over their online experiences. Alongside their tips and personal experiences, they’ve highlighted tools on Instagram and Facebook that can help make your social media world what you want it to be.

COVID has created the ‘perfect storm’ for online trolling

Sadly, COVID-19 has created the perfect conditions for misogynist trolls to proliferate and prosper. In October 2020, Plan International released a wide-ranging report called Free to Be Online?, which found that two-thirds of the 14,000 girls surveyed in 22 countries have experienced online violence – in many cases on multiple occasions and across various platforms.

The report further found that one in five girls (19%) have left or significantly reduced their use of a social media platform after being harassed, while another one in ten (12%) have changed the way they express themselves.

In Australia, two-thirds (65%) of girls and young women aged 15-25 have been exposed to a spectrum of online violence (higher than the global figure of 58%), and half of those have suffered mental and emotional distress as a result.

One in five Australian girls and young women we surveyed have feared for their physical safety due to online threats.

Our youth activists’ guide to Facebook and Instagram

Through our Free to Be Online research, girls and young women around the world told us how they are physically threatened, racially abused, sexually harassed and body shamed online, and how it gets worse when they raise their voices and share their opinions.

The young people we work with also pointed to the gap in resources and advice. Guidance often addresses children, or women, without acknowledging adolescent girls at the intersection of these identities. Our youth activists also told us they wanted to see guidance that was relevant, realistic, and empowering. They wanted advice from peers with shared experiences to help them take control of their online lives.

Our youth activists have created this guide to help other young people take control of their experiences online. Through these practical tips and personal experiences, they show how the safety tools on Facebook and Instagram can work for young people to create a safer, richer experience online.

We thank Facebook for listening to the feedback and for the opportunity to collaborate with the Youth Activist team to create this youth-led campaign. Visit the campaign page here.

Susanne Legena is CEO of Plan International Australia, the charity for girls’ equality.

Start a chat: Start a meaningful connection

Christine Morgan – CEO, Australia’s National Mental Health Commission – August 11, 2021


Across the globe, COVID-19 remains as real and as prevalent today as it was 12 months ago. Here in Australia, where my hometown has just gone into lockdown for the first time in a year, it is an ever-present spectre.

It remains clear that our mental health and well-being are and will continue to be negatively impacted.

From the outset of COVID-19, we knew that mental health had to be given equal priority.  Those of us in the mental health sector recognised that most Australians were going to face mental health challenges at some point during the pandemic and that social connection was going to be a strong protective element for maintaining mental health and wellbeing. Governments and the mental health sector coordinated the delivery of increased services and a high volume of awareness raising around mental health self-care as well as help-seeking. Despite this, prolonged lockdowns and the uncertain nature of the pandemic are contributing to heightened levels of anxiety and distress within our communities and exacerbating the severity of symptoms for those who already live with mental illness.

Young people are particularly susceptible. In Australia, even before the pandemic, we saw ever-increasing levels of psychological challenges in our children and youth. Add to this the complex levels of uncertainty caused by the pandemic and the consequent impacts on their education, home life, work, and friendships, and it becomes obvious that young people need support for their mental health now, more than ever.

Our research and monitoring of service use tell us children (5-11 years), young people (12-15 and 16-25 years), and their parents and carers are experiencing increased levels of distress and adverse mental health impacts. Worryingly, they are also experiencing heightened levels of self-harm and suicidal ideation, resulting in increased presentations to emergency departments. Almost one in three (30%) younger Australians (aged 18 to 34 years) reported experiencing high or very high levels of psychological distress in June 2021, compared with 18% of people aged 35 to 64 years and 10% of people aged 65 years and over during that same period[i].

Here at the National Mental Health Commission, we know that we need to build self-confidence in our children and young people. We need to help them, and their parents and carers, build self-agency, and learn to reach out for help, both from each other and from professional services, when they need it.

It was time to act proactively to respond to a heightened level of need and to provide the resources, information, and support that parents and young people were telling us would best support them.

Over the past five weeks, we partnered with Australia’s national mental health organisations, who specialise in supporting children, young people, parents and carers (including ReachOut, Butterfly Foundation, Orygen, batyr, headspace, Beyond Blue, and Kids Helpline) and worked closely with young people and their parents, to develop a first-of-its-kind campaign to harness the power of social media to engage parents and young people alike. It was a rapid coordinated response to an immediate need by combining our efforts and resources to present a united national approach.

#ChatStarter connects, engages, and promotes the benefits of supportive conversations with young people and children who are going through a difficult time right now. #ChatStarter encourages people to use the tips and resources freely available in a single online location, the Department of Health’s Head to Health website, to help them have supportive conversations, and to share and promote the benefits with their communities online. It also encourages young Australians and parents to create their own content on social media, with instructions on how to start chatting safely with others.

#ChatStarter is about more than just starting a conversation, it’s about building a connection. It encourages people to connect, engage, and then share their experiences in a safe, thoughtful, and meaningful way. It encourages people to share how they connect with each other on their preferred social media platform while directing people to a library of useful resources and tools, held at the Federal Government’s popular Head to Health platform.

We all have the power to support others and make people feel part of a community. When physical distance restrictions are the norm for so many, the simple act of connection becomes that much more important.

I encourage everyone to get on board and share what works for you and remind those around you that you care and that you are willing to listen and help.

We can get through this together. We are grateful to Facebook Australia for providing support as a major partner, helping this new and timely program reach millions of Australians across the country on Facebook and Instagram.

Find out more about #ChatStarter here.

If you or someone you know needs support, services are available 24/7:

  • Lifeline: 13 11 14
  • KidsHelpline: 1800 55 1800
  • Coronavirus wellbeing support service: 1800 512 348

[i] https://www.abs.gov.au/statistics/people/people-and-communities/household-impacts-covid-19-survey/latest-release

Australian Expert joins Facebook’s Global Women’s Advisory Panel

Mia Garlick, Director of Public Policy, Australia, New Zealand & Pacific Islands – July 27, 2021


Women come to Facebook to run thriving businesses, support each other through Groups, and make donations to causes they are passionate about. However, as in society at large, it can also be a place where women experience a disproportionate level of harassment and abuse. It is important that we equip women to manage their online experience and stay connected, while also addressing ways we can minimise harm and help women stay safe online.

The way abuse and harassment manifest online varies country by country; however, the net result globally is that women have a less safe experience than men. One of our key priorities is to ensure safety concerns are addressed, and that women in Australia have equal access to all of the economic opportunities, education, and social connection the internet provides.

As part of our ongoing work over many years to promote the safety of women, we’re delighted to announce Dr Asher Flynn as Australia’s representative on Facebook’s Global Women’s Safety Expert Advisory Group. Dr Flynn is an Associate Professor of Criminology at Monash University and the Vice President of the Australian and New Zealand Society of Criminology. Her work focuses on AI-facilitated abuse, deepfakes, gendered violence, and image-based sexual abuse.

The Advisory Group, comprising 12 women from across the world, will meet quarterly to advance the safety of women online. We will work with these experts to assess our tools, seek their feedback on new policies, and discuss new options that Facebook can implement to empower women online.

Empowering women to feel safe online 

The formation of the Women’s Advisory Group follows the work we’ve been doing for many years on products, policies, and initiatives to promote the safety of women who use our services, including launching a dedicated safety page for women on our Safety Centre hub.

We take a multi-faceted approach to make our platform a safer place for women. We develop policies to remove harmful content which can disproportionately target women such as hate speech, misrepresentation, and bullying and harassment of women. We create new tools in specialised areas, such as the non-consensual sharing of intimate images; and we build partnerships to support women and prevent technology-facilitated abuse.

Tools women can use to feel safe online 

Over the last 18 months, we have introduced new products and tools that women can use to control their experience with our services. Some of these include:

Message Tools 

  • Filter messages: We’ve introduced the ability for people to filter messages containing offensive words, phrases, and emojis, so they never have to see them. Women can use this tool to block or hide certain words they don’t want on their posts or coming into their DMs.
  • Switch off DMs: All accounts on Instagram have the option to switch off DMs from people they don’t follow. Messenger also gives people the option to ignore a conversation and automatically move it out of their inbox.

Comment controls

  • Who can comment: Earlier this year we launched a new tool on Facebook that gives public figures, creators, and brands — and all Facebook Pages and profiles — the ability to control who can comment on their organic, public posts.
  • Comment filter: Women can add emojis, words, or phrases they find offensive to their comment filter, and comments containing these terms will not appear under their posts.
  • Comment warnings: We’ve expanded comment warnings to include an additional warning when people repeatedly attempt to post potentially offensive comments.

Blocking profiles and accounts:

  • Blocking New Accounts: On Instagram, we’ve made it harder for someone who’s already been blocked to re-establish contact through a new account. With this feature, whenever a person decides to block someone on Instagram, they have the option to both block their account and new accounts that person may create.

Non-consensual sharing of intimate images (NCII)

We work to prevent people from misusing our services to make extortionate threats to expose naked or semi-naked images or videos for financial gain or additional illicit content, a practice known as sextortion. In partnership with the Office of the eSafety Commissioner in Australia, we have launched a channel where people can report NCII to us.

We know that even the threat to share this imagery can be distressing, so we also remove content that threatens or promotes sexual violence or exploitation. We will remove the photo or video and disable the account of the person who shared it. We then use photo-matching technologies to help stop future attempts to share this content on Facebook, Instagram, and Messenger. By using machine learning and artificial intelligence, we can now proactively detect near-nude images or videos that are shared without the user’s permission on our services.
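
To give a sense of how photo-matching can recognise a re-uploaded copy of an image even after resizing or recompression, here is a minimal Python sketch of a perceptual ‘average hash’. It is an illustration only: the file names and threshold are hypothetical, and the matching technology Meta actually uses is considerably more sophisticated:

    from PIL import Image  # pip install Pillow

    def average_hash(path: str, size: int = 8) -> int:
        """Compute a simple 64-bit perceptual hash of an image."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (p > avg)  # 1 if pixel is brighter than average
        return bits

    def hamming(a: int, b: int) -> int:
        """Count the bits that differ between two hashes."""
        return bin(a ^ b).count("1")

    # A small distance means near-duplicate images, so a new upload can be
    # checked against hashes of previously removed content.
    if hamming(average_hash("removed.jpg"), average_hash("upload.jpg")) <= 5:
        print("Likely match with previously removed content")

Because the hash depends on the overall structure of the image rather than its exact bytes, small edits leave the hash almost unchanged, which is what allows repeat uploads to be caught automatically.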

Measures such as this mean we can find this content before anyone reports it, which is important because victims are often afraid of retribution and reluctant to report the content themselves, or are unaware the content has been shared.

We know that there will always be more to do and our job is to continue our investment to stay ahead of this challenge. Removing harmful content, investing in technology and tools to support women, and building resources and partnerships are only part of the solution to prevent technology-facilitated abuse.

We look forward to collaborating with our Women’s Safety Expert Advisors like Dr. Flynn to create solutions for change.

Technology-Facilitated Abuse in Australia

Dr Asher Flynn, Associate Professor of Criminology at Monash University – July 21, 2021


Dr Asher Flynn is an Associate Professor of Criminology at Monash University in Victoria, Australia. She is the Vice President of the Australian and New Zealand Society of Criminology and a Facebook Women’s Safety Global Advisor. 

Dr Anastasia Powell is an Associate Professor of Criminology and Justice Studies at RMIT University in Victoria, Australia. She is a Director of Our Watch, Australia’s National Organisation for the Prevention of Violence Against Women and Children.

This week we are launching new insights into the experiences of domestic and family violence, sexual assault, health, legal, and allied service sector workers who work with victims and perpetrators of technology-facilitated abuse across Australia. This research will inform policy development to support victims of abuse, as well as greater collaboration amongst sector services, technology companies, and government to create solutions for change.

Technology-facilitated abuse is a growing problem that has garnered increasing policy, program, and research attention. The term is wide-ranging and inclusive of many subtypes of interpersonal violence and abuse utilising mobile, online, and other digital technologies, such as stalking, psychological abuse, sexual and image-based abuse, and harassment.

In a recent national survey of 338 sector workers, we found technology-facilitated abuse to be a significant problem, with victims facing many impacts and barriers to help-seeking.

Sector workers reported the majority of victims to be women aged 18 to 34 years, girls aged 17 and under, as well as transgender, non-binary and intersex people. The main perpetrators were identified as men aged 18 to 34 years and boys aged 17 years and under, with former intimate partners, de facto partners, or spouses most likely to initiate the abuse to intimidate or control the victim, cause distress or fear, or isolate them and restrict their activities.

The survey asked sector workers to respond to questions about abusive behaviours that spanned three key areas: monitoring, stalking, or controlling behaviours; psychological/emotional abuse or threatening behaviours; and sexual or image-based abuse, including sexual and digital dating harassment. Some of the key findings included:

  • 83% reported victims being sent insulting or harassing messages.
  • 77% had experienced perpetrators maintaining unwanted contact with victims.
  • More than half worked with victims who were monitored by perpetrators (58%), had their access to a telephone, mobile phone, or the internet controlled (56%), or knew of perpetrators who threatened to physically assault the victim (55%).
  • One-third were aware of the hacking or accessing of victims’ emails, social media, or other online accounts without consent (33%), and of victims being threatened with the posting of a nude or sexual photo or video without consent (28%).

Sector workers reported the constant monitoring and abuse through technology as creating a sense of omnipresence for victims, feeling as though they were always being watched by the perpetrator. Respondents said this made victims hypervigilant and fearful, feeling as though the abuse would never end.

Respondents reported that last year’s bushfire crisis and the ongoing COVID-19 pandemic had exacerbated technology-facilitated abuse while reducing their ability to assist clients. They further identified significant obstacles to helping clients who are experiencing abuse and expressed concerns over the adequacy of current responses, such as difficulties in finding up-to-date information, the abuse not being taken seriously by police and courts, and inadequate responses from technology providers.

While this study has provided some insight into this growing social, legal, and health problem from the perspectives of service providers who respond to technology-facilitated abuse in their roles, further research is needed to develop a deeper understanding of the experiences and needs of victims of technology-facilitated abuse, as well as how to respond to perpetrators and inform prevention activities. This includes actively working alongside platforms such as Facebook to improve policies, responses, and tools to prevent and detect technology-facilitated abuse; something Facebook’s Global Women’s Safety Advisory Group is seeking to achieve through initiatives such as the Women’s Safety Hub.

Further research is also required to establish community prevalence rates of technology-facilitated abuse to better understand its scope, nature, and harms. A national representative survey of adult Australians’ experiences of technology-facilitated abuse is currently underway, alongside in-depth interviews with victims and perpetrators, and is expected to be released in 2022.


If you or someone you know is experiencing technology-facilitated abuse, you can contact 1800RESPECT on 1800 737 732 or visit their website. Resources and support on technology-facilitated abuse are also available via the eSafety Commissioner website or contact the police in an emergency.

The findings from this report represent Phase I of the Technology Facilitated Abuse project examining the extent, nature, and contexts of technology-facilitated abuse in Australia. The project was announced in June 2020 and was made possible by funding from ANROWS and the Australian Government Department of Social Services. Facebook was one of seventeen technology, industry, and/or community-based organisations advising on the survey.

A Parents’ Guide to Instagram, in partnership with ReachOut

Josh Machin, Head of Public Policy (Australia) – June 22, 2021


At Facebook and Instagram, nothing is more important than the safety and well-being of the people in our community. Empowering and protecting our online community – and young people in particular – is our most important responsibility.

As part of our commitment, Instagram is evolving policies and building tools to empower the community to use the latest technologies safely and equip them with the knowledge and skills to do so. We do this by working in partnership with local organisations and representatives from civil society, seeking their feedback and guidance to better understand the local community, and collaborating with partners to produce meaningful resources and programs. With the onset of the global pandemic, more families are turning to technology and online platforms to stay connected, so we wanted to support parents in understanding how young people engage online and what they can do to support them.

This week, we’ve partnered with ReachOut Australia to introduce a new Parents’ Guide to Instagram as a comprehensive resource for parents. This is an updated version of the Parents’ Guide we launched with ReachOut in 2019, and it includes new tools and tips for parents to understand the safety features on Instagram and to have stronger conversations with their teens about social media use.

We’ve added more details on Instagram’s safety and security features, as well as tips from ReachOut to start conversations with teens related to mental health and the importance of managing their time online.

As part of creating this updated guide, ReachOut also commissioned new research to understand what parents are thinking about when supporting their teens online. The research found that 36% of parents feel unsure about the role they can play in keeping their teens safe on social media, and 40% of parents said they need more support to understand social media in order to talk to their teens about it.

The Parents’ Guide to Instagram responds to some of the concerns voiced by parents and will be a powerful resource for Australian parents, carers, and teachers as they guide teens safely through their social media experience. It’s available to download for free today on ReachOut’s website or at www.reachout.com/ParentsGuidetoInsta.

Thank you to ReachOut for their continued partnership and for working together to support Australian families to have a safe and positive experience online.
