Facebook on Tuesday said it has stepped up the use of artificial intelligence (AI) to prioritise potentially harmful content, a move that will help the social media giant take action faster on harmful and violative content.
Facebook, which has 1.82 billion daily users globally, has drawn flak in the past for its handling of hate speech on the platform in India, which is among its biggest markets.
Facebook Product Manager (Community Integrity) Ryan Barnes said the company is now using AI to prioritise potentially harmful content, and that this prioritisation is important to help its over 15,000 reviewers.
She explained that the prioritisation is important for four reasons: not all harmful content is equal, some enforcement decisions are complex, people do not always report harmful content, and the reports aren't always accurate.
Speaking to reporters in a virtual briefing, she said the company has moved beyond relying on user reports alone and now also uses technology to aid the review process.
“We’ve moved away from just reviewing things chronologically to help us prioritise what we review. We have looked at severity, which has been a factor in our prioritisation, but now we have other factors such as virality, severity and likelihood of violations,” she said, adding that this will help the company “act” on reports faster.
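Facebook has not published the formula behind this prioritisation, but the factors Barnes lists could in principle be combined into a single review-priority score along the lines of the sketch below. The ReportedPost structure, the field names and the weights are purely illustrative assumptions, not Facebook's actual model.

```python
from dataclasses import dataclass


@dataclass
class ReportedPost:
    """Illustrative stand-in for a piece of reported content (assumed fields)."""
    post_id: str
    virality: float         # predicted reach/shares, normalised to 0..1 (assumed)
    severity: float         # how serious the suspected harm is, 0..1 (assumed)
    violation_prob: float   # estimated probability the post violates policy (assumed)


def review_priority(post: ReportedPost,
                    w_virality: float = 0.3,
                    w_severity: float = 0.5,
                    w_violation: float = 0.2) -> float:
    """Combine the three factors Barnes mentions (virality, severity,
    likelihood of violation) into one score. The weighted sum and the
    weights themselves are assumptions made for illustration only."""
    return (w_virality * post.virality
            + w_severity * post.severity
            + w_violation * post.violation_prob)
```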
Barnes said the community integrity team is focussed on reducing the prevalence of bad experiences by taking action on violating content and abusive actors proactively and with fewer mistakes.
She added that over 95 per cent of such content is spotted by the company's technology before anyone reports it.
Facebook Engineer (Community Integrity) Chris Palow said using AI will help in getting to the “most harmful content faster” and give human review teams more time to spend on complex decisions.
He added that this will also help identify new trends and respond to people attempting to post violating content.
Asked if AI can help in handling hate speech, Barnes said Facebook has no tolerance for hate speech on its platform.
“We realise this is an important issue, and we in fact think that AI technologies can help with this in terms of how we determine if a piece of content needs human moderation…depending on how viral that piece of content is, how severe it is, or its likelihood of violating,” she said.
Barnes added that the company is relying on its technology to bump up such content in the ranking that determines which reports are reviewed first.
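In practice, "bumping up" content in this way would amount to ordering the report queue by such a score rather than by arrival time. Continuing the hypothetical sketch above (same assumed ReportedPost and review_priority):

```python
def build_review_queue(reports: list[ReportedPost]) -> list[ReportedPost]:
    """Order reported posts so the highest-priority items come first,
    rather than processing them chronologically. Illustrative only."""
    return sorted(reports, key=review_priority, reverse=True)


queue = build_review_queue([
    ReportedPost("a", virality=0.9, severity=0.20, violation_prob=0.4),
    ReportedPost("b", virality=0.1, severity=0.95, violation_prob=0.8),
])
# Post "b" (severe and likely violating) is reviewed before the merely viral "a".
```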
The company’s decisions on ‘dangerous persons’ follow a comprehensive process, including consultation with partners on the ground and a review of online and offline behaviour.
“We don’t use AI to make decisions about designating people as dangerous individuals under our policy. But AI can help us find content that may help to inform that decision, but it’s not actually making the decision outright,” a company spokesperson said.
The spokesperson added that while the community guidelines are uniform globally, local context and cultural nuances are kept in mind as these policies are developed and enforced.