Facebook, Inc. revealed its detailed policy and criteria for content removal on Tuesday. Guidelines on how moderators remove content related to spam, harassment, self-harm, terrorism, intellectual property theft, violence and hate speech were made public for the first time.
“We use a combination of artificial intelligence and reports from people to identify posts, pictures or other content that likely violates our Community Standards. These reports are reviewed by our Community Operations team, who work 24/7 in over 40 languages. Right now, we have 7,500 content reviewers, more than 40% the number at this time last year,” Vice President of Global Product Management Monika Bickert said in a release.
Moreover, the social media giant introduced an option for users to appeal when a post is removed, which will prompt a new review by the company.