Facebook uses machine learning to remove terrorist content
Facebook | Image source: pixabay

Facebook says it has actively found and removed 99% of terrorist-related content on the site over the last three quarters, and it has offered some insight into its process in a blog post.

Some figures: First, it is important to note that when Facebook says "terrorism", it is referring only to ISIS and Al-Qaeda. On average, the company says it now removes terrorist content less than two minutes after it is posted, compared with the 14 hours it took a year earlier. Facebook took action on 9.4 million pieces of such content in the second quarter of 2018.

Detection System: Facebook has revealed a new machine-learning tool that assesses whether a post signals support for ISIS or al-Qaeda. The tool produces a score indicating how likely the post is to violate the company's counter-terrorism policies. Posts with high scores are passed to human reviewers, and those with the very highest scores are deleted automatically. In the "rare cases" where employees find a risk of imminent harm, Facebook immediately notifies law enforcement.
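The routing described above can be sketched roughly as follows. This is a minimal illustration, not Facebook's actual system: the thresholds, function names, and labels are all assumptions made up for the example.

```python
# Hypothetical sketch of score-based routing for flagged posts.
# Thresholds are illustrative assumptions, not Facebook's real values.

REVIEW_THRESHOLD = 0.70       # assumed: queue for human review above this
AUTO_DELETE_THRESHOLD = 0.99  # assumed: delete automatically above this

def route_post(violation_score: float) -> str:
    """Route a post based on its policy-violation score in [0, 1]."""
    if violation_score >= AUTO_DELETE_THRESHOLD:
        return "auto_delete"    # highest-scoring posts are removed automatically
    if violation_score >= REVIEW_THRESHOLD:
        return "human_review"   # high scores go to human reviewers
    return "no_action"          # everything else is left alone

# A borderline post goes to a reviewer; a near-certain one is deleted.
print(route_post(0.75))   # human_review
print(route_post(0.995))  # auto_delete
```

The key design point the article hints at is the two-tier threshold: automation handles only the most clear-cut cases, while ambiguous ones still reach a person.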

Release the bots: Clearly, Facebook is eager to look tough on terrorist content. And, as always, it relies almost entirely on algorithms. That makes sense from its point of view: no person can scan content as quickly as machines, and hiring enough people to do so would be expensive. But it is another reminder of Facebook's role as judge, jury, and executioner over what we are allowed to see.

For more trending news on tech, gadgets, and machine learning, stay tuned with us on FoxSplit. You can follow us on Twitter, Google+, and Facebook.

Source: MIT Tech Review