Facebook officials said in a blog post today that the company uses AI to find and remove "terrorist content" immediately, before users see it. This is a departure from Facebook's earlier practice of relying on users to flag suspect content for removal. "When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny," the post said. "And in the rare cases when we uncover evidence of imminent harm, we promptly inform authorities".
Facebook is also working to bring these technologies to its other platforms, including WhatsApp and Instagram.
Facebook says it has grown its team of specialists so that it now has 150 people working on counter-terrorism specifically, including academic experts on counterterrorism, former prosecutors, former law enforcement agents and analysts, and engineers.
"We're now experimenting with analyzing text that we've already removed for praising or supporting terrorist organizations such as ISIS and Al Qaeda so we can develop text-based signals that such content may be terrorist propaganda".
The new algorithms have helped the company to "dramatically reduce the time period that terrorist recidivist accounts are on" the social networking site.
"We agree with those who say that social media should not be a place where terrorists have a voice".
Terrorism remains a problem that governments and companies everywhere are attempting to solve, but Facebook plans to launch several initiatives to contribute to these efforts. When material is identified and removed, algorithms "fan out to try to identify related material that may also support terrorism".
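Facebook has not published details of this fan-out step, but one common way to model it is a bounded breadth-first expansion over a graph linking removed items to connected material (same uploader, reposts, shared links). The sketch below is a hypothetical illustration of that idea; the `related` lookup and `max_depth` cutoff are assumptions, not Facebook's actual system.

```python
from collections import deque

def fan_out(seed_ids, related, max_depth=2):
    """Breadth-first expansion from removed items to related material.

    seed_ids: IDs of content already identified as terrorist propaganda.
    related:  hypothetical lookup mapping a content ID to IDs of material
              connected to it (same uploader, reposts, shared links, ...).
    Returns the set of candidate IDs to queue for review -- not for
    automatic removal.
    """
    seen = set(seed_ids)
    queue = deque((cid, 0) for cid in seed_ids)
    candidates = set()
    while queue:
        cid, depth = queue.popleft()
        if depth == max_depth:
            continue  # stop expanding once the depth budget is spent
        for nxt in related.get(cid, ()):
            if nxt not in seen:
                seen.add(nxt)
                candidates.add(nxt)
                queue.append((nxt, depth + 1))
    return candidates
```

The depth cap matters in practice: without it, a single flagged post could transitively implicate a huge swath of the graph, so candidates are surfaced in expanding rings rather than all at once.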
"It is an enormous challenge to keep people safe on a platform used by almost two billion people every month, posting and commenting in more than 80 languages in every corner of the globe".
Facebook also mentioned its partnerships with government and industry, noting that it works with NGOs and others to stem the tide of terrorist propaganda.
YouTube, Facebook, Twitter and Microsoft last year created a common database of digital fingerprints automatically assigned to videos or photos of militant content to help each other identify the same content on their platforms.
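The shared database works by storing fingerprints rather than the media itself, so each company can check uploads against material its peers have already removed. The toy model below illustrates the contribute-and-check flow; real deployments use robust perceptual hashes (such as Microsoft's PhotoDNA) so that re-encoded or lightly edited copies still match, whereas this sketch uses exact SHA-256 digests for simplicity, and the class and method names are invented for illustration.

```python
import hashlib

class SharedHashDatabase:
    """Toy model of the cross-company hash-sharing database."""

    def __init__(self):
        self._hashes = set()

    @staticmethod
    def fingerprint(data: bytes) -> str:
        # Exact-match digest; a production system would use a
        # perceptual hash tolerant of re-encoding and cropping.
        return hashlib.sha256(data).hexdigest()

    def contribute(self, data: bytes) -> str:
        """One platform adds a fingerprint of removed content."""
        h = self.fingerprint(data)
        self._hashes.add(h)
        return h

    def is_known(self, data: bytes) -> bool:
        """Another platform checks an upload against shared fingerprints."""
        return self.fingerprint(data) in self._hashes
```

Because only hashes are exchanged, no company has to redistribute the offending media itself to help its peers block it.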
"We also have a global team that responds within minutes to emergency requests from law enforcement".
Facebook committed to making its platform a "hostile place for terrorists" and outlined the steps it has taken to do so.
Facebook's new technology includes "image matching", which will be able to detect when someone uploads a previously flagged "propaganda video" or image, and "language understanding", which will analyze written support and praise of terrorist organizations in an effort to further understand their communication.
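Facebook has not described how its "language understanding" models work internally, but the blog post's mention of deriving "text-based signals" from already-removed posts suggests a scoring approach along these lines. The sketch below is a deliberately minimal, hypothetical illustration: the phrase weights and threshold are assumptions, and a real system would use learned classifiers over far richer features.

```python
def propaganda_signal(text, signal_phrases):
    """Score text against weighted phrases derived from removed posts.

    signal_phrases: hypothetical mapping of phrase -> weight, standing in
    for signals mined from the corpus of previously removed content.
    A high score would queue the post for specialist review, not
    trigger automatic removal.
    """
    lowered = text.lower()
    return sum(weight for phrase, weight in signal_phrases.items()
               if phrase in lowered)
```

Even this crude version shows why the corpus of removed content matters: the signals come from text that human reviewers have already judged, so the model inherits their decisions rather than guessing from scratch.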
In a separate post Thursday morning, Facebook said it will be seeking public feedback and sharing its own thinking on thorny issues, including the definition of fake news, the removal of controversial content, and what to do with a person's online identity when they die.