Facebook has announced details of steps it is taking to remove terrorist-related content.

The move comes after growing pressure from governments for technology companies to do more to take down material such as terrorist propaganda.

In a series of blog posts by senior figures and an interview with the BBC, Facebook says it wants to be more open about the work it is doing.

The company told the BBC it was using artificial intelligence to spot images, videos and text related to terrorism as well as clusters of fake accounts.
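The article does not detail how this automated spotting works, but a standard way to catch known imagery at upload time is perceptual hashing: the uploaded file's compact "fingerprint" is compared against fingerprints of content already identified and removed. The sketch below is purely illustrative; the average-hash scheme, the function names, the hash database and the match threshold are assumptions for the example, not a description of Facebook's actual system.

```python
# Illustrative sketch of hash-based image matching (hypothetical names).
# Requires the Pillow imaging library: pip install Pillow
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Perceptual 'average hash': shrink the image to size x size grayscale,
    then set one bit per pixel that is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; small distances mean near-identical images."""
    return bin(a ^ b).count("1")

# Hypothetical database of hashes of previously removed imagery.
KNOWN_HASHES = {0x8F3C_21A0_55E0_91BB}

def matches_known_content(path: str, threshold: int = 10) -> bool:
    """Flag an upload if it is within the threshold of any known hash."""
    h = average_hash(path)
    return any(hamming_distance(h, k) <= threshold for k in KNOWN_HASHES)
```

Because the hash tolerates small edits such as re-encoding or resizing, this kind of matching can flag re-uploads of known material before anyone reports them, which is the proactive detection the company describes.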

“We want to find terrorist content immediately, before people in our community have seen it,” it said.

No safe space

The ability of so-called Islamic State to use technology to radicalise and recruit people has raised major questions for the large technology companies.

They have been criticised for running platforms used to spread extremist ideology and inspire people to carry out acts of violence.

Governments, and the UK in particular, have been pushing for more action in recent months, and across Europe talk has been moving towards legislation or regulation.

Earlier this week in Paris, the British prime minister and the president of France launched a joint campaign to ensure the internet could not be used as a safe space for terrorists and criminals.

Among the measures under consideration, they said, was a new legal liability, potentially including fines, for companies that fail to remove certain content.

Facebook says it is committed to developing new ways of finding and removing material, and now wants to do more than talk about it.

“We want to be very open with our community about what we’re trying to do to make sure that Facebook is a really hostile environment for terror groups,” Monika Bickert, director of global policy management at Facebook, told the BBC.

One criticism from British security officials is that companies rely too heavily on others to report extremist content rather than seeking it out proactively themselves.

Facebook has previously announced it is adding 3,000 employees to review content flagged by users.

But it also says that more than half of the accounts it removes for supporting terrorism are ones it finds itself.

It also says it is now using new technology to improve this proactive work.

“We know we can do better at using technology – and specifically artificial intelligence – to stop the spread of terrorist content on Facebook,” the company says.

SOURCE: Gordon Corera
BBC News
