


Policing a platform on which billions of people freely and openly share their thoughts, their videos, and their pictures is not an easy task. But that is the platform Facebook built.

A police officer secures the area in front of the Masjid al Noor mosque after a shooting incident in Christchurch on March 15, 2019.

In November, Facebook released a report on how it was policing its platform. Trying to show it was being proactive in finding posts that violated its terms of service, the company said that between July and September last year it had found 99.5% of “terrorist propaganda” before users had reported the posts, and 96.8% of “violence and graphic content.”

Mike Schroepfer, Facebook’s chief technology officer, boasted in a recent interview with Fortune magazine about how good Facebook’s artificial intelligence systems are at telling the difference between pictures of broccoli and pictures of marijuana. In one example, Schroepfer showed how the systems could determine, with about 90% accuracy, which image shown to them contained broccoli and which contained marijuana. It was an illustration of how Facebook could, perhaps, crack down on attempted drug sales on the platform.
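Schroepfer’s demo is, at heart, a two-class image classifier that returns a label and a confidence score. Below is a minimal sketch of that kind of system; the checkpoint file, class labels, and input image are hypothetical stand-ins, not anything Facebook has described.

```python
# A minimal sketch of a two-class image classifier of the kind Schroepfer
# demonstrated. The checkpoint "broccoli_vs_marijuana.pt" and the input
# image are hypothetical; Facebook has not published such a model.
import torch
from PIL import Image
from torchvision import transforms

CLASSES = ["broccoli", "marijuana"]

# Standard ImageNet-style preprocessing for a convolutional classifier.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify(image_path: str, model: torch.nn.Module) -> tuple[str, float]:
    """Return the predicted label and the model's confidence in it."""
    image = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1)[0]
    confidence, index = probs.max(dim=0)
    return CLASSES[index], confidence.item()

model = torch.load("broccoli_vs_marijuana.pt", weights_only=False)  # hypothetical
model.eval()
label, confidence = classify("listing_photo.jpg", model)
print(f"{label} ({confidence:.0%} confident)")
```

The catch, as Farid argues below, is that a confidence score like this is only right about nine times out of ten.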

But at Facebook’s scale, with billions of posts, being wrong 10% of the time is not good enough, Hany Farid, a professor at Dartmouth and an expert in digital forensics and image analysis, told CNN Business on Friday. “20% of the work is to get you to 90% accuracy,” he said, adding that the other 80% of the work comes in getting to 99.9% accuracy.
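A quick back-of-the-envelope calculation shows what those accuracy figures mean in absolute numbers. The daily post volume below is an illustrative assumption, not a figure from the article:

```python
# Back-of-the-envelope: what the accuracy figures Farid cites mean in
# absolute numbers. DAILY_POSTS is an illustrative assumption.
DAILY_POSTS = 1_000_000_000  # assume ~1 billion pieces of content per day

def errors_per_day(accuracy: float, volume: int = DAILY_POSTS) -> int:
    """Expected number of posts a classifier gets wrong each day."""
    return round((1.0 - accuracy) * volume)

for accuracy in (0.90, 0.999):
    print(f"{accuracy:.1%} accuracy -> {errors_per_day(accuracy):,} errors/day")
# 90.0% accuracy -> 100,000,000 errors/day
# 99.9% accuracy -> 1,000,000 errors/day
```

Even at 99.9% accuracy, a million posts a day would still be wrongly flagged or missed, which is Farid’s point about why scale changes the problem.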

“We are not even close,” Farid said of the artificial intelligence. “We are years away from being able to do the sophisticated, nuanced things that humans do very well.” Machines, he said, “can’t even tell the difference between broccoli and marijuana, let alone if a video is of a movie, or a video game, or documenting war crimes, or is a crazy guy killing people in a mosque.”
