Facebook struggles to contain fake news
Following pressure from users, the social network introduced tools to stem the spread of false information. But the rollout has been rocky at best.
When Facebook’s new fact-checking system labeled a Newport Buzz article as possible “fake news”, warning users against sharing it, something unexpected happened. Traffic to the story skyrocketed, according to Christian Winthrop, editor of the local Rhode Island website.
“A bunch of conservative groups grabbed this and said, ‘Hey, they are trying to silence this blog – share, share, share,’” said Winthrop, who published the story that falsely claimed hundreds of thousands of Irish people were brought to the US as slaves. “With Facebook trying to throttle it and say, ‘Don’t share it,’ it actually had the opposite effect.”
The spread of Winthrop’s piece after it was debunked and branded “disputed” is one of many examples of the pitfalls of Facebook’s much-discussed initiatives to thwart misinformation on the social network by partnering with third-party fact-checkers and publicly flagging fake news. A Guardian review of false news articles, along with interviews with fact-checkers and writers who produce fake content, suggests that Facebook’s highly promoted initiatives are regularly ineffective and in some cases appear to be having minimal impact.
Articles formally debunked by Facebook’s fact-checking partners – including the Associated Press, Snopes, ABC News and PolitiFact – frequently remain on the site without the “disputed” tag warning users about the content. And when fake news stories do get branded as potentially false, the label often comes after the story has already gone viral and the damage has been done. Even in those cases, it’s unclear to what extent the flag actually limits the spread of propaganda.