If a veteran journalist, who should have known better, can be fooled into writing a column based on a fake interview, how can we expect a typical news consumer to differentiate between truth and fiction?
It’s a wonder we’re not all being manipulated by false reports and propaganda. Media professionals are working to counter the flood of fake news, but often their actions are limited to traditional media responses. We figure if we work hard to provide credible news and information, and maybe increase the fact-checking, we’ll have done our job.
That’s no longer adequate. Not when it’s so easy to be fooled.
Take, for example, Yen Makabenta of the Manila Times. Makabenta wrote a column in September based largely on fabricated quotes, attributed to the American Ambassador to the United Nations, that he found on a fake “Al Jazeera” Web site.
And we know this is just the tip of the iceberg. For instance, it recently emerged that two popular Twitter personalities, whose tens of thousands of followers included members of U.S. President Donald Trump’s campaign team, never existed. Since the election, more than 2,700 fake Twitter accounts have been found.
Fake news is nothing new and has been with us at least since the time of Gutenberg. What’s changed is the ease with which these insidious stories can spread through social and digital media and find their targets.
For many, the situation looks grim. A survey of more than 1,100 Internet and technology experts by Pew Research Center found that half of them don’t believe trusted methods to combat fake news can be found within the coming decade.
The good news is that half of them do. There are, thankfully, a growing number of initiatives with missions to expose fake news, provide legitimate information, and protect democratic society. But the future depends on a much greater response.
If you look at what different organisations are doing to combat fake news, you find their activities go beyond the purely editorial to include technological and educational initiatives. More of these efforts are needed.
Because so much of the problem spreads through the big platforms, a big part of the solution must come from them. Efforts like Twitter Moments, which pairs machine-learning systems with human moderators, or Facebook’s current push to add thousands of human fact-checkers, are useful.
But many people remain unsettled; the platforms also need to be more transparent about the algorithms they use and what those algorithms are based on.
Some news organisations are contributing by sharing best-practices guides, like those from The Associated Press and Harvard’s First Draft on when and how to report on propaganda and fake news.
We are also seeing new efforts like the Knight Foundation’s US$4.5 million in new funding to address the decline in trust in news media. And, in the face of shrinking newsrooms, projects like Report for America, which plans to bring 1,000 journalists into local newsrooms over the next five years, are addressing a core reason for that decline.
Education might be the most important response, for nothing will work unless news consumers learn to think critically and question what they read.
Media literacy programmes are emerging, like a new project in Italian schools that teaches students how to recognise and stop the sharing of fake news and conspiracy theories.
Individual news companies can contribute both by investing in quality journalism, with transparency about the news process, and by reaching out to their audiences about the role they play and the daily fight they wage against misinformation. Traditional media must be at the centre of these actions.