“Almost everyone is doing monitoring,” said Talia Stroud, director of the Engaging News Project (ENP). “But in many newsrooms, that’s where analytics work stops.”
Stroud’s comments come in response to a survey of U.S. newsrooms and their use of analytics by the ENP (based at the University of Texas at Austin). The researchers are the first to note that the sample is far from exhaustive (525 respondents), but within it they found that, while “newsrooms are most interested in learning more about engaging audiences … newspaper-focused newsrooms are comparatively less likely to use research and development strategies.” Hence Stroud’s point.
We all agree that engagement is the new gold standard, but while there is no end of tools promising pinpoint accuracy in measuring engagement metrics, few newsrooms are developing strategies based on what those measurements show.
So, why is that? Is it because newsrooms are uninterested? Or are they suffering the same problem with stats that we see in the ad department?
Digital advertising was meant to kill the old truism about not knowing which part of the marketing budget was wasted. Instead, we find advertisers growing cynical about analytics, the reliability of the numbers, and what they really mean to audiences.
Our experience at the Institute is that there are too many cases where a large screen showing a plethora of statistics takes pride of place in the newsroom, but meaningful follow-up is rare: not because of laziness, but because much of the data simply isn’t useful.
To counter that and turn data into something with a purpose, we propose a simple three-step plan as a starting point.
- Define clearly what you need the data for.
- Decide what data you really need.
- Think about how best to present and share that data with the newsroom to get results.
Unless the aim is merely ticking boxes or decorating newsroom screens, one of the main reasons for collecting and sharing data is to help change behaviour. Usually that means making those concerned aware of something they might otherwise miss.
Aside from behavioural change, the other key reason for data analysis is to make better informed decisions; for example, daily decisions on what stories to prioritise and when.
Selecting the goal should shape what data you choose to collect. This is not about dizzyingly big numbers or a bewildering range of figures and data points, but rather the opposite — a filter of just what really matters. Choosing what matters is entirely dependent on the kind of business you have.
For example, if you run a paid-content site, the data that matters for decision-making is different from that of a site generating income from display ads. Pageviews versus dwell time and engagement, or unique views versus returning customers, aren’t just different ways of slicing the same cake; they reflect fundamentally different business models.
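To make the distinction concrete, here is a minimal sketch with made-up event data and hypothetical field names. An ad-funded site cares mostly about raw volume; a subscription site cares about depth and repeat visits, so the same log yields different headline numbers:

```python
from collections import defaultdict

# Hypothetical pageview events: (user_id, article_id, seconds_on_page)
events = [
    ("u1", "a1", 15), ("u1", "a2", 240),
    ("u2", "a1", 10), ("u2", "a1", 12),
    ("u3", "a2", 300),
]

# Ad-funded model: raw volume matters, so count pageviews per article.
pageviews = defaultdict(int)
for _, article, _ in events:
    pageviews[article] += 1

# Subscription model: depth matters, so track dwell time and unique,
# returning readers per article instead.
dwell = defaultdict(int)
visitors = defaultdict(set)
for user, article, seconds in events:
    dwell[article] += seconds
    visitors[article].add(user)
```

On this toy log, "a1" wins on pageviews while "a2" wins on dwell time, which is exactly the kind of divergence that should steer a desk one way or the other depending on the revenue model.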
The third point is how to present that information.
Handing out a spreadsheet with the top stories in red is still a talking point at news conferences, but it is a half-baked way of going about the business. Too often those “top” stories are raw data, with no attempt to distinguish site-wide visitor information from section-by-section analysis.
Highlighting the top stories by view is easy and quick to grasp, which makes it tempting to do, but it presumes all reader engagements and all content sections are equal. That’s a sure way to demotivate section editors and draw accusations of racing down the clickbait-paved road to the bottom.
As one simple measure, statistics should be split by section, or entertainment and sport will often dominate. Too often, the analytics provided by the likes of Chartbeat or Google Analytics are reduced to a simple hit parade of the most popular stories.
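The fix is straightforward to sketch. Assuming a flat list of story stats (the data and field names here are invented for illustration), ranking within each section surfaces a top performer for every desk, where a site-wide sort would show nothing but sport:

```python
from collections import defaultdict

# Hypothetical story stats: (section, headline, views)
stories = [
    ("sport", "Cup final report", 90_000),
    ("sport", "Transfer rumour", 70_000),
    ("politics", "Budget analysis", 12_000),
    ("politics", "Interview with minister", 9_000),
    ("culture", "Film review", 8_000),
]

# Site-wide hit parade: sport fills the top slots, other desks vanish.
site_wide = sorted(stories, key=lambda s: s[2], reverse=True)[:3]

# Per-section ranking: each desk sees its own best-performing story.
by_section = defaultdict(list)
for section, headline, views in stories:
    by_section[section].append((headline, views))
top_per_section = {
    section: max(items, key=lambda item: item[1])
    for section, items in by_section.items()
}
```

The per-section view is what keeps a politics or culture editor engaged with the numbers rather than demoralised by them.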
In at least one European newsroom, we have seen a screen installed showing the status of the publication’s digital content. The section editor asked, “Do we have to have a screen? It will distract us from work.” It was explained that the screen told the team whether their work was achieving its goals. The section editor thought about it and asked, “So do I have to switch it on?”
It doesn’t matter how good the tool is if the news team doesn’t understand and believe its message.
What does work about the hit parade approach is that it comes in the form of easily understood graphics. Newsroom staff are typically uninterested in learning the jargon of marketing, and they glaze over (reasonably enough) when the talk turns to CPMs or UVs.
Despite modern cars being largely computerised, it is telling that dashboards still feature a large number of analogue-style instruments. When it comes to taking in key information at a glance, the appeal of gauges and red/green/amber indicator lights still persists.
The same is true for newsrooms, where most managers and journalists are less interested in numbers and more interested in snap information about whether to step on the brake or the accelerator.
Boiling data down to simple points is another huge help. We knew a media company with two screens at the entrance, installed to make a point: one showed the daily evolution of print subscriptions, the other the equivalent in digital subscriptions. Simple enough, and a forceful statement of a message central to the work of everyone walking past.
The final point about making metrics mean something is that there is a key difference between long-term and short-term learning.
For the short term, fast-changing real-time data is invaluable. But for strategic decisions, it is essential to break down usage profiles over days and weeks before leaping to conclusions.
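The difference is easy to show with numbers. In this sketch (the traffic figures and window length are invented), a single day’s raw count can look alarming, for example over a weekend dip, while a trailing weekly average reveals the actual trend:

```python
from datetime import date, timedelta

# Hypothetical daily visit counts for one section over two weeks,
# with a visible weekend dip at the end of each week.
counts = [900, 950, 1000, 980, 1100, 400, 380,    # week 1
          920, 970, 1050, 1010, 1150, 420, 390]   # week 2
daily_visits = {date(2024, 1, 1) + timedelta(days=i): v
                for i, v in enumerate(counts)}

def weekly_average(visits, end, days=7):
    """Average over a trailing window, smoothing out daily swings."""
    window = [visits[end - timedelta(days=i)] for i in range(days)]
    return sum(window) / days

today = date(2024, 1, 14)
snapshot = daily_visits[today]               # short-term: today's raw number
trend = weekly_average(daily_visits, today)  # long-term: trailing week
```

The snapshot (a Sunday) sits far below the weekly trend, which is the kind of gap that should stop anyone from drawing strategic conclusions off a single real-time reading.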
Letting the newsroom know you have both short-term and long-term priorities is another way of helping people contribute and ensuring buy-in. Never underestimate a section editor’s objections to a particular metric.
Because if your goal is to mould behaviour, it takes that buy-in. Otherwise, all you’ve done is give the newsroom a screen to switch off or ignore at the first decent opportunity.
This post was also published on the INMA Media Leaders blog.