Last week I delivered the first lecture in our Introduction to Journalism unit. I am building on material that my colleague, Caroline Fisher, developed in 2014. One challenge of teaching journalism is that every example has to be ‘up to date’. One of the things Caroline discussed in her 2014 lecture was the set of predictions for 2014 presented by the Nieman Lab.
The Nieman Lab is a kind of journalism think tank, clearing house and site of experimentation. At the end of each year it asks professionals and journalism experts to suggest what they think is going to happen in journalism in the coming year.
Incorporating these predictions into a lecture is a good way to indicate to students what some professionals and experts think are going to be the big trends, changes and events in journalism for that year. (The anticipatory logic of predictions about near-future events has become a genre of journalism/media content that I briefly discuss in a forthcoming journal article. See what I did there.)
To analyse all 65 predictions for 2015 in a lecture that only goes for an hour would be almost impossible. What I did instead was carry out a little exercise in data journalism to introduce students to the practical concepts of ‘analytics’, ‘website scraping’, and the capacity to ‘tell a story through data’.
I created a spreadsheet using Outwit Hub Pro that scraped the author’s name, the title of each piece, its brief one- or two-line intro, and the number of Twitter and Facebook shares. I wanted to know how many times each prediction had been shared on social media. This could then serve as a possible indicator of whether readers thought a prediction was worth sharing through at least one or two of their social media networks. By combining the two share counts I had a very approximate way to measure which predictions readers of the site valued most.
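Outwit Hub Pro does the scraping through its GUI, but the combining-and-ranking step can be sketched in plain Python. The records below are hypothetical stand-ins for scraped rows (the field names and share splits are my own invention, not Outwit Hub output or real Nieman Lab figures):

```python
# Hypothetical scraped records. A real export would also carry the
# author's name and the one- or two-line intro for each prediction.
predictions = [
    {"title": "Prediction A", "twitter": 320, "facebook": 410},
    {"title": "Prediction B", "twitter": 150, "facebook": 90},
    {"title": "Prediction C", "twitter": 600, "facebook": 550},
]

def combined_shares(row):
    """Total social shares: a rough proxy for how much readers valued a piece."""
    return row["twitter"] + row["facebook"]

# Rank the predictions by combined shares, most shared first.
ranked = sorted(predictions, key=combined_shares, reverse=True)
for row in ranked:
    print(row["title"], combined_shares(row))
```

Summing the two networks flattens out any platform-specific quirks, which is why the resulting measure is only ever approximate.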
I have uploaded the table of the Nieman Lab Journalism Predictions 2015 to Google Drive. The table has some very quick and simple coding of each of the predictions so as to capture some sense of what area of journalism the prediction is discussing.
The graph resulting from this table indicates that four predictions were shared more than twice as many times as the other 61. The top three had almost three times the number of shares.
Here are the four stories with the total number of combined shares:
- Diversity: Don’t talk about it, be about it (1,652)
- The beginning of the end of Facebook’s traffic engine (1,617)
- The year we get creeped out by algorithms (1,529)
- A wave of P.R. data (1,339)
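Pulling those totals out of the scraped table is a one-liner once the data is in CSV form. A minimal sketch using Python’s standard library, with the four published totals inlined (the column names are my assumption, not the actual headers of the Google Drive spreadsheet):

```python
import csv
import io

# Inline stand-in for the exported table. The share totals are the four
# published figures; the column layout is assumed for illustration.
data = """title,total_shares
"Diversity: Don't talk about it, be about it",1652
The beginning of the end of Facebook's traffic engine,1617
The year we get creeped out by algorithms,1529
A wave of P.R. data,1339
"""

rows = list(csv.DictReader(io.StringIO(data)))
for row in rows:
    row["total_shares"] = int(row["total_shares"])

# The most-shared prediction of the four.
top = max(rows, key=lambda r: r["total_shares"])
print(top["title"], top["total_shares"])
```

Note the quoting around the first title: it contains a comma, which `csv.DictReader` handles correctly only when the field is quoted.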
I was then able to present these four links to my students and suggest that it was worth investigating why these four predictions were shared so many more times than the other 61.
In the most shared prediction, Aaron Edwards forgoes the tech-based predictions that largely shape the other pieces and instead argues that media organizations need to take diversity seriously:
I guess I could pivot here to talk about the future of news in 2015 being about mobile and personalization. (I would geek out about both immensely.) I suppose I could opine on how the reinvention of the article structure to better accommodate complex stories like Ferguson will be on every smart media manager’s mind, just as it should have been in 2014, 2013, and 2003.
But let’s have a different kind of real talk, shall we?
My prediction for the future of news in 2015 is less of a prediction and more of a call of necessity. Next year, if organizations don’t start taking diversity of race, gender, background, and thought in newsrooms seriously, our industry once again will further alienate entire populations of people that aren’t white. And this time, the damage will be worse than ever.
It was a different kind of prediction from the others on offer. Most people who work in the news-based media industry have been tasked with demonstrating a permanent process of professional innovation. Edwards’ piece strips back the tech-based rhetoric and gets at the heart of what media organizations need to be doing so as to properly address all audiences: “The excuse that it’s ‘too hard’ to find good journalists of diverse backgrounds is complete crap.”
The second most shared piece, on the limitations of over-relying on Facebook as a driver of traffic, fits perfectly with the kind of near-future prediction that we have come to expect. Gnomic industry forecasting flips the causal model with which we are familiar — we are driven by ‘history’ and it is the ‘past’ (past traumas, past successes, etc) that define our current character — so that it draws on the future as a kind of tech-mediated collective subconscious. Rather than being haunted by the past, we are haunted by possible futures of technological and organisational change.
My favourite piece among all the predictions is by Zeynep Tufekci, who suggests that things are going to get weird when our devices start to operate as if animated by a human intelligence. She suggests that “algorithmic judgment is the uncanny valley of computing”:
Algorithms are increasingly being deployed to make decisions where there is no right answer, only a judgment call. Google says it’s showing us the most relevant results, and Facebook aims to show us what’s most important. But what’s relevant? What’s important? Unlike other forms of automation or algorithms where there’s a definable right answer, we’re seeing the birth of a new era, the era of judging machines: machines that calculate not just how to quickly sort a database, or perform a mathematical calculation, but to decide what is “best,” “relevant,” “appropriate,” or “harmful.”