Myth of the Digital Native, Technologies of Convenience, and Scholarship

Do technologies of convenience shape activity?

I work with students to rethink the concept of the ‘filter bubble’ and to locate it in a much broader context: how the subject position of ‘user’ is created through the affordances of technologies and services. At stake is whether there is a new kind of audience passivity, one that is necessarily co-constituted through user activity, rather than the older notion of a passive mass audience.

In Culture + Technology, Slack and Wise (2005: 33) suggest that to be a “fully functioning adult member of the culture”:

you are likely to have accepted as necessities various technologies and technological practices that are not biological, but are rather cultural necessities.

My current students are afflicted with the generational myth of the ‘digital native’. The character of the ‘digital native’ frames engagement with technology, and the capabilities and affordances expected or assumed of an entire generation reconfigured as ‘users’. The idea is that, as with speakers of a language, there are native and immigrant users of technology. Digital natives have spent their lives “surrounded by and using computers, videogames, digital music players, video cams, cell phones, and all the other toys and tools of the digital age” (Prensky, 2001, p. 1). Bennett, Maton, and Kervin (2008) argue that, “rather than being empirically and theoretically informed, the debate [around digital natives] can be likened to an academic form of a ‘moral panic.’” For Sadowski (2014) it is a rearticulation of technology discourses that boost ‘gadgets’ over people:

The larger issue is that, when we insist on generalizing people into a wide category based on birth year alone, we effectively erase the stark discrepancies between access and privilege, and between experience and preference. By glancing over these social differences, and just boosting new technologies instead, it becomes easy to prioritize gadgets over what actually benefits a diverse contingent of people.

The myth of the ‘digital native’ has been translated into an educational context via three assumptions (Kirschner and van Merriënboer, 2013): first, that students really understand what they are doing; second, that they use technologies effectively and efficiently; and, third, that it is good to design education so that students can use digital technologies. What I notice with students is that they do not necessarily seek mastery of a given technology or set of skills, or even competence with regard to professional standards of proficiency, but ‘convenience’. This echoes findings from Kvavik (2005), who surveyed 4,374 students of the so-called ‘net generation’ to examine their relation to technology at university. Kvavik interrogated some of the assumptions that articulated a generational cohort with technological skill or capacity:

  • Do they ‘prefer technology’? Only moderate preference.
  • Is technology ‘increasingly important’? Most skilled students had mixed feelings.
  • Do they already possess ‘good IT skills in support of learning’? No; many skills had to be acquired, largely through the requirements of the curriculum.

Importantly, Kvavik found that ‘convenience’ was the most common unprompted open-text response about the good qualities of using technology at university. Relations of ‘convenience’ reintroduce new forms of passivity, where technology use is appreciated as ‘good’ if it is ‘convenient’. What happens in contexts where technology makes a given practice too convenient?

A Case for Practicing Inconvenient Scholarship?

Students are arguably disadvantaged by the technologies of scholarship that most academics and researchers take for granted, such as Google Scholar and the more general phenomenon of digitized scholarship. ‘Research practice’ in the humanities and social sciences prior to the web often began with a review of literature on a given topic or area of interest. This literature search was profoundly inconvenient, shaped by limited access and a slow temporality as physical copies of texts moved from repository to scholar. The equivalent moment in current ‘research practice’ in the humanities and social sciences is characterised by digital searches of an excess of information and the immediacy of ‘answers’ to ‘questions’ just posed. The relative ‘openness’ of access to such scholarship is a boon, but only in those circumstances where the research questions have not themselves been developed in a digitally-enabled and networked context.

The challenge for contemporary research students in particular is the number of possible sources (effectively infinite: in some areas the rate of publishing literally outpaces the maximum rate of engaged reading) and the duration of scholarship thus afforded for developing a critical appreciation. Undergraduate students face a greater challenge in that they will likely not engage with an area of scholarship long enough to develop an appreciation of the above problems.

Previous modes of scholarship would frame this as a problem of appreciating one’s disciplinary area: come to terms with the main names in a field and you will know the field. This response relies on rearticulating normative hierarchies of scholarship that work to counteract the benefits of ‘open’ scholarship. What is the point of open scholarship if the same institutions have their work valorised over others? This reintroduces a different set of affordances that implicate users in a different (social) technology of convenience.

I think a better way to approach this initial period of scholarship in any given project is to approach the development of an appreciation of a given field as a process and the overarching relation between scholar and field in this process is one of discovery. We all become detectives investigating comparable research problems, rather than judges lording over privileged ways of doing scholarship.

Facebook Research Critiques


Engineers at Facebook have worked to continually refine the ‘Edgerank’ algorithm over the last five or six years. They are addressing the problem of how to winnow the 1500+ pieces of content available at any moment from “friends, people they follow and Pages” down to a more manageable 300 or so. Questions have been asked about how Edgerank functions by two related groups. Marketers and the like are concerned about the ‘reach’ and ‘engagement’ of their content. Political communication researchers have been concerned about how this selection of content (1500>300) relies on certain algorithmic signals that potentially reduce the diversity of sources. These signals are social and practice-based (or what positivists would call ‘behavioral’). Whenever Facebook makes a change to its algorithm it measures success by the increase in ‘engagement’ (I’ve not seen a reported ‘failure’ of a change to the algorithm), meaning interactions by users with content, including ‘clickthrough rate’. Facebook is working to turn your attention into an economic resource by manipulating the value of your attention through your News Feed and then selling access to your News Feed to advertisers.
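To make that filtering step concrete, here is a minimal sketch of an Edgerank-style ranking pass. The three signals (affinity, weight, time decay) follow public descriptions of the algorithm, but the scoring function, the weights and the 300-story cut-off behaviour are my own illustrative assumptions, not Facebook’s code.

```python
from dataclasses import dataclass
import time

@dataclass
class Story:
    author_affinity: float    # how often the viewer interacts with this friend or Page
    engagement_weight: float  # likes/comments/clicks the story has already attracted
    created_at: float         # Unix timestamp of the post

def score(story: Story, now: float) -> float:
    """Affinity x weight x time decay: the three signals publicly attributed to Edgerank."""
    age_hours = (now - story.created_at) / 3600
    time_decay = 1.0 / (1.0 + age_hours)
    return story.author_affinity * story.engagement_weight * time_decay

def build_feed(candidates: list, limit: int = 300) -> list:
    """Winnow the ~1500 candidate stories down to the ~300 that are actually shown."""
    now = time.time()
    return sorted(candidates, key=lambda s: score(s, now), reverse=True)[:limit]
```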

The “random sample of 7000 Daily Active Users over a one-week period in July 2013” has produced many of the figures used in various online news reports on Facebook’s algorithm (via TechCrunch).

Exposure to ideologically diverse news and opinion on Facebook

Recently published research by three Facebook researchers was designed to ascertain the significance of the overall selection of content by the Edgerank algorithm. They compared two large datasets. The first dataset consisted of pieces of content shared on Facebook, specifically ‘hard’ news content. Through various techniques of text-based machine analysis they distributed these pieces of content along a single political spectrum from ‘liberal’ to ‘conservative’. This dataset was selected from “7 million distinct Web links (URLs) shared by U.S. users over a 6-month period between July 7, 2014 and January 7, 2015”. The second dataset consisted of 10.1 million active ‘de-identified’ individuals who ‘identified’ as ‘conservative’ or ‘liberal’. Importantly, it is not clear if they only included ‘hard news’ articles shared by those in the second set; the data represented in the appended supplementary material suggests that this was not the case. There are therefore two ways the total aggregate of Facebook activity and its user base was ‘sampled’ in the research. The researchers combined these two datasets to get a third dataset of event-based activity:

This dataset included approximately 3.8 billion unique potential exposures (i.e., cases in which an individual’s friend shared hard content, regardless of whether it appeared in her News Feed), 903 million unique exposures (i.e., cases in which a link to the content appears on screen in an individual’s News Feed), and 59 million unique clicks, among users in our study.

These events — potential exposures, unique exposures and unique clicks — are what the researchers are seeking to understand in terms of the frequency of appearance and then engagement by certain users with ‘cross-cutting’ content, i.e. content that cuts across ideological lines.
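To make the three event types concrete, here is a minimal sketch of how such a funnel might be tallied. The log structure and field names are my own assumptions, not the study’s actual pipeline.

```python
from collections import Counter

def tally_events(share_log, feed_log, click_log, study_users):
    """Tally the study's three event types for a set of users.

    share_log: (user, friend, url) tuples, a friend shared a hard-news URL
    feed_log:  (user, url) tuples, the URL actually appeared in the user's News Feed
    click_log: (user, url) tuples, the user clicked through
    """
    counts = Counter()
    for user, friend, url in share_log:
        if user in study_users:
            counts["potential exposures"] += 1  # shared by a friend, whether seen or not
    for user, url in feed_log:
        if user in study_users:
            counts["exposures"] += 1            # rendered on screen in the News Feed
    for user, url in click_log:
        if user in study_users:
            counts["clicks"] += 1
    return counts
```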

The first round of critiques of this research (here, here, here and here) focuses on various aspects of the study, but all resonate with a key critical point (as compared to a critique of the study itself) that the research is industry-backed and therefore suspect. I have issues with the study and I address these below, but they are not based on it being an industry study. Is our first response to find any possible reason for being critical of Facebook’s own research simply because it is ‘Facebook’?

Is the study scientifically valid?

The four critiques that I have linked to make critical remarks about the sampling method, specifically how the dataset of de-identified politically-identifying Facebook users was selected. The main article is confusing, and it is only marginally clearer in the appendix, but it appears that both samples were validated: against the broader US-based Facebook user population, and against the total set of news article URLs shared, respectively. This seems clear to me, and I am disconcerted that it is not clear to those others who have read and critiqued the study. The authors discuss validation at point 1.2 for the user population sample and point 1.4.3 for the ‘hard news’ article sample. I have my own issues with the (ridiculously) normative approach used here (the multiplicity of actual existing entries for political orientation is reduced to a single five-point continuum of liberal and conservative, just… what?), but that is not the basis of the existing critiques of the study.

Eszter Hargittai’s post at Crooked Timber is a good example. Let me reiterate that if I am wrong with how I am interpreting these critiques and the study, then I am happy to be corrected. Hargittai writes:

Not in the piece published in Science proper, but in the supplementary materials we find the following:

All Facebook users can self-report their political affiliation; 9% of U.S. users over 18 do. We mapped the top 500 political designations on a five-point, -2 (Very Liberal) to +2 (Very Conservative) ideological scale; those with no response or with responses such as “other” or “I don’t care” were not included. 46% of those who entered their political affiliation on their profiles had a response that could be mapped to this scale.

To recap, only 9% of FB users give information about their political affiliation in a way relevant here to sampling and 54% of those do so in a way that is not meaningful to determine their political affiliation. This means that only about 4% of FB users were eligible for the study. But it’s even less than that, because the user had to log in at least “4/7 days per week”, which “removes approximately 30% of users”.

Of course, every study has limitations. But sampling is too important here to be buried in supplementary materials. And the limitations of the sampling are too serious to warrant the following comment in the final paragraph of the paper:

we conclusively establish that on average in the context of Facebook, individual choices (2, 13, 15, 17) more than algorithms (3, 9) limit exposure to attitude-challenging content

How can a sample that has not been established to be representative of Facebook users result in such a conclusive statement? And why does Science publish papers that make such claims without the necessary empirical evidence to back up the claims?

The second paragraph above continues with a further sentence suggesting that the sample was indeed validated against a survey of 79 thousand other US Facebook users. Again, I am happy to be corrected here, but this at least indicates that the study authors have attempted to do precisely what Hargittai and the other critiques suggest they have not done. From the appendix of the study:

All Facebook users can self-report their political affiliation; 9% of U.S. users over 18 do. We mapped the top 500 political designations on a five-point, -2 (Very Liberal) to +2 (Very Conservative) ideological scale; those with no response or with responses such as “other” or “I don’t care” were not included. 46% of those who entered their political affiliation on their profiles had a response that could be mapped to this scale. We validated a sample of these labels against a survey of 79 thousand U.S. users in which we asked for a 5-point very-liberal to very-conservative ideological affiliation; the Spearman rank correlation between the survey responses and our labels was 0.78.

I am troubled that other scholars are so quick to condemn a study as invalid when none of the critiques (at the time of writing) appears to engage with the methods by which the study authors tested validity. Tell me it is not valid by addressing the ways the authors attempted to demonstrate validity; don’t just ignore it.
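The validation step the appendix describes is a rank correlation between the profile-derived labels and direct survey self-reports. A minimal sketch of that check, with invented toy data (the real inputs are of course not public):

```python
from scipy.stats import spearmanr

# Five-point scale: -2 (Very Liberal) to +2 (Very Conservative)
profile_labels   = [-2, -1, -1, 0, 1, 2, 2, -2, 0, 1]  # mapped from profile free text
survey_responses = [-2, -1,  0, 0, 1, 2, 1, -1, 0, 2]  # asked directly in a survey

rho, p_value = spearmanr(profile_labels, survey_responses)
print(f"Spearman rho = {rho:.2f}")  # the appendix reports rho = 0.78 across ~79,000 users
```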

What does the algorithm do?

A more sophisticated “It’s Not Our Fault…” critique is presented by Christian Sandvig. He notes that the study does not take into account how the presentation of News Feed posts and then ‘engagement’ with this content is a process in which the work of the Edgerank algorithms and the work of users cannot be easily separated (orig. emphasis):

What I mean to say is that there is no scenario in which “user choices” vs. “the algorithm” can be traded off, because they happen together (Fig. 3 [top]). Users select from what the algorithm already filtered for them. It is a sequence.**** I think the proper statement about these two things is that they’re both bad — they both increase polarization and selectivity. As I said above, the algorithm appears to modestly increase the selectivity of users.

And the footnote:

**** In fact, algorithm and user form a coupled system of at least two feedback loops. But that’s not helpful to measure “amount” in the way the study wants to, so I’ll just tuck it away down here.

A “coupled system of at least two feedback loops”, indeed. At least one of those feedback loops ‘begins’ with the way that users form social networks — that is to say, ‘friend’ other users. Why is this important? Our Facebook ‘friends’ (and Pages and advertisements, etc.) serve as the source of the content we are exposed to. Users choose to friend other users (or Pages, Groups, etc.) and then select from the pieces of content these other users (and Pages, advertisements, etc.) share to their networks. That is why I began this post with a brief explanation of the way the Edgerank algorithm works: it filters an average of 1500 possible posts down to an average of 300. Sandvig’s assertion that “[u]sers select from what the algorithm already filtered for them” is therefore only partially true. The Facebook researchers assume that Facebook users have chosen the sources of news-based content that can contribute to their feed. This is a complex set of negotiations around who or what has the ability, and then the likelihood, of appearing in one’s feed (or what could be described as all the options for organising the conditions of possibility for how content appears in one’s News Feed).

The study tests the work of the algorithm by comparing the ideological consistency of one’s social network with the ideological orientation of the stories presented, derived from each story’s news-based media enterprise. The hypothesis is that your ideologically-oriented ‘friends’ will share ideologically-aligned content. Is the range of stories presented across the ideological spectrum — liberal to conservative — (based on an analysis of the ideological orientation of each news-based media enterprise’s URL) different to the apparent ideological homophily of your social network? If so, then this difference is the work of the algorithm. The study finds that the algorithm works differently for liberal- and conservative-oriented users.
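A toy version of this comparison in Python. The alignment scores and the three stage lists are invented; only the logic (comparing cross-cutting fractions stage by stage) mirrors the paper’s approach.

```python
def cross_cutting_fraction(items, user_ideology):
    """Share of items whose ideological alignment is opposite in sign to the user's."""
    opposing = [i for i in items if i * user_ideology < 0]
    return len(opposing) / len(items) if items else 0.0

# Each number is the ideological alignment score of one story's source (invented data).
shared_by_network = [-1.5, -0.8, 0.9, -1.2, 1.4, -0.3, 0.7, -1.8]  # what friends shared
shown_in_feed     = [-1.5, -0.8, -1.2, -0.3, 0.7, -1.8]            # after algorithmic ranking
clicked           = [-1.5, -1.2, -0.3]                              # after user selection

user = -1  # a 'liberal' user on the -2..+2 scale
for stage, items in [("potential", shared_by_network),
                     ("exposed", shown_in_feed),
                     ("selected", clicked)]:
    print(stage, round(cross_cutting_fraction(items, user), 2))
# The drop from "potential" to "exposed" is attributed to the algorithm;
# the drop from "exposed" to "selected" is attributed to individual choice.
```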

Nathan Jurgenson spins this into an interpretation of how algorithms govern our behaviour:

For example, that the newsfeed algorithm suppresses ideologically cross cutting news to a non-trivial degree teaches individuals to not share as much cross cutting news. By making the newsfeed an algorithm, Facebook enters users into a competition to be seen. If you don’t get “likes” and attention with what you share, your content will subsequently be seen even less, and thus you and your voice and presence is lessened. To post without likes means few are seeing your post, so there is little point in posting. We want likes because we want to be seen.

Are ‘likes’ the only signal we have that helps shape our online behaviour? No. Offline feedback is an obvious one. What about cross-platform feedback loops? Most of what I talk about on Facebook nowadays consists of content posted by others on other social media networks. We have multiple ‘thermostats’ for gauging the appropriateness or inappropriateness of posts in terms of attention, morality, sociality, cultural value, etc. I do agree with Jurgenson when he endorses Jay Rosen’s observation that “It simply isn’t true that an algorithmic filter can be designed to remove the designers from the equation.” A valid way of testing this has not been developed yet.

The weird thing about this study is that, from a commercial point of view, Facebook should want to increase the efficacy of the Edgerank algorithms as much as possible, because they are the principal method for manipulating the value of ‘visibility’ in each user’s News Feed (through frequency/competition and position). Previous research by Facebook has sought to explore the relative value of social networks as compared to the diversity of content, including a project that investigated the network value of weak-tie social relationships.

Effect of Hard and Soft News vs the Work of Publics

What is my critique? All of the critiques mention that the Facebook research, from a certain perspective, has produced findings that are not really that surprising, because they largely confirm what we already understand about how people choose ideological content. A bigger problem for me is the hyper-normative classification of ‘hard’ and ‘soft’ news, as it obscures part of what makes this kind of research actually very interesting. For example, from the list of 20 stories provided as an example of hard and soft news, at least two of the ‘soft’ news stories are not ‘soft’ news stories by anyone’s definition. From the appendix (page 15):

  • Protesters are expected to gather in downtown Greenville Sunday afternoon to stage a Die In along Main Street …
  • Help us reach 1,000,000 signatures today, telling LEGO to ditch Shell and their dirty Arctic oil!

I did a Google search for the above text. One is a “die in” held as a protest over the death of Eric Garner. The other is a Greenpeace USA campaign.

There are at least two problems for any study that seeks to classify news-based media content according to normative hard and soft news distinctions when working to isolate how contemporary social media platforms have affected democracy:

1. The work of ‘politics’ (or ‘democracy’) does not only happen because of ‘hard news’. This is an old critique, but one that has been granted new life in studies of online publics. The ‘Die-In’ example is particularly important in this context. It is a story on a Fox News affiliate, and I have only been able to find the exact words provided in the appendix by the study authors referring to this article on Fox News-based sites. Fox News is understood to be ‘conservative’ in the study (table S3 of the appendix), and yet the piece on the ‘Die-In’ protest does not contain any specific examples of conservative framing. It is in fact a straightforward ‘hard news’ piece on the protest, one that I would actually interpret as journalistically sympathetic towards the protests. How many stories were classified as ‘conservative’ simply because they appeared on a Fox News-based URL? How many other allegedly ‘soft news’ stories were not actually soft news at all?

2. Why is ‘cross-cutting’ framed only along ideological lines of content and users, when it is clear that allegedly ‘soft news’ outlets can cover ‘political topics’ that more or less impact ‘democracy’? In the broadcast and print era of political communication, end users had far less participatory control over the reproduction of issue-based publics. They used ‘news’ as a social resource to isolate differences with others, to argue, to understand their relative place in the world, and so on. Of profound importance in the formation of online publics is the way that this work (call it ‘politics’ or not) takes over the front stage in what have been normatively understood as non-political domains. How many times have you had ‘political’ discussions in non-political forums? Or, more important for the current study, how many ‘Gamergate’ articles were dismissed from the sample because the machine-based methods of sampling could not discern that they were about more than video games? The study does not address how ‘non-political’ news-based media outlets become vectors of political engagement when they are used as a resource by users to rearticulate political positions within issue-based publics.

Nieman Lab 2015 Predictions for Journalism

Last week I delivered the first lecture in our Introduction to Journalism unit. I am building on the material that my colleague, Caroline Fisher, developed in 2014. One of the things about teaching journalism is that every example has to be ‘up to date’. Among the topics Caroline covered in the 2014 lecture were the predictions for 2014 as presented by the Nieman Lab.

The Nieman Lab is a kind of journalism think tank, clearing house, and site of experimentation. At the end of each year they ask professionals and journalism experts to suggest what they think is going to happen in journalism in the coming year.

Incorporating these predictions into a lecture is a good way to indicate to students what some professionals and experts think are going to be the big trends, changes and events in journalism for that year. (The anticipatory logic of predictions about near-future events has become a genre of journalism/media content that I briefly discuss in a forthcoming journal article. See what I did there.)

Analysing all 65 predictions for 2015 in a lecture that only goes for an hour would be almost impossible. What I did instead was carry out a little exercise in data journalism to introduce students to the practical concepts of ‘analytics’, ‘website scraping’, and the capacity to ‘tell a story through data’.

Nieman Lab 2015 Predictions

I created a spreadsheet using Outwit Hub Pro that scraped the author’s name, the title of the piece, the brief one- or two-line intro, and the number of Twitter and Facebook shares. I wanted to know how many times each prediction had been shared on social media. This could then serve as a possible indicator of whether readers thought the prediction was worth sharing through at least one or two of their social media networks. By combining the number of shares I had a very approximate way to measure which predictions readers of the site valued most.
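For anyone wanting to replicate this without Outwit Hub Pro, a rough Python equivalent might look like the sketch below. The CSS selectors are hypothetical placeholders; the real Nieman Lab page structure would need to be inspected first.

```python
import requests
from bs4 import BeautifulSoup

def scrape_predictions(index_url):
    """Scrape author, title, intro and combined share counts for each prediction."""
    soup = BeautifulSoup(requests.get(index_url).text, "html.parser")
    rows = []
    for entry in soup.select(".prediction"):  # hypothetical selector
        count = lambda sel: int(entry.select_one(sel).get_text(strip=True).replace(",", ""))
        rows.append({
            "author": entry.select_one(".author").get_text(strip=True),
            "title": entry.select_one(".title").get_text(strip=True),
            "intro": entry.select_one(".intro").get_text(strip=True),
            "combined_shares": count(".tw-count") + count(".fb-count"),
        })
    # Sort by combined Twitter + Facebook shares, most shared first
    return sorted(rows, key=lambda r: r["combined_shares"], reverse=True)
```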

The spreadsheet created through Outwit Hub Pro.

I have uploaded the table of the Nieman Lab Journalism Predictions 2015 to Google Drive. The table has some very quick and simple coding of each of the predictions so as to capture some sense of what area of journalism the prediction is discussing.

The graph resulting from this table indicates that four predictions were shared more than twice as many times as the other 61 predictions. The top three stories had almost three times the number of shares.

The four predictions with the highest number of shares clearly stand out from the rest.

Here are the four stories with the total number of combined shares:

  1. Diversity: Don’t talk about it, be about it (1,652)
  2. The beginning of the end of Facebook’s traffic engine (1,617)
  3. The year we get creeped out by algorithms (1,529)
  4. A wave of P.R. data (1,339)

I was then able to present these four links to my students and suggest that it was worth investigating why these four predictions were shared so many more times than the other 61 predictions.

In the most shared prediction, Aaron Edwards forgoes the tech-based predictions that largely shape the other pieces and instead argues that media organizations need to take diversity seriously:

I guess I could pivot here to talk about the future of news in 2015 being about mobile and personalization. (I would geek out about both immensely.) I suppose I could opine on how the reinvention of the article structure to better accommodate complex stories like Ferguson will be on every smart media manager’s mind, just as it should have been in 2014, 2013, and 2003.
But let’s have a different kind of real talk, shall we?
My prediction for the future of news in 2015 is less of a prediction and more of a call of necessity. Next year, if organizations don’t start taking diversity of race, gender, background, and thought in newsrooms seriously, our industry once again will further alienate entire populations of people that aren’t white. And this time, the damage will be worse than ever.

It was a different kind of prediction compared to the others on offer. Most people who work in the news-based media industry have been tasked with demonstrating a permanent process of professional innovation. Edwards’ piece strips back the tech-based rhetoric and gets at the heart of what media organizations need to be doing in order to properly address all audiences: “The excuse that it’s ‘too hard’ to find good journalists of diverse backgrounds is complete crap.”

The second most shared piece, on the limitations of over-relying on Facebook as a driver of traffic, fits perfectly with the kind of near-future prediction that we have come to expect. Gnomic industry forecasting flips the causal model with which we are familiar — we are driven by ‘history’ and it is the ‘past’ (past traumas, past successes, etc.) that defines our current character — so that it draws on the future as a kind of tech-mediated collective subconscious. Rather than being haunted by the past, we are haunted by possible futures of technological and organisational change.

My favourite piece among all the predictions is by Zeynep Tufekci, who suggests that things are going to get weird when our devices start to operate as if animated by a human intelligence. She suggests that “algorithmic judgment is the uncanny valley of computing”:

Algorithms are increasingly being deployed to make decisions where there is no right answer, only a judgment call. Google says it’s showing us the most relevant results, and Facebook aims to show us what’s most important. But what’s relevant? What’s important? Unlike other forms of automation or algorithms where there’s a definable right answer, we’re seeing the birth of a new era, the era of judging machines: machines that calculate not just how to quickly sort a database, or perform a mathematical calculation, but to decide what is “best,” “relevant,” “appropriate,” or “harmful.”

Education and Cluster Funded Explanations

The way fields of knowledge are split into teaching and research clusters in higher education in Australia is confusing. We have Field of Research (FOR) codes, through which we align our research outputs with disciplinary groupings, and Field of Education (FOE) codes, through which the government analyses student numbers, teaching performance and funding. They don’t always line up, which causes some headaches for staffing and management. In the context of the current shake-up to education funding, however, FOR and FOE mismatch is the least of anyone’s worries.

The proposed overhaul of funding tiers, reducing and reorganising them from eight to five, has changed the amount of funding universities receive for each Commonwealth Supported Place (CSP). As part of the current series of investigative hearings of the Education and Employment Legislation Committee of Senate Estimates, various senators asked questions about the proposed changes to education funding arrangements.

Below is an exchange from Thursday 5 June 2014 between Senator Rhiannon and various public servant representatives (page 55 of the pdf):

Communications tiers

Explaining Mr Warburton and Ms Paul’s respective answers, and then why those answers are wrong, requires understanding the structure of Field of Education codes.

The Field of Education codes have a tree-like structure. The 12 two-digit codes begin with 01 Natural and Physical Sciences, then 02 Information Technology, and end with 11 Food, Hospitality and Personal Services and 12 Mixed Field Programmes. Each two-digit FOE code then separates into four-digit codes. Senator Rhiannon was asking why some of the disciplines in the four-digit FOE of 1007 Communication and Media Studies (within the larger two-digit FOE of 10 Creative Arts) were being funded at different rates in the current proposal.

The four-digit codes then split into six-digit codes. In the current proposal, the six-digit FOE 100701 Audio Visual Studies is in a higher funding tier than the other four six-digit FOE areas (see the sketch after this list):

100703 Journalism
100705 Written Communication
100707 Verbal Communication
100799 Communication and Media Studies not elsewhere classified
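
To make the tree structure concrete, here is a minimal Python sketch. The codes and names are from the lists above; the tier numbers are placeholders I have invented to illustrate the discrepancy, not the proposal’s actual cluster values.

```python
FOE_TREE = {
    "10": ("Creative Arts", {                         # two-digit code
        "1007": ("Communication and Media Studies", {  # four-digit code
            "100701": "Audio Visual Studies",
            "100703": "Journalism",
            "100705": "Written Communication",
            "100707": "Verbal Communication",
            "100799": "Communication and Media Studies n.e.c.",
        }),
    }),
}

# Placeholder tier numbers, invented to illustrate the discrepancy in question
FUNDING_TIER = {"100701": 3, "100703": 2, "100705": 2, "100707": 2, "100799": 2}

print(FUNDING_TIER["100701"] > FUNDING_TIER["100703"])  # True: Audio Visual Studies sits higher
```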

To explain this discrepancy, Mr Warburton and Ms Paul both gesture towards the major 2011 report into funding arrangements, the Base Funding Review.

The closest any part of the Base Funding Review report comes to supporting their comments is a section across pages 56-57 that deals with funding of the Visual and Performing Arts in the context of student-intensive studio and project-based modes of teaching:

The disparity in costs for FOE 10 (creative arts) between institutions suggests that it may need to be split between funding clusters with visual and performing arts moved to a funding cluster with a higher rate.

I can’t find anywhere in the Base Funding Review where it drills down to four-digit FOE detail, let alone the detail required for an analysis at the six-digit FOE level that would make Mr Warburton and Ms Paul’s answer sufficient.

In my investigation into Mr Warburton and Ms Paul’s answer I realised there is a much bigger problem with the current policy proposal to separate funding into the proposed FOE-based tiers:

What is the relation between the macro-level detail of the two-digit FOEs in the 2011 Base Funding Review and the level of detail in the current budget proposal, which differentiates funding on the basis of four-digit and even six-digit FOE code clusters?

The 2011 Base Funding Review report and material does not provide an answer.

A large amount of the explanatory information in the Base Funding Review report is provided by 161 submissions, and from what I can gather none of these submissions provides the detail required to substantiate Mr Warburton and Ms Paul’s respective claims.

The Deloitte Access Economics report that forms part of the supplementary material of the Base Funding Review seems to be the basis of much of the non-submission-based material. The Deloitte report actually creates its own 19 groups based on aggregating combinations of two-, four- and six-digit FOEs. The report states, “Given the sheer number of 6 digit FOEs (more than 300) it was deemed appropriate to estimate the model based on an aggregation of clusters and bands to form ‘groups’” (21). The FOE two-digit code of 10 is simply regarded as ‘Art’, it seems. Again, there is no supporting material for Mr Warburton and Ms Paul’s respective claims that the separation of funding tiers is derived from the 2011 Base Funding Review.

Where is the detail for how these funding decisions were made?

By the way, one of the recommendations to government from the 2011 Base Funding Review regarding funding:

The Australian Government should address the identified areas of underfunding in the disciplines of accounting, administration, economics, commerce, medicine, veterinary science, agriculture, dentistry, and visual and performing arts, and should consider increasing the funding level for humanities and law.


Journalism Jobs

The ABC is reporting on a leaked “issues paper” from the University of Queensland (UQ), which suggests UQ plans to merge most of its Communications offerings. Part of this process is allegedly dropping the journalism course (although the leaked document states the contrary: there is no intention to drop the BJournalism degree).

“Issues paper” author and UQ Dean, Prof Tim Dunne, has definitely isolated some issues that are worth engaging with:

Demand for journalism is declining globally as employment opportunities diminish in the era of digital and social media. In recent years, there has been widespread job loss in the journalism profession in Australia. The Australian Government Job Outlook suggests that job openings for journalists and writers will be below average over the next five years, with an overall decline in the number of positions. At the same time, there is increased visibility (on-line, through social media etc) and new kinds of employment opportunities are emerging, including areas such as data analytics.

I am not sure how Journalism is taught at UQ but I find it very hard to believe that students are not equipped to take on the challenge of new “on-line” platforms in addition to traditional media forms.

Prof Dunne presents a bleak picture for journalism, but it is not entirely correct. What is the current state of the news-based media industry, formerly known as ‘journalism’? Absolute numbers are very hard to discern, but the trends are relatively straightforward.

The ABS Employment in Culture, 2006–2011 release captures some of the trends over the five years from 2006 to 2011.

[table caption="Table 1: Employment in Journalism" width="500" colwidth="200|100|100" colalign="left|center|center"]
Role,2006,2011
Newspaper or Periodical Editor,4844,5059
Print Journalist,6306,5510
Radio Journalist,671,603
Television Journalist,1059,1123
Journalists and Other Writers (nec),1279,1705
Journalists and Other Writers (nfd),1414,2125
Totals,15573,16125
[/table]

Much has been made of recent high-profile lay-offs at Fairfax and News Corp, as if they are the only places that hire journalists. For example, the current #fairgofairfax social media campaign to generate support for Fairfax employees has a high degree of visibility on Twitter. Indeed, the number of print journalists declined by around 800 over the five years from 2006 to 2011, but the numbers for the field as a whole went up. I shall return to this below.
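For transparency, here is a minimal Python sketch that recomputes the changes behind Table 1. The figures are copied from the ABS table above.

```python
# Per-role change, 2006 -> 2011, from ABS Employment in Culture
roles = {
    "Newspaper or Periodical Editor":      (4844, 5059),
    "Print Journalist":                    (6306, 5510),
    "Radio Journalist":                    (671, 603),
    "Television Journalist":               (1059, 1123),
    "Journalists and Other Writers (nec)": (1279, 1705),
    "Journalists and Other Writers (nfd)": (1414, 2125),
}

for role, (y2006, y2011) in roles.items():
    print(f"{role}: {y2011 - y2006:+d}")

total_2006 = sum(v[0] for v in roles.values())
total_2011 = sum(v[1] for v in roles.values())
print(f"Total: {total_2006} -> {total_2011} ({total_2011 - total_2006:+d})")
# Print journalists fell by ~800 while the field as a whole grew by ~550.
```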

When we turn to the Australian Government Job Outlook data it is clear that this increase in the number of journalism jobs is not surprising.

[table caption="Table 2: Journalists and Other Writers (Job Growth)" width="500" colwidth="200|150|150" colalign="left|center|center"]
Time Period,Occupation (per cent growth),All Occupations (per cent growth)
5 Year Growth,37.8,7.8
2 Year Growth,28.7,1.9
[/table]

It seems that Prof Dunne pays particular heed to this page of the Australian Government Job Outlook data regarding prospects:

Over the five years to November 2017, the number of job openings for Journalists and Other Writers is expected to be below average (between 5,001 and 10,000). Job openings can arise from employment growth and people leaving the occupation.

Employment for Journalists and Other Writers to November 2017 is expected to decline.

Employment in this large occupation (29,800 in November 2012) rose very strongly in the past five years and rose strongly in the long-term (ten years).

Journalists and Other Writers have an average proportion of full-time jobs (75.3 per cent). For Journalists and Other Writers working full-time, average weekly hours are 41.6 (compared to 41.3 for all occupations) and earnings are above average – in the eighth decile. Unemployment for Journalists and Other Writers is average.

So after witnessing jobs growth at four to five times the average over the past five years, and around fifteen times the average over the last two years (going by the Job Outlook figures above), there will ‘only’ be between 5,001 and 10,000 new positions available.

The broader journalism industry seems to be in a pretty good state of affairs, which contradicts populist conservative narratives about an oversupply of journalism graduates. Two years ago The Australian newspaper attacked journalism schools and attempted to open up another front in the Culture Wars (or return to old ground after the earlier ‘Media Wars’). It suggested that Australian journalism schools produce too many graduates, when it is apparent that universities were actually servicing demand. The Australian newspaper does not represent journalism in Australia; in fact, it is a tiny vocal minority.

The bottom line is that there has been explosive growth over the last decade in journalism and other jobs relating to the news-based media industry. Among the biggest growth areas measured in the Employment in Culture statistics for journalism is the ‘Not Elsewhere Classified’ category, with just under 500 new positions; occupations include blogger, critic, editorial assistant and essayist. The key point is that this growth is not in the legacy media industries where journalists have traditionally worked. Most people who work in the media industry know this to be intuitively correct. More media content (writing, filming, recording, producing, etc.) is created and distributed now than at any other point in history.

The real question that Prof Dunne asks, implied by his remarks about the rise of new employment areas, is this: what combination of skills and competences will serve our graduates in an era that produces more media content than ever before in human history? Or as he states: “What is likely is that there will continue to be a need for strong and vibrant courses in journalism that are practice-based”.

He gestures towards data analytics as an example. Many research projects show how newsrooms have learned to appreciate analytics information about their websites, and increasingly about individual users (in the era of paywalls and required logins). Students report that they feel empowered after the workshop in which, acting as editors, they set up a ‘dashboard’ in Google Analytics to create reports for their team of student journalists. They can see how older forms of journalistic ‘gut feeling’ map onto new analytics information.

Another example concerns the delegation of editorial responsibilities to more junior staff. Reading into the Employment in Culture figures, there has been an increase in the number of editors from 2006 to 2011. Occupations in this role include features editor, news editor, pictures editor, subeditor and, importantly, website/blog editor. One way to interpret this shift, congruent with other observations, is that there has been a ‘flattening out’ of the journalism industry, with fewer medium-specific silos and more network-based cross-platform media enterprises. We train graduates to be prepared to take on responsibilities that used to belong to senior journalists as editors but are now graduate-level positions.

Based on the proposed five-tier funding arrangements, there will be a refocus on design and audio-visual studies as the core units of journalism and communication studies. Part of this is because of the very strange separation of Audio Visual Studies from the other discipline areas in the 1007 Field of Education code, placing it in the funding tier that receives greater federal government funding.