I work with students to rethink the concept of the ‘filter bubble’ and locate it in a much broader context of how the subject position of user is created through affordances of technologies and services. At stake is whether or not there is a new kind of audience passivity, one that is necessarily co-constituted through user activity, rather than the older notions of a passive mass audience.
In Culture + Technology, Slack and Wise (2005: 33) suggest that to be a “fully functioning adult member of the culture”:
you are likely to have accepted as necessities various technologies and technological practices that are not biological, but are rather cultural necessities.
My current students are afflicted with the generational myth of the ‘digital native’. The figure of the ‘digital native’ frames engagement with technology, and the capabilities and affordances expected or assumed of an entire generation reconfigured as ‘users’. The idea is that, as with language speakers, there are native and immigrant users of technology. Digital natives have grown up “surrounded by and using computers, videogames, digital music players, video cams, cell phones, and all the other toys and tools of the digital age” (Prensky, 2001, p. 1). Bennett, Maton, and Kervin (2008) argue that the debate around ‘digital natives’, “rather than being empirically and theoretically informed, […] can be likened to an academic form of a ‘moral panic’”. For Sadowski (2014) it is a rearticulation of technology discourses that boost ‘gadgets’ over people:
The larger issue is that, when we insist on generalizing people into a wide category based on birth year alone, we effectively erase the stark discrepancies between access and privilege, and between experience and preference. By glancing over these social differences, and just boosting new technologies instead, it becomes easy to prioritize gadgets over what actually benefits a diverse contingent of people.
The myth of the ‘digital native’ has been translated into an educational context through three assumptions (Kirschner and van Merriënboer, 2013): first, that students really understand what they are doing; second, that students use technologies effectively and efficiently; and, third, that it is good to design education so that students can use digital technologies. What I notice with students is that they do not necessarily seek mastery over a given technology or set of skills, or even competence with regard to professional standards of proficiency, but ‘convenience’. This echoes findings from Kvavik (2005), who carried out a survey of 4374 students of the so-called ‘net generation’ to examine their relation to technology at university. Kvavik interrogated some of the assumptions that articulated a generational cohort with technological skill or capacity:
Do they ‘prefer technology’? Only moderate preference.
Is technology ‘increasingly important’? Most skilled students had mixed feelings.
Do they already possess ‘good IT skills in support of learning’? No; many skills had to be acquired, often through the requirements of the curriculum.
Importantly, Kvavik found that ‘convenience’ was the most common unprompted open text response to good qualities of using technology at university. Relations of ‘convenience’ reintroduce new forms of passivity, where technology use is appreciated as ‘good’ if it is ‘convenient’. What happens in contexts where technology makes a given practice too convenient?
A Case for Practicing Inconvenient Scholarship?
Students are arguably disadvantaged by the technologies of scholarship that most academics and researchers take for granted, such as Google Scholar and the more general phenomenon of digitized scholarship. ‘Research practice’ in the humanities and social sciences prior to the web often began with a review of literature on a given topic or area of interest. This literature search was profoundly inconvenient, shaped by limited access and a slow temporality as physical copies of texts were moved from repository to scholar. The equivalent moment in current ‘research practice’ in the humanities and social sciences is instead characterised by digital searches of an excess of information and the immediacy of ‘answers’ to ‘questions’ just posed. The relative ‘openness’ of access to such scholarship is a boon, but only in those circumstances where the research questions are not developed in a digitally-enabled and networked context.
The challenge for contemporary research students in particular is the number of possible sources (effectively infinite: in some areas the rate of publishing outpaces the maximum rate of engaged reading) and the duration of scholarship thus required for developing a critical appreciation. Undergraduate students face a greater challenge in that they will likely not engage with an area of scholarship long enough to develop an appreciation of the above problems.
Previous modes of scholarship would frame this as a problem of appreciating one’s disciplinary area: come to terms with the main names in a field and you will know the field. This response relies on rearticulating normative hierarchies of scholarship that work to counteract the benefits of ‘open’ scholarship. What is the point of open scholarship if the same institutions have their work valorised over others? This reintroduces a different set of affordances that implicate users in a different (social) technology of convenience.
I think a better way to approach this initial period of scholarship in any given project is to treat the development of an appreciation of a given field as a process, in which the overarching relation between scholar and field is one of discovery. We all become detectives investigating comparable research problems, rather than judges lording over privileged ways of doing scholarship.
Engineers at Facebook have worked to continually refine the ‘EdgeRank’ algorithm over the last five or six years. They are addressing the problem of how to filter the 1500+ pieces of content available at any moment from “friends, people they follow and Pages” down to a more manageable 300 or so. Questions have been asked about how EdgeRank functions by two related groups. Marketers and the like are concerned about the ‘reach’ and ‘engagement’ of their content. Political communication researchers have been concerned about how this selection of content (1500>300) relies on certain algorithmic signals that potentially reduce the diversity of sources. These signals are social and practice-based (or what positivists would call ‘behavioral’). Whenever Facebook makes a change to its algorithm it measures success as an increase in ‘engagement’ (I’ve not seen a reported ‘failure’ of a change to the algorithm), meaning interactions by users with content, including ‘clickthrough rate’. Facebook is working to turn your attention into an economic resource by manipulating the value of your attention through your News Feed and then selling access to your News Feed to advertisers.
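The general shape of this filtering can be sketched in code. The sketch below is not Facebook's actual implementation (which is proprietary); it uses only the three components that were publicly reported as EdgeRank's original signals (affinity, content-type weight, time decay), with made-up numbers, to show how 1500 candidate posts are reduced to a ranked feed of 300.

```python
import math
import random
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float  # how often the user interacts with this source (hypothetical)
    type_weight: float      # weighting by content type, e.g. photo vs link (hypothetical)
    age_hours: float        # time since the post was shared

def edgerank_score(post: Post, decay: float = 0.05) -> float:
    # The reported EdgeRank components: affinity x weight x time decay.
    return post.author_affinity * post.type_weight * math.exp(-decay * post.age_hours)

def build_feed(candidates: list[Post], feed_size: int = 300) -> list[Post]:
    # Filter ~1500 candidate posts down to the ~300 highest-scoring ones.
    return sorted(candidates, key=edgerank_score, reverse=True)[:feed_size]

random.seed(0)
candidates = [Post(random.random(), random.random(), random.uniform(0, 48))
              for _ in range(1500)]
feed = build_feed(candidates)
print(len(feed))  # 300
```

The point the sketch makes is structural: whatever the actual signals are, the feed is the top slice of a scored ranking, so anything that raises 'engagement' signals raises visibility.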
Exposure to ideologically diverse news and opinion on Facebook
Recently published research by three Facebook researchers was designed to ascertain the significance of the overall selection of content by the Edgerank algorithm. They compared two large datasets. The first dataset was of pieces of content shared on Facebook and specifically ‘hard’ news content. Through various techniques of text-based machine analysis they distributed these pieces of content along a single political spectrum of ‘liberal’ and ‘conservative’. This dataset was selected from “7 million distinct Web links (URLs) shared by U.S. users over a 6-month period between July 7, 2014 and January 7, 2015”. The second dataset was of 10.1 million active ‘de-identified’ individuals who ‘identified’ as ‘conservative’ or ‘liberal’. Importantly, it is not clear if they only included ‘hard news’ articles shared by those in the second set. The data represented in the appended supplementary material suggests that this was not the case. There are therefore two ways the total aggregate Facebook activity and user base was ‘sampled’ in the research. The researchers combined these two datasets to get a third dataset of event-based activity:
This dataset included approximately 3.8 billion unique potential exposures (i.e., cases in which an individual’s friend shared hard content, regardless of whether it appeared in her News Feed), 903 million unique exposures (i.e., cases in which a link to the content appears on screen in an individual’s News Feed), and 59 million unique clicks, among users in our study.
These events — potential exposures, unique exposures and unique clicks — are what the researchers are seeking to understand in terms of the frequency of appearance and then engagement by certain users with ‘cross-cutting’ content, i.e. content that cuts across ideological lines.
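The three event types are nested: every exposure is also a potential exposure, and every click is also an exposure. A minimal sketch, with hypothetical event records standing in for the study's de-identified data, shows how the three counts relate:

```python
from dataclasses import dataclass

@dataclass
class ShareEvent:
    user: str      # de-identified user id (hypothetical)
    url: str       # hard-news link shared by a friend
    in_feed: bool  # did the algorithm surface it in the News Feed?
    clicked: bool  # did the user click through?

def tally(events: list[ShareEvent]) -> tuple[int, int, int]:
    potential = len(events)                                 # a friend shared it at all
    exposed = sum(e.in_feed for e in events)                # it appeared on screen
    clicked = sum(e.clicked for e in events if e.in_feed)   # the user engaged with it
    return potential, exposed, clicked

events = [
    ShareEvent("u1", "a.com/x", in_feed=True, clicked=True),
    ShareEvent("u1", "b.com/y", in_feed=True, clicked=False),
    ShareEvent("u2", "a.com/x", in_feed=False, clicked=False),
]
print(tally(events))  # (3, 2, 1)
```

In the study's terms, the drop from 3.8 billion potential exposures to 903 million exposures is the algorithm's work; the drop from exposures to 59 million clicks is the users' work.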
The first round of critiques of this research (here, here, here and here) focuses on various aspects of the study, but all resonate with a key critical point (as compared to a critique of the study itself) that the research is industry-backed and therefore suspect. I have issues with the study and I address these below, but they are not based on it being an industry study. Is our first response to find any possible reason for being critical of Facebook’s own research simply because it is ‘Facebook’?
Is the study scientifically valid?
The four critiques that I have linked to make critical remarks about the sampling method, and specifically about how the dataset of de-identified politically-identifying Facebook users was selected. The main article is confusing, and it is only marginally clearer in the appendix, but it appears that both samples were validated: against the broader US-based Facebook user population and against the total set of news article URLs shared, respectively. This seems clear to me, and I am disconcerted that it is not clear to those others that have read and critiqued the study. The authors discuss validation, specifically in point 1.2 for the user population sample and point 1.4.3 for the validation of the ‘hard news’ article sample. I have my own issues with the (ridiculously) normative approach used here (the multiplicity of actual existing entries for political orientation are reduced to a single five-point continuum of liberal and conservative, just… what?), but that is not the basis of the existing critiques of the study.
Eszter Hargittai’s post at Crooked Timber is a good example. Let me reiterate that if I am wrong with how I am interpreting these critiques and the study, then I am happy to be corrected. Hargittai writes:
The second paragraph above continues with a further sentence suggesting that the sample was indeed validated against a sample of 79,000 other US-based Facebook users. Again, I am happy to be corrected here, but this at least indicates that the study authors have attempted to do precisely what Hargittai and the other critiques are suggesting they have not done. From the appendix of the study:
I am troubled that other scholars are so quick to condemn a study as invalid when none of the critiques (at the time of writing) attempts to engage with the methods by which the study authors tested validity. Tell me it is not valid by addressing the ways the authors attempted to demonstrate validity; don’t just ignore them.
What does the algorithm do?
A more sophisticated “It’s Not Our Fault…” critique is presented by Christian Sandvig. He notes that the study does not take into account how the presentation of News Feed posts and then ‘engagement’ with this content is a process in which the work of the EdgeRank algorithms and the work of users cannot be easily separated (orig. emphasis):
What I mean to say is that there is no scenario in which “user choices” vs. “the algorithm” can be traded off, because they happen together (Fig. 3 [top]). Users select from what the algorithm already filtered for them. It is a sequence.**** I think the proper statement about these two things is that they’re both bad — they both increase polarization and selectivity. As I said above, the algorithm appears to modestly increase the selectivity of users.
And the footnote:
**** In fact, algorithm and user form a coupled system of at least two feedback loops. But that’s not helpful to measure “amount” in the way the study wants to, so I’ll just tuck it away down here.
A “coupled system of at least two feedback loops”, indeed. At least one of those feedback loops ‘begins’ with the way that users form social networks, that is to say, ‘friend’ other users. Why is this important? Our Facebook ‘friends’ (and Pages, advertisements, etc.) serve as the source of the content we are exposed to. Users choose to friend other users (or Pages, Groups, etc.) and then select from the pieces of content these other users (and Pages, advertisements, etc.) share to their networks. That is why I began this post with a brief explanation of the way the EdgeRank algorithm works: it filters an average of 1500 possible posts down to an average of 300. Sandvig’s assertion that “[u]sers select from what the algorithm already filtered for them” is therefore only partially true. The Facebook researchers assume that Facebook users have chosen the sources of news-based content that can contribute to their feed. This is a complex set of negotiations around who or what has the ability, and then the likelihood, of appearing in one’s feed (or what could be described as all the options for organising the conditions of possibility for how content appears in one’s News Feed).
The study tests the work of the algorithm by comparing the ideological consistency of one’s social networks with the ideological orientation of the stories presented and of the news stories’ respective news-based media enterprises. It tests the hypothesis that your ideologically-oriented ‘friends’ will share ideologically-aligned content. Is the number of stories presented from across the ideological range, liberal to conservative (based on an analysis of the ideological orientation of each news-based media enterprise’s URL), different from the apparent ideological homophily of your social network? If so, then this is the work of the algorithm. The study finds that the algorithm works differently for liberal and conservative oriented users.
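The comparison can be sketched as a difference in proportions at two stages. The numbers and alignment labels below are invented for illustration, not taken from the paper; the structure is what matters: if the share of cross-cutting content in what the algorithm surfaces is lower than its share in what one's friends shared, that gap is attributed to the algorithm.

```python
def cross_cutting_share(stories: list[dict], user_alignment: str) -> float:
    # Fraction of stories whose source alignment differs from the user's.
    if not stories:
        return 0.0
    cross = sum(s["alignment"] != user_alignment for s in stories)
    return cross / len(stories)

# Hypothetical stages for one 'liberal' user: everything friends shared,
# versus what the ranking algorithm actually surfaced in the News Feed.
shared_by_friends = ([{"alignment": "liberal"}] * 70
                     + [{"alignment": "conservative"}] * 30)
surfaced_in_feed = ([{"alignment": "liberal"}] * 46
                    + [{"alignment": "conservative"}] * 14)

network_share = cross_cutting_share(shared_by_friends, "liberal")  # 0.30
feed_share = cross_cutting_share(surfaced_in_feed, "liberal")      # ~0.233
# The gap between the two proportions is what the study attributes
# to the algorithm's selection.
print(round(network_share - feed_share, 3))  # 0.067
```

The study's design question is precisely whether this gap is non-zero, and whether it differs for liberal- and conservative-identifying users.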
For example, that the newsfeed algorithm suppresses ideologically cross cutting news to a non-trivial degree teaches individuals to not share as much cross cutting news. By making the newsfeed an algorithm, Facebook enters users into a competition to be seen. If you don’t get “likes” and attention with what you share, your content will subsequently be seen even less, and thus you and your voice and presence is lessened. To post without likes means few are seeing your post, so there is little point in posting. We want likes because we want to be seen.
Are ‘Likes’ the only signal we have that shapes our online behaviour? No. Offline feedback is an obvious one. What about cross-platform feedback loops? Most of what I talk about on Facebook nowadays consists of content posted by others on other social media networks. We have multiple ‘thermostats’ for calibrating the appropriateness or inappropriateness of posts in terms of attention, morality, sociality, cultural value, etc. I agree with Jurgenson when he cites Jay Rosen’s observation that “[i]t simply isn’t true that an algorithmic filter can be designed to remove the designers from the equation.” A valid way of testing this has not been developed yet.
The weird thing about this study is that, from a commercial point of view, Facebook should want to increase the efficacy of the EdgeRank algorithms as much as possible, because they are the principal method for manipulating the value of ‘visibility’ in each user’s News Feed (through frequency/competition and position). Previous research by Facebook has sought to explore the relative value of social networks as compared to the diversity of content, including a project that investigated the network value of weak-tie social relationships.
Effect of Hard and Soft News vs the Work of Publics
What is my critique? All of the critiques mention that the Facebook research, from a certain perspective, has produced findings that are not really that surprising, because they largely confirm what we already understand about how people choose ideological content. A bigger problem for me is the hyper-normative classification of ‘hard’ and ‘soft’ news, as it obscures part of what makes this kind of research actually very interesting. For example, from the list of 20 stories provided as an example of hard and soft news, at least two of the ‘soft’ news stories are not ‘soft’ news stories by anyone’s definition. From the appendix (page 15):
Protesters are expected to gather in downtown Greenville Sunday afternoon to stage a Die In along Main Street …
Help us reach 1,000,000 signatures today, telling LEGO to ditch Shell and their dirty Arctic oil!
There are at least two problems for any study that seeks to classify news-based media content according to normative hard and soft news distinctions when working to isolate how contemporary social media platforms have affected democracy:
1. The work of ‘politics’ (or ‘democracy’) does not only happen because of ‘hard news’. This is an old critique, but one that has been granted new life in studies of online publics. The ‘Die-In’ example is particularly important in this context. It is a story on a Fox News affiliate, and I have only been able to find the exact words provided in the appendix by the study authors on Fox News-based sites. Fox News is understood to be ‘conservative’ in the study (table S3 of the appendix), and yet the piece on the ‘Die-In’ protest does not contain any specific examples of conservative framing. It is in fact a straightforward ‘hard news’ piece on the protest, one I would actually interpret as journalistically sympathetic towards the protesters. How many stories were classified as ‘conservative’ simply because they appeared on a Fox News-based URL? How many other allegedly ‘soft news’ stories were not actually soft news at all?
2. Why is ‘cross cutting’ framed only along the ideological lines of content and users, when it is clear that allegedly ‘soft news’ outlets can cover ‘political topics’ that more or less impact ‘democracy’? In the broadcast and print era of political communication, end users had far less participatory control over the reproduction of issue-based publics. They used ‘news’ as a social resource to isolate differences with others, to argue, to understand their relative place in the world, and so on. Of profound importance in the formation of online publics is the way that this work (call it ‘politics’ or not) takes over the front stage in what have been normatively understood as non-political domains. How many times have you had ‘political’ discussions in non-political forums? Or, more important for the current study, how many ‘Gamergate’ articles were dismissed from the sample because the machine-based methods of sampling could not discern that they were about more than video games? The study does not address how ‘non-political’ news-based media outlets become vectors of political engagement when they are used as a resource by users to rearticulate political positions within issue-based publics.
Media editor of The Australian, Sharri Markson, has produced an article titled ‘Activism a threat to journalism’. In it she draws on sources to argue that ‘activist journalism academics’ on ‘social media’ are a threat to journalism. She paraphrases her boss, the editor of the Australian newspaper, Chris Mitchell:
Editor-in-chief of The Australian, Chris Mitchell, said the greatest threat to journalism was not the internet or governments and press councils trying to limit free speech, but the rise of the activist journalist over the past 25 years and the privileging of the views of activist groups over the views of the wider community.
Worse than the figure of the ‘activist journalist’ is the ‘modern journalism academic’. Here Markson introduces a Mitchell quote so as to describe the ‘modern journalism academic’ as someone with opinions on political issues:
Mr Mitchell, who has edited newspapers for more than 20 years, said media academics who were vocal about ideological issues on social media were part of the problem.
“This is at the heart of my disdain for modern journalism academics. And anyone who watches their Twitter feeds as I do will know I am correct,’’ he said.
Tens of thousands of people, including journalism students and those starting their career in the industry, follow media academics Jenna Price, Wendy Bacon and journalist Margo Kingston on Twitter. All are opinionated on political issues.
Through its Media section, the Australian newspaper is running a small-scale ‘moral panic’ about the loss of efficacy of legacy media outlets, like the print-based Australian newspaper itself. Most of the people who work at the Australian newspaper have been to university and would more than likely have come across the concept of a moral panic. Even if they haven’t, as savvy media operators they should be familiar with the concept.
The concept of the ‘moral panic’ once belonged to the academic discipline of sociology, but has now largely leaked into everyday language. A moral panic is a diagnostic tool used to understand how the fears and anxieties experienced by a social group, often about social change, are projected onto and become fixated around what is called a ‘folk devil’.
A ‘folk devil’ is a social figure who may be represented by actual people, but functions to gather fear and anxiety. I have a book chapter on the folk devil figure of the ‘hoon’. There are actual ‘hoons’ who are a road safety issue, but the hoon moral panics that swept across Australia 10 years ago were completely out of proportion to the actual risk presented by hoons. The figure of the hoon represented fears and anxieties about how young people use public space particularly in areas with high retiree and tourist populations.
Clearly, the ‘activist journalist’ and ‘modern journalism academic’ are the folk devil figures. What fears and anxieties do ‘activist journalists’ and ‘modern journalism academics’ represent? ‘Social media’ is used as a collective term in Markson’s piece to describe technologies and social practices that threaten not only the commercial existence of the Australian newspaper, but also its existential purpose. As Crikey reported last week, the Australian newspaper is losing money hand over fist, but I think this ongoing effort to attack ‘activist journalists’ and ‘modern journalism academics’ indicates that the anxiety has a greater purchase than mere commercial imperatives in the Australian newspaper workplace.
Markson has been a vocal activist for print-based publication, and it is clear from her advocacy work on social media that she is a ‘print media’ enthusiast. Indeed, Markson and Mitchell could be described as the ‘moral entrepreneurs’ of the ‘moral panic’ in this particular example. A ‘moral entrepreneur’ is a person or group of people who advocate and bring attention to a particular issue for the purposes of trying to effect change. In traditional moral panic theory this is largely local politicians who try to effect legislative change to compensate for the social changes that triggered the moral panic in the first place.
The Australian newspaper’s ongoing response to the perceived existential threat of ‘social media’ (an inaccurate collective term for far more complex and longer-term shifts in the media industry) is a useful example for thinking about the cyclical character of these outbursts. They are small-scale moral panics because they never really spread beyond a limited number of moral entrepreneurs. The latest round is merely another example of the media-based culture wars that began with the so-called ‘media wars’ in the late 1990s. Again, journalism academics were central in the conflict over what counted as ‘journalism’ and/or ‘news’. More recently, the Australian newspaper attacked journalism programs and their graduates.
The ‘Outrage Cycle’
The concept of a ‘moral panic’ is a bit clunky and doesn’t really capture the cyclical character of these ideological battles over perceived existential threats. Creator of the ‘moral panic’ concept, Stanley Cohen, included some critical comments about the concept as a revised introduction to the 2002 third edition of his iconic Folk Devils and Moral Panics book. About the possibility of a “permanent moral panic” Cohen writes:
A panic, by definition, is self-limiting, temporary and spasmodic, a splutter of rage which burns itself out. Every now and then speeches, TV documentaries, trials, parliamentary debates, headlines and editorials cluster into the peculiar mode of managing information and expressing indignation that we call a moral panic. Each one may draw on the same stratum of political morality and cultural unease and — much like Foucault’s micro-systems of power — have a similar logic and internal rhythm. Successful moral panics owe their appeal to their ability to find points of resonance with wider anxieties. But each appeal is a sleight of hand, magic without a magician. (xxx)
A useful model for understanding the cyclical character of the relation between anxiety (or what we call ‘affect’), greater media attention (or what we call, after Foucault, ‘visibility’) and an exaggerated sense of social norms and expectations is Gartner’s ‘Hype Cycle’ model.
It is not a ‘theoretical’ or even a ‘scientific’ tool; rather, it serves as a kind of rule of thumb about the reception of technological change for the purposes of creating business intelligence: new technologies tend to be hyped, so take this into account when making business decisions about the risks of investment. (Each year I use the ‘Hype Cycle’ to introduce my third-year unit on technological change; the way it represents technology is useful for understanding the social relations of technology beyond technology being an ‘object’.) There is something similar going on with the Australian newspaper’s constant preoccupation with other journalists, and in particular with the role of journalism academics in society. Rather than the giddy ‘hype’ of the tech press and enthusiasts about technological change, the Australian newspaper’s cycle is organised around ‘outrage’. The Australian newspaper’s ‘Outrage Cycle’ is a useful way to frame how Western societies constantly mobilise to engage with perceived existential threats. The actual curve of the ‘Hype Cycle’ itself is less important than the cyclical character of trigger and response, which is also apparent in ‘moral panic’ theory:
I’ve changed the ‘zones’ of the Hype Cycle. ‘Maturity’ did not seem like the most appropriate measure of the X-axis, so I changed it to ‘time’ which Gartner also sometimes uses. I’ve made a table for ease of reference:
Hype Cycle → Outrage Cycle
Peak of Inflated Expectations → Peak of Confected Outrage
Trough of Disillusionment → Trough of Realism
Slope of Enlightenment → Slope of Conservatism
Plateau of Productivity → Plateau of Social Norms
Existential threat: In the case of the Australian newspaper, the existential threat is not so much activist journalists and modern journalism academics, but the apparent dire commercial position of the newspaper and the accelerated decline in social importance of a national newspaper. The world is changing around the newspaper and it currently survives because of cross-funding arrangements from other sections of News Corp. The moral entrepreneurs in this case are fighting for the very existence of ‘print’ and the institutional social relations that ‘print’ once enjoyed. A second example of this involves ‘online piracy’, which serves as a perceived existential threat to the current composition of media distribution companies.
Peak of Confected Outrage: It is unclear who is actually outraged besides employees of News Corp about so-called ‘activist journalists’ and ‘modern journalism academics’ in general. There are specific cases, just like with ‘moral panics’, where specific people have triggered the ire of some social groups. They serve as representative ‘folk devils’ for an entire social identity. Similarly, ‘pirates’ serve as an example of ‘bad internet users’ who are part of the disruptions of the legacy media industry. There is a more sophisticated point to be made about reporting on ‘outrage’ and other affective states like ‘fear’ and ‘anxiety’. They become their own sources of newsworthiness.
Trough of Realism: In the case of the Australian newspaper, this is where legacy media advocates face up to the unfortunate reality of the shifting media industry. It is not clear to me, at least in this example, that this will actually happen. (Perhaps after the Australian newspaper folds?) In terms of ‘online piracy’ facing reality includes companies like Foxtel currently working to create online client versions of their pay TV business. It is basically at this point that proponents have to ‘face reality’.
Slope of Conservatism: In Gartner’s original version, technologies become adopted and companies learn how to use them appropriately. In the ‘Outrage Cycle’ the Slope of Conservatism is ironically named as it signals social change. In some ways, Markson’s advocacy of ‘print’ is a bad example of this. A better example is the way sports fans learn how to adapt to the commodification of broadcast sporting events.
Plateau of Social Norms: The constant change in social values and relations that has characterised Western societies for the last 300 years continues unabated, indicated by the increasing ‘liberalisation’ of normative social values, but societies often pass thresholds of organisational composition where certain norms are dominant. Heterosexual patriarchal social values and racist social values were normative up until the postwar period in Australia; then they began a very slow process of change, and we are still in the midst of these shifts. Most people who work in the media industry are learning to operate within the new norms that characterise contemporary expectations regarding the production, distribution/access and consumption of media and journalistic content. One example is the popularity of the ‘home theatre’, the latest evolution of the domestic cinema culture that became part of mass popular culture with the VCR.
Major media corporations and tech giants have become bogged down in nymwars, post-hoc jerry-rigging and outright comment bans as they attempt to erase conflict around perennially divisive topics. All the while, as media companies are all too happy to trade on clickbait and outrage, there’s a suspicion that they have appropriated and mobilised the figure of the troll in order to constrain a new outpouring of political speech. Trolling has perhaps displaced pornography as the obscenity which underwrites the demand that the Internet be brought under control.
In the midst of social media’s perpetual flurries of outrage, we teach one another that the range of acceptable opinion is small, that we are individually responsible for comporting ourselves within these limits, and that the negative consequences are unpredictable, and potentially catastrophic. Accepting cues – from media, government and other authorities – about the dangers of incivility and extremism, we monitor each other’s conduct, ensuring that it doesn’t cross any arbitrary lines.
We can read the perpetual Outrage Cycle of the Australian newspaper as a machine for the production of new normative social values. Without being subsidised by other business areas of the News Corp enterprise, the Australian newspaper would be out of business, so to say that the Australian will inevitably fail is to miss the point that it is already in a state of constant ‘fail’. Unless someone thinks that the Australian newspaper will actually become profitable again (and will do so while its editor-in-chief and media editor are advocating for ‘print’), the social function of the Australian newspaper is not to make money as a commercial journalistic enterprise but to serve a social role that reinforces what its employees perceive to be normative social values.
The Australian newspaper and other News Corp print-based products seem to be currently organised around using this ‘Outrage Cycle’ as a business model. Isolate a perceived existential threat (religion, class difference, education, etc.) and then represent this on the front page of newspapers in such a way as to create feelings of fear, anxiety and outrage in the community. We know that they do not aim to represent and report on this fear, anxiety and outrage, because otherwise their front pages would be full of articles about readers of their own newspapers.
If we wanted to build a digital startup journalism entity, we would behave like the technology company Vox Media truly is: launch fast and tweak often.
The launch of Vox.com has been framed in terms of it being a technology company. It would be interesting to see a breakdown of how they actually approach stories and the production of content. Hopefully, it is not like David Eun’s 2011 master plan for AOL.
Eun used an ‘engineering flow’ type approach to integrating SEO and analytics information into the production of news-based media content. Not very many people were happy about this. As one recent commentator described it:
It’s telling that throughout “The AOL Way”, the emphasis is on what managers and technology employees can do to maximize pageviews, and not on actual writing or video production, itself. That is, the presentation implies that AOL management took its content’s quality for granted.
“It was amazing to me as a reader how quickly I felt I fell off the news cycle,” she says. “If I wasn’t paying attention to the rapid developments, it was difficult for me to understand what was happening in major news stories. When I took that step back I realised the challenge of being a reader.”
The news cycle used to be organised around the habits of consumers. The evening broadcast television bulletin, the morning newspaper, or the hourly radio bulletin. It was structural to the rhythms of industry and the cultural expectations of news consumers. Not unlike the difference between the ranking of books in the New York Times’ Bestsellers list as compared to the highlighting of book passages through Kindle as an index of popularity, has there been a shift in the character of the news cycle?
Ben Eltham’s analysis of the cuts to investment in science and research also touches on some general points about building research capacity in the contemporary era of underfunded research institutions. The main point that needs to be emphasised is the lack of stability that the current de-funding approach produces. Eltham quotes Nobel laureate Brian Schmidt on the difficulties of planning a research program:
“I think the problem that we have right now is that every time we get a new program, the ground shifts,” Schmidt added.
“I’m trying to plan my research program, like everyone else, five to 10 years out, and when we spend money and then switch it a few years later, we end up not getting very good value for the government’s investment.”
Beyond politics and policy, there is a fundamental epistemological problem with the current instability produced by de-funding research.
There is currently too much risk involved in the business case for planning five to 10 year research plans. Who knows how research will be funded in three years, let alone 10? At a minimum, it means creating multiple research plans and diversifying investment in knowledge creation (that is, time required to do research) across these plans, so as to minimise the risk in the ‘ground shifting’ again. This dilutes the amount of work that can be carried out and reduces overall research capacity.
There is a massive drop in efficiency for organisations that lose confidence in how they assess their own progress. Who is succeeding in research when a program is simply de-funded? What does it mean to succeed? What are the milestones?
How many potential PhD candidates will look at the state of investment in research in this country and decide that the lack of security is not worth putting at risk their own welfare or the welfare of their families?