The first key finding of the report concerns how young people receive news from family and friends, including teachers (per the infographic). Trust in these sources is extremely high.
My problem with the reporting in The Conversation is its focus on ‘fake news’. ‘Fake news’ has tabloid ‘outrage’ news value among an educated audience, but it is not actually that interesting from a research perspective.
Having been part of three Digital News Reports (2015, 2016 and 2017), the key critical question for me is: how do children and young people develop news literacy and their own sources of news as they mature? If they are accessing news via family and friends, does this mean this is also how they develop news literacy, by imitating the critical relationships based on the cultural values and social norms of their parents? In our research, low levels of trust in mainstream news have been interpreted as relatively high levels of critical news literacy. How does this work in the context of young people developing their own news literacy if they have extremely high levels of trust in their primary sources of news?
Critical News Literacies?
What is the relationship between perceptions of bias (key finding 3) and the capacity to spot ‘fake news’ (key finding 4)? Arguably ‘fake news’ is irrelevant compared to the ideological framing of most mainstream news. The key development from 8–12 to 13–16 year olds seems to be the radical reduction in the share of survey respondents who don’t know about various measures of bias (Figures 18–20). That is, roughly half as many 13–16 year olds responded ‘I don’t know’ to these questions as 8–12 year olds. Rightly or wrongly, having a view on the bias of news representations demonstrates critical or discerning engagement, and this increases with age.
I scraped all of Breitbart’s posts from its Facebook page. This is a representation of all ‘engagement’ (likes, comments and shares) for each month. The first six months of 2015 saw tremendous growth in engagement, and it would be worth exploring what actually happened in that period, so I searched the Nexis service for ‘Breitbart’ across January–June 2015 to see if mainstream news services mentioned the site. Nexis is not comprehensive, but it does track most major news publications and services. I did not include ‘press releases’ or ‘newswires’. I also collated all the articles that mentioned ‘Breitbart’ without any data cleaning, so there are likely multiple entries for the same article published in slightly different forms.
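The monthly aggregation of engagement described above can be sketched as follows. This is a minimal illustration, not the script actually used: the field names (`created_time`, `likes`, `comments`, `shares`) are my assumptions about what a Facebook page scrape would yield, and the sample data is invented.

```python
from collections import defaultdict
from datetime import datetime

def monthly_engagement(posts):
    """Sum likes + comments + shares per post, grouped by calendar month.

    `posts` is a list of dicts; the keys are assumed field names,
    not the actual schema of the original scrape.
    """
    totals = defaultdict(int)
    for post in posts:
        month = datetime.strptime(post["created_time"], "%Y-%m-%d").strftime("%Y-%m")
        totals[month] += post["likes"] + post["comments"] + post["shares"]
    return dict(totals)

# Invented stand-in data, not real Breitbart figures.
posts = [
    {"created_time": "2015-01-03", "likes": 120, "comments": 30, "shares": 15},
    {"created_time": "2015-01-20", "likes": 80, "comments": 10, "shares": 5},
    {"created_time": "2015-02-10", "likes": 300, "comments": 60, "shares": 40},
]
print(monthly_engagement(posts))  # → {'2015-01': 260, '2015-02': 400}
```

Plotting those monthly totals is what produces the kind of engagement curve discussed here.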
The table at the bottom of this post lists the publications with the most mentions of ‘Breitbart’. A few comments about this list. I had to search for ‘US Official News’ as I had not heard of it before: it is LexisNexis’s own news aggregation service. I assume only subscribers to LexisNexis can access it, so it is not important for getting a sense of this period. MailOnline is next, and as a click-chasing operation it clearly went after ‘outrage’. There are multiple entries for WaPo blogs in the list, so I think posts are being counted more than once. It is interesting to see the Canberra Times at the bottom.
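The double-counting noted above could be reduced with a simple cleaning pass. A hedged sketch, assuming each Nexis record carries a publication name and a headline (these field names are mine, not Nexis's, and the records are invented):

```python
from collections import Counter

def mention_counts(articles):
    """Count records per publication, as in the raw (uncleaned) export."""
    return Counter(a["publication"] for a in articles)

def dedupe(articles):
    """Naive cleaning pass: collapse records whose publication and
    whitespace/case-normalised headline match."""
    seen, unique = set(), []
    for a in articles:
        key = (a["publication"], " ".join(a["headline"].lower().split()))
        if key not in seen:
            seen.add(key)
            unique.append(a)
    return unique

# Invented example records illustrating a near-duplicate entry.
articles = [
    {"publication": "MailOnline", "headline": "Breitbart sparks outrage"},
    {"publication": "MailOnline", "headline": "Breitbart  Sparks Outrage "},
    {"publication": "The Canberra Times", "headline": "Breitbart mentioned"},
]
print(mention_counts(articles))          # raw counts: MailOnline counted twice
print(mention_counts(dedupe(articles)))  # after collapsing near-duplicates
```

Real Nexis exports would need fuzzier matching (slightly reworded headlines, syndicated variants), but this shows the shape of the problem.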
Reading the three pieces mentioned in these articles requires a subtle attunement to the concerns of Breitbart. The review celebrates the movie and what is understood to be the general sentiment behind it. It also couches the movie as a kind of repudiation (I think?) of ‘Big Hollywood’. ‘Big Hollywood’ is a meta-tag on the site and can therefore be understood to be one of its major concerns. I think it refers to the conservative belief that the ‘cultural left’ rules Hollywood and that there is a kind of conspiracy to devalue ‘right wing’ culture. The other pieces are similar and even more explicitly framed in terms of broader concerns. The second WaPo blog piece is about the ‘mainstream media’ reporting on ‘hoaxes’ as if they were true. The third piece interprets a tweet by Seth Rogen in such a way as to suggest that the movie is akin to Nazi propaganda. These are also tagged ‘Big Hollywood’. In this context, then, ‘Big Hollywood’ is not only about the movie industry but popular culture more broadly.
I work with students to rethink the concept of the ‘filter bubble’ and locate it in a much broader context of how the subject position of user is created through affordances of technologies and services. At stake is whether or not there is a new kind of audience passivity, one that is necessarily co-constituted through user activity, rather than the older notions of a passive mass audience.
In Culture + Technology, Slack and Wise (2005: 33) suggest that to be a “fully functioning adult member of the culture”:
you are likely to have accepted as necessities various technologies and technological practices that are not biological, but are rather cultural necessities.
My current students are afflicted with the generational myth of the ‘digital native’. The character of the ‘digital native’ frames engagement with technology and the capabilities and affordances expected or assumed of an entire generation reconfigured as ‘users’. The idea is that, as with speakers of a language, there are native and immigrant users of technology. Digital natives are “surrounded by and using computers, videogames, digital music players, video cams, cell phones, and all the other toys and tools of the digital age” (Prensky, 2001, p. 1). Bennett, Maton, and Kervin (2008) argue that in the discourse around digital natives, “rather than being empirically and theoretically informed, the debate can be likened to an academic form of a ‘moral panic’.” For Sadowski (2014) it is a rearticulation of technology discourses that boost ‘gadgets’ over people:
The larger issue is that, when we insist on generalizing people into a wide category based on birth year alone, we effectively erase the stark discrepancies between access and privilege, and between experience and preference. By glancing over these social differences, and just boosting new technologies instead, it becomes easy to prioritize gadgets over what actually benefits a diverse contingent of people.
The myth of the ‘digital native’ has been translated into an educational context with three assumptions (Kirschner and van Merriënboer, 2013): first, that students really understand what they are doing; second, that students use technologies effectively and efficiently; and third, that it is good to design education so that students can use digital technologies. What I notice with students is that they do not necessarily seek mastery over a given technology or set of skills, or even competence with regard to professional standards of proficiency, but ‘convenience’. This echoes findings from Kvavik (2005), who surveyed 4,374 students of the so-called ‘net generation’ to examine their relation to technology at university. Kvavik interrogated some of the assumptions that articulated a generational cohort with technological skill or capacity:
Do they ‘prefer technology’? Only moderate preference.
Is technology ‘increasingly important’? Most skilled students had mixed feelings.
Do they already possess ‘good IT skills in support of learning’? No, many skills had to be acquired, largely through the requirements of the curriculum.
Importantly, Kvavik found that ‘convenience’ was the most common unprompted open text response to good qualities of using technology at university. Relations of ‘convenience’ reintroduce new forms of passivity, where technology use is appreciated as ‘good’ if it is ‘convenient’. What happens in contexts where technology makes a given practice too convenient?
A Case for Practicing Inconvenient Scholarship?
Students are arguably disadvantaged by the technologies of scholarship that most academics and researchers take for granted, such as Google Scholar and the more general phenomenon of digitized scholarship. ‘Research practice’ in the humanities and social sciences prior to the web often began with a review of literature on a given topic or area of interest. This literature search was profoundly inconvenient, shaped by limited access and a slow temporality in which physical copies of texts were moved from repository to scholar. The same moment in current ‘research practice’ in the humanities and social sciences is instead characterised by digital searches of an excess of information and the immediacy of ‘answers’ to ‘questions’ just posed. The relative ‘openness’ of access to such scholarship is a boon, but only in those circumstances where the research questions are not developed in a digitally-enabled and networked context.
The challenge for contemporary research students in particular is the number of possible sources (effectively infinite: the rate of publishing in some areas literally exceeds the maximum rate of engaged reading) and the duration of scholarship thus afforded for developing a critical appreciation. Undergraduate students face a greater challenge in that they will likely not engage with an area of scholarship long enough to develop an appreciation of the above problems.
Previous modes of scholarship would frame this as a problem of appreciating one’s disciplinary area: come to terms with the main names in a field and you will know the field. This response relies on rearticulating normative hierarchies of scholarship that work to counteract the benefits of ‘open’ scholarship. What is the point of open scholarship if the same institutions have their work valorised over others? This reintroduces a different set of affordances that implicate users in a different (social) technology of convenience.
I think a better way to approach this initial period of scholarship in any given project is to approach the development of an appreciation of a given field as a process and the overarching relation between scholar and field in this process is one of discovery. We all become detectives investigating comparable research problems, rather than judges lording over privileged ways of doing scholarship.
Looking around, I wondered why Halliday, who always claimed to have had a miserable childhood, had later become so nostalgic for it. I knew that if and when I finally escaped from the stacks, I’d never look back. And I definitely wouldn’t create a detailed simulation of the place. (103)
At the time of writing, 51% of the 236,038 ratings on Goodreads for Ernest Cline’s 2011 novel Ready Player One are five stars. Most commentaries on the novel are celebratory. I think it is one of the most condensed representations of contemporary hegemonic masculinity organised around geek/brogrammer culture. For those who have read it, think of the story and all the main characters. The ‘James Halliday’ character is not the benevolent tech genius, entrepreneur and lovable anti-social geek, but is, in fact, the super-villain. His character is premised on the social norms for reproducing the kind of toxic masculinity that has come to characterise a number of recent fronts in the culture wars.
The novel is set in the near future, where Halliday and his business partner Ogden Morrow have created a virtual world called the Ontologically Anthropocentric Sensory Immersive Simulation (OASIS), a kind of mash-up of Facebook and World of Warcraft. The plot of the novel is driven by an elaborate meta-game in OASIS created by Halliday as a Willy Wonka-style mechanism for handing over control of most of his estate. That is, whoever ‘wins’ this meta-game, thus proving their ultimate geek credentials, inherits ownership of OASIS. The meta-game requires players to have elaborate knowledge of mostly late-1970s and 1980s popular cultural texts. The ‘Halliday’ character hid the meta-game as a series of elaborate ‘easter eggs’ embedded in the structure of the larger OASIS universe. It is the ultimate geek fantasy: not that you are simply ‘better’ than the normative social and cultural ideal, but that the new normal is premised on (alleged) geek ideals. As Nicholas Mizer explains:
The geeks of the story race through an “easter egg” hunt in the OASIS, the winner of which controls the fate of the virtual spaces it contains. In this story, geek cultural spaces are all known and mapped, in a totalized version of the geek dilemma I have described [of too much popularity]. Rather than directly confronting the corporate “egg hunters” that want to re-shape the virtual world to their own ends, however, the protagonist always manages to stay one step ahead of them because his intense love of the cultural spaces has driven him deep below their surface. Here power comes not through simply inhabiting the spaces of geek culture, but through intensive familiarity with every aspect of those spaces. (24-25)
Mizer goes on to describe this as the cultural tactic of ‘digging down’ into the context of a cultural text (or practice or artefact) to such an extent that the ‘geek’ becomes completely immersed and is able to discover new qualities of the text worthy of their interest. Hence, the cultural logic of the ‘Easter Egg’ meta-game that Ready Player One is based on.
There are no more hidden thought-palaces—they’re easily accessed websites, or Facebook pages with thousands of fans. And I’m not going to bore you with the step-by-step specifics of how it happened. In the timeline of the upheaval, part of the graph should be interrupted by the words the Internet. And now here we are. The problem with the Internet, however, is that it lets anyone become otaku about anything instantly. In the ’80s, you couldn’t get up to speed on an entire genre in a weekend.
Ready Player One is a response to the democratisation of geek culture. Cline presents every aspect of Halliday’s taste as worthy of valorisation through the gamified logic of the competition:
“Canon” was the term we used to classify any movie, book, game, song, or TV show of which Halliday was known to have been a fan. (40)
Jim [Halliday] always wanted everyone to share his obsessions, to love the same things he loved. I think this contest is his way of giving the entire world an incentive to do just that. (122)
Cline operationalises the dual cultural logic of the ‘Easter egg’: there is secret knowledge regarding a shared cultural object, and what matters is who knows and who does not know about this secret. Geek authenticity is therefore a performance of ‘knowingness’ about the shared cultural object (which may or may not exist, as is the case with spoilers and the like). The ‘Easter eggification’ of geek culture encourages a paranoid, reactionary mode of cultural consumption that is forever defensive about protecting the conditions of possibility for the ‘Easter egg’.
Writer Laura Bennet points out the positive social and political shifts of the rise of first person journalism: there is “more of a market for underrepresented viewpoints than ever”. Such works seem to dramatize, at the level of genre, the relationship between the personal and the political. These are fantastic developments in the contemporary character of mass and niche media. Bennet also indicates strong negatives:
The “first-person economy […] incentivizes knee-jerk, ideally topical self-exposure, the hot take’s more intimate sibling.”
Works of first person journalism “seem to be professional dead ends, journalistically speaking […] [r]ather than feats of self-branding”.
Pitches end up sounding like they “were all written in the same voice: ‘immature, sort of boastful.'”
They are predominantly popular in a highly gendered part of the market: “many of the outlets that are most hungry for quick freelancer copy, and have the lowest barriers to entry for publication, are still women’s interest sites”. This is of course not ‘bad’ in itself; the implication is rather that first person journalism is a genre with a very limited market.
But these points do not explain why first person journalism has emerged as one of the popular genres of content online. Bennet draws a connection to the personal disclosure mode of Web 1.0’s practices of blogging. That might be true of very early examples of first person journalism online (2005–2009), but it seems less true for subsequent generations of writers who simply bypassed the ‘blogging’ era of the internet.
Although they may be using the rhetorical forms of early blog-based first person journalism, the discursive function of the genre, I suggest, has more in common with celebrity discourse. As David Marshall argues, “celebrities have become the discursive talking points for the political dimensions of a host of formerly private and personal concern” (2009: 27). For example, in an analysis of the representation of Slovenian political celebrities taking part in weekly interviews published in the mass-market women’s magazine Jana, Luthar (2010) describes a process of personalisation which “involves the construction and representation of famous people and celebrities as individualized human types as the major component of popular discourse” (2010: 696). Luthar is concerned with the discursive articulation of a national Slovenian identity through personal identity characteristics, primarily gender. But we can see how first person journalism is a more general personalisation of what media and communications scholars call ‘public discourse’.
Celebrity discourse is one way to personalise public discourse, and the genre of first person journalism is another. (To get more technical, the personalisation of public discourse around social issues through traumatic experience is one way to anchor audiences to affectively resonant ‘issue publics’ and produce click-based audiences as a commodity in the post-broadcast attention economy.) This in part explains why young writers think they are promoting themselves as ‘writers’ when they write and seek publication for works of first person journalism. They think that if their story allows them to become the centre of an issue-based public organised around their experience, then this reflects well on their aspirations to be journalists or media personalities. In effect, they become minor issue-based celebrities because of their experience. Instead, I’d emphasise Bennet’s point about the way the ‘click economy’ consumes such aspirants; it is very useful advice.