‘Conversations’ are one way to examine interactions on social media. We looked at conversations as the unit of analysis in our Turnbull paper, due out in MIA I think in a few months (based on our ANZCA paper). A simple point to make is that the ‘public conversation’ is not the same thing as ‘conversations on Twitter’ or even ‘Twitter publics’. Sure, there are conversations that happen entirely on Twitter (say a Trump tweet, the reaction and the cross talk), but these are not very useful as the basis for assessing public conversations. Twitter users produce openings onto other spaces, so the circulation of discourse is necessarily cross-platform. The philosophy behind Cortico’s general approach looks interesting, but Twitter and Cortico will need to partner with other platforms.
There are broadly two ways to map the circulation of discourse. The first is derived from Bourdieu and maps a ‘field’ based on the social interactions between actors and the analytical construction of what is valued in the field (doxa). I think this is inherently flawed because of its reliance on a notion of faith (as in good or bad faith); Bourdieu’s Manet lectures are clear on this. The second is derived from Foucault and maps the discursive regularities between statements, where the analytical construction concerns the conditions of possibility based on ‘authority’ and the composition of power relations (dispositif). This is what Foucault broadly called ‘eventalization’ (only ever in interviews, so the method has to be reverse engineered across a range of works). Interestingly, network graphing techniques seem to be aligned with ‘eventalization’ until you realise that they mostly rely on the provenance of digital objects and platform-based network relations between them. There have been few attempts to map networks of discourse across and in spite of platforms, as this multiplies the work exponentially.
Analysing discourse in terms of the ‘health’ of conversations assumes a normative dimension that I think smuggles in assumptions about the good faith of actors. There are two problems here. First, analytics are unlikely to indicate how a particular user is ‘blinkered’ and therefore has an extremely constrained degree of freedom (in the systems theory sense), what Guattari called a low coefficient of transversality or what Warner might talk about in terms of the character of reflexivity. They will instead show how such blinkered users belong to tribes, because of the discursive coherency and affective congruence of their discourse. So what? Second, Twitter does not appear to want to operate upon the good or bad faith of actors, and therefore to take obvious steps to reduce the weaponised use of the platform (for example, reduced functionality for new accounts until thresholds of participation, such as number of followers or interactions, are passed). Getting over the normative assumptions about the good faith of users is an important first step.
The image of Aylan Kurdi washed ashore has had a dramatic impact on the character of the refugee debate in Australia and elsewhere. Most responses from across the political spectrum have recognised the need for greater compassion in rethinking policy. Radical conservatives like Australian politician Cory Bernardi or media commentator Andrew Bolt have isolated themselves to a few limited talking points as I discuss below. What is clear is that the image of the little boy being picked up delicately by the soldier has managed to change the character of the debate so that instead of debating whether or not these people are ‘migrants’ or ‘refugees’ they have become subject to our compassion.
In media studies we call this a shift in the ‘discourse’, which means that there has been a change in the normal social expectations people have about what can and cannot be said. Bernardi has clearly misunderstood the broader context of this shift and is still attempting to address a tiny minority of radical conservatives. The political talking points are now about the appropriate measure of response rather than whether or not those escaping trauma are refugees.
The Australian Prime Minister, Tony Abbott, was attempting to express his political party’s old policy position in terms of the new discourse as recently as four days ago. He stated that:
We are a country which, on a per capita basis, takes more refugees than any other. We take more refugees than any other through the UNHCR on a per capita basis, but obviously this is a very grave situation in the Middle East.
This is an attempt to frame the current policy in such a way that it responds to the overwhelming demand for compassion. The response to Abbott’s claim was swift. Refugee advocates had used legalistic mechanisms to try to force reluctant Australian governments to take more refugees, and Abbott was responding to this version of the refugee discourse. Less than 1% of the 14.4 million refugees of concern to UNHCR around the world are submitted for resettlement. Abbott had failed to respond to the new discourse of compassion, which was not couched in legalistic terms.
The Australian government has today responded to the current refugee crisis by increasing the intake of refugees and the funding contributed to the overall global effort. Abbott has changed the way he talks about refugees: he has shifted from a legalistic discourse to a discourse of compassion. Note the change in the way he talks about those seeking to escape trauma, for example (from various reports):
This is a very significant increase in Australia’s humanitarian intake and it’s a generous response to the current emergency.
Our focus for these new 12,000 permanent resettlement places will be those people most in need of permanent protection – women, children and families from persecuted minorities who have sought temporary refuge in Jordan, Lebanon and Turkey.
I agree with the Leader of the Opposition that there is an unprecedented crisis. It is, as he said earlier this afternoon, probably the most serious humanitarian crisis that we have seen, the greatest mass movement of people that we have seen since the end of the Second World War and the partition of India.
I can inform the House that it is the government’s firm intention to take a significant number of people from Syria this year. We will give people refuge; that is the firm intention of this government.
It is a response that is now framed in the discourse of compassion.
Media Events as Focusing Events
The power of a single image to cut through and develop into a much bigger media event was explored by McKenzie Wark in his book Virtual Geography (here is a super-condensed version). Wark develops a notion of weird global media events based on what he calls media vectors. Wark’s basic point is that as images circulate across media vectors they develop into a media event. This is different to the other established definition of a media event, organised around ‘mega-events’ that are produced and made for broadcast television (Dayan & Katz 1991). Vector-based media events are far more common now in our era of social media, given the power of social media to draw our attention to singular images.
Aylan Kurdi’s image becoming a media event is an example of what John Kingdon calls a ‘focusing event’ in the terrain of public policy making. Focusing events are those experiences or occurrences that force politicians to attend to them. Kingdon suggests there are two types of focusing events. The first is premised on the personal experiences of policy makers. The second is the impact of powerful symbols. In this case it is an example of both, as expressed by Liberal backbencher Ewen Jones:
You forget how light children are, you forget how small they actually are as they grow. And it’s one of those things that you just saw this poor, lifeless little – lifeless little tot and that really does chill you straight through.
Contrast this with Andrew Bolt, whose column reduces the Kurdi family’s tragedy to a few limited talking points:
So … what exactly was he “fleeing” when he paid a people smuggler thousands of dollars to bring his family — without safety vests — to Greece, to join that irresistible army of illegal immigrants now smashing through Europe’s borders?
Tima Kurdi explained… “The situation is that Abdullah does not have any teeth…
“So I been trying to help him fix his teeth. But is gonna cost me 14,000 and up to do it …
“Actually my dad, he come up with the idea, he said to me, ‘I think if they go to Europe for his case and for our future, I think he should do that, and then we’ll see if he can fix his teeth’.
“And that’s what I did three weeks ago.” She sent her brother the money for people smugglers.
Now, it is terrible to have no teeth. Awful to be poor. A misery to have your children denied chances.
But can the West really take in not just real refugees, but the Third World’s poor as well, including those in search of better dentistry?
Compare Bolt’s account with reports of what Abdullah Kurdi and his family actually experienced:
Originally born in Damascus, Mr Kurdi moved to the Kurdish city of Kobane after the uprising against President Bashar-al Assad began in 2011. He says he has suffered at the hands of every side in Syria’s brutal civil war. At the beginning of the anti-Assad revolution, he was tortured by Syrian state security services, while during the Islamic State takeover of Kobane, he was arrested by Isil fanatics and beaten again, this time losing eight of his teeth.
He said he then applied for asylum in Canada, where his sister Fatima lives, but had his case rejected. It was then that he decided to try to take the family to Europe. His attempt last week was his third, the first two having ended with the family being caught and turned back by coast guard vessels.
Radical conservatives are choosing to understand the tragedy of the Kurdi family in terms of the previous legalistic discourse of refugees fleeing across borders from a specific conflict in a geopolitical location. They are choosing to believe that the Kurdi family’s trauma somehow ended once they entered Turkey. The discourse of compassion is organised around the trauma of refugees, not their geopolitical location. The aim of refugee policy should be to reduce the terrible trauma that refugees experience, not perpetuate it.
Scraping the results from a Twitter ‘advanced search’ allows you to create an archive of tweets without the limitations of the API. It is only useful for relatively small sets of fewer than 3,200 tweets per day, as you can query Twitter for all tweets for a given hashtag on a per-day basis.
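Below is a minimal sketch of this approach, assuming Python with the requests and BeautifulSoup libraries. The per-day query relies on Twitter’s standard since:/until: search operators; the CSS selector for tweet text is an assumption about the search results markup at the time of writing and will need to be checked against the live page.

```python
# Sketch: archive tweets for a hashtag by querying Twitter's advanced
# search one day at a time. Selectors are assumptions about the markup.
import datetime
import time

import requests
from bs4 import BeautifulSoup

SEARCH_URL = "https://twitter.com/search"

def tweets_for_day(hashtag, day):
    """Fetch the rendered search results for one day and extract tweet text."""
    query = "{} since:{} until:{}".format(
        hashtag, day.isoformat(), (day + datetime.timedelta(days=1)).isoformat()
    )
    response = requests.get(SEARCH_URL, params={"q": query})
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Assumed selector for tweet text; verify against the current markup.
    return [p.get_text(" ", strip=True) for p in soup.select("p.tweet-text")]

def archive_hashtag(hashtag, start, end):
    """Build a per-day archive of tweets between two dates (inclusive)."""
    archive = {}
    day = start
    while day <= end:
        archive[day.isoformat()] = tweets_for_day(hashtag, day)
        time.sleep(2)  # be polite; avoid hammering the search endpoint
        day += datetime.timedelta(days=1)
    return archive

# Hypothetical example: a week of #auspol tweets.
# tweets = archive_hashtag("#auspol", datetime.date(2015, 9, 1), datetime.date(2015, 9, 7))
```

Note that a plain request only returns the first page of results for each day; a fuller archive would need to follow the timeline pagination or drive a browser.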
The lists of tweets will be used to carry out sophisticated analyses of the ‘circulation of discourse’:
Writing to a public helps to make a world, insofar as the object of address is brought into being partly by postulating and characterizing it. This performative ability depends, however, on that object’s being not entirely fictitious–not postulated merely, but recognized as a real path for the circulation of discourse. That path is then treated as a social entity. (Warner 2002: 64)
The character of this discourse will depend on the stakeholder publics they (or their organisations) wish to engage with and so on.
The economy of culture is, accordingly, not a description of culture as a representation of certain extra-cultural economic constraints. Rather, it is an attempt to grasp the logic of cultural development itself as an economic logic of the revaluation of values.
I am enjoying Groys’ non-market ‘economic’ interpretation of Nietzschean truth. He develops an economic conception of Nietzsche’s non-moral version of value without turning to Marxist conceptions of value that would position cultural value as a consequence of the social relation between capital and labour power.
In my True Detective essay I develop a notion of ‘meta’ so as to grapple with the epistemological displacement that occurs in the midst of a revaluation of values. I call this a ‘liminal epistemology’, which has been commodified as ‘discovery’ in contemporary ‘apps’ that help users access various kinds of cultural texts (music, written texts, phatic/social media texts, etc). The media event of True Detective (as compared to the televisual text) is interesting as it dramatises the ‘detective work’ of this liminal epistemology itself. From the introduction of my True Detective essay:
If nothing else, True Detective clearly triggers meta-detective work by the audience. The show, its inter-textual references, and non-diegetic exegetical explanations of these references produced new edges of surprise and a new sense of expectation. For example, there is a folding of the crime fiction genre into existentialist horror and a topological transformation wrought upon both. Both genres frame a passage of discovery by the characters and audience. “Discovery” has become a buzzword in user-centred design to describe the design of platforms that assist users discover appropriate content, and this refers to the way users willingly embrace the delegated agency of “smart” interfaces. The liminal epistemology of discovery in meta-stable media assemblages poses answers to questions that haven’t yet been asked. The question isn’t simply asked of the characters of the show, but of the entire event itself as it repeated different elements of genres in different ways; in effect, the audience carries out meta-detective work.
The reason why I am excited about Groys’ work is that he has already isolated a similar problematic with regards to the revaluation of values. His focus so far is not animated by the same concerns as mine, but there is a similar problematic. I make it very clear that what I find most interesting about the True Detective media event is that it is part of a broader constellation of cultural texts that are all, in different ways, working through this revaluation of values. From the introduction of my essay:
In the final section I develop meta in terms of what Sianne Ngai (2012) calls a minor aesthetic category, and in this case what characterises meta as a minor aesthetic category is the way any text, object or event dramatises the suspension of cultural values. In Simondon’s terms, meta is an aesthetic category that refers to works that in some way repotentialise values that serve as the “preindividual norms” of value, holding them in a state of meta-stability ready to be potentialised in a multiplicity of ways (Combes 2013: 64). As I shall explore in detail, True Detective dramatises a conflict between systems of belief and cultural value through the figures of the two main characters, Rust and Marty. In this way, “meta” signals a threshold of value (or what Nietzsche (1968) calls “transvaluation”) more often associated with nihilism.
Engineers at Facebook have worked to continually refine the ‘Edgerank’ algorithm over the last five or six years. They are addressing the problem of how to winnow the 1500+ pieces of content available at any moment from “friends, people they follow and Pages” down to a more manageable 300 or so pieces of content. Questions have been asked about how Edgerank functions by two related groups. Marketers and the like are concerned about the ‘reach’ and ‘engagement’ of their content. Political communication researchers have been concerned about how this selection of content (1500>300) relies on certain algorithmic signals that potentially reduce the diversity of sources. These signals are social and practice-based (or what positivists would call ‘behavioral’). Whenever Facebook makes a change to its algorithm it measures success by the increase in ‘engagement’ (I’ve not seen a reported ‘failure’ of a change to the algorithm), which means interactions by users with content, including ‘clickthrough rate’. Facebook is working to turn your attention into an economic resource by manipulating the value of your attention through your News Feed and then selling access to your News Feed to advertisers.
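To make the filtering step concrete, here is a schematic sketch of the kind of ranking implied by the publicly described Edgerank formula: each candidate story gets a score from user–author affinity, a weight for the content type, and a time decay, and only the top few hundred are kept. This is not Facebook’s actual code; the field names, weights and cut-off are illustrative assumptions.

```python
# Schematic Edgerank-style filter: score ~1500 candidate stories by
# weighted signals and keep the ~300 highest. Illustrative only; the
# real ranking uses far more signals than these.
from dataclasses import dataclass

@dataclass
class Story:
    author: str
    affinity: float      # how often the viewer interacts with this author (0..1)
    content_type: str    # e.g. "photo", "link", "status"
    age_hours: float

TYPE_WEIGHT = {"photo": 1.2, "link": 1.0, "status": 0.8}

def score(story: Story) -> float:
    """Affinity x content-type weight x time decay: the shape of the EdgeRank formula."""
    decay = 1.0 / (1.0 + story.age_hours)   # newer stories rank higher
    return story.affinity * TYPE_WEIGHT.get(story.content_type, 1.0) * decay

def build_feed(candidates, limit=300):
    """Winnow the candidate stories down to the top `limit` by score."""
    return sorted(candidates, key=score, reverse=True)[:limit]
```

The point of the sketch is simply that the feed a user sees is already a ranked subset of what their network shared, which is what makes the question of source diversity an algorithmic one.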
Exposure to ideologically diverse news and opinion on Facebook
Recently published research by three Facebook researchers was designed to ascertain the significance of the overall selection of content by the Edgerank algorithm. They compared two large datasets. The first dataset was of pieces of content shared on Facebook and specifically ‘hard’ news content. Through various techniques of text-based machine analysis they distributed these pieces of content along a single political spectrum of ‘liberal’ and ‘conservative’. This dataset was selected from “7 million distinct Web links (URLs) shared by U.S. users over a 6-month period between July 7, 2014 and January 7, 2015”. The second dataset was of 10.1 million active ‘de-identified’ individuals who ‘identified’ as ‘conservative’ or ‘liberal’. Importantly, it is not clear if they only included ‘hard news’ articles shared by those in the second set. The data represented in the appended supplementary material suggests that this was not the case. There are therefore two ways the total aggregate Facebook activity and user base was ‘sampled’ in the research. The researchers combined these two datasets to get a third dataset of event-based activity:
This dataset included approximately 3.8 billion unique potential exposures (i.e., cases in which an individual’s friend shared hard content, regardless of whether it appeared in her News Feed), 903 million unique exposures (i.e., cases in which a link to the content appears on screen in an individual’s News Feed), and 59 million unique clicks, among users in our study.
These events — potential exposures, unique exposures and unique clicks — are what the researchers are seeking to understand in terms of the frequency of appearance and then engagement by certain users with ‘cross-cutting’ content, i.e. content that cuts across ideological lines.
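To keep the three event types straight, here is a small sketch of how cross-cutting rates could be computed from such event records. It is an illustrative reconstruction under an assumed record layout, not the researchers’ actual pipeline.

```python
# Sketch: share of cross-cutting content at each stage of the exposure
# funnel (shared by friends -> appeared in feed -> clicked).
# The record layout below is an assumption made for illustration.
from dataclasses import dataclass

@dataclass
class Event:
    user_ideology: str     # 'liberal' or 'conservative' (self-identified)
    content_ideology: str  # bucketed alignment of the shared URL
    exposed: bool          # did the story appear in the user's News Feed?
    clicked: bool          # did the user click through?

def cross_cutting(event: Event) -> bool:
    """Content whose alignment is opposite to the user's stated ideology."""
    return event.user_ideology != event.content_ideology

def funnel_rates(events):
    """Proportion of cross-cutting content among potential exposures, exposures and clicks."""
    stages = {
        "potential exposures": events,
        "exposures": [e for e in events if e.exposed],
        "clicks": [e for e in events if e.clicked],
    }
    return {
        name: sum(cross_cutting(e) for e in subset) / len(subset)
        for name, subset in stages.items() if subset
    }
```

Comparing the rate at the ‘potential exposures’ stage with the rate at the ‘exposures’ stage is, in effect, how the study attributes a difference to the algorithm rather than to users’ own choices.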
The first round of critiques of this research (here, here, here and here) focuses on various aspects of the study, but all of them resonate with a key critical point (as distinct from a critique of the study itself): that the research is industry-backed and therefore suspect. I have issues with the study, which I address below, but they are not based on it being an industry study. Is our first response to find any possible reason to be critical of Facebook’s own research simply because it is ‘Facebook’?
Is the study scientifically valid?
The four critiques that I have linked to make critical remarks about the sampling method, and specifically about how the dataset of de-identified politically-identifying Facebook users was selected. The main article is confusing, and it is only marginally clearer in the appendix, but it appears that both samples were validated: against the broader US-based Facebook user population and against the total set of news article URLs shared, respectively. This seems clear to me, and I am disconcerted that it is not clear to those others who have read and critiqued the study. The authors discuss validation, specifically in point 1.2 for the user population sample and 1.4.3 for the validation of the ‘hard news’ article sample. I have my own issues with the (ridiculously) normative approach used here (the multiplicity of actual existing entries for political orientation is reduced to a single five-point continuum of liberal and conservative, just… what?), but that is not the basis of the existing critiques of the study.
Eszter Hargittai’s post at Crooked Timber is a good example. Let me reiterate that if I am wrong with how I am interpreting these critiques and the study, then I am happy to be corrected. Hargittai writes:
The second paragraph above continues with a further sentence suggesting that the sample was indeed validated against a sample of 79 thousand other US Facebook users. Again, I am happy to be corrected here, but this at least indicates that the study authors have attempted to do precisely what Hargittai and the other critiques are suggesting they have not done. From the appendix of the study:
I am troubled that other scholars are so quick to condemn a study for not being valid when none of the critiques (at the time of writing) appear to engage with the methods by which the study authors tested validity. Tell me it is not valid by addressing the ways the authors attempted to demonstrate validity; don’t just ignore it.
What does the algorithm do?
A more sophisticated “It’s Not Our Fault…” critique is presented by Christian Sandvig. He notes that the study does not take into account how the presentation of News Feed posts and then ‘engagement’ with this content is a process where the work of the Edgerank algorithms and the work of users can not be easily separated (orig. emphasis):
What I mean to say is that there is no scenario in which “user choices” vs. “the algorithm” can be traded off, because they happen together (Fig. 3 [top]). Users select from what the algorithm already filtered for them. It is a sequence.**** I think the proper statement about these two things is that they’re both bad — they both increase polarization and selectivity. As I said above, the algorithm appears to modestly increase the selectivity of users.
And the footnote:
**** In fact, algorithm and user form a coupled system of at least two feedback loops. But that’s not helpful to measure “amount” in the way the study wants to, so I’ll just tuck it away down here.
A “coupled system of at least two feedback loops”, indeed. At least one of those feedback loops ‘begins’ with the way that users form social networks — that is to say, ‘friend’ other users. Why is this important? Our Facebook ‘friends’ (and Pages and advertisements, etc.) serve as the source of the content we are exposed to. Users choose to friend other users (or Pages, Groups, etc.) and then select from the pieces of content these other users (and Pages, advertisements, etc.) share to their networks. That is why I began this post with a brief explanation of the way the Edgerank algorithm works: it filters an average of 1500 possible posts down to an average of 300. Sandvig’s assertion that “[u]sers select from what the algorithm already filtered for them” is therefore only partially true. The Facebook researchers assume that Facebook users have chosen the sources of news-based content that can contribute to their feed. This is a complex set of negotiations around who or what has the ability, and then the likelihood, of appearing in one’s feed (or what could be described as all the options for organising the conditions of possibility for how content appears in one’s News Feed).
The study is testing the work of the algorithm by comparing the ideological consistency of one’s social networks with the ideological orientation of the stories presented and of the news stories’ respective news-based media enterprises. The study tests the hypothesis that your ideologically-oriented ‘friends’ will share ideological-aligned content. Is the number of stories from across the ideological range — liberal to conservative — presented (based on an analysis of ideological orientation of each news-based media enterprise’s URL) different to the apparent ideological homophily of your social network? If so, then this is the work of the algorithm. The study finds that the algorithm works differently for liberal and conservative oriented users.
Nathan Jurgenson pushes the critique of the algorithm’s effects further:
For example, that the newsfeed algorithm suppresses ideologically cross cutting news to a non-trivial degree teaches individuals to not share as much cross cutting news. By making the newsfeed an algorithm, Facebook enters users into a competition to be seen. If you don’t get “likes” and attention with what you share, your content will subsequently be seen even less, and thus you and your voice and presence is lessened. To post without likes means few are seeing your post, so there is little point in posting. We want likes because we want to be seen.
Are ‘likes’ the only signal we have that helps shape our online behaviour? No. Offline feedback is an obvious one. What about cross-platform feedback loops? Most of what I talk about on Facebook nowadays consists of content posted by others on other social media networks. We have multiple ‘thermostats’ for gauging the appropriateness or inappropriateness of posts in terms of attention, morality, sociality, cultural value, etc. I agree with Jurgenson when he cites Jay Rosen’s observation that “It simply isn’t true that an algorithmic filter can be designed to remove the designers from the equation.” A valid way of testing this has not been developed yet.
The weird thing about this study is that, from a commercial point of view, Facebook should want to increase the efficacy of the Edgerank algorithms as much as possible, because it is the principal method for manipulating the value of the ‘visibility’ of each user’s News Feed (through frequency/competition and position). Previous research by Facebook has sought to explore the relative value of social networks as compared to the diversity of content; this included a project that investigated the network value of weak-tie social relationships.
Effect of Hard and Soft News vs the Work of Publics
What is my critique? All of the critiques mention that the Facebook research, from a certain perspective, has produced findings that are not really that surprising because they largely confirm what we already understand about how people choose ideological content. A bigger problem for me is the hyper-normative classification of ‘hard’ and ‘soft’ news, as it obscures part of what makes this kind of research actually very interesting. For example, from the list of 20 stories provided as examples of hard and soft news, at least two of the ‘soft’ news stories are not ‘soft’ news stories by anyone’s definition. From the appendix (page 15):
Protesters are expected to gather in downtown Greenville Sunday afternoon to stage a Die In along Main Street …
Help us reach 1,000,000 signatures today, telling LEGO to ditch Shell and their dirty Arctic oil!
There are at least two problems for any study that seeks to classify news-based media content according to normative hard and soft news distinctions when working to isolate how contemporary social media platforms have affected democracy:
1. The work of ‘politics’ (or ‘democracy’) does not only happen because of ‘hard news’. This is an old critique, but one that has been granted new life in studies of online publics. The ‘Die-In’ example is particularly important in this context. It is a story on a Fox News affiliate, and I have only been able to find the exact words provided in the appendix by the study authors on Fox News-based sites. Fox News is understood to be ‘conservative’ in the study (table S3 of the appendix), and yet the piece on the ‘Die-In’ protest does not contain any specific examples of conservative framing. It is in fact a straightforward ‘hard news’ piece on the protest, one that I would actually interpret as journalistically sympathetic towards the protests. How many stories were classified as ‘conservative’ simply because they appeared at a Fox News-based URL? How many other allegedly ‘soft news’ stories were not actually soft news at all?
2. Why is ‘cross cutting’ framed only along ideological lines of content and users, when it is clear that allegedly ‘soft news’ outlets can cover ‘political topics’ that more or less impact ‘democracy’? In the broadcast and print era of political communication, end users had far less participatory control over the reproduction of issue-based publics. They used ‘news’ as a social resource to isolate differences with others, to argue, to understand their relative place in the world, and so on. Of profound importance in the formation of online publics is the way that this work (call it ‘politics’ or not) takes over the front stage in what have been normatively understood as non-political domains. How many times have you had ‘political’ discussions in non-political forums? Or, more important for the current study, how many ‘Gamergate’ articles were dismissed from the sample because the machine-based methods of sampling could not discern that they were about more than video games? The study does not address how ‘non-political’ news-based media outlets become vectors of political engagement when they are used as a resource by users to rearticulate political positions within issue-based publics.