Facebook Research Critiques

Reminds me of when you had to write FB posts in third person.

Engineers at Facebook have worked to continually refine the ‘Edgerank‘ algorithm over the last five or six years. They are addressing the problem of how to winnow the 1500+ pieces of content available at any moment from “friends, people they follow and Pages” down to a more manageable 300 or so pieces of content. Two related groups have asked questions about how Edgerank functions. Marketers and the like are concerned about the ‘reach’ and ‘engagement’ of their content. Political communication researchers have been concerned about how this selection of content (1500>300) relies on certain algorithmic signals that potentially reduce the diversity of sources. These signals are social and practice-based (or what positivists would call ‘behavioral’). Whenever Facebook makes a change to its algorithm it measures success as an increase in ‘engagement’ (I’ve not seen a reported ‘failure’ of a change to the algorithm), meaning interactions by users with content, including ‘clickthrough rate’. Facebook is working to turn your attention into an economic resource by manipulating the value of your attention through your News Feed and then selling access to your News Feed to advertisers.
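To make the filtering mechanics concrete, here is a minimal sketch of a score-and-truncate feed in the style usually attributed to Edgerank. The three-factor score (affinity × weight × time decay) follows the popular marketing accounts of the algorithm; the actual signals and weights Facebook uses are not public, so the names and numbers below are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Story:
    affinity: float    # how often the viewer interacts with this source
    weight: float      # content-type weight (photo, link, status update, ...)
    time_decay: float  # recency, scaled so newer stories are closer to 1.0

def edgerank_score(story: Story) -> float:
    # The oft-cited three-factor formulation of Edgerank.
    return story.affinity * story.weight * story.time_decay

def build_feed(candidates: list[Story], limit: int = 300) -> list[Story]:
    # Roughly 1500 candidate stories in, roughly 300 ranked stories out.
    return sorted(candidates, key=edgerank_score, reverse=True)[:limit]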

The “random sample of 7000 Daily Active Users over a one-week period in July 2013” has produced many of the figures used in various online news reports on Facebook’s algorithm. Via Techcrunch

Exposure to ideologically diverse news and opinion on Facebook

Recently published research by three Facebook researchers was designed to ascertain the significance of the overall selection of content by the Edgerank algorithm. They compared two large datasets. The first dataset consisted of pieces of content shared on Facebook, specifically ‘hard’ news content. Through various techniques of text-based machine analysis they distributed these pieces of content along a single political spectrum from ‘liberal’ to ‘conservative’. This dataset was selected from “7 million distinct Web links (URLs) shared by U.S. users over a 6-month period between July 7, 2014 and January 7, 2015”. The second dataset was of 10.1 million active ‘de-identified’ individuals who ‘identified’ as ‘conservative’ or ‘liberal’. Importantly, it is not clear whether they only included ‘hard news’ articles shared by those in the second set; the data represented in the appended supplementary material suggests that this was not the case. There are therefore two ways the total aggregate Facebook activity and user base was ‘sampled’ in the research. The researchers combined these two datasets to get a third dataset of event-based activity:

This dataset included approximately 3.8 billion unique potential exposures (i.e., cases in which an individual’s friend shared hard content, regardless of whether it appeared in her News Feed), 903 million unique exposures (i.e., cases in which a link to the content appears on screen in an individual’s News Feed), and 59 million unique clicks, among users in our study.

These events — potential exposures, unique exposures and unique clicks — are what the researchers are seeking to understand: the frequency with which ‘cross-cutting’ content (i.e. content that cuts across ideological lines) appears, and the subsequent engagement of certain users with it.
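As a rough sketch of how these three event types nest, consider the following. The field names and the idea of each event carrying a ‘content side’ are my illustrative assumptions, not the study’s actual schema; the point is simply that the same cross-cutting measure can be taken at each stage of the funnel.

def cross_cutting_share(events, user_side):
    # Fraction of events whose content ideology differs from the user's.
    cutting = [e for e in events if e["content_side"] != user_side]
    return len(cutting) / len(events) if events else 0.0

def funnel_summary(potential, exposed, clicked, user_side):
    # Measuring the same quantity at each stage separates what the network
    # supplies (potential), what the algorithm shows (exposed), and what
    # the user chooses (clicked).
    return {stage: cross_cutting_share(events, user_side)
            for stage, events in [("potential", potential),
                                  ("exposed", exposed),
                                  ("clicked", clicked)]}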

The first round of critiques of this research (here, here, here and here) focuses on various aspects of the study, but all the critiques resonate with a key critical point (as compared to a critique of the study itself): that the research is industry-backed and therefore suspect. I have issues with the study and I address these below, but they are not based on it being an industry study. Is our first response to find any possible reason for being critical of Facebook’s own research simply because it is ‘Facebook’?

Is the study scientifically valid?

The four critiques that I have linked to make critical remarks about the sampling method and specifically how the dataset of de-identified politically-identifying Facebook users was selected. The main article is confusing, and it is only marginally clearer in the appendix, but it appears that both samples were validated: against the broader US-based Facebook user population and against the total set of news article URLs shared, respectively. This seems clear to me, and I am disconcerted that it is not clear to those others that have read and critiqued the study. The authors discuss validation, specifically at point 1.2 for the user population sample and point 1.4.3 for the validation of the ‘hard news’ article sample. I have my own issues with the (ridiculously) normative approach used here (the multiplicity of actual existing entries for political orientation is reduced to a single five-point continuum from liberal to conservative, just… what?), but that is not the basis of the existing critiques of the study.

Eszter Hargittai’s post at Crooked Timber is a good example. Let me reiterate that if I am wrong in how I am interpreting these critiques and the study, then I am happy to be corrected. Hargittai writes:

Not in the piece published in Science proper, but in the supplementary materials we find the following:

All Facebook users can self-report their political affiliation; 9% of U.S. users over 18 do. We mapped the top 500 political designations on a five-point, -2 (Very Liberal) to +2 (Very Conservative) ideological scale; those with no response or with responses such as “other” or “I don’t care” were not included. 46% of those who entered their political affiliation on their profiles had a response that could be mapped to this scale.

To recap, only 9% of FB users give information about their political affiliation in a way relevant here to sampling and 54% of those do so in a way that is not meaningful to determine their political affiliation. This means that only about 4% of FB users were eligible for the study. But it’s even less than that, because the user had to log in at least “4/7 days per week”, which “removes approximately 30% of users”.

Of course, every study has limitations. But sampling is too important here to be buried in supplementary materials. And the limitations of the sampling are too serious to warrant the following comment in the final paragraph of the paper:

we conclusively establish that on average in the context of Facebook, individual choices (2, 13, 15, 17) more than algorithms (3, 9) limit exposure to attitude-challenging content.

How can a sample that has not been established to be representative of Facebook users result in such a conclusive statement? And why does Science publish papers that make such claims without the necessary empirical evidence to back up the claims?

The second paragraph above continues with a further sentence suggesting that the sample was indeed validated against a sample of 79 thousand other US Facebook users. Again, I am happy to be corrected here, but this at least indicates that the study authors have attempted to do precisely what Hargittai and the other critiques are suggesting that they have not done. From the appendix of the study:

All Facebook users can self-report their political affiliation; 9% of U.S. users over 18 do. We mapped the top 500 political designations on a five-point, -2 (Very Liberal) to +2 (Very Conservative) ideological scale; those with no response or with responses such as “other” or “I don’t care” were not included. 46% of those who entered their political affiliation on their profiles had a response that could be mapped to this scale. We validated a sample of these labels against a survey of 79 thousand U.S. users in which we asked for a 5-point very-liberal to very-conservative ideological affiliation; the Spearman rank correlation between the survey responses and our labels was 0.78.

I am troubled that other scholars are so quick to condemn a study as invalid when it does not appear as if any of the critiques (at the time of writing) attempt to engage with the methods by which the study authors tested validity. Tell me it is not valid by addressing the ways the authors attempted to demonstrate validity; don’t just ignore it.
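For concreteness, here is the sampling arithmetic from the passages quoted above, plus a toy version of the rank-correlation validation the appendix describes. The percentages come from the supplementary materials; the label and survey vectors are invented stand-ins, so only the shape of the check is meaningful, not the numbers it prints.

from scipy.stats import spearmanr

# Eligibility arithmetic from the supplementary materials quoted above.
share_reporting = 0.09  # US users over 18 who self-report an affiliation
share_mappable = 0.46   # of those, responses mappable to the 5-point scale
share_active = 0.70     # remaining after the 4/7-days-per-week login filter

eligible = share_reporting * share_mappable * share_active
print(f"Eligible fraction of users: {eligible:.1%}")  # roughly 2.9%

# Validation as described in the appendix: compare assigned labels
# (-2 .. +2) against survey self-reports; the paper reports rho = 0.78.
labels = [-2, -1, 0, 1, 2, -2, 1, 0]   # invented stand-in values
survey = [-2, -1, -1, 1, 2, -1, 2, 0]  # invented stand-in values
rho, _ = spearmanr(labels, survey)
print(f"Spearman rank correlation: {rho:.2f}")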

What does the algorithm do?

A more sophisticated “It’s Not Our Fault…” critique is presented by Christian Sandvig. He notes that the study does not take into account how the presentation of News Feed posts and then ‘engagement’ with this content is a process in which the work of the Edgerank algorithms and the work of users cannot be easily separated (orig. emphasis):

What I mean to say is that there is no scenario in which “user choices” vs. “the algorithm” can be traded off, because they happen together (Fig. 3 [top]). Users select from what the algorithm already filtered for them. It is a sequence.**** I think the proper statement about these two things is that they’re both bad — they both increase polarization and selectivity. As I said above, the algorithm appears to modestly increase the selectivity of users.

And the footnote:

**** In fact, algorithm and user form a coupled system of at least two feedback loops. But that’s not helpful to measure “amount” in the way the study wants to, so I’ll just tuck it away down here.

A “coupled system of at least two feedback loops”, indeed. At least one of those feedback loops ‘begins’ with the way that users form social networks — that is to say, ‘friend’ other users. Why is this important? Our Facebook ‘friends’ (and Pages and advertisements, etc.) serve as the source of the content we are exposed to. Users choose to friend other users (or Pages, Groups, etc.) and then select from the pieces of content these other users (and Pages, advertisements, etc.) share to their networks. That is why I began this post with a brief explanation of the way the Edgerank algorithm works. It filters an average of 1500 possible posts down to an average of 300. Sandvig’s assertion that “[u]sers select from what the algorithm already filtered for them” is therefore only partially true. The Facebook researchers assume that Facebook users have chosen the sources of news-based content that can contribute to their feed. This is a complex set of negotiations around who or what has the ability, and then the likelihood, of appearing in one’s feed (or what could be described as all the options for organising the conditions of possibility for how content appears in one’s News Feed).

The study tests the work of the algorithm by comparing the ideological consistency of one’s social networks with the ideological orientation of the stories presented, and of the news stories’ respective news-based media enterprises. The study tests the hypothesis that your ideologically-oriented ‘friends’ will share ideologically-aligned content. Is the number of stories from across the ideological range — liberal to conservative — presented (based on an analysis of the ideological orientation of each news-based media enterprise’s URL) different to the apparent ideological homophily of your social network? If so, then this is the work of the algorithm. The study finds that the algorithm works differently for liberal and conservative oriented users.
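Put as a back-of-the-envelope calculation, the ‘work of the algorithm’ in the study is the gap between the ideological mix of what friends shared and the mix of what the ranked feed actually displayed. The sketch below is my rendering of that comparison; the input figures are placeholders, not the paper’s reported values.

def algorithmic_suppression(potential_cutting: float,
                            exposed_cutting: float) -> float:
    # Relative reduction in cross-cutting content attributable to ranking:
    # 0.0 means the feed mirrors the network; larger values mean the
    # algorithm shows proportionally less cross-cutting content than
    # friends actually shared. Computed separately per ideological group.
    return 1.0 - exposed_cutting / potential_cutting

# Placeholder inputs: share of cross-cutting items among potential
# exposures vs actual News Feed exposures, for two hypothetical groups.
print(algorithmic_suppression(0.35, 0.33))
print(algorithmic_suppression(0.24, 0.22))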

Nathan Jurgenson spins this into an interpretation of how algorithms govern our behaviour:

For example, that the newsfeed algorithm suppresses ideologically cross cutting news to a non-trivial degree teaches individuals to not share as much cross cutting news. By making the newsfeed an algorithm, Facebook enters users into a competition to be seen. If you don’t get “likes” and attention with what you share, your content will subsequently be seen even less, and thus you and your voice and presence is lessened. To post without likes means few are seeing your post, so there is little point in posting. We want likes because we want to be seen.

Are ‘likes’ the only signal we have that helps shape our online behaviour? No. Offline feedback is an obvious one. What about cross-platform feedback loops? Most of what I talk about on Facebook nowadays consists of content posted by others on other social media networks. We have multiple ‘thermostats’ for gauging the appropriateness and inappropriateness of posts in terms of attention, morality, sociality, cultural value, etc. I agree with Jurgenson when he endorses Jay Rosen’s observation that “It simply isn’t true that an algorithmic filter can be designed to remove the designers from the equation.” A valid way of testing this has not been developed yet.

The weird thing about this study is that from a commercial point of view Facebook should want to increase the efficacy of the Edgerank algorithms as much as possible, because it is the principal method for manipulating the value of ‘visibility’ in each user’s News Feed (through frequency/competition and position). Previous research by Facebook has sought to explore the relative value of social networks as compared to the diversity of content, including a project that investigated the network value of weak-tie social relationships.

Effect of Hard and Soft News vs the Work of Publics

What is my critique? All of the critiques mention that the Facebook research, from a certain perspective, has produced findings that are not really that surprising, because they largely confirm what we already understand about how people choose ideological content. A bigger problem for me is the hyper-normative classification of ‘hard’ and ‘soft’ news, as it obscures part of what makes this kind of research actually very interesting. For example, from the list of 20 stories provided as an example of hard and soft news, at least two of the ‘soft’ news stories are not ‘soft’ news stories by anyone’s definition. From the appendix (page 15):

  • Protesters are expected to gather in downtown Greenville Sunday afternoon to stage a Die In along Main Street …
  • Help us reach 1,000,000 signatures today, telling LEGO to ditch Shell and their dirty Arctic oil!

I did a Google search for the above text. One is a “die in” held as a protest over the death of Eric Garner. The other is a Greenpeace USA campaign.

There are at least two problems for any study that seeks to classify news-based media content according to normative hard and soft news distinctions when working to isolate how contemporary social media platforms have affected democracy:

1. The work of ‘politics’ (or ‘democracy’) does not only happen because of ‘hard news’. This is an old critique, but one that has been granted new life in studies of online publics. The ‘Die-In’ example is particularly important in this context. It is a story on a Fox News affiliate, and I have only been able to find the exact words provided in the appendix by the study authors to refer to this article on Fox News-based sites. Fox News is understood to be ‘conservative’ in the study (table S3 of the appendix), and yet the piece on the ‘Die-In’ protest does not contain any specific examples of conservative framing. It is in fact a straightforward ‘hard news’ piece on the protest, one that I would actually interpret as journalistically sympathetic towards the protests. How many stories were classified as ‘conservative’ simply because they appear on a Fox News-based URL? How many other allegedly ‘soft news’ stories were not actually soft news at all?

2. Why is ‘cross cutting’ framed only along ideological lines of content and users, when it is clear that allegedly ‘soft news’ outlets can cover ‘political topics’ that more or less impact ‘democracy’? In the broadcast and print era of political communication, end users had far less participatory control over the reproduction of issue-based publics. They used ‘news’ as a social resource to isolate differences with others, to argue, to understand their relative place in the world, etc. Of profound importance in the formation of online publics is the way that this work (call it ‘politics’ or not) takes over the front stage in what have been normatively understood as non-political domains. How many times have you had ‘political’ discussions in non-political forums? Or, more importantly for the current study, how many ‘Gamergate’ articles were dismissed from the sample because the machine-based methods of sampling could not discern that they were about more than video games? The study does not address how ‘non-political’ news-based media outlets become vectors of political engagement when they are used as a resource by users to rearticulate political positions within issue-based publics.

The Australian Newspaper Outrage Cycle

Media editor of The Australian, Sharri Markson, has produced an article titled ‘Activism a threat to journalism‘. In it she draws on sources to argue that ‘activist journalism academics’ on ‘social media’ are a threat to journalism. She paraphrases her boss and Australian newspaper editor, Chris Mitchell:

Editor-in-chief of The Australian, Chris Mitchell, said the greatest threat to journalism was not the internet or governments and press councils trying to limit free speech, but the rise of the activist journalist over the past 25 years and the privileging of the views of activist groups over the views of the wider community.

Worse than the figure of the ‘activist journalist’ is the ‘modern journalism academic’. Here Markson introduces a Mitchell quote so as to describe the ‘modern journalism academic’ as someone with opinions on political issues:

Mr Mitchell, who has edited newspapers for more than 20 years, said media academics who were vocal about ideological issues on social media were part of the problem.

“This is at the heart of my disdain for modern journalism academics. And anyone who watches their Twitter feeds as I do will know I am correct,’’ he said.

Tens of thousands of people, including journalism students and those starting their career in the industry, follow media academics Jenna Price, Wendy Bacon and journalist Margo Kingston on Twitter. All are opinionated on political issues.

Through its Media section the Australian newspaper is running a small-scale ‘moral panic’ about the loss of efficacy of legacy media outlets, like the print-based Australian newspaper. Most of the people who work at the Australian newspaper have been to university and would more than likely have come across the concept of a moral panic. Even if they haven’t, as savvy media operators they should be familiar with the concept.

The concept of the ‘moral panic’ once belonged to the academic discipline of sociology, but has now largely leaked into everyday language. A moral panic is a diagnostic tool used to understand how fears and anxieties experienced by a social group, often about social change, are projected onto and become fixated around what is called a ‘folk devil’.

A ‘folk devil’ is a social figure who may be represented by actual people, but functions to gather fear and anxiety. I have a book chapter on the folk devil figure of the ‘hoon’. There are actual ‘hoons’ who are a road safety issue, but the hoon moral panics that swept across Australia 10 years ago were completely out of proportion to the actual risk presented by hoons. The figure of the hoon represented fears and anxieties about how young people use public space particularly in areas with high retiree and tourist populations.

Clearly, the ‘activist journalist’ and ‘modern journalism academic’ are the folk devil figures. What fears and anxieties do ‘activist journalists’ and ‘modern journalism academics’ represent? ‘Social media’ is used as a collective term in Markson’s piece to describe technologies and social practices that threaten not only the commercial existence of the Australian newspaper, but also its existential purpose. As Crikey reported last week, the Australian newspaper is losing money hand over fist, but I think this ongoing effort to attack ‘activist journalists’ and ‘modern journalism academics’ indicates that the anxiety has a greater purchase than mere commercial imperatives in the Australian newspaper workplace.

[Image: An example of ‘print enthusiast’ Sharri Markson’s advocacy work on social media.]

Markson has been a vocal activist for print-based publication and it is clear from her advocacy work on social media that she is a ‘print media’ enthusiast. Indeed, Markson and Mitchell could be described as the ‘moral entrepreneurs‘ of the ‘moral panic’ in this particular example. A ‘moral entrepreneur’ is a person or group of people who advocate and bring attention to a particular issue for the purposes of trying to effect change. In traditional moral panic theory this is largely local politicians who try to effect legislative change to compensate for the social changes that triggered the moral panic in the first place.

The Australian newspaper’s ongoing response to the perceived existential threat of ‘social media’ (an inaccurate collective term for far more complex and longer-term shifts in the media industry) is a useful example for thinking about the cyclical character of these outbursts. They are small-scale moral panics because they never really spread beyond a limited number of moral entrepreneurs. The latest round is merely another example of the media-based culture wars that began with the so-called ‘media wars‘ in the late 1990s. Again, journalism academics were central in the conflict over what counted as ‘journalism’ and/or ‘news’. More recently, the Australian newspaper attacked journalism programs and their graduates.

The ‘Outrage Cycle’

The concept of a ‘moral panic’ is a bit clunky and doesn’t really capture the cyclical character of these ideological battles over perceived existential threats. Stanley Cohen, creator of the ‘moral panic’ concept, included some critical comments about the concept in a revised introduction to the 2002 third edition of his iconic Folk Devils and Moral Panics. About the possibility of a “permanent moral panic” Cohen writes:

A panic, by definition, is self-limiting, temporary and spasmodic, a splutter of rage which burns itself out. Every now and then speeches, TV documentaries, trials, parliamentary debates, headlines and editorials cluster into the peculiar mode of managing information and expressing indignation that we call a moral panic. Each one may draw on the same stratum of political morality and cultural unease and — much like Foucault’s micro-systems of power — have a similar logic and internal rhythm. Successful moral panics owe their appeal to their ability to find points of resonance with wider anxieties. But each appeal is a sleight of hand, magic without a magician. (xxx)

A useful model for understanding the cyclical character of the relation between anxiety (or what we call ‘affect’), greater media attention (or what we call, after Foucault, ‘visibility’) and an exaggerated sense of social norms and expectations is Gartner’s ‘Hype Cycle’.

[Image: Gartner’s Hype Cycle]

It is not a ‘theoretical’ or even a ‘scientific’ tool; rather, it serves as a kind of rule of thumb about the reception of technological change for the purposes of creating business intelligence: new technologies tend to be hyped, so take this into account when making business decisions about the risks of investment. (Each year I use the ‘Hype Cycle’ to introduce my third-year unit on technological change; the way it represents technology is useful for understanding social relations and technology beyond technology being an ‘object’.) There is something similar going on with the Australian newspaper’s constant preoccupation with other journalists and in particular the role of journalism academics in society. Rather than the giddy ‘hype’ of the tech press and enthusiasts about technological change, the Australian newspaper’s cycle is organised around ‘outrage’. The Australian newspaper’s ‘Outrage Cycle’ is a useful way to frame how Western societies constantly mobilise to engage with perceived existential threats. The actual curve of the ‘Hype Cycle’ itself is less important than the cyclical character of trigger and response, which is also apparent in ‘moral panic’ theory:

[Image: The ‘Outrage Cycle’, 2014]

I’ve changed the ‘zones’ of the Hype Cycle. ‘Maturity’ did not seem like the most appropriate measure for the X-axis, so I changed it to ‘time’, which Gartner also sometimes uses. I’ve made a table for ease of reference:

Hype Cycle → Outrage Cycle
Technology Trigger → Existential Threat
Peak of Inflated Expectations → Peak of Confected Outrage
Trough of Disillusionment → Trough of Realism
Slope of Enlightenment → Slope of Conservatism
Plateau of Productivity → Plateau of Social Norms

Existential threat: In the case of the Australian newspaper, the existential threat is not so much activist journalists and modern journalism academics, but the apparent dire commercial position of the newspaper and the accelerated decline in social importance of a national newspaper. The world is changing around the newspaper and it currently survives because of cross-funding arrangements from other sections of News Corp. The moral entrepreneurs in this case are fighting for the very existence of ‘print’ and the institutional social relations that ‘print’ once enjoyed. A second example of this involves ‘online piracy’, which serves as a perceived existential threat to the current composition of media distribution companies.

Peak of Confected Outrage: It is unclear who, besides employees of News Corp, is actually outraged about so-called ‘activist journalists’ and ‘modern journalism academics’ in general. There are specific cases, just as with ‘moral panics’, where specific people have triggered the ire of some social groups. They serve as representative ‘folk devils’ for an entire social identity. Similarly, ‘pirates’ serve as an example of ‘bad internet users’ who are part of the disruptions of the legacy media industry. There is a more sophisticated point to be made about reporting on ‘outrage’ and other affective states like ‘fear’ and ‘anxiety’: they become their own sources of newsworthiness.

Trough of Realism: This is the point at which proponents have to ‘face reality’. In the case of the Australian newspaper, this is where legacy media advocates face up to the unfortunate reality of the shifting media industry. It is not clear to me, at least in this example, that this will actually happen. (Perhaps after the Australian newspaper folds?) In terms of ‘online piracy’, facing reality includes companies like Foxtel currently working to create online client versions of their pay TV business.

Slope of Conservatism: In Gartner’s original version, technologies become adopted and companies learn how to use them appropriately. In the ‘Outrage Cycle’ the Slope of Conservatism is ironically named as it signals social change. In some ways, Markson’s advocacy of ‘print’ is a bad example of this. A better example is the way sports fans learn how to adapt to the commodification of broadcast sporting events.

Plateau of Social Norms: The constant change in social values and relations that has characterised Western societies for the last 300 years continues unabated, indicated by the increasing ‘liberalisation’ of normative social values, but societies often pass thresholds of organisational composition where certain norms become dominant. Heterosexual patriarchal social values and racist social values were normative up until the postwar period in Australia; then they began a very slow process of changing, and we are still in the midst of these shifts. Most people who work in the media industry are learning to operate in the new norms that characterise contemporary expectations regarding the production, distribution/access and consumption of media and journalistic content. A recent example of this is the popularity of the ‘home theatre’ as the latest evolution of the domestic cinema culture that became part of mass popular culture with the VCR.

The ‘Outrage Cycle’ as a Business Model

In our editorial introduction to the recent ‘Trolling’ special issue of Fibreculture Journal, my colleagues Jason Wilson, Christian McCrea and I wrote:

Major media corporations and tech giants have become bogged down in nymwars, post-hoc jerry-rigging and outright comment bans as they attempt to erase conflict around perennially divisive topics. All the while, as media companies are all too happy to trade on clickbait and outrage, there’s a suspicion that they have appropriated and mobilised the figure of the troll in order to constrain a new outpouring of political speech. Trolling has perhaps displaced pornography as the obscenity which underwrites the demand that the Internet be brought under control.

Jason in particular has emphasised the normative character of particular kinds of outrage. On the topic of a recent research report from the respected Pew Centre about the normative effect of social media, Jason wrote for the Guardian ‘newspaper’:

In the midst of social media’s perpetual flurries of outrage, we teach one another that the range of acceptable opinion is small, that we are individually responsible for comporting ourselves within these limits, and that the negative consequences are unpredictable, and potentially catastrophic. Accepting cues – from media, government and other authorities – about the dangers of incivility and extremism, we monitor each other’s conduct, ensuring that it doesn’t cross any arbitrary lines.

We can read the perpetual Outrage Cycle of the Australian newspaper as a machine for the production of new normative social values. Without being subsidised by other business areas of the News Corp enterprise, the Australian newspaper would be out of business, so to say that the Australian will inevitably fail is to miss the point that it is already in a state of constant ‘fail’. Unless someone thinks that the Australian newspaper will actually become profitable again (and will do so while its editor-in-chief and media editor are advocating for ‘print’), the social function of the Australian newspaper is not to make money as a commercial journalistic enterprise but to serve a social role that reinforces what its employees perceive to be normative social values.

The Australian newspaper and other News Corp print-based products seem to be currently organised around using this ‘Outrage Cycle’ as a business model. Isolate a perceived existential threat (religion, class difference, education, etc.) and then represent this on the front page of newspapers in such a way as to create feelings of fear, anxiety and outrage in the community. We know that they do not aim merely to represent and report on this fear, anxiety and outrage, because otherwise their front pages would be full of articles about the readers of their own newspapers.

Rough Notes on the Techno-Aesthetics of Cattle

Other permutations of the title of this post could have been techno-aesthetics of ‘living standards’ or techno-aesthetics of ‘the future’.

Mike Konczal’s piece in The New Inquiry on the work of ‘standardization’ in processes of ‘financialization’ was shared across my social networks the other day. In it he suggests that financial markets have in part attempted to solve a philosophical problem thousands of years old:

Are there only particular, individual, material things out there, with generic names arising only from social conventions? Or are there ideal Platonic universal entities, which exist separately from individual iterations of them? The financial system that has evolved in the past 150 years alongside capitalism in part attempts to resolve this question.

Hogwash.

Konczal tells an interesting story of the process through which standardising previously non-standardised goods meant that these goods could be traded on financial markets. Does the process of standardising a good therefore lead to the material embodiment of a Platonic ideal? No, of course not.

Konczal’s argument is more sophisticated than this because it is concerned with relations between the present and the future. The Platonic ideal of standardised cattle does not exist in the present but on the edge of the present, in the traded future.

Let’s look at the Chicago Mercantile Exchange’s rulebook for a Live Cattle Future, specifically the legal content for what qualifies as a “deliverable” cattle. First off, “No individual animal weighing less than 1,050 pounds or more than 1,500 pounds” shall be deliverable as a cattle. “Unmerchantable” cattle, such as those that are “crippled, sick, obviously damaged or bruised,” are not acceptable. Graders are on standby to ensure that these judgments are satisfactorily made.

Pick any other commodity, and you’ll find the contract that similarly marks what the ideal form of it should be. […]

The system of standardization in futures contracts resolved the particular into the general and came to be heralded as a major financial innovation. The name of the thing produced the thing, rather than the thing producing the name: nominalism vs. realism solved.

‘Ideal form’ in the sense of a Platonic ideal form? Nope.
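A toy validator makes the point: the contract’s ‘ideal cattle’ is nothing more than a checklist applied to actual animals. The weight thresholds below follow the CME rules quoted above; the function itself is my illustrative assumption, a sketch rather than anything in the rulebook.

def is_deliverable(weight_lbs: float, unmerchantable: bool) -> bool:
    # 'Unmerchantable' covers cattle that are crippled, sick, or obviously
    # damaged or bruised, per the quoted rulebook.
    if unmerchantable:
        return False
    # No individual animal under 1,050 or over 1,500 pounds is deliverable.
    return 1050 <= weight_lbs <= 1500

print(is_deliverable(1200, False))  # True: counts as 'cattle' for delivery
print(is_deliverable(980, False))   # False: the rule, not a form, decides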

Nietzsche’s “On Truth and Falsity” takes aim at this problem of the relation between the infinite variability of actual materiality and the anthropomorphic drive for ‘truth’ in speech and ‘ideas’, or what in this context Konczal calls a ‘standard’. Ideas do not originate from an ideal, but through a process of equating the unequal:

Every word becomes at once an idea not by having, as one might presume, to serve as a reminder for the original experience happening but once and absolutely individualised, to which experience such word owes its origin, no, but by having simultaneously to fit innumerable, more or less similar (which really means never equal, therefore altogether unequal) cases. Every idea originates through equating the unequal. […]

The disregarding of the individual and real furnishes us with the idea, as it likewise also gives us the form; whereas nature knows of no forms and ideas, and therefore knows no species but only an x, to us inaccessible and indefinable. For our antithesis of individual and species is anthropomorphic too and does not come from the essence of things, although on the other hand we do not dare to say that it does not correspond to it; for that would be a dogmatic assertion and as such just as undemonstrable as its contrary. […]

His procedure is to apply man as the measure of all things, whereby he starts from the error of believing that he has these things immediately before him as pure objects. He therefore forgets that the original metaphors of perception are metaphors, and takes them for the things themselves.

Most interpretations of Nietzsche have focused on what is called the implicit ‘perspectivism’ of his position on truth. I am interested in the non-anthropomorphic “original experience happening but once and absolutely individualised” and how this relates to what Duns Scotus called a ‘haecceity’ and Gilbert Simondon called a process of individuation. One aspect of individuation often forgotten is that it describes not just an ‘individual’ (a person, a cow, anything) but also the ‘environment’ or context within which the individual is individuated. One way to interpret this is through what Simondon called techno-aesthetics: an analysis of the relation between an individual and its environment.

Techno-aesthetics attends not to the aesthetics of forms (ideal or otherwise) but to the regularity of singular points through which the individual-environment relation is composed and the individual individuated. In related work Simondon explored the very long historical shifts that led to the emergence of technology and religion from a “primitive magical unity” as the human being’s first mode of being. Primitive magical unity is characterised by an immobile connection of singular way-points, embodied in mountains and the like, whereby the mountain serves as a conduit to an extra-human realm. Religion produces a new ground, while technology mobilises the singular relation itself, and technicity is a kind of embodied relational index of this process.

The techno-aesthetics of cattle futures is not concerned with the ideal form of cattle as discursively embodied in legal rules but with, firstly, the existing (past) process of individuation through which cattle are individuated and, secondly, the way in which ‘futures’ serve as a connection between this existing (past) process of individuation and another, future process of individuation. Experience-based knowledges are implicit here: an expert ‘cattle reader’, for example, can read the process of individuation off a given herd of cattle.

What is the second process of individuation? It is the deployment of the cattle as socio-technology to individuate a set of relations that we call a ‘market’. Traders of cattle futures do not want ‘ideal cattle’; they want an instrument that allows them to pursue the individuation of a second market that will ‘consume’ the cattle (in reality, they are merely the next linkage in a series of Latour’s mediators). Inherent to all this is a legally sanctioned form of trust, which Nietzsche suggested underpins the evolution of ‘truth’. Massumi describes the affective dimension of this connection between two processes of individuation as an ‘operational linkage’. Consumers are caught up in this process too, as the flipside of the individuated market. The consumers’ affective relation is talked about in economics as ‘confidence’.

I am being an aleatory materialist here. There is no ‘ideal’ anything. 

Konczal of course recognises this, in particular when he turns his attention to the failed attempt to ‘financialise’ toxic home loans:

Not only were these contracts designed to make the bad-mortgage future, they were also ill-prepared for the contingencies they pretended to tame and master. When the housing market collapsed, the creators of these contracts lacked the thorough knowledge of the mortgage contracts within them—highly individualized relations between lenders and borrowers, each with their own nuances—that would have been necessary to recover some of their value.

In this context the risk/opportunity nexus serves as the operational linkage between (at least) two processes of individuation. What Konczal has isolated is not the apparent attempt of bankers to ‘solve’ a problem of ideational ontology many thousands of years old, but the specific failure of bankers to, firstly, appreciate the process of individuation by which ‘risks’ (and, by extension, ‘opportunities’) are created, and, secondly, even if they did appreciate this, their lack of the operational “knowledge of the […] highly individualized relations between lenders and borrowers, each with their own nuances”. Or as Konczal puts it more bluntly: “They proved to be farmers who couldn’t tell cows from cow shit.”

What is ‘theory’?

Steven Muecke provides a description of what theory is in his review of Morton’s Hyperobjects:

What he does is “theory,” which is what high-flying professors of English write when they are not training people to read literature. Those who read Terry Eagleton’s Literary Theory in college will be familiar with the genre. It comprises difficult material made a little more accessible, and even enjoyable, via rhetorical flourishes, brilliant and breathtaking connections (Marx, God, Wordsworth, and cornflakes might appear in the same sentence), and sometimes it includes combat sports, as rival critical theories are pummelled into the ground.

Theory is not an academic discipline. Philosophers reading Hyperobjects might groan and protest (see Nathan Brown’s review of Morton’s recent Realist Magic), but Morton is not doing philosophy, he is sampling it. Likewise with the most recent advances in theoretical physics, appearing in this book in spades, along with some writing about avant-garde arts and music. It’s a strange mash-up, this theory stuff. You don’t read theory to advance the discipline you might belong to — you read it for stimulation…

Talking about world views

In the latest Partially Examined Life podcast, on Thomas Kuhn’s notion of scientific progress, Mark refers to the previous Deleuze and Guattari What is Philosophy? podcast and makes a connection between Kuhn’s notion of ‘paradigm’ and Deleuze and Guattari’s notion of a ‘plane of immanence’. Below are some rough notes on this connection, to push it a bit further into some of Deleuze and Guattari’s other works and to connect Mark’s reference to ‘planes of immanence’ in the context of Kuhnian paradigms with Deleuze and Guattari’s notion of ‘collective assemblages of enunciation’.

I have roughly transcribed the section from the podcast below (between the time code references):

[1:02:10]

[Discussing how the term ‘paradigm’ has entered into non-technical discourse to refer to what could be called a ‘world view’. ‘Technical’ in this context means following Kuhn’s definition.]

Wes: Most people use it as synonymous with ‘world view’, which… there’s an argument for that, but really it’s more like ‘exemplar’; it’s an ‘example’.

Mark: I would just like some more systematic language — some philosophy — to tell me how to talk more intelligently about ‘world views’ in this nebulous way that we actually want to talk about it. There perhaps a modern [inaudible] evolution of this idea in the Deleuze [and Guattari] book that we read, When he’s talking about ‘planes of immanence’ there’s a certain commonality — granted he’s talking about ‘planes of immanence’ as what defines a ‘philosophy’ and what defines a ‘philosophy’ is defined by the concepts and once you have the ‘concepts’ established maybe you could see that as providing a paradigm for science, which remember [Mark shifts to his wise-cracking smart-ass voice] he sees as just providing ‘functions’ its just mapping one value onto another as if you’ve got the mapping rule already stored in your paradigm there and your plane of immanence…  and so science on that model is just what Kuhn is describing normal science as — is just filling in the details, is finding out what each question maps to in your set-up. [But] the plane of immanence that we had so much trouble with… maybe its just my desire to make some sense out of the Deleuze retrospectively, [Wes: Well..] but maybe paradigm is a good start for that…

Wes: That sounds like more a conceptual scheme which I think is different to a paradigm. [Mark: Hmmm] A conceptual scheme includes — yeah — a set of concepts for talking about the world and certain assumptions, but a paradigm I think as an example gets at some of the more less conceptual stuff, some of the tacit knowledge, some of the ways… maybe it’s more like — what’s Wittgenstein’s phrase?

Mark: Mode of life?

Wes: Yeah, and part of it’s about what’s relevant to people, so its not just about what concepts they’re deploying, but what’s about what’s interesting and relevant.

[1:04:07]

I have taught Kuhn’s work to first-year undergraduates in a large introductory ‘research methods’ unit that is taught to every incoming student in our faculty of arts and design. The purpose of the unit is to introduce students to ‘research methods’ in the humanities. I draw on Kuhn’s work to illustrate how the practice and meaning of the word ‘research’ in a contemporary Australian university context is largely determined by scientific discourse. I trace the way ‘research’ is defined from our university’s policies on research, to the federal government’s policies, to the guidelines provided by the OECD’s Frascati Manual.

The contemporary Frascati Manual is an interesting document, as it attempts to bridge the gap between the ‘basic’ and ‘applied’ research of the sciences (p. 30) and the non-scientific research of the humanities. At stake is the distinction between the practice of what could be described as ‘routine work’ and the practice of ‘research’. ‘Research’ in this context is any practice that is worthy of non-routine investment funding. Why is this important for the OECD? Because research in the humanities can have productivity outcomes. “For the social sciences and humanities,” the manual suggests, “an appreciable element of novelty or a resolution of scientific/technological uncertainty is again a useful criterion for defining the boundary between R&D and related (routine) scientific activities” (p. 48).

When introducing this to my first-year students I use it to talk about what this ‘resolution of scientific/technological uncertainty’ means. I frame this discussion in terms of matching certain kinds of research practice with certain kinds of epistemological uncertainty. The students already do research to address a certain kind of uncertainty. What films are showing at the cinema this weekend? What gift should I give to someone dear to me? This work of everyday research relates to the kinds of tacit knowledge that I think Wes was referring to. I introduce the notion of ‘research’ in this manner so as to help students realise that the epistemological process of working to resolve uncertainty is not some special thing that academics do, but something we are all familiar with as part of everyday life.

The next manoeuvre is to posit undergraduate research as part of a process of becoming familiar with another set of professional practices for identifying the ‘uncertainties’ that belong to a given scholarly or research-centred field. I teach Kuhn’s notion of paradigm in terms of being one way to describe (make ‘sense’ of) an epistemological process for the resolution of uncertainty. The ‘paradigm’ is the set of agreed upon practices and assumptions for reproducing the conditions by which such uncertainties are identified as such (‘certain uncertainties’ to riff off Rumsfeld). From my lecture notes, I note that ‘paradigms’ are compositions of relations that:

Create avenues of inquiry.
Formulate questions.
Select methods with which to examine questions.
Define areas of relevance.

I define ‘expert researcher’ for my students as someone who knows exactly what they do not know and who belongs to a ‘scholarly field’ that has specific methods for defining what is not known in terms of what is known. (One reason for this is to try to shunt students out of the debilitating circuitous logic of gaming education for grades and resurrect a sense of wonder about the world.)

The ‘reproduction’ part in defining paradigms is therefore important, as Kuhn also identified the so-called political aspect of scientific paradigms: they are not simply sustained by the quality of the knowledge produced by research, but by the professional conditions under which that knowledge, and the producers of that knowledge, are judged worthy of belonging. This has been a roundabout way of getting to the substance of this post, which is Mark’s reference to Deleuze and Guattari’s notion of a ‘plane of immanence’. Rather than a ‘plane of immanence’, I think perhaps a better connection is to Deleuze and Guattari’s notion of a ‘collective assemblage of enunciation’.

A ‘plane of immanence’ is the ‘quasi-causal’ grounds by which thought is possible. (That is an esoteric post-Kantian pun.)  ‘Quasi-cause’ comes from Deleuze’s work The Logic of Sense. It is an attempt to address the problem of how ‘sense’ (the logic of meaning) arises from what is basically the cosmological nonsense of the universe. I won’t pursue this too much, but the way humans make sense of the world normally implies some kind of realism. This ‘realism’ is in itself not natural, and can be described as a collective system of reference.

In What is Philosophy? Deleuze and Guattari characterise ‘science’ as the creation of what they call ‘functives’; a ‘functive’ is the basic element of a function and it describes some aspect of the way the universe works. What makes thought possible is the complex individuation of a thought through the body of a sentient being. Cognitive science is doing its best to resolve this problem. Individuation in this context follows a causally normative path of individuation. This leads to that. The process of cognition.

What makes thought sensible is a philosophical problem: the seemingly counter-intuitive movement of thought in the context of the expression of thought, whereby the future affects the present. That is led by this. In Difference & Repetition Deleuze draws on Nietzsche’s notion of the ‘dark precursor’ to describe this movement. On the surface, non-linear causality seems like a radical idea. In practice, we do this work every day. Instead of creating momentous existential crises, most of the time we delegate these causally circular movements of thought to metaphysical placeholders. We collectively describe these as ‘assumptions’.

Indeed, Deleuze separates the cosmos into bodies and the passions of bodies (causes), and expressions and the sense of expressions (effects), and associates with them two orders of causality (or ‘two floors’ in the existential architecture of reality in The Fold): one which belongs to the world and is shared by every single thing (body) in the world, and one which can only be inferred by implication in any expression of sense. Deleuze’s concept of the event is a conceptual attempt to group together the dynamic quasi-causal expression of ‘sense’, which is why the ‘event’ is central to The Logic of Sense.

Language and culture imply a shared sense of quasi-causality for those thinking beings who belong to that culture and use that language. Cultural expression can therefore be understood as an elaborate method for the dissemination of assumptions. Interesting to think about in this context is ‘poetics’ as a research practice — that is, poetics as a method for identifying or discovering new assumptions. For those who work in the creative industries, perhaps it is worth thinking about what assumptions you are helping to disseminate.

The detour through ‘quasi-cause’ was necessary to explain the notion of a collective assemblage of enunciation and why it is difficult to explain how a new paradigm emerges from an old paradigm. The notes to the PEL podcast on Kuhn describe this as an ‘evolutionary version of Kantianism’. But the problem with this is that the new paradigm does not emerge from the old paradigm; the point of the notion of the paradigm is that it describes practices that ward off the development of new paradigms. Hence the non-scientific problem with the concept of the paradigm: the difficulty of describing how a new paradigm emerges from the old paradigm before that ‘new’ paradigm exists in actuality.

In A Thousand Plateaus Deleuze and Guattari develop the concept of ‘agencement’, which is translated by Massumi as ‘assemblage’. There are two sides to every assemblage: a machinic assemblage of bodies and a collective assemblage of enunciation. There are two orders of causality to every assemblage. The linear movement of causal relations belonging to bodies and the ‘quasi-causal’ relations of thought. Each fold of ‘thought’ in this context is the process of transversal distribution of sense in the world. Sense is distributed from the future; it is the superposition of one moment upon the next. One way to think about this is that every paradigm (as a concrescence of singular points) already exists quasi-causally.

A ‘world view’ therefore has two ontological levels: the world and the view. Language is important because each singular expression implies a monadological view that can be inferred. More important is that, even though sentience can be defined by the existential capacity to make assumptions, it is, as Nietzsche was at pains to point out, a seemingly unique human trait to delegate this capacity for making assumptions (or what he called ‘truths’) to our culture. Nietzsche was worried about the manifestation of ignorance as the acceptance of such assumptions, while admiring the near-suicidal pursuit of overcoming such assumption-producing cultural mechanisms.

Which leads to the question, in what ways are humans not sentient? Is your world view making you non-sentient? If non-sentient life is defined as the delegation of the capacity for making assumptions to genetics, then what are the assumptions we have delegated to our biology or through our biology (by way of evolutionary ‘fitness’) to our environment? 

I have purchased but not yet read Isabelle Stengers’ Thinking with Whitehead. I suspect it shall address, at least peripherally, some of these issues.