The economy of culture is, accordingly, not a description of culture as a representation of certain extra-cultural economic constraints. Rather, it is an attempt to grasp the logic of cultural development itself as an economic logic of the revaluation of values.
I am enjoying Groys’ non-market ‘economic’ interpretation of Nietzschean truth. He develops an economic conception of Nietzsche’s non-moral version of value without turning to Marxist conceptions of value that would position cultural value as a consequence of the social relation between capital and labour power.
In my True Detective essay I develop a notion of ‘meta’ so as to grapple with the epistemological displacement that occurs in the midst of a revaluation of values. I call this a ‘liminal epistemology’, which has been commodified as ‘discovery’ in contemporary ‘apps’ that help users access various kinds of cultural texts (music, written texts, phatic/social media texts, etc). The media event of True Detective (as compared to the televisual text) is interesting as it dramatises the ‘detective work’ of this liminal epistemology itself. From the introduction of my True Detective essay:
If nothing else, True Detective clearly triggers meta-detective work by the audience. The show, its inter-textual references, and non-diegetic exegetical explanations of these references produced new edges of surprise and a new sense of expectation. For example, there is a folding of the crime fiction genre into existentialist horror and a topological transformation wrought upon both. Both genres frame a passage of discovery by the characters and audience. “Discovery” has become a buzzword in user-centred design to describe the design of platforms that help users discover appropriate content, and this refers to the way users willingly embrace the delegated agency of “smart” interfaces. The liminal epistemology of discovery in meta-stable media assemblages poses answers to questions that haven’t yet been asked. The question isn’t simply asked of the characters of the show, but of the entire event itself as it repeated different elements of genres in different ways; in effect, the audience carries out meta-detective work.
The reason why I am excited about Groys’ work is that he has already isolated a similar problematic with regard to the revaluation of values. His focus so far is not animated by the same concerns as mine, but there is a similar problematic. I make it very clear that what I find most interesting about the True Detective media event is that it is part of a broader constellation of cultural texts that are all, in different ways, working through this revaluation of values. From the introduction of my essay:
In the final section I develop meta in terms of what Sianne Ngai (2012) calls a minor aesthetic category, and in this case what characterises meta as a minor aesthetic category is the way any text, object or event dramatises the suspension of cultural values. In Simondon’s terms, meta is an aesthetic category that refers to works that in some way repotentialise values, returning the “preindividual norms” of value to a state of meta-stability ready to be potentialised in a multiplicity of ways (Combes 2013: 64). As I shall explore in detail, True Detective dramatises a conflict between systems of belief and cultural value through the figures of the two main characters, Rust and Marty. In this way, “meta” signals a threshold of value (or what Nietzsche (1968) calls “transvaluation”) more often associated with nihilism.
Engineers at Facebook have worked to continually refine the ‘Edgerank’ algorithm over the last five or six years. They are addressing the problem of how to filter the 1500+ pieces of content available at any moment from “friends, people they follow and Pages” down to a more manageable 300 or so pieces of content. Questions have been asked about how Edgerank functions by two related groups. Marketers and the like are concerned about the ‘reach’ and ‘engagement’ of their content. Political communication researchers have been concerned about how this selection of content (1500>300) relies on certain algorithmic signals that potentially reduce the diversity of sources. These signals are social and practice-based (or what positivists would call ‘behavioral’). Whenever Facebook makes a change to its algorithm it measures success by the increase in ‘engagement’ (I’ve not seen a reported ‘failure’ of a change to the algorithm), meaning interactions by users with content, including ‘clickthrough rate’. Facebook is working to turn your attention into an economic resource by manipulating the value of your attention through your News Feed and then selling access to your News Feed to advertisers.
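The actual ranking model is proprietary, but the publicly described EdgeRank formulation (affinity x weight x time decay) can be sketched as a simple score-and-truncate filter. Everything below (the field names, the decay function, the weights) is illustrative, not Facebook’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float  # how often the user interacts with this source
    content_weight: float   # e.g. photos weighted above plain status updates
    age_hours: float        # how old the post is

def edge_score(post: Post) -> float:
    """Toy EdgeRank-style score: affinity x weight x time decay."""
    time_decay = 1.0 / (1.0 + post.age_hours)
    return post.author_affinity * post.content_weight * time_decay

def rank_feed(candidates: list[Post], k: int = 300) -> list[Post]:
    """Filter ~1500 candidate posts down to the top k by score."""
    return sorted(candidates, key=edge_score, reverse=True)[:k]
```

Tuning any of these weights changes which 300 posts surface, which is why every change to the algorithm can be (and is) measured against ‘engagement’.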
Exposure to ideologically diverse news and opinion on Facebook
Recently published research by three Facebook researchers was designed to ascertain the significance of the overall selection of content by the Edgerank algorithm. They compared two large datasets. The first dataset was of pieces of content shared on Facebook, specifically ‘hard’ news content. Through various techniques of text-based machine analysis they distributed these pieces of content along a single political spectrum from ‘liberal’ to ‘conservative’. This dataset was selected from “7 million distinct Web links (URLs) shared by U.S. users over a 6-month period between July 7, 2014 and January 7, 2015”. The second dataset was of 10.1 million active ‘de-identified’ individuals who ‘identified’ as ‘conservative’ or ‘liberal’. Importantly, it is not clear if they only included ‘hard news’ articles shared by those in the second set; the data represented in the appended supplementary material suggests that this was not the case. There are therefore two ways the total aggregate Facebook activity and user base was ‘sampled’ in the research. The researchers combined these two datasets to get a third dataset of event-based activity:
This dataset included approximately 3.8 billion unique potential exposures (i.e., cases in which an individual’s friend shared hard content, regardless of whether it appeared in her News Feed), 903 million unique exposures (i.e., cases in which a link to the content appears on screen in an individual’s News Feed), and 59 million unique clicks, among users in our study.
These events — potential exposures, unique exposures and unique clicks — are what the researchers are seeking to understand in terms of the frequency of appearance and then engagement by certain users with ‘cross-cutting’ content, i.e. content that cuts across ideological lines.
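As a back-of-the-envelope check (my calculation, not a figure reported by the study), the three aggregate counts quoted above imply a steep funnel from sharing to clicking:

```python
# Aggregate event counts quoted from the study.
potential_exposures = 3.8e9  # a friend shared hard news content
exposures = 903e6            # the link actually appeared on screen in a News Feed
clicks = 59e6                # the user clicked through

# Roughly a quarter of potentially visible items surfaced in a feed,
# and only a few percent of surfaced items were clicked.
exposure_rate = exposures / potential_exposures
click_rate = clicks / exposures
print(f"exposure rate: {exposure_rate:.1%}, click rate: {click_rate:.1%}")
```

That is an exposure rate of roughly 24% and a click rate of roughly 6.5%, which gives a sense of how much filtering happens before ‘engagement’ is even possible.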
The first round of critiques of this research (here, here, here and here) focuses on various aspects of the study, but all resonate with a key critical point (as compared to a critique of the study itself) that the research is industry-backed and therefore suspect. I have issues with the study and I address these below, but they are not based on it being an industry study. Is our first response to find any possible reason for being critical of Facebook’s own research simply because it is ‘Facebook’?
Is the study scientifically valid?
The four critiques that I have linked to make critical remarks about the sampling method, specifically how the dataset of de-identified politically-identifying Facebook users was selected. The main article is confusing, and it is only marginally clearer in the appendix, but it appears that both samples were validated against the broader US-based Facebook user population and the total set of news article URLs shared, respectively. This seems clear to me, and I am disconcerted that it is not clear to those others that have read and critiqued the study. The authors discuss validation, specifically in point 1.2 for the user population sample and 1.4.3 for the validation of the ‘hard news’ article sample. I have my own issues with the (ridiculously) normative approach used here (the multiplicity of actual existing entries for political orientation is reduced to a single five-point continuum of liberal and conservative, just… what?), but that is not the basis of the existing critiques of the study.
Eszter Hargittai’s post at Crooked Timber is a good example. Let me reiterate that if I am wrong with how I am interpreting these critiques and the study, then I am happy to be corrected. Hargittai writes:
The second paragraph above continues with a further sentence that suggests that the sample was indeed validated against a sample of 79 thousand other US-based Facebook users. Again, I am happy to be corrected here, but this at least indicates that the study authors have attempted to do precisely what Hargittai and the other critiques suggest they have not done. From the appendix of the study:
I am troubled that other scholars are so quick to condemn a study as invalid when none of the critiques (at the time of writing) appears to engage with the methods by which the study authors tested validity. Tell me it is not valid by addressing the ways the authors attempted to demonstrate validity; don’t just ignore them.
What does the algorithm do?
A more sophisticated “It’s Not Our Fault…” critique is presented by Christian Sandvig. He notes that the study does not take into account how the presentation of News Feed posts and subsequent ‘engagement’ with this content is a process in which the work of the Edgerank algorithms and the work of users cannot be easily separated (orig. emphasis):
What I mean to say is that there is no scenario in which “user choices” vs. “the algorithm” can be traded off, because they happen together (Fig. 3 [top]). Users select from what the algorithm already filtered for them. It is a sequence.**** I think the proper statement about these two things is that they’re both bad — they both increase polarization and selectivity. As I said above, the algorithm appears to modestly increase the selectivity of users.
And the footnote:
**** In fact, algorithm and user form a coupled system of at least two feedback loops. But that’s not helpful to measure “amount” in the way the study wants to, so I’ll just tuck it away down here.
A “coupled system of at least two feedback loops”, indeed. At least one of those feedback loops ‘begins’ with the way that users form social networks — that is to say, ‘friend’ other users. Why is this important? Our Facebook ‘friends’ (and Pages and advertisements, etc.) serve as the source of the content we are exposed to. Users choose to friend other users (or Pages, Groups, etc.) and then select from the pieces of content these other users (and Pages, advertisements, etc.) share to their networks. That is why I began this post with a brief explanation of the way the Edgerank algorithm works: it filters an average of 1500 possible posts down to an average of 300. Sandvig’s assertion that “[u]sers select from what the algorithm already filtered for them” is therefore only partially true. The Facebook researchers assume that Facebook users have chosen the sources of news-based content that can contribute to their feed. This is a complex set of negotiations around who or what has the ability, and then the likelihood, of appearing in one’s feed (or what could be described as all the options for organising the conditions of possibility for how content appears in one’s News Feed).
The study is testing the work of the algorithm by comparing the ideological consistency of one’s social networks with the ideological orientation of the stories presented and of the news stories’ respective news-based media enterprises. The study tests the hypothesis that your ideologically-oriented ‘friends’ will share ideological-aligned content. Is the number of stories from across the ideological range — liberal to conservative — presented (based on an analysis of ideological orientation of each news-based media enterprise’s URL) different to the apparent ideological homophily of your social network? If so, then this is the work of the algorithm. The study finds that the algorithm works differently for liberal and conservative oriented users.
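The comparison being tested can be sketched as a simple proportion computed at each stage of the funnel. The toy data and labels below are mine, not the study’s; the point is only to show what ‘cross-cutting’ means operationally:

```python
def cross_cutting_fraction(alignments: list[str], user_ideology: str) -> float:
    """Fraction of items whose ideological alignment differs from the user's."""
    if not alignments:
        return 0.0
    return sum(1 for a in alignments if a != user_ideology) / len(alignments)

# Toy funnel for a single hypothetical 'liberal'-identifying user.
shared_by_friends = ["liberal", "conservative", "liberal", "liberal", "conservative"]
shown_in_feed = ["liberal", "liberal", "conservative", "liberal"]
clicked = ["liberal", "liberal"]

for label, stage in [("potential exposure", shared_by_friends),
                     ("exposure", shown_in_feed),
                     ("click", clicked)]:
    print(f"{label}: {cross_cutting_fraction(stage, 'liberal'):.0%} cross-cutting")
```

In this toy data the cross-cutting share falls from 40% of what friends shared, to 25% of what surfaced in the feed, to 0% of what was clicked; the study’s question is how much of each drop is the algorithm’s doing and how much is the user’s.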
For example, that the newsfeed algorithm suppresses ideologically cross cutting news to a non-trivial degree teaches individuals to not share as much cross cutting news. By making the newsfeed an algorithm, Facebook enters users into a competition to be seen. If you don’t get “likes” and attention with what you share, your content will subsequently be seen even less, and thus you and your voice and presence is lessened. To post without likes means few are seeing your post, so there is little point in posting. We want likes because we want to be seen.
Are ‘likes’ the only signal we have that helps shape our online behaviour? No. Offline feedback is an obvious one. What about cross-platform feedback loops? Most of what I talk about on Facebook nowadays consists of content posted by others on other social media networks. We have multiple ‘thermostats’ for gauging the appropriateness and inappropriateness of posts in terms of attention, morality, sociality, cultural value, etc. I agree with Jurgenson when he endorses Jay Rosen’s observation that “It simply isn’t true that an algorithmic filter can be designed to remove the designers from the equation.” A valid way of testing this has not yet been developed.
The weird thing about this study is that, from a commercial point of view, Facebook should want to increase the efficacy of the Edgerank algorithms as much as possible, because it is the principal method for manipulating the value of ‘visibility’ in each user’s News Feed (through frequency/competition and position). Previous research by Facebook has sought to explore the relative value of social networks as compared to the diversity of content, including a project that investigated the network value of weak-tie social relationships.
Effect of Hard and Soft News vs the Work of Publics
What is my critique? All of the critiques mention that the Facebook research, from a certain perspective, has produced findings that are not really that surprising, because they largely confirm what we already understand about how people choose ideological content. A bigger problem for me is the hyper-normative classification of ‘hard’ and ‘soft’ news, as it obscures part of what makes this kind of research actually very interesting. For example, from the list of 20 stories provided as an example of hard and soft news, at least two of the ‘soft’ news stories are not ‘soft’ news stories by anyone’s definition. From the appendix (page 15):
Protesters are expected to gather in downtown Greenville Sunday afternoon to stage a Die In along Main Street …
Help us reach 1,000,000 signatures today, telling LEGO to ditch Shell and their dirty Arctic oil!
There are at least two problems for any study that seeks to classify news-based media content according to normative hard and soft news distinctions when working to isolate how contemporary social media platforms have affected democracy:
1. The work of ‘politics’ (or ‘democracy’) does not only happen because of ‘hard news’. This is an old critique, but one that has been granted new life in studies of online publics. The ‘Die-In’ example is particularly important in this context. It is a story on a Fox News affiliate, and I have only been able to find the exact words provided in the appendix by the study authors on Fox News-based sites. Fox News is understood to be ‘conservative’ in the study (table S3 of the appendix), and yet the piece on the ‘Die-In’ protest does not contain any specific examples of conservative framing. It is in fact a straightforward ‘hard news’ piece on the protest, one that I would actually interpret as journalistically sympathetic towards the protesters. How many stories were classified as ‘conservative’ simply because they appeared on a Fox News-based URL? How many other allegedly ‘soft news’ stories were not actually soft news at all?
2. Why is ‘cross cutting’ framed only along ideological lines of content and users, when it is clear that allegedly ‘soft news’ outlets can cover ‘political topics’ that more or less impact ‘democracy’? In the broadcast and print era of political communication, end users had far less participatory control over the reproduction of issue-based publics. They used ‘news’ as a social resource to isolate differences with others, to argue, to understand their relative place in the world, etc. Of profound importance in the formation of online publics is the way that this work (call it ‘politics’ or not) takes over the front stage in what have been normatively understood as non-political domains. How many times have you had ‘political’ discussions in non-political forums? Or, more importantly for the current study, how many ‘Gamergate’ articles were dismissed from the sample because the machine-based methods of sampling could not discern that they were about more than video games? The study does not address how ‘non-political’ news-based media outlets become vectors of political engagement when they are used as a resource by users to rearticulate political positions within issue-based publics.
A brief passage in the essay reminded me of my Forget OOO post from almost 5 years ago encouraging graduate students not to get caught up in the internet hype of OOO. That post was triggered by Levi Bryant’s reading of ‘desiring machines’ in terms of OOO’s ‘objects’. Buchanan’s chapter addresses the use of schizoanalysis to understand how desire is productive in the context of artistic work. The passage extracted below explains better than I did why reading ‘desiring machines’ in terms of ‘objects’, as a move to somehow escape from Kantianism, is profoundly ill-advised. (Of course, there is another dimension to the below that Buchanan does not emphasise, which I indicate in my Forget OOO post pertaining to the ‘machinic’ or what I think is best described as the ‘milieu of singularities’):
Desiring-production is the process and means the psyche deploys in producing connections and links between thoughts, feelings, ideas, sensations, memories and so on that we call desiring-machines (assemblages). It only becomes visible to us in and through the machines it forms. While both these terms were abandoned by Deleuze and Guattari in subsequent writing on schizoanalysis, the thinking behind them remains germane throughout. This is by no means straightforward because Deleuze and Guattari cast their discussion of desiring-production in language drawn from Marx, which has the effect of making it seem as though they are talking about the production of physical things, which simply is not and cannot be the case. The truth of this can be seen by asking the very simple question: if desire produces, then what does it produce?
The answer isn’t physical things. The correct answer is ‘objects’ – but ‘objects’ in the form of intuitions, to use Kant’s term for the mind’s initial attempts to grasp the world (both internal and external to the psyche). That is what desire produces, objects, not physical things. Kant, Deleuze and Guattari argue, was one of the first to conceive of desire as production, but he botched things by failing to recognize that the object produced by desire is fully real. Deleuze and Guattari reject the idea that superstitions, hallucinations and fantasies belong to the alternate realm of ‘psychic reality’ as Kant would have it (Deleuze and Guattari 1983: 25). The schizophrenic has no awareness that the reality they are experiencing is not reality itself. They may be aware that they do not share the same reality as everyone else, but they see this as a failing in others rather than a flaw in themselves. If they see their long dead mother in the room with them they do not question whether this is possible or not; they aren’t troubled by any such doubts. That is the essential difference between a delusion and a hallucination. What delusionals see is what is, quite literally. If this Kantian turn by Deleuze and Guattari seems surprising, it is nevertheless confirmed by their critique of Lacan, who in their view makes essentially the same mistake as Kant in that he conceives desire as lacking a real object (for which fantasy acts as both compensation and substitute). Deleuze and Guattari describe Lacan’s work as ‘complex’, which seems to be their code word for useful but flawed (they say the same thing about Badiou). On the one hand, they credit him with discovering desiring-machines in the form of the objet petit a, but on the other hand they accuse him of smothering them under the weight of the Big O (Deleuze and Guattari 1983: 310). As Zizek is fond of saying, in the Lacanian universe fantasy supports reality.
This is because reality, as Lacan conceives it, is fundamentally deficient; it perpetually lacks a real object. If desire is conceived this way, as a support for reality, then, they argue, ‘its very nature as a real entity depends upon an “essence of lack” that produces the fantasized object. Desire thus conceived of as production, though merely the production of fantasies, has been explained perfectly by psychoanalysis’ (Deleuze and Guattari 1983: 25). But that is not how desire works. If it was, it would mean that all desire does is produce imaginary doubles of reality, creating dreamed-of objects to complement real objects. This subordinates desire to the objects it supposedly lacks, or needs, thus reducing it to an essentially secondary role. This is precisely what Deleuze was arguing against when he said that the task of philosophy is to overturn Platonism. Nothing is changed by correlating desire with need as psychoanalysis tends to do. ‘Desire is not bolstered by needs, but rather the contrary; needs are derived from desire: they are counterproducts within the real that desire produces. Lack is a countereffect of desire; it is deposited, distributed, vacuolized within a real that is natural and social’ (Deleuze and Guattari 1983: 27).
Nearly every single student in my big Introduction to Journalism lecture knew what I was talking about when I mentioned #thedress. I used it as a simple example to illustrate some core concepts for operating in a multi-platform or convergent news-based media environment.
Multi-Platform Media Event
Journalists used to be trained to develop professional expertise in a single platform. Until very recently this meant radio, television or print, and there was a period from the early to mid-2000s when ‘online’ existed as a fourth category. Now ‘digital’ modes of communication are shaping almost all others. We’ve moved from a ‘platform only’ approach, through a ‘platform first’ approach (so that TV journalists also produce text or audio, writers produce visuals, and so on), to what is called a ‘multi-platform’ (or ‘digital first’, ‘convergent’ or ‘platform free’) approach.
When we think ‘multi-platform’, we think about how the elements of a story will be delivered across media channels or platforms:
Live – presentations
Social – Facebook, Twitter, Youtube, etc.
Web – own publishing platform, podcast, video, etc.
Mobile – specific app or a mobile-optimised website
Television – broadcast, narrowcast stream, etc.
Radio – broadcast, digital, etc.
Print – ‘publication’
‘Platform’ is the word we use to describe the social and technological relation between a producer and a consumer of a certain piece of media content in the act of transmission or access. In a pre-digital world, transmission or delivery were distinct from what was transmitted.
Thinking in terms of platforms also incorporates how we ‘operate’ or ‘engage’ with content via an ‘interface’ and so on. Most Australians get their daily news from the evening broadcast television news bulletin. Recent figures indicate that most people aged 18-24 actually get their news about politics and elections from online and SNS sources, compared to broadcast TV.
#thedress is a multi-platform media event. It began on Tumblr and then quickly spread via the Buzzfeed post to Twitter and across various websites belonging to news-based media enterprises. It only makes sense if the viral, mediated character of the event is taken into account. The #thedress media event did not simply propagate; it spread at different rates and in different ways. The amplification effect of celebrities meant #thedress propagated across networks that differ by orders of magnitude in scale. Viral is a mode of distribution, but it also produces relations of visibility/exposure.
New News and Old News Conventions
Consumers of news on any platform expect the conventions of established news journalism. What are the conventions of established news journalism?
The inverted pyramid
Grammar: Active Voice, Tense
When we look at #thedress multi-platform media event we see different media outlets covered the story in different ways. Time magazine wrote the most conventional lead out of any that I have seen; the media event is the story:
I’ve only included the head, intro and first par for Time and Cosmo, and you can see already that they are far more verbose than Buzzfeed’s original post. The original Buzzfeed post rearticulated a Tumblr post, but with one important variation:
What Colors Are This Dress?
There’s a lot of debate on Tumblr about this right now, and we need to settle it.
This is important because I think I’m going insane.
Tumblr user swiked uploaded this image.
There’s a lot of debate about the color of the dress.
So let’s settle this: what colors are this dress?
68% White and Gold
32% Blue and Black
The Buzzfeed post added an ‘action’: the poll at the bottom of the post. Why is this important?
Buzzfeed, Tumblr and the Relative Value of a Page View
Some of its sponsored “story unit” ad units have clickthrough rates as high as 4% to 5%, with an average around 1.5% to 2%, BuzzFeed President Jon Steinberg says. (That’s better than the roughly 1% clickthrough rate Steinberg says he thought was good for search ads when he worked at Google.) BuzzFeed’s smaller, thumbnail ad units have clickthrough rates around 0.25%.
At BuzzFeed our mobile traffic has grown from 20% of monthly unique visitors to 40% in under a year. I see no reason why this won’t go to 70% or even 80% in couple years.
Importantly, Buzzfeed’s business model is still organised around displaying what used to be called ‘custom content’ and what is now commonly referred to as ‘native advertising’ or even ‘content marketing’ when it is a longer piece (like these Westpac sponsored posts at Junkee).
On the other hand, Tumblr is a visual platform; users are encouraged to post, favourite and reblog all kinds of content, but mostly images. For example, .gif-based pop-culture subcultures thrive on Tumblr, and Tumblr icons are those that perform gestures that are easily turned into gifs (Taylor Swift) or static images (#thedress). The new owners of Tumblr, Yahoo, are struggling to commercialise Tumblr’s booming popularity.
I had a discussion with Matt Liddy and Rosanna Ryan on Twitter this morning about the relative value of the 73 million views of the original Tumblr post versus the value of the 38 million views of the Buzzfeed post. Trying to make sense of what is of value in all this is tricky. At first glance the 73 million views of the original Tumblr post trumps the almost 38 million views of the Buzzfeed post, but how has Tumblr commercialised the relationship between users of the site and content? There is no clear commercialised relationship.
Buzzfeed’s business model is premised on a high click-through rate for their ‘native advertising’. Of key importance in all this is the often overlooked poll at the bottom of the Buzzfeed post. Almost 38 million, or even 73 million, views pale in commercial significance compared to the 3.4 million votes in the poll. Around 8.6% of the millions of people who visited the Buzzfeed article performed an action when they got there. This may not seem as impressive an action as those of the 483.2 thousand Tumblr users who reblogged the #thedress post, but the difference is that Buzzfeed has a business model that has commercialised performing an action (click-through), while Tumblr has not.
Last week I delivered the first lecture in our Introduction to Journalism unit. I am building on the material that my colleague, Caroline Fisher, developed in 2014. One of the things about teaching journalism is that every example has to be ‘up to date’. One of the things that Caroline discussed in the 2014 lecture were the predictions for 2014 as presented by the Nieman Lab.
Incorporating these predictions into a lecture is a good way to indicate to students what some professionals and experts think are going to be the big trends, changes and events in journalism for that year. (The anticipatory logic of predictions about near-future events has become a genre of journalism/media content that I briefly discuss in a forthcoming journal article. See what I did there.)
To analyse the 65 predictions for 2015 in a lecture that only goes for an hour would be almost impossible. What I did instead was to carry out a little exercise in data journalism to introduce students to the practical concepts of ‘analytics’, ‘website scraping’, and the capacity to ‘tell a story through data’.
I created a spreadsheet using Outwit Hub Pro that scraped each author’s name, the title of the piece, the brief one- or two-line intro and the number of Twitter and Facebook shares. I wanted to know how many times each prediction had been shared on social media. This could then serve as a possible indicator of whether readers thought the prediction was worth sharing through at least one or two of their social media networks. By combining the number of shares I could then have a very approximate way to measure which predictions readers of the site valued most.
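The aggregation step after scraping is simple enough to show in a few lines of Python. The rows below are made up, and the field names are my assumptions rather than Outwit Hub’s export format; the point is just combining the two share counts and ranking on the total:

```python
# Hypothetical scraped rows standing in for the Nieman Lab predictions data.
predictions = [
    {"author": "Writer A", "title": "Prediction one", "twitter": 120, "facebook": 340},
    {"author": "Writer B", "title": "Prediction two", "twitter": 80, "facebook": 95},
    {"author": "Writer C", "title": "Prediction three", "twitter": 500, "facebook": 900},
]

# Combine the per-platform counts into a single rough indicator of reader value.
for p in predictions:
    p["total_shares"] = p["twitter"] + p["facebook"]

# Rank predictions by combined shares, most shared first.
ranked = sorted(predictions, key=lambda p: p["total_shares"], reverse=True)
for p in ranked:
    print(p["title"], p["total_shares"])
```

The same sort applied to the real scraped table is what produces the graph discussed below.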
I have uploaded the table of the Nieman Lab Journalism Predictions 2015 to Google Drive. The table has some very quick and simple coding of each of the predictions so as to capture some sense of what area of journalism the prediction is discussing.
The graph resulting from this table indicates that there were four predictions that were shared more than twice the number of times compared to the other 61 predictions. The top three stories had almost three times the number of shares.
Here are the four stories with the total number of combined shares:
I guess I could pivot here to talk about the future of news in 2015 being about mobile and personalization. (I would geek out about both immensely.) I suppose I could opine on how the reinvention of the article structure to better accommodate complex stories like Ferguson will be on every smart media manager’s mind, just as it should have been in 2014, 2013, and 2003.
But let’s have a different kind of real talk, shall we?
My prediction for the future of news in 2015 is less of a prediction and more of a call of necessity. Next year, if organizations don’t start taking diversity of race, gender, background, and thought in newsrooms seriously, our industry once again will further alienate entire populations of people that aren’t white. And this time, the damage will be worse than ever.
It was a different kind of prediction compared to the others on offer. Most people who work in the news-based media industry have been tasked with demonstrating a permanent process of professional innovation. Edwards’ piece strips back the tech-based rhetoric and gets at the heart of what media organizations need to be doing so as to properly address all audiences: “The excuse that it’s ‘too hard’ to find good journalists of diverse backgrounds is complete crap.”
The second most shared piece, on the limitations of over-relying on Facebook as a driver of traffic, fits perfectly with the kind of near-future prediction that we have come to expect. Gnomic industry forecasting flips the causal model with which we are familiar — we are driven by ‘history’ and it is the ‘past’ (past traumas, past successes, etc) that define our current character — so that it draws on the future as a kind of tech-mediated collective subconscious. Rather than being haunted by the past, we are haunted by possible futures of technological and organisational change.
Algorithms are increasingly being deployed to make decisions where there is no right answer, only a judgment call. Google says it’s showing us the most relevant results, and Facebook aims to show us what’s most important. But what’s relevant? What’s important? Unlike other forms of automation or algorithms where there’s a definable right answer, we’re seeing the birth of a new era, the era of judging machines: machines that calculate not just how to quickly sort a database, or perform a mathematical calculation, but to decide what is “best,” “relevant,” “appropriate,” or “harmful.”
Media, culture and philosophy personal research blog by Glen Fuller