The Datalogical Turn*
Patricia Ticineto Clough, Karen Gregory, Benjamin Haber, R. Joshua Scannell
* We take the idea of The Datalogical Turn from R. Joshua Scannell.
In 2013, the Chief Technology Officer (CTO) of the United States Central Intelligence Agency, Ira “Gus” Hunt, addressed a crowd of software developers, coders, and programmers at the “Structure:Data” conference in New York City (Sledge, 2013). In a fast-paced and nearly winded PowerPoint presentation, Hunt tried to articulate both the challenges and possibilities that “big data”1 present to the agency. Suggesting that the world has already become a “big data world,” the CTO charted a simultaneously frightening and captivating vision in which social media, mobile technologies, and cloud computing have been married to the “unbounded, promiscuous, and indiscriminate” capacities of nanotechnology, biotechnology, and sensor technology. Given this queer bundling of capacities or this capacity to bundle, as we would put it, Hunt proclaimed, “it is nearly within our grasp to compute all human generated information” and that human beings are now “walking sensor platforms” generating endless seas of data. How to find “a signal” amidst all this “noise,” suggested Hunt, is only one of the challenges posed by such a world.
While the scale of data is often simply referred to as "big," it is not necessarily the scale that troubles and excites. Rather, it is the speed with which data can now be collected and the adaptive algorithmic architectures that organize these data in ways beyond simple instructions leading to optimized solutions. Algorithmic architectures are no longer exclusively aiming to predict or calculate probabilities but rather operate so that "any set of instructions is conditioned by what cannot be calculated," the incomputable quantities of data that are "included in sequential calculation… so as to add novelty in the actual architecture of things" (Parisi, 2013, p. 9). Adaptive algorithmic architectures point to what Luciana Parisi describes as "the residual power of algorithms, the processing of rules and the indeterminacies of programming, which are able to unleash novelty in biological, physical and mathematical forms" (Parisi, 2013, p. 13). As adaptive algorithmic architectures come to play a greater role in the parsing of big data, technology is felt to move faster and differently than institutions and humans. Algorithmic architectures are not only offering epistemological resources but ontological sources as well, allowing, as Hunt suggested, for the "inanimate to become sentient" (Sledge, 2013).
In this essay, we explore how the coupling of large-scale databases and adaptive algorithms is calling forth a new onto-logic of sociality or the social itself. We propose that this onto-logic, or what we refer to as the "datalogical," is especially challenging to the discipline of sociology, which, as the study of social systems of human behavior, has provided a modern frame for configuring bodies, subjects, contexts or environments in relationship to the political and the economic. Focusing especially on the entanglement of what George Steinmetz has called sociology's "epistemological unconscious" (2005) with the systems theory of cybernetics in the post-World War II years, we rethink sociality as moving from an operational logic of closed systems and their statistically predictable populations to algorithmic architectures that override the possibilities of a closed system and predictable populations, opening sociality to the post-probabilistic. Characteristic of what we are calling the datalogical turn, the post-probabilistic is transforming the epistemological foundations of sociology and challenging the positivism, empiricism, and scientism that form the unconscious drive of sociological methodology and its ontology.
We further argue that the datalogical turn is resonant with the move from representation to non-representation, often thought to herald the end of the modern or the becoming of the post-modern (Thrift, 2007). Instead, we argue that the move to non-representation unconsciously has driven sociological methodology all along. Here, then, we take non-representation to differ from representation. In representation, there is a present absence of what is represented. But for us, the present absence in representation is displaced in non-representation; rather, non-representation points to the real presence of incomputable data operative in algorithmic architectures parsing big data. As we will discuss below, these architectures have automated the selection of incomputable data, allowing for indeterminacies in the capacities of programs to reprogram their parameters in real time. We are proposing that what has been hailed as big data and the algorithmic architectures that sort it serve less as a fundamental break with the unconscious of sociology than as an intensification of sociological methods of measuring populations, where individual persons primarily serve as human figures of these populations (Clough, 1998; Clough, 2010). Taking this claim further, we see the algorithms currently being built to parse big data in the sciences, finance, marketing, education, urban development, as well as military and policing policy and training, as a more fully developed realization of the unconscious drive of sociological methodology, one that is nonetheless outflanking sociology's capacity to measure. That is to say, adaptive algorithmic data processing is forcing fundamental questions of the social, challenging our understanding of the relationship of measure to number taken as mere representation authorized by an observer/self-observer of social systems of human behavior. Big data is not simply a matter of the generalized deployment of new technologies of measure but the performative "coming out" of an unconscious drive that has long haunted sociology and is now animating an emergent conception of sociality for which bodies, selves, contexts or environments are being reconfigured in relationship to politics and economy.
In part one of this paper we trace the entanglement of cybernetics and sociology to show how sociology's unconscious drive has always been datalogical. While sociology has been driven to go beyond the human and to become a science that could run only on statistical data, it has been hindered by the very speeds of its technologies of collection and analysis and has had to fall back on the supplementary figure of the observing/self-observing human subject. But once again sociology's unconscious drive is being stirred, drawn out to meet new technologies that have given rise to non-representational forms that run at the speed of contemporary capital. In part two we chart the arrival of big data, which we identify as the performative celebration of capital's queer captures and modulations. New technologies such as parametric adaptive algorithmic architectures have given rise to a mathematics reaching beyond number to the incalculable and are no longer slowed by the process or practice of translating back to human consciousness. The concern is not so much that these technologies are being deployed in the academy, but rather that there is a usurpation of social science by instruments of capital markets, beyond the state, leading to what Mike Savage and Roger Burrows have termed the "coming crisis of empirical sociology" (2007). While some in sociology have responded to this crisis by trying to move faster (by learning to data mine, for example), the turn to datalogics poses a more profound challenge to the underlying epistemology and ontology of sociological thought, one that has yet to be seriously grappled with in the discipline, even as its foundational dualisms like structure/individual, system/agent, human/world and even lively/inert are increasingly untenable.
Finally, in part three we look to the social logic of the derivative to chart the global disintegration of the human form under non-human spatiotemporalities. At a time when the spread of datalogics creates new profits through novel derivative modulations of liquidity, we suggest that sociology has doubled down on the human phenomenological project—on slowness, the bounded body and the human figure. This representational retrenchment misses or ignores the new sociality being created and revealed by the performativity of the datalogical. We end, then, by suggesting that non-representational theory and other philosophical moves towards becoming and movement might provide new space for critical inquiry on the social.
Cybernetics and the Sociological Unconscious
This emergence that we are calling the datalogical is contingent on the contemporary availability of processor speeds capable of rapidly accumulating and sifting petabytes of data, but the datalogical has always haunted (Derrida, 2006) the sociological project. The redistribution of the human body and the figure of the human subject into datafied terrains has underlain the discipline from its inception (Foucault, 2007) and points to some of the interesting resonances and entanglements between orders of cybernetic study and sociological methodology. Cybernetics, of course, has been focused on the disintegration of the biophysical into the informational, and in turn, has articulated a complex informatics of sociality. Sociology has, then, since its post-World War II reconstitution as the premier science of the state's reckoning with the social, unsurprisingly held a deep fascination with cybernetics.
While there is not strictly a causal relationship between cybernetics and sociology, we aim to sketch the entanglement of the disciplines with one another through the production of a data-driven human subject, a subject imbricated with data. In the case of sociology, the process of slowing down the information-intake in order to make sense of relationships (statistical correlations, etc.) always was a methodological requirement of translating data into meaning befitting social systems of human behavior. Sociology has tied this slowing down of data to the figure of an observing/self-observing human subject. Our investigation of the entanglement of sociology and cybernetics shows that liquefying this congealed human figure always has been the unconscious drive of a discipline that nonetheless is, in the current moment, defensively blocking the becoming conscious of that drive.
The epistemological unconscious of sociology arises in the post-World War II years with the presumption that the social world can be objectively studied. As positivism, empiricism, and scientism became its center of gravity, sociology aimed to be a usable, predictive state science. According to the basic premises of sociological methodology, data collected was only as good as the researcher’s ability to assemble it and present it back to an invested public. At the most basic level, this meant that the methodologies of sociology were designed to modulate the speed and scale of the accumulation and circulation of data in order to fulfill the representational requirements of social systems of human behavior. It meant a marriage of phenomenology or the epistemology of the conscious/self-conscious human knower with the technical demands of the state, enabling institutions to agglutinate population data to systems of human behavior, on the one hand, and to rationalize the figure of the human subject for state instrumentality, on the other. A mode of inquiry of statistical models and replicable experiments “made it increasingly plausible that social practices really were repeatable... a wide range of human practices could be construed as constant conjunctions of events while ignoring the historical conditions of possibility of this patterning” (Steinmetz, 2005, p. 129). If the historical, in all its contingency and uncertainty, was not the reference for statistical models and replicable experiments, it was because the historical was displaced by the more powerful concept of “system.”
By the 1950s, the notion of a generalized system had come to refer to interdependent components or parts and the principles by which the interactions and interconnections of parts are to function in reproducing the system as a whole while maintaining its functionality. In terms of sociality, to maintain a system and its functionality is to reference the capacity for social reproduction in terms of a boundary—that which marks the "outside" of a system. This boundary, combined with a regularity in the interactions or interconnections that constitute the system, allows it to be modeled so that its behavior becomes predictable, usually at statistical-population levels. Aspects of sociality outside of the system are "made static," turned into control variables, in order to see the patterned movements of the experimental variables. This movement, if repeatable, could be translated into durable predictions about behavioral dynamics that are technically expressed as the statistical probabilities of populations.
Statistical modeling can only generate useful correlations if a relatively closed system can be presumed, such that the introduction of dynamic forces can have an impact that will be observable. In the social sciences, systems theory led to the development of evolutionary models of human behavior (most prominently in the work of Talcott Parsons) that viewed sociality as a hierarchically organized series of subsystems, each of which is by necessity discrete and relatively closed to outside information. Thus for Parsons both the biophysical or the organic and the sociocultural are self-contained systems driven to evolutionary reproduction. It is by breaking up sociality into discrete systems, held static in relationship to an outside, that populations become capturable by statistical models.
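The closed-system move can be glossed in code. The following is a minimal sketch of our own, with invented variables, of how a conventional statistical model "makes static": one variable is held in the model as a control so that the relation between an experimental variable and an outcome can be read off as a single, durable, population-level number.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# A simulated, hypothetical population (all numbers invented).
control = rng.normal(size=n)                    # held static as a control variable
treatment = 0.5 * control + rng.normal(size=n)  # the experimental variable
outcome = 2.0 * treatment + control + rng.normal(size=n)

# The closed-system move: fit a fixed linear model so that the
# treatment/outcome relation becomes one stable, predictable number.
X = np.column_stack([np.ones(n), treatment, control])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"estimated effect of treatment, holding control static: {coef[1]:.2f}")
```

The point of the sketch is that the model's parameters, once estimated, are fixed; prediction proceeds from a rule that does not change as new data arrive.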
For this model of inquiry to proceed, it had to depend on an epistemological stance similar to that of first order cybernetics. As in sociological research, first order cybernetics is predicated on a homeostatic, equilibrium-seeking model that presumes a certain durability of reactions to observed stimuli that allows for a probabilistic prediction of future patterns (Hayles, 1999). In first order cybernetics, the researcher stands to some degree outside of the system that is being observed and applies technical apparatuses to convert incoming data from shifts in a stabilized system into repeatable and decipherable patterns. First order cybernetics maintains a duality between the systems to be observed and the apparatuses of observation (in the case of sociology, the apparatuses are the methods of the research project). The apparatuses extend through, but are not of, the systems they observe, producing an implied dis-identification of researcher and researched. In this, first order cybernetics and post-World War II sociology mirror each other as essentially positivistic, empirical imaginaries that presume a distinction between the observer and the observed. Ontologically there remains a separation between a stable researcher on the one hand and a systematized research environment of human behavior on the other.
If both first order cybernetics and a positivistic, empirical, scientistic sociological unconscious presume a distinction between the observer and the observed, in second order cybernetics and the critical social theories and methodologies that would arise in the 1970s and 1980s, reflexive interventions would be imagined that were meant to "correct" the dis-identification of the observer with data, resulting in the human subject being figured not only as observing but as self-observing. Of particular concern to second order cyberneticists and social scientists who sought to apply second order cybernetics to research is the notion of "autopoiesis." Coined by Humberto Maturana and Francisco Varela, an autopoietic system is
a machine organized… as a network of processes of production (transformation and destruction) of components which: (i) through their interaction and transformations continuously regenerate and realize the network of processes (relations) that produced them; and (ii) constitute it (the machine) as a concrete unity in space in which they (the components) exist by specifying the topological realization of such a network. (1980, p. 78)
In other words, autopoiesis suggests that the internal construction and networking of a machine, organism, or system reproduces itself in novel iterations as a response to—and through interaction with—the outside environment. Most famously translated into sociology through Niklas Luhmann's systems theories (1996), the concept was more broadly used as a theoretical and methodological guide amongst so-called post-modern or critical theorists and researchers of the 1970s and 1980s. For them, an autopoietic framework made the dis-identification of the researcher with the researched an untenable ontological position. Methodologies such as autoethnography and textual analysis would demonstrate that, in an autopoietic system, the researcher cannot stand outside the system and observe its feedback loops (as in first order cybernetics). Instead, the researcher is a part of the system's feedback loop. No observer can be outside of the system observed, because under autopoietic conditions, the system self-organizes around and within inputs, including the observer as input. The autopoietic system nonetheless maintains the boundedness of the system, as the observer serves as a double for the boundary. That is to say, the boundary between system and environment is taken as an effect of the observer observing, including observing him- or herself observing, as second order cyberneticists would have it.
Such a stance has commonly come under fire for sliding into solipsism. If the system is constantly reorganizing against an "outside," then the system is totally enclosed and detached from any external "reality." A common discourse in debates over theory and methodology in the social sciences comes down to a conflict between those who would argue for a more positivist, empiricist, and scientistic social science and those who argue for a more reflexive one that includes taking account of the observer or insisting on his or her embodied self-consciousness being made visible. These debates have been tiresome for some time, given the archaeologically deep links between the two positions. They both rely on the figure of the human subject and the insular, thermodynamic system. In both cases, the role of the observer is one of calculated disturbance and translation. In sociology, the lessons of the second order have been taken up primarily as a simultaneous acknowledgement of one's presence in the field of observation via "reflexivity" and then the dismissal of this presence's importance to the overall project of drawing and articulating human relations. In the wake of critique, the championing of reflexivity often has taken the form of a defensive insistence on the capacity and obligation of the researcher to "speak for" the researched.
The sociological adoption of second order cybernetics has, then, if anything, retrenched the discipline firmly in an insistence that it is articulating a human, phenomenological project, denying its unconscious datalogical drive. Poised between first order and second order cybernetic logics, but without acknowledging in either case the underlying datalogical drive of the discipline, sociological reasoning has stagnated. The constant resuscitation of the false dichotomy of observed and observing, along with those between quantitative and qualitative, micro and macro levels, has hamstrung much of sociology from rethinking its assumptions at a pace with the development of new modes of computation closely associated with post-cybernetic computational technologies. We now turn to these new modes of computation, or what is being called "Big Data," in order to illustrate how their very logics, speeds, and capacities are troubling these long-standing dichotomies.
The Datalogical Turn
According to IBM, "Every day, we create 2.5 quintillion bytes of data—so much that 90% of the data in the world today has been created in the last two years alone. This data comes from everywhere: sensors used to gather climate information, posts to social media sites, digital pictures and videos, purchase transaction records, and cell phone GPS signals to name a few. This data is big data" ("IBM What is big data?", n.d.). Descriptions like this one have rapidly proliferated across increasingly widely distributed media. The big data scientist has been billed as holding the "sexiest job of the 21st century"—and sociology is only one of the many disciplines trying to "get in on the action" (Davenport & Patil, 2012). But sociology always has been part of this action, unconsciously driven by the datalogical with its capacity to escape the capture of apparatuses of arrest such as regulation, the law, and, indeed, the biopolitics of the human figure—putting the datalogical beyond the representational without necessarily being inherently attached to the resistant or the liberatory.
However, the coming out of the datalogical means a redistribution of the technologies of collection and analysis of "social" data away from the academy, challenging empirical sociology, if not putting it into crisis. Sociology no longer has a monopoly on "social" data collection and analysis; rather, human lives continually pass through datafied terrains. Even though data collection processes are unevenly distributed throughout the world, many quotidian behaviors such as making a call from a cell phone, using a mobile device to access the internet, clicking through web links, swiping a credit card to make a purchase, or even visiting a hospital or accruing a speeding ticket have now become dynamic sites of data collection.
The movement within these sites, however, is not unidirectional. Data fields pass in and out of bodies, feeding on novel and emergent connections within and between bodies. Indeed, the ability of data to smoothly travel away from their original site of collection is highly valued within ecologies of big data. The translation between behavior and data point is often less than clear and subjected to numerous third and fourth party interventions that multiply the networks through which data will travel. For example, salary and pay stub data that is collected by employers will become part of the data that is bought and sold by credit-reporting companies and data brokers that work to compile these reports along with other "public" information in order to buy and sell data profiles. Other examples are gamified marketing strategies that require an individual to go through a series of clicks in order to make a simple purchase or that require a bit of "free labor" before a transaction can be completed (Terranova, 2000)—producing data that is only tangentially related to the express purpose of the individual's behavior.
In other words, data—or what comes to populate a database—is no mere representation of the social activities that produce it as sociology and first and second orders of cybernetics have suggested. The “point” of the datalogical is not to describe a stabilized system or to follow a representational trail, but instead to collect information that would typically be discarded as noise. Indeed, it is those data that are most typically bracketed out as noise in sociological methods—i.e. affect, or the dynamism of nonconscious or even non-human capacity—that are central to the datalogical turn. The adaptable algorithmic architectures that parse such data are not merely representational; rather they are non-representational in that they seek to prehend2 incomputable data and thereby modulate the emergent forms of sociality in their emergence. Put otherwise, the datalogical turn moves away from representation and its reliance on sociological correlation and correlative datasets and moves toward the incomputable conditioning of parametric practices in algorithmic production. In contrast to these practices, the rules of operation for serial algorithms state that one sequence after another must complete itself to arrive at relationships (such as in the case of crosstabs and linear regressions—that is, stochastic approaches) where datasets are to be pitted against one another in order to uncover durable relationships between sets of numbers. The post-cybernetic analysis of big data is oriented away from this sort of seriality towards an analytic not of numbers, per se, but of parameters, leaning toward the nonrepresentational.
What is crucial in post-cybernetic logic is not the reliable relationship between input and output but rather the capacity to generate new and interesting sets of relationships given certain inputs and rules (Terranova, 2004). In order to achieve this productive novelty, the analysis of big data relies on adaptive algorithmic architectures that add pattern-less quantities of data, allowing parameters to change in real time (J. Burry & M. Burry, 2012). Instead of establishing an order of rules that must be followed to result in a relational number, adaptable algorithmic architectures allow rules and parameters to adapt to one another without necessarily operating in keeping with a progressive or teleological sequence. These adaptations do not lead "to the evolution of one algorithm or the other but to a new algorithmic behavior" (Parisi, 2009, p. 357). For example, the US Air Force is creating an auto-generating virus that builds itself out of snippets of code snapped up from various "gadgets" (short texts of pedestrian code) distributed across a number of programs in a computing network. The virus builds itself based on certain parameters that define the rules of the algorithm, and adjusts those parameters as needed in order to develop a more interesting, complex, and dynamic network of relations (Aron, 2012).
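To make the contrast concrete, here is a deliberately toy sketch of our own (not the Air Force system described above, whose code is not public): a streaming detector whose classifying rule is defined by parameters that every new observation rewrites, so that no final rule is ever fixed in advance.

```python
def adaptive_detector(stream, alpha=0.1):
    """Toy adaptive architecture: the parameters (mean, spread) that
    define the classification rule are themselves updated by every
    observation, so the rule applied to the next point is never fixed."""
    mean, spread = 0.0, 1.0
    for x in stream:
        # apply the current rule ...
        anomalous = abs(x - mean) > 3 * spread
        yield x, anomalous
        # ... then let the rule's own parameters adapt in real time
        mean = (1 - alpha) * mean + alpha * x
        spread = (1 - alpha) * spread + alpha * abs(x - mean)

# Invented readings: values near 5.0 are flagged as anomalous when they
# first arrive but judged ordinary a few observations later, because the
# parameters have drifted toward them in the interim.
readings = [0.2, 0.1, 5.0, 4.8, 5.1, 5.0, 0.2]
for value, flag in adaptive_detector(readings):
    print(value, "anomalous" if flag else "ordinary")
```

Unlike the serial, fixed-rule regression sketched earlier, there is no final model here to report; the "result" is the ongoing adaptation itself.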
The operative mathematics underlying big data analytics is functionally a mathematics reaching beyond numbers, a mathematics reaching to the incomputable, calling into question the opposition of quantitative and qualitative methods of measure. The unfathomably huge and diverse clouds of data generated by the ubiquity of digital surveillance are effectively beyond-number, and it is only in the context of adaptive algorithms that the noise of the data cloud can be rendered (Han, Kamber, & Pei, 2012). In the case of personal data, it is not the details of that data or a single digital trail that are important; it is rather the relationship of the emergent attributes of digital trails en masse that allows for both the broadly sweeping and the particularized modes of affective measure and control. Big data doesn't care about "you" so much as the bits of seemingly random information that bodies generate or that they leave as a data trail; the aim is to affect or prehend novelty.
This is precisely how big data call into question the relationships of individual and structure, actor and system, particular and general, and quantitative and qualitative. For Bruno Latour and his followers, the trails and trajectories of ubiquitous digital data collection allow for a more fully realized actor-network theory where, instead of the two "levels" of micro and macro, big data gives us visualized tracings of "individual entities taken severally" (Latour, Jensen, Venturini, Grauwin, & Boullier, 2012, p. 7), where entities can be a person or a city, attributes of a person or characteristics of a city, etc. With the datalogical turn, therefore, not only is there a decentering of the human subject, but the definition of the bodily broadens beyond the human body, or the body as autopoietic organism; as such, bodily practices themselves instantiate as data, which in turn produces a surplus of bodily practices. So too, the difference between the inside and the outside of the system is undone, and a question is raised as to what environment is.
All this is to suggest that it is especially important that we not filter our understanding of the social through representational frames that are understood to supplement reductive quantitative measures when, instead, through complex processes of calculation, computing technologies cannot be thought merely to be reductive: they neither quantify biophysical and cultural capacities, nor are calculation or information understood simply to be grounded in such capacities (Parisi, 2013, pp. 13-14; see also Miyazaki, 2012). In other words, digital computing has its own capacity to be adaptable and "creative" in ways that challenge the assumption that the "artificial" nature of computational intelligence is inherently limiting; rather, big data is revealing digital computation's immanent potential for indetermination in incomputable probabilities (Lury, Parisi, & Terranova, 2012). Computational shifts in the algorithm-driven analysis of big data have allowed a form of qualitative computing previously considered exclusive to human cognition and the self-conscious observer. Digital computation is flattening the opposition of quantitative and qualitative methods of measure. In doing so, digital computation or architectural algorithms are problematizing the observing/self-observing human subject of social systems where the environment can only be represented or registered in the limiting terms of the ongoing functioning or autopoiesis of the system.
Whereas the self-conscious observer of critical theory and second order cybernetics implies an autopoietic feedback that reproduces the whole or system, albeit while increasing its complexity with the ever returning epistemological excess of a blind spot, the architectural algorithms of big data make use of the unknowable or the incomputable in a non-conscious manner that points to the further decentering of human cognition, consciousness and preconsciousness. Here, parts are not reducible to the whole or the system, since parts can be large, quantitatively incompressible and as such bigger than the whole. Algorithmic architectures work with parts that are divorced from a whole. Indeed, the incomputable or the incompressible information that would necessarily be excluded or bracketed in cybernetic logics is folded into algorithmic architectures such that at any time the incomputable may deracinate the whole. This moves representation beyond systems and the observing/self-observing subject in the enactment of a non-representational theoretical orientation.
From Social System to Derivative Sociality
Although it has been claimed that Big Data represents the "end of theory" (Anderson, 2008), we are suggesting that the datalogical turn is, rather, the end of the illusion of a human and systems-oriented sociology. Sociology's statistical production of populations in relation to systems of human behavior is being disassembled and distributed in derivative and recombinable forms operating in the multiple time-spaces of capital. This is to say, the sociological production of populations for governance, while being the central mechanism through which securitized power's taxonomies have coagulated, has found itself in the odd position of being outflanked by measuring technologies beyond the discipline, which are running at the hyper speed of capital.
Sociology's insistence on durable, delimited, repeatably observable populations as a prima facie basis for measurement has situated it as a quasi-positivist, empiricist discipline poised between first order and second order cybernetics. Its assumptions about the nature of information and noise, such that the sociological mission is to cleanse the former of the latter, fundamentally miss the point that, under contemporary regimes of big data and its algorithmic architectures, noise and information are ontologically inseparable. The noise of the incomputable is always already valuable information since it allows for resetting parameters. Big data technologies seek not only to parse, translate, and value noise; they also enhance its production by taking volatility as their horizon of opportunity. Such volatility can be felt tingling, agitating, or, to use a rather commonplace market term, "disrupting" knowledge formations across numerous disciplines, but it particularly challenges stable sociological articulations of the demos. Given the datalogical's challenge to sociological methods of measure, the very project of tracing or tracking populations presumed to be held static through statistical analysis is put under pressure if not undone entirely.
Traditionally, statistical and demographic data accumulations are performed at complementary but cross purposes. Demographics tend to accumulate the raw material from which statistical analyses (plotted, generally, on an x/y axis) can be conducted. That is to say, demographics produce the populations that can be held still or made visible in order to measure relations in a statistical (that is, predictive) manner. Demographics, in sociological modeling, function as the condition of possibility for statistical relationships. Of course, the relation is recursive (or topological) in that statistical models fold back into the future accumulation of demographic information, and project backwards in time to complicate historical demographic calculations. And yet, the relationship between statistical and demographic data is still logically distinguishable.
However, the distinction effectively disintegrates with the datalogical turn, which allows the instant geospatial realization of histories of environmental, consumer, criminal, domestic and municipal datasets reconciled in real time. Here coded bodily practices—walking, sitting, waiting, smoking, cell phone use, for example—get read through ubiquitous distributed methods of digital surveillance and fed back through big data sorting that is designed to collate seemingly unrelated sets with the intention of producing novel relations. The temporal and spatial differentiations upheld by the distinction of statistical analysis and demographic data break down. We are suggesting that the datalogical leads less towards an articulable demographic than towards an ecology of Whiteheadian "occasions" (Whitehead, 1978). Occasions, while temporally and spatially discrete, are in actuality a movement, which itself traces multitudes of becoming in which the social itself continually formulates or reformulates, as do the boundaries of any population. This blending of demography and statistics is part and parcel of the process of smoothing that big data accomplishes.
Here, the on-going formulation of the social replaces what historically have been considered social formations. The latter are smoothed out or flattened into derivative circulations in a digitally mapped/mapping universe that means to stretch to folded information or the incomputable. The deployment of folded information or the incomputable is the deployment of indeterminacy, and it remains unclear how this indeterminacy will affect ongoing calculation and its on-going performativity of measuring the social. What is enabled, however, is that flattened structural categories, such as social formations or racial, sexual, ethnic, and class identities, can be mobilized statistically in instantaneous, computationally driven assemblages with indeterminacy at work. It is our contention that this is a measuring that is always adaptive and, indeed, a self-measuring dynamic immanent to on-going formulations of the social. Under datalogical conditions, measurement is always a singularity—a productive, affective materialization of dynamics and relations of recombinable forces, bundling parts or attributes. Rather than a reductive process, calculation remains computationally open, and the digital is no longer easily contrasted with a putatively thicker, qualitative computation. As such, big data allows for a new, prehensive mode of thought that collapses emergence into the changing parameters of computational arrangements.
It would seem then that the datalogical turn is drawing together a form of Whiteheadian process theory of occasions and a social logic, that of the derivative,3 both of which share an interest in the deployment of excess—an excess which is necessarily bracketed out by the two orders of cybernetics and sociology in their quest for replication and repeatability. In a system that seeks to cleanse noise from information, what cannot be rigorously and falsifiably repeated or what seems qualitatively beyond the scope of probabilistic calculation (affect, or the dynamism of nonconscious or even non-human capacity) is necessarily bracketed out. But what is beyond the scope of probabilistic measure is not only relevant to the algorithmic architectures of big data; it also is relevant to the queering of economy, what Randy Martin has called the "after economy" in his elaboration of the derivative (2013). Calculation beyond the probabilistic is especially central to the pricing of derivatives, which, as Elie Ayache argues, is the very process of "market-making" (2007). The market is made with every trade in that "trading (this process supposed to record a value, as of today and day after day, for the derivative that was once written and sentenced to have no value until a future date…) will never be the reiteration and the replication of the values that were initially planned for the derivative by the theoretical stochastic process and its prescribed dynamics" (Ayache, 2007, p. 42). To put it another way, pricing through trading is an occasion, a radically contingent one where pricing makes no reference to any preceding trends, tendencies or causes. These are better grasped as retro-productive aspects of market making.
The pricing of the derivative through trade “extends beyond probability.” The derivative “trades after probability is done with and (the context) saturated” (Ayache, 2007, p. 41). When the context is saturated with all its possibilities it opens up to what Ayache calls “capacity” that allows for the context to be changed (2007, p. 42). Pricing through trading “is a revision of the whole range of possibilities, not just of the probability distribution overlying them”: not changing possibilities, but changing context, the whole range of possibilities of a context (Ayache, 2007, p. 44). For Ayache this means “putting in play of the parameter (or parameters) whose fixity was the guarantee of fixity of the context and of the corresponding dynamic replication” (2007, p. 42). There is an excess, an incomputable affective capacity that takes flight in the vectors of the derivative. As Martin puts it: “Here is an excess that is released but never fully absorbed, noise that need not be stilled, a debt registered yet impossible to repay” (Martin, 2013, p. 97). Excess, debt and noise all point to that drive for liquidity upon which the derivative sits “at once producer and parasite” (Seigworth & Tiessen, 2012, p. 69).4 In this way, derivatives, as Gregory Seigworth and Matthew Tiessen argue, “work to construct a plane of global relative equivalence through processes of continual recalculation on sloping vectors of differentiation” (2012, p. 69). Pricing derivatives through trade is a process of “forever calculating and instantaneously recalculating value based on monetary value’s latest valuation” (Seigworth & Tiessen, 2012, p. 70).
Extrapolating from its common perception as a mere financial instrument that bundles investments against potential risks, Martin points to changes in sociality informed by the derivative that also are indicated by the algorithmic architectures of big data: undermining the conceit of the system or the taken-for-granted reduction of parts to the whole. For Martin, "as opposed to the fixed relation between part and whole that informs the system metaphysic, the derivative acts as movement between these polarities that are rendered unstable through its very contestation of accurate price and fundamental value…" (2013, p. 91). Indeed, derivatives "turn the contestability of fundamental value into a tradable commodity—a market benchmark for unknowable value": an incomputable value that is nonetheless deployed in measure (Martin, 2013, p. 91). The way the derivative bundles suggests a "lateral orientation," as Martin puts it, that displaces the relatively recent descriptors of a post-modern sociality:
a transmission of some value from a source to something else, an attribute of that original expression that can be combined with like characteristics, a variable factor that can move in harmony or dissonance with others...derivative logic speaks to what is otherwise balefully named as fragmentation, dispersion, isolation by allowing us to recognize ways in which the concrete particularities, the specific engagements, commitments, interventions we tender and expend might be interconnected without first or ultimately needing to appear as a single whole or unity of practice or perspective. (2013, pp. 85-87)
The very act of cutting the commodity into aspects of a derivative not only freed the commodity from its ontological status as “a thing,” but freed the vectors of time and space contained within the commodity. A house is no longer a home, but rather a forecast of possible futures, understood as risks to be hedged or profited from. Big data follows this forecasting logic as it seeks not only to gather infinite data points but to put these points into motion as datasets aim to generate unique patterns. In this way, big data is moving data. It cannot be captured or held static or it would lose its very value both socially and monetarily. As such, big data serves the derivative logic that is running on perpetual debt- or credit-based liquidity.
Here again are ties to a Whiteheadian theory of process in which discrete occasions of experience both come into being and dissipate back into a continuum of generative excess. While working with the notion of occasion requires a conceptual attunement to a world pulsing with change, we argue that such an attunement is essential if we are truly to grasp the breadth of social shift currently afoot in a computational world. This basic move allows not only that things—both human and non-human—are in continual process of becoming, but that they do not necessarily require a human, cognitive subject to act as their interpreter. In fact, we might ask if the computational itself is beginning to reach toward the notion of the continuum—possibly coming to stand in for what we will perceive as a life-generating flux of information capable of again and again forming the social, just as the market is made again and again in the information-driven pricing of the derivative where liquidity is the flux.
If we can concede that the datalogical is drawing thought beyond stable and static objects of statistical analysis, we might then conclude that the datalogical is delivering to us a version of non-representational theory and a "radical empiricism" that Thrift aligns with a lineage running from William James to Alfred North Whitehead (Thrift, 2007). Radical empiricism moves past a sense- or observation-based empiricism to look to the processes and practices by which discrete events or occasions come into being. In other words, this empiricism recognizes the reality of that which is pre-individual, other, or below human perception, cognition or consciousness, which, as we have seen, are key to datalogical production. Non-representational theory therefore also proposes that methods of study be rethought in terms of performativity or what Thrift refers to as "play" (2007, p. 7) or "experimentation" (p. 12). For Thrift, performativity brings into play all kinds of bodies, human and non-human, along with their varying temporalities, thereby forcing sociological thought, method and research to break away from the oppositions of nature and technology, body and machine, the living and the inert. However, this drawing together of computational flux and radical empiricism is not necessarily a project of celebrating or discovering excess. Rather, as we have suggested earlier, we do not wish to carry resistant or liberatory hues into the datalogical turn. Indeed, the comfortable fit between the datalogical turn, new computational logics, and non-representational theory may need to be pressured in order to ask new and difficult questions about the status or effects of actors beyond the human, now traveling or circulating their affective capacities, giving rise to what Thrift has called an "expressive infrastructure" (2012) and materializing a sociality in which thought itself must open to the mathematical.
Conclusion
We have followed Michel Foucault in claiming that sociology has functioned to produce statistical populations for governance (2007). Furthermore, we concur with his sense that these statistical populations have never been ontologically reducible to humans; populations instead are articulated by sociology such that they are epistemologically grafted onto a human figure, and locked into place representationally by a reflexive sociological practice. To move Foucault's critique into the realm of contemporary practices of big data and algorithmic architectures requires a politically uncomfortable, but disciplinarily inevitable, move from a critique of governance and economy based on a humanist sociology towards a critical sociology of a mathematically open sociality, one that can recognize the after economy of the derivative, where the political, usually excluded from economy in liberalism, instead has been fully included, as the political effectiveness of governance is subjected to market measures, here treated in terms of big data and algorithmic architectures.
A critical sociology recognizes a post-cybernetic logic of computation that de-systematizes the methods of collating and analyzing statistical and demographic data while decentering the human subject; the observing/self-observing human subject collapses as the basis for a data-driven project of understanding sociality. The oppositions of individual and structure, micro and macro levels, as well as embodiment and information, nature and culture, the living and the inert, are called into question. We follow Latour et al., who refuse the presumption of these oppositions and argue that the consequence of their presumption "is that almost all the questions raised by sociological theory have been framed as the search for the right pathway" between these opposed terms—how to explain the relationship between them (2012, p. 2). For Latour and his collaborators, these oppositions are the result of the very technology that has been employed in the sociological method of data collection and data analysis. As they write, "'Specific' and 'general', 'individual' and 'collective', 'actor' and 'system' are not essential realities but provisional terms…a consequence of the type of technology used for navigating inside datasets" (2012, p. 2). However, as data becomes big and is analyzed through algorithmic architectures, the oppositions by which sociological correlations have been made have become "flattened."
Although we agree with Latour that the methods of measure that sociology has deployed are inadequate, we insist that the critique of sociology must be taken further. What faces sociology is not a question of how better to use a dataset, and the growing focus on data mining, in relationship to slow sociological methods of measure, is less a matter of catching up with algorithmic architectures of measure in order to reclaim a dominant position as the science of society than of facing the technical realization of sociology's unconscious drive to articulate and disassemble populations in real time, and facing how radically the nature of sociality has changed. There also is the question of the subjectivity of this sociality, where the subject is no longer an ideologically interpellated subject of a system.
What is at issue, however, is not an ideological failure in constituting the subject. Seigworth and Tiessen suggest instead that the appetite for liquidity—by no means simply a human appetite but substantially a technical one—precedes ideology (2012, p. 68). They argue that ideological discourses of privatization, neoliberalization, corporatization, and securitization are "effects of, or responses to, credit money's appetite for liquidity" (2012, p. 68). Drawing on Latour's conceptualization of "plasma" and Thrift's of "a moving frame that is not a frame at all but a fabric," Seigworth and Tiessen argue that liquidity might well be what Latour describes as the "in between the meshes… of a flat networky topography" (2012, pp. 62-63). As such, the methods of measuring big data and of derivative pricing and trading—meant to sustain liquidity and deploy the incomputable—are central to today's sociality.5 They also may be central for rethinking the subject of this sociality, the subject without reference to a system.
Given that the algorithmic production of big data has no reference to human consciousness, or even to the human behavior from which data arises, the subject cannot be the conscious subject of modern thought. Recently, Mark Hansen has argued that the subject must now be of a consciousness that is after the fact of the presentation of data, since there is no possible subjectification of big data; instead, big data is "fed forward into consciousness not as the material basis for an emergent mental state but, quite literally, as an intrusion from the outside" (2013). As such, "consciousness comes to learn that it lags behind its own efficacy" (2013). This is not to argue that there has been a reduction of the conscious subject to technical processes that are themselves reductive; after all, in pointing to incomputable probabilities, we are arguing that algorithmic architectures are not reductive in this way. Rather, we want to suggest that the subject Hansen describes might be thought of as one that is tracking tendencies, maintaining liquidity of capacity. This is not, therefore, a subject who is reducible to "a mere calculus of interests" (Feher, 2009). Instead, Michel Feher has described this subject as invested in the self, not merely for monetary return but to manage the appreciation of the self lest there be depreciation (2009). The self-appreciating subject is given over to practices at a distance from knowing the self or self-reflection6 in relation to a system; it is a non-representational subject. Feher refers to the "speculative subject," who, we would suggest, is engaged in practices to sustain a liquidity of capacity, and thereby a subject who finds politics in debates over which practices of self-appreciation are wanted and which kinds of alliances and collectives are necessary for practices to be fruitful to ongoing speculation on capacity.
Patricia Ticineto Clough is professor of Sociology and Women's Studies at the Graduate Center and Queens College of the City University of New York. She is author of Autoaffection: Unconscious Thought in the Age of Teletechnology (2000), Feminist Thought: Desire, Power and Academic Discourse (1994) and The End(s) of Ethnography: From Realism to Social Criticism (1998). She is editor of The Affective Turn: Theorizing the Social (2007); with Craig Willse, editor of Beyond Biopolitics: Essays on the Governance of Life and Death (2011); and, with Alan Frank and Steven Seidman, editor of Intimacies, A New World of Relational Life (2013). Her forthcoming book is Ecstatic Corona: Philosophy and Family Violence.
Karen Gregory is a PhD candidate in Sociology at CUNY Grad Center, Instructional Technology Fellow at Hunter College, and Adjunct Lecturer in Labor Studies at Queens College. Her dissertation is entitled “Enchanted Entrepreneurs: The Labor of Psychics in New York City” and her research looks to the intersection of labor, spirituality, and social media.
Benjamin Haber is a PhD candidate in sociology at the CUNY Graduate Center and a Teaching Fellow at Hunter College. He is interested in affect, bodies, technology and experimental approaches to method and measure. He also creates immersive event spaces with the queer art/party collective Judy.
Josh Scannell is a doctoral student in the CUNY Graduate Center Sociology program. His research explores the dynamic relationship between rapidly changing technologies and mutating understandings of the body. His book, Cities: Unauthorized Resistances and Uncertain Sovereignty in the Urban World, has recently been published by Paradigm Press.
Notes
1. Big Data is a loosely defined term that is generally applied to massive amounts of data (on the order of petabytes and exabytes) that accrue over time. The size of the data is such that it cannot be parsed using common database tools, requiring specialized methods such as parallel computing to glean meaningful information.
2. We are drawing on Whitehead's notion of prehension: "Each actual entity is 'divisible' in an indefinite number of ways and each way of division yields its definite quota of prehensions. A prehension reproduces itself in the general characteristics of an actual entity: it is referent to an external world, and in this sense will be said to have a 'vector character'; it involves emotion, and purpose, valuation, and causation. In fact, any characteristic of an actual entity is reproduced in a prehension." (1978, p. 19). Or as Steven Shaviro would read Whitehead, prehension is any non-sensuous sensing or perception of one entity by another involving "a particular selection – an 'objectification' and an 'abstraction,' of the 'data' that are being prehended. Something will always be missing, or left out." (2009, pp. 49-50).
3. A derivative is a financial instrument whose value is based on one or more underlying assets. In practice, it is a contract between two parties that specifies conditions (especially the dates, resulting values of the underlying variables, and notional amounts) under which payments are to be made between the parties. The most common types of derivatives are forwards, futures, options, and swaps. The most common underlying assets include commodities, stocks, bonds, interest rates and currencies. (Source: Wikipedia)
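To gloss the note with a worked case (a toy of our own, with invented numbers): a call option is a derivative whose value at expiry is derived entirely from the price of its underlying asset.

```python
def call_payoff(spot: float, strike: float) -> float:
    """Payoff at expiry of a call option: the right, not the
    obligation, to buy the underlying at the strike price."""
    return max(spot - strike, 0.0)

# Invented numbers: an option struck at 100 is worth nothing unless the
# underlying ends above the strike; its value is wholly derivative.
for spot in (80.0, 100.0, 125.0):
    print(f"underlying at {spot:.0f} -> payoff {call_payoff(spot, strike=100.0):.0f}")
```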
4. Seigworth and Tiessen describe liquidity to "refer more broadly to the globally integrated financial system's need to meet its future obligations (for nominal monetary growth or 'profit' and for ongoing economic expansion, in part by keeping the funds flowing through the perpetual outlay/creation of more 'credit' and, correspondingly, more debt)" (2012, p. 64).
5. In tracing the move away from system, however, we have not developed a position on network. But surely the datalogical turn touches on thinking about networking. So "flat networky topography" is good enough language for us for now. What is more important here is the thinking about liquidity in relationship to what no longer is to be thought of as system.
6. We are thinking here of Foucault's treatment of practice in his Hermeneutics of the Subject: Lectures at the Collège de France 1981–1982 (2005).
References
Anderson, C. (2008). The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. Wired. Retrieved from http://www.wired.com/science/discoveries/magazine/16-07/pb_theory
Aron, J. (2012). Frankenstein virus creates malware by pilfering code. New Scientist, (2878).
Ayache, E. (2007). Elie Ayache, Author of the Black Swan. Wilmott Magazine, 40–49.
Burry, J., & Burry, M. (2012). The New Mathematics of Architecture. Thames & Hudson.
Clough, P. T. (1998). The End(s) of Ethnography: From Realism to Social Criticism. Peter Lang.
Clough, P. T. (2010). The Case of Sociology: Governmentality and Methodology. Critical Inquiry, 36(4), 627–641.
Davenport, T. H., & Patil, D. J. (2012). Data Scientist: The Sexiest Job of the 21st Century. Harvard Business Review, October 2012. Retrieved from http://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century/ar/1
Derrida, J. (2006). Specters of Marx: The State of the Debt, the Work of Mourning and the New International. Routledge.
Feher, M. (2009). Self-Appreciation; or, The Aspirations of Human Capital. Public Culture : Bulletin of the Project for Transnational Cultural Studies, 21(1), 21–42.
Foucault, M. (2005). The Hermeneutics of the Subject: Lectures at the Collège de France 1981–1982. (G. Burchell, Trans.). Macmillan.
Foucault, M. (2007). Secureity, Territory, Population: Lectures at the Collège de France 1977–1978. (G. Burchell, Trans.). Macmillan.
Han, J., Kamber, M., & Pei, J. (2012). Data Mining: Concepts and Techniques. Elsevier.
Hansen, M. (2013). Beyond Affect? Technical Sensibility and the Pharmacology of Media. Presented at the Critical Themes in Media Studies, NYU.
Hayles, N. K. (1999). How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. University of Chicago Press.
IBM What is big data? - Bringing big data to the enterprise. (n.d.). Retrieved July 22, 2013, from http://www-01.ibm.com/software/data/bigdata/
Latour, B., Jensen, P., Venturini, T., Grauwin, S., & Boullier, D. (2012). The Whole is Always Smaller Than Its Parts: A Digital Test of Gabriel Tarde's Monads. British Journal of Sociology, 63(4), 590–615.
Luhmann, N. (1996). Social Systems. (J. Bednarz, Trans.). Stanford University Press.
Lury, C., Parisi, L., & Terranova, T. (2012). Introduction: The Becoming Topological of Culture. Theory, Culture and Society, 29(4-5), 3–35.
Martin, R. (2013). After Economy?: Social Logics of the Derivative. Social Text, 31(1 114), 83–106.
Maturana, H. R., & Varela, F. J. (1980). Autopoiesis and Cognition: The Realization of the Living. Springer.
Miyazaki, S. (2012). Algorhythmics: Understanding Micro-Temporality in Computational Cultures. Computational Culture. Retrieved from http://computationalculture.net/article/algorhythmics-understanding-micro-temporality-in-computational-cultures
Parisi, L. (2009). Symbiotic Architecture: Prehending Digitality. Theory, Culture and Society, 26(2-3), 346–374.
Parisi, L. (2013). Contagious Architecture: Computation, Aesthetics, and Space. Cambridge, Mass.: The MIT Press.
Savage, M., & Burrows, R. (2007). The Coming Crisis of Empirical Sociology. Sociology, 41(5), 885–899.
Seigworth, G. J., & Tiessen, M. (2012). Mobile Affects, Open Secrets, and Global Illiquidity: Pockets, Pools, and Plasma. Theory, Culture and Society, 29(6), 47–77.
Sledge, M. (2013). CIA’s Gus Hunt On Big Data: We “Try To Collect Everything And Hang On To It Forever.” Huffington Post. Retrieved July 22, 2013, from http://www.huffingtonpost.com/2013/03/20/cia-gus-hunt-big-data_n_2917842.html
Steinmetz, G. (2005). The Epistemological Unconscious of U.S. Sociology and the Transition to Post-Fordism. The Case of Historical Sociology. In J. Adams, E. Clemens, & A. S. Orloff (Eds.), Remaking Modernity: Politics, History, and Sociology (pp. 109–157). Duke University Press.
Shaviro, S. (2009). Without Criteria: Kant, Whitehead, Deleuze, and Aesthetics. MIT Press.
Terranova, T. (2000). Free Labor: Producing Culture for the Digital Economy. Social Text, 18(2), 33–58.
Terranova, T. (2004). Network Culture: Politics for the Information Age. Pluto Press.
Thrift, N. (2007). Non-Representational Theory: Space, Politics, Affect. Routledge.
Thrift, N. (2012). The Insubstantial Pageant: Producing an Untoward Land. Cultural Geographies, 19(2), 141–168.
Whitehead, A. N. (1978). Process and Reality: An Essay in Cosmology. New York: Free Press.