On the website formerly known as Twitter, I opened up the first post on this site to my followers. Tanner Greer gave me a challenging prompt:

The extent to which organisms, artificial things like computers, or social organizations can be said to have [intentionality], and whether a common schema applies to all of those categories or if agency and intentionality for a corporation, say, or a military, is fundamentally different than from some other complex system. [M]ore broadly I am interested in your take on concepts like “intentionality,” “agency,” “representation” and so forth and the extent to which they are 1) real 2) applicable to what kind of complex systems.

This is a very difficult question to answer. One typical way of doing it would be to think about some kind of abstract set of capabilities characteristic of adaptive entities fit for a generic task environment. You would then draw some crude commonalities between various ‘systems’ (humans, animals, machines, organizations, etc.) that seem to share these capabilities. This would, however, fail to capture something very important.

Accidents and negativity

I am fond1 of Paul Virilio’s memorable theory of the accident:

When you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash; and when you invent electricity, you invent electrocution…Every technology carries its own negativity, which is invented at the same time as technical progress.

This, at first glance, may seem to be a banal truth or even a superficial ‘deepity.’ But another way of expressing it can be found in the idea of Clausewitzian friction. “Fog and friction” are to be understood simply as the resistance we encounter in attempting to impose our will on the world — and, in the original context of the theory — an opposed will obstructing us. Men and machines break down, intelligence proves inaccurate, and elaborate plans collapse when put into action.

Of course, these are just particular examples of a more generalized resistance to our ability to act on the world around us that all of us in some way encounter in our lives. The military realm is just the most extreme arena in which resistance to human will occurs. Studying complex disasters reveals that we often only learn about the essential features of a system when it breaks.2 This is why Virilio’s remark about shipwrecks is more than just a pithy quote.3

It is often, sadly, true that the very reason catastrophic failure occurs cannot be separated from what is ordinarily the source of success. One unpleasant conclusion that sometimes follows is that the ingredients of the failure are embedded directly in the design of the system itself.4 If a system does something novel and does it well, it usually creates the potential for a unique kind of accident that otherwise would not exist. Moreover, it often removes a prior robustness to failure provided by the system that preceded it.

Downsides can be mitigated, and sometimes mitigated surprisingly well. However, the more that a systemic risk arises from some integral feature of the system itself, the more that mitigation taxes or otherwise interferes with core functionality. Bolting ad-hoc fixes onto the system can make it harder to use, understand, and control. Anticipatory hazard prevention may also get in the way of the system’s availability, responsiveness, flexibility, and overall effectiveness.

Returning to friction, the way in which a system “carries its own negativity” is a form of essential resistance to the imposition of will. That resistance may issue from how integral accidents (or costly responses to them) constrain the will-enhancing properties of the system from their theoretical peak effectiveness. But frequently even the fear of an accident is sufficient to act as a form of resistance.5 Friction can be managed but never fully overcome.

To answer Greer’s question, a common schema for agency ought to compare different types of negativity. The language of systems, control, and agency is bloodless. What is most interesting is how something bleeds. Put naively, to say something interesting about a mode of agency is to describe how the very things that enable agency can simultaneously inhibit it. If humans are unique, we should be unique in how we bleed.

Negativity of agency

The “negativity” of human agency can be found in one of its most important enabling components: imagination and reflective self-awareness. Our ability to dream is how we live in more than just the immediate moment. But dreams also expose the chasm between our desires and our current state. There is danger in our dreams. They are beautiful things, but also the instruments of our individual and collective destruction.

I will, over a series of posts, explore this conundrum in further detail. In my own case, the negativity of good writing is indulging a tendency to layer on detail and verbosity until I lose track of the point I’m trying to make.

Footnotes

  1. On the landing page, I say I’m interested in why things succeed and fail. The second post after this — though not part of this intended analytic series — looks at something vaguely relevant to the overarching theme.

  2. Exploit programming teaches us just as much about computer science as the more conventional study of the theory and practice of computation. In medicine, the absence or disturbance of particular biological structures has often enhanced our understanding of the human body as a whole.

  3. For example, our current quality of life is a function of worldwide networks of communication, travel, and commerce. However, these networks are also generally the vectors by which devastating pandemics like COVID-19 spread. Even the very origin of the coronavirus itself may lie in cross-contamination enabled by a Chinese animal market. These kinds of complex disasters are not just integral to (post-)modernity; they also raise difficult questions about who ought to have authority in combating them. In short, they deprive us of a desired sense of agency and control. However, such inversion of agency may be an inescapable consequence of the bargain we have already made.

  4. The mathematician Kurt Gödel allegedly discovered a fatal flaw in the US constitution but was told by Albert Einstein to avoid mentioning it to expedite his immigration to America. However, historians and political scientists often diagnose such flaws when surveying past and present political events. Juan Linz’s famous paper on the “perils of presidentialism” argues that presidential systems generate unique kinds of democracy-threatening crises. Danny Orbach makes a similar case about the post-Meiji constitutional order and Japan’s path to World War II.

  5. A system may fall well short of its potential because of low risk tolerance for the consequences of a design tradeoff that is only discovered post hoc. Improvements to the system will similarly be bounded by that risk tolerance.