About
About this blog
You cannot not simulate. You are always simulating in some fashion, regardless of whether or not you make your assumptions explicit. 1 Reconstructing the past, projecting into the future, taking the perspective of others, conjecturing what might be and what could have been — all involve some fragile model of reality and possibly illusory simulation of how things play out.
This blog 2 focuses on simulation — conceptual, computational, or otherwise — and the unstable reality it attempts to tame. The overlapping topics you’ll see covered here can crudely be classified as follows:
- Simulation: computer and other forms of modeling and simulation, the conceptual models and mechanisms that individuals and organizations use to make sense of their environments, and the consequences of maps losing the eternal arms race with their territories.
- Dissimulation: stratagems in various forms of competitive interaction, how we wittingly and unwittingly mislead ourselves and others, how sense becomes nonsense, and the persistence of seemingly primitive myths and fantasies in a high-tech world.
- Simulacra: imitations of reality and copies without originals, people and places that hide the real and show the fake, assorted artifices we use to understand and extend ourselves, and the success and failure of compelling replicas and illusions.
About me
It wasn’t until I watched a virus bring the world to its knees in 2020 that I realized my fascination with simulation was about coping with the future instead of predicting it. Coming to terms with uncertainty and lack of control is just as much of a personal project for me as an intellectual one.
It’s fashionable to valorize innovation, revolution, and disruption. However, I’m uncomfortable with disorienting change. I want stability and continuity. Reality frequently reminds me that it doesn’t intend to cooperate. I’ve always struggled with the contradiction between my desire for order and my acceptance of its frailty. My investment in learning about simulation is an attempt to manage this overriding contradiction.
My professional interests — security, technology, and alternative analysis — are part of my ongoing quest to better understand how we try — and fail — to predict and control disruptive forces and the world around us. Below you’ll find some selected projects in which I’ve pursued that elusive understanding.
Security
For better or worse, security and defense issues produce creative adaptations under pressure. My work in security began in college. I learned about the organizational challenge of collective foresight by spending my time outside of class attending and facilitating meetings of the Los Angeles Terrorism Early Warning Group. I helped edit a book manuscript summarizing the TEW’s collaborative network practices. An article I co-wrote while assisting the TEW was later used in a police tactical science course taught by SWAT pioneer Charles “Sid” Heal.
That early exposure led to a recurring topic I’ve written about: how organizations without conventional resources and infrastructure plan and adapt. I have analyzed how terrorists, drug cartels, and other underground organizations improvise their own armored vehicles, form alliances with each other, preserve their secrecy, deceive their targets, adopt emerging technologies, compromise security systems, overwhelm defense responses, and eventually seize decisive advantage.
I have written about these and other topics for general audiences in Foreign Policy, The Atlantic, and Slate, and for specialists in Infantry Magazine, the West Point Combating Terrorism Center Sentinel, and Armed Forces Journal. I have presented and/or participated in workshops and conferences on these and other issues at the US Naval War College, the Marine Corps Command and Staff College, the Canadian Forces College, King’s College London, and similar institutions. Lastly, I co-designed a curriculum of study for the Foreign Policy Research Institute on mechanisms of competitive strategy in violent conflict and contentious politics.
Technology
Technology — like security — is a diffuse problem with porous boundaries and wide-ranging implications. I similarly got started learning about it through projects — such as supporting a Harvard Kennedy School conference on organizational information-sharing and providing research assistance to a Georgetown University professor researching unmanned vehicle technology — that highlighted the complicated relationships between information systems and the people who use them.
My early introduction to the organizational implications of emerging information technology led to work as a Technology Research Analyst at CrucialPoint LLC. I wrote about government and private sector enterprise technology and security issues and attended conferences and trade events in the Washington, D.C., area. During this time, I also served as a cybersecurity fellow at the New America Foundation, learning about the intersection of artificial intelligence, information security, and public policy. I participated in workshops, conferences, and other collaborative events combining government, private sector, and academic perspectives.
Some of my work on technology issues is proprietary contract research for government and other clients. An example of this was a report for the Department of Defense on the prospective influence of artificial intelligence advances on the political-military balance. My independent research on the report emphasized the importance of organizational factors, culture, and psychology in state and non-state adoption and deployment of AI. However, I’ve also frequently written about technology, policy, and society as a part of initiatives like the New America-affiliated Future Tense project, a partnership of Slate, the New America Foundation, and Arizona State University examining the societal impact of disruptive technologies.
More generally, in articles for Slate, Vice Magazine, War on the Rocks, The New Atlantis, and other publications, I’ve written about topics such as the use and misuse of automated decision-making systems, the dysfunctions of social media platforms, if and when machines should override their users, and why human stupidity will defeat artificial intelligence.
Alternative analysis
I’ve reluctantly settled on “alternative analysis,” for lack of a better term, to describe the bundle of interrelated concepts, methodologies, and practices I’ve learned about since college. Instead of conclusively defining it, I’ll give several examples of possible definitions.
The most conventional definition of alternative analysis is finding ways to stress-test known and hidden organizational assumptions. My first exposure to the ideas associated with that definition came through my early introduction to “red-teaming.” As an associate editor at Red Team Journal, I learned about the interrelated, cross-disciplinary methods used to test the robustness of organizational security (and similar) arrangements. Some of the ideas I picked up there led to projects like an occasional paper I co-wrote on adaptive red-teaming practices across the spectrum of security threats. However, another possible definition of alternative analysis is taking a problem and looking at it from every relevant perspective, no matter how niche or eccentric.
For example, the Center for a New American Security (CNAS) was writing a report on the policy implications of ubiquitous robot swarms. “Swarm” is a slippery term used in similar but not interchangeable ways by entomologists, animal behaviorists, computer scientists, and military theorists. I thought poring over obscure books, papers, and monographs from the scientific and technical literature in all of those subjects would be a stimulating challenge. So I did, and I assisted CNAS by producing several summaries and guides on the complications and pitfalls of operationally defining “swarm” in policy analysis. One last definition of alternative analysis could be technical methods that illuminate the possible states resulting from interaction in a complex environment.
My interest in simulation methods eventually took me to graduate coursework in agent-based modeling, cognitive modeling, and other related forms of behavioral and social computer modeling and simulation. Agent modeling, for example, uses multiple simulated actors to investigate emergent outcomes arising from collective individual behaviors. I applied these methods as part of an interdisciplinary project team at Caerus Analytics working on models of complex socioeconomic interactions in large overseas urban environments. I also worked in a similar interdisciplinary group to design and build a multi-agent personal health recommendation system integrating crowdsourced data, authoritative knowledge sources, and curated domain ontologies. Both projects are examples of the utility and flexibility of agent technology for disparate modeling and simulation problems.
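To make the agent-based idea concrete, here is a minimal illustrative sketch — not code from any project mentioned above, and all names and parameters are my own invention. Each agent follows a simple local rule (infected agents randomly contact others and may transmit), yet the population-level epidemic curve emerges from those individual behaviors, echoing Epstein’s point about imagining an epidemic spreading:

```python
import random

def run_epidemic(n_agents=200, p_transmit=0.05, recovery_time=10,
                 contacts_per_step=4, steps=100, seed=42):
    """Toy agent-based epidemic model.

    Each agent is susceptible ('S'), infected ('I'), or recovered ('R').
    Every step, each infected agent contacts a few random agents and may
    transmit; after recovery_time steps it recovers. Returns the number
    of infected agents at each step.
    """
    rng = random.Random(seed)
    state = ['S'] * n_agents
    infected_since = {0: 0}   # agent index -> step it became infected
    state[0] = 'I'            # seed a single initial infection
    history = []
    for t in range(steps):
        newly_infected = []
        for i, s in enumerate(state):
            if s != 'I':
                continue
            for _ in range(contacts_per_step):
                j = rng.randrange(n_agents)
                if state[j] == 'S' and rng.random() < p_transmit:
                    newly_infected.append(j)
        for j in newly_infected:
            if state[j] == 'S':
                state[j] = 'I'
                infected_since[j] = t
        # recover agents that have been infected long enough
        for i in list(infected_since):
            if t - infected_since[i] >= recovery_time:
                state[i] = 'R'
                del infected_since[i]
        history.append(state.count('I'))
    return history

history = run_epidemic()
print("peak infections:", max(history))
```

The emergent outcome — whether the infection fizzles out or sweeps the population — is not written anywhere in the individual rules; it arises from their interaction, which is precisely what makes agent models useful for exploring the space of possible collective behaviors.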
Footnotes
- In Pragmatics of Human Communication, Paul Watzlawick stated that “if it is accepted that all behavior in an interactional situation has message value, i.e., is communication, it follows that no matter how one may try, one cannot not communicate. Activity or inactivity, words or silence all have message value.” So “you cannot not simulate” is a mashup of that and Joshua Epstein’s articulation of the inevitability of simulation: “[a]nyone who ventures a projection, or imagines how a social dynamic—an epidemic, war, or migration—would unfold is running some model. But typically, it is an implicit model in which the assumptions are hidden, their internal consistency is untested, their logical consequences are unknown, and their relation to data is unknown. But, when you close your eyes and imagine an epidemic spreading, or any other social dynamic, you are running some model or other. It is just an implicit model that you haven’t written down.” ↩