Complex Adaptive Systems: An Introduction to Computational Models of Social Life

Miller, J. H., & Page, S. E. (2009). Complex Adaptive Systems: An Introduction to Computational Models of Social Life. Princeton University Press.
Here is a review by Paul Ormerod. And here is mine:
The book surveys the advances made in CAS research during the decade before its publication.
As Paul states, the book has four sections. In this post I will cover the first two; the rest will be the content of another post.
  1. themes in the complexity of social worlds
  2. models as maps
  3. general concepts of ABMs
  4. detailed discussion of some actual models

[EDIT: I presented my book review in Computational Analysis of Social Complexity (CSS610), reformatted my presentation in HTML, and made it available here]


Let’s dwell on the title first.
The adaptivity of a social system lies in the thoughtfulness of its components, a.k.a. agents.
Complexity, in its simplest form, is value in the system that does not belong to any of its components but rather emerges through their interaction.
John H. Holland defines complex adaptive systems (CAS from now on) as systems that have a large number of components, often called agents, that interact and adapt or learn.
One of the earliest discussions on complexity was the invisible hand of Adam Smith in the Wealth of Nations (1776). Collections of self-interested agents lead to well-formed structures that are no part of any single agent’s intention. How can we understand or prove this invisible hand? While our ability to theorize about social systems has always been vast, the set of tools available for pursuing these theories has often constrained our theoretical dreams either implicitly or explicitly. The tools and ideas emerging from complex systems research complement existing approaches, and they should allow us to build much better theories about the world when they are carefully integrated with existing techniques.

Complexity in Social Worlds

Complexity emerges from highly interdependent components: remove one component and the system collapses, yet the system is quite robust to less radical changes in its parts. Complicated worlds, on the other hand, are reducible; we can examine their elements separately to gain insight, which is not the case for complex systems.

Model #1: The Standing Ovation Problem. N spectators each receive a signal of the performance quality and decide whether to stand up. A typical mathematical approach:

  1. si(q) = q + εi; spectator i stands up if si(q) > T1 (εi captures heterogeneity)
  2. if the fraction of spectators standing, α, exceeds T2, then everyone stands up.
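The two-step model above can be sketched in a few lines of Python. The specific values of q, T1, T2, and the noise scale are illustrative assumptions, not taken from the book:

```python
import random

# Minimal sketch of the two-step standing-ovation model described above.
# q, T1, T2 and the noise scale are illustrative values, not from the book.

def standing_ovation(n=100, q=0.6, t1=0.5, t2=0.3, noise=0.2, seed=42):
    rng = random.Random(seed)
    # Step 1: each spectator i perceives s_i(q) = q + eps_i and
    # stands if s_i(q) > T1.
    signals = [q + rng.uniform(-noise, noise) for _ in range(n)]
    standing = [s > t1 for s in signals]
    # Step 2: if the fraction already standing exceeds T2, everyone stands.
    alpha = sum(standing) / n
    if alpha > t2:
        standing = [True] * n
    return sum(standing) / n

print(standing_ovation())  # fraction of the audience standing after both steps
```

With these parameters, step 1 leaves roughly three-quarters of the audience standing, which exceeds T2, so step 2 tips everyone to their feet.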

Approach #2: agent-based modeling. As the authors note, economics graduate students given this problem all modeled the standing ovation with traditional mathematical and statistical tools, without considering that people attend the theater with acquaintances. More elaborate models can be constructed with newer computational techniques such as agent-based modeling by including:

  1. much more heterogeneity, such as seating location and friendship ties
  2. dynamics that need more than two steps to reach equilibrium
  3. new questions: how to attract more groups? theater design? where to place shills? etc.
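A minimal agent-based variant might look like the following sketch. The seating rule (each spectator sees the k seats in front) and all parameter values are my own illustrative assumptions, not the book's model:

```python
import random

# Hypothetical agent-based extension of the standing-ovation problem:
# spectators sit in a row, and in each round an agent stands up if a
# majority of the k agents seated in front of it are standing.
# All parameter values are illustrative.

def abm_ovation(n=100, q=0.55, t1=0.6, k=5, rounds=20, seed=1):
    rng = random.Random(seed)
    # Initial reaction, as in the mathematical model.
    standing = [q + rng.uniform(-0.25, 0.25) > t1 for _ in range(n)]
    for _ in range(rounds):
        new = list(standing)
        for i in range(n):
            front = standing[max(0, i - k):i]  # agents seated in front of i
            if front and sum(front) / len(front) > 0.5:
                new[i] = True
        if new == standing:  # equilibrium reached
            break
        standing = new
    return sum(standing) / n

print(abm_ovation())  # fraction standing once the cascade settles
```

Because agents react to their neighbors round after round, the cascade can take many steps to settle, unlike the two-step mathematical version.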

Result: for complex systems, averaging away differences leads to serious issues. Two other examples are given:

  • how genetic variety among bees lets them regulate the temperature of their hive smoothly (a negative feedback loop) [cf. crowded highways]
  • how diversity in bees' thresholds of responsiveness to alarm pheromones shapes the defense pattern of a hive (a positive feedback loop) [cf. discounted products]
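The hive-temperature example can be made concrete with a toy calculation. The threshold and temperature values below are made up for illustration:

```python
# Sketch of the hive-temperature example: each bee starts fanning when the
# temperature exceeds its personal threshold. With identical thresholds the
# response is all-or-nothing; with diverse thresholds it is graded.
# Thresholds and temperatures are illustrative numbers.

def fraction_fanning(temperature, thresholds):
    return sum(t < temperature for t in thresholds) / len(thresholds)

identical = [35.0] * 10
diverse = [33.0 + 0.5 * i for i in range(10)]  # 33.0, 33.5, ..., 37.5

for temp in (34.0, 35.5, 37.0):
    print(temp, fraction_fanning(temp, identical), fraction_fanning(temp, diverse))
```

With identical thresholds the hive's response jumps from nobody fanning to everybody fanning; with diverse thresholds the response grows gradually with temperature, which is exactly what averaging away the differences would hide.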

In discussing a more elaborate example model (the Tiebout model), the authors point to new directions, underlining that the difficulty of answering any particular scientific question is often tied to the tools we have at hand.


Models as Maps

  • Road maps vs. all possible details. Snow's 1855 cholera map revealed the mode of transmission and the source of the outbreak, even though the map-maker's intention was just to represent the real world.
  • Homomorphisms. A formal discussion of modeling; in effect, modeling the act of modeling.

On Emergence

We may see emergence at many levels: in a mosaic (tile => image => tile => image ...) and in the chain nucleons => atoms => compounds => amino acids => proteins => organelles => cells => tissues => organs => organisms => societies. Prior ignorance makes a phenomenon seem mystical: planetary motion looked mysterious prior to Kepler, yet it turned out to be rather simple, just an ellipse. Similarly, organized complexity can be understood with computational modeling.

Computational Modeling

Theoretical tools:

  • detailed verbal descriptions such as Smith’s (1776) invisible hand
  • mathematical analysis like Arrow’s (1951) possibility theorem
  • thought experiments including Hotelling’s (1929) railroad line
  • mathematical models derived from a set of first principles (currently the predominant tool in economics).

Whether the proposition that countries on a map can always be distinguished through the use of only four colors (the so-called Four-Color Map problem) is proved by the exhaustive enumeration of all possibilities through the use of a computer program (which has been done) or through an elegant (or even non-elegant) axiomatic proof (which has not been done) matters little if all you care about is the basic proposition. A tool exists to simplify a task, and employing different tools leads to better theories. For example, a full understanding of supply and demand may require

  • thought experiments using Walrasian auctioneers,
  • axiomatic derivations of optimal bidding behavior,
  • computational models of adaptive agents, and
  • experiments with human subjects.

Computation and Theory 

All newly introduced tools attract questions and concerns, and computational modeling is no exception.

  • Can these tools generate new and useful insights?
  • How robust are they?
  • What biases do they introduce into our theories?
Theory makes the world understandable by finding the right set of simplifications. Modeling proceeds by deciding what simplifications to impose on the underlying entities and then, based on those abstractions, uncovering their implications.

Computation in theory vs. computation as theory:

The use of a computer is neither a necessary nor a sufficient condition for considering a model computational (e.g., Schelling's (1978) coin-based method). Rather, a model is computational when:

  • abstractions maintain a close association with the real-world agents of interest
  • uncovering the implications of these abstractions requires a sequential set of computations involving these abstractions
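Schelling's coin-based segregation model is a good illustration of a computational model that needs no computer: agents (coins) with a mild preference for like neighbors are moved by hand until everyone is content. Below is a simplified one-dimensional sketch; the swap rule and all parameters are my own illustrative choices, not Schelling's exact procedure:

```python
import random

# Toy 1-D version of Schelling's (1978) segregation model, which he
# originally ran by moving coins on a grid by hand: computational in
# spirit without a computer. Parameters are illustrative.

def unhappy(agents, i, want_same):
    # An agent is unhappy if too few of its immediate neighbors share its type.
    neighbors = [agents[k] for k in (i - 1, i + 1) if 0 <= k < len(agents)]
    same = sum(a == agents[i] for a in neighbors)
    return same / len(neighbors) < want_same

def schelling_1d(n=30, want_same=0.5, steps=1000, seed=0):
    rng = random.Random(seed)
    agents = ["A", "B"] * (n // 2)
    rng.shuffle(agents)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        # An unhappy agent swaps seats with a randomly chosen agent.
        if unhappy(agents, i, want_same):
            agents[i], agents[j] = agents[j], agents[i]
    return agents

print("".join(schelling_1d()))  # clusters of A's and B's tend to form
```

Even this crude version satisfies both conditions above: the objects map directly onto the agents of interest, and the implications emerge only through a sequence of computations.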

Neoclassical economics (an example of computation in theory):

  • individuals optimize their behavior
  • given mathematical constraints, most of the underlying agents in the real system are subsumed into a single object (a representative agent)
  • incorporate driving forces (such as the assumption that the system seeks an equilibrium)
  • Note: in these types of models, computation is used only to obtain numerical solutions
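The contrast can be seen in how the computer is used. In a "computation in theory" model, the theory is a pair of equations and the machine merely finds the number. Here is a hypothetical linear supply-and-demand pair (the functions are made up for illustration) solved by bisection:

```python
# Illustration of "computation in theory": the model itself is a pair of
# equations, and the computer only solves them numerically. The demand
# and supply functions are made-up linear examples.

def demand(p):
    return 10.0 - 2.0 * p

def supply(p):
    return 1.0 + 1.0 * p

def equilibrium_price(lo=0.0, hi=10.0, tol=1e-9):
    # Bisection on excess demand: demand(p) - supply(p) is decreasing in p.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if demand(mid) - supply(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(equilibrium_price(), 6))  # 3.0, where 10 - 2p = 1 + p
```

The theoretical content lives entirely in the equations; the bisection loop is just a numerical method.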

Agent-based objects (computation as theory):

  • abstractions are not constrained by the limits of mathematics
  • a collection of agent-based objects whose implications are uncovered by computing their interactions

Modeling vs Simulation:

  • simple entities and interactions vs. complicated ones
  • implications robust to a large class of changes vs. less robust
  • surprising results that motivate new predictions vs. less surprising ones
  • easily communicated to others vs. not so easily

Objections and Responses

  • Q: the answers are built into the model, so we cannot learn anything new!
    • all tools build in their answers; clarity is key here, and hidden or black-box features are bad
    • a model is bounded by its initial framework, but it can still allow for new theoretical insights
  • Q: computational models lack discipline!
    • the lack of constraints is in fact a great advantage; mathematical models become unsolvable when practitioners break away from a limited set of assumptions
    • a discipline similar to the one required for lab experiments is being formed: Is the experiment elegant? Are there confounds? Can it be easily reproduced? Is it robust to differences in experimental techniques? Do the reported results hold up to additional scrutiny?
    • flexibility: mathematical models are solved with an agreed-upon set of solution techniques and verification mechanisms; given the newness of many computational approaches, it will take some time to reach similar standards for verification and validation
  • Computational Models Are Only Approximations to Specific Circumstances
    • giving an exact answer might not be that important; relying on approximations may be perfectly acceptable in some cases
    • generalizability is tied to the way a model is created, not to the medium; bad mathematical models cannot be extended beyond their initial structure either
  • Computational Models Are Brittle
    • crashes are not unique to computational models
    • can be prevented by better designs
  • Computational Models Are Hard to Understand
    • due to the lack of commonly accepted means of communication; emerging standards such as UML and the ODD protocol address this