- themes in complexity of social worlds
- models as maps
- general concepts of ABMs
- detailed discussion of some actual models
[EDIT: I presented my book review in Computational Analysis of Social Complexity (CSS610), reformatted my presentation in HTML, and made it available here]
Complexity in Social Worlds
Complexity emerges from highly interdependent components: removing a component can lead to the collapse of the whole system, even though the system is quite robust to less radical changes in its parts. Complicated worlds, on the other hand, are reducible, i.e. we can examine their elements separately to gain insight, which is not the case for complex systems.

Model #1: The Standing Ovation Problem. N spectators each receive a signal of the performance quality, s_i(q), and decide whether to stand up. A typical mathematical approach:
- s_i(q) = q + ε_i; if s_i(q) > T_1, then stand up (ε_i captures heterogeneity)
- if the fraction standing, α, exceeds T_2, then everyone stands up.
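This two-step rule is easy to sketch in code; the parameter names and values below are illustrative assumptions, not taken from the book:

```python
import random

def standing_ovation(n=100, q=0.6, t1=0.5, noise=0.2, t2=0.3, seed=0):
    """Two-step standing-ovation rule: each spectator i perceives
    s_i(q) = q + eps_i and stands if it exceeds t1; if the fraction
    standing (alpha) then exceeds t2, everyone stands."""
    rng = random.Random(seed)
    signals = [q + rng.uniform(-noise, noise) for _ in range(n)]
    alpha = sum(s > t1 for s in signals) / n
    return 1.0 if alpha > t2 else alpha

# A strong performance triggers a full ovation; a weak one leaves all seated.
print(standing_ovation(q=0.8), standing_ovation(q=0.2))  # → 1.0 0.0
```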
Approach #2. Indeed, all of the economics graduate students modeled the standing ovation without considering attending the theater with acquaintances, and they relied on traditional tools such as mathematics and statistics. More elaborate models can be constructed with newer computational techniques such as agent-based modeling by:
- including much more heterogeneity, such as location and friendships;
- allowing more than two steps to reach the equilibrium;
- asking new questions: how to attract more groups? theater design? where to place shills? etc.
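An agent-based version along these lines might place spectators on a grid and let each one also react to the row directly in front, so cascades can take many steps. Everything below (grid layout, thresholds, the imitation rule) is an illustrative assumption, not the book's model:

```python
import random

def ovation_abm(rows=10, cols=10, q=0.5, noise=0.3, conform=0.5, seed=1):
    """Spectators first stand on their own signal, then imitate: anyone
    who sees more than `conform` of the row directly in front standing
    also stands. Iterates until no one changes state."""
    rng = random.Random(seed)
    base = 0.55  # assumed common quality threshold
    standing = [[q + rng.uniform(-noise, noise) > base for _ in range(cols)]
                for _ in range(rows)]
    steps, changed = 0, True
    while changed:  # only False -> True flips occur, so this terminates
        changed, steps = False, steps + 1
        for r in range(1, rows):  # row 0 is the front row
            frac_in_front = sum(standing[r - 1]) / cols
            for c in range(cols):
                if not standing[r][c] and frac_in_front > conform:
                    standing[r][c] = True
                    changed = True
    share = sum(map(sum, standing)) / (rows * cols)
    return share, steps

share, steps = ovation_abm()
```

Even this toy version shows why location matters: a seated back row can be tipped by an enthusiastic front row, something the averaged mathematical model cannot express.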
Result: for complex systems, averaging away differences leads to serious issues. Two other examples are given:
- how genetic variety among bees lets them balance the temperature of their hive smoothly (a negative feedback loop) [cf. crowded highways]
- how diversity in thresholds of responsiveness to pheromones affects the defensive pattern of a hive (a positive feedback loop) [cf. discounted products]
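The bee example can be made concrete with a toy threshold model: identical thresholds give an all-or-nothing response, while diverse thresholds yield the smooth, graded response described above. The numbers are invented purely for illustration:

```python
def fanning_fraction(temp, thresholds):
    """Fraction of bees fanning at a given hive temperature: each bee
    fans when temp exceeds its personal threshold (a toy abstraction
    of the genetic-diversity example)."""
    return sum(temp > t for t in thresholds) / len(thresholds)

uniform = [35.0] * 100                            # genetically identical colony
diverse = [33.0 + 0.04 * i for i in range(100)]   # thresholds spread 33..36.96

for temp in (34.5, 35.5, 37.0):
    # identical thresholds: jumps from 0 to 1 at 35 degrees (overshoot);
    # diverse thresholds: response grows in proportion to the deviation.
    print(temp, fanning_fraction(temp, uniform), fanning_fraction(temp, diverse))
```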
In discussing a more complex example model (the Tiebout model), the authors point to new directions, underlining that the difficulty of answering any particular scientific question is often tied to the tools we have at hand.
Models as Maps
- Road maps vs. all possible details. Snow's 1855 map of cholera revealed the mode of transmission and the source. The intention of the map-makers was just to represent the real world.
- Homomorphisms. A formal discussion of modeling, including modeling the act of modeling itself.
On Emergence
We may see emergence at many levels: emergence from a mosaic (tile => image => tile => image …), and in the chain nucleons, atoms, compounds, amino acids, proteins, organelles, cells, tissues, organs, organisms, societies… Prior ignorance makes a phenomenon seem mystical, such as planetary motion (prior to Kepler), which turned out to be rather simple: just an ellipse. Similarly, organized complexity can be understood with computational modeling. The tools used so far include:
- detailed verbal descriptions such as Smith’s (1776) invisible hand
- mathematical analysis like Arrow’s (1951) possibility theorem
- thought experiments including Hotelling’s (1929) railroad line
- mathematical models derived from a set of first principles (currently the predominant tool in economics).
Whether the proposition that countries on a map can always be distinguished through the use of only four colors (the so-called Four-Color Map problem) is proved by the exhaustive enumeration of all possibilities through a computer program (which has been done) or through an elegant (or even non-elegant) axiomatic proof (which has not been done) matters little if all you care about is the basic proposition. A tool is for simplifying a task; employing different tools leads to better theories. For example, a full understanding of supply and demand may require
- thought experiments using Walrasian auctioneers,
- axiomatic derivations of optimal bidding behavior,
- computational models of adaptive agents, and
- experiments with human subjects.
Computation and Theory
All newly introduced tools attract questions and concerns, and computational modeling is no exception.
- Can these tools generate new and useful insights?
- How robust are they?
- What biases do they introduce into our theories?
The use of a computer is neither a necessary nor a sufficient condition for a model to be considered computational (e.g. Schelling's (1978) coin-based method). A model is computational when:
- abstractions maintain a close association with the real-world agents of interest
- uncovering the implications of these abstractions requires a sequential set of computations involving these abstractions
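Schelling's coin-and-checkerboard procedure fits both criteria and is easy to reproduce on a computer. The following is a minimal sketch with illustrative parameters, not a faithful reconstruction of his 1978 setup:

```python
import random

def schelling(size=20, frac_empty=0.2, tolerance=0.3, steps=10_000, seed=2):
    """Minimal Schelling segregation sketch: two agent types on a torus;
    an unhappy agent (like-neighbor share below `tolerance`) moves to a
    random empty cell. Returns the average like-neighbor share."""
    rng = random.Random(seed)
    cells = [0] * int(size * size * frac_empty)      # 0 marks an empty cell
    rest = size * size - len(cells)
    cells += [1] * (rest // 2) + [2] * (rest - rest // 2)
    rng.shuffle(cells)
    grid = [cells[i * size:(i + 1) * size] for i in range(size)]

    def like_share(r, c):
        me, same, other = grid[r][c], 0, 0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nr, nc = (r + dr) % size, (c + dc) % size
                if grid[nr][nc] == me:
                    same += 1
                elif grid[nr][nc] != 0:
                    other += 1
        return same / (same + other) if same + other else 1.0

    for _ in range(steps):
        r, c = rng.randrange(size), rng.randrange(size)
        if grid[r][c] and like_share(r, c) < tolerance:
            empties = [(i, j) for i in range(size) for j in range(size)
                       if grid[i][j] == 0]
            er, ec = rng.choice(empties)
            grid[er][ec], grid[r][c] = grid[r][c], 0

    occupied = [(r, c) for r in range(size) for c in range(size) if grid[r][c]]
    return sum(like_share(r, c) for r, c in occupied) / len(occupied)
```

The point of the example is that the theory lives in the abstraction and the sequential updating, not in the hardware: Schelling ran the same computation by hand.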
Neoclassical economics (an example of computation in theory):
- individuals optimize their behavior
- given mathematical constraints, most of the underlying agents in the real system are subsumed into a single object (a representative agent)
- incorporate driving forces (such as system seeks an equilibrium)
- Note: in these types of models, computation is used only for numerical solution methods
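To illustrate that role: in a classical model the theory is the equation system, and the computer merely finds its root. A toy market with hypothetical demand 10/p and supply 2p (invented functional forms) is cleared by plain bisection:

```python
def excess_demand(p):
    """Hypothetical representative-agent market: demand 10/p, supply 2p.
    Equilibrium is where excess demand equals zero."""
    return 10.0 / p - 2.0 * p

def bisect(f, lo, hi, tol=1e-9):
    """Standard bisection: find p with f(p) = 0 inside [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:   # sign change: root lies in [lo, mid]
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

p_star = bisect(excess_demand, 0.1, 100.0)  # analytically, p* = sqrt(5)
```

Here computation plays a purely instrumental role; in the agent-based case below, the computation of interactions is itself the theory.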
Agent-based objects (computation as theory):
- abstractions are not constrained by the limits of mathematics
- the implications of a collection of agent-based objects are derived from their interactions through computation
Modeling vs Simulation:
- simple entities and interactions vs complicated
- implications robust to large class of changes vs less robust
- surprising results that motivate new predictions vs less surprising
- easily communicated to others vs may not be that easy
Objections and Responses
- Q: the answers are built into the model, so we cannot learn anything new!
- All tools build in answers; clarity is key here. Hidden or black-box features are bad.
- A model is bounded by its initial framework, but it can still allow for new theoretical insights.
- Q: computations lack discipline!
- The lack of constraints is in fact a great advantage: mathematical models become unsolvable when practitioners break away from a limited set of assumptions.
- A discipline similar to the one required for lab experiments is being formed: Is the experiment elegant? Are there confounds? Can it be easily reproduced? Is it robust to differences in experimental techniques? Do the reported results hold up to additional scrutiny?
- Flexibility: mathematical models are solved by an established set of solution techniques and verification mechanisms. Given the newness of many computational approaches, it will take some time to develop agreed-upon standards for verification and validation.
- Q: computational models are only approximations to specific circumstances!
- Giving an exact answer might not be that important; relying on approximations may be perfectly acceptable in some cases.
- Generalizability is tied to the way a model is created, not to the medium; bad mathematical models cannot be extended beyond their initial structure either.
- Q: computational models are brittle!
- Crashes are not unique to computational models.
- They can be prevented by better designs.
- Q: computational models are hard to understand!
- This is due to the lack of commonly accepted means of communication; proposed remedies include UML and ODD.