Conflict simulation, peacebuilding, and development

Daily Archives: 19/07/2010

Seers versus Sages: Some Thoughts on the June Roundtable on Innovation in Strategic Gaming

In June, Tim Wilkie and Margaret McCown out at the Center for Applied Strategic Learning (with the very cool acronym CASL) hosted another in their quarterly series of roundtables on innovations in gaming. Rex and I both, somehow, managed to roll the rare 20 on our availability charts and actually got to attend.

It was a really great afternoon with two presentations: the first by John Hanley on lessons learned from strategic gaming, and the second by Kirsten Messer on new technologies being integrated into the practice.  The “discussion” that followed was very interesting, as those of us gathered tried to reconcile the two presentations on “how should we be using games to learn, analyze, or teach?” and “what are the new tools, and how can they be integrated?”  I put “discussion” in quotes because it was really more like exchanged volleys between two camps, which, for the purposes of this narrative, I’ll call:

1)      The Seers:  Those who believe that, properly designed, calibrated, and modeled, simulations can be so representative of the complex universe we live in that they can teach us about the likelihood of future events, versus

2)      The Sages:  Those who believe that it is the process of designing, engaging in, and reflecting on simulations that teaches us.  Contrary to the Seers, they would not believe that more complex models, more computing power, or more runs of a simulation would be predictive in any sense.

Tim, as moderator between the Sages and the Seers, tried his damnedest to keep the discussion open and inclusive, which was noble but not necessary – it was interesting enough (for me at least) to watch the two “camps” exchange views and to sit in on an hour-long discussion among some seminal thinkers on this topic (Hanley, Peter Perla, and others basically held court exchanging their opinions on the above).

What really strikes me about the ongoing dialogue between the Seers and the Sages (which obviously extends beyond that roundtable) is the degree to which the Seers believe in the predictive ability of their models.  Despite evidence presented by Hanley that campaign modeling has historically been off by two orders of magnitude (one example being the gross overestimation of the US casualties that would be incurred in the first Gulf War (see aside below)), Seers continue to argue, and receive LOTS OF DEFENSE MONEY on the premise, that sufficiently complicated modeling can be predictive.  As a theorist/scientist working in a policy organization, I completely understand where this pressure comes from – analysts who design models need to establish the relevance of their models and their work, and policymakers have limited budgets and need to know the utility of locking some eggheads in a room to design scale replicas of the universe that they can blow up.  It is all fun and games when folks restage classic World War Two battles with orc figurines in their basements on their own spare time and hobby money, but with the kinds of budgets and computing power defense puts toward these exercises, we’re talking about real money, manpower, and resources being devoted.  As a result, I think, the Seers need to (over?)promise the predictive quality of their exercises and eventually end up believing what the models say – regardless of how opaque the black box is and how many ballpark assumptions were loaded into the model.

Clearly, I place myself in the Sage camp – we use simulations to create environments where people can learn the skills necessary for working on the complex issues of development in conflict-affected countries.  We don’t think our model of the universe has any more predictive capacity than the expertise upon which the simulation is designed.  I am likely being a little unfair to the Seers and a bit provocative, but more in the interest of continuing the debate than of quelling it.  I would love to hear any other perspectives.

In any event, the discussion didn’t resolve the debate, but it was useful to me in understanding it better.  Many thanks to Tim and Margaret for a great afternoon of discussion (and quite an hors d’oeuvres spread, too!).  Very much looking forward to the next discussion at the end of September.

[Aside on estimates of casualties in the first Gulf War:  Actual US casualties were 240.  The closest, “best” estimate was three times that, the next best was six times the actual, and the majority were off by an order of magnitude (with some official estimates off by more than a factor of 200, implying casualties around 40,000).  While I don’t know the source for Hanley’s claim of two orders of magnitude as the rule, when you are multiplying half a percentage point here and a quarter of a percentage point there, it doesn’t take long before you are talking about real discrepancies.  The data on casualties comes from Biddle (1996).]
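To make the compounding point concrete, here is a minimal back-of-the-envelope sketch (the per-stage error factor and stage counts are illustrative assumptions of mine, not figures from Hanley or Biddle): if each stage of a chained campaign model is biased by even a modest factor, the errors multiply geometrically.

```python
# Illustrative sketch: how modest per-stage estimation biases compound
# into order-of-magnitude discrepancies in a chained model.
# The numbers below are hypothetical, chosen only to show the arithmetic.

def compounded_error(per_stage_factor, stages):
    """Total multiplicative error after chaining `stages` estimates,
    each biased by `per_stage_factor`."""
    return per_stage_factor ** stages

# Suppose each of five model stages overestimates by only 60% (factor 1.6):
print(round(compounded_error(1.6, 5), 1))   # ~10.5x -- one order of magnitude

# Chain ten such stages and you are off by two orders of magnitude:
print(round(compounded_error(1.6, 10)))     # ~110x
```

The design point is simply that the overall error is the product, not the sum, of the stage errors, so "half a percentage here and a quarter of a percentage there" can blow up quickly.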
