Conflict simulation, peacebuilding, and development

Daily Archives: 03/09/2009

Designing Exercises for Teaching and Analysis

The latest issue of Joint Forces Quarterly (issue 55, 4th Quarter 2009) includes a short but interesting piece on designing exercises, games, and simulations. Two observations stand out.

The first concerns the importance of validation in social and political simulations.

If we are conducting an exercise to explore the contours of some ill-defined future problem, for instance, it is crucial that we be able to justify why we reach certain conclusions or how we generalize lessons learned from an exercise.

Answering the “How do I know that I know that?” question is routine in the social sciences, including in qualitative work common in political science and sociology, but not always thoroughly discussed in the exercise design and evaluation community. Nevertheless, it is crucial to a defensible analysis.

As developments in information and computer technology make more complex (and visually appealing) simulations possible—especially in the military, where there are large R&D budgets and considerable demand for software-based training and planning solutions—I must admit to being worried about the theoretical assumptions that may (or may not) be built in. This is especially so because (to be frank) I don’t think social science yet has a very firm grip on such issues as insurgency, civil war, peacebuilding, and political stability, much less how to model them.

A second important point raised by the article is the difference between exercises and simulations for analytical purposes, as opposed to those that primarily have a training role:

The elements of good exercise design for teaching and analysis can be somewhat different for the simple reason that the lessons to be learned are different. Analytically, what we learn from tabletop exercises usually has to do with whether the model of the problem described in the scenario introduces the right independent variables, whether others should be added, how they could be refined and their relative weight, and how differences in them might require different actions and result in different outcomes.

Exercises for teaching purposes are rooted in an assumption of the value of experiential learning, that giving participants a visceral feel for the exigencies of policy decisionmaking will be an effective way of making theoretical lessons they have learned concrete.

The first observation highlights the “garbage in, garbage out” problem once more, and hence the issues of validation raised earlier. The latter points to the importance of “feel”—something that may have as much to do with the way a simulation is staged as it does with the rules and procedures involved.

In my own simulations, I have at times deliberately engineered information overload, exhaustion, and time pressures into the process to give participants a sense of how these influence behaviour. One can go even further, and modify the physical layout of the simulation, the immediate environment, and so forth to create an additional layer of contextual effect.

At the Chatham House simulation of Palestinian refugee negotiations, for example, we deliberately provided players with physical resources (dedicated team rooms, phones, printer access, interns assigned as support staff, etc.) in order to proxy the degree of institutional support each party might enjoy in real life. This meant that our refugees—played in the simulation by actual refugees—were given no room at all, and were forced to make do with whatever corners of the building they could find. One effect of this, in turn, was to place them in a marginalized position that nicely captured how often they are left out of consultative and negotiating processes on the issue.
