Conflict simulation, peacebuilding, and development

Daily Archives: 31/08/2012

The Dance of the Simulation Designer (updated)

Earlier this month I offered some rather critical reflections on a recent Syria crisis simulation held at Brookings, highlighting some of the potential problems of wargaming as a tool of policy analysis, as well as addressing some apparent pitfalls in the Brookings simulation design.

In a subsequent blog post, Natasha Gill then added some thoughts of her own on the issue of designing simulations for think-tank clients.

That in turn led Devin Ellis of the ICONS Project to agree with some of what Natasha had said, but also to disagree with some of the rest. Because he raises some very important points about the inevitable compromises between design ambitions and practical realities, as well as the potential for still designing useful exercises within these constraints, I thought his comments were worth lifting from the comments section and featuring below as a full blog post. Note to Devin: This doesn't count as the blog post you promised us; we still hope to collect on that offer!

Update: Natasha offered a response, which I have added below. Any further discussion I’ll leave to the Comments section.

* * *

As someone whose bread is buttered by think-tank simulations, I am following this evolving series of posts with interest. Some of the observations from Rex and Natasha Gill are spot on, and there are things we all wish were better about these high-level exercises. But there are also very real constraints on them, and I find some of the statements above to be pretty sweeping and harsh without a lot of 'evidence' to support them. I know there are constraints on what you can put in a blog format, and I am really looking forward to the book when it comes out – but I think there's some room for debate here.

Rex has brought up Stephen Downes-Martin’s comments and pointed out that the problem may sometimes be that a wargame or simulation might not really be the best tool in the analytic box. It is also true – however – that if a simulation IS a good approach, that is no guarantee the designer and the client won’t face all the same dilemmas incurred by high level participants and a politicized, media-seeking environment. What bothers me a little about Gill’s comments is the sense of expectation that 1) exercises not meeting all her criteria are inherently less valuable than those that do; 2) the major problem with think tank sims is failure to meet the articulated standards of participation; 3) the designer’s wishes will prevail over the reality of the atmosphere.

Stephen gave an excellent talk on living up to our professional integrity this year at Connections, but the truth is it’s also not a one way street. If we walk away from every simulation or game request where the client can’t or won’t meet every aspect of our ideal design and running, we’re limiting our usefulness to the policy world. Truly. If we’ve decided that a simulation or game is a useful investigative approach to the client’s problem, my responsibility as a designer is to help the client make sure the scope and methodology of the simulation are appropriate both to the questions under investigation AND the resources available.

There are many points I would respond to above (favorably as well to be fair), and I’ll write a full post if need be, but I will here just say a word about the issues with high level participants to offer an example of my reservations. I work on high level think tank sims every year, and the truth is, you will never get participants for the lengths of time envisioned in Gill’s work. Her comments on process are wonderful, and I would give my left arm to have participants at the top level in an isolation environment for a week to run a program – the truth is it rarely ever happens.

On the issues of role sheets Gill writes:

Paradoxically, the tendency to obtain false or weak outcomes from a simulation is more likely with participants who know the issues well than with novices. The former can, consciously or not, leap over the instructions provided in their role sheets, bringing their own interpretations to the table rather than learning from the simulation process. As a result, the simulation will confirm the assumptions of the participants rather than provide them with new insights.

I think the first sentence is a reach. Sometimes that might be the case, but it's bold to make that statement categorically – though I am willing to be persuaded by evidence. As for the rest, I raise the following contentions:

  1. Yes, think tanks recruit top-level participants in part to give the event or publication more profile – but that's not the only reason. From a methodological standpoint the value of having those folks is that they ARE indeed experts. If the purpose of your exercise is to explore possible policy reactions to a crisis (I'm going to take it as a given that no one reading this blog believes the purpose of a well designed and run wargame is to predict the 'real future' in a complex policy environment) then the choice of participants is a factor in your design. Real top level experts might not be any better than undergrads at coming up with thoughtful, innovative approaches to a problem (they may be worse at it, as Gill implies) but they are undoubtedly better at depicting the probable behavior of their actual peers in a similar situation. A well run policy exercise acknowledges that. You expect the biases in your game design and you account for them in your debriefs and your analysis of the outcomes. Indeed this can sometimes be enlightening to the folks in the 'thinking' world about where they fail to understand which issues are viewed as most relevant to the folks in the 'action' world.
  2. Gill’s point about self-confirming, and therefore self-fulfilling, observations from participants who 'skip over' their detailed role sheets actually cuts both ways. 'Garbage in, garbage out' is not just the garbage the participants bring, but the garbage the scenario writer brings. I am very leery of what seems like an assumption that we, as designers, are always going to have a better take on realistic policy approaches to our hypothetical scenario than the person who has been at the table in real life. By telling my top level participants to obey the objectives or political attitude I have articulated in the role sheet without introducing their own perspectives and experiences, I am turning the simulation into MY self-fulfilling prophecy rather than theirs. I am also – indeed – making the added value of a top level person very limited. I’d be just as well off with any reasonably well informed gamer.

In sum, I’ll say it’s a dance: there are sometimes big problems with high level participants – but there are also excellent insights to be gained from them. Gill’s points are well taken, but it is our job to see those issues and work to address them in a way that does not stop the whole prospect of doing focused games with those types of people.

Devin Ellis

* * *

Natasha sent in this response to the points that Devin raised:

* * *

Thanks to Devin Ellis for his thoughtful comments on my piece. I appreciate his feedback, and would like to clarify a few issues and pose a question to him.

The Time Factor

My first point is just a clarification: I’m not sure where Ellis got the idea that I would expect high level participants to spend an entire week doing a simulation. It’s true, my own specialty is creating and running extended and in-depth modules (if I’m teaching graduate students the simulation can last the full semester!). But when I work with diplomats or professionals in conflict, the modules are limited to two days.

I realize that’s more than most professionals can spare, and it’s always a struggle to get them to commit the time. But when they do, they usually offer two comments specifically on the importance of the time element: 1) it made all the difference in terms of grasping the ‘logic’ of the role and understanding the multiplicity of variables that each player had to manage; 2) it was key in helping them learn the most vital lesson, which was less about content and more about the experience of living out a worldview, an experience that helped them gain new insights into the interests, incentives, resistances and obstacles faced by various actors.

Who Fulfils Which Prophecy

I agree that facilitators/game developers can project their own prejudices onto a scenario with as much vigor as a participant. But I was certainly not suggesting that the alternative to participants running the show is the facilitator creating a simulation out of his/her own head. I think the best model is when simulations are developed with a great deal of input from outside specialists, on each aspect of the scenario and roles. The answer to the ‘projection’ question is that there must be more rigor in how modules are constructed, rather than faith in the abilities or knowledge-base of the participants.

I am not implying that high level participants don’t have strong abilities or in-depth knowledge: I’m suggesting that a good simulation aims to challenge these in ways that benefit the participants and improve the quality of their policy recommendations.

My Way or the Highway

I can see why it sounded as though I believe each simulation should fit the model I’m outlining (you’re lucky you only heard about one part of that model! Don’t order my book when it comes out…). To clarify, I know each simulation cannot and need not fit one model. But simulations are proliferating like rabbits – in universities, think tanks, peace-making and peace-building training programs. And yet many are developed in an ad hoc manner, and facilitators who run them are not always specialists in education/teaching or in simulation development. Further, because almost any simulation generates a great deal of enthusiasm, we as facilitators and professors running them don’t always do enough in the sense of evaluating the weaknesses of the module.

Consequently, I think it’s worth outlining a best practice model of simulation, which is what I’m trying to do in my book. I accept that many great and rigorous modules will follow a different approach and have different goals and methods. I still think it might be useful to assemble and describe the elements that lead to very strong learning outcomes.

Exploring but Not Predicting

My question to Ellis is this: he writes that the purpose of the exercise is to “explore possible policy reactions to a crisis” but in the same sentence admits that “no one…believes the purpose of a well designed and run wargame is to predict the ‘real future’ in a complex policy environment.”

I think this means that although facilitators and participants are well aware that the details of the future can’t be known, the responses of various players to a crisis might be generally predictable in a simulation, in such a way as to be informative or useful for policy makers.

But upon reflection, if this is what Ellis meant, I’m not sure it makes sense to me. If a simulation can’t predict the future, then how much is it really telling you about possible ‘policy reactions’? And how much of what it does tell you about these is actually useful (rather than merely interesting) to real policy makers? Policies are guided by the choices of human beings; and the motives guiding those choices are likely to be revealed in a simulation if it delves deeply into the realities lived by those human beings – the pressures they face inside themselves, within their own camp and in confrontation with their adversaries. It’s my view that in a crisis simulation it is often the case that many of these elements are caricatured rather than deepened.

Non-specialists versus The Pros

Finally, I’d like to make a point that I realize will sound like my least credible statement. I’m making it nonetheless because I feel I can take cover under the umbrella of those who originally raised it…

I always run a simulation with ‘coaches’ – ‘real’ negotiators, military/security officials, diplomats or analysts who, in addition to helping create the materials, are onsite throughout the simulation to help participants work through the issues. After watching the simulation evolve, the coaches almost always make the same comment: they say they are astonished and disturbed at the non-difference between novices and, well, themselves and other ‘real’ actors. In contrast to Ellis’s point that top level experts are “undoubtedly better than non high-levels at depicting the behavior of their actual peers” (italics added), the coaches I cite above are unsettled precisely by the opposite: the fact that, given very realistic roles, detailed materials and elaborate strategies, the non-experts at the table reproduce reality in ways that are striking – in terms of how they discuss and analyze complex issues, how they express the beliefs of various players, and how the dynamics between parties evolve.

I am not suggesting that these participants are able to offer policy recommendations on the same level as specialists. There are of course many differences in the knowledge and wisdom of high level and non-high level participants, and simulations have to work with and around those. But in my experience, the difference in what participants learn from a simulation is not the result of what they bring to the table, but what the table compels them to take from it.

Natasha Gill
