
Rubel and an introduction to mixed-method gaming

Several weeks ago, the well-respected wargamer Barney Rubel posted a short article critiquing SecDef Hagel's call to improve gaming as a means of enhancing innovation. The piece is a thoughtful review of many of the strengths and pitfalls of gaming and is well worth a read. However, I was struck by how many of Rubel's arguments are old hat within the gaming community. For example, those readers who followed up on Rex's suggestion to read Crisis Games 27 Years Later (which I also highly recommend) will recognize many of Rubel's arguments.

While these limitations are important to be aware of, too often writing about gaming either offers glowing praise from outsiders to the field (for an example, see Dave Anthony's poorly received Atlantic Council event from this fall) or cautious warnings from experienced gamers like Rubel about the limits of the field. While professional conferences and publications feature a bit more nuance, they too can often fall into the same patterns.

While both points are important to consider, neither is very useful for thinking about what can be done to make gaming better. However, in the last two years or so I've seen more and more work that explores new techniques and attempts to capture how they did (or did not) improve the ability of games to meet their objectives.

My goal is to use some of my time here at PAXsims to highlight these new techniques. My hope is that doing so will give a better sense of the areas where the field has made improvements, and help circulate new practices for feedback.

To start out this discussion, I want to talk about one of the areas I am most excited by: the use of new qualitative techniques that complement quantitative techniques and can be used in tandem with them to produce richer analytic results.

Wargaming has often been something of a little brother to operations research. As a result, the instinct of gamers (and DoD analysts more broadly) is often to make problems as specific, and ideally as quantitative, as possible. Many gamers have therefore preferred problems that can be reduced to combat performance tables and other probabilistic ways of generating results. Such a strategy avoids many of the pitfalls of subjectivity raised by Dean Rubel. However, these methods do so by stripping out much of the complexity and indeterminacy that makes problems challenging in the first place, and can as a result greatly limit gaming's utility.
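
To make that concrete, here is a minimal sketch of the kind of probabilistic table such designs rely on (often called a combat results table, or CRT). The odds ratios, die ranges, and outcome codes below are invented for illustration, not drawn from any real game:

```python
import random

# A toy combat results table (CRT). Rows are attacker-to-defender odds
# ratios; columns are die rolls. Every value here is invented for
# illustration, not taken from any real game.
CRT = {
    "1:1": {1: "AR", 2: "AR", 3: "EX", 4: "EX", 5: "DR", 6: "DR"},
    "2:1": {1: "AR", 2: "EX", 3: "EX", 4: "DR", 5: "DR", 6: "DE"},
    "3:1": {1: "EX", 2: "DR", 3: "DR", 4: "DE", 5: "DE", 6: "DE"},
}
# AR = attacker retreats, EX = exchange of losses,
# DR = defender retreats, DE = defender eliminated.

def resolve_combat(odds: str) -> str:
    """Resolve a single attack by rolling one die against the CRT row."""
    roll = random.randint(1, 6)
    return CRT[odds][roll]

print(resolve_combat("2:1"))  # e.g. "DR"
```

The appeal is obvious: results are reproducible and auditable. The cost, as noted above, is everything the table leaves out.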

An alternative to this approach relies on qualitative analysis, which may offer less specific findings but allows complex problems to be approached directly. Good qualitative analytic tools require a structured research design in advance of the game, and careful data collection that allows the game to be analyzed after the fact. More and more often, gamers are discovering techniques from these traditions that can contextualize findings to offer rigorous, useful analysis without artificially simplifying problems.
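
As a purely hypothetical illustration of what structured data collection might look like, a game cell could log every player decision against a fixed schema agreed on during research design, so the record can be revisited during post-game analysis. The field names here are my own invention:

```python
from dataclasses import dataclass
from datetime import datetime

# A hypothetical collection schema, fixed before the game begins, so
# every decision is captured the same way and the game can be
# reconstructed during post-game analysis. All field names are my own.
@dataclass
class DecisionRecord:
    timestamp: datetime   # when the decision was made in real time
    team: str             # which player cell made it
    decision: str         # what was decided, in the players' own words
    rationale: str        # the reason given at the time, if any
    observer: str         # which data collector recorded the entry

game_log: list[DecisionRecord] = []
game_log.append(DecisionRecord(
    timestamp=datetime(2015, 2, 5, 14, 30),
    team="Blue",
    decision="Commit the reserve brigade to the northern axis",
    rationale="Red's supply lines appear overextended",
    observer="analyst_1",
))
```

The point is not the code but the discipline: deciding before the game what will be captured, by whom, and in what form.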

To my (political-scientist-trained) mind, this distinction mirrors the division between quantitative and qualitative approaches in political science. While sometimes treated as antagonistic approaches, a great deal of excellent recent work has been produced that leverages techniques from both. Using "mixed method" or interdisciplinary approaches can allow one method to buttress the weaknesses of the other. They can also be used iteratively as a way to drive research. (For those interested in more context about how this plays out in political science, I highly recommend this paper by Kai Thaler that discusses the application of a mixed methods approach to the study of political violence.)

I believe that this same approach can also be used to improve the quality of games by effectively leveraging quantitative and qualitative techniques to build better analysis.

One of the most important precepts of mixed methods is that not all techniques are appropriate to all questions. To my mind, questions about what potential causes are linked to specific outcomes are best handled qualitatively. Questions about the size of an effect are best handled through quantitative means. Questions about the categories into which events might be divided (or, to put it another way, how similar or different events are) may be suitable to either approach, depending on what types of quantitative and qualitative data are available.

When determining which approach is most appropriate, I also tend to look at what type of data can be collected and use that as a guide to selecting a method. If you are interested in the impact of an economic policy that would naturally be measured in dollars, or in weapons performance that can be measured in rounds fired per minute, quantitative methods are the way to go.

If, on the other hand, you are interested in the process of decision making among groups of people, where the output is spoken words or policy decisions, there is often no natural quantitative proxy. In these cases, I think you are often better off using qualitative techniques that can be applied to data like transcripts of dialogue, descriptions of interpersonal behavior, and records of what decisions were made at what points in time.
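
As a minimal sketch of where that kind of data can lead, consider transcript segments that analysts have tagged against a pre-agreed codebook; tallying the codes by team is one simple first step toward seeing which concerns dominated each cell's deliberations. The codebook and data below are entirely hypothetical:

```python
from collections import Counter

# A hypothetical coded transcript: each utterance has been tagged by an
# analyst with codes from a codebook agreed on before the game.
coded_transcript = [
    {"team": "Blue", "turn": 1, "codes": ["risk_aversion", "logistics"]},
    {"team": "Blue", "turn": 2, "codes": ["escalation"]},
    {"team": "Red",  "turn": 1, "codes": ["deception", "escalation"]},
    {"team": "Red",  "turn": 2, "codes": ["escalation"]},
]

# Tally how often each theme appears per team -- a first step toward
# seeing which concerns dominated each cell's deliberations.
tallies: dict[str, Counter] = {}
for utterance in coded_transcript:
    tallies.setdefault(utterance["team"], Counter()).update(utterance["codes"])

for team, counts in sorted(tallies.items()):
    print(team, dict(counts))
# Blue {'risk_aversion': 1, 'logistics': 1, 'escalation': 1}
# Red {'deception': 1, 'escalation': 2}
```

Those tallies could then be set against quantitative game outputs (losses, budgets expended, turns elapsed) to see whether, say, more escalation talk actually tracked with more escalatory moves; that pairing is the mixed-method payoff.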

Mixed-method approaches to gaming are still in their infancy. In part, this is because the field lacks a strong understanding of what qualitative approaches are available and what problems they are appropriate to in a gaming context. As a result, I believe a necessary step toward realizing the full potential of mixed-method games is improving our qualitative analytic practices. There will be more on this point from me in the next year!

4 responses to “Rubel and an introduction to mixed-method gaming”

  1. Barney Rubel 05/02/2015 at 7:44 pm

    Ellie, you are correct that my cautions on gaming are pretty traditional. You are also correct that new analytic methods promise to squeeze more meaning out of games. At the Naval War College Wargaming Department several professors have applied various qualitative techniques such as grounded theory to the analysis of games. I also believe that new display and visualization tools and techniques (which we developed at NWC) will permit us to get more gaming value per unit time and perhaps generate insights never before possible. My intent with the piece in War on the Rocks was to sound a cautionary note. The organizational context of gaming is very important, and Hagel’s memo will be like cocaine to various contractors who, despite having smart, honest associates, cannot generate the right context for gaming.

    Sounds like you are doing very interesting work. Be sure to share with the wider community.
    Barney Rubel

  2. elliebartels 06/02/2015 at 11:27 am

Dean Rubel, thank you for your kind note. I’ve been keeping up with Hank and Doug’s work on qualitative techniques with great interest; in fact, some of my thoughts on Doug’s recent MORS COP talk can be found here: https://paxsims.wordpress.com/2015/01/20/ellies-response-to-ducharme-on-coa-analysis-gaming/.

To your broader point, I don’t disagree at all about the need to remind DoD of the limits and dangers of poorly done gaming. However, I also think that many leaders have proven that they will continue to ask for (and pay for) games, even when many are poorly executed. As a result, I’d like to see the field provide means to evaluate the quality of games (or better yet, game designs before they are executed) so the government has the ability to be more discriminating.

    Looking forward to continuing the conversation, and always looking for new folks to engage with!

  3. Barney Rubel 06/02/2015 at 4:38 pm

    Ellie,
    It’s easy to get wrapped around the axle when you start trying to categorize games. When I was designing and directing games I tried to focus on what factors would affect game validity, given the purpose of the game. For instance, if the purpose of a game is to educate war college students in the connection between operational art and strategy, the requirements for scenario realism are not as critical as for a game intended to test actual operational plans. All participants learn things in every game – or think they do. The trick is lining up the elements of design and execution such that you can establish a case for why the game produced the results it did.

I once designed a game that featured an overly aggressive use of Army Airborne forces. Got screams of protest from Service reps from the Army and Air Force, but what the scenario was, was a concept atom smasher forcing together the future concepts of all the services in an extreme situation. The subatomic particle that emerged from the collision was that nobody had put SEAD into their future concepts. Moreover, nobody had thought through the issue of who, on what basis, would be responsible for declaring an airspace safe for C-17s to fly through. The scenario may not have been all that plausible, but there was sufficient realism in the details to generate the insights.

    Barney

  4. elliebartels 06/02/2015 at 5:37 pm

I agree, but I have also found that new gamers and clients struggle to understand what factors affect game validity. As a result, I’m interested in the extent to which methods from social science can be used to structure how we make choices in game design, so there are more consistent norms. I’ll be talking a bit about this effort on 18 February at the next MORS COP meeting and will also post some thoughts here. I’ll be very interested in your (and the rest of the NWC gang’s) thoughts!
