Earlier this week, Devin and I both listened to a great talk by Naval War College’s Dr. Doug Ducharme for the MORS Wargaming Community of Practice on best practices for wargaming in support of Course of Action (COA) analysis. This is the second of three posts: the first summarized Doug’s talk, and the third will have some thoughts from Devin.
I found Doug’s presentation, as well as the discussion that followed his talk, to be very insightful and thought-provoking. It was particularly useful that Doug offered concrete guidance for game designers to improve their practice. The suggested best practices align well with my own experiences, and serve as a useful set of guidelines for new gamers. However, there are two points that I want to explore further: Doug’s distinction between educational and analytical gaming, and his distinction between free and rigid adjudication.
Doug argued that all games are experiential. What differentiates educational and analytical games is whether the goal of the game is to change the participants, or to change our base of knowledge. This definition is related to, but somewhat different from, what I’ve used in my own work. In past work, I’ve defined the types of game purposes using the 2×2 below:
As a result, I tend to think of analytical games as seeking to gain a better understanding of a problem, while educational games seek to make people better able to solve similar problems in the future. I need to think more about how the distinction Doug points to fits into this model.
Doug’s definition also suggests to me a somewhat troubling fact: the majority of events that are run to improve US strategy today are actually focused on improving decision makers’ future capacity. On one hand, I think gaming can provide excellent educational value and professional development. On the other, I don’t want that to come at the expense of thinking through strategy and plans to make them as robust as possible. I left Doug’s talk hoping that the comment made by another participant that “all games are both educational and analytical” is right!
The second point I want to tease out a bit more is Doug’s definition of adjudication methods. The talk, and the discussion after, clarified for me something about how gamers talk about adjudication that has been bothering me for a long time. A lot of discussion around gaming for analysis argues that the more rigid a game’s system is, the more analytical it is. As a qualitative/mixed methods person, this rush to quantification always rubs me the wrong way, and I think this talk gave me a new way to frame why it bothers me.
I think that most of the time when gamers talk about free or rigid methods, we are actually conflating two different ideas. The first concept is a decision made by the game designer about how structured a technique to use to capture and analyze data about adjudication. Here, we can think about a spectrum that ranges from very loose adjudication, where rulings are made with few restrictions (and likely little documentation), to a very rigid system with detailed protocols for documentation and adjudication. The second concept deals with how well specified the model used to generate the outcomes of player decisions is. Unless a game designer misses something in their research, this factor is limited by the state of knowledge on the issue being gamed. In some cases, we may have a very concrete and detailed theory of what should happen, but other times our models of cause and effect are less well developed, and we are left to deal with some pretty underspecified models.
While I do think that it is easier to establish structured adjudication rules when we have a well-specified theory behind our adjudication, I don’t think the two concepts are necessarily the same. For example, one participant on the call referenced matrix gaming, which can provide a great deal of structure to game adjudication, even when the causal models behind adjudication are fairly nebulous.
Treating the two design criteria like they are connected, or even the same, lets us get away with under-designing games when we are dealing with complicated, poorly defined issues. For example, “free” method games often rely on expert adjudicators, who make determinations about the effects of player actions without providing much more justification than their credentials. However, by having less structure in the adjudication, game designers often give themselves a pass from looking carefully at what mental models experts are using to determine outcomes. As a result, we end up never really knowing how specified the model that drove the action of the game actually was, producing inevitably nebulous and unsatisfying post-game analysis.
I’d argue that game designers should treat structured approaches to adjudication as critical to good game design. Then, even when the underlying models are underspecified, games can contribute to clarifying the models that do exist, and over time, to increasing model specificity. This is a concept that has been discussed with regard to wargaming emerging issues, but I think it needs to be applied much more broadly.
This is a topic that a lot of my recent work has focused on, and I’m due to speak to the MORS COP on the topic next month. I’m hoping to be able to share some of my thoughts here in advance of that presentation. As a result, even more than usual, I’d love folks’ feedback on these ideas!