Last month, I participated in a panel on treating games as a quasi-experimental method, organized as part of the 83rd annual MORS symposium. The panel’s participants represented a range of approaches to gaming, as shown in some of our recent presentations to the MORS Wargaming Community of Practice this spring.
The dominant view of the panel was that games are not quasi-experiments, but that quasi-experimental design can serve as a useful metaphor for wargame design. Quasi-experimental design refers to experiments that do not have randomized controls. As a result, the method spends a great deal of time and attention considering what conclusions can be drawn given limited control over the conditions.
Since it’s particularly important to consider how much control we can actually exercise over game design, and how we articulate the rationale for our choices to sponsors and consumers of post-game analysis, I think the structures and standards laid out in quasi-experimental methods can provide helpful guidance. Some of my fellow panelists were less sold on the utility of this particular set of tools, though most agreed it was a valid approach.
However, the majority of the panel’s time was spent discussing validation. For many on the panel, this is a loaded term that calls to mind statistical validation, which is not possible in wargames. While I find the concepts and practices related to internal and external validity to be useful guides in game design and analysis, this panel did a good job of convincing me that persuading folks that internal and external validity need not mean statistical validity is not worth the fight. Audience members and panelists offered a range of alternatives, from “trustworthiness” to “analytical caveats,” that might provoke less resistance while still helping to articulate a shared, flexible standard that design and reporting on games should be held to.
What made the panel particularly useful to my mind is that the conversation (both between panelists and with the audience) was able to move past simply arguing for the need for standards to laying out broad approaches to design that might require different standards. These included:
Game Purpose: The differences between game design for analysis and training came up (including a short discussion of the two-by-two I use to describe the differences). It was agreed that how closely each of these types of games must reflect the real problem set it represents should be judged differently depending on the goal of the game.
Game Structure: Somewhat related, panelists stressed that the structure of the problem being explored in a game is not necessarily directly related to the structure of the adjudication technique in use in that game (a point I’ve stressed before). One conclusion I drew from the discussion is that the problem structure likely has as much, if not more, bearing on how to think about game results than the adjudication structure selected. Designers often focus on the adjudication model as the basis for why game results should be seen as relevant, but if many of our fundamental design decisions are driven by the problem structure, then we might be better off focusing on the problem.
Epistemological approach: One key point of divergence among members of the panel was which epistemological approach is best applied to wargaming. Arguments in favor of positivist, constructivist, and complexity theory were each made, though it was generally agreed that games could be designed and analyzed under any of the three approaches. Which approach is the most appropriate to gaming has been a frequent debate within the COP over the last year, but this conversation offered a way out: each may be valid but have different rules of the road (with implications for when each is appropriate to use).
Game design philosophy: Several panelists mentioned Peter Perla’s three styles of game designer: artists, architects, and analysts (discussed in this lecture), as a key aspect governing game design standards. I fall very strongly into the architect camp, so my style of game design lends itself particularly well to structured approaches from the social sciences. As a result, it was particularly helpful to hear from others on the panel, particularly the “artist” type designers, about their preferred metrics of game success. These metrics focused on participant engagement, which I’ve always treated as a less prominent component of game analysis. Thinking more about how to create standards that center on engagement and emotional connectivity will be useful for creating more differentiated standards that better fit the full range of games we use.
Finally, for many on the panel, validation of game results happens outside the game itself as part of the broader “cycle of research.” It’s great to see such a strong, explicit focus on games as part of broader efforts. Connecting the design choices and resulting standards to aspects of these broader studies will be a key area for future research in game design.