Late last week, Peter Perla released a pre-publication copy of his most recent paper responding to the recent high-level Pentagon interest in gaming as a means of innovation. Perla lays out his vision for how we can take advantage of this moment without allowing gaming at its laziest and least productive to take over. For Perla, good gaming for innovation (what I’ve called “discovery gaming” in other pieces) depends on competition between players. As a result, innovative design is far less important than design that enables strong communication and competition to produce creativity.
This isn’t the first time the argument has been made to treat gaming more like an art than a science (The Art of Wargaming is called that just to riff on Sun Tzu, after all). Art vs. science is also a standing debate among gamers that erupts at least once a year. In the past, I’ve viewed these debates primarily through the lens of what they tell us about how we teach and learn to game—too often science produces a cookie-cutter template while art produces unreliable mentoring.
However, this time around, perhaps influenced by my current focus on game design, I’m noticing a different thread in the art vs. science debate: how do we evaluate whether a game is good?
Perla argues that “Real wargaming is about the conflict of human wills confronting each other in a dynamic decision-making and story-living environment” and that “It is this process of competitive challenge and creativity that can produce insights and identify innovative solutions to both known and newly discovered problems.” He also calls on current practitioners to speak out and identify bad games, building up a quality control that the field does not always have.
Taken together, these lines suggest that the quality of a game can be determined by the quality of its intellectual output, and that judgement can be rendered based on experience and expertise. But when applied to the environment in which games are actually created, these standards become problematic very quickly.
Professional games are almost never built only to achieve the goals of the designer. Instead, the reality of national security gaming is that game designers work for game sponsors, who evaluate our work to determine both what lines of research to continue, and which of our findings to base policy decisions on.
Given that it is these sponsors who evaluate our work, how might they apply Peter’s standards? I worry that these standards place too much weight on the output of the game. I’ve seen too many “innovative” outcomes in games that are really just the result of ignoring the constraints that shape the real world. Unless the context of the game’s design, and how it replicates the real-world problem set of interest, is taken into full account, a lot of time and energy will be expended on analyzing (or even executing) half-baked ideas.
I also worry that relying on the community of gamers to identify good and bad games sets up worrying dynamics. As Peter notes, not all the folks currently making national security games are doing a good job. While Peter points to some strong communities that have sprung up, they are hardly monolithic in how they approach, practice, or assess games. What’s more, the field is so fractured that even the most inclusive of these groups can hardly claim to encompass all the good gamers out there. So how are sponsors to choose which voices in the professional community to base their standards on?
All of this brings me back around to the need for standards of rigorous design. I absolutely agree that a rigid, “systematized” set of game designs cannot work. But adhering to good research design methods and standards of evidence can offer us some basic standards that can be applied to all design types, and that are accessible to our sponsors as well as practitioners. These may not in and of themselves be enough to guarantee a great game, but they will prevent many bad ones.