William Owen recently offered some thoughts at PAXsims on “what is wrong with professional wargaming.” Jeremy Sepinsky (Lead Wargame Designer at CNA) then replied with some comments—which I have reposted below for greater visibility. The views expressed are those of the author and do not represent the official policy or position of any agency, organization, employer or company.
I think the challenge here comes with equating “wargaming” with an analytic discipline rather than an analytic tool. Wargaming looks completely different in various contexts, but criticizing the rigor of the discipline is like criticizing the p-test, the Fourier transform, an MRI, or anonymous surveys: they are very valuable if done well, and damaging if done poorly. The trick is in educating sponsors and potential sponsors as to what “bad” looks like. Even peer-reviewed journals have gone for years without identifying the “p-hacking” that has been taking place in quantitative analysis. And wargaming is a far more diverse toolset with a smaller number of skilled practitioners (and no peer-reviewed journals, as you point out) than quantitative methods, which makes it even harder to call out the bad actors.
To respond to Owen’s question of “so what, is it obvious?”: When a person running a professional wargame cannot effectively translate real-world decision making into relevant impacts on the conduct of the game, then either a) the person needs to be able to fully articulate why decisions at that level are beyond the scope of the mechanics, or b) it is a poorly run wargame. But many of the situations he discusses are “game-time” decisions. And it would be impossible or impractical (though probably beneficial) to include Matt Caffrey’s “Grey team” concept in all games. In that concept, there is an entire cell whose job it is to evaluate the wargame itself—not the outcomes, or the research, but to critique whether the wargame was an appropriate model of reality for the purpose defined. Though, to support the other points in Owen’s article, I have not been able to find any published article discussing the concept.
But this leads into another point: wargames are more than combat modeling. Many of Owen’s examples and statements about the model seem to imply that the wargames he discusses are those interested in modeling and evaluating force-on-force conflict—and that the side that understands the underlying wargame mechanics of the conflict will succeed. To that end, those games do not seem to be played manually, for the very reason that you’re discussing. Instead, they are reproduced as “campaign analysis.” Models like STORM and JICM are trusted, I would argue, too much. This removes the requirement that players know the rules, because it pits computer v. computer where both sides know all the rules.
When a given conflict can be reduced to pure combat, campaign analytics are a good tool for calculation. But when conflict is more than combat, the human element comes to the fore and wargames have an opportunity to expose new insights. In these cases, the specifics of the combat models should play less of a role in the outcomes. They are more highly abstracted to allow time and attention for the more humanistic elements of war: the move-counter-move in the cognitive domain of the players. Wargames structured properly to emphasize that cognitive domain should overcome the requirement of memorizing volumes of highly detailed rules by simply not having that many rules. Players only have so much mental currency to spend during the play of a single game, and where that currency is placed should be chosen (by the designer) wisely.
Finally, I’ll conclude with a response to Owen’s final statement: “The right wargame applied in the right way clearly does have immense value. It merely suggests we need to get better at understanding what has value and what doesn’t.” Who is it that defines the value of the wargame? Is it the sponsor? The designer? The players? I guarantee you that each comes away with some value, and that they may not all agree on what that value was. Most US Department of Defense wargames that I am familiar with are one-off events. Understanding the implications of each wargame rule on every wargame action or decision is beyond the scope of most wargames and beyond the interest of wargame sponsors. Instead, we wargamers can do a better job explaining the limits of our knowledge. When we design a game, there is a delicate balance between fidelity and abstraction. Some aspects of the game are highly faithful to reality, while others are highly abstract. Where you place the fidelity and what you abstract has a tremendous impact on the conclusions that you can draw at the end of a wargame. Wargame designers, facilitators, and analysts owe it to their sponsors to make clear which insights and conclusions are backed by a high degree of fidelity and which are not. Complex wargame models always run the risk of inputs being identified as insights, and our due diligence is important here. But that diligence extends beyond the numerical combat modeling into the facilitation, scenario, and non-kinetic aspects of the wargame as well.