Conflict simulation, peacebuilding, and development

Daily Archives: 26/04/2020

Owen: What’s wrong with professional wargaming?

The following piece was written by William F. Owen, editor of the Infinity Journal. In addition to this he consults for industry, government agencies and armed forces on a range of command, doctrine and capability issues. He started wargaming as a teenager.



Professional wargaming should aim to provide insights that can inform decisions, based on a degree of evidence. In essence, professional wargaming should meet the test of a theory: it should explain extant phenomena and enable a degree of prediction. Both the phenomena and the predictions would find expression as insights, or as issues requiring further investigation. If professional wargaming cannot do this, what use is it?

In other words, professional wargaming should tell you why, when, where and how combat occurs, and thus give the practitioner a sense of what would occur in reality. Well-executed professional wargaming of the right type has immense value, though the actual empirical basis for this, while extant, may not be as comprehensive or as rigorous as popularly imagined, and there is almost no body of peer-reviewed, unclassified academic research. Most importantly, what makes a good wargame seems to be poorly understood, particularly by many who advocate wargaming professionally. This paper takes the view that the best insights are derived from multiple iterations of truly adversarial wargames, using a number of different valid models and methods.

A very small number of books written by an equally small number of professional wargamers and/or analysts do exist, but few of them seem concerned with the validity of wargaming as a professional tool, or with how different models produce differing insights. Books about how to wargame and the history of wargaming do not a body of academic or professional literature make!

The consequences of being wrong, or of using a bad wargame, are both expensive and extremely serious. Firstly, people can die from bad advice or bad practice emanating from bad wargames; secondly, it seems logical to suggest that poor choices based on wargame evidence can easily waste as much money as wargaming supposedly saves. Wargaming done badly is not a cost-saving measure. Wargaming can be used to give the appearance of validity to bad and very expensive ideas. It needs to be explicitly stated that wargaming can be both very good and very bad. The problem is that no one ever seems to talk about the bad or the very bad, and how closely the professional community flirts with it, or even knowingly ignores the issue. Evidence derived from wargames is therefore extremely unsafe unless the modelling and processes used have been subjected to a high degree of rigour.

The heart of a professional wargame relies on the consequences of decisions as explained by a model. That model should be a useful approximation of reality. It does not have to be highly detailed or even complex, but it must be able to produce outcomes that are by and large valid in the real world. This is what separates hobby wargamers from professional wargamers. Professionals need it to make sense, or someone gets hurt.

Given the centrality of the model, what drives that model is clearly critical, yet there seems to be very little, if any, operational, rigorous or academic literature on the validity of the models professional wargames apparently employ. Indeed, even professional models may well pander to popular perceptions of outcomes, as the mechanics are often modified from hobby games. For example, the idea that infantry derive an increase in effectiveness when defending in wooded terrain is highly context-specific, not the absolute given most models assume. There is a body of operational and historical analysis literature suggesting that infantry attempting to defend within wooded terrain usually lose, and lose badly. That may well mean that some professional wargames rely on very poor models and thus produce unsafe insights.
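To make the point concrete, here is a minimal sketch of how a single modelled assumption, such as a defensive bonus for woods, drives the insight a wargame produces. The mechanics and numbers are entirely invented for illustration, not drawn from any real rule set:

```python
import random

def attack_succeeds(attacker_dice: int, defender_dice: int, woods_bonus: int) -> bool:
    """One engagement: each side rolls its dice, highest single die wins;
    ties go to the defender.

    woods_bonus is the contested assumption: +1 to the defender's best
    roll when fighting in woods, 0 in open ground.
    """
    atk = max(random.randint(1, 6) for _ in range(attacker_dice))
    dfn = max(random.randint(1, 6) for _ in range(defender_dice)) + woods_bonus
    return atk > dfn

def attacker_win_rate(trials: int, woods_bonus: int) -> float:
    """Monte Carlo estimate over many iterations of the same engagement."""
    wins = sum(attack_succeeds(3, 2, woods_bonus) for _ in range(trials))
    return wins / trials

random.seed(1)
open_ground = attacker_win_rate(100_000, woods_bonus=0)
in_woods = attacker_win_rate(100_000, woods_bonus=1)
print(f"attacker win rate, open ground: {open_ground:.2f}")
print(f"attacker win rate, in woods:    {in_woods:.2f}")
```

A single line of the model, the bonus, determines whether the game "shows" that woods favour the defender; if the historical record says otherwise, every insight downstream of that assumption is suspect.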
This may seem contentious, but I would like to propose some simple observations that strike at the heart of professional wargaming.

For example, the participant or participants in a wargame who best understand the rules and how the model or models work have a disproportionate advantage. What this means is that a highly experienced and capable military commander will probably lose when playing a wargame against a civilian who happens to be a more experienced wargamer, all other things being equal. This is because the military man will fight and operate as per his real-life understanding, while the civilian will simply do what he knows works in the game. So for a military user of wargames, the more he is exposed to the game's model of combat, the more comfortable and able he will become in its employment. If that model is not strongly based on reality, then he will be learning all the wrong lessons, and that will or could have real-life consequences. The same applies to wargames used for military education, force development and/or doctrine development.
This extends across all wargames, not just the professional domain. Dungeons and Dragons players who are expert in the rules, the books and the combat resolution model should, and most probably do, make far better decisions than people entirely new to the game, who may actually be better decision makers but lack the knowledge to inform them. The immense challenge posed by computer game AI is not usually a product of complex tactical algorithms. Most computer game AI is tactically simplistic, but because it plays exactly by the rules, as it must, it compensates to an incredible degree for its lack of tactical acumen compared to the human player, who is unfamiliar with the detail of how the model works.
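A toy illustration of that advantage (the weapon options and probabilities below are invented for the sketch, not taken from any real game): an agent that has read the combat resolution tables and simply picks the best odds will beat one acting on a plausible real-world instinct that happens to be wrong inside the model.

```python
import random

# Two hypothetical fire options; their true in-game kill probabilities
# are buried in the rules. The "expert wargamer" has read the tables;
# the "novice" acts on the intuition that the bigger gun must be better.
TRUE_P = {"cannon": 0.30, "missiles": 0.45}

def kill_rate(choose_weapon, rounds: int = 100_000) -> float:
    """Average kills per round for a given weapon-selection policy."""
    kills = sum(random.random() < TRUE_P[choose_weapon()] for _ in range(rounds))
    return kills / rounds

random.seed(0)
expert = kill_rate(lambda: max(TRUE_P, key=TRUE_P.get))  # knows the model: best odds
novice = kill_rate(lambda: "cannon")                     # plausible instinct, wrong in-game
print(f"expert kill rate: {expert:.2f}")
print(f"novice kill rate: {novice:.2f}")
```

The expert wins not through better judgement but through knowledge of the numbers, which is exactly the advantage that fails to transfer to reality if the numbers are wrong.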

If you want to excel at any wargame, play it a lot; learn, test and investigate how the rules predict the outcomes of engagements. You will gain a measurable advantage over someone who knows less. However, this will not make you a skilled commander in the real world, or give you operational insights safe enough to base expertise or professional judgement on. You will merely become an expert wargamer.
That said, given real-world experience, the same logic applies in the real world. The commander who knows most about his force, in terms of how it does what it does and its strengths and limitations, will be the man best able to employ that force in combat. Knowledge of your own force is literally a combat multiplier.

OK, so what? Is this not all obvious?

If it were obvious, where is the discussion? Who has addressed these issues in an open forum? Why is there not more written on bad wargames, and why is professional wargaming so variable in terms of output? It seems unlikely that research agencies and armed forces that have had bad experiences with wargaming experimentation would be open about what went wrong, even if they were prepared to admit the error; which strongly suggests this subject is avoided even where no classification issues exist.

Almost all wargame literature unquestioningly champions and advocates wargaming for the sake of wargaming, with almost no professional rigour. The validity of the model and the rule set is simply terra incognita to the vast majority of wargamers, as well as to many who employ wargaming professionally. The reason many military professionals have been, historically and contemporarily, dismissive of or agnostic towards wargaming is that they simply do not trust the models used. It seems logical to suggest that if they thought the combat modelling was accurate, they would engage more with the process. Why would they not?

So there are actually two distinct but closely related problems here. The first is that it is entirely right to be sceptical as to the validity of most wargame models or rule sets. The second is that a high level of familiarity with any rule set, in any game, confers an advantage that will not translate into safe insights or professional development unless the model used can be shown to have a high degree of real-world validity.

The issue is thus the models and the rule sets. The view that all wargames have a degree of professional merit is toxic to the validity of wargaming as a tool for professional military development, or indeed for any practical military application. It is entirely valid to note that a model is an approximation of reality, not an exact replication of it. The problems occur when those approximations generate false lessons that would not aid understanding or experience in the real world. That said, real-world combat is so infinitely variable and subject to friction that any model will struggle, yet the very nature of wargaming seeks to address this specific issue: to model warfare. By using models we axiomatically accept both their utility and their limitations. Wargaming is far more the dim candle that lights the path than the night-vision goggles it is often advertised as.

So the challenge offered to those advocating the professional application of wargaming is this: why should any professional have any faith in the validity of your modelling, and does deep knowledge of your model gain a player an advantage that would not be present in the real world? If playing the wargame does not make them better at their job in reality, what is the use of doing it? To be deliberately contentious: if good civilian wargamers, experienced in wargaming alone, can beat experienced military commanders, what does that tell you? What would that suggest?

Now, the premise of this paper fully concedes that wargaming can be, and has been shown to be, an extremely valuable tool, but there needs to be an evidence-based understanding of why and how we know that. For example, why would anyone use a hex-based, turn-based wargame instead of 1/285th-scale micro-armour to address a particular point of force structure design? If the answer lies not in the validity of the model but in the human, organisational, time, budget and playability needs of the organisation conducting the work, then there may be something very wrong. Likewise, how safe are insights generated by one methodology if they do not concur with the insights generated by a different method, approach or model, especially when examining the same problem?
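The cross-method question can itself be tested cheaply. As a sketch, run the same engagement through two different resolution models and see whether the insight survives; both mechanics below are invented toy models, standing in for, say, a hex game and a miniatures rule set:

```python
import random

def model_a(atk: int, dfn: int) -> bool:
    """Toy 'hex game' resolution: opposed single d6 rolls plus unit strength."""
    return atk + random.randint(1, 6) > dfn + random.randint(1, 6)

def model_b(atk: int, dfn: int) -> bool:
    """Toy 'miniatures' resolution: each strength point rolls a die,
    a 5+ is a hit, and the side with more hits wins."""
    hits = lambda n: sum(random.randint(1, 6) >= 5 for _ in range(n))
    return hits(atk) > hits(dfn)

def attacker_win_rate(model, trials: int = 50_000) -> float:
    """Estimate the attacker's win rate (strength 4 vs 3) under one model."""
    return sum(model(4, 3) for _ in range(trials)) / trials

random.seed(2)
a = attacker_win_rate(model_a)
b = attacker_win_rate(model_b)
print(f"model A attacker win rate: {a:.2f}")
print(f"model B attacker win rate: {b:.2f}")
if abs(a - b) > 0.05:
    print("models disagree: the insight is unsafe without further investigation")
```

Same forces, same question, and the two models can disagree on whether the attack is even favoured; which, if either, is right is exactly the validity question the models' advocates need to answer.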

One interesting aspect of comparing the validity of wargame models is the suggestion, even the assertion, that manual rather than computer-based wargaming allows a greater understanding and visibility of the underlying model. The counter-argument is that limited time and budget mean only relatively simple manual models can be used, because they simply cannot process or account for the wide variety of parameters inherent to most computer models. Manual games are forced to use inherently simple models.

It seems a very reasonable conjecture that computer-based games are actually played in a very different way from manual ones, to the extent that, given broadly the same problem, different behaviour and decision making would be required depending on whether you were playing a computer-based game or a manual one. This is largely to do with the number of parameters the model can process. Hex-based computer wargames actually allow this to be investigated, as do computer games based on Dungeons and Dragons rule sets. If the behaviours, and thus the outcomes, are different, then that phenomenon lies within the model. The issue is not computer versus manual. The issue is the veracity, and thus the usefulness, of the model.

Computer models can actually be investigated to a very high degree, although they cannot often be altered. Given simple tools like scenario editors, you can investigate the behaviour of combat resolution models and/or the AI underlying adversary decision-making. Games that allow third-party mods permit even deeper levels of investigation and understanding. It could be suggested that agencies and organisations that employ wargaming are, or should be, well aware of this, though perhaps reluctant to engage in a conversation about it. If that is not the case, then it seems fair to ask why, after 15-20 years of such games, this condition persists. The use of computer simulations for operational analysis is, after all, a well-trodden path in defence circles.

To conclude, there seems to be little informed discussion, or scientifically and academically rigorous writing, on what makes a good or bad wargame fit for professional use. In fact there seems to be little beyond opinion and faith-based assertions that x or y models are valid and safe to employ, and that professional wargames are of value regardless of the model. This is not to say professional wargaming has no value. The right wargame applied in the right way clearly does have immense value. It merely suggests we need to get better at understanding what has value and what does not.

William F. Owen 



