PAXsims

Conflict simulation, peacebuilding, and development

Squeezing the Turnip: The Limits of Wargaming

The following piece was written for PAXsims by Robert C. Rubel.


 


“Measure it with a micrometer, mark it with chalk, and cut it with an axe” is an old adage cautioning that the precision we can achieve in a project is limited by the least precise tool we employ. We should remember this wisdom any time we use wargaming for research purposes. Dr. John Hanley, in his dissertation On Wargaming, argues that wargaming is a weakly structured tool appropriate for examining weakly structured problems: those with high levels of indeterminacy, in which important aspects of the problem, such as the identity of all the variables, are unknown. Problems with lesser degrees of indeterminacy are more appropriately handled by various kinds of measurement and mathematical analysis. However, as the tools for simulation and the analysis of textual data become more sophisticated, the danger is that we will attempt to extract from wargaming a precision it is simply not suited to provide.

There are three aspects to this issue that we will address here: the inherent ability of wargaming to supply data that can be extrapolated to the real world, the development of “oracular” new gaming systems, and the number of objectives a particular wargame can achieve.

Back in 1990, Peter Perla wrote what remains the standard reference on wargaming, the aptly titled The Art of Wargaming. Of late there has been a good deal of discussion online about wargaming as a science, or perhaps more precisely, about the application of scientific methodology to wargaming. There is no doubt that a rigorous, disciplined, and structured approach to designing, executing, and analyzing wargames is a good and needed thing. Too often in the past this has not been the case, and much money, time, and effort have been wasted on games that were poorly conceived, designed, and executed. Worse, decisions of consequence have been influenced by the outcomes of such games. But even the most competently mounted game has its limits. In this writer’s view, games can indicate possibilities but cannot predict; judgment is required in handling their results.

It is one thing to use a game to reveal relationships that might not otherwise be detected. A 2003 Unified Course game at the Naval War College explored whether the Services’ future concepts were compatible with one another. It was designed as a kind of intellectual atom smasher, employing a deliberately over-challenging scenario to see where the concepts failed. The sub-atomic particle that popped out was that nobody was planning to maintain a SEAD (suppression of enemy air defense) capability that would cover the entry of non-stealth aircraft into defended zones. This was a potentially actionable insight, grounded in actual elements of the future concepts. When games are used this way they are revelatory, not predictive.

Where we run into trouble is when we attempt to infer too much meaning from what game players do or say. Dr. Stephen Downes-Martin has shown that player behavior is at least partly a function of players’ relationships with game umpires, and so the linkage to either present or future reality is broken. There are thus limits on the situations in which player behavior, or verbal and written inputs, can be regarded as legitimate output of a game. There is a difference between having an “aha” moment while observing player inputs and exchanges, and trying to dig out, statistically, some presumed embedded meaning from player responses to questionnaires, interviews, or even interactions with umpires or other players.

A first cousin of the attempt to extract too much information from a regular game is the attempt to create some new form of gaming that will be more revelatory or predictive than current practice can achieve. Most of these are some riff on the Delphi Method, whether a variation on the seminar game or some kind of massively multiplayer online game. I know of none that have justified the claims of their designers, and in any case they seem to violate the basic logic Downes-Martin lays out: the problematic connection between game players and the real world. When I was chairman of the Wargaming Department at the Naval War College, I challenged my faculty to advance the state of the art of wargaming, but always within the bounds of supportable logic. My mantra was “No BS leaves the building!”

Even if a game is conceived and designed with the above epistemic limitations in mind, there is still a danger that the sponsor will try to burden it with too many objectives. This was a common problem with the Navy’s Global Wargames in the late 1990s. Tasked to explore network-centric warfare, the games became overly large and complex, piling on objectives from multiple sponsors and producing voluminous, chaotic (not to mention expensive) output that was susceptible to interpretation in any way a stakeholder wanted.

The poster child for all of this was Millennium Challenge 02, a massive “game” involving over 35,000 “entities” embedded in the supporting computer simulation, many game cells, and thousands of instrumented troops, vehicles, ships, and aircraft in the field and at sea. Not only were the underpinning logic and design flawed (attempting to stack a game on top of field training exercises), but the multiplicity of objectives obscured any ability to extract useful information. As it turned out, the game was sufficiently foggy to arouse suspicion of its intended use in the mind of a key Red player, retired Lieutenant General Paul Van Riper, and his post-game public criticisms destroyed any credibility the game might have had (I observed the game standing behind him as he directed his forces).

Modesty is called for. While we might approach game design scientifically, and there are certain scientific philosophies upon which game analysis can be founded, gaming itself is not some form of the scientific method, even though rigor and discipline are necessary for a game’s success. An example of a good game was one run at the Naval War College in the spring of 2014 for VADM Hunt, then Director of the Navy Staff. The game was designed around the question “How would fleet operators use the littoral combat ship (LCS) if it had various defined characteristics?” Actual fleet staff officers were brought in as players, and they worked their way through various scenarios. What made a difference in the game was the effect that arming the LCS with long-range anti-ship missiles had on opposition players. The insight that VADM Rowden, Commander, Naval Surface Forces, took away was that distributing offensive power around the fleet complicated an enemy’s planning problem. One could consider this a blinding flash of the obvious, but in this case it was revelatory in terms of the inherent logic of an operational situation. Trying to squeeze more detailed insights from the game, such as the combat effectiveness of the LCS, might have fuzzed the game’s focus and prevented the Admiral from gaining the key insight. He translated that insight into the concept of distributed lethality, since codified into the more general doctrine of Distributed Maritime Operations.

In a very real sense, games are blunt instruments, the analogue of the axe in the old saying. Like the axe, though, they can be very useful. In this writer’s opinion, informed by many years of gaming, the best games in terms of their potential for yielding actionable results are focused on just a couple of objectives. That said, in my experience the most valuable insights are sometimes the ones you don’t expect going in. In fact, some of the most influential games I have seen were essentially fishing expeditions. In 2006 the Naval War College conducted a six-week-long strategy game to support the development of what became the 2007 A Cooperative Strategy for 21st Century Seapower (CS21). Going in, we did not know what we were looking for, but in the end a somewhat unexpected insight emerged (“It’s the system, stupid”) that ended up underpinning the new strategic document. “Let’s set up this scenario and see what happens” is an axe-like approach whose results must not then be measured with a micrometer.


Captain (ret.) Robert C. (“Barney”) Rubel served 30 years on active duty as a light attack/strike fighter aviator. Most of his shore duty was connected to professional military education (PME), and particularly the use of wargaming to support it. As a civilian he worked first as an analyst within the Naval War College Wargaming Department, later becoming its chairman. In that capacity he transformed the department from a mostly military staff organization into an academic research organization. From 2006 to 2014 he served as Dean of the Center for Naval Warfare Studies, the research arm of the Naval War College. Over the years he has played in, observed, designed, directed, and analyzed numerous wargames of all types, and has written a number of articles about wargaming. For the past four years he has served as an advisor to the Chief of Naval Operations on various issues, including fleet design and PME.

 
