PAXsims

Conflict simulation, peacebuilding, and development

Category Archives: methodology

Sepinsky: Wargaming as an analytic tool

William Owen recently offered some thoughts at PAXsims on “what is wrong with professional wargaming.” Jeremy Sepinsky (Lead Wargame Designer at CNA) then replied with some comments—which I have reposted below for greater visibility. The views expressed are those of the author and do not represent the official policy or position of any agency, organization, employer or company.


 


I think the challenge here comes with equating “wargaming” with an analytic discipline rather than an analytic tool. Wargaming looks completely different in various contexts, but criticizing the rigor of the discipline is like criticizing the p-test, the Fourier transform, an MRI, or anonymous surveys: they are very valuable if done well, and damaging if done poorly. The trick is in educating sponsors and potential sponsors as to what “bad” looks like. Even peer-reviewed journals have gone for years without identifying the “p-hacking” that has been taking place in quantitative analysis. And wargaming is a far more diverse toolset, with a smaller number of skilled practitioners (and no peer-reviewed journals, as you point out) than quantitative methods, which makes it even harder to call out the bad actors.

To respond to Owen’s question of “so what, is it obvious?”: When a person running a professional wargame cannot effectively translate real-world decision making into relevant impacts in the conduct of the game, then either a) the person needs to be able to fully articulate why decisions at that level are beyond the scope of the mechanics, or b) it is a poorly run wargame. But many of the situations he discusses are “game-time” decisions. And it would be impossible/impractical (though probably beneficial) to include Matt Caffrey’s “Grey team” concept in all games. In that concept, there is an entire cell whose job it is to evaluate the wargame itself. Not the outcomes, or the research, but instead to critique whether the wargame was an appropriate model of reality for the purpose defined. Though, to support the other points in Owen’s article, I have not been able to find any published article discussing the concept.

But this leads into another point: wargames are more than combat modeling. Many of Owen’s examples and statements about the model seem to imply that the wargames he discusses are those interested in modelling and evaluating force-on-force conflict—and that the side that understands the underlying wargame mechanics of the conflict will succeed. To that end, those games do not seem to be played manually, for just the very reason that you’re discussing. Instead, they are reproduced as “campaign analysis.” Models like STORM and JICM are trusted, I would argue, more than they should be. This removes the requirement that the players know the rules, because it pits computer against computer, where both sides know all the rules.

When a given conflict can be reduced to pure combat, campaign analytics are a good tool for calculation. But when conflict is more than combat, the human element comes to the fore and wargames have an opportunity to expose new insights. In these cases, the specifics of the combat models should play less of a role in the outcomes. They are more highly abstracted to allow time and attention for the more humanistic elements of war: the move and counter-move in the cognitive domain of the players. Wargames structured properly to emphasize that cognitive domain should overcome the requirement of memorizing volumes of highly detailed rules by simply not having that many rules. Players only have so much mental currency to spend during the play of a single game, and where that currency is placed should be chosen (by the designer) wisely.

Finally, I’ll conclude with a response to Owen’s final statement: “The right wargame applied in the right way clearly does have immense value. It merely suggests we need to get better at understanding what has value and what doesn’t.” Who is it that defines the value of the wargame? Is it the sponsor? The designer? The players? I guarantee you that each comes away with some value, and that they may not all agree on what that value was. Most US Department of Defense wargames that I am familiar with are one-off events. Understanding the implications of each wargame rule on every wargame action or decision is beyond the scope of most wargames and beyond the interest of wargame sponsors. Instead, we wargamers can do a better job explaining the limits of our knowledge. When we design a game, there is a delicate balance between fidelity and abstraction. Some aspects of the game are highly faithful to reality, while others are highly abstract. Where you place the fidelity and what you abstract has a tremendous impact on the conclusions that you can draw at the end of a wargame. Wargame designers, facilitators, and analysts owe it to their sponsors to make it clear which insights and conclusions are backed by a high degree of fidelity and which are not. Complex wargame models always run the risk of inputs being identified as insights, and our due diligence is important here. But that diligence extends beyond the numerical combat modelling into the facilitation, scenario, and non-kinetic aspects of the wargame as well.

Jeremy Sepinsky 

 

Owen: What’s wrong with professional wargaming?

The following piece was written by William F. Owen, editor of the Infinity Journal. In addition to this he consults for industry, government agencies and armed forces on a range of command, doctrine and capability issues. He started wargaming as a teenager.


 


Professional wargaming should aim to provide insights that can inform decisions, based on a degree of evidence. In essence, professional wargaming should equal the test of theory, in that it should explain extant phenomena and enable a degree of prediction. Both the phenomena and the predictions would find expression as insights or issues requiring further investigation. If professional wargaming cannot do this, what use is it?

In other words, professional wargaming should tell you why, when, where and how combat occurs, and thus give the practitioner a sense of what would occur in reality. Well-executed professional wargaming of the right type has immense value, though the actual empirical basis for this, while extant, may not be as comprehensive or as rigorous as popularly imagined, and there is almost no body of peer-reviewed, unclassified academic research. Most importantly, the problem is that what makes a good wargame seems to be poorly understood, particularly by many who advocate wargaming professionally. This paper takes the view that the best insights are derived from multiple iterations of truly adversarial wargames, using a number of different valid models and methods.

A very small number of books written by an equally small number of professional wargamers and/or analysts do exist, but little of this literature concerns itself with the validity of wargaming as a professional tool, or with how different models produce differing insights. Books about how to wargame and the history of wargaming do not a body of academic or professional literature make!

The consequences of being wrong or using a bad wargame are both expensive and extremely serious. Firstly, people can die based on bad advice or practice emanating from bad wargames; secondly, it seems logical to suggest that poor choices based on wargame evidence can easily waste as much money as wargaming might supposedly try to save. Wargaming may not always be a cost-saving measure if done badly. Wargaming can be used to give the appearance of validity to bad and very expensive ideas. It needs to be explicitly stated that wargaming can be both very good and very bad. The problem is that no one ever seems to talk about the bad or very bad, and how closely the professional community seems to flirt with it, or even knowingly ignores the issue. Thus evidence derived from wargames is extremely unsafe unless the modelling and processes used have been subjected to a high degree of rigour.

The heart of a professional wargame relies on the consequences of decisions as explained by a model. That model should be a useful approximation of reality. It doesn’t have to be highly detailed or even complex, but it must be able to produce outcomes that are by and large valid in the real world. This is what separates hobby wargamers from professional wargamers. Professionals need it to make sense, or someone gets hurt.

Given the centrality of the model, what drives that model is clearly critical, yet there seems to be very little, if any, operational, rigorous or academic literature on the validity of the models professional wargames apparently employ. Indeed, even professional models may well pander to popular perceptions of outcomes, as the mechanics are often modified from hobby games. For example, the idea that infantry derive an increase in effectiveness when defending in wooded terrain is highly context-specific, not the absolute given most models assume. There is a body of operational/historical analysis literature that suggests infantry attempting to defend within wooded terrain usually lose, and lose badly. That may well mean that some professional wargames rely on very poor models and thus produce unsafe insights.

This may seem contentious, but I would like to propose some simple observations that may strike to the heart of professional wargaming.

For example, the participant or participants in a wargame who best understand the rules and how the model or models work have a disproportionate advantage. What this means is that a highly experienced and capable military commander will probably lose when playing a wargame against a civilian who happens to be a more experienced wargamer, all things being equal. This is because the military man will fight and operate as per his real-life understanding, while the civilian will merely do what he knows works in the game. So for a military user of wargames, the more he is exposed to the game model of combat, the more comfortable and able he will become in its employment. If that model is not strongly based on reality, then he will be learning all the wrong lessons, and that will or could have real-life consequences. The same applies to wargames used for military education, force development and/or doctrine development.

This extends across all wargames, not just the professional domain. The Dungeons and Dragons players who are experts in the rules, books and the combat resolution model should, and most probably do, make far better decisions than people entirely new to the game, who may actually be better decision-makers but lack the knowledge to inform them. The immense challenge produced by computer game AI models is not usually a product of complex tactical algorithms. Most computer game AI is tactically simplistic, but because it plays exactly by the rules, which it has to, it will compensate to an incredible degree for its lack of tactical acumen compared to the human player, who is unfamiliar with the detail of how the model works.

If you want to excel at any wargame, play it a lot; learn, test and investigate how the rules predict the outcomes of engagements. You will gain a measurable advantage over someone who knows less. However, this will not make you a skilled commander in the real world or give you operational insights on which it is safe to base expertise or professional judgement. You will merely become an expert wargamer.

Given real-world experience, though, the same dynamic applies in the real world. The commander who knows most about his force, in terms of how it does what it does and its strengths and limitations, will be the man best able to employ that force in combat. Knowledge of your own force is literally a combat multiplier.

OK, so what? Is this not all obvious?

If it were obvious, where is the discussion? Who has addressed these issues in an open forum? Why is there not more written on bad wargames, and why is professional wargaming so variable in terms of output? It seems unlikely that research agencies and armed forces that have had bad experiences with wargaming experimentation would be open about what went wrong, even if they were prepared to admit the error; which strongly suggests this subject is avoided even where no classification issues exist.

Almost all wargame literature unquestioningly champions and advocates wargaming for the sake of wargaming, with almost no professional rigour. The validity of the model and the rule set is simply terra incognita to the vast majority of wargamers, as well as to many who employ wargaming professionally. The reason why many military professionals have been historically and contemporarily dismissive of, or agnostic towards, wargaming is that they simply don’t trust the models used. It would seem logical to suggest that if they thought the combat modelling was accurate, they would engage more than they do with the process. Why would they not?

So there are actually two distinct but closely related problems here. The first is that it is entirely right to be sceptical as to the validity of most wargame models or rule sets. The second is that a high level of familiarity with any rule set, in any game confers an advantage which will not translate into safe insights or professional development, unless the model used can be shown to have a high degree of real world validity.

The issue is thus the models and the rule sets. The view that all wargames have a degree of professional merit is toxic to the validity of wargaming as a tool for professional military development or indeed any practical military application. It is entirely valid to note that a model is an approximation of reality, not an exact replication of it. The problems occur when those approximations generate false lessons that would not aid understanding or experience in the real world. That said, real-world combat is so infinitely variable and subject to friction that any model will struggle; yet the very nature of wargaming seeks to address this specific issue, namely, to model warfare. By using models we axiomatically accept both their utility and their limitations. Wargaming is far more the dim candle that lights the path than the night-vision goggles it is often advertised as.

So the challenge offered to those advocating the professional application of wargaming is: why should any professional have any faith in the validity of your modelling, and does deep knowledge of your model gain players an advantage that would not be present in the real world? If playing the wargame does not make them better at their job in reality, what is the use of doing it? To be deliberately contentious, if good civilian wargamers, experienced in wargaming alone, can beat experienced military commanders, what does that tell you? What would that suggest?

Now, the premise of this paper fully concedes that wargaming can be and has been shown to be an extremely valuable tool, but there needs to be an evidence-based understanding of why and how we know that. For example, why would anyone use a hex-and-turn-based wargame instead of 1/285th-scale micro-armour to address a particular point of force structure design? If the answer doesn’t lie in the validity of the model, but in the human, organisational, time, budget and playability needs of the organisation conducting the work, then there may be something very wrong. Likewise, how safe are insights generated by one methodology if they do not concur with the insights generated by a different method, approach or model, especially when examining the same problem?

One interesting aspect of comparing the validity of wargame models is the suggestion, even assertion, that manual as opposed to computer-based wargaming allows for a greater understanding and visibility of the underlying model. The counter-argument is that limited time and budget mean that only relatively simple manual models can be used, because they simply cannot process or account for the wide variety of parameters inherent to most computer models. Manual games are forced to use inherently simple models.

It seems a very reasonable conjecture that computer-based games are actually played in a very different way to manual ones, to the extent that, given broadly the same problem, different behaviour and decision making would be required depending on whether you were playing a computer-based game or a manual one. This is largely to do with the number of parameters the model can process. Hex-based computer wargames actually allow this to be investigated, as do computer games based on Dungeons and Dragons rule sets. If the behaviours, and thus the outcomes, are different, then the phenomenon lies within the model. The issue is not computer versus manual. The issue is the veracity, and thus the usefulness, of the model.

Computer models can actually be investigated to a very high degree, although they cannot often be altered. Given simple tools like scenario editors, you can investigate the behaviour of combat resolution models and/or the AI underlying adversary decision-making. Games that allow third-party mods permit even deeper levels of investigation and understanding. It could be suggested that agencies and organisations that employ wargaming are, or should be, well aware of this, though perhaps reluctant to engage in a conversation about it. If this isn’t the case, then it seems fair to ask why, after 15–20 years of such games, this condition persists. The use of computer simulations for operational analysis is, after all, a well-trodden path in defence circles.

To conclude, there seems to be little informed discussion or scientifically and academically rigorous writing on what makes a good or bad wargame fit for professional use. In fact, there seems to be little beyond opinion and faith-based assertions that x or y models are valid and safe to employ, and that professional wargames are of value regardless of the model. This is not to say professional wargaming has no value. The right wargame applied in the right way clearly does have immense value. It merely suggests we need to get better at understanding what has value and what doesn’t.

William F. Owen 

 

 

 

Bartels: Building better games for national security policy analysis

It’s out! Ellie Bartels’ long-awaited PhD dissertation on Building better games for national security policy analysis is now available on the RAND website.

This dissertation proposes an approach to game design grounded in logics of inquiry from the social sciences. National security gaming practitioners and sponsors have long been concerned that the quality of games and sponsors’ ability to leverage them effectively to shape decision making is highly uneven. This research leverages literature reviews, semi-structured interviews, and archival research to develop a framework that describes ideal types of games based on the type of information they generate. This framework offers a link between existing treatments of philosophy of science and the types of tradeoffs that a designer is likely to make under each type of game. While such an approach only constitutes necessary, but not sufficient, conditions for games to inform research and policy analysis, this work aims to offer pragmatic advice to designers, sponsors and consumers about how design choices can impact what is learned from a game.

Table of Contents

  • Chapter One
    • Introduction: Games for National Security Policy Analysis and How to Improve Them
  • Chapter Two
    • Study Approach
  • Chapter Three
    • Towards a Social Science of Policy Games
  • Chapter Four
    • Four Archetypes of Games to Support National Security Policy Analysis
  • Chapter Five
    • Designing Games for System Exploration
  • Chapter Six
    • Designing Games for Alternative Conditions
  • Chapter Seven
    • Designing Games for Innovation
  • Chapter Eight
    • Designing Games for Evaluation
  • Chapter Nine
    • Trends in RAND Corporation National Security Policy Analysis Gaming: 1948 to 2019
  • Chapter Ten
    • Conclusions, Policy Recommendations, and Next Steps
  • Appendix A: Sample Template for Documenting Game Designs

“Flattening the Curve” matrix game report


Tim Price has been kind enough to pass on this report from a recent play of the Flattening the Curve matrix game.


 

Last night I managed to get 11 volunteers together to play a distributed version of the Flattening the Curve matrix game over Zoom. It was an interesting and frustrating experience, but I thought it might be worthwhile sharing it with you.

Technology

We used Zoom for the video chat. We felt it was very important to be able to speak to and see each other, and Zoom has a simple and intuitive mosaic screen setup that is particularly useful for the Facilitator. The border of the image is highlighted to show the current speaker, interrupters are shown with a highlighted line under them, and their names appear under their faces (really very useful indeed). Of particular interest for running a Matrix Game, it is possible to send private messages to named individuals using the chat function in the application. It was also stable for the three hours we played.

We used Google Slides for the game map (see here), with the map itself as the background image and a number of counters imported as images onto the map (and left outside the slide boundary), so everyone could see and collaboratively move the counters if necessary. It is useful to duplicate the last slide for every turn, so you have a record of the map after each turn; that also allows a run-through at the end as an After Action Review.

Finally, we used Mentimeter to be able to carry out the “Estimative Probability” method of adjudication.

FTC1.png

When using Estimative Probability, players or teams are asked to assess the chances of success of an argument, and these assessments are aggregated to reveal the “crowd-sourced” chance of success. In analytical games, this provides potentially valuable insight into how participants rate the chances of a particular course of action. Following discussion, players select the option on the Mentimeter slide which, in their view, best represents the probability of the argument’s success. The results are displayed immediately to the Facilitator, but not to the players, so the voting is hidden. It is generally felt that this is a more accurate way to leverage the work on crowd-sourcing, as well as making the resulting probability more accessible and acceptable to the participants. The terms on the slide also reflected those commonly used in the intelligence community.
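The aggregation described above might be sketched roughly as follows. Note that the probability phrases and midpoint values here are illustrative assumptions (loosely modelled on intelligence-community estimative language), not the exact scale or aggregation rule used in the game:

```python
# Hypothetical sketch of "Estimative Probability" adjudication: each player
# secretly picks a probability phrase, and the facilitator aggregates the
# midpoints of those bands into a crowd-sourced chance of success.

# Assumed midpoints (in percent) for common estimative-language bands.
BAND_MIDPOINTS = {
    "remote": 5,
    "highly unlikely": 15,
    "unlikely": 30,
    "roughly even chance": 50,
    "likely": 70,
    "highly likely": 85,
    "almost certain": 95,
}

def crowd_sourced_probability(votes):
    """Average the band midpoints of all players' hidden votes."""
    midpoints = [BAND_MIDPOINTS[v] for v in votes]
    return sum(midpoints) / len(midpoints)

votes = ["likely", "highly likely", "roughly even chance", "likely"]
print(crowd_sourced_probability(votes))  # 68.75
```

The facilitator could then compare this crowd-sourced figure against a die roll, or simply use it to decide whether the argument succeeds.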

The advantage of Mentimeter over other poll and voting systems is that it is free, feedback is instant, and you can use a single slide for all the Matrix Arguments, because you can reset the results each time. Of course, if you want to keep a record of the results, you will have to buy the upgraded version, or save a screenshot each turn (which is a pain).

Running the Game

As is normally the case with video conferences, we had the usual difficulties getting everyone onto Zoom with sensible names displayed instead of “Owner’s iPad”, so the start was a little delayed. I had put out a Loom video with a short introduction to Matrix Games, but inevitably a few of the players hadn’t been able to view it, so we were further delayed as I had to explain how the game would play.

As the game went on, I modified the map (based on some helpful collaboration with TNO in the Netherlands), to make it easier to follow. The revised map is here:

FTC2.png

The game played perfectly well, but at a slower pace than if it had been face to face, and it was certainly more tiring for me as the Facilitator. The inter-turn negotiation between team members and other teams was carried out using WhatsApp and WhatsApp Web, so it remained private from the other players.

Results

We were time-limited and were only able to have 11 participants in the end—but it was mainly a trial to see if running a Matrix Game remotely is at all possible. We got a few insights from the game, one of which I will share: as we all switch to working from home full-time, we end up downloading all sorts of software and applications that we would never normally have dealt with. This increases the threat surface for cyber-attacks by an order of magnitude, so correct digital hygiene is going to be as important as washing your hands.

Post-Game Predictions

Following the game, we quickly did a couple of polls, hopefully better informed by the experience of the game:

  • Each participant was asked to give me their MOST IMPORTANT thing that would happen over the next month (please note the definition of “thing” was left deliberately vague so the players could decide for themselves what it meant).
  • They were then asked to vote on which of these was the MOST LIKELY thing to happen.

FTC3.png

  • Next, each participant was asked to give me their MOST IMPORTANT long-term consequence of Coronavirus.
  • They were then asked to vote on which of these was the MOST LIKELY thing to happen.

FTC4.png

Conclusion

It is possible to run a Matrix Game remotely, but it is very tiring for the Facilitator and takes much longer than you think it will.

The right choice of technology can make a real difference—so mandated standards and corporate choices may well have an impact on the experience. This means that practicing, as I was, while waiting for the corporate roll-out of a platform of choice can be especially frustrating, when I am unable to do something that I know a free app on the internet would let me do. But downloading all those free apps and trying them out could be dangerous, because the bad guys are definitely out to get you…


For more resources on the pandemic, see our COVID-19 serious gaming resources page.

Virtual paradox: how digital war has reinvigorated analogue wargaming


The soon-to-be-launched journal Digital War has published an (online first) article by yours truly on the utility of analogue wargaming in examining the challenges of warfare in the digital age.

War has become increasingly digital, manifest in the development and deployment of new capabilities in cyber, uncrewed and remote systems, automation, robotics, sensors, communications, data collection and processing, and artificial intelligence. The wargames used to explore such technologies, however, have seen a renaissance of manual and analogue techniques. This article explores this apparent paradox, suggesting that analogue methods have often proven to be more flexible, creative, and responsive than their digital counterparts in addressing emerging modes of warfare.

Warfare has become increasingly digital. Militaries around the world are developing, deploying, and employing new capabilities in cyber, uncrewed and remote systems, automation, robotics, sensors, communications, data collection and processing, and even artificial intelligence. The wargames used by governments to explore such technologies, however, have seen a renaissance of manual and analogue techniques. What explains this apparent paradox?

This article will explore three reasons why analogue gaming techniques have proven useful for exploring digital war: timeliness, transparency, and creativity. It will then examine how the field of professional wargaming might develop in the years ahead. To contextualize all of that, however, it is useful to discuss wargaming itself: how and why do militaries use games to understand the deadly business of warfare?

You can read the full thing at the link above. For more on the journal, see the Digital War website.

Megagaming emergency response


ATLANTIC RIM

As readers of PAXsims will know, over the past few years we have run several full-day emergency response megagames in Montreal and Ottawa: APOCALYPSE NORTH (simulating a zombie pandemic threat to Quebec and Ontario from south of the border) and ATLANTIC RIM (giant creatures attack Atlantic Canada).

None of these games was meant to be serious, of course—as the after action reports above make clear, we play them for fun. However, the underlying game models can certainly be modified for more serious purposes.

If you would like a copy of my ATLANTIC RIM Design Notes to inspire you in your own megagame design, I’m happy to send them to you in exchange for a donation of any amount to the World Health Organization COVID-19 Solidarity Response Fund. Just make a donation, then email me with the receipt to receive the design notes (pdf). I’m happy to provide tips on adapting the game approach for your needs too.


Please note that the Design Notes were not written for an external audience; rather, this was our internal reference document. As a result, they do not include all the game mechanics or game materials (such as the maps, science quests, or hospital displays) that you would require to run a game. They probably still contain a few typos too! Still, at 51 pages long, there is quite a bit there to inspire you.

For other inspiration, check out Jim Wallman’s games at Stone Paper Scissors. The APOCALYPSE NORTH series was a modification of his original URBAN NIGHTMARE megagame, which he has since updated. His GREEN AND PLEASANT LAND national resilience megagame (which he ran at Connections UK 2018) is also very relevant.

Finally, see our ever-growing PAXsims COVID-19 serious gaming resources page.

Atlantic Rim

1.0 GENERAL INTRODUCTION
1.1 Scenario
1.2 Key Game Components and Concepts
1.3 Key Roles and Challenges
2.0 GAME SEQUENCE
2.1 Schedule
2.2 Sequence
3.0 KAIJU
4.0 MOVEMENT, RESILIENCE, AND SPECIAL ACTIONS
4.1 Impediments
4.2 Aircraft
4.3 Transporting Units
4.4 Submarines
5.0 REPORTS, SEARCH, AND DETECTION
5.1 Rumours
6.0 INCIDENTS
6.1 Damage
6.2 Resolving Incidents
6.3 Fires
7.0 COMBAT
7.1 Collateral Damage
8.0 CASUALTIES AND MEDICAL TREATMENT
8.1 Transporting casualties
8.2 Treating casualties
8.3 Autopsies
9.0 CORPORATION(S)
9.1 The Irving Group
9.2 Maritime Commerce
9.3 Oil Platforms
9.4 Stock Market
10.0 UTILITIES AND ELECTRICAL DISTRIBUTION
10.1 Electrical Generation and Distribution
10.2 Electrical Generation Facilities
10.3 Regional Electrical Demand
11.0 DIPLOMACY
11.1 Territorial Waters and Exclusive Economic Zone
12.0 SCIENCE
12.1 Science Teams
12.2 Scientific Samples
13.0 MOBILIZATION AND REINFORCEMENTS
13.1 Deploying to the Crisis Zone
13.2 SAR and Training Units
13.3 Foreign Forces
14.0 PANIC
APPENDIX A: KAIJU
APPENDIX B: UNITS
APPENDIX C: SCENARIO SET-UP

Gaming the pandemic: Do No Harm


We at PAXsims believe that serious games are a very useful tool in the analytical or educational toolbox—if we didn’t, we wouldn’t put so much effort into this website and all of our other game-related activities. However, I often find myself warning about the limits of games too. They aren’t magic bullets. In some cases, moreover, they’re not even especially useful tools.

I have been thinking about this quite a bit in relation to the current COVID-19 pandemic. PAXsims has tried to be helpful by making a number of gaming resources available. Others have done the same, notably the King’s Wargaming Network, which is offering to support appropriate gaming initiatives.

As we collectively grapple with the unfolding global crisis, however, I thought it prudent to also highlight some of the risks of serious pandemic gaming. As I will argue below, while serious games have a great deal of utility, they can also be counterproductive. We thus all have a moral responsibility to make sure (as they say in the humanitarian aid community) that we DO NO HARM with our work.

First of all, there’s the modelling problem. We have to be very humble in assessing our ability to examine some issues when so little is known about key dynamics. Related to this is the “garbage in, garbage out” problem. Our data is often weak. The excellent epidemiological projections published by the Imperial College COVID-19 Response Team have been very useful in spurring states to action, but in the interests of avoiding confirmation bias we also need to recognize that some epidemiologists are raising concerns about the adequacy of the data used in such models. We need to make the robustness of our game assumptions clear to clients and partners. Be humble, avoid hubris, make assumptions and models explicit, caveat findings, and don’t over-sell.

Second, playing games with subject matter experts (SMEs) can pull them away from doing other, more important things. I’ve done a lot of work on interagency coordination, where there is a similar problem: coordination meetings are great, but when you add up the time that goes into them they can actually weaken capacity if you aren’t careful. Of course, you can run games with non-SMEs, but then the GIGO problem is exacerbated.

Any gaming generally needs to be client-driven. Do the end-users of the game actually find it worthwhile? What questions do they want answered? This isn’t a universal rule—it may be that gaming alerts them to something that they hadn’t considered. But do keep in mind the demands on their time, institutional resources, and analytical capacity.

We also have to recognize that the much-maligned BOGSAT (“bunch of guys/gals sitting around a table”) is sometimes preferable to a game, when the former is run well. For a game to be worth designing and running it has to be demonstrably superior to other methods, and worth the time and effort put into it. There is a reason, after all, why the CIA’s Tradecraft Primer: Structured Analytic Techniques for Improving Intelligence Analysis warns that gaming techniques “usually require substantial commitments of analyst time and corporate resources.”

We need to debrief and analyze games carefully. The DIRE STRAITS experiment at Connections UK (2017) highlighted that the analytical conclusions from games are often far from self-evident, and that different people can walk away from the same game with very different conclusions.

Messaging for these games matters. The public is on edge. Some are dangerously complacent. Some are on the verge of panic. One wrong word, and suddenly there’s no toilet paper in the shops. If you don’t consider communication issues, reports from a game could feed either a “don’t worry it’s not that bad” view or a “my god we’re all going to die” response in the media and general public.

We also have to beware of clients with agendas, of course [insert everything Stephen Downes-Martin has ever written here.]

We need to be careful of both uncritical game evangelism and rent-seeking—that is, the “it would be cool to do a game” or “games solve everything” over-enthusiasm, and the “here’s a pot of money, let’s apply for it” opportunism.

In short, in a time of international crisis, we need to do this well if we do it. In my view it generally needs to respond to an identified need by those currently dealing with the crisis—or, if it doesn’t, there needs to be a good reason for that. They’re busy folks at the moment, after all.

UPDATE: I did a short presentation on this for the recent King’s Wargaming Network online symposium. My slides can be found here: DoNoHarm.


For more on gaming the pandemic, see our COVID-19 serious gaming resources page.

Using games to explore potential conflicts between emotional reactions and analytical decision making

The following piece was written for PAXsims by Patrick Dresch. Patrick is based in Salisbury (UK), and is interested in the application of board games as training tools for emergency and disaster response. In 2019 he completed an MSc in crisis and disaster management at the University of Portsmouth, supported by a dissertation investigating the potential for cooperative board games to be used to train emergency responders in interoperability. He has also had the opportunity to test the integration of game mechanisms with table top and live simulation exercises by designing and delivering exercises as a volunteer with the humanitarian response charity Serve On.


I am a great believer in the potential for board games to be used as tools to supplement training and exercising for those working in emergency response and disaster relief. My interest in this field has mostly focused on using cooperative board games to practice interpersonal skills which can improve interoperability, including the potential to improve coordination and joint decision making. More recently, however, I have also been considering how this platform could be used to prompt emotional reactions which may be at odds with what might be called a rational solution.

In an abstract game it is often easy to focus on the game as a puzzle which needs to be solved. A player may have a personal aesthetic preference for the red tiles in Azul (2017), for instance, but this is unlikely to determine their strategy when playing the game. Other popular games use art and aesthetics to reinforce the theme of the game, and provide narrative structure to what could otherwise be an abstract puzzle. One example of this is the choice of illustrations on the adventure cards for The Lost Expedition (Osprey Games, 2017) (Figure 1).

TheLostExpedition_Cards.jpg

Figure 1: Examples of adventure cards from The Lost Expedition.

Here we can see that the illustration choices not only reinforce the jungle survival theme, but also help players construct a narrative framework by showing dilemmas which work with the symbols and triggers. It should be recognised that The Lost Expedition was developed not as a serious game for training purposes, but as a popular game for general entertainment. Other popular games also use storytelling and aesthetic choices to challenge players with moral choices, be it through the crossroads cards in the Dead of Winter (Plaid Hat Games, 2014) games, or asking players how far they would go to survive in This War of Mine (Awaken Realms, 2017), which is based on the Siege of Sarajevo. Other games are less explicit in this aspect of their design choices, but may still choose to humanise what could otherwise be nondescript pawns to add extra weight to the implications of decisions. Days of Ire: Budapest 1956 (2016), for example, which is based on the Hungarian revolution of the same year, includes historic names on each of the revolutionary markers, as well as historic background on the event cards in the manual. These elements add another layer of depth to a game which could otherwise simply be a strategic puzzle, and encourage players to consider what the human cost of their decisions would be.

In addition to using moral dilemmas as a way to encourage players to buy into the universe of the game, designers also make aesthetic choices to prompt emotional reactions. These may range from cute and cuddly imagery intended to make players smile and laugh to quite the opposite. This is certainly the case in Raxxon (2017), which is set in the Dead of Winter universe during the early stages of a zombie outbreak, requiring players to manage a quarantine and separate the healthy population from the infected. Here, players are presented with cards which not only depict ravenous zombies, but also healthy individuals and various other groups such as the uncooperative but healthy, violent individuals, and carriers who could spread the infection. Each of these groups presents players with different issues to consider when managing a crowd formed of a mixed population, with the game employing push-your-luck and role specialisation mechanisms. Moreover, the illustration choices used on the cards can prompt a player to revel in calling in an airstrike to remove zombies from the crowd, or give them a moment’s pause when dealing with carriers who look like they may just have a bad cold. The design choice to use black and white images which focus on the characters’ facial expressions against a coloured background (Figure 2) starkly portrays individuals at a moment of personal crisis as they wait to find out whether they will be taken to safety or left with the zombies. In doing so, this choice puts players in the role of a frontline responder who must deal directly with the public, once again adding a layer of depth to a problem-solving puzzle.

RAXXON_Cards.jpg

Figure 2: Examples of Raxxon crowd cards.

This is all very well for popular games focusing on entertainment, but is there also an opportunity for serious games to use similar design choices to create discussion points and teachable moments? Arguably, the more limited market for serious games means there may not be as much financial incentive to develop their aesthetics in the way that commercial entertainment games do. Many serious games also focus on systems where emotional considerations do not have to be included in training, and a print-and-play approach is aesthetically acceptable. Some sectors, however, may find that including an emotional element is of great benefit to frontline staff who have to deal with the public. In the disaster response sector, Thomas Fisher has commented that no matter how well players do in AFTERSHOCK: A Humanitarian Crisis Game (2015), thousands of people will die in the game. This provides an opportunity for those who are new to the sector to reflect on their own feelings about this simulated loss of life and consider whether a career doing this sort of work is really for them. It is also worth noting that Fisher has made the point that, when considering design choices for AFTERSHOCK, a conscious decision was made to avoid gratuitous images. Nonetheless, there are some similarities between the illustrations used for Raxxon and some of the images used in the “at risk” deck for AFTERSHOCK (Figure 3). Unsurprisingly perhaps, the image of children in distress could be considered an effective shorthand for provoking emotional turmoil among players.

AftershockCards.jpg

Figure 3: Examples of images used in the AFTERSHOCK “at risk” deck.

 

If we agree that the design and story choices used in games can provoke emotional reactions and moral dilemmas, how can we develop these ideas as effective teaching tools? One possibility would be to use emotional triggers in games to help players become more aware of their own decision-making processes. With practice, this could also help them become more confident in their intuitive decision-making when there is limited time or opportunity for planning and analytical decision-making. In a game this might be done by using art and story to prompt an emotional or moral reaction which, if acted upon, would be considered irrational play in an athematic puzzle or even an abstract game. This might mean putting triggers on cards which are comparatively high-risk and low-reward in a game, and observing whether they are acted on more frequently than low-risk, high-reward cards which have neutral imagery. As always, one should consider the learning objectives one is working towards when designing a game, and how different mechanisms can be used to foster different behaviours. The approach described here may be useful for addressing humanitarian principles, for instance: one could discuss the choice of helping an individual in obvious distress while ignoring a card with a higher value which could represent faceless masses. Furthermore, emotional triggers need not be limited to images of crying children but could instead be more subtle and nuanced. An example might be addressing the humanitarian principle of impartiality by depicting a diverse population and seeing whether players’ choices express personal bias.
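This kind of trigger-versus-neutral comparison could even be prototyped computationally before committing to physical components. The following Python sketch is purely illustrative: the card names, payoffs, and the “empathy” parameter are invented here for the sake of the example and are not drawn from any published game or study.

```python
import random

# Hypothetical card pool: names, risk/reward values, and trigger flags
# are invented for illustration only.
CARDS = [
    {"name": "Crying child",  "risk": 0.7, "reward": 1, "trigger": True},
    {"name": "Supply cache",  "risk": 0.2, "reward": 3, "trigger": False},
    {"name": "Injured elder", "risk": 0.6, "reward": 1, "trigger": True},
    {"name": "Neutral convoy","risk": 0.3, "reward": 3, "trigger": False},
]

def choose(cards, empathy):
    """Pick a card: with probability `empathy` the player acts on an
    emotional trigger; otherwise they maximise expected value."""
    if random.random() < empathy:
        return random.choice([c for c in cards if c["trigger"]])
    return max(cards, key=lambda c: (1 - c["risk"]) * c["reward"])

def trigger_rate(empathy, trials=10_000, seed=0):
    """Fraction of choices landing on trigger cards over many trials."""
    random.seed(seed)
    picks = [choose(CARDS, empathy) for _ in range(trials)]
    return sum(c["trigger"] for c in picks) / trials

# A purely expected-value player never picks the high-risk, low-reward
# trigger cards; the gap between that baseline and observed play is the
# measurable signal described in the text.
print(trigger_rate(0.0))  # → 0.0 (rational baseline)
print(trigger_rate(0.5))  # roughly half of choices hit trigger cards
```

In a live playtest one would of course tally real players’ card choices rather than simulate them; the point of the sketch is simply that the hypothesis (“trigger cards are over-selected relative to their expected value”) reduces to a straightforward frequency comparison.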

In conclusion, I think the use of design choices and story should be carefully considered as a game-based learning tool. Aesthetics are not only a way of making a product appealing to potential buyers; careful choices also have the potential to produce effective learning outcomes. I certainly hope that this will prompt further discussion and study to establish whether these ideas can be developed further. Many of them are already put into practice in live simulation disaster response exercises, for instance by using actors, moulage and prosthetics to present responders with distressed casualties who may not be cooperative. I also think that incorporating story and push-your-luck elements into exercises could benefit them, for instance by providing a threat to team safety which may influence deployment decisions. The social, face-to-face nature of board games also makes them an ideal platform in which to practise skills with a social element in a simulated, dynamic and developing situation, at relatively low cost and with potentially high engagement among participants.

Patrick Dresch

David and DeRosa: Wargaming Contested Narratives in an Age of Bewilderment

SB160120.png

At The Strategy Bridge, Arnel P. David and John DeRosa discuss “Wargaming Contested Narratives in an Age of Bewilderment.”

The Contested Narratives Wargame builds on the assertions from Peter Perla and Ed McGrady that wargames “embod[y] two types of narrative: the presented narrative, which is what we call the written or given narrative, created by the game’s designers; and the constructed narrative, which is developed through the actions, statements, and decisions of the game’s participants.”[1] Over the course of the game, select participants shared presented narratives (pre-scripted stories) to amplify or dampen adversary and friendly narratives. Participants then moved between tables developing constructed narratives (revised scripts) amidst the various contested narratives. Using the World Café method, a professionally and nationally diverse group of participants took turns sharing stories of national resilience against malign influence wherein the pre-scripted presented narratives contest for resonance.

The World Café is an exploratory method, designed by Juanita Brown and David Isaacs, that elicits communication patterns.[2] Set in a café-like environment with multiple tables, participants are invited to sit in small groups with participants from other nations. A facilitator initiates the conversation with a narrative prompt to the entire room—“share a story about national resilience,” for example. Then the participants engage in multiple rounds of storytelling. Paper tablecloths and colored pens allow participants to scribble and take notes creating artifacts for later review. As participants move around the room, narratives begin to circulate. Contestation emerges as designated players introduce stories scripted prior to the wargame from an adversary’s perspective. At the end of several rounds, Dr. John DeRosa—game designer, lead facilitator, and one of the authors—led discussions with the participants to find the l’entre deux, the between place, of presented and constructed narratives circulating within the room. In this sense, the process seeks to reveal if elements of the pre-scripted narratives (like those representing the adversary) appear in the revised scripts developed within the wargame.

Two key insights emerged. First, stories coupled with symbols construct powerfully resonant narratives. Second, unlike the linear action-counteraction-reaction model of traditional wargames, methods like the World Café can effectively mimic the complexity of the human dimension.

More at the link above.

h/t Mark Jones Jr.

Fielder: Reflections on teaching wargame design

cropped-cropped-wotrweblogo-nobg.png

At War on the Rocks today, James “Pigeon” Fielder discusses how to teach wargame design, drawing on his experience at the U.S. Air Force Academy.

I founded my course on three pillars: defining wargames, objective-based design, and learning outcomes over winning. First, I took a blend of James Dunnigan, John Curry and Peter Perla, Phil Sabin, and my own caffeinated madness to define wargaming as “a synthetic decision making test under conditions of uncertainty against thinking opponents, which generates insights but not proven outcomes, engages multiple learning types, and builds team cohesion in a risk-free environment.” Second, I enshrined the primacy of the objective. Put bluntly, without objectives you don’t have a professional game. Although we briefly discussed creating sandbox environments for generating ideas in the absence of objectives, sandbox design at best strays into teaching group facilitation (albeit game refereeing itself is a form of facilitation), and at worst enshrining poorly structured and long-winded BOGSATs as legitimate analysis tools. Finally, neither the U.S. Strategic Command wargame nor the National Reconnaissance wargame included absolute and predetermined winners. Both U.S. Strategic Command and the National Reconnaissance Office faced unmitigated disaster every time they bellied up to the table. The best learning comes from understanding failure, correcting mistakes, and revising strategies, not from sponsors patting themselves on the back. Summoning Millennium Challenge 2002’s chained and howling ghost, gaming with the sole intent to win, prove, and prop up ideas is an exercise in false future bargaining with real lives and materiel.

He cleverly had his cadets design games for real sponsors:

I divided the class into two eight-cadet teams respectively for U.S. Strategic Command and the National Reconnaissance Office. The sponsors and I initiated dialogue, but from that point the games were entirely cadet driven. The teams interviewed the sponsors for objectives, determined how to measure the objectives, prototyped and play-tested their games, and ultimately delivered effective tools for addressing sponsor requirements. Meaning, of course, the games generated more questions than answers: better to ask the questions at the table before bargaining with a real opponent or launching a new military service.

There’s a lot more besides that, including a discussion of the wargame design literature, as well as material on psychological roots and sociological narratives of gaming. James also discusses the importance of learning-through-play.

Go read the entire piece at the link at the top of the page.

RAND: Gaming the gray zone

Greyzonereport.png

RAND has released a new report by Stacie L. Pettyjohn and Becca Wasser on Competing in the Gray Zone: Russian Tactics and Western Responses. This addresses two major sets of research questions: first, “How are gray zone activities defined? What are different types of gray zone tactics?” and second “Where are vulnerabilities to gray zone tactics in Europe? What are those vulnerabilities?”

Recent events in Crimea and the Donbass in eastern Ukraine have upended relations between Russia and the West, specifically the North Atlantic Treaty Organization (NATO) and the European Union (EU). Although Russia’s actions in Ukraine were, for the most part, acts of outright aggression, Russia has been aiming to destabilize both its “near abroad” — the former Soviet states except for the Baltics — and wider Europe through the use of ambiguous “gray zone” tactics. These tactics include everything from propaganda and disinformation to election interference and the incitement of violence.

To better understand where there are vulnerabilities to Russian gray zone tactics in Europe and how to effectively counter them, the RAND Corporation ran a series of war games. These games comprised a Russian (Red) team, which was tasked with expanding its influence and undermining NATO unity, competing against a European (Green) team and a U.S. (Blue) team, which were aiming to defend their allies from Red’s gray zone activities without provoking an outright war. In these games, the authors of this report observed patterns of behavior from the three teams that are broadly consistent with what has been observed in the real world. This report presents key insights from these games and from the research effort that informed them.

greyzonegame.png

While the study is interesting enough as it is, RAND has also released a second 45 page monograph by Becca Wasser, Jenny Oberholtzer, Stacie L. Pettyjohn, and William Mackenzie that outlines the gaming methodology adopted: Gaming Gray Zone Tactics: Design Considerations for a Structured Strategic Game.

Research Questions

  1. Can a game model gray zone competition in an empirically grounded yet playable way?
  2. What is the game design process for developing a structured strategic game for a complex political-military issue that simultaneously operates in two different time horizons?
  3. How can structured strategic gaming help researchers gain an understanding of adversary gray zone tactics and tools?

To explore how Russia could use gray zone tactics and to what effect, the authors of this report developed a strategic-level structured card game examining a gray zone competition between Russia and the West in the Balkans. In these games, the Russian player seeks to expand its influence and undermine NATO unity while competing against a European team and a U.S. team seeking to defend their allies from Russia’s gray zone activities without provoking an outright war. This report details the authors’ development of this game, including key design decisions, elements of the game, how the game is played, and the undergirding research approach. The authors conclude with recommendations for future applications of the game design.

Key Findings

The Balkans gray zone game demonstrated that structured strategy games are useful exploratory tools and this model could be adapted for other contexts and adversaries.

  • While the gray zone remains a murky topic, this game demonstrated that it was feasible to break the gray zone down into concrete parts, to conduct research on each of these parts, and to link these components to create a playable strategic game that yielded useful insights.
  • The scoped and structured approach to this game allowed for enough structure to keep discussions on track and provided links between inputs and outputs while still allowing for creativity, flexibility, and transparency.
  • This gray zone game can be adapted to focus on different regions or adversaries, could include additional allies, or could be made into a three-way competition.

The RAND team started with a series of matrix games to scope out the problem, then progressed to a semi-structured game. Finally, they moved on to creating a structured, three-sided (US, Europe, Russia) gray zone board game focused on the Balkans.

Greyzonebosnia.png

Countries were tracked for governance quality and diplomatic-political orientation, as well as economic dependence (on Russia) and media freedom.

Greyzoneoptions.png

Players acted through a deck of action cards, each specific to the actor(s) they represented. Potential Russian (RED) actions are shown above, and sample cards below.
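As a rough illustration of the bookkeeping such a structured game implies, one could represent each country’s four tracks and apply card effects programmatically. This is a sketch only: the track names follow the report’s description, but the numeric scale, the card name, and the effect sizes are invented, and the report does not describe RAND’s actual implementation.

```python
from dataclasses import dataclass, field

# Track names paraphrase the report; the -5..+5 scale is an assumption.
TRACKS = ("governance", "orientation", "econ_dependence", "media_freedom")

@dataclass
class Country:
    name: str
    tracks: dict = field(default_factory=lambda: {t: 0 for t in TRACKS})

    def apply(self, effects):
        """Apply a card's track adjustments, clamped to the -5..+5 scale."""
        for track, delta in effects.items():
            self.tracks[track] = max(-5, min(5, self.tracks[track] + delta))

# A hypothetical RED action card: erodes media freedom and nudges the
# target's diplomatic-political orientation toward Russia.
disinfo_campaign = {"media_freedom": -1, "orientation": -1}

state = Country("Hypothetica")
state.apply(disinfo_campaign)
print(state.tracks["media_freedom"])  # → -1
```

Separating the country state from the card effects in this way is what lets a structured game link inputs to outputs transparently: every change on a track can be traced back to a specific card play.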

greyzonecards.png

The report discusses the game design approaches taken, assesses their utility, and concludes with some suggestions as to future modifications.

All in all, it is a rare and outstanding example of serious game designers fully documenting their game design approach and research methods so as to inform future work on the issue. Kudos to all!


Please take a minute to complete our PAXsims reader survey.

Historical research and wargaming (Part 2): Applying the framework to the Third Battle of Gaza (1917)

The following piece has been contributed to PAXsims by James Halstead.


 

Part Two: Applying the Framework

In Part 2, the framework introduced in Part 1 will be used to study debates around a historical battle: the 1917 Third Battle of Gaza. The ‘Gaza School’ counterfactual has been a recurring element of the battle’s historiography since its inception in the immediate aftermath of the battle, and was brought to greater prominence in the 1930s with Clive Garsia’s book A Key To Victory, which continues to be an influential source for studies on Palestine. The Gaza School therefore remains an intriguing counterfactual possibility amidst continuing debate within the historiography.

The Third Battle of Gaza

The ‘Gaza School’ debate revolves around the strategy employed by Edmund Allenby to eject Ottoman forces from their defensive line between the towns of Gaza and Beersheba in southern Palestine through October and November, 1917. Historically Allenby launched attacks on either flank of the Ottoman line between Gaza and Beersheba, drawing Ottoman reserves to both flanks before breaking through the weakly held centre. The inland flank was attacked first with the Desert Mounted Corps (DMC) and XX Corps outflanking, surrounding and capturing Beersheba. Meanwhile XXI Corps diverted Ottoman reserves with a holding attack on Gaza while the formations in Beersheba built up water stockpiles then broke through the Ottoman centre, forcing a full-scale Ottoman retreat.

Halstead1.png

Garsia champions the ‘Gaza School’ counterfactual in his book, A Key To Victory, which posits Allenby should have eschewed the attack on Beersheba and focussed all resources upon breaking through at Gaza and then exploiting with cavalry, rather than outflanking the Ottoman line on the more logistically precarious inland flank.[1] This article will use the wargaming research framework laid out in the first part to explore the feasibility of Garsia’s alternative plan. Indeed, the suggestion to use a wargame to model this came as early as the 1930s, in Cyril Falls’ Official History.[2]

Geography

A study of the terrain reveals the difficulty of attacking Gaza, with several hills, traditional fieldworks and thick cactus hedges all presenting significant obstacles that made the town difficult to take.[3] Two attacks at the beginning of 1917 had already failed, while XXI Corps’ holding attack during Third Gaza did poorly, failing to achieve the modest objectives set.[4] While Garsia argues Gaza could have been masked by XXI Corps while the DMC broke through along the beach, even this argument is difficult to sustain. High sand dunes near the coast made the ground unsuitable for wheeled vehicles and would have made the movement of three cavalry divisions burdensome.[5] Force-to-space ratios are also often forgotten, and a study of a map reveals the beach route offered a frontage less than a mile wide, through which three cavalry divisions would have to ride. This would necessitate a limited, single-brigade front to overcome the Ottoman positions codenamed Lion, Tiger and Dog, and then further redeployments and fighting across the Wadi Hesi before the cavalry could cut Gaza’s supply, while a long, spread-out column of cavalry might prove vulnerable to artillery fire and to Ottoman counterattacks regaining the beach defences.

Halstead2.png

While adherents of the Gaza School maintain the coast road would have made movement and supply much easier, there are a number of factors which discount this. The road runs directly through Gaza, where the most heavily held part of the entire Ottoman line was located, so use of the road would have necessitated decisively shattering the strongest part of the Ottoman defences before pushing three cavalry divisions across heavily fortified ground, through a major urban area, across the heavily held Wadi Hesi, and all along a single-track road.

Exploitation along the coast would also have been harder than supposed, with XXI Corps’ advance following Gaza’s evacuation requiring tractors to move supplies along the coast even with road access.[6] Heavier sand also exhausted the cavalry’s horses and bogged down wheeled transport, making rapid movement difficult.[7] There were therefore significant obstacles to cavalry exploitation, as a serious study of the terrain demonstrates.

Order of Battle and Generic Capabilities of Formations

Study of the order of battle reveals several insights. Firstly, while the strength of Ottoman formations was highly variable, and the specifics of the numbers employed remain unknown, they appear to have concentrated their best divisions on the coast behind Gaza. The historical attack on Beersheba pulled these troops away from the coast to reinforce the inland flank, although even then there were still sufficient reserves to reinforce Gaza against the holding attack by XXI Corps. To focus the offensive on Gaza would, very likely, have allowed Ottoman forces to concentrate even more on holding Gaza and the terrain behind it, rather than being split between two axes of advance as they were historically.

Third Gaza also provides an example of how order of battle research can reveal sources ignored by military historians, in this case the 15th Imperial Service Cavalry (ISC) Brigade. Cyril Falls omits the brigade from the Official History’s order of battle, a mistake which later historians have copied. While the absence of a lone brigade may not seem especially significant, 15 ISC matters because the brigade’s performance during the battle provides direct evidence of how effectively larger bodies of cavalry would have operated on the coastal flank.[8] Garsia argues it would have been sufficient to simply mask Gaza with XXI Corps and then slip the cavalry along the beach to cut Ottoman communications.[9] 15 ISC’s war diary, however, makes it clear that opening up a gap, and breaking through, would be no simple matter. The brigade did in fact form up behind the XXI Corps infantry assault but was unable to exploit through as Ottoman counterattacks recaptured the beach defences.[10] Additionally, as discovered in the survey of the terrain, the heavier sand on the coast would have exhausted the cavalry. Cavalry tactics also relied heavily upon infantry, artillery and air support. Any unsupported cavalry penetration behind Gaza would have struggled against renewed Ottoman defences and counterattacks, as shown by EEF cavalry actions at Huj on November 8, at Beit Hanun, and in the (attempted) crossing of the Nahr el Auja.[11] In all of these cases unsupported cavalry on the advance struggled to overcome what were often weakly held defensive positions, which indicates that the cavalry might not have been able to achieve their objectives even had they broken through.

The EEF’s Decision Making Environment

While the creation of the physical model demonstrates the difficulties with the Gaza School approach, analysis of the decision-making environment in the EEF in autumn 1917 further supports the case that the Gaza School approach simply did not align with EEF strategic priorities. Philip Chetwode wrote in October: ‘it is desired to get the enemy on the move from his strongly entrenched positions with as few casualties as possible, relying on our preponderance in cavalry to do the execution.’[12] It is also worth bearing in mind the directive given to Allenby before the battle to capture Jerusalem and ‘occupy the Jaffa-Jerusalem line’ as cheaply as possible.[13] Preponderance in cavalry, and the advantage this gave, was a clear motivation for seeking the open, inland flank. While the EEF had three cavalry divisions and three independent cavalry brigades, the Ottoman cavalry consisted of only one division, barely stronger than a British cavalry brigade. Turning a weakly held flank would also likely be much cheaper than a head-on assault against the most strongly held part of the Ottoman line. The more indirect inland route via Beersheba was chosen because it maximised the EEF’s advantage in cavalry while helping to keep casualties as low as possible. XXI Corps’ losses in just their holding attack on Gaza were double those of the assault on Beersheba, and for little tangible gain, with even the single brigade of cavalry present unable to exploit.[14][15]

Allenby’s decision to risk the inland attack on Beersheba therefore had as much to do with wider strategic priorities as with the practicalities of the terrain and force composition.

Integrating wargaming within military historical research, and not just in the context of counterfactuals, offers a number of important tools that military historians continue to underutilise. By creating an analytical model of events that aims to conform to the course of historical events, military historians can analyse individual factors based on under-utilised (but commonly available) evidence, while the work of building an accurate model encourages historians to explore the full range of evidence. If the model does not work, for whatever reason, this simply encourages further research to understand why it fails to conform. Further playtesting and refinement of the model can surface previously unknown or unconsidered factors that prove decisive for the model’s accuracy.[16]

Wargaming military history, therefore, while still a tool in support of a wider analytical goal (and one that should be employed appropriately), fills a number of crucial gaps in the military historian’s toolkit. Designing a wargame encourages rigorous analysis of under-utilised sources and, most importantly, incorporates them into a wider model which must be adapted to fit the historical result. When an initial model does not conform, this simply encourages further exploration of why a rigorously researched model has failed. Much like wargaming mechanics, this creates an important feedback loop, encouraging the researcher to go back and check their sources again: something that the dominant research methodology within history fails to do. Indeed, in traditional military history contradictory and inconvenient sources are often explained away, ignored, or subsumed into wider arguments. Wargaming encourages a more involved research process from the very beginning of a project and, furthermore, relies upon sources that can very often be obtained without endless days in the archive. Meanwhile, testing the design, especially with a third party, can lead to fundamental re-evaluations of either side’s decision space: ‘what constitutes victory for either side, and what are they willing to risk to attain it?’ are just two questions that a gaming approach can prompt. Designing a wargame for a battle at the outset of a project can reshape the priorities of archival research, and when new evidence is discovered it can be reincorporated into the model, improving the pursuit of a historically accurate result.
While military history is increasingly moving to incorporate more qualitative and innovative methodologies, there are still ways that military historians can integrate traditionally social-science approaches, like modelling and wargaming, to the benefit of their research.[17]

[1] Clive Garsia, A Key To Victory: A Study in War Planning (London: Eyre and Spottiswoode, 1940)

[2] Cyril Falls, Military Operations Egypt and Palestine: From June 1917 to the end of the War Part I (London, 1930), p. 32

[3] SHEA 6/2 and JONES, CF, Liddell Hart Centre for Military Archives

[4] Cyril Falls, Military Operations Egypt and Palestine: From June 1917 to the end of the War Part I (London, 1930)

[5] Lieutenant Colonel The Honourable R.M.O. Preston, The Desert Mounted Corps: An Account of the Cavalry Operations in Palestine and Syria 1917-1918 (Boston, 1920), p. 6

[6] Falls, Official History, p. 142; and Marquess of Anglesey, A History of the British Cavalry 1816-1919: Volume 5, Egypt, Palestine and Syria (London, 1994), p. 188

[7] Anon. History of the 15th Imperial Service Cavalry, p. 17

[8] Garsia, Key To Victory, p. 206

[9] Garsia, Key To Victory, p. 206

[10] Anon. History of the 15th Imperial Service Cavalry, p. 16

[11] Falls, Official History, pp. 123, 215; and Anon., History of the 15th Imperial Service Cavalry, p. 16

[12] IWM, P183/1: Chetwode Papers, 1st October Letter: ‘Appreciation of the Situation on the 14th October’

[13] Falls, Official History, p. 67

[14] Wavell, Allenby: Soldier and Statesman p. 178

[15] Edward J. Erickson, Ottoman Army Effectiveness in World War I: A Comparative Study, p. 123

[16] Phil Sabin, The Future of Wargaming to Innovate and Educate, public lecture at King’s College London, 22 November 2019

[17] Jonathan Fennell, Fighting the People’s War (Oxford: Oxford University Press, 2019); Ben Wheatley, ‘A Visual Examination of the Battle of Prokhorovka’, Journal of Intelligence History, Volume 18, 2019


James Halstead is a military historian who is primarily interested in the two world wars of the 20th century. He studied for his Master’s at King’s College London (including Professor Phil Sabin’s Conflict Simulation module) and is currently studying for his PhD on information management in the British and Commonwealth armies at Brunel University, London. James has delivered lectures on the Royal Flying Corps and Air Force in the Palestine campaign at the RAF Museum, Hendon, and will do so again at Wolverhampton in 2020. James can be found on Twitter at @JamesTTHalstead, or you can read his research blog at: youstupidboy.wordpress.com

Historical research and wargaming (Part 1): Constructing the framework

The following piece has been contributed to PAXsims by James Halstead. Part 2 can be found here.


Fire_and_movement.png

Historical research and wargaming (Part 1): Constructing the framework 

Wargaming offers a unique methodological toolset for studying historical conflicts, and while there has been interest in using wargames as an educational tool, there has been little focus on what wargaming can offer analytical military-history research.[1] The first part of this article will outline how the structured, and exhaustive, research necessary to design historical simulations can provide unique insights for historical research. Since wargame design needs to account for player decisions that diverge from history, there is a need to comprehensively research not just the historical record but counterfactuals too. This analysis is carried out within a structured framework which helps the designer to understand both the environment in which the battle is fought and the military makeup and performance of both sides, as well as how best to incentivise historical play.[2]

The research for a wargame therefore requires the creation of a very different and, in some ways, more rigorous and encompassing model than many traditional military histories. While there is a strong element of the counterfactual to wargaming, this still presents ‘a highly useful way of exploring cause and effect.’ Developing a rigorous and thoroughly analytical representational model of a historical conflict can be of huge value in giving greater prominence to underutilised sources and in understanding contemporary opinions and priorities.[3]

Wargame research utilises a framework that studies the geographical environment, the orders of battle of the opposing sides, the generic capabilities of the formations involved, and the opposing decision environments.[4] This first section will study these factors individually, exploring exactly why they are important and what proper examination and integration of each can mean for our understanding of military history.

Geography

Studying the ground over which a battle is fought is vital to any study of that battle. Along with the order of battle, it is one of the most obvious research benefits of wargaming. Properly modelling a battle’s geographic environment can lead to interesting insights. For example, the German Operation Michael offensive of March 1918, against the British Fifth Army and elements of Third Army, is often seen as being so successful (at least initially) because of force-to-space ratios favourable to the Imperial German Army, better tactics, and weak British defences. What is often not considered is the nature of the terrain itself, with the British defences lying on a wide, flat plain with higher ground to the north and south.

Approaching Operation Michael as a wargame reveals that the terrain acted against the British defenders: they were forced to give up so much ground, falling back on river lines such as the Somme, partly because of the dearth of defensible features behind Fifth Army’s front line. In turn, these river lines were often only given up when outflanked, meaning that the British Army simply could not fall back on terrain favourable to a defence across the entire width of its front line. The German assault against the southern portion of Third Army, to the north of Fifth Army, was less successful during Operation Michael and the follow-up, Operation Mars, partly because the British defenders there were fighting on terrain much more favourable to the defence. Because terrain is such an integral part of the wider model, wargames encourage far more engagement than is usual with the characteristics of the ground on which the historical conflict was fought. With most traditional military histories lacking good-quality maps, this can encourage the wider use of easily available sources, with a corresponding increase in the degree to which terrain is considered as a factor in the historical result.

Order of Battle

Alongside the creation of a proper map, researching an order of battle and the generic capabilities of formations are the basic building blocks in the creation of a rigorously analytical model. This matters to the creation of a wargame because, unlike in traditional military history, leaving key formations out or incorrectly modelling their combat capabilities can have important consequences.

Researching an accurate order of battle is often little more than a necessary task that reveals nothing particularly exciting; it is nonetheless an important step in creating a viable model and needs to be properly addressed. Again, like maps, many traditional historical works give the order of battle only the most cursory attention. Although orders of battle rarely provide revelations on their own, they contribute a great deal to the wider framework. Knowing exactly which troops were where is essential to a valid simulation and makes a valuable, if incremental, contribution to the wider wargame model. It can also lead to important, if seemingly minor, discoveries regarding force-to-space ratios and the true strength of formations often represented on maps as abstract unit symbols.

However, in some cases orders of battle researched for commercial wargames have provided interesting revisions to historical works. Dave Parham’s research on the Battle of Stalingrad in the 1980s points out that the 76th Infantry Division did not fight at Stalingrad: the assault on the city centre consisted of only two divisions rather than the three that many histories have commonly asserted.[5] Similarly, orders of battle for Austria-Hungary’s invasion of Serbia in 1914 are obscure and hard to come by, with the most modern and easily accessible order of battle found in a commercially published wargame.[6]

Generic Capabilities of Formations

Understanding the generic capabilities of the formations which took part in the conflict is the full marriage of the geographical study and the order of battle into a fully realised model simulating the physical capabilities of the military formations involved. Studying the combat record of formations provides a wider appreciation of the generic capabilities of both sides’ formations, while understanding how the terrain affected the ability of the units collected in the order of battle to move and fight completes the basic physical model. The final step is to understand the contemporary military objectives, doctrines, and politico-social priorities of the participants.

Decision Making Environments

In order to produce an accurate simulation, designers must understand why commanders behaved as they did historically, which requires the priorities and motivations of both sides to be incorporated into the wider model. Historical actors often do not behave rationally from a modern perspective, and what good wargame design and historical research do is uncover the reasons that made their choices appear rational at the time. It is necessary to study the strategic priorities and objectives of both sides to understand why they behaved as they did, and to introduce incentives into the design that encourage players to behave in the same way.

For example, in a simulation of the German invasion of France in World War Two, it might seem obvious to the player that they need to attack on either side of any German breakthrough, neatly cutting off and isolating the Wehrmacht’s Panzer formations. However, any accurate simulation of the battle will include rules simulating command and control confusion precisely to prevent the Allied player from doing this. Similarly, accurately depicting the decision-making environment can also help bridge the gap between military and cultural or social history. A simulation of British and Commonwealth forces in Western Europe in 1944 and 1945 would require not just the accurate modelling of their capabilities but also consideration of the specific style in which they fought battles: seeking to avoid casualties and maintain morale. A successful simulation might, for example, impose heavy penalties on the Commonwealth player for taking infantry casualties and encourage them to use heavy artillery support and set-piece attacks.

Studying the decision environments, and the factors which the opposing commanders took into account when making their plans, can provide very different perspectives from the logical assumptions modern audiences make when analysing history. This is, of course, something that all good historians should be doing in the first place, but the clear analytical framework that wargame design necessitates can often make those perspectives much clearer and offer insight into the wider battle.

Wargames, while drawing on the same skills as traditional military history, operate within a research framework that provides a much more technical and specific understanding of conflicts, which can in turn challenge many assumptions made by existing histories. This is not so much a radically new way of approaching research as a way of framing the evidence and placing emphasis on underutilised, but very accessible, sources such as orders of battle or maps. In the second part of this article, this framework will be applied to the ‘Gaza School’ counterfactual developed in the 1930s about the Third Battle of Gaza, as an example of how this wargaming research framework can benefit historical research by framing underutilised, but easily accessible, evidence.

[1] Phil Sabin, Simulating War (London, 2012); and Robert Citino, ‘Lessons from the Hexagon’, in Zones of Control: Perspectives on Wargaming (Cambridge, MA: MIT Press, 2016)

[2] Phil Sabin, Simulating War (London, 2012) p. 47

[3] Paul Cartledge, The Spartans: An Epic History (New York, 2013), p. 126

[4] Phil Sabin, Simulating War (London, 2012), pp. 47-48

[5] John Hill, Battle for Stalingrad Main Rule Book, (Simulation Publications Incorporated: New York, 1980), p. 19

[6] Serbien Muß Sterbien (GMT, 2013)



Room to game (or, the Battle of Winterfell explained)

 

where-is-everyone-during-the-great-battle-of-winterfell.jpeg

Course of action wargaming for the Battle of Winterfell. Might the room be responsible for the defenders’ military missteps?

 

The Battle of Winterfell was the final battle of the Great War against the Night King and Army of the Dead. While ultimately successful, the human defenders adopted a notoriously weak defensive strategy, involving poorly-defended ditches, misplaced archers and artillery, and a suicidal frontal cavalry charge.

Scholars and historians have suggested that weak scriptwriting was responsible for this. However, recent scientific research suggests that the real culprit might be the room selected for pre-battle course of action wargaming.

Everyone who has ever conducted a serious game knows that the room matters. How early can you get access? Are the tables big enough? Can they be moved (and are they all the same height)? Will the audiovisual and IT systems work on the day—and what’s your fallback if they don’t? Are there breakout/team/control rooms nearby? If so, will their location enhance gameplay (by fostering the right sorts of interaction and immersion), or undermine it? Where will coffee and lunch be served?

There is also, however, considerable evidence that room quality affects player performance in more fundamental ways. A recent study by M. Nakamura in Simulation & Gaming found that the size and layout of the room had significant effects on how players assessed the gaming experience in their debriefings:

Results from the current study demonstrate that the difference in room condition was influential. In HACONORI, participants felt more satisfaction in the small room than in the large room, while in BLOCK WORK, participants felt less usefulness in the small room than in the large room, but only when asked about the degree of usefulness before being asked about their degree of satisfaction. The effect of room condition seems to trend in the opposite direction in the two gaming sessions. This difference is because the amount of space has a different meaning in HACONORI and BLOCK WORK; for example, in HACONORI, group members can successfully work together by providing quick and responsive communication with each other. The small room must have encouraged such speedy communication. Conversely, in BLOCK WORK, participants can successfully work when they have more personal space since the task is more individualized; however, this may be affected by the order of questions. When participants were asked about the degree of usefulness after being asked about their degree of satisfaction, their attitude tone was fixed and the degree of usefulness was not affected by room condition. When asked about the degree of usefulness before being asked about their degree of satisfaction, they recognized the usefulness of the BLOCK WORK session in the large room more than in the small room.

We should take into consideration the movability of the desks as an essential factor in improving room function as this must have affected the results. In HACONORI, participants felt more satisfaction in the small room than in the large room. This is because the movability of the desks was high in the small room but low in the large room. In other words, the small room functioned well because of the movable desks.

Both studies reflect the powerful effect of room condition, which depends on the game attributes. They also demonstrate that the effect of the debriefing form is not as powerful as the effect of room condition, although as noted above, it is advisable to consider the order of the questions.

Perhaps even more striking are the results of a 2016 study by Joseph Allen et al in Environmental Health Perspectives on the impact of room ventilation on cognitive performance. They established three experimental room conditions (“Conventional,” “Green,” and “Green+”) with varying concentrations of volatile organic compounds and CO2. The study found that “cognitive scores were 61% higher on the Green building day [and 101% higher on the two Green+ building days than on the Conventional building day].”

In other studies, lighting has also been shown to affect recall, problem solving, and other cognitive tasks (with some gender variation too). Room temperature has demonstrable effects on productivity, with 21-22°C the ideal range—although this likely also varies with age, gender, and other factors.

Taken together, the existing research on environmental conditions suggests that wargame participants in an appropriately lit, well-ventilated room will perform complex cognitive tasks roughly three times “better” than those in one that is too hot or cold, poorly lit, and poorly ventilated. I suspect that even my PAXsims colleague Stephen Downes-Martin—who could quite rightly quibble about how I’ve rather breezily aggregated different measures of task performance here—would agree that the room matters a lot.

Back to Winterfell. Course of action wargaming of the battle took place in a cold and dimly-lit chamber of the castle (above). The tallow candles and open braziers used to illuminate the space undoubtedly produced high levels of CO, CO2, and particulate pollution of various toxic sorts. Moreover, few of the participants had bathed in weeks.

dany-and-tyrion-in-thechamber-of-the-painted-table-on-dragonstone.png.jpeg

Was it the dragon or the room? Use of a well-ventilated war room (with natural lighting and healthy sea air) may have been an important factor in planning the very successful Battle of the Goldroad.

 

By contrast, planning for the very successful Battle of the Goldroad took place in the war room at Dragonstone. Unlike the dark and frozen chamber used at Winterfell, the room here is extremely well ventilated, has natural lighting, and is situated in a much more amenable climate. While many commentators suggest that the deployment of a giant fire-breathing dragon was key to the success of Daenerys Targaryen’s forces, we clearly cannot ignore the contribution made by an appropriate wargaming space during the critical planning phase.


Please take a minute to complete our PAXsims reader survey.

McGrady: Getting the story right about wargaming

WotR.jpeg

McGradyWotR.png

At War on the Rocks today, Ed McGrady notes the recent debates about analytical wargaming within the US defence community, and has some thoughts to offer:

There is a debate about wargaming in the Pentagon and it has spilled out into the virtual pages of War on the Rocks. Some say wargaming is broken. Others believe the cycle of research will solve our problems. There is a deeper problem at the root of all of this: There is a widespread misunderstanding of what wargaming is and a reluctance to accept both the power and limitations of wargames.

What we are seeing in the debate about wargaming looks a lot like what wargaming is best at: telling stories. But we have told ourselves several different stories at the same time, and none of these stories really agree with reality….

But failure to understand wargaming — what it is and what it is not — risks screwing up the one tool that enables defense professionals to break out of the stories we have locked ourselves into.

He goes on to question the notion that wargames are analysis:

Wargames do not do this through analysis. Indeed, wargaming is not analysis. “Analytical wargaming” jams the two terms together in a vague way that can mean anything, and often does. To be sure, good wargaming requires analysis: To design a game, one has to understand how things work. But the most important analysis one does for a wargame is about the people and organizations involved, not the systems. For example, defense analysts often find themselves grappling with future force projections and procurement. But the one organization that matters most for future force structure is not included in the assessments: Congress. Wargames can help senior leaders consider things like Congress whereas standard models and analyses cannot.

Wargames can also be the subject of analysis, but tread carefully: Wargames are not experiments unless they have been specifically, and painstakingly, designed as such. They are events: unrepeatable, chaotic, vague, and messy events. Collecting data from them is difficult — they produce “dirty” data, you often miss the best parts, and they cannot be repeated. But if you think that means you can’t learn anything from them, you might as well stop trying to understand real-world conflicts, because everything I have written about wargames in this paragraph is also true for wars.

So, you can analyze wargames, just not the same way you would analyze a set of data from a radar system or a series of ship trials. But in your analysis you have to focus on what wargames can actually tell you, and avoid making conclusions about what they can’t.

He goes on to suggest what we need to do:

First, we need to get our story straight and get it out there. Wargames are the front-end, door-kicking tool of new ideas, dangers, and concepts. In particular, they help you understand how you will get stuff done in the messy, human organizations that we all work in. They are really good at that. We also need to make sure that people understand what wargames are not good at: detailed, technical, complicated analysis that needs to be done to optimize particular aspects of ideas or concepts. They can tell you that the enemy may target your logistics, but they won’t tell you exactly how many short tons you need to offload per day at the port.

Second, we need to push back against the opportunists and charlatans who are colonizing gaming. While these people always show up when areas get hot, they are particularly dangerous in wargaming. Wargames not only provide new ideas and concepts, but also influence the future decision-makers that play in them. About the best we can do is call out bad games when we see them and, as part of our getting the word out about gaming, describe what games to discount when you hear about a bad game.

We can start by saying meetings are not games and speculation is not play.

Third, we need to make sure decision-makers understand that a good game is only the beginning of the journey, not the end. Much more work needs to be done after the game to figure out, through analysis, whether all those fancy concepts and ideas will work. And if we think they just might work, then we need to burn jet fuel and soldier-hours in instrumented and observed exercises to figure out if our forces and equipment can actually execute them. For future systems where we can’t do exercises, this means bringing the actual engineers into the operational picture. One of the best ways to bring the systems developers into the picture is through games.

You can read the full piece here.


