PAXsims

Conflict simulation, peacebuilding, and development

Category Archives: methodology

Lin-Greenberg: Drones, escalation, and experimental wargames

 

At War on the Rocks, Erik Lin-Greenberg discusses what a series of experimental wargames reveal about drones and escalation risk. The central finding: the loss of unmanned platforms presents less risk of escalation than the loss of manned aircraft.

I developed an innovative approach to explore these dynamics: the experimental wargame. The method allows observers to compare nearly identical, simultaneous wargames — a set of control games, in which a factor of interest does not appear, and a set of treatment games, in which it does. In my experiment, all participants are exposed to the same aircraft shootdown scenario, but participants in treatment games are told the downed aircraft is a drone while those in control games are told it is manned. This allows policymakers to examine whether drones affect decision-making.
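The control/treatment logic of an experimental wargame can be sketched in a few lines of Python. Everything here (the escalation scale, the response probabilities, the number of games) is invented purely for illustration; it is not Lin-Greenberg's data or code:

```python
import random

def run_game(downed_is_drone, rng):
    """Return an escalation level (0 = none, 1 = minor, 2 = major) for one game.

    Assumption for illustration: teams respond less forcefully to a drone loss.
    """
    if downed_is_drone:
        return rng.choices([0, 1, 2], weights=[0.5, 0.4, 0.1])[0]
    return rng.choices([0, 1, 2], weights=[0.2, 0.3, 0.5])[0]

def experiment(n_games=1000, seed=42):
    """Run matched sets of treatment (drone) and control (manned) games."""
    rng = random.Random(seed)
    treatment = [run_game(True, rng) for _ in range(n_games)]   # told it's a drone
    control = [run_game(False, rng) for _ in range(n_games)]    # told it's manned
    # Compare mean escalation across the two otherwise-identical conditions.
    return sum(treatment) / n_games, sum(control) / n_games

drone_mean, manned_mean = experiment()
print(f"mean escalation: drone={drone_mean:.2f}, manned={manned_mean:.2f}")
```

The only variable that differs between the two sets of games is the treatment flag, which is what lets the difference in outcomes be attributed to the drone/manned distinction.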

The experimental wargames revealed that the deployment of drones can actually contribute to lower levels of escalation and greater crisis stability than the deployment of manned assets. These findings help explain how drones affect stability by shedding light on escalation dynamics after an initial drone deployment, something that few existing studies on drones have addressed.

My findings build upon existing research on the low barrier to drone deployment by suggesting that, once conflict has begun, states may find drones useful for limiting escalation. Indeed, states can take action using or against drones without risking significant escalation. The results should ease concerns of drone pessimists and offer valuable insights to policymakers about drones’ effects on conflict dynamics. More broadly, experimental wargaming offers a novel approach to generating insights about national security decision-making that can be used to inform military planning and policy development.

You will find a longer and more detailed account of the study here.

This is a good example of using multiple wargames as an experimental method. Above and beyond this, it also shows how wargames can generate questions worthy of further investigation.

More specifically, while the loss of a drone is less escalatory, an actor might be more likely to introduce a drone for this reason—possibly deploying one in a situation where they would not have risked a manned platform. If this is true, however, drones may still prove more escalatory overall. In other words, if the wargame is expanded to include the prior decision to deploy assets in the first place, the actual outcome might have been something like this:

  • Blue scenario 1: Deploy manned platform?
    • No, too risky.
    • No platform deployed.
    • Nothing shot down.
    • Result: No escalation.
  • Blue scenario 2: Deploy drone?
    • Yes, because no pilot at risk.
    • Drone shot down.
    • Result: Minor escalation.

Or consider another situation: perhaps local air defences would have been reluctant to engage a manned aircraft because of the evident risk of escalation, but would happily shoot down a drone. In this case the experimental findings might have been:

  • Red scenario 1: Shoot down aircraft?
    • No, too risky.
    • Nothing shot down.
    • Result: No escalation.
  • Red scenario 2: Shoot down drone?
    • Yes, because no pilot at risk.
    • Drone shot down.
    • Result: Minor escalation.

In fact, if you read the full paper you will see this is exactly what occurred in a scenario involving a shoot-down decision: participants were much more likely to use force against an unmanned drone.

In other words, while the study suggests that drones might reduce the chance of escalation, it also suggests that we need to investigate whether the lower perceived risk of drone-related escalation might cause Blue to undertake more provocative overflights, or lead Red to undertake more potentially escalatory shoot-downs.

Figure 1 below shows the main experiment: aircraft shoot-downs lead to major escalation, while drone shoot-downs lead to minor escalation.


Figure 1: Experimental results suggest shoot-down of manned aircraft results in greater escalation.

Given the risk of escalation, however, decision-makers might decide against overflight in the first place.

Figure 2 examines a situation where no drones are available. It incorporates the possibility that decision-makers simply refrain from overflight because of the escalation risk, and assigns a (plausible but entirely made-up) probability to this. Moreover, knowing that a shoot-down of a manned aircraft is likely to cause escalation—a tendency noted in Lin-Greenberg’s other experiment—perhaps Red won’t actually open fire. Again, I have assigned a (plausible) probability to this. These numbers are just for the purposes of illustration, but here we note that with manned overflight as the only option there is a 16% chance of escalation.


Figure 2: Considering other decision points. Should Blue even send an aircraft, given risk of escalation? Should Red engage it, given the risks?

In this fuller model, now let us introduce drones (Figure 3). Given that they are less likely to cause escalation, let us assume that (1) Blue is likely to prefer them over a manned ISR platform (as per the earlier findings), (2) Red is more likely to shoot them down, and (3) shooting down a drone causes minor rather than major escalation. Once again, I’ve assigned some plausible probabilities for the purposes of illustration.


Figure 3: Adding drones to the mix.

When we add drones into the mix, the risk of major escalation drops from 16% to 4%, but the risk of some form of escalation actually increases to 60%. Does this mean that drones have actually limited the risk of escalation, or increased it? Moreover, it is possible that tit-for-tat minor escalation over drone shoot-downs could grow over time to major escalation. If that were the case, it is possible that drones—rather than limiting conflict—are a sort of easy-to-use “gateway drug” to more serious problems.
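For readers who want to check the arithmetic, here is one set of assumed branch probabilities (my own invention, chosen only so that the tree reproduces the illustrative percentages used here) worked through in Python:

```python
# Assumed branch probabilities for the two decision trees. These particular
# values are invented; they are just one combination consistent with the
# illustrative 16%, 4%, and 60% figures discussed in the text.

# Manned overflight only (Figure 2).
p_overfly_manned = 0.8       # Blue accepts the risk and overflies
p_shoot_manned = 0.2         # Red engages a manned aircraft
p_major_manned_only = p_overfly_manned * p_shoot_manned   # 0.8 * 0.2 = 0.16

# Drones available (Figure 3).
p_send_drone = 0.7           # Blue prefers the drone for the mission
p_send_manned = 0.2          # Blue still sends a manned platform sometimes
p_shoot_drone = 0.8          # Red is far more willing to engage a drone

p_minor = p_send_drone * p_shoot_drone        # 0.7 * 0.8 = 0.56 (minor escalation)
p_major = p_send_manned * p_shoot_manned      # 0.2 * 0.2 = 0.04 (major escalation)
p_any_escalation = p_minor + p_major          # 0.56 + 0.04 = 0.60

print(f"manned-only: major escalation {p_major_manned_only:.0%}")
print(f"with drones: major {p_major:.0%}, any escalation {p_any_escalation:.0%}")
```

The structure of the calculation, not the specific numbers, is the point: lowering the cost of each individual step (deploying, shooting) can raise the total probability that some escalatory event occurs.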

Remember that I’ve essentially invented all of my probabilities to make a methodological point (although I have tried to make them plausible). My point here is not in any way to criticize Lin-Greenberg’s experimental findings—I suspect he is right. It is to say that the two sets of wargame experiments he undertook are useful not only for their immediate findings, but also to the extent that they generate additional questions to be investigated.

 

 

Beware the confidence heuristic

This quick tweet today by political psychologist Philip Tetlock caught my eye, since it has important implications for serious policy gaming.

As I have noted elsewhere, research on political forecasting (including Tetlock’s seminal book Expert Political Judgment (2005), as well as the work he and his colleagues have done with the Good Judgment Project) has highlighted the greater efficacy of cognitive “foxes” (those not overly attached to a single paradigm) and Bayesian updaters in correctly anticipating future outcomes. By their very nature, such individuals are willing to accept new information and change their views accordingly.
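For readers unfamiliar with the term, Bayesian updating simply means revising a probability estimate as new evidence arrives. A minimal sketch, with an invented prior and invented likelihoods:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of an event after observing a piece of evidence.

    Straight application of Bayes' rule:
        P(event | evidence) = P(evidence | event) P(event) / P(evidence)
    """
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

prior = 0.30  # initial forecast: 30% chance the event occurs
# A new report is observed; assume it is three times likelier to appear
# if the event is coming (0.6) than if it is not (0.2).
posterior = bayes_update(prior, p_evidence_if_true=0.6, p_evidence_if_false=0.2)
print(f"updated forecast: {posterior:.0%}")  # 0.18 / (0.18 + 0.14) = 56%
```

A good updater moves the estimate by exactly as much as the diagnosticity of the evidence warrants, neither ignoring the report nor overreacting to it.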

By contrast, groups (including teams within wargames or other serious games) may be heavily swayed by persuasive, overly-confident rhetoric—the “confidence heuristic” referenced in the linked Bloomberg article. In many settings—especially with military participants—this dynamic may be further aggravated by the effects of hierarchy and rank. As a result, confident pronouncements by senior leaders may obscure uncertainty and drive out differing views, even if the uncertainty is important and the differing views might be correct.


Much depends on the mix of individuals and group dynamics at work during the game, then, as well as the analysis and aggregation methods used to assess game findings.

For more insight into individuals, groups, and forecasting, I strongly recommend Superforecasting: The Art and Science of Prediction (2015), a highly readable book by Tetlock and Dan Gardner. Nate Silver (of FiveThirtyEight fame) stresses the importance of Bayesian updating in The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t (2015).

For a few brief thoughts of my own, see my presentations earlier this year on Wargaming and Forecasting (Dstl) and In the Eye of the Beholder? Cognitive Challenges in Wargame Analysis (Connections UK, audio available here).

Will to fight

Back in July, we mentioned Ben Connable’s presentation on “the will to fight” at the Connections US wargaming conference. Now we are pleased to post links to the two recently-released RAND studies on the military will to fight (Connable et al, 2018) and national will to fight (McNerney et al, 2018):

Will to fight may be the single most important factor in war. The U.S. military accepts this premise: War is a human contest of opposing, independent wills. The purpose of using force is to bend and break adversary will. But this fundamental concept is poorly integrated into practice. The United States and its allies incur steep costs when they fail to place will to fight at the fore, when they misinterpret will to fight because it is ill-defined, or when they ignore it entirely. This report defines will to fight and describes its importance to the outcomes of wars. It gives the U.S. and allied militaries a way to better integrate will to fight into doctrine, planning, training, education, intelligence analysis, and military adviser assessments. It provides (1) a flexible, scalable model of will to fight that can be applied to any ground combat unit and (2) an experimental simulation model.

What drives some governments to persevere in war at any price while others choose to stop fighting? It is often less-tangible political and economic variables, rather than raw military power, that ultimately determine national will to fight. In this analysis, the authors explore how these variables strengthen or weaken a government’s determination to conduct sustained military operations, even when the expectation of success decreases or the need for significant political, economic, and military sacrifices increases.

This report is part of a broader RAND Arroyo Center effort to help U.S. leaders better understand and influence will to fight at both the national level and the tactical and operational levels. It presents findings and recommendations based on a wide-ranging literature review, a series of interviews, 15 case studies (including deep dives into conflicts involving the Korean Peninsula and Russia), and reviews of relevant modeling and war-gaming.

The authors propose an exploratory model of 15 variables that can be tailored and applied to a wide set of conflict scenarios and drive a much-needed dialogue among analysts conducting threat assessments, contingency plans, war games, and other efforts that require an evaluation of how future conflicts might unfold. The recommendations should provide insights into how leaders can influence will to fight in both allies and adversaries.

The former study in particular examines the way in which wargames do or do not model “will to fight,” and suggests some key lessons for future wargame design:

Adding will to fight changes combat simulation outcomes

  • Most U.S. military war games and simulations either do not include will to fight or include only minor proxies of it.
  • However, the simulated runs performed for this report showed that adding will-to-fight factors always changes combat outcomes and, in some cases, outcomes are significantly different.
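The point that will-to-fight factors can change simulated outcomes is easy to demonstrate with a toy model. The sketch below is not RAND's simulation: it is a simple Lanchester-style attrition loop with invented kill rates, in which a side withdraws once its losses pass a will-to-fight threshold:

```python
def fight(blue, red, blue_break=0.0, red_break=0.0,
          blue_kill=0.010, red_kill=0.012):
    """Run attrition until one side is destroyed or breaks; return the winner.

    blue_break / red_break are will-to-fight thresholds: a side withdraws
    once its remaining strength falls below that fraction of its start size.
    All coefficients here are invented for illustration.
    """
    blue0, red0 = blue, red
    while blue > 0 and red > 0:
        # Lanchester-style mutual attrition step.
        blue, red = blue - red_kill * red, red - blue_kill * blue
        if blue / blue0 < blue_break:
            return "red"    # Blue's will to fight fails; Blue withdraws
        if red / red0 < red_break:
            return "blue"   # Red breaks first
    return "blue" if blue > 0 else "red"

# Pure attrition: Red's higher kill rate decides the outcome.
print(fight(1000, 1000))                      # -> red
# Same forces, but Red withdraws after 30% losses: the outcome flips.
print(fight(1000, 1000, red_break=0.70))      # -> blue
```

Even in this crude sketch, adding a single will-to-fight parameter reverses the result of an otherwise identical engagement, which is the pattern the RAND runs report at much greater fidelity.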

Recommendations 

  • U.S. Army and Joint Force should adopt a universal definition and model of will to fight.
  • Include will to fight in all holistic estimates of ground combat effectiveness.
  • War games and simulations of combat should include will to fight.

Design Matters: Tiny Epic Zombies…and Glasses

Design Matters: A series on matters relating to design, and why design thinking matters.

Rex Brynen and I recently play tested Rex’s brand new copy of Tiny Epic Zombies. Our ensuing after-play discussion got us thinking about the game and certain common, irksome points we thought were design pitfalls to be avoided in any game, whether destined for the entertainment market or geared toward the serious gaming and educational spheres. Thus the idea of Design Matters was born.

Tiny Epic Zombies – A Game of Brutal Survival
www.gamelyngames.com
www.gamelyngames.com/tiny-epic/tiny-epic-zombies-deluxe

Watch it Played
https://youtu.be/O9u8VXz8u80

I LOVE Gamelyn Games. I do. I own every single one of their games, love the concepts, adore the themes, am awed by the artwork, thrilled with the simple —yet engaging— rulesets: all in small inexpensive packages.

I say this, because, while I do enjoy the theme, concept, and art, Tiny Epic Zombies presents a few significant —avoidable— problems that should come as a lesson to all game designers.

Size matters.

Tiny Epic games are not small, by any means, in their effect or entertainment value. Where Tiny Epic Zombies (TE:Z) falls short is in its small font.

Graphic design is about much more than making something pretty. The fundamentals of graphic design deal with visual communication; the key word being communication. If information is not clearly and effectively communicated, it can severely impede gameplay. If this is an intended effect, to frustrate or slow players down, it can be an effective tool. Unfortunately, in the case of TE:Z it is not. Sometimes icons or text are impossible to read at any reasonable distance.

From a graphic design perspective, TE:Z comes up short in parts of the rulebook, on certain objective cards, and on some mall map cards. This author and Rex Brynen both had difficulty discerning the text on certain cards without picking up the card and playing with the distance, donning glasses, adjusting glasses, removing glasses, or resorting to the magnifier function of my iPhone. In one particular case it was absolutely impossible to discern what icon was being used on an objective card. Not difficult, not challenging, but impossible. The font size used on the Investigate the Source objective card —for example— was simply too small. The print resolution would not allow the icon in the text to be seen as anything other than a circle with a blob. This inexcusable error in graphic design was immensely frustrating, and forced us to work backwards, trying to figure out what the icon could possibly be. The design decision to go with such an impossibly small icon is confounding.

It is always important to remember that —particularly in game design— form should follow function. Game enjoyment and engagement depend so much on a suspension of disbelief that any shock to the system that pulls us out of the game experience detracts from it. Stopping the action to peer over a card, squinting to read text, is anathema to a positive game experience.

Contrast this user experience (UX) with the thoroughly adorable and fun ITEMeeples Gamelyn produces for TE:Z. ITEMeeples are iconic, specialized plastic avatars with holes in them for placing “reminder” items —representing weapons— on a player’s character piece. While fundamentally unnecessary to gameplay, they add so much enjoyment, fun, and suspension of disbelief (“no, I really am carrying a chainsaw!”) that they become an intrinsic part of the game experience. They are so intrinsic to the positive game experience that their creation and inclusion in a number of the Tiny Epic games makes one wonder how we ever gamed without them.

This fabulous attention to detail in one aspect of the game experience, while ignoring it in another, should serve as a cautionary tale to game designers: everything matters.

Location, location, location

The Echo Ridge Mall is the nexus of this little slice of this apocalyptic zombie outbreak. It is beautiful, with a richness of art that I admire tremendously.

However, in our play test this richness of detail sometimes became problematic. Each of the separate “stores” has any or all of: its own written rules box, objective placement icons, room numbers, or secret passages. These elements get lost in the richness of the art at tabletop distances. If our two-player test had troubles, I can only imagine the difficulty five players, huddled around a large table in a semi-lit room, would have discerning what they were supposed to do on a given card. Certainly, after one has played through a few rounds, the card-store effects become second nature, but having to pick up a piece of the map in order to read what you’re supposed to do, displacing items, meeples, and tokens, is problematic.

Further, unlike other Tiny Epic games I’ve played through, the precise placement of the cards can be quite important. Each of these store location cards is divided into three rooms, which are bordered by thick walls. Each card, in turn, is bordered by this same thickness of wall, creating a discrete, modular store. Eight (8) of these stores surround a central courtyard. Gamelyn produces a TE:Z Gamemat and an online visual aid to lay this out.

Where other Tiny Epic Games’ card-location is only important insofar as where they are placed relative to each other (adjacent or not), TE:Z’s location-cards are placed and played directly against one another. This impacts movement, shooting, and card legibility.

The problems with this scheme are manifold:

Some cards will be placed upside-down. This would not matter except that many rules are written on the location-cards themselves, resulting in a situation where many cards’ rules will be upside-down relative to the player. Add to this the font-size problem discussed above, and early play grinds to a halt as players jockey for position to read a card, or have to pick up said location in order to proceed.

This, in turn, leads to another —fiddly— problem: position matters. Each location card has one main “opening” or entrance; otherwise it is bounded by a solid-line wall. Players may move through walls, as they are presumed to find or make gaps through (strangely weak?) mall walls. If players pick up and replace location-cards, jostle them during gameplay, or accidentally shift their position in any way, this can dramatically affect movement, shooting, tactics, and approach to gameplay. The Gamelyn-produced TE:Z Gamemat-for-purchase addresses this somewhat, but this particularly fiddly scheme could have been more easily solved with a simple graphic element: an alignment arrow in the middle of each card edge.

As walls are so fundamentally important to the gameplay, it struck us as very strange that all walls were clear and of uniform width except for the central courtyard walls. Where all location-cards’ rooms are very clearly delineated by thick walls of uniform width, the central courtyard is divided into five (5) sections by markedly thinner walls. These walls are so different that we didn’t even consider them walls when playing through the game in our play test. Only upon careful review of the rules did we realize, thanks to a simple qualifying statement (p.8: “*Note: the Courtyard has 5 rooms*”), that these were meant to be walls, and the courtyard was not simply one large room. This would have substantially altered our game outcome. The lack of consistency in the application of this design element is inexplicable to me.

The decision to go this particular route with location-cards (stores) has another side-effect: the play map neither looks nor feels like a mall. Referring back to the suspension of disbelief and user experience (UX) design discussed above: a decision was made to create this particular schema that took Rex and me out of the game. When something doesn’t feel like what it is expected to be, a cognitive disconnect occurs that informs gameplay. This can be a powerful tool when implemented properly, or a distracting nuisance when accidental. The result was —for us— a persistent feeling that something didn’t quite feel right.

Dissociative Personality Disorder (AKA I can do what?)

On that same front, we questioned the abilities of a number of the Player Cards. Not so much the abilities themselves, but the abilities associated with the names of the Player Cards.

User experience (UX) is a tricky and very particular aspect of any game design to master, largely because it relies on fickle and finicky human emotion, response, behaviour, and expectation. Designers can use psychology, the senses, and numerous devices to shape this experience. Gotten right, a game’s UX can overcome many a shortcoming. Gotten wrong it can detract from the pleasure of play.

There are specific instances where the player has a reasonable expectation of what a particular Player Card should allow the player to do:

Athlete Card: enables greater movement
Burglar Card: expanded item acquisition powers
Mechanic Card: better at repairs

When this expectation (Based purely on Name) meshes with the effect of a particular card, the result is pleasing and harmonious: a triumph of UX design.

When this does not:

Fry Cook Card: somehow make less noise?
Photographer Card: ending your turn in a store with 2 zombies results in finding ammunition?
Scientist Card: if any other player kills three or more zombies gain ammunition?

a disconnect results as players question the meaning or source of these effects. While not insurmountable, the unintended consequence of a naming convention —and the resultant cognitive dissonance when an effect does not match one’s expectation— is entirely avoidable.

If these character cards were named for persons instead of a specific role —Mary instead of Photographer— there would be no (reasonable) expectation of effect: she could, instead, see things better with her zoom lens —improving search— for example. While this won’t break a game, it will distract, and distractions of this type will almost always lessen enjoyment. Any time a player begins questioning what the designer was thinking, the player is out of the game.

What Went Right

The above should serve as cautionary reminders to PAXsims’ community of game designers and enthusiasts: every aspect of a game needs to be considered. A solid theme/idea/ruleset is not enough; a designer needs to communicate clearly and shape gameplay with intention, or the game experience can suffer.

However, when you do get things right —as Gamelyn often does— you can create great experiences.

Excepting the above, TE:Z remains an enjoyable game because what it gets right it gets really right.

Some design shortcomings aside, the game art is —simply— fantastic. The clear theme carries through the game’s spectacular card and box art. The game’s art direction truly sets the stage for the coming zombie apocalypse. Before the players even open the box, the stage has been set, then reinforced. Gamelyn, in my view, always gets this right. This is the campy, fun, zombie game experience you want, with the pièce d’art of the contemporary gaming world: ITEMeeples.

ITEMeeples add so much fun and thrill to the game that no tiny pieces of plastic have any business doing —they are near magical. The excitement of attaching a chainsaw or assault rifle to your character meeple is reminiscent of opening a surprise gift. Completely unnecessary to the rules, this component-based element of UX is beyond spectacularly fun. Add a police car or motorcycle into which you can literally place your ITEMeeple, and you’ll be making engine noises while moving your pawn like you did when you were pretend driving in the back of your parents’ car as a child. This level of engagement clearly demonstrates how well-chosen and designed components can directly impact the game experience. (A phenomenon we harnessed in developing MaGCK, using iconic images as aides-memoire for matrix gaming)

Objectives (excepting some of their card design problems) are largely fun affairs where the ongoing challenge of risk-reward, balanced against time constraints and a little bit of greed (but I really want to pick up that bazooka in the other store), played out —for us— down to the wire. The game achieves a great balance of ramping up danger while keeping you on the edge of your seat with interesting choices. Appropriate challenges and choices shape the game experience and flow; great care was clearly taken in creating and testing these objectives.

Once you get into the groove of the gameplay (one or two full turns to get up to speed), the game progresses quickly, satisfyingly ramping up intensity. If not for the distractions discussed above, the play is near seamless, with decision points to test each player’s resolve. Ease of access, understanding, and a gradual learning curve benefit this (and any) game greatly.

The card-based AI works very well. We played cooperatively without a Zombie player, and the anticipation of each end-of-turn search card’s resolution kept us in some suspense. I look forward to playing a larger, competitive game with the full complement of five players to note the differing experience (clearly, knowing each location-card’s ability will be fundamental to this, I believe). Scalability is a great aspect of the game: it is playable by one to a full complement of five players.

Overall, while not my favourite Gamelyn gameplay experience, Tiny Epic Zombies remains a game I would replay. For PAXsims readers’ purposes, the game does illustrate a number of avoidable design pitfalls that should be considered by game designers and producers:

Design matters:

We can see, in the example of TE:Z, that it is not enough for a game to be pretty (though it certainly helps!). While great visuals can immediately engage players, clarity and legibility are fundamental in rules layout, design, and ability descriptions. Form must follow function. Nothing is more frustrating than not being able to read a rule, card, ability, or effect.

Consistency is key. A lack of consistent application of design elements can —and often will— lead to misunderstanding and misplay, affecting the overall game experience. Design must be purposeful and mindful in order to lead the player to the game experience the designer wants. Any lapse in this regard will have unintended consequences.

Expectations must be mindfully considered and managed, as they form an immediate opinion and impression. If something looks out of place it creates an uncomfortable cognitive dissonance, which —if purposeful— can be a powerful tool, but —if accidental— will detract from a game and risk running it off the rails.

Components and visuals can have tremendous positive impact, when properly implemented, or detract from gameplay when applied carelessly. The purposeful use of media will have an important impact on a game. (As discussed at Connections North in the presentation Grand Designs – Design Thinking in Games)

An accessible learning curve, geared toward the target player, creates ease and comfort, allowing players to engage in the game quickly. The faster a player can integrate the rules into their experience and simply engage with the theme of the game, the more effective the game will be.

In-stride adjudication (Connections 2018 working group report)

Stephen Downes-Martin has pulled together a 187-page (!) report on in-stride adjudication from the papers and discussion presented at the Connections US 2018 conference. You can download it here.


Jane’s Intelligence Review on matrix gaming

The September issue of Jane’s Intelligence Review has an excellent article by Neil Ashdown assessing matrix games as an analytical tool.

Key points

  • Matrix games are comparatively simple wargames, emphasising creativity and original thought, which have been used by a range of government agencies and militaries.
  • These games are focused on the participants’ intentions, which makes them better suited for analysing political-military strategy and novel or obscure subjects, such as cyber security.
  • However, this technique is unsuitable for analysing granular tactical scenarios, and the games’ relatively low cost and complexity can reduce their attractiveness.

 


I would like to thank Neil and JIR for making it available (pdf copy at the link above) to PAXsims readers. If you are interested in reading more about the technique, there are many matrix gaming articles available here at PAXsims, the History of Wargaming Project has just published the Matrix Game Handbook, and you can purchase the Matrix Game Construction Kit (MaGCK) User Guide as a downloadable pdf.

How can we avoid risky and dishonesty shifts in seminar wargames?


Stephen Downes-Martin has written up the discussion from another Connections game lab session, this time on How can we avoid risky and dishonesty shifts in seminar wargames?

The group identified three research questions, and identified and discussed nine ways that the risky and (dis)honest shifts could be baselined, measured, controlled, or mitigated.

Two Behavior Shifts During Small Group Discussions

The (Dis)honesty Shift

Research indicates “that there is a stronger inclination to behave immorally in groups than individually,” resulting in group decisions that are less honest than the individuals would tolerate on their own. “Dishonest” in the context of the research means the group decisions break or skirt the ethical rules of the organization and societal norms, and involve cheating and lying. Furthermore, the group discussions tend to shift the individuals’ post-discussion norms of honest behavior towards dishonesty. First the discussion tends to challenge the honesty norm; then inattention to one’s own moral standards (during the actual discussion) and categorization malleability (the range in which dishonesty can occur without triggering self-assessment and self-examination) create the effect that “people can cheat, but their behaviors, which they would usually consider dishonest, do not bear negatively on their self-concept (they are not forced to update their self-concept)”. The research indicates that it is the small-group communication causing the shift towards dishonesty that enables group members to coordinate on dishonest actions and change their beliefs about honest behavior. The group members “establish a new norm regarding (dis)honest behavior”. Appeals to ethics standards seem to be effective in the short term [Mazar et al] but there is little evidence for long-term effectiveness.

The Risky Shift

Research into risky or cautious shifts during group discussion looks at whether and when a group decision shifts to be riskier or more cautious than the decision that the individuals would have made on their own. One element driving the shift appears to be who bears the consequences of the decision – the group members, people the group members know (colleagues, friends, family), or people the group members do not know. There is evidence that individuals tend to be myopically risk averse when making decisions for themselves. Research indicates however that “risk preferences are attenuated when making decisions for other people: risk-averse participants take more risk for others whereas risk seeking participants take less.” Whether the group shows a risky shift or a cautious shift depends on the culture from which the group is drawn and the size of the shift seems to depend on the degree of empathy the group feels for those who will bear the consequences and risks of the decision.

Research into leadership shows that “responsibility aversion” is driven by a desire for more “certainty about what constitutes the best choice when others’ welfare is affected,” and that individuals “who are less responsibility averse have higher questionnaire-based and real-life leadership scores” and do not seek more certainty when making decisions that are risky for others than they do when making decisions that are risky for themselves alone. However, this research says nothing about the starting risk-seeking or risk-avoiding preference of the decision-making leader.

See the full paper (link above) for further discussion, including the footnotes (which have been removed from the excerpt above).

How can we credibly wargame cyber at an unclassified level?


The frighteningly-efficient Stephen Downes-Martin has been kind enough to pass on a game lab report from the recent Connections US 2018 wargaming conference on “How can we credibly wargame cyber at an unclassified level?”  (pdf).

A small minority of cyber experts with wargaming and research experience have security clearances. If cyber operations are researched and gamed only at high levels of classification, then we limit our use of the intellectual capital of the United States and Allies and put at risk our ability to gain an edge over our adversaries. We must find ways to wargame cyber[1] at the unclassified level while dealing with information security dangers, to best use the skills within academia, business, and the gaming community. During the Connections US Wargaming Conference 2018 a small group of interested people gathered for about an hour to discuss the question:

“How can we credibly wargame cyber at an unclassified level?”

The group concluded that it is possible to wargame cyber credibly and usefully at the unclassified level and proposed eight methods for doing so. The group also suggested it is first necessary to demonstrate and socialize this idea by gaming the trade-offs between the classification level and the value gained from wargaming cyber.

[1] “Wargaming cyber” and “gaming cyber” are loose terms which the group deliberately left as such to encourage divergent thinking and to avoid becoming too specific.

Experimenting with DIRE STRAITS

As PAXsims readers will know, the recent Connections UK professional wargaming conference featured a large political/military crisis game exploring crisis stability in East and Southeast Asia: DIRE STRAITS. This is the second time we have held a megagame at Connections UK, and—judging from last year’s survey—they are popular with participants. This year we organized something that addressed a series of near-future (2020) challenges, set against the backdrop of uncertainties in Trump Administration foreign policy and the growing strategic power of China.


We also conducted an experiment.

Specifically, we decided to use the game to explore the extent to which different analytical teams would reach similar, or different, conclusions about the methodology and substantive findings of the game. If their findings converged, that would provide some evidence that wargaming can generate solid analytical insights. If their findings diverged a great deal, however, that would suggest that wargaming suffers from a possible “eye of the beholder” problem, whereby the interpretation of game findings might be heavily influenced by the subjective views and idiosyncratic characteristics of the analytical team—whether that be training/background/expertise, preexisting views,  or the particular mix of people and personalities involved. The latter finding could have quite important implications, in that game results might have as much to do with who was assessing them and how, as with the actual outcome of the game.

To do this, we formed three analytical teams: TEAM UK (composed of one British defence analyst and one serving RAF officer), TEAM EURO (composed of analysts from the UK, Finland, Sweden, and the Netherlands), and TEAM USA (composed of three very experienced American wargamers/analysts). Each team was free to move around and act as observers during the games, had full access to game materials, briefings, player actions, and assessments, and could review the record of game events produced during DIRE STRAITS by our media team.

We were well aware at the outset that DIRE STRAITS would be an imperfect analytical game. It was, after all, required to address multiple objectives: to accommodate one hundred or so people, most of whom would not be subject matter experts on the region; to be relatively simple; to be enjoyable; and to make do with the time and physical space assigned to us by the conference organizers. It was also designed on a budget of, well, nothing—the time and materials were all contributed by Jim Wallman and myself. From an experimental perspective, however, the potential shortcomings in the game were actually assets, since they represented a number of potential methodological and substantive issues on which the analytical teams might focus. To make it clearer what their major takeaways were, we asked each team to provide a list of their top five observations in each of two categories (game methodology, and substantive game findings).

And the results are now in:

All three teams did a very good job, and there is a great deal of insight and useful game design feedback contained within the reports. But what do they suggest about our experimental question? I have a lot more analysis of the findings to undertake, but here is a very quick, initial snapshot.

First, below is a summary of each team’s five main conclusions regarding game methodology. I have coded the results in dark green if there is full agreement across all three teams, light green for substantial agreement, yellow for some agreement, and red for little/no agreement. The latter does not mean that the teams would necessarily disagree on a point, only that it did not appear in the key takeaways of each. I have also summarized each conclusion into a single sentence—in the report, each is a full paragraph or more.

DS method table

A Venn diagram gives a graphic sense of the degree of overlap in the team methodological assessments.

DS method.png

One interesting point of divergence was the teams’ assessment of the White House subgame. TEAM USA had a number of very serious concerns about it. TEAM EURO, on the other hand—while noting the risks of embedding untested subgames in a larger game dynamic—nevertheless concluded that they “found this modelling fairly accurate.” TEAM UK took a somewhat intermediate position: while arguing that the White House subgame should have been more careful in its depiction of current US political dynamics to avoid the impression of bias, this “obscured the fact that there were actually quite subtle mechanisms in the White House game, and that the results were the effects of political in-fighting and indeed, it could even show the need to ‘drain the swamp’ to get a functional White House.” The various points made by the teams on this issue, and the subtle but important differences between them, will be the subject of a future PAXsims post.

Next, let us compare the three teams’ assessments of the substantive findings of the game. TEAM USA argued that the methodological problems with the game were such that no conclusions could be drawn. TEAM EURO felt that the actions of some teams were unrealistic (largely due to a lack of subject matter expertise and cultural/historical familiarity), but that “the overall course of action seemed to stay within reasonable bounds of what can be expected in the multitude of conflicts in the area.” TEAM UK was careful to distinguish between game outcomes that appeared to be intrinsic to the game design and those that emerged from player interaction and emergent gameplay, and was able to identify several key outcomes among the latter.

DS substantive table.png

As both the table above and the diagram below indicate, there was much greater divergence here (much of it depending on assessments of game methodology, player behaviour, or plausibility).

DS substance

Again, I want to caution that this is a very quick take on some very rich data and analysis, and I might modify some of my initial impressions upon a deeper dive. However, I do think there is enough here both to underscore the potential value of crisis gaming as an analytical tool, and to sound some fairly loud warning bells about potential interpretive divergence in post-game analysis. At the very least, it suggests the value of using mixed methods to analyze game outcomes, and/or—better yet—a sort of analytical red teaming. If different groups of analysts are asked to draw separate conclusions, and those findings are then compared, convergence can be used as a rough proxy for higher-confidence interpretations, while areas of divergence can then be examined in greater detail. I am inclined to think, moreover, that producing separate analyses and then bringing those together is likely to be more useful than simply combining the groups into a larger analytical team at the outset, since it somewhat reduces the risk that findings are driven by a dominant personality or senior official.
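If convergence is to serve as a rough proxy for confidence, it helps to make the comparison explicit. One minimal way to do this, sketched below purely for illustration (the team names are real, but the coded finding labels are invented placeholders, not the actual DIRE STRAITS observations), is to reduce each team's top-five observations to shared codes and compute pairwise set overlap:

```python
# Illustrative sketch: set overlap (Jaccard similarity) as a crude
# measure of inter-team convergence. The finding codes F1, F2, ...
# are hypothetical placeholders, not the actual report findings.
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity between two sets of coded findings."""
    return len(a & b) / len(a | b)

# Each team's top-five observations, coded into shared categories.
teams = {
    "UK":   {"F1", "F2", "F3", "F4", "F5"},
    "EURO": {"F1", "F2", "F6", "F7", "F8"},
    "USA":  {"F1", "F9", "F10", "F11", "F12"},
}

for (n1, s1), (n2, s2) in combinations(teams.items(), 2):
    print(f"{n1}-{n2}: {jaccard(s1, s2):.2f}")
```

High-overlap pairs would mark candidate higher-confidence interpretations; low-overlap pairs flag exactly the areas of divergence worth examining more closely.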

One final point: for DIRE STRAITS we assigned no fewer than nine analysts to pick apart its methodology and assess the findings in light of those strengths and weaknesses, and we have now published that feedback. Such explicit self-criticism is almost unheard of in think-tank POL/MIL gaming, and far too rare in most professional military wargaming too. Hopefully the willingness of Connections UK to do this will encourage others to do so as well!

Teaching wargame design at CGSC


Today, James Sterrett made a presentation to the Military Operations Research Society’s wargame community of practice on teaching wargame design at the US Army Command and General Staff College. James is Chief of Simulations and Education in the Directorate of Simulation Education at CGSC, and a periodic PAXsims contributor.

This lecture will feature a discussion of game design within the context of professional military education. DEPSECDEF Work spoke to the need to incorporate wargaming into the formal military education system. One approach to meeting this need is to offer a course in wargame design to students at multiple levels of professional development. However, questions on how to implement this approach remain: At what point(s) within an officer’s career should they be exposed to wargaming? What aspects of wargaming should be emphasized? What level of proficiency is desired? What portions, if any, of the remaining curriculum should be dropped or modified to accommodate this requirement?

While the lecture wasn’t recorded, you’ll find his slides here. For previous discussion on this same topic, see his earlier (January 2017) blogpost.

Dungeons & Dragons as professional development

The Advanced Dungeons & Dragons Dungeon Master’s Guide.

In response to one of the final exam questions this year, a student in my upper-level undergraduate course on multilateral peace operations at McGill University commented “I never knew D&D could be so useful until I took POLI 450.” That statement finally provided the impetus I needed to offer some thoughts on role-play games (RPGs) and serious conflict simulation.

In the context of POLI 450, the student concerned was referring to the massive Brynania peacebuilding simulation that we’ve been running for almost two decades. It is a grueling exercise indeed: 125+ players, 5-8 hours of game play per day for a full week, 10,000+ emails sent, and hundreds of hours of real and virtual meetings—all at a time when students are also trying to manage four other courses, plus occasional eating and sleeping. The simulation is designed to highlight a range of issues: political conflict and conflict resolution; insurgency; negotiations; humanitarian crisis and response; the challenges of coordination; stabilization; and longer-term development. Like a good game of D&D, participants face complex situations and even difficult moral choices while having to adjust plans on the fly with limited time, resources, and information. As has been evident from exam answers and course surveys over the years, students learn a lot from it, and it helps a great deal in putting course readings and theory into a practical, operational context.

However, I didn’t want to just comment on the value of RPG-type gaming as an immersive learning environment for students—as important as that is. Above and beyond this, I wanted to offer some thoughts on how role-play gaming can help to develop essential professional game design and facilitation skills. Indeed, in terms of professional wargame facilitation specifically, I would argue that running D&D games is probably a more useful preparation than playing either miniatures or board wargames.

Before there’s a backlash from my fellow grognards, let me reiterate I’m talking here about game facilitation. I’m a hobby miniatures/board wargamer too, and I enjoy those a great deal. They’ve been invaluable in learning about military operations and history—indeed, far more useful than the 8+ years I spent studying in university. It is undeniable that hobby wargaming can contribute a great deal to one’s knowledge of how to model time, space, movement, and effects.

However, no one would argue that most hobby wargaming (with the notable exception of megagaming) really contributes a great deal to knowing how to run—as opposed to design—the multi-participant events that are usually characteristic of a serious professional wargame or political-military/crisis simulations.

There’s a certain irony in all this. As it is, professional wargamers already deal with a widespread bias against the gaming element of wargames. It is well-known, for example, that many military officers recoil at the thought of dice or cards determining the outcome of military actions in a wargame, even though they are perfectly happy to have outcomes determined through black-boxed stochastic processes embedded in computer algorithms. That Clausewitz once noted “the absolute, the mathematical as it is called, nowhere finds any sure basis in the calculations in the art of war; and that from the outset there is a play of possibilities, probabilities, good and bad luck, which spreads about with all the coarse and fine threads of its web, and makes war of all branches of human activity the most like a game of cards” doesn’t change the fact that professional audiences often equate cards, dice, and other common game elements with a glorified version of Snakes-and-Ladders. Given that, suggesting that what they are doing is actually rather more like The Tomb of Horrors would certainly be a gaming system too far. Yet RPGs can develop invaluable skills in terms of scenario design, narrative engagement during game play, subtly keeping players on track for game purposes, and managing groups of people within such a context.

In terms of scenario design, this is very much at the core of role-play gaming—the game, after all, is almost entirely about the scenario and the players’ engagement in it. Good gamemasters are good precisely because they are able to keep players within the universe they have created, facing plausible choices with plausible consequences, and subtly encouraging everyone to internalize appropriate perspectives and motivations. In a well-run campaign the players aren’t simply trying to find treasure and slay beasts, but feel themselves part of it all. They begin to filter their worldview through their (fictional) professional specializations: fighters like to fight; magic-users like to stand back and rain destruction on foes while avoiding injury; clerics provide key support; rogues skulk and deceive; and much-maligned bards (like diplomats everywhere) use silver tongues to gain advantages that cannot be obtained by brute force. As Peter Perla and Ed McGrady have argued, this sort of player engagement and immersion is also what makes (serious, professional, potentially life-and-death) wargaming work:

We believe that wargaming’s power and success (as well as its danger) derive from its ability to enable individual participants to transform themselves by making them more open to internalizing their experiences in a game—for good or ill. The particulars of individual wargames are important to their relative success, yet there is an undercurrent of something less tangible than facts or models that affects fundamentally the ability of a wargame to transform its participants.

A dungeonmaster also faces the constant challenge of allowing players to explore their universe, while at the same time keeping the game on-track in terms of general storyline and plot—all without letting players feel railroaded into doing (or not doing) particular things. They do so, moreover, in a context of multiple participants with different perspectives and personalities. Take, for example, Phil Sabin‘s comments on a recent professional wargame in the UK (emphasis added):

This week at the UK Defence Academy we ran a two day research wargame with a couple of dozen players and facilitators to investigate nuclear risk dynamics.  I was on the Control team, and our main objective was to get the players first to use conventional force and then to escalate to nuclear strikes, despite their natural reluctance to initiate such dangerous and suicidal actions.  We succeeded, and play ended with wide-ranging conventional conflict, the nuclear devastation of central and eastern Europe, and a grave threat of further escalation, all from an initial spark in the Baltics in which both sides felt they were defending their existing rights and interests.

I remarked in the final plenary that wargame controllers in such games are rather like devils, seeking ways to foster player misperceptions and frustration and to present them with horrible dilemmas in a quest to make them trigger a literal ‘hell on earth’.  We succeeded in this aim, and it was sobering for everyone to realise how such a slide into disaster can occur through a horribly plausible sequence of interacting decisions, despite the initial resolve of each team individually to avoid such an outcome.  At least we can comfort ourselves that nobody really died, and that the whole point of such ‘virtual’ destruction in wargames is to help us to understand crisis dynamics and so make such escalation in the real world even more unlikely….

Replace “nuclear strikes” with “boss fight” or “confronting the dragon in his lair” and you pretty much have every D&D game ever. Phil may be more of a traditional grognard than an RPGer, but it is a gift indeed to be able to nudge participants in such a way that they don’t feel nudged, while giving them the freedom to make real choices.

Similarly, in the Brynania simulation, my task as CONTROL is to facilitate exploration of a plausible path of civil conflict and (hopefully) peacebuilding, while not allowing the game to get distracted or derailed. Doing so requires the subtle use of initial scenario and game injects, but in a way that players are—again—making real choices with real consequences. Certainly the outcomes over the years reveal a sort of bell-curve of results, with some more common than others, but none of them outliers in a way that would undercut the instructional purposes of the simulation.


Brynania simulation outcomes and events.


Primary peacebuilding mechanisms used in Brynania simulation.

I’m not the only RPGamer who feels this way. Tom Fisher is a fellow member of my local Montréal gaming group and DM extraordinaire, with an impressive record as a professional game designer and facilitator (he is codeveloper of AFTERSHOCK: A Humanitarian Crisis Game and the forthcoming Matrix Game Construction Kit, and has worked with the World Bank and various international financial intelligence agencies on games addressing financial crimes/corruption and strategic analysis). He had this to say on the topic in a recent email exchange:

I can say, without hesitation, that roleplaying games—particularly D&D—have led to the best jobs I’ve ever had.

There is a natural flow between being a gamer and professionally developing games, that much is obvious. What is less obvious, however, are the lessons derived from playing those games that do not directly impact game development. Role playing games, particularly the gamesmastering (facilitation) thereof engages, develops and encourages a particular way of thinking.

Much has been said about the need for outside-the-box thinking or lateral thinking. What is less discussed is how to train the mind to “think different,” as some marketing campaigns encourage. Roleplaying games, in their various forms, are a virtual goldmine for the development, testing and experimentation of thought, and ways of thinking.

Roleplay, at its best, teaches through gameplay to account for assumptions, test limits of rules, push the limits of established rules – in short, roleplay is a short course on iterative design: “a design methodology based on a cyclic process of prototyping, testing, analyzing, and refining a product or process. Based on the results of testing the most recent iteration of a design, changes and refinements are made. This process is intended to ultimately improve the quality and functionality of a design. In iterative design, interaction with the designed system is used as a form of research for informing and evolving a project, as successive versions, or iterations of a design are implemented.”

Iterative design thinking is, in my view, the foundation of critical, outside-the-box, and lateral thinking. The process of iterative design faces off actions based on assumptions against reactions based on real-world rules. As famously demonstrated by Tom Wujec’s Marshmallow Challenge, participants succeed by testing their assumptions against real-world effects (in that case, gravity and the relative strength of dry spaghetti).

The experiential and imaginary nature of roleplaying games requires reflection and forces a role-player to account for their assumptions when addressing a situation. In so many of my experiences delivering intelligence analysis or crime analysis courses, it is the recognition and testing of one’s assumptions that has been the lynchpin in achieving success in the training. Roleplaying games –and by extension immersive simulation exercises– are a crucible for developing the thought processes deemed so necessary and desired by modern institutions.

The experience of the gamesmaster, or facilitator, of roleplaying games adds a further level of complexity to the mix. Adult role-players, by their very nature, are an interesting bunch. Most tend to be well-read, quite intelligent, and universally challenging. As noted above, roleplay encourages the testing of limits, pushing of envelopes, and accounting for assumptions. So, a gamesmaster (GM) is confronted with a number of players –with their unique agendas– who inherently want to push the limits of the GM’s world-rules to achieve goals laid out by said GM designed to engage, thrill and enthrall each of the players. In short: herding cats. There is no more cost-effective short-course on diplomacy and small-team management than being a roleplaying game GM.

The complexity of gamesmastering (GMing) increases exponentially as GMs become involved in world-building. At the pinnacle of GMing is the world-building GM, who shapes world from thought to engage players in a truly immersive experience. Herein, the GM accounts for the cause-and-effect of player actions against the backdrop of an entire living world simulation. At this level, fluidity and iterative design are paramount to successful implementation and player-engagement, and will lead to a level of suspension of disbelief that will engage players not only logically in the gameplay, but emotionally, on a truly immersive level.

It is these skills of engagement, coupled with the role-player’s way of thinking, challenging and testing that have led to the best jobs I’ve ever had.

Much can be said about the nature of play and the strong links between creative play and language, physical, social/emotional, and cognitive development. Roleplaying games take this level of play to its limits, and push outward, not only encouraging growth, but in my opinion, forcing it, as new pathways of thought develop to deal with novel situations.

The elusive and mysterious “Tim Price,” prolific author of matrix game articles and scenarios, has certainly been known to frequently design and play RPGs. A certain former British military officer and gifted professional wargame consultant—let’s call him GLB—actually carries an image of the Advanced Dungeons & Dragons Dungeon Master’s Guide (above) surreptitiously taped to his clipboard to inspire him while facilitating serious games.

As for me, I’ve been playing D&D since the very first boxed three-volume set in the mid-1970s. Like the POLI 450 student quoted above, it’s fair to say that at the outset I too “never knew D&D could be so useful.”



Have your own experiences of using RPG skills in serious gaming? Post them in the comments section!

What is a megagame?

John Mizon has put together a very useful video on “what is a megagame?,” in which he explores the player interaction, immersion, and emergent gameplay that characterize the genre. It even features a few seconds from our own recent War in Binni game!

You’ll find more of John’s megagame videos here. A great deal of insight into designing and running a megagame can also be found at Jim Wallman’s No Game Survives blog.

Discussion welcome: (war)gaming the US as ally and adversary


I’ll be giving some thought over the next week to “(war)gaming the US as ally and adversary,” for a piece I hope to write soon. I have always been interested in how we model actors with murky or complex decision-making processes, as well as actors who may at times appear irrational (North Korea for example, or Qaddafi’s Libya). How much of this is simply different worldviews and interests, and how much of it is truly non-rational? How can pol-mil wargames best generate policy responses that reflect ideology, confirmation bias, pride, narcissism, bureaucratic infighting, and other non-realist determinants of strategic or operational behaviour?

In particular, in the coming months and years those of us outside the US who do national security gaming may need to consider:

  • How best to model unpredictable US behaviour (say, wavering alliance commitments) or behaviours that veer between supportive and threatening.
  • How best to model the US as a partial adversary or threat to national interests (for example, on trade policy, or liberal democracy).

Any ideas are welcome in the comments section.

Wikistrat: Turkey’s Intervention in the Syrian Civil War

In April 2016 Wikistrat completed two role-playing simulations that explored the dynamics of Turkish intervention in the Syrian civil war:

140 analysts from Wikistrat’s global community of 2,200 recently wargamed a scenario in which Turkey invades northern Syria to establish a buffer zone in the country’s Kurdish region.

The analysts were divided across two mirrored groups (Alpha and Bravo) which had seven teams of ten analysts each, playing Russia, Assad loyalists in Syria, Turkey, the Kurds, ISIS, anti-Damascus and Western-backed rebels, as well as Iran and its proxies.

The two groups progressed simultaneously from the same starting scenario. But the divergent courses they took revealed key insights into some of the main actors and dynamics in the Syrian Civil War.

Key Findings

  • In the event of a Turkish intervention in Syria, provided Turkish forces stayed within a ten-kilometer buffer zone and avoided direct confrontation with Russia, they would likely not face significant pressure to withdraw — and could even gain international support if they were able to stabilize the border and slow the flow of refugees to Europe.
  • Assad has an interest in encouraging Russian and Kurdish coordination in Kurdish-held areas in order to free resources to fight anti-Assad rebels in the north.
  • Anti-Assad rebels are likely to suffer greatly in the face of escalating tensions, as their backers (e.g., the U.S. and Turkey) will be hesitant to increase the risk of hostilities with Russia by providing them with significant support.
  • The potential for NATO involvement in Syria will likely constrain Turkish, U.S. and European actors far more than Russia.
  • If Russia manages to keep its focus on ISIS while checking Turkey, it could gain significant international public opinion support which could be leveraged on behalf of Assad.
  • ISIS aggression was a major determinant regarding the direction and intensity of both games. However, ISIS aggression was more likely to result in sustained victory if the focus was on insurgent warfare in Syria (e.g., an attack on Russian forces within Syria) rather than terrorist attacks abroad (e.g., an attack against Russia itself).

The findings are interesting to compare with actual developments since the analysis was undertaken, notably the launching of Operation Euphrates Shield in August 2016 against ISIS and, even more so, the PYD/YPG (Syrian Kurds, and their allies in the Syrian Democratic Forces), and recent Russian-Turkish-Iranian cooperation on a ceasefire and proposed Syrian peace negotiations.

You’ll find the full report at the Wikistrat website. For more on their role-play methodologies, see here.

h/t Shay Hershkovitz

Reflections on the wargame spectrum

Colin Marston (Dstl) passed on to me some slides (public domain identifier PUB098428) presented at the recent MORS wargaming special meeting which address the range of wargaming approaches and methodologies. Given the growing interest in wargaming—what it is, what it can do, and how it might do it—I thought they would be of interest to PAXsims readers. I’ve also inserted a few thoughts of my own.

You’ll find the full set of slides here (ppt) and here (pdf).

20161018_MORS UK Allies panel briefing v1.0 (PUB098428)_O_PAXsims.jpg

The first set of slides suggests that wargames can be differentiated by the level of analysis (strategic vs. operational vs. tactical), by the nature of the problem (bounded and clear, or wicked and messy), and by the type of adjudication used (open/free versus rigid and rules-based). I would have probably listed the adjudication issue last, because the choice of appropriate methodology can really only be made once you are clear on what sorts of question(s) you are trying to answer.

The slides don’t say much about purpose. Elsewhere, Graham Longley-Brown does so, noting the divide between analytical and training/education games:

Areas-of-Wargame-Use-v1.6.png

While that differentiation is useful because it points to important differences in purpose and hence design, I’ll admit that I’ve been increasingly interested in the extent to which we might be able to develop hybrid games—that is, wargames that serve an education/training function, but in which participants are also generating data that is of analytical value too. My own Brynania civil war/peacebuilding game at McGill, for example, is designed for educational purposes but has now been used to generate data for two PhD theses (one on terrorist violence, the other on educational gaming). While there’s a risk of compromising analytical rigour or educational effectiveness in doing this, it could also provide a useful way of stretching limited resources.

The Dstl presentation goes on to discuss which game approaches are often of value in which contexts:

[Slide: MORS UK Allies panel briefing v1.0 (PUB098428), slide 3]

Here they comment:

On this slide the top blue line represents the different levels within the problem space.  The red, middle line represents types of adjudication.  The bottom green line indicates the different levels of complexity.  On top of these axes we have the types of wargames that we employ in Dstl and across the MOD.  Please note that these techniques are not limited to their positions on the axes.  We find that the techniques on the left of the spectrum generally provide more opportunity for original thought and creativity (imagination). In addition, methods at this end of the spectrum generally provide an opportunity for doing lots of Courses of Action with little depth – so essentially short games that might last a couple of hours to a day.  The methods on the right can provide increasing depth, but are often slower to set up and run. These methods generally employ more rigorous and precise techniques – although that does not necessarily mean that they give more accurate outputs.  All of these approaches have their merits, some being better at trying to answer certain questions than others. So, when appropriate, we try to use a combination of different approaches.

They also identify some “essential elements” of a wargame:

[Slide: MORS UK Allies panel briefing v1.0 (PUB098428), slide 4]

Now, the type of game that we use is just one part of the process. This slide highlights the other factors that we need to consider. There’s no fixed order in the way we tackle these – it’s an iterative process and depends on the question.

The wargame is not the simulation. The simulation is but a subset of a wargame.

Effective communication and transparency are crucial throughout the whole of the wargaming process, and it is crucial that everyone – from the players to the customers – is involved at the relevant stages.

[Slide: MORS UK Allies panel briefing v1.0 (PUB098428), slide 5]

The optimal approach to providing decision support is often to fuse the information derived from both human-in-the-loop and non-human-in-the-loop techniques.

There are many different types of wargames, and careful consideration should be given as to which type, or types, of game are most appropriate for a particular problem. Also, wargaming should often NOT be used in isolation but as part of a broader analytical and/or iterative process that incorporates a range of different techniques.

Feel free to add your own thoughts in the comments section.
