PAXsims

Conflict simulation, peacebuilding, and development

Category Archives: methodology

Setting the (wargame) stage

Slide1.jpeg

I delivered a (virtual) presentation today to the Military Operations Research Society wargaming community of practice on the importance of “chrome,” “fluff,” and other finer touches in promoting better game outcomes through enhanced narrative engagement. Having forgotten to set a calendar reminder, I was fifteen minutes late for my own talk, which only served to reinforce the stereotype of the absent-minded professor. Apologies to everyone who had to wait!

The full set of PowerPoint slides is available here (pdf). Since the content may not be entirely self-evident from the slides, I’ll also offer a quick summary.

Slide4.jpeg

First, I argued—in keeping with Perla and McGrady’s discussion of “Why Wargaming Works”—that narrative engagement is a key element of good (war)game design and implementation.

Slide6.jpeg

In addition to their experience-based, qualitative argument, I adduced some quantitative, experimental data that shows that role-playing produces superior forecasting outcomes…

Slide8.jpeg

…and that the way we frame and present games has profound effects on the way players actually play them.

Slide9.jpeg

I also noted a substantial literature on the psychology of conflict and conflict resolution that points to the importance of normative and other non-material factors in shaping conflict and negotiating behaviour.

Slide11.jpeg

In other words, if your games don’t have players feeling angry, or aggrieved, or alienated, or attached to normative and symbolic elements, those players are acting unrealistically. Since the selling point of wargaming is that it places humans in the loop, you need those players playing like real humans, not like technocratic, minimaxing robots.

Doing that, I suggested, requires nudging participants into the right mindset. One has to be careful not to overdo it, however—some participants may recoil at role-play fluff that makes it all look like a LARP or a game of D&D.

What followed was a discussion of some considerations and techniques I have used, which was also intended to spark a broader conversation. Specifically, we looked at:

  • Player backgrounds and role assignment will influence how readily participants internalize appropriate perspectives.
  • Briefing materials should be designed to subtly promote desired perspectives and biases (without being too obvious about it). Things like flags, maps, placards, and so forth can all be used to make players identify more closely with their role.
  • In repeated games—for example, wargames in an educational setting that might be conducted every year—oral traditions and tales from prior games can make the game setting richer and more authentic (although at the risk of players learning privileged information from previous players). Participants might also contribute background materials, chrome, or fluff that you can use in future games—such as the collection of songs from Brynania that my McGill University students have recorded over the past twenty years.

  • Very explicit objectives and “victory conditions” should be used sparingly, lest they promote an unrealistic sense of the rigidity of policy goals and encourage excessively “tick-the-objective-boxes” game play.
  • Physical space should be used to subtly shape player interaction—whether to foster it, limit it, or even create a sense of isolation and alienation.
  • Coffee breaks and lunch breaks should be designed NOT to pull players out of their scenario headspace. The last thing you want is Blue and Red having a friendly hour over lunch talking about non-game matters in a scenario where they are supposed to distrust or even hate each other.
  • Fog and friction should be promoted not only to model imperfect information and imperfect institutions/capabilities, but also to subtly promote atmospheres of uncertainty, fear, crisis, panic, frustration, and similar emotional states, as appropriate to the actors and scenario.
  • The graphic presentation of game materials should encourage narrative engagement and immersion. Avoid inappropriate fonts and formats, make things look “real,” and be aware that game graphics can very much affect how players (and analysts) perceive the game and its outcomes.

A variety of other issues came up in the Q&A and discussion. Many thanks to everyone who participated—I hope you found it as useful as I did.

Slide18.jpeg


CNA: After the wargame

cna-logo

In the third part of their wargaming trilogy, the CNA Talks podcast explores data collection and analysis in professional wargames:

In part three of our occasional series on wargaming, CNA’s chief wargame designer Jeremy Sepinsky returns, accompanied by Robin Mays, research analyst for CNA’s Gaming and Integration program, to discuss how they analyze the results of a CNA Wargame. Jeremy starts by describing the “hotwash” discussion that occurs immediately after a wargame concludes, and what insights participants often take away. Throughout this episode, Jeremy and Robin describe the type of information note takers record during a wargame, and how that data gets used in the final analysis. Using examples from actual wargames about logistics, medical evacuation and disaster relief, they explain how analysis reveals insights not readily apparent to those who played the game.

The link above also contains links to Parts 1 and 2.

Also, for those interested in game analysis, be sure to read the results of our DIRE STRAITS experiment on how analysts can influence (or bias) analysis.

Squeezing the Turnip: The Limits of Wargaming

The following piece has been written for PAXsims by Robert C. Rubel.



squeezing_blood_out_of_a_turnip.gif

“Measure it with a micrometer, mark it with chalk, and cut it with an axe” is an old adage that cautions us that the precision we can achieve in a project is limited by the least precise tool we employ. We should remember this wisdom any time we use wargaming for research purposes. Dr. John Hanley, in his dissertation On Wargaming, says that wargaming is a weakly structured tool that is appropriate for examining weakly structured problems; that is, those with high levels of indeterminacy – those aspects of the problem that are unknown, such as the identity of all the variables. Problems with lesser degrees of indeterminacy are more appropriately handled by various kinds of measurement and mathematical analysis. However, as the tools for simulation and the analysis of textual data become more sophisticated, the danger is that we will attempt to extract precision from wargaming that it is simply not appropriate to seek.

There are three aspects to this issue that we will address here: the inherent ability of wargaming to supply data that can be extrapolated to the real world, the development of “oracular” new gaming systems, and the number of objectives a particular wargame can achieve.

Peter Perla wrote, back in 1990, what remains the standard reference on wargaming, the aptly titled The Art of Wargaming. Of late there has been a lot of discussion online about wargaming as a science, or perhaps more precisely, the application of scientific methodology to wargaming. There is no doubt that a rigorous, disciplined, and structured approach to designing, executing, and analyzing wargames is a good and needed thing. Too often in the past this has not been the case, and lots of money, time, and effort have been wasted on games that were poorly conceived, designed, and executed. Worse, decisions of consequence have been influenced by the outcomes of such games. But even the most competently mounted game has its limits. In this writer’s view, games can indicate possibilities but not predict; judgment is required in handling their results.

It is one thing to use a game to reveal relationships that might not otherwise be detected.  A 2003 Unified Course game at the Naval War College explored how the Services’ future concepts were or were not compatible.  It was designed as a kind of intellectual atom smasher, employing a rather too challenging scenario to see where the concepts failed.  The sub-atomic particle that popped out was that nobody was planning to maintain a SEAD (suppression of enemy air defense) capability that would cover the entry of non-stealth aircraft into defended zones. This was a potentially actionable insight that came out of the game, based on actual elements of future concepts. When games are used this way they are revelatory, not predictive.

Where we run into trouble is when we attempt to infer too much meaning from what game players do or say. Dr. Stephen Downes-Martin has shown that game player behavior is at least partially a function of their relationships to game umpires, and so the linkage to either present or future reality is broken. Thus there are limits on the situations where player behavior or verbal/written inputs can be regarded as legitimate output of a game. There is a difference between having some kind of aha moment via observing player inputs and exchanges, and trying to dig out, statistically, presumed embedded meaning from player responses to questionnaires, interviews, or even interactions with umpires or other players.

A first cousin to the attempt to extract too much information from a regular game is the attempt to create some new form of gaming that will be more revelatory or predictive than current practice can achieve. Most of these are some riff on the Delphi method, whether a variation of the seminar game or some kind of massively multiplayer online game. I know of none that have justified the claims of their designers, and in any case they seem to violate the basic logic Downes-Martin lays out: the problematic connection between game players and the real world. When I was chairman of the Wargaming Department at the Naval War College I challenged my faculty to advance the state of the art of wargaming, but always within the bounds of supportable logic. My mantra was “No BS leaves the building!”

Even if a game is conceived and designed with the above epistemic limitations in mind, there is still a danger that the sponsor will try to burden it with too many objectives. This was a common problem with the Navy’s Global Wargames in the late 1990s. Tasked to explore network-centric warfare, the games became overly large and complex, piling on objectives from multiple sponsors and creating a voluminous and chaotic (not to mention expensive) output that was susceptible to interpretation in any way a stakeholder wanted.

The poster child of all this was Millennium Challenge 02, a massive “game” involving over 35,000 “entities” embedded in the supporting computer simulation, many game cells, as well as thousands of instrumented troops, vehicles, ships, and aircraft in the field and at sea. Not only was the underpinning logic and design flawed – attempting to stack a game on top of field training exercises – but the multiplicity of objectives obfuscated any ability to extract useful information. As it turned out, the game was sufficiently foggy to spawn suspicion of its intended use in the mind of a key Red player, retired Lieutenant General Paul Van Riper, and his post-game public criticisms destroyed any credibility the game might have had (I observed the game standing behind him as he directed his forces).

Modesty is called for. While we might approach game design scientifically, and there are certain scientific philosophies upon which game analysis can be founded, gaming itself is not some form of the scientific method, even though rigor and discipline are necessary for success. An example of a good game was one run at the Naval War College in the spring of 2014 for VADM Hunt, then director of the Navy Staff. The game was designed around the question “How would fleet operators use the LCS if it had various defined characteristics?” Actual fleet staff officers were brought in as players and they worked their way through various scenarios. What made a difference in the game was the effect that arming the LCS with long-range anti-ship missiles had on opposition players. The insight that VADM Rowden, Commander Surface Force, took away was that distributing offensive power around the fleet complicated an enemy’s planning problem. One could consider this a blinding flash of the obvious, but in this case it was revelatory in terms of the inherent logic of an operational situation. Trying to squeeze more detailed insights from the game, such as the combat effectiveness of the LCS, might have fuzzed the game’s focus and prevented the admiral from gaining the key insight. He translated that insight into the concept of distributed lethality, now codified into the more general doctrine of Distributed Maritime Operations.

In a very real sense, games are blunt instruments, the analogue of the axe in the old saying. Like the axe, though, they can be very useful. In this writer’s opinion – informed by many years of gaming – the best games, in terms of potential for yielding actionable results, are focused on just a couple of objectives. That said, in my experience, the most valuable insights are sometimes the ones you don’t expect going in. In fact, some of the most influential games I have seen were essentially fishing expeditions. In 2006 the Naval War College conducted a six-week-long strategy game to support the development of what became the 2007 A Cooperative Strategy for 21st Century Seapower (CS21). Going in, we did not know what we were looking for, but in the end a somewhat unexpected insight emerged (“it’s the system, stupid”) that ended up underpinning the new strategic document. “Let’s set up this scenario and see what happens” is an axe-like approach that must not then be measured with a micrometer.


Captain (ret) Robert C. (“Barney”) Rubel served 30 years active duty as a light attack/strike fighter aviator.  Most of his shore duty was connected to professional military education (PME) and particularly the use of wargaming to support it.  As a civilian he worked first as an analyst within the Naval War College Wargaming Department, later becoming its chairman.  In that capacity he transformed the department from a mostly military staff organization to an academic research organization.  From 2006 to 2014 he served as Dean of the Center for Naval Warfare Studies, the research arm of the Naval War College. Over the years he has played in, observed, designed, directed, and analyzed numerous wargames of all types and written a number of articles about wargaming.  For the past four years he has served as an advisor to the Chief of Naval Operations on various issues including fleet design and PME.


CNA Talks: Playing a Wargame

cna-logo

CNA’s occasional podcast series discusses how to play a wargame.

In part two of our occasional series on wargaming, CNA’s chief wargame designer Jeremy Sepinsky returns, accompanied by Chris Steinitz, director of CNA’s North Korea program, to discuss what it’s like to play a CNA Wargame. Jeremy describes the different players in a wargame, emphasizing the value of people with operational experience who can accurately represent how military leaders would make decisions. Jeremy and Chris lay out the differences between playing Blue team and Red team. They also take us down the “road to war,” describing how the wargaming team lays out the scenario that starts the game. Finally, Chris and Jeremy take us through the players’ decisions and how the results of a turn are adjudicated.

Lin-Greenberg: Drones, escalation, and experimental wargames


WoTRdrones.png

At War on the Rocks, Erik Lin-Greenberg discusses what a series of experimental wargames reveal about drones and escalation risk. The finding: the loss of unmanned platforms presents less risk of escalation.

I developed an innovative approach to explore these dynamics: the experimental wargame. The method allows observers to compare nearly identical, simultaneous wargames — a set of control games, in which a factor of interest does not appear, and a set of treatment games, in which it does. In my experiment, all participants are exposed to the same aircraft shootdown scenario, but participants in treatment games are told the downed aircraft is a drone while those in control games are told it is manned. This allows policymakers to examine whether drones affect decision-making.

The experimental wargames revealed that the deployment of drones can actually contribute to lower levels of escalation and greater crisis stability than the deployment of manned assets. These findings help explain how drones affect stability by shedding light on escalation dynamics after an initial drone deployment, something that few existing studies on drones have addressed.

My findings build upon existing research on the low barrier to drone deployment by suggesting that, once conflict has begun, states may find drones useful for limiting escalation. Indeed, states can take action using or against drones without risking significant escalation. The results should ease concerns of drone pessimists and offer valuable insights to policymakers about drones’ effects on conflict dynamics. More broadly, experimental wargaming offers a novel approach to generating insights about national security decision-making that can be used to inform military planning and policy development.

You will find a longer and more detailed account of the study here.

This is a good example of using multiple wargames as an experimental method. Above and beyond this, it also shows how wargames can generate questions worthy of further investigation.
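The control-versus-treatment design described above lends itself to a simple comparison of escalation rates between the two sets of games. The sketch below is illustrative only: the counts are entirely hypothetical (not from Lin-Greenberg’s study), and the function is a standard two-proportion z-test rather than anything specific to his analysis.

```python
from math import sqrt, erf

# Entirely hypothetical counts, for illustration only: how many game teams
# escalated in the control (manned aircraft) games vs. the treatment (drone)
# games.
control_escalated, control_n = 9, 12      # manned shoot-down scenario
treatment_escalated, treatment_n = 3, 12  # drone shoot-down scenario

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two escalation rates."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Normal approximation to the two-sided p-value, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(control_escalated, control_n,
                        treatment_escalated, treatment_n)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With samples this small a Fisher exact test would ordinarily be preferred; the point is only that nearly identical control and treatment games allow exactly this kind of direct statistical comparison.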

More specifically, while the loss of a drone is less escalatory, an actor might be more likely to introduce a drone for this reason—possibly deploying one in a situation where they would not have risked a manned platform. If this is true, however, drones may still prove more escalatory overall. In other words, if the wargame is expanded to include the prior decision to deploy assets in the first place, the actual outcome might have been something like this:

  • Blue scenario 1: Deploy manned platform?
    • No, too risky.
    • No platform deployed.
    • Nothing shot down.
    • Result: No escalation.
  • Blue scenario 2: Deploy drone?
    • Yes, because no pilot at risk.
    • Drone shot down.
    • Result: Minor escalation.

Or, with regard to another situation—perhaps local air defences would have been reluctant to engage a manned aircraft because of the evident risk of escalation, but would happily shoot down a drone. In this case the experimental findings might have been:

  • Red scenario 1: Shoot down aircraft?
    • No, too risky.
    • Nothing shot down.
    • Result: No escalation.
  • Red scenario 2: Shoot down drone?
    • Yes, because no pilot at risk.
    • Drone shot down.
    • Result: Minor escalation.

In fact, if you read the full paper you will see this is exactly what occurred in a scenario involving a shoot-down decision: participants were much more likely to use force against a drone.

In other words, while the study suggests that drones might reduce the chance of escalation, it also suggests that we need to investigate whether the lower perceived risk of drone-related escalation might cause Blue to undertake more provocative overflights, or might lead Red to undertake more potentially escalatory shoot-downs.

Figure 1 below shows the main experiment: aircraft shoot-downs lead to major escalations, drone shoot-downs to minor escalation.

Slide1.jpeg

Figure 1: Experimental results suggest shoot-down of manned aircraft results in greater escalation.

Given the risk of escalation, however, decision-makers might decide against overflight in the first place.

Figure 2 examines a situation where no drones are available. It incorporates the possibility that decision-makers simply refrain from overflight because of the escalation risk, and assigns a (plausible but entirely made-up) probability to this. Moreover, knowing that a shoot-down of a manned aircraft is likely to cause escalation—a tendency noted in Lin-Greenberg’s other experiment—perhaps Red won’t actually open fire. Again, I have assigned a (plausible) probability to this. These numbers are just for purposes of illustration, but here we note that with manned overflight as the only option there is a 16% chance of escalation.

Slide3.jpeg

Figure 2: Considering other decision points. Should Blue even send an aircraft, given risk of escalation? Should Red engage it, given the risks?

In this fuller model, let us now introduce drones (Figure 3). Given that they are less likely to cause escalation, let us assume that (1) Blue is likely to prefer them over a manned ISR platform (as per earlier findings), (2) Red is more likely to shoot them down, and (3) shooting down a drone causes minor rather than major escalation. Once again, I’ve assigned some plausible probabilities for the purposes of illustration.

Slide4.jpeg

Figure 3: Adding drones to the mix.

When we add drones into the mix, the risk of major escalation drops from 16% to 4%, but the risk of some form of escalation actually increases to 60%. Does this mean that drones have limited the risk of escalation, or increased it? Moreover, it is possible that tit-for-tat minor escalation over drone shoot-downs could grow over time into major escalation. If that were the case, it is possible that drones—rather than limiting conflict—are a sort of easy-to-use “gateway drug” to more serious problems.
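The decision-tree arithmetic behind those figures can be sketched in a few lines. Every branch probability below is invented, chosen only so that the totals reproduce the 16%, 4%, and 60% figures quoted above; it is the structure of the tree, not the numbers, that matters.

```python
# Figure 2: manned overflight is the only option.
p_overfly_manned = 0.4   # Blue risks a manned overflight at all
p_fire_at_manned = 0.4   # Red engages a manned aircraft
p_major_manned_only = p_overfly_manned * p_fire_at_manned
print(f"Manned only, major escalation: {p_major_manned_only:.0%}")

# Figure 3: drones added to the mix.
p_choose_drone = 0.8     # Blue prefers the drone (no pilot at risk)
p_choose_manned = 0.1    # Blue still occasionally flies a manned mission
p_fire_at_drone = 0.7    # Red is far more willing to engage a drone

p_major = p_choose_manned * p_fire_at_manned   # manned shoot-down -> major
p_minor = p_choose_drone * p_fire_at_drone     # drone shoot-down -> minor
print(f"With drones, major escalation: {p_major:.0%}")
print(f"With drones, any escalation:   {p_major + p_minor:.0%}")
```

Changing any of these invented probabilities changes the verdict, which is exactly the point: whether drones dampen or multiply escalation depends on decision points that sit outside the original shoot-down scenario.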

Remember that I’ve essentially invented all of my probabilities to make a methodological point (although I have tried to make them plausible). My point here is not in any way to criticize Lin-Greenberg’s experimental findings—I suspect he is right. It is to say that the two sets of wargame experiments he undertook are useful not only for their immediate findings, but also to the extent that they generate additional questions to be investigated.


Beware the confidence heuristic

This quick tweet today by political psychologist Philip Tetlock caught my eye, since it has important implications for serious policy gaming.

As I have noted elsewhere, research on political forecasting (including Tetlock’s seminal book Expert Political Judgment (2005), as well as his work with colleagues on the Good Judgment Project) has highlighted the greater efficacy of cognitive “foxes” (those not overly attached to a single paradigm) and Bayesian updaters in correctly anticipating future outcomes. By their very nature, such individuals are willing to accept new information and change their views accordingly.
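Bayesian updating is easy to illustrate with a toy forecasting example. All of the numbers below are hypothetical, chosen purely to show the mechanics of Bayes’ rule:

```python
# A forecaster revises the probability of a crisis after observing a
# mobilization. All values are hypothetical, for illustration only.
prior = 0.20                 # initial estimate that a crisis will occur
p_evidence_if_crisis = 0.70  # chance of seeing mobilization if crisis is coming
p_evidence_if_none = 0.10    # chance of seeing it anyway (exercises, bluffing)

# Bayes' rule: P(crisis | evidence observed)
numerator = p_evidence_if_crisis * prior
posterior = numerator / (numerator + p_evidence_if_none * (1 - prior))
print(f"Updated forecast: {posterior:.0%}")  # rises from 20% to about 64%
```

A good Bayesian updater treats each new observation this way, revising incrementally rather than clinging to the prior or lurching to certainty.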

By contrast, groups (including teams within wargames or other serious games) may be heavily swayed by persuasive, overconfident rhetoric—the “confidence heuristic” referenced in the linked Bloomberg article. In many settings—especially with military participants—this dynamic may be further aggravated by the effects of hierarchy and rank. As a result, confident pronouncements by senior leaders may obscure uncertainty and drive out differing views, even when the uncertainty is important and the differing views might be correct.

overconfidence.gif

Much depends on the mix of individuals and group dynamics at work during the game, then, as well as the analysis and aggregation methods used to assess game findings.

For more insight into individuals, groups, and forecasting, I strongly recommend Superforecasting: The Art and Science of Prediction (2015), a highly readable book by Tetlock and Dan Gardner. Nate Silver (of FiveThirtyEight fame) stresses the importance of Bayesian updating in The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t (2012).

For a few brief thoughts of my own, see my presentations earlier this year on Wargaming and Forecasting (Dstl) and In the Eye of the Beholder? Cognitive Challenges in Wargame Analysis (Connections UK, audio available here).

Will to fight

Back in July, we mentioned Ben Connable’s presentation on “the will to fight” at the Connections US wargaming conference. Now we are pleased to post links to the two recently-released RAND studies on the military will to fight (Connable et al, 2018) and national will to fight (McNerney et al, 2018):

Will to fight may be the single most important factor in war. The U.S. military accepts this premise: War is a human contest of opposing, independent wills. The purpose of using force is to bend and break adversary will. But this fundamental concept is poorly integrated into practice. The United States and its allies incur steep costs when they fail to place will to fight at the fore, when they misinterpret will to fight because it is ill-defined, or when they ignore it entirely. This report defines will to fight and describes its importance to the outcomes of wars. It gives the U.S. and allied militaries a way to better integrate will to fight into doctrine, planning, training, education, intelligence analysis, and military adviser assessments. It provides (1) a flexible, scalable model of will to fight that can be applied to any ground combat unit and (2) an experimental simulation model.

What drives some governments to persevere in war at any price while others choose to stop fighting? It is often less-tangible political and economic variables, rather than raw military power, that ultimately determine national will to fight. In this analysis, the authors explore how these variables strengthen or weaken a government’s determination to conduct sustained military operations, even when the expectation of success decreases or the need for significant political, economic, and military sacrifices increases.

This report is part of a broader RAND Arroyo Center effort to help U.S. leaders better understand and influence will to fight at both the national level and the tactical and operational levels. It presents findings and recommendations based on a wide-ranging literature review, a series of interviews, 15 case studies (including deep dives into conflicts involving the Korean Peninsula and Russia), and reviews of relevant modeling and war-gaming.

The authors propose an exploratory model of 15 variables that can be tailored and applied to a wide set of conflict scenarios and drive a much-needed dialogue among analysts conducting threat assessments, contingency plans, war games, and other efforts that require an evaluation of how future conflicts might unfold. The recommendations should provide insights into how leaders can influence will to fight in both allies and adversaries.

The former study in particular examines the way in which wargames do or do not model “will to fight,” and suggests some key lessons for future wargame design:

Adding will to fight changes combat simulation outcomes

  • Most U.S. military war games and simulations either do not include will to fight or include only minor proxies of it.
  • However, the simulated runs performed for this report showed that adding will-to-fight factors always changes combat outcomes and, in some cases, outcomes are significantly different.

Recommendations 

  • U.S. Army and Joint Force should adopt a universal definition and model of will to fight.
  • Include will to fight in all holistic estimates of ground combat effectiveness.
  • War games and simulations of combat should include will to fight.

Design Matters: Tiny Epic Zombies…and Glasses

Design Matters: A series on matters relating to design, and why design thinking matters.

Rex Brynen and I recently playtested Rex’s brand-new copy of Tiny Epic Zombies. Our ensuing after-play discussion got us thinking about the game and certain common, irksome points we thought were design pitfalls to be avoided in any game, whether destined for the entertainment market or geared toward the serious gaming and educational spheres. Thus the idea of Design Matters was born.

Tiny Epic Zombies – A Game of Brutal Survival
www.gamelyngames.com
www.gamelyngames.com/tiny-epic/tiny-epic-zombies-deluxe

Watch it Played
https://youtu.be/O9u8VXz8u80

I LOVE Gamelyn Games. I do. I own every single one of their games, love the concepts, adore the themes, am awed by the artwork, and am thrilled with the simple—yet engaging—rulesets: all in small, inexpensive packages.

I say this because, while I do enjoy the theme, concept, and art, Tiny Epic Zombies presents a few significant—and avoidable—problems that should serve as a lesson to all game designers.

Size matters.

Tiny Epic games are not small, by any means, in their effect or entertainment value. Where Tiny Epic Zombies (TE:Z) comes up short is in its small fonts.

Graphic design is about much more than making something pretty. The fundamentals of graphic design deal with visual communication, the key word being communication. If information is not being clearly and effectively communicated, it can severely impede gameplay. If this is an intended effect—to frustrate or slow players down—it can be an effective tool. Unfortunately, in the case of TE:Z it is not. Sometimes icons or text are impossible to read at any reasonable distance.

From a graphic design perspective, TE:Z comes up short in several places: parts of the rulebook, certain objective cards, and some mall map cards. This author and Rex Brynen both had difficulty discerning the text on certain cards without picking up the card and playing with the distance—putting on glasses, adjusting glasses, removing glasses—or resorting to the magnifier function of an iPhone to read some text. In one particular case it was absolutely impossible to discern what icon was being used on an objective card. Not difficult, not challenging, but impossible. The font size used on the Investigate the Source objective card, for example, was simply too small: the print resolution would not allow the icon in the text to be seen as anything other than a circle with a blob. This inexcusable error in graphic design was immensely frustrating, and forced us to work backwards, trying to figure out what the icon could possibly be. The design decision to go with such an impossibly small icon is confounding.

It is always important to remember that, particularly in game design, form should follow function. Game enjoyment and engagement depend so much on a suspension of disbelief that any shock to the system that pulls us out of the game experience detracts from it. Stopping the action to peer over a card, squinting to read text, is anathema to a positive game experience.

Contrast this user experience (UX) with the thoroughly adorable and fun ITEMeeples Gamelyn produces for TE:Z. ITEMeeples are iconic, specialized plastic avatars with holes in them for placing “reminder” items—representing weapons—on a player’s character piece. While fundamentally unnecessary to gameplay, they add so much enjoyment, fun, and suspension of disbelief (“no, I really am carrying a chainsaw!”) that they become an intrinsic part of the game experience. Indeed, their inclusion in a number of Tiny Epic games makes one wonder how we ever gamed without them.

This fabulous attention to detail in one aspect of the game experience, while ignoring it in another, should serve as a cautionary tale to game designers: everything matters.

Location, location, location

The Echo Ridge Mall is the nexus of this little slice of this apocalyptic zombie outbreak. It is beautiful, with a richness of art that I admire tremendously.

However, in our play test this richness in detail sometimes became problematic. Each of the separate “stores” has any or all of its own written rules box, objective placement icons, room numbers, or secret passages. These elements get lost in the richness of the art at tabletop distances. If our two-player test had trouble, I can only imagine the difficulty five players, huddled around a large table in a semi-lit room, would have discerning what they were supposed to do on a given card. Certainly, after one has played through a few rounds, the card-store effects become second nature, but having to pick up a piece of the map in order to read what you’re supposed to do—displacing items, meeples, and tokens—is problematic.

Further, unlike other Tiny Epic games I've played through, the precise placement of the cards can be quite important. Each of these store location-cards is divided into three rooms, which are bordered by thick walls. Each card, in turn, is bordered by this same thickness of wall, creating a discrete, modular store. Eight (8) of these stores surround a central courtyard in the layout pictured below. Gamelyn produces a TE:Z Gamemat and online visual aid to lay this out.

Where the placement of cards in other Tiny Epic games matters only insofar as they are adjacent to one another or not, TE:Z's location-cards are placed and played directly against one another. This impacts movement, shooting, and card legibility.

The problems with this scheme are manifold:

Some cards will be placed upside-down. This would not matter except that many rules are written on the location-cards themselves, so many cards' rules will be upside-down relative to the player. Add to this the font-size problem discussed above, and early play grinds to a halt as players jockey for position to read a card, or have to pick up said location in order to proceed.

This, in turn, leads to another fiddly problem: position matters. Each location-card has one main "opening" or entrance; otherwise it is bounded by a solid-line wall. Players may move through walls, as they are presumed to find or make gaps through (strangely weak?) mall walls. If players pick up and replace location-cards, jostle them during gameplay, or accidentally shift their position in any way, this can dramatically affect movement, shooting, tactics, and approach to gameplay. The Gamelyn-produced TE:Z Gamemat-for-purchase addresses this somewhat, but this particularly fiddly scheme could have been more easily solved with a simple graphic element: an alignment arrow in the middle of each card edge.

As walls are so fundamentally important to the gameplay, it struck us as very strange that all walls were clearly drawn and of uniform width except for the central courtyard walls. Where all location-cards' rooms are very clearly delineated by thick walls of uniform width, the central courtyard is divided into five (5) sections by markedly thinner walls. These walls are so different that we didn't even consider them walls when playing through the game in our play test. Only upon careful review of the rules did we realize, thanks to a simple qualifying statement (p. 8: "*Note: the Courtyard has 5 rooms*"), that these were meant to be walls, and that the courtyard was not simply one large room. This would have substantially altered our game outcome. The lack of consistency in the application of this design element is inexplicable to me.

The decision to go this particular route with location-cards (stores) has another side-effect: the playmap neither looks nor feels like a mall. Referring back to the suspension of disbelief and user experience (UX) design discussed above: a decision was made to create this particular schema, and it took Rex and me out of the game. When something doesn't feel like what it is expected to be, a cognitive disconnect occurs that informs gameplay. This can be a powerful tool when implemented properly, or a distracting nuisance when accidental. The result, for us, was a persistent feeling that something didn't quite feel right.

Dissociative Personality Disorder (AKA I can do what?)

On that same front, we questioned the abilities of a number of the Player Cards. Not so much the abilities themselves as their association with the names of the Player Cards.

User experience (UX) is a tricky and very particular aspect of any game design to master, largely because it relies on fickle and finicky human emotion, response, behaviour, and expectation. Designers can use psychology, the senses, and numerous devices to shape this experience. Gotten right, a game's UX can overcome many a shortcoming; gotten wrong, it can detract from the pleasure of play.

There are specific instances where the player has a reasonable expectation of what a particular Player Card should allow the player to do:

Athlete Card: enables greater movement
Burglar Card: expanded item acquisition powers
Mechanic Card: better at repairs

When this expectation (based purely on name) meshes with the effect of a particular card, the result is pleasing and harmonious: a triumph of UX design.

When this does not:

Fry Cook Card: somehow makes less noise?
Photographer Card: ending your turn in a store with 2 zombies results in finding ammunition?
Scientist Card: if any other player kills three or more zombies gain ammunition?

a disconnect results as players question the meaning or source of these effects. While not insurmountable, the unintended consequence of a naming convention, and the resultant cognitive dissonance when an effect does not match one's expectation, is entirely avoidable.

If these character cards were named for persons instead of specific roles (Mary instead of Photographer), there would be no reasonable expectation of effect: why couldn't she, say, see things better with her zoom lens, improving search? While this won't break a game, it will distract, and distractions of this type almost always lessen enjoyment. Any time a player begins questioning what the designer was thinking, that player is out of the game.

What Went Right

The above should serve as cautionary reminders to PAXsims' community of game designers and enthusiasts: every aspect of a game needs to be considered. A solid theme, idea, or ruleset is not enough; a designer needs to communicate clearly and shape gameplay with intention, or the game experience can suffer.

However, when you do get things right —as Gamelyn often does— you can create great experiences.

Excepting the above, TE:Z remains an enjoyable game because what it gets right it gets really right.

Some design shortcomings aside, the game art is simply fantastic. The clear theme carries through the game's spectacular card and box art. The game's art direction truly sets the stage for the coming zombie apocalypse: before the players even open the box, the stage has been set, then reinforced. Gamelyn, in my view, always gets this right. This is the campy, fun zombie game experience you want, complete with the pièce d'art of the contemporary gaming world: ITEMeeples.

ITEMeeples add more fun and thrill to the game than tiny pieces of plastic have any business doing; they are near magical. The excitement of attaching a chainsaw or assault rifle to your character meeple is reminiscent of opening a surprise gift. Completely unnecessary to the rules, this component-based element of UX is spectacularly fun. Add a police car or motorcycle into which you can literally place your ITEMeeple, and you'll be making engine noises while moving your pawn, just as you did when you were pretend-driving in the back of your parents' car as a child. This level of engagement clearly demonstrates how well-chosen and well-designed components can directly impact the game experience. (It is a phenomenon we harnessed in developing MaGCK, using iconic images as aides-memoire for matrix gaming.)

Objectives (excepting some of their card design problems) are largely fun affairs in which the ongoing challenge of risk-reward, balanced against time constraints and a little bit of greed (but I really want to pick up that bazooka in the other store!), played out, for us, down to the wire. The game achieves a great balance of ramping up danger while keeping you on the edge of your seat with interesting choices. Appropriate challenges and choices shape the game experience and flow; great care was taken in creating and testing these objectives, I am certain.

Once you get into the groove of the gameplay (one or two full turns to get up to speed), the game progresses quickly, satisfyingly ramping up intensity. If not for the distractions discussed above, the play is near seamless, with decision points to test each player's resolve. Ease of access, ease of understanding, and a gradual learning curve benefit this (and any) game greatly.

The card-based AI works very well. We played cooperatively without a Zombie player, and the anticipation of each end-of-turn search-card's resolution kept us in some suspense. I look forward to playing a larger, competitive game with the full complement of five players to note the differing experience. (Knowing each location-card's ability will clearly be fundamental to this, I believe.) Scalability is a great aspect of the game: it is playable by one player up to a full complement of five.

Overall, while not my favourite Gamelyn gameplay experience, Tiny Epic Zombies remains a game I would replay. For PAXsims' readers' purposes, the game illustrates a number of avoidable design pitfalls that should be considered by game designers and producers:

Design matters:

We can see, in the example of TE:Z, that it is not enough for a game to be pretty (though sometimes it certainly helps!). While great visuals can immediately engage players, clarity and legibility are fundamental in rules layout, design, and ability descriptions. Form must follow function. Nothing is more frustrating than not being able to read a rule, card, ability, or effect.

Consistency is key. A lack of consistent application of design elements can —and often will— lead to misunderstanding and misplay, affecting the overall game experience. Design must be purposeful and mindful in order to lead the player to the game experience the designer wants. Any lapse in this regard will have unintended consequences.

Expectations must be mindfully considered and managed, as they shape a player's immediate opinion and impression. If something looks out of place, it creates an uncomfortable cognitive dissonance which, if purposeful, can be a powerful tool, but if accidental will detract from a game and risk running it off the rails.

Components and visuals can have a tremendous positive impact when properly implemented, or detract from gameplay when applied carelessly. The purposeful use of media will have an important impact on a game. (As discussed at Connections North in the presentation Grand Designs – Design Thinking in Games.)

An accessible learning curve, geared toward the target player, creates ease and comfort, allowing players to engage with the game quickly. The faster players can integrate the rules into their experience and simply engage with the theme of the game, the more effective the game will be.

In-stride adjudication (Connections 2018 working group report)

Stephen Downes-Martin has pulled together a 187-page (!) report on in-stride adjudication from the papers and discussion presented at the Connections US 2018 conference. You can download it here.

In-Stride Adjudication Working Group Report 20180908.jpg

Jane’s Intelligence Review on matrix gaming

The September issue of Jane’s Intelligence Review has an excellent article by Neil Ashdown assessing matrix games as an analytical tool.

Key points

  • Matrix games are comparatively simple wargames, emphasising creativity and original thought, which have been used by a range of government agencies and militaries.
  • These games are focused on the participants’ intentions, which makes them better suited for analysing political-military strategy and novel or obscure subjects, such as cyber security.
  • However, this technique is unsuitable for analysing granular tactical scenarios, and the games’ relatively low cost and complexity can reduce their attractiveness.

 

JIR1809_OSINT2

I would like to thank Neil and JIR for making it available (pdf copy at the link above) to PAXsims readers. If you are interested in reading more about the technique, there are many matrix gaming articles available here at PAXsims, the History of Wargaming Project has just published the Matrix Game Handbook, and you can purchase the Matrix Game Construction Kit (MaGCK) User Guide as a downloadable pdf.

How can we avoid risky and dishonesty shifts in seminar wargames?

iss_12137_00953.jpg

Stephen Downes-Martin has written up the discussion from another Connections game lab session, this time on How can we avoid risky and dishonesty shifts in seminar wargames?

The group identified three research questions, and identified and discussed nine ways in which the risky and (dis)honest shifts could be baselined, measured, controlled, or mitigated.

Two Behavior Shifts During Small Group Discussions

The (Dis)honesty Shift

Research indicates "that there is a stronger inclination to behave immorally in groups than individually," resulting in group decisions that are less honest than the individuals would tolerate on their own. "Dishonest" in the context of the research means that the group decisions break or skirt the ethical rules of the organization and societal norms, and involve cheating and lying. Furthermore, the group discussions tend to shift the individuals' post-discussion norms of honest behavior towards dishonesty. First the discussion tends to challenge the honesty norm; then inattention to one's own moral standards (during the actual discussion) and categorization malleability (the range in which dishonesty can occur without triggering self-assessment and self-examination) create the effect that "people can cheat, but their behaviors, which they would usually consider dishonest, do not bear negatively on their self-concept (they are not forced to update their self-concept)." The research indicates that it is the small-group communication causing the shift towards dishonesty that "enables group members to coordinate on dishonest actions and change their beliefs about honest behavior." The group members "establish a new norm regarding (dis)honest behavior." Appeals to ethics standards seem to be effective in the short term (Mazar et al.), but there is little evidence for long-term effectiveness.

The Risky Shift

Research into risky or cautious shifts during group discussion looks at whether and when a group decision shifts to be riskier or more cautious than the decisions the individuals would have made on their own. One element driving the shift appears to be who bears the consequences of the decision: the group members, people the group members know (colleagues, friends, family), or people the group members do not know. There is evidence that individuals tend to be myopically risk-averse when making decisions for themselves. Research indicates, however, that "risk preferences are attenuated when making decisions for other people: risk-averse participants take more risk for others whereas risk seeking participants take less." Whether the group shows a risky shift or a cautious shift depends on the culture from which the group is drawn, and the size of the shift seems to depend on the degree of empathy the group feels for those who will bear the consequences and risks of the decision.

Research into leadership shows that "responsibility aversion" is driven by a desire for more "certainty about what constitutes the best choice when others' welfare is affected," that individuals "who are less responsibility averse have higher questionnaire-based and real-life leadership scores," and that such individuals do not seek more certainty when making decisions that are risky for others than when making decisions that are risky for themselves alone. However, this research says nothing about the starting risk-seeking or risk-avoiding preference of the decision-making leader.

See the full paper (link above) for further discussion, including the footnotes (which have been removed from the excerpt above).

How can we credibly wargame cyber at an unclassified level?

253020.jpeg

The frighteningly-efficient Stephen Downes-Martin has been kind enough to pass on a game lab report from the recent Connections US 2018 wargaming conference on “How can we credibly wargame cyber at an unclassified level?”  (pdf).

A small minority of cyber experts with wargaming and research experience have security clearances. If cyber operations are researched and gamed only at high levels of classification, then we limit our use of the intellectual capital of the United States and its allies and put at risk our ability to gain an edge over our adversaries. We must find ways to wargame cyber[1] at the unclassified level while dealing with information security dangers, to best use the skills within academia, business, and the gaming community. During the Connections US Wargaming Conference 2018, a small group of interested people gathered for about an hour to discuss the question:

“How can we credibly wargame cyber at an unclassified level?”

The group concluded that it is possible to wargame cyber credibly and usefully at the unclassified level and proposed eight methods for doing so. The group also suggested it is first necessary to demonstrate and socialize this idea by gaming the trade-offs between the classification level and the value gained from wargaming cyber.

[1] "Wargaming cyber" and "gaming cyber" are loose terms which the group deliberately left as such to encourage divergent thinking and to avoid becoming too specific.

Experimenting with DIRE STRAITS

As PAXsims readers will know, the recent Connections UK professional wargaming conference featured a large political/military crisis game exploring crisis stability in East and Southeast Asia: DIRE STRAITS. This is the second time we have held a megagame at Connections UK, and, judging from last year's survey, they are popular with participants. This year we organized something that addressed a series of near-future (2020) challenges, set against the backdrop of uncertainties in Trump Administration foreign policy and the growing strategic power of China.

Pulp-O-Mizer_Cover_Image.jpg

We also conducted an experiment.

Specifically, we decided to use the game to explore the extent to which different analytical teams would reach similar, or different, conclusions about the methodology and substantive findings of the game. If their findings converged, that would provide some evidence that wargaming can generate solid analytical insights. If their findings diverged a great deal, however, that would suggest that wargaming suffers from a possible “eye of the beholder” problem, whereby the interpretation of game findings might be heavily influenced by the subjective views and idiosyncratic characteristics of the analytical team—whether that be training/background/expertise, preexisting views,  or the particular mix of people and personalities involved. The latter finding could have quite important implications, in that game results might have as much to do with who was assessing them and how, as with the actual outcome of the game.

To do this, we formed three analytical teams: TEAM UK (composed of one British defence analyst and one serving RAF officer), TEAM EURO (composed of analysts from the UK, Finland, Sweden, and the Netherlands), and TEAM USA (composed of three very experienced American wargamers/analysts). Each team was free to move around and act as observers during the games, had full access to game materials, briefings, player actions, and assessments, and could review the record of game events produced during DIRE STRAITS by our media team.

We were well aware at the outset that DIRE STRAITS would be an imperfect analytical game. It was, after all, required to address multiple objectives: to accommodate one hundred or so people, most of whom would not be subject matter experts on the region; to be relatively simple; to be enjoyable; and to make do with the time and physical space assigned to us by the conference organizers. It was also designed on a budget of, well, nothing: the time and materials were all contributed by Jim Wallman and myself. From an experimental perspective, however, the potential shortcomings in the game were actually assets, since they represented a number of potential methodological and substantive issues on which the analytical teams might focus. To make it clearer what their major takeaways were, we asked each team to provide a list of their top five observations in each of two categories (game methodology, and substantive game findings).

And the results are now in:

All three teams did a very good job, and there is a great deal of insight and useful game design feedback contained within the reports. But what do they suggest about our experimental question? I have a lot more analysis of the findings to undertake, but here is a very quick, initial snapshot.

First, below is a summary of each team’s five main conclusions regarding game methodology. I have coded the results in dark green if there is full agreement across all three teams, light green for substantial agreement, yellow for some agreement, and red for little/no agreement. The latter does not mean that the teams necessarily would disagree on a point, only that it did not appear in the key take-aways of each. I have also summarized each conclusion into a single sentence—in the report, each is a full paragraph or more.

DS method table

A Venn diagram gives a graphic sense of the degree of overlap in the team methodological assessments.

DS method.png

One interesting point of divergence was the teams' assessment of the White House subgame. TEAM USA had a number of very serious concerns about it. TEAM EURO, on the other hand, while noting the risks of embedding untested subgames in a larger game dynamic, nevertheless concluded that they "found this modelling fairly accurate." TEAM UK took a somewhat intermediate position: while arguing that the White House subgame should have been more careful in its depiction of current US political dynamics to avoid the impression of bias, this "obscured the fact that there were actually quite subtle mechanisms in the White House game, and that the results were the effects of political in-fighting and indeed, it could even show the need to 'drain the swamp' to get a functional White House." The various points made by the teams on this issue, and the subtle but important differences between them, will be the subject of a future PAXsims post.

Next, let us compare the three teams' assessments of the substantive findings of the game. TEAM USA argued that the methodological problems with the game were such that no conclusions could be drawn. TEAM EURO felt that the actions of some teams were unrealistic (largely due to a lack of subject matter expertise and cultural/historical familiarity), but that "the overall course of action seemed to stay within reasonable bounds of what can be expected in the multitude of conflicts in the area." TEAM UK was careful to distinguish between game outcomes that appeared to be intrinsic to the game design and those that emerged from player interaction and emergent gameplay, and was able to identify several key outcomes among the latter.

DS substantive table.png

As both the table above and the diagram below indicate, there was much greater divergence here (much of it depending on assessments of game methodology, player behaviour, or plausibility).

DS substance

Again, I want to caution that this is a very quick take on some very rich data and analysis, and I might modify some of my initial impressions upon a deeper dive. However, I do think there is enough here both to underscore the potential value of crisis gaming as an analytical tool and to sound some fairly loud warning bells about potential interpretive divergence in post-game analysis. At the very least, it suggests the value of using mixed methods to analyze game outcomes, and/or, better yet, a sort of analytical red teaming. If different groups of analysts are asked to draw separate conclusions, and those findings are then compared, convergence can be used as a rough proxy for higher-confidence interpretations, while areas of divergence can be examined in greater detail. I am inclined to think, moreover, that producing separate analyses and then bringing them together is likely to be more useful than simply combining the groups into a larger analytical team at the outset, since it somewhat reduces the risk that findings are driven by a dominant personality or senior official.

One final point: DIRE STRAITS assigned no fewer than nine analysts to pick apart its methodology and assess the findings in light of those strengths and weaknesses, and we have now published that feedback. Such explicit self-criticism is almost unheard of in think-tank POL/MIL gaming, and far too rare in professional military wargaming as well. Hopefully the willingness of Connections UK to do this will encourage others to follow suit!

Teaching wargame design at CGSC

us-army-command-and-general-staff-college-office.jpg

Today, James Sterrett made a presentation to the Military Operations Research Society’s wargame community of practice on teaching wargame design at the US Army Command and General Staff College. James is Chief of Simulations and Education in the Directorate of Simulation Education at CGSC, and a periodic PAXsims contributor.

This lecture will feature a discussion of game design within the context of professional military education. DEPSECDEF Work spoke to the need to incorporate wargaming into the formal military education system. One approach to executing this is to offer a course in wargame design to students at multiple levels of professional development. However, questions on how to implement this approach remain: At what point(s) within an officer's career should they be exposed to wargaming? What aspect of wargaming should be emphasized? What level of proficiency is desired? What portions, if any, of the remaining curriculum should be dropped or modified to accommodate this requirement?

While the lecture wasn’t recorded, you’ll find his slides here. For previous discussion on this same topic, see his earlier (January 2017) blogpost.

Dungeons & Dragons as professional development

ADD_Dungeon_Masters_Guide_Old_p1.jpg

In response to one of the final exam questions this year, a student in my upper-level undergraduate course on multilateral peace operations at McGill University commented “I never knew D&D could be so useful until I took POLI 450.” That statement finally provided the impetus I needed to offer some thoughts on role-play games (RPGs) and serious conflict simulation.

In the context of POLI 450, the student concerned was referring to the massive Brynania peacebuilding simulation that we’ve been running for almost two decades. It is a grueling exercise indeed: 125+ players, 5-8 hours of game play per day for a full week, 10,000+ emails sent, and hundreds of hours of real and virtual meetings—all at a time when students are also trying to manage four other courses, plus occasional eating and sleeping. The simulation is designed to highlight a range of issues: political conflict and conflict resolution; insurgency; negotiations; humanitarian crisis and response; the challenges of coordination; stabilization; and longer-term development. Like a good game of D&D, participants face complex situations and even difficult moral choices while having to adjust plans on the fly with limited time, resources, and information. As has been evident from exam answers and course surveys over the years, students learn a lot from it, and it helps a great deal in putting course readings and theory into a practical, operational context.

However, I didn’t want to just comment on the value of RPG-type gaming as an immersive learning environment for students—as important as that is. Above and beyond this, I wanted to offer some thoughts of how role-play gaming can help to develop essential professional game design and facilitation skills. Indeed, in terms of professional wargame facilitation specifically, I would argue that running D&D games is probably a more useful preparation than playing either miniature or board wargames.

Before there’s a backlash from my fellow grognards, let me reiterate I’m talking here about game facilitation. I’m a hobby miniatures/board wargamer too, and I enjoy those a great deal. They’ve been invaluable in learning about military operations and history—indeed, far more useful than the 8+ years I spent studying in university. It is undeniable that hobby wargaming can contribute a great deal to one’s knowledge of how to model time, space, movement, and effects.

However, no one would argue that most hobby wargaming (with the notable exception of megagaming) really contributes a great deal to knowing how to run—as opposed to design—the multi-participant events that are usually characteristic of a serious professional wargame or political-military/crisis simulations.

There's a certain irony in all this. As it is, professional wargamers already deal with a widespread bias against the gaming element of wargames. It is well known, for example, that many military officers recoil at the thought of dice or cards determining the outcome of military actions in a wargame, even though they are perfectly happy to have outcomes determined through black-boxed stochastic processes embedded in computer algorithms. That Clausewitz once noted that "the absolute, the mathematical as it is called, nowhere finds any sure basis in the calculations in the art of war; and that from the outset there is a play of possibilities, probabilities, good and bad luck, which spreads about with all the coarse and fine threads of its web, and makes war of all branches of human activity the most like a game of cards" doesn't change the fact that professional audiences often equate cards, dice, and other common game elements with a glorified version of Snakes-and-Ladders. Given that, suggesting that what they are doing is actually rather more like The Tomb of Horrors would certainly be a gaming system too far. Yet RPGs can develop invaluable skills in terms of scenario design, narrative engagement during game play, subtly keeping players on track for game purposes, and managing groups of people within such a context.

In terms of scenario design, this is very much at the core of role-play gaming; the game, after all, is almost entirely about the scenario and the players' engagement in it. Good gamemasters are good precisely because they are able to keep players within the universe they have created, facing plausible choices with plausible consequences, and subtly encouraging everyone to internalize appropriate perspectives and motivations. In a well-run campaign the players aren't simply trying to find treasure and slay beasts, but feel themselves part of it all. They begin to filter their worldview through their (fictional) professional specializations: fighters like to fight; magic-users like to stand back and rain destruction on foes while avoiding injury; clerics provide key support; rogues skulk and deceive; and much-maligned bards (like diplomats everywhere) use silver tongues to gain advantages that cannot be obtained by brute force. As Peter Perla and Ed McGrady have argued, this sort of player engagement and immersion is also what makes (serious, professional, potentially life-and-death) wargaming work:

We believe that wargaming’s power and success (as well as its danger) derive from its ability to enable individual participants to transform themselves by making them more open to internalizing their experiences in a game—for good or ill. The particulars of individual wargames are important to their relative success, yet there is an undercurrent of something less tangible than facts or models that affects fundamentally the ability of a wargame to transform its participants.

A dungeon master also faces the constant challenge of allowing players to explore their universe while keeping the game on track in terms of general storyline and plot, all without letting players feel railroaded into doing (or not doing) particular things. They do so, moreover, in a context of multiple participants with different perspectives and personalities. Take, for example, Phil Sabin's comments on a recent professional wargame in the UK (emphasis added):

This week at the UK Defence Academy we ran a two day research wargame with a couple of dozen players and facilitators to investigate nuclear risk dynamics.  I was on the Control team, and our main objective was to get the players first to use conventional force and then to escalate to nuclear strikes, despite their natural reluctance to initiate such dangerous and suicidal actions.  We succeeded, and play ended with wide-ranging conventional conflict, the nuclear devastation of central and eastern Europe, and a grave threat of further escalation, all from an initial spark in the Baltics in which both sides felt they were defending their existing rights and interests.

I remarked in the final plenary that wargame controllers in such games are rather like devils, seeking ways to foster player misperceptions and frustration and to present them with horrible dilemmas in a quest to make them trigger a literal ‘hell on earth’.  We succeeded in this aim, and it was sobering for everyone to realise how such a slide into disaster can occur through a horribly plausible sequence of interacting decisions, despite the initial resolve of each team individually to avoid such an outcome.  At least we can comfort ourselves that nobody really died, and that the whole point of such ‘virtual’ destruction in wargames is to help us to understand crisis dynamics and so make such escalation in the real world even more unlikely….

Replace “nuclear strikes” with “boss fight” or “confronting the dragon in his lair” and you pretty much have every D&D game ever. Phil may be more of a traditional grognard than an RPGer, but it is a gift indeed to be able to nudge participants in such a way that they don’t feel nudged, while giving them the freedom to make real choices.

Similarly, in the Brynania simulation, my task as CONTROL is to facilitate exploration of a plausible path of civil conflict and (hopefully) peacebuilding, while not allowing the game to get distracted or derailed. Doing so requires the subtle use of the initial scenario and game injects, but in a way that still leaves players—again—making real choices with real consequences. Certainly the outcomes over the years reveal a sort of bell-curve of results, with some more common than others, but none of them outliers in a way that would undercut the instructional purposes of the simulation.

Brynania outcomes 1

Brynania simulation outcomes and events.

Brynania outcomes 2.jpg

Primary peacebuilding mechanisms used in Brynania simulation.

I’m not the only RPGer who feels this way. Tom Fisher is a fellow member of my local Montréal gaming group and DM extraordinaire, with an impressive record as a professional game designer and facilitator (he is co-developer of AFTERSHOCK: A Humanitarian Crisis Game and the forthcoming Matrix Game Construction Kit, and has worked with the World Bank and various international financial intelligence agencies on games addressing financial crimes/corruption and strategic analysis). He had this to say on the topic in a recent email exchange:

I can say, without hesitation, that roleplaying games—particularly D&D—have led to the best jobs I’ve ever had.

There is a natural flow between being a gamer and professionally developing games, that much is obvious. What is less obvious, however, are the lessons derived from playing those games that do not directly impact game development. Role-playing games, particularly the gamesmastering (facilitation) thereof, engage, develop, and encourage a particular way of thinking.

Much has been said about the need for outside-the-box thinking or lateral thinking. What is less discussed is how to train the mind to “think different,” as some marketing campaigns encourage. Roleplaying games, in their various forms, are a virtual goldmine for the development, testing, and experimentation of thought, and ways of thinking.

Roleplay, at its best, teaches through gameplay to account for assumptions, test the limits of rules, push the limits of established rules—in short, roleplay is a short course on iterative design: “[a] design methodology based on a cyclic process of prototyping, testing, analyzing, and refining a product or process. Based on the results of testing the most recent iteration of a design, changes and refinements are made. This process is intended to ultimately improve the quality and functionality of a design. In iterative design, interaction with the designed system is used as a form of research for informing and evolving a project, as successive versions, or iterations of a design are implemented.”

Iterative design thinking is, in my view, the foundation of critical, outside-the-box, and lateral thinking. The process of iterative design faces off actions based on assumptions against reactions based on real-world rules. As famously demonstrated by Tom Wujec’s Marshmallow Challenge, participants succeed by testing their assumptions against real-world effects (in that case, gravity and the relative strength of dry spaghetti).

The experiential and imaginary nature of roleplaying games requires reflection and forces a role-player to account for their assumptions when addressing a situation. In so many of my experiences delivering intelligence analysis or crime analysis courses, it is the recognition and testing of one’s assumptions that has been the lynchpin in achieving success in the training. Roleplaying games –and by extension immersive simulation exercises– are a crucible for developing the thought processes deemed so necessary and desired by modern institutions.

The experience of the gamesmaster, or facilitator, of roleplaying games adds a further level of complexity to the mix. Adult role-players, by their very nature, are an interesting bunch. Most tend to be well-read, quite intelligent, and universally challenging. As noted above, roleplay encourages the testing of limits, pushing of envelopes, and accounting for assumptions. So, a gamesmaster (GM) is confronted with a number of players –with their unique agendas– who inherently want to push the limits of the GM’s world-rules to achieve goals laid out by said GM designed to engage, thrill and enthrall each of the players. In short: herding cats. There is no more cost-effective short-course on diplomacy and small-team management than being a roleplaying game GM.

The complexity of gamesmastering (GMing) increases exponentially as GMs become involved in world-building. At the pinnacle of GMing is the world-building GM, who shapes a world from thought to engage players in a truly immersive experience. Herein, the GM accounts for the cause-and-effect of player actions against the backdrop of an entire living world simulation. At this level, fluidity and iterative design are paramount to successful implementation and player-engagement, and will lead to a level of suspension of disbelief that will engage players not only logically in the gameplay, but emotionally, on a truly immersive level.

It is these skills of engagement, coupled with the role-player’s way of thinking, challenging and testing that have led to the best jobs I’ve ever had.

Much can be said about the nature of play and the strong links between creative play and language, physical, social/emotional, and cognitive development. Roleplaying games take this level of play to its limits, and push outward, not only encouraging growth, but in my opinion, forcing it, as new pathways of thought develop to deal with novel situations.

The elusive and mysterious “Tim Price,” prolific author of matrix game articles and scenarios, has certainly been known to frequently design and play RPGs. A certain former British military officer and gifted professional wargame consultant—let’s call him GLB—actually carries an image of the Advanced Dungeons & Dragons Dungeon Master’s Guide (above) surreptitiously taped to his clipboard to inspire him while facilitating serious games.

As for me, I’ve been playing D&D since the very first boxed three-volume set in the mid-1970s. Like the POLI 450 student quoted above, it’s fair to say that at the outset I too “never knew D&D could be so useful.”



Have your own experiences of using RPG skills in serious gaming? Post them in the comments section!
