PAXsims

Conflict simulation, peacebuilding, and development

Calling all National Security Policy Gamers: Make your opinions heard!

 

If you have some time, I’d very much appreciate it if PAXsims readers who work as professional national security policy gamers (that is, wargamers supporting policymaking clients) would take a few minutes to contribute to a survey I’m running as part of my dissertation research. More information is below.

I’m Ellie Bartels, a PhD candidate at the Pardee RAND Graduate School and researcher at the RAND Corporation. As part of my dissertation research, I am studying the practices of national security policy gamers like you. I am interested in understanding what types of games you run, what tools you use to design and analyze them, and how you assess your work and the work of your peers. To this end, I invite you to participate in a 15-30 min survey on your game design, execution, and analysis practices at the link below before 30 May 2017.

Click here to be taken to the survey’s Google Form <https://docs.google.com/forms/d/e/1FAIpQLSfV-I_JwosnxjLhr9GlDkJoL2uARWyGBpxphJ5zX34JSPUzZw/viewform?usp=sf_link>

(Please note that some firewalls may block Google documents. If you encounter problems, I recommend trying to access the form from a different network or computer.)

Your answers will inform two different projects looking at policy gaming practices. Survey results will be reported in the section of my public dissertation monograph on current practices, will be available on request as a data annex, and may be used in associated articles and presentations. In addition to the primary purpose of this survey, the questions on participant engagement and immersion will be used to inform internally funded RAND research to produce an article on the potential for Alternative and Virtual Reality technologies in policy gaming. Both efforts will produce work that is publicly available, with the hope that it will prove helpful to researchers like you.

Participation, both in the survey as a whole and in answering specific questions, is completely voluntary. Your name, office, and other individual identifying information will not be collected as part of the survey, and no effort will be made by the researchers to link your individual identity to your responses. If you have questions about your rights as a research participant or need to report a research-related injury or concern, contact information for RAND’s Human Subjects Protection Committee is available on the first page of the survey.

 

Paul Vebber on Fleet Battle School

This week’s MORS Community of Practice talk featured Paul Vebber from the Naval Undersea Warfare Center, presenting on a game sandbox tool, “Fleet Battle School”. Vebber has shared quite a bit about the development process over time, and has posted many materials in conjunction with this talk, so this is a good project to look at for folks interested in how digital game development actually happens in DoD.

Vebber started with general information on gaming and game design that he uses for audiences that are less familiar with gaming. While not the focus of the talk, I wanted to highlight two of his graphics, both of which provide some useful synthesis of recent debates on the nature of gaming. The first integrates the “cycle of research” with some of our recent discussion about what type of logic is used in games to generate new knowledge.

[Graphic: Vebber’s “cycle of research”]

The second discusses the relationship between OR analysis, gaming, and the level of problem, which is a common concern in my work and the broader field.

[Graphic: Vebber’s “Gaming and OA”]

My interest in broad methods issues aside, Vebber’s presentation was focused on an overview of the current state of the Fleet Battle School game sandbox.

The core goal of the project was to design a capability game that can determine at what point a change in capability causes changes in player decision making. As a result, the platform should be able to support very interesting sensitivity analysis about the intersection of combat effectiveness and decision making, which is often elided or ignored in current gaming.

Right now the focus of the Fleet Battle School game is the weapons and sensor capabilities of different platforms (though features like speed and fuel use are also built into the current rules). The game instantiates relative differences in capability rather than trying to mimic specific current capabilities. While you can input values that mimic specific real-world platforms, that isn’t really the focus of the project. Again, the focus here is on how the ratio between different platforms’ capabilities impacts player decision making. As a result, the adjudication model was built to focus on plausible results, rather than claiming any type of predictive power.

Fleet Battle School is a digital platform for naval operations planning games built on kriegsspiel principles (that is, rigid rules for physical movement and combat, in contrast to a seminar tabletop game, which would use looser rules focused on organization and political decision making). The system allows the game designer to edit the map terrain, platform capabilities, order of battle, and rules (though some programming skills are needed for really deep changes here) within the game platform.

The game also allows for multiple “levels” of players on both the blue and red teams, so that the gap between commanders and line officers can be included in the game. The commander can set a daily intent; individual “officers” can then set more specific orders, which the commander approves and submits. C2 is largely handled outside of the game platform in order to accommodate different networks, which means that C2 traffic must be documented outside of the platform.

The system can then either auto-adjudicate or allow an umpire to override outcomes, either in all cases or only on less probable die rolls. There are also some nuanced settings to represent friction, which can also penalize bad leadership and declining capabilities over long deployments.
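To make the adjudication flow a bit more concrete, below is a minimal sketch of what auto-adjudication with an umpire override threshold might look like. To be clear, this is my own hypothetical illustration, not Fleet Battle School’s actual rules or code: the function names, probability model, and “friction” penalty are all invented for the example.

```python
import random

# Hypothetical sketch of kriegsspiel-style rigid adjudication with an umpire
# override hook. None of these names or numbers come from Fleet Battle School.

def hit_probability(attacker_weapon_range, defender_sensor_range, friction=0.0):
    """Toy model: the ratio of capabilities drives the hit chance, and
    'friction' (poor leadership, a long deployment) degrades it."""
    ratio = attacker_weapon_range / max(defender_sensor_range, 1e-6)
    base = min(0.9, max(0.1, 0.5 * ratio))
    return base * (1.0 - friction)

def adjudicate(attacker, defender, umpire, review_threshold=0.2, rng=random):
    """Auto-adjudicate one engagement; ask the umpire only about improbable outcomes."""
    p_hit = hit_probability(attacker["weapon_range"], defender["sensor_range"],
                            friction=attacker.get("friction", 0.0))
    hit = rng.random() < p_hit
    outcome_probability = p_hit if hit else 1.0 - p_hit
    if outcome_probability < review_threshold:
        # Low-probability result: let a human umpire confirm or override it.
        hit = umpire(attacker, defender, proposed_hit=hit, probability=outcome_probability)
    return hit

# Usage: sweep the capability ratio to see where outcomes start to shift.
keep_auto_result = lambda a, d, proposed_hit, probability: proposed_hit  # no override
blue = {"weapon_range": 120, "friction": 0.1}
for sensor_range in (60, 120, 240):
    red = {"sensor_range": sensor_range}
    hits = sum(adjudicate(blue, red, keep_auto_result) for _ in range(1000))
    print(sensor_range, hits / 1000)
```

The point of the sketch is the structure rather than the numbers: relative capabilities feed a simple probabilistic model, friction degrades performance, and a human umpire is only pulled in when the dice produce an outcome the model considers unlikely.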

That’s all a pretty superficial description of the platform and its capabilities, but if all this sounds interesting, I would urge folks to check out the Wargaming Connections blog, where Paul has posted more materials in association with the launch of the beta.

One point that I think is critical to highlight is the way Vebber’s development experience allowed the project to avoid many common pitfalls of computer game development in the government. This has been a long development process with lots of paper playtesting, use of off-the-shelf products for some game functions where appropriate, and “good enough” graphics until end users articulate clear priorities. Having been involved in computer game development from within the government, I’ve seen how a lot of these can go wrong, so in many ways I think Fleet Battle School is a great case study in how to do this kind of development. Vebber’s regular and detailed updates on the development process should be a reference for anyone attempting this kind of project in the future.

Frost on Game Facilitation

Earlier this month I was able to tune into the MORS Community of Practice for a talk by Adam Frost from the Studies, Analysis, and Gaming Division of the Joint Staff (J8) on game facilitation. Frost’s office is responsible for many high-level policy games (including for the Chairman, as mentioned in this article), so the talk was given from the perspective of running games for very high-level policy makers at the strategic level. That said, much of Frost’s advice resonated with the training I’ve received, as well as my own practical experience, and I believe his advice is broadly applicable.

Frost intelligently started the session by facilitating a short discussion on what makes for good and bad facilitation—no mean feat when over half the participants were on a phone line and he had no list of participants! The group identified some patterns in poor facilitation, including:

  • losing control of the room or letting someone else take over
  • asking the wrong questions
  • having an agenda or obvious point of view that biases the outcome of discussion
  • having a facilitator unwilling to adjust to reality in the room
  • “dead air” or obvious gaps in the discussion that drain energy

Similarly, signs of good facilitation include:

  • asking the right questions
  • successful synthesis, particularly that which leads participants to novel conclusions
  • player immersion, indicated by signs like players’ willingness to play past the deadline
  • energy and pacing
  • redirecting conflict and emotion into productive discussion

One point that came up early in discussion was where the division should be between a game and a facilitated discussion. To my mind, facilitation is a skill that is needed to lead all kinds of discussions, from meetings to strategy sessions to games. It ensures that the conversation produces the desired end products, and that everyone leaves the room more or less alive. All that differs is the structure of the discussion the facilitator adheres to and the intended end results.

One overarching comment was that good facilitators aren’t remembered; they fade into the background of the discussion, which is what folks remember on leaving the room. This notion came up in many ways over the course of the discussion, and I think it is a particularly helpful reminder for inexperienced facilitators. The goal of facilitation is not to make an impression on senior leaders, to teach, or to show off how much you know—rather it is to collect information from others so it can be processed by the group. As a result, many of the tips and tricks for facilitation mentioned by Frost and other participants depend on not having an ego and being willing to say dumb things in order to move the conversation forward. In many cases it can even be better for the facilitator not to be an expert, so they don’t have a reputation to maintain.

This also means that age need not be a barrier to being a facilitator; I’ve seen young facilitators who were very effective because they allowed the participants to teach them over the course of the discussion. However, it takes a very particular mindset to let go of your own ego, and that mindset takes time and practice to achieve.

Frost then moved into a more formal briefing. He first discussed the difference between the roles of the game director and game facilitator, summarized below:

  • Primary responsibility. Game director: ensures the game meets the purpose and objectives laid out by the sponsor or client. Game facilitator: ensures that the discussion in the room is productive.
  • Team roles. Game director: team lead. Game facilitator: supports the director; should be a part of the design team from as early in the development process as possible.
  • Game design (process and rules). Game director asks, “What questions do I need to answer?” and determines the best method and mechanics of the game to meet the purpose and objectives defined in conversations with the sponsor. Game facilitator asks, “How do I ask the game director’s questions of the participants?” and determines how the flow of conversation can best meet the director’s vision, including what questions need to be answered at each stage of the game in order to move forward.
  • Game design (roles). Game director: determines the right types of roles to include in the game, based on the purpose and objectives of the game. Game facilitator: determines who the right participants are, matching individuals with designed roles; knows the background of the participants and anticipates how it will shape behavior during the game.
  • Game room setup. Game director: ensures game materials are available and correct, including slides. Game facilitator: ensures the layout of the room will support the discussion, including the location of different teams, seating chart, etc.
  • During the game. Game director: makes sure the game is meeting objectives; makes changes to the design to account for unexpected problems that are preventing the game from achieving objectives. Game facilitator: ensures the game “goes”—that participants buy into the game process, and that discussion stays moving and on track with the game director’s vision.
  • Adjudication. Game director: designs the adjudication process. Game facilitator: ensures participants understand and don’t fight the adjudication process—that is, “sells” it to participants.

This division also comes with an interesting, and not necessarily intuitive, division of responsibility/credit for the game’s outcome. The room generally agreed that good facilitation may be able to save a badly-designed game, and bad facilitation can sink a well-designed game. That may make the facilitator more responsible for the game’s success than the director, even though they are rarely the project lead. Because of this, game directors can get lazy and dependent on the skills of a good facilitator to ensure the success of a game. One specific example of this is the development of the questions to be answered at each step of the game. These should be the purview of the director, but too often fall to the facilitator.

Frost also noted that responsibility for data capture (whether through note-taking or other means of recording game events) should be a third role. I can speak from experience: thinking that you can capture notes while facilitating is a lost cause. You just cannot listen to the speaking participant, think about the next question(s) you will ask, and keep the session objectives and schedule in mind while at the same time trying to keep a written transcript of the discussion!

Frost then delved more deeply into the characteristics of a good facilitator. These included:

  • Facilitation is not command. You are guiding, not telling. (Relatedly, facilitation is not teaching in the normal sense, though it does share some similarities with Socratic-style questioning.) This means that it can be an uncomfortable role for senior leaders, particularly from the military, where leaders are used to a very different role.
  • Facilitators need not be senior. In fact, Frost mentioned several times that he has found his relative youth to be an asset because it allows him to “play ignorant” and draw better explanations from participants. I’ve had similar experiences, both with age and with being an “outside” civilian who can ask for more details about military tactics and strategy. That said, Frost also highlighted ways in which his facilitation style differs from more senior facilitators, including needing to stand in order to hold the room. Time to practice and experiment is critical to figuring out what will, and will not, work for you!
  • As a facilitator, your opinion does not matter. The goal of a game is not to affirm that you are right; it is to bring the group to consensus on a decision, and you need to have the mental flexibility to let the discussion go where the participants want. Most of the community of practice agreed that floating an idea as the facilitator can be a way to bring missing ideas into discussion, but whether the group runs with, or abandons, the idea is up to them. Another way to think about this is that the facilitator can be a foil for participants’ ideas, but shouldn’t be an advocate for any idea themselves. Likewise, in some cases, you can set up questions for particular people based on a general idea of what they will say, but, as Frost put it, “if I know exactly what the person is going to say the game is going to feel scripted.” That said, particularly when participants have specific, narrow expertise or are introverts, it can be a good way to get folks engaged in the discussion.
  • Have a strong opening. In the first few minutes, you need to establish why participants are in the game; set expectations about how the game will proceed (purpose and schedule of the game); and provide the minimal critical information participants need to set the scene for initial game discussion. This information is usually in the game read-ahead, but it’s a safe bet participants haven’t read or remembered it. That said, overviews of the scenario should be short (1 slide not 13), as more details take time from discussion.
  • Anticipate participants’ tendencies. This includes common patterns of behavior (military participants won’t contradict a senior officer; policy and intel participants want to get a lay of the room before they speak) and how to manage them (ask senior leaders to speak later in the game; let people know what question you are going to ask them before you put them on the spot). Relatedly, as a facilitator, you want to make the participants look good by setting them up for success, so it is critical to let folks know you are going to call on them and give them time to think about the question.
  • Always have a question ready to keep the conversation from dying out, but don’t use it unless it has died. This question should be short, and have a question mark at the end. Talking as the facilitator both takes time away from participant discussion and can make you look uncertain. Anyone who has ever been to a DC think tank event knows what this looks like!
  • End on time. This requires that the facilitator pay attention to the game, and as some sessions run long, develop a plan to make up for lost time. This is particularly important the more senior your participants are.
  • The facilitator sets the room’s energy level. As a facilitator, you need to seem enthusiastic and motivated, or you will lose participants. A trick I learned is to think about the volume of your voice, size of gestures, and general energy level when you are chatting with someone at home, and contrast it with the way you talk in a large, noisy room like a bar. When facilitating, you want your energy to be right in the middle—definitely more than a quiet conversation, but not so much that you seem like a crazy person!

The session concluded with a brief discussion of how to gain skills as a facilitator. The group generally agreed that facilitation is a practical skill that is best learned by doing. However, a few folks (myself included) spoke in favor of formal coursework on facilitation. In my experience, facilitation training often feels silly at the time, but can be critical to learning good basic practice. I left the discussion believing that coursework can be helpful in getting from bad to competent — but to get good, you need talent, mentorship, and practice.

Saur on Teaching Gaming

Joe Saur gave a good talk on teaching gaming at the MORS Community of Practice. I’ve been remiss in not posting my notes before now, particularly because teaching gaming is a subject near and dear to my heart.


Saur’s presentation focused on his experience teaching 70-plus students across the military, many of whom lead organizations that use wargames for analysis and training. One point that Saur highlighted was that even though his students had extensive operational experience and are quite likely to be game sponsors, very few had previously seen a wargame. This is a critical point to consider as the community thinks more about how best to communicate methods and results to our sponsors. It really reinforces the need to spend more time and energy thinking about how we as gamers educate sponsors and stakeholders. While Saur is working within one of the military schoolhouses, in the long run we are going to need more approaches to build a broader understanding of the benefits and uses of gaming.

Saur noted that there are not many syllabi for wargaming classes. He was able to reference a UK wargaming and combat modeling class, but that was largely focused on the math required for combat and campaign modeling with participation in a staff game. As a result, this course provided limited guidance on how to teach gaming.

In building his syllabus, Saur aimed to teach mechanics that staff officers can actually use. His goal was to expose students to a range of games as a starting point to support student development of operations games. As a result, he tended to focus on concrete mechanisms like dice, hex grids, miniatures, and cards drawn from hobby gaming, with only limited coverage of less structured techniques like matrix and seminar games.

One point that I found particularly interesting is that during student discussions, they hypothesized that as the average member of the force has less combat experience moving forward (or their combat tour is further in the past), rigid adjudication will become more critical. Students argued that free adjudication relies on operational experience.

Not surprisingly, I’m fairly skeptical of this claim, particularly in the case of operational and strategic games. Most of the strong game designers I know are civilian analysts, because members of the military are rotated through positions too quickly to gain mastery. Furthermore, rigid systems of adjudication rarely survive analytical games intact, as players almost always seem to do something not anticipated in the game rules. As a result, even highly formalized rules will often require impromptu adjudication calls. Finally, I’m fairly skeptical of rigid adjudication’s ability to capture the interpersonal, social, and political dynamics that strongly impact strategic and operational outcomes. Limiting ourselves to rigid rule sets cuts gaming off from many of the complex, unstructured problems that games are best suited to examine.

The presentation concluded with a selection of the games built by the students. These covered an impressive range of topics and game design approaches. In part, the approach seemed particularly impressive because Saur instructed the students to tie the games they designed to their follow-on posting. As a result, the games were designed to be practical and helpful, rather than academic in nature. I’ll be interested to see if any of the students follow up with notes about how deploying the game in their new posts goes!

AAR from RAND’s Gaming Center Open House

Last month, RAND’s new Gaming Methods Center hosted an open house for gamers in the DC area. The event provided an opportunity for an exchange on the current state of gaming. Highlights of the discussion are summarized below. The event was a great chance to see what different folks in the community are thinking about, and I hope that similar events will occur regularly in the future!

The Gaming Methods Center is one of six new internal organizations, intended to “facilitate the development and dissemination of analytic tools and methodologies as well as employ existing ones in a collaborative and synergistic fashion across the entire RAND research and policy domain landscape.” The center’s immediate objectives include:

  • Encourage the development of innovative new tools and techniques and encourage the evolution of existing forms and methods
  • Encourage the use of these methods across the entire RAND research portfolio (cross-disciplinary)
  • Encourage interdisciplinary cooperation on methods

The open house featured presentations from long-time RAND gamers who highlighted the history of gaming at RAND, the important role gaming can play in confronting current security challenges, and current RAND methods for both seminar-style and board game/tabletop games. A few highlights include:

  • A discussion of past RAND gamers’ work, which highlighted a favorite RAND paper of mine, Crisis Games 27 Years Later
  • A discussion of the evolution of the “Day After” method for seminar gaming, which I’ve used to good effect in some of my educational games

In the course of the event, there was lively debate from both RAND staff and outside participants about how gaming can best be employed to support national security decision makers. Highlights include:

Benefits of Gaming. Over the course of the day, participants offered a range of thoughts about the benefits gaming offers to the national security community. One participant described three criteria for problems that are tractable to gaming:

  • Blue or red operational concepts are not decided or not good
  • Human agency is a major determinant of outcomes (adversary behavior in particular)
  • Designer needs to convey a future people haven’t yet experienced

Other participants linked the practice of national security gaming to research on the power of the “urge to play” as a means of education and discovery. Still others noted that games tap into humans’ need for narrative by providing an opportunity to build our own narratives in a setting that’s shaped by designers to further the right narrative.

Finally, participants highlighted the benefits that gaming offers to analysts. One individual commented on the tendency of analysts to waste too much time “worshiping the model.” Gaming encourages analysts to get to insights more quickly by forcing them to work in broad strokes and model the truly important without getting tied up in the minutiae. Games are also helpful when they disrupt assumptions that are built into the model. By watching which assumptions break down during the game, analysts can then go back and develop a better model.

Is gaming a scientific method or not. The art vs. science debate is an old standard in the field. In addition to a discussion of Peter Perla’s concept of games as part of the cycle of research, highlights of the discussion include the analogy of gaming methods to method acting, discussion of “the cult of spurious precision” that falsely seeks precise quantification rather than broader insight, and the importance of good design.

While this debate was interesting, I found myself agreeing with a participant who said that he was tired of hearing the same debate on the topic over the last several years. He urged the community to move forward to determine the implications of this debate for gaming practice. More work that lays out how the practical process of design and assessment would differ based on this theoretical debate seems more likely to move the field forward.

Game design standards. Some interesting commonalities in the participants’ standards for game design emerged during the discussion. Participants stressed that good game design, like good analysis of any kind, is primarily about caveating the limitations of the analysis. However, right now there are no consistent standards for how such caveats are documented and communicated to fellow analysts and sponsors. Instead, a lot of responsibility currently falls on the principal investigator to communicate limitations. Some participants stated that this is best done by telling stories about the “dynamics of the campaign” to senior leaders.

Relationship between gaming and other common types of modeling. Participants stressed the differences between gaming and other common types of modeling, such as campaign planning. Gaming isn’t a cheap or fast way to do campaign planning and assessment, and should not be used in its place. As a result, it is critical that gamers know other methods well enough to direct sponsors to those that are more appropriate to answer the questions at hand.

At the same time, participants also stressed that gaming and modeling can and should work in tandem, not in opposition. For example, games can be used as a screening tool, in order to determine what topics are worth spending the time to create in-depth, higher-resolution models. Games can also be used to test assumptions that will be used to build models, minimizing the risk of a faulty foundation for analysis.

Finally, participants stressed that gaming is a broad field that likely includes many related techniques. Being more aware of the strengths and weaknesses of different techniques can make us better able to pick the correct technique for the problems we are asked to analyze.

Challenges of communicating the potential and limitations of games to sponsors. Discussion highlighted the necessity of educating sponsors about what games can and cannot achieve. Many in the group stressed that right now sponsors who are new to gaming don’t understand what types of questions are appropriate to game. Furthermore, because gaming has become the main tool in the box for a number of current issues, the professional community needs to set ground rules and expectations (a point I’ve discussed at length here). Right now, there is not enough guidance available outside of the expertise of senior practitioners to help identify good games and bad games. If the field cannot develop alternative ways to ensure the quality of games, there is a real concern that “bad money will drive out good.” Participants agreed that there is a need to make sure that the interest in gaming doesn’t drive us into bad practices, not only for our own careers, but also for national security.

Additionally, unlike many other approaches to analysis, gaming is an event and a process rather than a group of methods for data analysis. Issues like the inability to replicate games, the need for space and in-person interaction, and other related challenges are hard to communicate to sponsors. In particular, stressing organizations through gaming often stresses their processes and procedures—space requirements, security, etc. all become major challenges.

Shortcomings of the traditional guild-style gaming education system. The current system for educating new gamers has been challenged by the noticeable generational gap in the field, where an established core of senior gamers is supported by staffs of entry-level folks (of whom there were a number in attendance, a nice change of pace from the more senior crowd that attends many gaming events). Discussion highlighted how the field’s traditional reliance on commercial board wargames may limit mentorship. The group also discussed the ways in which limited formal methods for game design make entering the field challenging.

Balancing the convenience of digital with the impact of in-person events. There was a sustained discussion of whether games can be done remotely, using technology to bridge distances and allow for asynchronous games. The hope is that this will not only make games cheaper, but also allow more, and more diverse, player perspectives. However, the utility of such “virtual” games was contested. In general, folks felt that the usefulness of virtual games depends on the purpose of the game, what types of findings you are looking for, what types of events you want to simulate, the audience, and what kind of time and interaction are available. One comment that particularly resonated with me is that games give a simulated experience that can shape behavior in the real world, so they require “intellectual if not physical proximity.” If analysts can create the intellectual proximity required in a virtual environment, then these games can be successful, but often interpersonal interaction is still required.

Converting individual discoveries in the game to institutional insights. Participants discussed the necessity, and the challenges, of converting the individual experience of game designers and participants into organizational change.

Participants noted that often the group that learns the most from games is the design team, who may not be in the best position to advocate for change after the game is complete. Participants stressed the importance of game designers building up the ability to communicate game results in compelling (often narrative-based) ways.

Likewise, individual player experiences during games can be profound, but missed by game designs that focus on documenting group discussion and decision making. Participants suggested that interviews with individual players, focused on how individuals framed problems, can be helpful. Likewise, focusing on understanding the thinking of individuals who deviate from the group’s “mean” opinion can be particularly valuable. To capture these perspectives, it is critical to ensure that research questions and data capture plans include a focus on individuals and small groups as well as the group as a whole.

MORS 83rd Panel AAR: Typologies of Game Standards


Last month, I participated in a panel on treating games as a quasi-experimental method, organized as part of the 83rd annual MORS symposium. The panel’s participants represented a range of approaches to gaming, as shown in some of our recent presentations to the MORS Wargaming Community of Practice this spring.

The dominant view of the panel was that games are not quasi-experiments, but that quasi-experimental design can serve as a useful metaphor for wargame design. Quasi-experimental design refers to experiments that do not have randomized controls. As a result, the method spends a great deal of time and attention considering what conclusions can be drawn given limited control over the conditions.

Since it’s particularly important to consider how much control we can actually exercise over game design, and how we articulate the rationale for our choices to sponsors and consumers of post-game analysis, I think the structures and standards laid out in quasi-experimental methods can provide helpful guidance. Some of my fellow panelists were less sold on the utility of this particular set of tools, though most agreed it was a valid approach.

However, the majority of the panel’s time was spent discussing validation. For many on the panel, this is a loaded term that calls to mind statistical validation, which is not possible in wargames. While I find the concepts and practices related to internal and external validity to be useful guides in game design and analysis, this panel did a good job of convincing me that the effort needed to convince folks that internal and external validity need not mean statistical validity is not worth the fight. Audience members and panelists offered a range of alternatives, from “trustworthiness” to “analytical caveats,” that might provoke less resistance while still helping to articulate a shared, flexible standard that the design of and reporting on games should be held to.

What made the panel particularly useful to my mind is that the conversation (both between panelists and with the audience) was able to move past simply arguing for the need for standards, to laying out broad approaches to design that might require different standards. These included:

Game Purpose: The differences between game design for analysis and training came up (including a short discussion of the two-by-two I use to describe the differences). It was agreed that how closely each of these types of games must reflect the real problem set it represents should be judged differently depending on the goal of the game.

Game Structure: Somewhat relatedly, panelists stressed that the structure of the problem being explored in a game is not necessarily directly related to the structure of the adjudication technique in use in that game (a point I’ve stressed before). One conclusion I drew from the discussion is that the problem structure likely has as much, if not more, bearing on how to think about game results than the adjudication structure selected. Designers often focus on the adjudication model as the basis for why game results should be seen as relevant, but if many of our fundamental design decisions are driven by the problem structure, then we might be better off focusing on the problem.

Epistemological approach: One key point of divergence among members of the panel was which epistemological approach is best applied to wargaming. Arguments in favor of positivist, constructivist, and complexity-theory approaches were each made, though it was generally agreed that games could be designed and analyzed under any of the three. Which approach is the most appropriate to gaming has been a frequent debate within the COP over the last year, but this conversation offered a way out: each may be valid but have different rules of the road (with implications for when each is appropriate to use).

Game design philosophy: Several panelists mentioned Peter Perla’s three styles of game designer (artists, architects, and analysts, discussed in this lecture) as a key aspect governing game design standards. I fall very strongly into the architect camp, so my style of game design lends itself particularly well to structured approaches from the social sciences. As a result, it was particularly helpful to hear from others on the panel, particularly the “artist”-type designers, about their preferred metrics of game success. These metrics focused on participant engagement, which I’ve always considered a less prominent component of game analysis. Thinking more about how to create standards that center on engagement and emotional connectivity will be useful for creating more differentiated standards that better fit the full range of games we use.

Finally, for much of the panel, validation of game results happens outside the game itself as part of the broader “cycle of research.” It’s great to see such a strong explicit focus on games as part of broader efforts. Connecting the design choices and resulting standards to aspects of these broader studies will be a key area for future research in game design.

Innovation, Art, and Professional Standards in Gaming

Late last week, Peter Perla released a pre-publication copy of his most recent paper responding to the recent high-level Pentagon interest in gaming as a means of innovation. Perla lays out his vision for how we can take advantage of this moment without allowing gaming at its laziest and least productive to take over. For Perla, good gaming for innovation (what I’ve called “discovery gaming” in other pieces) depends on competition between players. As a result, innovative design is far less important than design that enables strong communication and competition to produce creativity.

This isn’t the first time that the argument has been made to treat gaming more like an art than a science (The Art of Wargaming is called that just to riff on Sun Tzu after all). Art vs. science is also a standing debate between gamers that erupts at least once a year. In the past, I’ve viewed these debates primarily through the lens of what they tell us about how we teach and learn to game—too often science produces a cookie-cutter template while art produces unreliable mentoring.

However this time around, perhaps influenced by my current focus on game design, I’m noticing a different thread in the art vs. science debate: how do we evaluate if a game is good?

Perla argues that “Real wargaming is about the conflict of human wills confronting each other in a dynamic decision-making and story-living environment” and that “It is this process of competitive challenge and creativity that can produce insights and identify innovative solutions to both known and newly discovered problems.” He also calls on current practitioners to speak out to identify bad games, building up a quality control function that the field does not always have.

Taken together, these lines suggest that the quality of a game can be determined by the quality of its intellectual output, and that this judgement can be rendered based on experience and expertise. But when applied to the environment in which games are created, these standards become problematic very quickly.

Professional games are almost never built only to achieve the goals of the designer. Instead, the reality of national security gaming is that game designers work for game sponsors, who evaluate our work to determine both what lines of research to continue, and which of our findings to base policy decisions on.

Given that it is these sponsors who evaluate our work, how might they apply Peter’s standards? I worry that these standards place too much weight on the output of the game. I’ve seen too many “innovative” outcomes in games that are really just the result of ignoring the constraints that shape the real world. Unless the context of the game’s design, and how it replicates the real-world problem set of interest, is taken fully into account, lots of time and energy will be expended on analyzing (or even executing) half-baked ideas.

I also worry that relying on the community of gamers to identify good and bad games sets up worrying dynamics. As Peter notes, not all folks currently making national security games are doing a good job. While Peter points to some strong communities that have sprung up, they are hardly monolithic in how they approach, practice, or assess games. What’s more, the field is so fractured that even the most inclusive of these groups can hardly claim to encompass all the good gamers out there. So then how are sponsors to choose which voices in the professional community to base their standards on?

All of this brings me back around to the need for standards for rigorous design. I absolutely agree that a rigid, “systematized” set of game designs cannot work. But adhering to good research design methods and standards of evidence can offer us some basic standards that can be applied to all design types, and that are accessible to our sponsors as well as practitioners. These may not in and of themselves be enough to guarantee a great game, but they will prevent many bad ones.

Ellie Bartels on Research Design for Gaming


I’ll be giving a talk later today on how I use social science case methodology to think about game design. For those who are not able to attend, I wanted to post both my slides and a brief summary of my talk. This is part of an ongoing research effort, so feedback and thoughts are very much appreciated!

MORS Gaming COP Game Design from Social Science

There has been quite a lot of recent interest in expanding the use of gaming while ensuring that games are rigorous so they have a positive impact. Traditional instruction on game design, such as the NWC War Gaming Handbook or Peter Perla’s Art of Wargaming, stresses the need to make design choices in a thoughtful way in order to achieve game objectives, but does not provide much specific help translating objectives into choices about game roles, rules, and environments. More tools to help gamers think through design choices and communicate the potential impact of these choices on findings can help bridge this gap.

Recent work by other wargamers has discussed tools to apply more rigorous techniques to analyzing game results (see work by Wong and Cobb, Vebber, and Ducharme). However, as I discussed in an earlier post, some recent work conflates how structured the problem examined by the game is with how structured an approach is used to guide game design and analysis. Gaming is well-suited to examining unstructured problems, but to be done rigorously, it needs to be done in a structured way.

The goal then should be to find techniques for structured study of unstructured problems. Vebber and Wong and Cobb both use types of narrative analysis as one such approach, but there is also a role for a more generalized approach that might be useful for more types of games.

To that end, I propose a revision to the traditional design process based on case study methods from the social sciences.  While gaming and social science have been in dialog in national security analysis circles for the past several years, there is still not a well-developed collection of work connecting the two fields. However, because social scientists work on similar types of problems, it is worth considering what we gamers might be able to learn about structuring research and analysis.

Case study methodology is a particularly promising area of social science research design to tap into for gamers. Like gaming, case studies are used to study fairly unspecified problems, so are useful for theory creation and variable identification, as well as theory testing. Case study methods are also designed to focus on the mechanism that connects causes and effects, and are able to document complex causal relationships. As a result, case study methods are easier to apply to the type of unstructured problems we game than more quantitative techniques are.

I argue that we can often think of games as analogous to single case studies that look at variation over time or in comparison to a counterfactual in order to identify the mechanisms that link potential causes to outcomes of interest. While the findings of these approaches are not considered as strong as paired case studies (which are more commonly used in social science research as a result), they have a robust history of producing insights that advance our understanding of complex political, military, and social problems.


Applying the logic of case study research design then allows us to apply best practices from case study design to the development of games’ purpose and objectives; concepts; selection of scenario setting; definition of scenario, rules, and roles; and data collection.  I review some initial thoughts in this presentation, including the need to:

  • Identify common game objectives, such as pattern analysis and variable identification, which can provide ways to categorize games. This can allow us to develop best practices for tackling similar design problems even when games address different problems for different clients.
  • Require designers to explicitly state their understanding of the problem being gamed and how that hypothesis shapes what issues are highlighted or ignored in game design.
  • Encourage designers to clearly define input and outcome variables of interest, particularly the role of player decisions. Designers should also think through what confounding variables may appear in a game design, and how they might shape what can be concluded from the game.
  • More carefully select the scenario setting for games based on what type of analysis is being performed.
  • Consider how inevitable logistical limitations shape the testing environment of games, and how these limits should scope the applicability of game findings.
  • Better tailor data collection to strengthen analysis.

Each of these areas offers potential avenues for further development of more detailed best practices and techniques.
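To make a few of these points more tangible, here is a small, hedged sketch of what a case-study-style “codebook” for a game might look like when written down as data structures: the design hypothesis, input and outcome variables, confounders, and a decision log are all declared explicitly before play. This is purely my own illustration; every field name and example value below is hypothetical rather than drawn from the presentation.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a case-study-style codebook for a game. All field
# names and example values are invented for illustration.

@dataclass
class DesignHypothesis:
    statement: str                # the designer's stated model of the problem
    input_variables: List[str]    # conditions the design varies or controls
    outcome_variables: List[str]  # what the game is meant to observe
    confounders: List[str]        # factors the design cannot control

@dataclass
class DecisionRecord:
    turn: int
    team: str
    decision: str
    stated_rationale: str         # captured as close to verbatim as possible
    inputs_at_decision: dict      # values of the input variables at that moment
    deviates_from_group: bool = False  # flag outlier views for follow-up interviews

@dataclass
class GameRecord:
    purpose: str
    hypothesis: DesignHypothesis
    scenario_setting: str
    decisions: List[DecisionRecord] = field(default_factory=list)

    def log(self, record: DecisionRecord) -> None:
        self.decisions.append(record)

# Usage sketch
game = GameRecord(
    purpose="variable identification",
    hypothesis=DesignHypothesis(
        statement="Shorter warning time pushes players toward pre-approved options",
        input_variables=["warning_time", "force_posture"],
        outcome_variables=["option_chosen", "time_to_decision"],
        confounders=["participant seniority", "prior exercise experience"],
    ),
    scenario_setting="notional regional crisis",
)
game.log(DecisionRecord(
    turn=1,
    team="Blue",
    decision="request additional ISR",
    stated_rationale="need a better picture before committing forces",
    inputs_at_decision={"warning_time": "72h", "force_posture": "dispersed"},
))
```

Even this modest amount of structure ties the data capture plan directly to the stated hypothesis and makes deviations from the group easy to flag for follow-up analysis.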

PAXsims thoughts on Ducharme on COA analysis gaming

Earlier this week, Devin and I both listened to a great talk by the Naval War College’s Dr. Doug Ducharme for the MORS Wargaming Community of Practice on best practices for wargaming in support of Course of Action (COA) analysis. This is the second of three posts: the first summarized Doug’s talk, and the third will have some thoughts from Devin.

I found Doug’s presentation, as well as the discussion that followed his talk, to be very insightful and thought-provoking. It was particularly useful that Doug offered concrete guidance for game designers to improve their practice. The suggested best practices mirror my own experiences well, and serve as a useful set of guidelines for new gamers. However, there are two points that I want to explore further: Doug’s distinction between educational and analytical gaming, and his distinction between free and rigid adjudication.

Doug argued that all games are experiential. What differentiates educational and analytical games is whether the goal of the game is to change the participants, or to change our base of knowledge. This definition is related to, but somewhat different from, what I’ve used in my own work. In past work, I’ve defined the types of game purposes using the 2×2 below:

[Figure: a 2×2 typology of game purposes]

As a result, I tend to think of analytical games as seeking to gain a better understanding of a problem, while education games seek to make people better able to solve similar problems in the future. I need to think more about how the distinction Doug points to fits into this model.

Doug’s definition also suggests to me a somewhat troubling fact: the majority of events that are run to improve US strategy today are actually focused on improving decision makers’ future capacity. On one hand, I think gaming can provide excellent educational value and professional development. On the other, I don’t want that to come at the expense of thinking through strategy and plans to make them as robust as possible. I left Doug’s talk hoping that another participant’s comment that “all games are both educational and analytical” is right!

The second point I want to tease out a bit more is Doug’s definition of adjudication methods. The talk, and the discussion after it, clarified for me something that has been bothering me about how gamers talk about adjudication for a long time. A lot of discussion around gaming for analysis argues that the more rigid the game system is, the more analytical it is. As a qualitative/mixed-methods person, this rush to quantification always rubs me the wrong way, and I think this talk gave me a new way to frame why it bothers me.

I think that most of the time when gamers talk about free or rigid methods, we are actually conflating two different ideas. The first concept is a decision made by the game designer about how structured a technique to use to capture and analyze data about adjudication. Here, we can think about a spectrum that ranges from very loose adjudication, where rulings are made with few restrictions (and likely little documentation), to a very rigid system with detailed protocols for documentation and adjudication. The second concept deals with how well specified a model is used to generate the outcomes of player decisions. Unless a game designer misses something in their research, this factor is limited by the state of knowledge on the issue being gamed. In some cases, we may have a very concrete and detailed theory of what should happen, but at other times our models of cause and effect are less well developed, and we are left to deal with some pretty underspecified models.

While I do think that it is easier to establish structured adjudication rules when we have a well specified theory behind our adjudication, I don’t think the two concepts are necessarily the same. For example, one participant on the call referenced matrix gaming, which can provide a great deal of structure to game adjudication, even when causal models behind adjudication are fairly nebulous.
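As a hedged illustration of that point, here is a small sketch of a matrix-game-style adjudication record: the procedure is rigid (arguments for and against are documented, weighed, and resolved with a roll, and the umpire’s rationale is recorded), even though no detailed causal model sits underneath. The scoring rule and field names are my own inventions, not any particular published ruleset.

```python
from dataclasses import dataclass
from typing import List
import random

# Hypothetical matrix-game-style adjudication record. The weighting scheme and
# field names are invented for illustration only.

@dataclass
class Argument:
    claim: str
    reasons_for: List[str]
    reasons_against: List[str]

@dataclass
class Ruling:
    argument: Argument
    success_probability: float
    roll: float
    succeeded: bool
    umpire_rationale: str

def adjudicate_argument(arg: Argument, umpire_rationale: str, rng=random) -> Ruling:
    """House rule for the sketch: 50% baseline, +/-10% per reason, capped at 10-90%."""
    p = 0.5 + 0.1 * (len(arg.reasons_for) - len(arg.reasons_against))
    p = min(0.9, max(0.1, p))
    roll = rng.random()
    return Ruling(arg, p, roll, roll < p, umpire_rationale)

# Usage: every ruling leaves an auditable record of what was argued and why it
# was resolved the way it was, which is what post-game analysis needs.
ruling = adjudicate_argument(
    Argument(
        claim="Red can reposition coastal defenses before Blue's strike arrives",
        reasons_for=["short road distance", "pre-surveyed alternate sites"],
        reasons_against=["Blue has persistent ISR over the area"],
    ),
    umpire_rationale="movement is plausible but likely to be detected in progress",
)
print(ruling.succeeded, ruling.success_probability)
```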

Treating the two design criteria as if they are connected, or even the same, lets us get away with under-designing games when we are dealing with complicated, poorly defined issues. For example, “free”-method games often rely on expert judgment for adjudication, with experts making determinations about the effects of player actions without providing much more justification than their credentials. However, by having less structure in the adjudication, game designers often give themselves a pass from looking carefully at what mental models the experts are using to determine outcomes. As a result, we end up never really knowing how well specified the model that drove the action of the game actually was, producing inevitably nebulous and unsatisfying post-game analysis.

I’d argue that game designers should treat structured approaches to adjudication as critical to good game design. Then, even when the underlying models are underspecified, games can contribute to clarifying the models that do exist, and over time, to increasing model specificity. This is a concept that has been discussed with regard to wargaming emerging issues, but I think it needs to be applied much more broadly.

This is a topic that a lot of my recent work has focused on, and I’m due to speak to the MORS COP on the topic next month. I’m hoping to be able to share some of my thoughts here in advance of that presentation. As a result, even more than usual, I’d love folks’ feedback on these ideas!

Ducharme on COA analysis wargaming

Earlier this week, Devin and Ellie both listened to a great talk by Naval War College’s Dr. Doug Ducharme for the MORS Wargaming Community of Practice on best practices for wargaming in support of Course of Action (COA) analysis. This is the first of three posts: the first summarizes Doug’s talk, and the second and third provide some thoughts from Ellie and Devin.

Wargaming is the technique recommended in military doctrine for analyzing COAs during the fourth step of the joint operations planning process. In actual practice, restrictions on staff time, skills, and commander involvement can all critically compromise the ability of the military to actually follow through on this. Doug stated that he has seen an increase in the attention paid to these games in the last few years. However, he also noted that not enough work has been done to document which gaming methods do and do not lead to successful COA analysis.

To set up his discussion of COA analysis gaming best practices, Doug started by defining gaming (using Peter Perla’s often-cited definition), and discussing how games differ from one another. He established that games can be defined along two axes: 1) whether the game has an educational or analytical purpose, and 2) whether the game examines concepts or capabilities. In this model, COA analysis is defined as being educational and conceptual.

Doug noted that with increased interest in COA analysis games, there has also been interest in incorporating other analytical techniques to support COA analysis. In particular, leveraging campaign analysis techniques has become more popular. Doug used his two-by-two to show why this can be an uncomfortable melding. In Doug’s model, campaign planning is an analytical technique, focused on capabilities. This places it in the opposing quadrant to the educational, concept-focused purposes of COA analysis gaming.

He then moved on to lay out five best practices for COA analysis gaming:

  1. While doctrine suggests several methods for COA analysis, it does not offer strong guidance about how to select among techniques. Given that games, by definition, are focused on decision making, Doug recommends defaulting to the critical events method, which focuses analysis on decisions and their potential impact.
  2. Doug argued that the use of an active red cell is critical to COA wargaming. He specified that the cell’s objective should be to improve the COA, not to “win” the game, and that there should be a facilitator in the cell who can remind participants of this goal if they go off track. He also has found it helpful to keep the red cell to a roughly equal size with blue, and staff it with both intelligence officers and planners. These strategies create an active, but not overly competitive, red that can provide a strong critique of the COA.
  3. Doug argued that rather than defaulting to a format of sequential moves with alternating action by red and blue, COA wargaming moves should ideally be made simultaneously to better mirror reality. If turns must be sequenced, game designers should determine who ought to have initiative based on the scenario in play, rather than defaulting to a blue first move.
  4. Doug described adjudication options as a plane, with one axis running from move-step to running time, and the other axis from a free to a rigid method of adjudication. He argued that even when using relatively free methods of adjudication, having a structured process to evaluate player decisions is important. He also argued that most COA analysis games have “open adjudication,” with fairly move-step handling of time and fairly free adjudication methods. He tied this point back to his earlier discussion of the difference between COA analysis and campaign analyses, which have much more rigid adjudication rules.
  5. Finally, Doug stressed the importance of providing clear criteria for evaluating COAs in advance. Doing so is critical to determining how to assess the COA’s strengths and weaknesses. This then naturally leads into the next step of JOPP, COA comparison, where pros and cons are discussed.

Doug ended his talk by arguing that if we are looking to add rigor to the COA analysis process, it would be better to focus on approaching games with an analytic mindset rather than trying to incorporate campaign planning tools that may not be the right fit. He provided a few examples of the use of Analysis of Competing Hypotheses and the Analytic Hierarchy Process as tools to strengthen COA analysis games, showing how post-game analysis can also strengthen findings.

Rubel and an introduction to mixed-method gaming

Several weeks ago, well-respected wargamer Barney Rubel posted a short article critiquing SecDef Hagel’s call to improve gaming as a means of enhancing innovation. The piece is a thoughtful review of many of the strengths and pitfalls of gaming and is well worth a read. However, I was struck by how many of Rubel’s arguments are old hat within the gaming community. For example, those readers who followed up on Rex’s suggestion to read Crisis Games 27 Years Later (which I also highly recommend) will recognize many of Rubel’s arguments.

While these limitations are important to be aware of, too often gaming is either written about in glowing terms by outsiders to the field (for an example, see Dave Anthony’s poorly-received Atlantic Council event from this fall) or in cautious warnings by experienced gamers like Rubel noting the limits of the field. While professional conferences and publications feature a bit more nuance, they too can often fall into the same patterns.

While both points are important to consider, neither is very useful in thinking about what can be done to make gaming better. However, in the last two years or so I’ve seen more and more work that explores new techniques, and attempts to capture how they did (or did not) improve the ability of games to meet their objectives.

My goal is to use some of my time here at PAXsims highlighting these new techniques. My hope is that doing so will give a better sense of some of the areas where the field has made improvements, and help circulate new practices for feedback.

To start out this discussion I want to talk about one of the areas I am most excited by – the use of new qualitative techniques that complement quantitative techniques and can be used in tandem with them to produce richer analytic results.

Wargaming has often been something of a little brother to operations research. As a result, the instinct of gamers (and DoD analysts more broadly) is often to reduce problems down to be as specific, and ideally as quantitative, as possible. Many gamers have therefore preferred to look at problems that can be reduced to combat performance tables and other probabilistic ways of generating results. Such a strategy reduces many of the pitfalls of subjectivity raised by Dean Rubel. However, these methods do so by stripping out much of the complexity and indeterminacy that makes problems challenging in the first place, and as a result they can greatly limit gaming’s utility.

An alternative to this approach relies on qualitative analysis that may offer less specific findings, but allow complex problems to be approached directly. Good qualitative analytic tools require a structured research design in advance of the game, and careful data collection that can allow the game to be analyzed after the fact. More and more often, gamers are discovering new techniques from these traditions that can contextualize finding to offer rigorous, useful analysis without artificially simplifying problems.

To my (political-scientist-trained) mind, this distinction mirrors the division between quantitative and qualitative approaches in political science. While sometimes treated as antagonistic approaches, a great deal of excellent recent work has been produced that leverages techniques from both. Using “mixed method” or interdisciplinary approaches can allow one method to buttress the weaknesses of the other. They can also be used iteratively as a way to drive research. (For those interested in more context about how this plays out in political science, I highly recommend this paper by Kai Thaler that discusses the application of a mixed methods approach to the study of political violence.)

I believe that this same approach can also be used to improve the quality of games by effectively leveraging quantitative and qualitative techniques to build better analysis.

One of the most important precepts of mixed methods is that not all techniques are appropriate to all questions. To my mind, questions about what potential causes are linked to specific outcomes are best handled qualitatively. Questions about the size of the effect are best handled through quantitative means. Questions about categories into which events might be divided (or to put it another way how similar or different events are) may be suitable to either approach, depending on what types of quantitative and qualitative data are available.

When determining which approach is most appropriate, I also tend to look at what type of data can be collected and use that as a guide to selecting a method. If you are interested in the impact of an economic policy that would naturally be measured in dollars, or weapons performance that can be measured in rounds fired per minute, quantitative is the way to go.

If, on the other hand, you are interested in the process of decision making between groups of people, where the output is spoken words or policy decisions, there is often no natural quantitative proxy. In these cases I think you are often better off using qualitative techniques that can be applied to analyze data like transcripts of dialogue, descriptions of interpersonal behavior, and records of what decisions were made at what points in time.

Multi-method approaches to gaming are still in their infancy. In part, this is because the field lacks a strong understanding of what qualitative approaches are available and what problems they are appropriate for in a gaming context. As a result, I believe a necessary step toward realizing the full potential of mixed-method games is improving our qualitative analytic practices. There will be more on this point from me in the next year!
