National Defense University is currently seeking a Wargaming Fellow and an Assistant Wargaming Fellow to support and manage "the research, design, development, planning, and execution of national security focused experiential learning initiatives (including national-level education exercises and simulations, conferences, symposia, and workshops) conducted by the Center for Applied Strategic Learning." Both positions require US citizenship and the ability to obtain a Secret clearance, and the deadline for applications is May 16. Additional details at the links above.
SAIC is currently seeking several candidates to support the USINDOPACOM K. Mark Takai Pacific Warfighting Center on Oahu, Hawaii. All require a Secret clearance, and most require the ability to obtain a Top Secret or TS/SCI.
In February, the United States Institute of Peace, the Quincy Institute for Responsible Statecraft in Washington, and the Sejong Institute in Seoul conducted a series of three interrelated peace games exploring confidence-building for the Korean Peninsula.
The peace game exercise employed three hypothetical interconnected scenarios that progressively and cumulatively moved toward a final and comprehensive peace settlement on the Korean Peninsula. The overarching purpose was to examine U.S., North Korean, South Korean, and Chinese responses to mostly conciliatory measures from the other sides and, in the process, encourage diplomatic risk-taking and uncover new challenges and opportunities that have been obscured by the current real-world stalemate. The scenarios were prescriptively established to advance exercise objectives, but participants’ agreements and positions from preceding scenarios were incorporated into subsequent ones as much as possible. The participants were provided with each scenario at least 12 hours before the start of that particular phase of the peace game.
Participants included 16 experts on security policy related to the Korean Peninsula and Northeast Asia, including former diplomats, policymakers, academics, and think tank analysts. The participants were assigned to play the role of negotiators on four teams: the United States, the Democratic People’s Republic of Korea (DPRK, or North Korea), the Republic of Korea (ROK, or South Korea), and China.
The Militainment Game Design Competition 2022 offers emerging game developers a unique opportunity to showcase their creativity and skills. We are inviting game developers (teams or individuals) to submit game ideas that foster broader public interest in a wide range of military occupations (see resource list below). The format of the proposed game ideas is open: they could take the form of mini-games, multi-player role-playing games, or strategy games, among others.
The core requirements for the game template include realistically portraying military characters as confident, disciplined, and focused individuals who remain eager and passionate about humanitarian work assisting people and communities during natural disasters such as floods and forest fires. It is also imperative to include racial, age, gender, and religious inclusivity among the military characters.
Game developers are to submit a video, including closed captions, of their designed game as well as a proof of original concept.
The top three teams shall receive contracts equivalent to the amounts below to further develop their proposed game into a functional prototype.
All games must include a "meaningful digital component" (so no manual games). The deadline for submissions is May 15. Further details at the link above.
The following article was written by Steven Wagner, Senior Lecturer in International Security, Brunel University London. An earlier version was presented to the International Studies Association annual meeting in March 2022.
In this paper, I share and discuss a simulation I run for my module on Intelligence History: Failure and Success, which is part of our MA in Intelligence and Security studies. My goal here is to share some evidence I have collected about the pedagogic value of the exercise, and the after-action report which students submit (and are assessed on). I will publish some drafts of the briefing papers along with this report.
When I was a DPhil student, my supervisor, Rob Johnson, ran a similar kriegspiel for staff college, and he kindly lent me his briefing materials, which helped inform my own simulation design. As a postdoc at McGill, I was encouraged by Rex Brynen to consider an exercise like this in the style of a matrix game. Rex regularly uses this medium for teaching, assessment, and research, and is a true champion of this community. He and Maj. Tom Mouat were very helpful in providing feedback during my work designing this simulation. Their Matrix Game Construction Kit, which my department kindly financed, was central to my game design as well. Our print service made up some nice maps and tokens for us too.
The scenario I run is drawn from my research. It is based on the 1929 riots in Palestine, also known as the "Buraq Revolt". This was a major watershed for the Zionist-Palestinian conflict, as well as for the British authorities who had ruled the country since the First World War. In August 1929, following a nearly yearlong campaign by both the Muslim and Jewish communities to arouse support for the defence of the Western or "wailing" wall in Jerusalem, communal violence broke out. Palestinians saw the moment as a popular revolt – the first effort since 1921 to resist British colonial policy, which they feared would lead to their displacement by Jewish colonists. The Jewish community of Palestine saw this as a merciless attack by mobs against defenceless communities who had lived in Palestine long before the Zionist movement began to prepare for the rise of a Jewish state. Recent research has shown that it in fact strengthened ties between Zionist colonist institutions and those older non-Zionist Jewish communities.[i] The government was caught off-guard: most administrators and the chief of police were away on holiday. Investment in defence had declined for years, and the police were a tiny force of limited capability. The RAF – in charge of regional defence – was called in aid of the civil power to help quell the revolt. Understrength itself, it brought in troops from Egypt and Malta, as well as a battleship, a cruiser, and an aircraft carrier. Within a week, 133 Jews had been killed, mainly by Palestinians, and 116 Palestinians had been killed, mainly by British forces.
My simulation is set up with four or five teams. The Supreme Muslim Council is led by the grand mufti Amin al-Husayni, later notorious for his association with Nazi Germany. The Jewish community is represented by the dominant Labour Zionist party, which controls nearly all national institutions – including secret militias. In some iterations of the game I have had a team for the right-wing "revisionist" Zionist party, which played an antagonising role before the conflict, although I have since removed it. Then there is the Air Officer Commanding, which controls the RAF, the Transjordan frontier force, regional intelligence collection units, and, if needed, reserves from abroad. Finally, last in the order of play, we have the Palestine Government, a British colonial government with no legislative arm, which controls the police, its criminal investigation department, the post office and censor, and other arms of civil administration.
The simulation begins on the day violence spilled over. The game simulates a crisis of colonial security amid popular uprising and communal violence. What makes my design a bit different is that teams propose both an "action", as in a usual matrix game, and a "query" for their intelligence sources and informants. The latter looks different for each team, as each has organised these quite differently and for varying purposes. Nonetheless, queries could consist of questions decisionmakers might ask of their experts based on data they would already plausibly have, or they can task them for collection and analysis. The goal is for them to explore the relationship between intelligence and decision making, policy, and in some instances, tactics.
This scenario presents each team with a set of trade-off dilemmas. If the Jewish community fights back, their secret arms stores are exposed to British and Palestinian scrutiny. If they complain to the British or embarrass them, they risk alienating their main patrons. If the Muslim community overtly supports rioters and revolutionaries, they risk imprisonment, loss of their official jobs, or worse. If they fail to at least secretly support revolution, they risk losing popular support amongst Palestinians. If the RAF appears too weak, its decade of "Air Control" could come to an end. If it cracks down too hard, it could damage British policy interests. The list goes on.
Historians tend to avoid counterfactuals. As a researcher, it is exciting to use this exercise to explore the utility of simulation as a means of thought experiment. I am not trying to prove that X could have happened in real life, but rather, to highlight the conditions and variables which were critical to the unfolding of the simulation, as compared to real life. In other words, I am curious: was there a way to prevent revolt or the declaration of an emergency? Could Palestinians have found a way to exploit the moment and bring an end to the Zionist policy? What historical forces stood in the way of each?
In the latest iteration of this simulation, I skipped the emergency phase of the scenario and instead focused the students on the phase between the restoration of civil authority and the arrival from London of a commission of inquiry – so it was less focused on the map, armed forces, arms stores, and violence, and more focused on subterfuge and diplomacy as each team tried to create favourable conditions and collect supportive witnesses who might testify in front of the commission. Many also tried suppressing evidence which harmed their interests. I also had to reconfigure the teams accordingly. I have attached both briefings.
This has also made for an interesting mode of assessment. Originally, I had intended to assess the gameplay as a means of evaluating student analytic capability. That is, since matrix games require convincing argument, I could reward students for their success in that process. However, this is hard to document. The rules for assessment in both UK higher education generally, and at Brunel specifically, mean that this would be a tricky thing to assess fairly and accurately unless I recorded the entire eight-hour simulation.
So, I instead asked students to keep a game diary based on a template I provided. Before the simulation they meet in teams and plan their strategy and opening moves. The diary template asks them each turn to evaluate how their team is making decisions and sticking to policy and strategy, as well as to interpret the scenario as it unfolds. They have only their experience and my master narrative to draw upon for this. They are told to keep their diaries secret until after the game.
After the game, they sit together and compare notes. As a group they create a brief reflection on their achievements and failures, noting specifically where intelligence impacted decisions, security, and so on. Then, they are asked to present individual "After-Action Reports". Here they share and compare perspectives. They must compare and find areas of agreement and contradiction in their diaries. I ask them to identify what details they chose to record and ignore compared to their teammates, and to explain why. For example, how could it be that two to four of you sat through the same events on the same team but produced such different accounts or interpretations of the game? Can this be explained as an analytic bias? Is it a matter of perspective? Or did you disagree about things like priorities or strategy? I have also asked them to identify the key conditions for success and failure.
Perhaps it would be most beneficial to illustrate the pedagogical value of the AAR by showing some student responses. In their AARs, students show off both their historical literacy and their analytic skills as they reflect upon their contributions, successes, and failures, and compare the narrative to actual events. Some students give a cursory answer which treats my questions as though they were a list. However, the best and brightest offer some of the most interesting and sober accounts of self-reflection I have seen. They really reveal during this exercise whom I would hire as an analyst. I note here in particular that women, and especially women from racialised minority groups, have shared some of the most important insights I have come across. For example:
I think some differences in our approaches as events may have sometimes been because of our ethnic and religious backgrounds. I found that I focused a lot on society and faith – highlighting the literacy rate of Palestinians [at that time], Friday prayers for Muslims (which was echoed by a team member who is from the Middle East) and giving suggestions about the capabilities of the Jewish groups on a Saturday given that it was their holy day, whereas others who may not be so religious and come from European backgrounds did not consider these factors initially. I think this was a strength in our group as we were able to give different approaches which the rest of our team may not have thought of, and this broadened our range of approaches.
Reflections like this show the importance of broadening the way we educate analysts. It needs to go beyond structured analytic techniques and other social-scientific skills. Students and future analysts must understand people, and how the world works. They need to be able to collaborate like this and feel confident to share this kind of perspective. This is another example from last year:
I recall one episode in the latter part of the game where [X] had a mathematical argument on the effect of one outcome. My interpretation of the game was closer to that of an art… I believe this difference is an outcome of our own backgrounds. [X] being an expert in cyber-security and myself being a military officer. We had relatively few disagreements on how to interpret, rather, we had to spend more time to find consensus on which actions which align with our strategy.
That example illustrates how students with varied experience, coming from different disciplines, learnt to communicate and work well together. Another intelligence professional from the armed forces compared her professional wargaming experience with the classroom exercise. She led the RAF team and had expected the government to declare an emergency early. Yet the government team did everything they could to prevent this outcome – holding out for days. She felt she had misled her team by planning for an early emergency. She also offered interesting criticism, saying that she had expected adjudication to eliminate implausible orders and that, after some were allowed, her plans had been disrupted. I agreed, and this is perhaps one of the core challenges in running a historically based scenario. The drive for realism is a major constraint on students' creativity, but it is also necessary for the pedagogical design.
However, my favourite reflections tend to dwell on explanations for failure. This one comes from a team representing the Muslim community’s leadership, often accused in real life of planning the revolt.
Our intelligence queries became progressively more difficult to decide upon because of the state of emergency declared by the government. It meant that we were censored and persistently spied upon by the other players[’ characters] which meant that it was a struggle to [successfully draw results from our queries]. We therefore had to rely heavily on our movements instead… we ended up making poor decisions which put [us] at risk.
Here, the student is describing how, during the last day of the emergency, they ordered direct support for revolutionaries and rioters and exposed the Palestinian leadership to danger. Secrecy was vital to their long-term political survival. The students panicked but also failed to separate their interests as players (who knew the game was ending) from those of their characters (who had to live the rest of their lives). As a result, they were more prone to risky decisions. Another student commented on the same situation:
I had nothing better to suggest so although I didn’t think it was a good idea, I went along with the move. I think this can be reflective of the craft of intelligence in real life scenarios. We encountered a large failure to due our approach to events under pressure. We were not vocal enough when we felt the move was not suitable and I think this scenario highlights that disagreements should be highlighted when dealing with intelligence matters and we should not just go along with everyone else.
However, even though they made stupid decisions in-game, this level of honesty and self-awareness earned them good grades. Analytic staff who can demonstrate this quality, I am confident, will be less prone to systematic error and bias. I think it is also worth highlighting here the value of exposing students to lessons like this in a scenario, before they are on the job. This requirement for self-reflection also makes for easy grading criteria: it clearly separates A’s from B’s.
Another student remarked: "my most significant personal takeaway is that there is no 'winning' and 'losing', just a trade-off of outcomes based on the available information you have and how you are going to use that to go forward." They added that although they hadn't worked in intelligence, they had struggled in the module to understand the storied history of intelligence failure. They concluded that the simulation highlights the complexity of intelligence, its tendency to change rapidly, and the fact that decisionmakers are not always preoccupied with it. Before our course, they had struggled to understand, "if you have all the information, how could you make strategic mistakes?" I doubt any of my students are still confused about that after this module, and especially after the simulation.
Moving forward, I will try to address problems of scale with the game. We typically never complete the first phase of the scenario before our eight-hour allotment is complete. Some students have suggested short weekly adjudication sessions online, spread across the year to achieve that. I am also grappling with the glut of detail in the game briefings. Students remark that they are complex. I designed it from my own research so that the details are precise and accurate, and thereby it could offer as many realistic parameters as possible during simulation. However, this means that, for the students, the briefing materials are long and require lots of preparation and background reading compared to other assignments. So, I am looking for ways to simplify things, or even for new scenarios which are simpler.
I welcome your feedback and questions. I hope I have also been able to share something helpful from my teaching practice which has broader implications for intelligence education and pedagogy: That the simulation gives fodder for students to practice skills such as empathy (with their characters, and each other); as well as honest self-reflection and an appreciation for perspective. I’ve tried to teach them how historical and intelligence analysis overlap, including issues of sources, and an appreciation that bias is part of the analytic process. I think the scenario helps teach them how to spot such biases, and to grapple with them in a team setting.
The Wargaming Network is pleased to announce the second lecture in our 2021-2022 public lecture series on wargaming. The theme for this year is evaluating and assessing the impact of wargaming on individuals and organizations, and the series will feature speakers who have made important new contributions to wargaming assessment. The lecture will take place online on 12 May, 17:00-18:30 BST. Please register for the lecture here to receive the login details for the online event.
This lecture will focus on the enhancement of evaluation usefulness as a possible avenue to increase impact, built around Ralf Beerens' PhD research, which seeks to improve the usefulness of disaster response evaluations with respect to their contribution to disaster risk management (available online via: https://portal.research.lu.se/en/publications/improving-disaster-response-evaluations-supporting-advances-in-di). The dynamic disaster response environment in which his research took place, and the challenges it both poses and faces, resembles that of wargames. Overall, this research shows that to gain maximum benefit from disaster response evaluations, the outcomes must be systematic, rigorous, evidence-based, and actionable. This is challenging, however, as it creates a dilemma around the so-called 'rigour-relevance gap', which refers to the hurdle of simultaneously delivering practitioner relevance and scholarly rigour.
There will be a mixture of scholarly rigour and practitioner relevance by introducing and discussing various approaches, concepts, processes, and models such as the research design strategy, design science, and evaluation descriptions. This is combined with insights into the Dutch crisis management system and practical experiences (with evaluation), as well as key research findings that can be transferred to wargames. This lecture will propose some ways forward and open a conversation regarding how to manage both the process and the products of an evaluation and possible scientific and practical contributions, in order to optimise its usefulness for a range of purposes and users. In general the session is aimed at enhancing our understanding of the role(s) of evaluation in dynamic and complex environments such as disaster risk management and the transfer of these insights to wargames, keeping in mind that it is not the evaluation itself that leads to improvement; it is the use of the evaluation that can lead to improvement. Evaluation should be seen as a means to an end.

Dr. Ralf Beerens is a senior researcher at the Netherlands Institute for Public Safety (NIPV) and is also a senior lecturer for the Institute's Master in Crisis and Public Order Management (MCPM). In September 2021 he received his Ph.D. from Lund University, Division of Risk Management and Societal Safety, where he remains affiliated as a visiting research fellow. In his research he focused on disaster response (exercise) evaluation. He remains particularly interested in the evaluation of the operational performance of (international) emergency response organizations, teams, or modules during exercises and crises, which also reflects his professional experience as an evaluator.
Aimpoint Research, a global strategic intelligence firm specializing in agri-food, is looking for a (war)game analyst:
Aimpoint Research® leverages military-style wargaming techniques to disrupt our clients’ daily thinking patterns and explore complex issues across a wide range of topics within the agri-food value chain. Our team regularly collaborates with C-Suite industry leaders to anticipate challenges, take advantage of opportunities in the marketplace, and ensure that they have the tools needed to understand issues beyond the headlines. With customized intelligence reporting and our roster of industry experts, we deliver challenging and innovative experiences that provide our clients with competitive advantage.
As a Wargame Analyst you will work within a multi-functional team to design, develop, execute, and report on wargames which include, but are not limited to, table-top exercises, manual simulations, and other serious games normally executed in-person. Aimpoint Wargames may be stand-alone events or embedded within a larger portfolio of client-focused research efforts.
The position requires at least one of the following:
1-3 years practical experience developing analytical wargames within the Department of Defense or a widely recognized defense contractor or think tank
Successfully completed a college-level wargaming design course
Successfully completed an industry standard wargaming or gamification certification at the Journeyman level or higher
This position is in Columbus, Ohio. Further details here.
This post was written for PAXsims by Robert Domaingue. Before retiring from the U.S. State Department, Robert Domaingue was the lead Conflict Game Designer in the Bureau of Conflict and Stabilization Operations. He now works with local organizations to utilize serious games for solving community problems.
Serious games are used to provide insights into complex problems. They help decision makers and staff test assumptions, examine strategies, and determine deficiencies in planning. Many different government departments, businesses, and organizations utilize serious games to provide a safe environment to learn from failure. These organizations can improve the design of their serious games by incorporating principles from experiential learning.
Experiential learning focuses on the learning that emerges from concrete experience and the reflection on and application of that experience. A frequently cited model of the experiential learning cycle comes from David Kolb's 1984 book Experiential Learning. He proposes a cycle that begins with Experiencing, moves to Reflecting, then to Generalizing, then to Testing, and starts over again with new Experiencing. The learner proceeds through all steps in order to make sense of the experience and apply the insights. There are similar earlier models from John Dewey (Observation, to Knowledge, to Judgement, to more Observation) and from Kurt Lewin (Concrete Experience, to Observation and Reflection, to Formation of Abstract Concepts and Generalizations, to Testing Implications of Concepts in New Situations, and continuing the cycle with new Concrete Experience). All of these models highlight the importance of reflecting on the nature of the experience for general learning to occur. Furthermore, John Dewey felt that experiencing something served as a linking process between action and thought. But not all experiences lead to learning. In his 1938 book Experience and Education, Dewey wrote, "Any experience is miseducation that has the effect of arresting or distorting the growth of further experience" (p. 25). This is an important point I will build on.
One problem with these models is with the very first step of identifying the experience. It implies that we actually understand that we are having an experience – that we “see it”. E.M. Forster said that the only books that influence us are the ones we are ready for. Likewise, we may only see what we already know. We may not identify the experience because we don’t have the awareness to make sense of the experience, or our prior conceptual models block us from seeing the experience as it is.
A way to highlight this act of "not seeing" what is in front of us is to explore two psychological experiments that examine "functional fixedness." Functional fixedness refers to not seeing the potential novel uses of something because of the narrow prior category of the object in your mind. The classic "candle experiment" gave subjects a candle, a box of thumbtacks, and a bulletin board, and asked them to attach the candle to the bulletin board. Most people tried to use the thumbtacks to stick the candle to the board, which doesn't work very well. A second group of subjects was given the same instructions and materials, with one small change: this time the tacks were removed from the box and placed next to the empty box. While it was a small change, it was large enough for people to see a new way of solving the problem. Subjects in the second group saw that if they tacked the box to the bulletin board they could then place the candle inside the box. When the box was full of tacks, the subjects' functional fixedness prevented them from seeing other uses for the box.
Another experiment involved giving subjects a problem to solve in which a length of string could be used in the solution of the problem. The string was hung from a nail on the wall, and most people figured out to use the string as part of the solution to the problem. Other groups were given the same instructions, but this time the same string on the same nail was used to hold a picture. In this case no one thought to use the string to solve the problem. The subjects’ functional fixedness on the string as part of the picture prevented them from seeing it as a resource to solve the problem. The blinders of our categories prevent us from seeing what is there.
How do we overcome “not seeing”, and what is the impact on serious game design? When utilizing the experiential learning models to guide our game design we should change the first step to “Identifying the Experience”. People do not necessarily understand the nature of the experience that they are expected to reflect upon and draw lessons from. The game designer must have a clear idea of the nature of the experience that the game will be providing to the participants. This does not mean that the experience needs to be clearly delineated for the players at the start of the game. It can be, but there are times when ambiguity and uncertainty are valuable features of the game. In these cases, the nature of the experience needs to be highlighted in the debriefing session at the end of the game. Here the facilitator can direct attention to how the players made sense of the experience and what they got out of exploring that experience. It is very important to look at the assumptions that players operated under as to what were viable and nonviable approaches to solving the problem.
Players bring prior experience and preconceived ideas with them to the game, and a novel experience that challenges pre-held beliefs may not even be seen. The game designer must be aware of the dangers of misinterpreting the nature of the experience by the players. It could easily lead to learning the wrong thing from the experience. If, however, the facilitator with the help of the players can identify examples of functional fixedness that occurred when approaching the problem, then they have identified fruitful topics to develop additional games around. These new iterative games could provide breakthrough thinking for approaching the problem. The insights are so valuable that the game facilitator must be continually searching for opportunities to explore them when they arise. Spending the time designing and playing serious games can be enormously useful for organizations if sufficient attention is given to framing the experience and guiding the learning that results from exploring that experience.
The 2022 PlaySecure conference will take place online on 15-18 June.
Play Secure explores the overlaps between play and security, looking at ways that games can be used to model real-life scenarios to help with decision-making, to anticipate upcoming issues, or to discover new ways that systems of all types can be manipulated.
From D&D-styled incident response exercises to sessions on the psychology of play in creativity, it offers four interactive days of talks, games, and workshop sessions devoted to play and security.
Global and online-first, community focused, with a wealth of content on security, gaming, and the areas in between; you won’t find anything else like this.
Non-exhaustive examples are:
Tabletop incident pre-enactments as attacker, defender, and stakeholder teams
CTFs
Threat modelling card games
How to find the fun in Security by Design
Security Poker
What can MMORPGs teach us about security and business crisis management?
How can a board game teach network security and DDoS mitigation?
How can gamification be made to work, and how can it fail?
Anything that brings together play and security… we'd love to see what you come up with…
The conference website and call for papers can be found here. The deadline for proposals is May 13.
The Military Operations Research Society is offering a three-day online course on designing tactical games on 3-5 May 2022.
In this class, we will focus on building tactical games. Such games require us to represent the details of battle. Whether we do this using computer or manual techniques, it demands no small degree of simulation. We need to simulate the interaction of forces, the effects of human factors and technology, and the effects of the environment on combat. We also need to understand how tactical elements are commanded, and how to incorporate representations of command into our games. Any good wargame strives to produce realistic adjudications and outcomes, but the realism of tactical games is tested even more stringently because the players can more easily relate game mechanics and adjudication to their own, personal, experiences.
All of this can make designing tactical games different from—and even more challenging than—designing operational or strategic games. This class will examine some of these challenges and their possible solutions in both theoretical and practical terms.
We will address the subject according to the different combat domains: ground, naval, and air. For ground combat we will discuss how good design must address basic concepts such as mission, time, space, forces, and command relationships. How do you bring all these variables together to create a realistic tactical environment for players to engage in ground warfare? We will review the development of different ways of representing ground combat based on a wide range of commercial and professional games and explore future challenges and innovative approaches.
Naval and air tactics are even more technically complex and interactive, involving systems from space to cyber and beyond. Games must represent not only putting ordnance on the target, but also the entire kill chain from identification to battle damage assessment. We will also explore requirements for gaming ground tactics primarily using manual games. Although these sorts of games lend themselves to digital simulation, digital simulations can limit designer and player creativity in the game design and execution processes. We will focus on designing exploratory games—games to create or test new tactics, weapon systems, or operational concepts. Our discussion of naval and air games will focus on the mid-to-high tactical level—more concerned about formations of multiple units and systems rather than individual ships or aircraft. This will allow us to examine games that incorporate multiple tactical options for the players and integrate the joint kill chain.
Participants will be able to influence the topics and detail covered depending on their interests and desires.
For example, we can go beyond traditional ground, naval, and air to delve into less common types of tactical games, such as tactical special operations games, requiring the representation and simulation of actions by individual operators. As part of these, we expect to draw from concepts in miniatures gaming to examine the challenges of micro-detailed games. We could consider as well the tactical issues in emergency response, cyber operations, technology assessment, humanitarian assistance, and disease response.
The course will be taught by Ed McGrady, Peter Perla, Phillip Pournelle, and Paul Vebber. Additional details and registration at the link above.
Our wargame’s advisors came from a variety of backgrounds and experiences, including United States military officers, representatives from NATO countries, two experts on internal Russian decision-making, and a retired Ukrainian colonel with experience on the Ukrainian general staff. The second iteration’s most significant change to gameplay was a switch from each turn representing a single day to three-month turns. This was done to allow us to play out a full year of combat operations within the time allotted to complete the wargame. Lengthening the game turn duration required a higher degree of adjudication abstraction than our previous wargame, but it proved essential to enabling players to look at broader operational and strategic considerations over the duration of a protracted conflict.
After applying expected geostrategic and operational developments over the remainder of this year and into the start of 2023, we determined that the Russians reached an operational culmination well short of their maximal objectives. Given the combination of Ukraine’s proven will and its capabilities in a defensive fight, the prospects for Russian forces in heavy urban combat proved daunting. By the end of the summer, Russia no longer possessed the forces to pursue major simultaneous objectives, nor the combat power to conquer a major city. All was not rosy for the Ukrainians, who lacked the combat power to go on the offensive and eject Russia from the occupied territories. With neither side able to achieve decisive military effects in the offense, the combined teams predicted without exception that, absent a negotiated settlement, the war is headed toward an indefinite stalemate.
The ramifications of such an outcome are immense. First, of course, is the toll in human suffering, as losses mount on both sides, and the refugee crisis remains unalleviated for a year or more. For the United States, a stalemate means that the ad-hoc defense-related resupply arrangements require systemization and the establishment of a quasi-permanent logistics infrastructure. Ukraine’s future success also requires the establishment of training centers that can regenerate Ukraine’s frontline combat power and allow these forces to reenter the fight.
As we conducted the wargame, the surprises came fast and furious. The first was that we entered the wargame with a flawed assumption about Russia’s prospects. Initially, we assessed that over the next four months the weight of the Russian force would gradually wear down Ukraine’s military and allow for a complete occupation of the country. After conducting open-source analysis to develop a current operating picture and assessing losses since the start of the war, the team agreed to fast-forward one month and assume the collapse of Mariupol, Sumy, and Konotop. The wargamers were then tasked to determine the major operational movements for the summer 2022 campaign, with the key decision being how Russia would employ the maneuver forces freed up by these successes, along with the option to commit forces held in reserve. In weighing courses of action and then employing the wargame to test them, it rapidly became clear that Russia lacks the combat power to collapse the Ukrainian military this summer.
Another surprise from the wargame was the validation of how national leaders’ political objectives can trounce the best military advice provided by generals. As the summer campaign played out, the “generals” (wargamers) were forced to decide how best to employ military forces and shift combat resources, including strategic reserves, to accomplish objectives. Political requirements dominated military decision-making, as expert military advice on future operations was overruled in favor of seizing objectives deemed more politically important. In this case, our Vladimir Putin insisted that spectacular victories were necessary to sustain his own power, repeatedly saying that the postwar condition of the army was of small consequence.
This is the kind of narrative most people imagine when they think of military war games—scenes in the bowels of the Pentagon, units fighting digitally on electronic maps, commanders pondering their next step in a fast-moving crisis. Victory in the simulation, so the popular imagination goes, shows how to win a real-life conflict. Defeat in a war game, on the other hand, is an acknowledgement that any actual conflict will likely be lost.
Contrary to the popular imagination, however, this is not how war games work. Rarely is a war game designed to predict the future or develop a single definitive strategy. Instead, a war game helps military planners and analysts explore and understand a complex problem, regardless of the outcome. Win or lose, the purpose isn’t to define a strategy for the U.S. military but to help it better understand the capabilities it has, what it can already do, and what it needs.
Whether it’s Taiwan or any other potential conflict, the scenario is rarely the focus of the war games we at CNA design for the U.S. Defense Department. Instead, war games are about better understanding how the U.S. military can build deterrence, what technology gaps could hobble its forces, how an adversary’s capabilities might evolve in response to U.S. capabilities, and how all that might impact what Washington should invest in today. Fundamentally, war games strive to explore and distill the fundamental nature of the problem itself—which rarely leads to definitive scenarios or solutions.
In fact, using war games to craft a clear-cut strategy is impossible. Done right, war games are a plausible method of providing a brief and limited glimpse into a possible future—a single future in a multiverse of possibilities. Trying to replicate victory in a war game, on the other hand, means trying to align both sides’ future decisions in a complex conflict with the scenario that played out during the game. Obviously, these decisions are numerous and mostly beyond one’s control.
What worked in a single war game has limited utility—it worked against a specific adversary making a specific set of decisions using a specific set of game rules that may or may not accurately reflect the world. Failure, on the other hand, doesn’t require the game to be a perfect simulation. We often hear complaints from players that our war game rules make the adversary “10 feet tall.” But it is better to stress U.S. forces more than to give the adversary too little credit and not stress U.S. forces enough. Stressing the capabilities of the U.S. forces to their breaking point from all sides allows analysts and researchers to identify vulnerabilities and what might be needed to fix them.
So, in a war game, pay no attention to who won or lost. War-gaming is about the process, not the result—and analyzing that process is what will allow the U.S. military to turn losing into winning.
You can read the full article at the link above. For more on wargaming Taiwan, see Drew Marriott’s 2021 summary of recent Taiwan wargames here at PAXsims.