The 17th NATO Operations Research and Analysis conference will be held on 30-31 October 2023 at the Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland (with some hybrid options).
This year’s conference theme is “Changing character of defence and deterrence: the power of analysis”. Collective defence is at the heart of the Alliance and deterrence is a core element of its overall strategy to prevent war, protect Allies, maintain freedom of action and uphold its values. NATO faces the most complex security environment since the end of the Cold War. Innovations, such as autonomous weapons systems, are changing warfare. Shifts in the global balance of power, such as the rise of China, are challenging the Alliance’s values. And aggressions, such as Russia’s actions against Ukraine, are threatening the security of Allies. These major developments, along with the new Strategic Concept, underscore the need for the Alliance to ensure that its deterrence and defence remains credible and effective. The theme reflects the long-standing practice of Operations Research and Analysis in Defence: tackling ongoing challenges faced by the Alliance while looking to the future to bring new methods to old challenges, or well-established methods to future challenges.
Paper proposals are due by 15 April 2023. Registration will open in July. The conference will be open to representatives from all NATO Nations, NATO Bodies, NATO Agencies, Australia, Austria, Finland, Ireland, Japan, New Zealand, Sweden and Switzerland.
The US Marine Corps’ Training and Education 2030 report is now out, and it foresees a major role for wargaming in professional military education:
Wargaming is a proven technique to examine warfighting concepts, train and educate leaders, explore scenarios, and assess how force planning and posture choices affect campaign outcomes. Our wargaming scenarios will incorporate the full array of all-domain capabilities, ensuring our leaders can understand the meaning, risks, and opportunities presented. We must ensure that the outcomes of our wargames feed back into concept development, with a focus on validation or appropriate adjustments to the concepts.
MCU and Training Command use wargaming to familiarize students with evolving Marine Corps concepts and train decision-makers in fighting a thinking enemy. The new Marine Corps Wargaming & Analysis Center, now under construction at Marine Corps Base Quantico, will substantially increase our capacity to conduct wargames and campaign analysis. The close working relationship and physical proximity of the Marine Corps Warfighting Laboratory’s (MCWL) Wargaming Division and MCU’s Krulak Center has already enabled mutually supporting and beneficial relationships that advance our force design wargaming and experimentation needs, while simultaneously enhancing the training and education of our leaders against a peer threat.
28. NLT 1 July 2023, TECOM will implement a plan for wargaming at resident PME programs in order to ensure students can wargame realistic scenarios at the appropriate classification level and remain current on operational matters while assigned to formal learning courses.
The folder also contains an Index organized by year, with titles and authors, of ALL presentations including those for which we are missing the files. The Index has links to those files which we do have.
If you ever gave a presentation to the Wargaming CoP, please check the index, and if your presentation is missing please email it to me (email@example.com) and I will add it. Many thanks.
Bill Simpson’s “Compendium of Wargaming Terms” is now hosted on the Georgetown University Wargaming Society Webpage (thanks to Sebastian Bae and William Simpson) under “Wargaming Resources”. This is the Dec 2022 version and is the most recent one that Bill has edited. This document will be updated annually, and you can propose additions, deletions and edits using a form on the Compendium landing page.
Purpose and Description: Since there is no single agreed-upon set of wargaming terms, this compendium is an unofficial collection that attempts to gather and post as broad a collection of terms and definitions as possible. Its purpose is to inform gamers of the variety of terms and definitions in use rather than to impose a single set of rigid definitions.
This unofficial collection was originally assembled by Bill Simpson, a GS-13 Wargaming Specialist with 22+ years of experience at the Wargaming Division, Marine Corps Warfighting Laboratory (June 1992 to October 2015), and at the Center for Naval Analyses as a Senior Wargaming Specialist (January 2017 to January 2019). He continues to work on updates along with a small group of volunteers.
The opinions contained in the Compendium are those of the compilers alone; they do not reflect official policy of any organization.
Here are two recent items on (war)gaming climate change that may be of interest.
More than 30 individuals participated in a Climate Change Wargame co-hosted by the Center for Excellence in Disaster Management and Humanitarian Assistance and the Office of the Under Secretary of Defense for Policy Arctic and Global Resilience team. The wargame, “Ho’okele Mua” or “Navigating the Future,” was designed by The Center for Naval Analyses to address various scenarios in which the U.S. Indo-Pacific Command can best prepare for strategic and operational climate change impacts in the region.
Several images from the game can be seen at DVIDS. UPDATE: As Aaron Danis points out, there’s also a press release for the event here.
Second, there’s this recent GUWS talk by Ed McGrady:
Climate change games are often a welcome break from our natural focus on games of war and destruction. However, they present significant challenges to the aspiring designer. These challenges can be divided into those of mechanics, science, and culture. But, wait, a lot of these challenges may not be what you expect! The challenge with mechanics is being able to represent in the game everything you need to represent in order to allow the players to address climate issues. It’s a lot. The challenge with science is not that you do not have it; rather it’s the large abundance of science you do have, your ability to distill it down into something manageable, and the need to get disparate climate change experts to agree on something. Finally, the culture of climate change advocacy, politics, and processes does have a huge impact on your ability to design the game. But not because of climate deniers, rather the culture of the climate science and response community can itself present challenges. This can even extend to your own workforce. All of these challenges can be overcome, but for those of us seeking to build simulation games, vice “toy” or “educational” games, these challenges can present a big barrier to successful climate change game design. This talk will discuss each of these issues, from the perspective of someone who has had to address them, and overcome them (sometimes surrender to them), in multiple climate simulation games. When possible I will offer solutions, at least solutions I have found useful.
Of course, the Georgetown Wargaming Society has sponsored and is sponsoring many, many wargaming talks of interest, so you should check out their website.
Connections US 2023 will be held at the National Defense University (NDU) in Washington, DC on 21-23 June.
In order to provide the widest possible range of panelists and topics to Connections 2023 attendees, the Connections interdisciplinary wargaming conference is seeking proposals for presentations from all interested parties. Our conference theme for 2023 is “Next Generation Wargaming Tools and Methods” and we would especially welcome any presentations that touch on some aspect of this topic. However, relevance to the conference theme is in no way a requirement and we will fully consider any presentation relevant to other dimensions of wargaming.
You’ll find full details here. The deadline for submissions is March 3, 2023.
Using fictitious country names in hypothetical scenarios is widespread in experimental international relations research. We survey sixty-four peer-reviewed articles to find that it is justified by reference to necessary “neutralization” compared to real-world scenarios. However, this neutralization effect has not been independently tested. Indeed, psychology and toponymy scholarship suggest that names entail implicit cues that can inadvertently bias survey results. We use a survey experiment to test neutralization and naming effects. We find not only limited evidence for neutralization, but also little evidence for systematic naming effects. Instead, we find that respondents were often more willing to support using force against fictitious countries than even adversarial real-world countries. Real-world associations may provide a “deterrent” effect not captured by hypothetical scenarios with fictitious country names. In turn, fictionalization may decrease the stakes as experienced by respondents. Researchers should therefore carefully explain rationales for and expected effects of fictitious country names, and test their fictitious names independently.
In Table 2 below you can see that respondents were more willing to use military force against “Celesta,” “Drakhar,” or “Minalo” than they were either a friendly real country (Canada) or a hostile one (Iran).
The research here focuses on survey responses, not serious game play. However the findings may have some interesting implications for strategic-level wargames using fictional country names, which may be more prone to escalation than similar games using real countries.
Interestingly, the authors also suggest that the more “real” a country sounds, the less fictionalization effects are evident:
Our results suggest that the more clearly fictitious a country name, the easier to condone attacking it—fictionality and its perceived costlessness can therefore embolden respondents to provide more aggressive responses.
These results point to the relevance of perceived realistic-ness: the more “real” a country name sounds to respondents, the weaker the fictionalization effect. In particular, there seems to be a deterrent effect associated with realistic-ness, for example, of being able to imagine more easily the consequences associated with attacking Iran, especially bar any additional information that “fills out” the scenario.
The explanation they suggest for this is deterrence: respondents are better able to imagine the costs of an attack when the survey question asks about a real country rather than a fictional one. However, there may also be an empathy factor here—it’s easier to imagine killing and maiming actual Iranians or Canadians than it is “Minalans,” “Drakharis,” or “Celestians.”
In professional wargames, it is sometimes necessary to use fictionalized countries, usually because of political sensitivities. In experimental games there may also be a desire to exert better control of key variables than is possible using a real-life setting. Both reasons apply, for example, to a recent series of NATO experimental wargames that examined Intermediate Force Capabilities in a fictional conflict between the Illyrian Federal Republic and Hypatia (the latter backed by the Organization for Collective Security).
If Majnemer and Meibauer’s findings do indeed expand beyond international relations survey research to wargaming, there are several implications. One is the need to provide game participants with a rich and realistic fictional environment and to work hard to promote narrative engagement. Another is the need to caveat experimental findings, especially as they relate to use-of-force decisions but possibly other things as well, such as risk aversion or casualty sensitivity more broadly.
CNN Academy is a journalism training program run by CNN in collaboration with university programs around the world. In December, more than eighty of those students, together with a number of their instructors, travelled to Abu Dhabi to take part in a five-day intensive news-gathering simulation. Although simulation has been used in journalism programmes before, this was an industry first in terms of scope, scale, and complexity.
As with most educational simulations, the intent here was to challenge participants to put to work the knowledge they had acquired in their studies in a “safe to fail” environment. We didn’t make it easy, either.
This wasn’t the first time I had supported journalism training using simulation methods, but those past efforts were ancillary to a simulation largely designed for other purposes.
Below I’ll discuss the setting and scenario for the simulation, the simulation mechanisms we used, and some of the key lessons learned. There will be a few things I won’t reveal, however—we want to keep them a secret for future iterations! I was the primary simulation designer and game controller. CNN staff also contributed to the design (notably Alireza Hajihosseini, John Sanders, and Mohammed Abdelbary), and most of the roles in the simulation were played by CNN journalists. Jim Wallman (Stone Paper Scissors) codirected the simulation. The simulation was hosted at the Yas Creative Hub of twofour54, and we also made use of their Kizad movie production backlot.
Setting and Scenario
There were several important considerations in establishing the setting and scenario for the simulation. We decided early on that we wanted to use a fictional country. One reason for doing this was to allow us the freedom to craft a narrative that would fully engage a broad range of journalism skills. We also wanted to avoid an Orson Welles “War of the Worlds”-type situation where something in the simulation somehow leaked into the real world and generated confusion or concern.
The problem with a fictional country, however, is providing sufficient detail and depth to be useful and believable. Fortunately, we already had one such country setting available: a fictional conflict-affected country that had been used in my peacebuilding course at McGill for almost two decades. A tremendous amount of historical, political, economic, and cultural information had already been produced for this over the years, both by me and by generations of McGill students. That setting was modified and updated—McGill students will be pleased to know the civil war there is now finally over—for use by CNN Academy.
As for the precise scenario on which participants would be reporting, we needed something that was dramatic enough that it would credibly attract global media attention. We decided on a major environmental disaster. This had multiple elements to it: the immediate disaster, and its associated human and environmental cost; the broader social, political, and economic ramifications; and the complex web of crime, corruption, and politics that had allowed it to happen. This was not a simple plot or easy to unravel, and students had to use a broad range of investigative techniques to fully understand what was going on.
Everything about the scenario, setting, and simulation structure was written into a 24-page “master scenario guide,” which was updated as necessary as new elements were added.
Students arrived in Abu Dhabi having taken part in CNN Academy webinars and other instructional content, but with no information on the simulation other than that there might be one. It’s fair to say that none of them anticipated how intense it would be. We immediately grouped them into teams of four or five students and threw them in the deep end: they were told there was a breaking story and a forthcoming press conference to cover, given initial details about the situation, and provided with a detailed country brief. They only had a short time to get to know their team, consisting of students from two or three different journalism programmes, as well as read up on the country where they had just been “sent” to report. Then they started news-gathering.
Participants were also given access to a team email address and to a Twitter-like social media platform populated by a constant stream of fictional social media posts about the disaster, mixed in with actual news items about the rest of the world harvested in real time from CNN and other media feeds. About four hundred of the social media posts had been pre-scripted and pre-timed before the simulation, but others were injected live while it was all going on. This assured that there were new potential developments regarding the story almost 24 hours a day. The teams also received both scripted and live emails during the sim, and could “reach back” to their producers for advice and information. Both the email and social media servers were closed so they couldn’t leak into the real world.
Over the first four days (Monday–Thursday) students participated in five simulated press conferences and many one-on-one interviews. The various spokespersons and interviewees—more than two dozen in total—were played by CNN staff, as well as myself and Jim Wallman. Other online characters might interact via email or social media direct messages.
Each role had a written briefing detailing the character’s identity, personality, motivation, and information, along with key talking points. All of our roleplayers were provided with this in advance. In addition, I held a series of online orientation sessions via Zoom for the simulation staff in the weeks leading up to the simulation.
In any event, CNN journalists turned out to be terrific improvisational actors! Quite apart from their acting skills, all were well aware of the challenges in covering press conferences or interviewing sources and were able to use their professional experience to keep students on their toes. Teams that did a particularly good job of conducting interviews might be given additional information or contacted later with news tips.
Particularly memorable was a trip to the affected area—represented in this case by the twofour54 Kizad movie backlot, much of which is constructed to look like a war-torn city. Here students were paired up with CNN photojournalists and were free to roam about and interview the “local inhabitants.” It was a remarkable experience.
All of this simulation activity over the first four days was interspersed with a series of lectures on various aspects of modern journalism, including newsgathering best practices, mobile storytelling, commercial operations, and the art of the spectacle.
On Thursday students were expected to submit a pitch to their producer for a video report on the disaster. This took the form of a full “paper edit” of their proposed piece, including script and visuals. In addition to whatever video they had shot themselves or had been shot for them on location, we provided additional B-roll to use in these reports. No one got much sleep at this point.
The top six submissions were given feedback, access to studio facilities, and an editor the next day to produce their report. The rest of the participants had a chance to relax and see some of the sights of Abu Dhabi. After lunch we all reassembled to screen the semi-finalist videos and announce a winner.
It all went very well—better than expected. No major mishaps were encountered. All of the tech (John Sanders) and logistics (Shivon Watson) ran brilliantly. The CNN folks were enthusiastic and engaged, as well as being terrific roleplayers. Maitha Khalifa and her team at the Yas Creative Hub were outstanding hosts and their facilities were top-notch.
A post-event participant survey indicated a very high evaluation of the CNN Academy experience, the acquisition of relevant skills, engagement, and willingness to recommend the experience to others.
There were a great many teachable moments during the simulation. Some of the ones that most stood out to me were:
The pressure of the simulation caused some students to lose sight of the importance of soft skills. For all the changes in the media brought about by rapidly changing information and communication technologies, “people skills” remain at the center of good journalism. Journalists need to understand those they are reporting on and develop a rapport. They need to treat traumatized populations with sensitivity. They need to develop sources. They need to listen carefully as well as ask questions. They need to be able to follow leads in new directions, especially when an interview reveals new information. They also need to be able to tell a complex story in a way that is interesting and understandable to their audience. Technology changes some of the ways this is done, but most of these skills would have been immediately recognizable to a good reporter a century ago.
Teamwork is essential. Every team consisted of a mix of experiences, expertise, language skills—not to mention gender and national origin. The teams that did best worked hard on collaboration, information management, tasking, and generally getting the best out of everyone in a harmonious fashion.
The simulation also highlighted the importance of fact-checking and research. Not everything students were exposed to was true. Politicians and others spun the story in ways that made them look good, and all of the interviewees filtered their comments through their own perspectives and beliefs. Local residents didn’t always know exactly what was going on. There were lots of rumours online. And whenever you have more than eighty students talking amongst themselves, they are going to accidentally generate their own rumours through a sort of broken game of “telephone.” The best teams verified what they heard, and didn’t just run with it.
Media ethics matter. We sprinkled a few ethical challenges in the simulation (I won’t say what they were in case we reuse them). A few fell for the traps!
Although we did a short debrief at the end of the simulation (including a reveal of the full “plot” and how various elements could be discovered), and although the accompanying journalism professors were constantly providing advice and feedback to their students, it would have been nice to have had more time for this. CNN Academy plans to post a series of debrief “blog posts” for students to the CNN Academy hub in the near future to build on the immediate feedback they received in Abu Dhabi.
For other coverage of the CNN Academy simulation, see:
Wargames that simulate combat between the United States and China near Taiwan can provide useful insight about potential military challenges. However, analysts should be wary of repurposing the same games to explore political questions such as those related to deterrence, escalation control, alliance politics, and war prevention or termination. Asymmetries in the information requirements for political versus military topics make it exceedingly difficult to design games to explore both in a rigorous manner. Paradoxically, the deliberate falsification of facts in peacetime offers the best hope of painting a more vivid and convincing portrait of a situation that would actually confront policymakers in wartime.
Wargames featuring conflict between China, the U.S., and Taiwan have taken the Washington, D.C., area by storm in the past two years. The U.S. military has held classified wargames on the topic. The Center for Strategic and International Studies (CSIS) held 22 iterations of such a scenario, and other think tanks such as the Center for a New American Security (CNAS), CNA, and RAND have held their own wargames on the topic as well. The appeal of wargames is not hard to figure out: They provide a vivid and dynamic simulation of armed conflict. The China-Taiwan war scenario is especially appealing because the U.S. and China are locked in a rivalry and are also equipped with large, advanced, and powerful militaries. What would happen if the two fought is an inherently fascinating question. The U.S. military advantage is fading, and China’s military is growing stronger. But how the two might fare against each other in combat is unclear. Wargames offer the possibility of exploring such critically important topics, whether as part of a research design or as critical context for creative discussion.
The results of the games have generated several key findings. The most obvious and compelling lesson is that combat between U.S. and Chinese military forces would probably be immensely destructive. In the CSIS game, the U.S. lost 200 aircraft, 20 warships, and two aircraft carriers. Attacks on cyber and space infrastructure are not uncommon. U.S. missiles may strike China’s homeland. Both sides might escalate to the threat, or even use, of nuclear missiles, as happened in at least one CNAS game. Analysts have also noted the military implications for operational topics such as the importance of massing forces, adequate munition stores, and the vulnerability of surface ships on the modern battlefield.
But for many, these lessons are not enough. The frightening results of such simulations naturally raise deeper questions of a fundamentally political nature, such as: How can such a war be avoided? If it can’t be avoided, how can escalation in such a war be controlled? What can the U.S. do to deter China from attacking Taiwan? How long can Taiwan successfully hold out against such an attack? Which allies will support the U.S. in such a war? These political considerations permeate the news accounts of the wargames. In the CSIS event, for example, participants debated whether the pre-positioning of marines on Taiwan prior to war would be “too provocative” or not. Players also debated whether China should attack Japan or not. Military decisions regarding escalation also carried significant political considerations, which may or may not have been debated at the game. In one game, for example, players for the U.S. side authorized missile strikes on Chinese ports.
Political decisions on the initiation or escalation of war are immensely important. Yet questions about them are also extremely difficult to answer, owing in part to the dearth of reliable data. After all, a U.S.-China war remains, thankfully, completely hypothetical. That leaves virtually no firsthand information with which one can answer such questions. Wargames, and the scenarios that underpin them, have sometimes been used to explore such questions. Since they incorporate many facts about relevant combatants, wargames offer the possibility of exploring political as well as military dimensions of war through a structured, analytic method.
Yet analysts should be wary of trying to use wargames designed for military questions to analyze political questions. The analysis of political topics has fundamentally different information requirements than those for military ones. Wargames that support analysis of military decisions do not necessarily support analysis of political decisions in the same situation, and vice versa.
He further argues that efforts to game future crises by tweaking the status quo are inherently problematic:
One way to get around this problem is to incorporate as much of the current world situation as possible into the game scenario and make only those changes needed to introduce conflict. A game designer could create a scenario that depicts U.S.-China relations largely as they exist today and then inject some crisis near Taiwan to begin the war. This is, in fact, the most commonly used method to build “realistic” scenarios for wargames. But a scenario set in wartime that hews to facts as they exist in peacetime introduces a serious analytic error.
The problem is that, by definition, many factors in a peacetime situation favor peace—factors that can be numerous and diffuse. A scenario based on a contemporary, nonhostile relationship between two countries implies many incentives to avoid hostilities. A main reason why the U.S. and China have not gone to war over Taiwan, after all, is because they have many compelling reasons to favor peace. What exactly about the current situation favors peace remains in debate, but candidates include mutual economic interdependence, the presence of nuclear weapons, relatively modest threat perceptions, and involvement in shared multilateral institutions. Injecting a “trigger event” such as a crisis related to Taiwan does not resolve the structural incentives for peace. Instead, it merely creates an artificial and unconvincing driver of war. Scenarios that aim to explore political topics in wartime but share considerable continuity with peacetime situations are thus inherently contradictory—they depict a situation with as many structural incentives for peace as one that favors war. This contradiction helps explain why so many wargame scenarios strike participants as implausible and unbelievable.
His answer is to build models of crisis escalation that build on historical examples:
A better approach to wargames would be to model the political assumptions for a hypothetical wargame on the experiences of countries that have actually gone to war. As mentioned earlier in this piece, the deliberate falsification of facts in peacetime offers a good model for what might actually happen in wartime and how policymakers would likely react. After all, the most realistic and relevant facts that confront decision-makers in a war are not those that typify situations in peacetime, but those that typify situations in wartime. The very act of envisioning a war situation that does not exist requires the imaginative visualization of a world radically different from a peacetime status quo.
For such historical data to be useful, it should be as rigorous and scientifically derived as possible. The best resource for scenario designers that aim to replicate realistic and relevant facts and incentives for political decisions lies in the historical experience of countries in analogous situations.
This, of course, is what many international relations scholars do: attempt to create generalized and testable hypotheses from historical data. There are, I think, a number of challenges to this approach too. After all, generalizations are simply tendencies and not iron laws of causality, specific contexts matter, and historical analogies are often misleading because of very different circumstances.
However, good IR scholarship can offer insight into what sorts of factors (political, economic, and otherwise) might shape escalation decisions, and we can then try to model those much as we might model the factors that shape combat outcomes. Certainly the social science here is far from settled or definitive, but the mere process of constructing models forces us to make explicit our assumptions about the way the world (or an adversary) works for further discussion, research, and refinement.
In general we know that subject matter experts are not necessarily very good predictors of the future (in fact, they’re quite poor at it), in part because of a tendency to be cognitively over-attached to favoured paradigms. We also know that intelligence communities often outperform other forecasters, not so much because of access to classified material (although that can be a factor) but because recruitment tends to prioritize cognitive characteristics associated with better forecasting performance, and because a well-developed analytical process emphasizes training and methodologies to check cognitive biases while encouraging constructive challenges to assumptions and interpretations. As an academic who has worked as an intelligence analyst (and assessed the predictive accuracy of other analysts), these skills are NOT ones they teach in political science (or international relations or security studies) graduate school. My impression is that they are even less present in most PME programmes.
Heath ends his piece with an important warning about the dangers of hubris and the value of humility:
Even with such improvements, however, humility about what we can achieve is required. Re-creating hypothetical war situations based on the experiences of past wars will be imperfect at best and carry their own flawed assumptions. Carrying out different iterations with slightly different assumptions could help mitigate some of these limitations. Yet even in the most optimal case, we can at best aspire to craft a crude simulacrum of the incentives and factors leaders might confront in a hypothetical situation that will carry all sorts of unimaginable complexities. Given the stakes involved, even an imperfect and partial approach offers a potentially significant improvement over current methodologies for defense planners, analysts, and decision-makers alike who seek to explore political questions in wartime.
China’s leaders have become increasingly strident about unifying Taiwan with the People’s Republic of China (PRC).1 Senior U.S. officials and civilian experts alike have expressed concern about Chinese intentions and the possibility of conflict. Although Chinese plans are unclear, a military invasion is not out of the question and would constitute China’s most dangerous solution to its “Taiwan problem”; it has therefore justly become a focus of U.S. national security discourse.
Because “a Taiwan contingency is the pacing scenario” for the U.S. military, it is critical to have a shared, rigorous, and transparent understanding of the operational dynamics of such an invasion.2 Just as such an understanding was developed concerning the Cold War’s Fulda Gap, so too must analysts consider the Taiwan invasion scenario. This understanding is important because U.S. policy would be radically different if the defense were hopeless than if successful defense were achievable. If Taiwan can defend itself from China without U.S. assistance, then there is no reason to tailor U.S. strategy to such a contingency. At the other extreme, if no amount of U.S. assistance can save Taiwan from a Chinese invasion, then the United States should not mount a quixotic effort to defend the island. However, if U.S. intervention can thwart an invasion under certain conditions and by relying on certain key capabilities, then U.S. policy should be shaped accordingly. In this way, China would also be more likely to be deterred from an invasion in the first place. However, such shaping of U.S. strategy requires policymakers to have a shared understanding of the problem.
Yet, there is no rigorous, open-source analysis of the operational dynamics and outcomes of an invasion despite its critical nature. Previous unclassified analyses either focus on one aspect of an invasion, are not rigorously structured, or do not focus on military operations. Classified wargames are not transparent to the public. Without a suitable analysis, public debate will remain unanchored.
Therefore, this CSIS project designed a wargame using historical data and operations research to model a Chinese amphibious invasion of Taiwan in 2026. Some rules were designed using analogies with past military operations; for example, the Chinese amphibious lift was based on analysis of Normandy, Okinawa, and the Falklands. Other rules were based on theoretical weapons performance data, such as determining the number of ballistic missiles required to cover an airport. Most rules combined these two methods. In this way, the results of combat in the wargame were determined by analytically based rules instead of by personal judgment. The same set of rules applied to the first iteration and to the last iteration, ensuring consistency.
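A rule like “the number of ballistic missiles required to cover an airport” reduces to explicit, checkable arithmetic: treat the runway as a series of aimpoints and ask how many independent shots are needed at each to reach a desired probability of a cut. The sketch below is a hypothetical illustration of that style of calculation, not the CSIS model itself; every parameter value and function name is invented.

```python
import math

def missiles_to_close_runway(runway_len_m, cut_spacing_m, p_hit, p_required):
    """Hypothetical sketch: missiles needed to close a runway.

    runway_len_m  -- runway length in metres
    cut_spacing_m -- spacing between cuts (so no usable strip remains)
    p_hit         -- probability a single missile achieves its cut
    p_required    -- required probability of success at each aimpoint
    """
    # Number of aimpoints needed along the runway.
    aimpoints = math.ceil(runway_len_m / cut_spacing_m)
    # Independent shots per aimpoint: smallest n with 1-(1-p_hit)^n >= p_required.
    shots = math.ceil(math.log(1 - p_required) / math.log(1 - p_hit))
    return aimpoints * shots

# e.g. a 3,000 m runway, cuts every 1,000 m, 50% single-shot success,
# 90% confidence per cut: 3 aimpoints x 4 shots = 12 missiles
print(missiles_to_close_runway(3000, 1000, 0.5, 0.9))
```

A real analysis would also fold in warhead reliability, accuracy (CEP), and repair rates; the point is only that such rules make the underlying assumptions explicit rather than relying on personal judgment.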
Based on interviews and a literature review, the project posited a “base scenario” that incorporated the most likely values for key assumptions. The project team ran that base scenario three times. A variety of excursion cases then explored the effects of varying assumptions.3 The impact of these varying assumptions on the likely outcome is depicted in a Taiwan Invasion Scorecard (see Figure 8). In all, 24 iterations of the game mapped the contours of the conflict and produced a coherent and rigorously derived picture of a major threat facing the United States.
The invasion always starts the same way: an opening bombardment destroys most of Taiwan’s navy and air force in the first hours of hostilities. Augmented by a powerful rocket force, the Chinese navy encircles Taiwan and interdicts any attempts to get ships and aircraft to the besieged island. Tens of thousands of Chinese soldiers cross the strait in a mix of military amphibious craft and civilian roll-on, roll-off ships, while air assault and airborne troops land behind the beachheads.
However, in the most likely “base scenario,” the Chinese invasion quickly founders. Despite massive Chinese bombardment, Taiwanese ground forces stream to the beachhead, where the invaders struggle to build up supplies and move inland. Meanwhile U.S. submarines, bombers, and fighter/attack aircraft, often reinforced by Japan Self-Defense Forces, rapidly cripple the Chinese amphibious fleet. China’s strikes on Japanese bases and U.S. surface ships cannot change the result: Taiwan remains autonomous.
There is one major assumption here: Taiwan must resist and not capitulate. If Taiwan surrenders before U.S. forces can be brought to bear, the rest is futile.
This defense comes at a high cost. The United States and Japan lose dozens of ships, hundreds of aircraft, and thousands of servicemembers. Such losses would damage the U.S. global position for many years. While Taiwan’s military is unbroken, it is severely degraded and left to defend a damaged economy on an island without electricity and basic services. China also suffers heavily. Its navy is in shambles, the core of its amphibious forces is broken, and tens of thousands of soldiers are prisoners of war.
You will find the full report at the link above, including its recommendations for the US, Taiwan, and allies. The launch event was livestreamed on YouTube, and can be found below. Stacie Pettyjohn (CNAS) makes a particularly good point about the value of multiple organizations undertaking multiple, different games (in both the public and classified spaces) to enhance the robustness of overall findings.
Analysis of the literature related to wargaming identifies a requirement for the perception of immersion and engagement in wargaming. The references generally indicate that the computer is less able to facilitate collective engagement than a manual system; however, there is as yet little empirical evidence to support this. There are also suggestions that players perceive manual games differently to a computer wargame. An experiment, derived from the previous analysis, was performed to address the research question: Is there a discernible difference between the levels of players’ engagement in computer wargames versus manual wargames? The experiment provides empirical evidence that there is a difference in players’ engagement with a computer wargame compared to a manual game, in particular with the manual game providing greater engagement with other players. Hence, if engagement between players is to be encouraged and regarded as an important aspect of a wargame for defense applications, then this provides evidence that the manual approach can indeed be better.
The approach taken was to have students play two different but similar wargames—one a manual boardgame, the other a digital wargame—and then survey them about their engagement across several categories.
The two games were chosen to be as similar as possible in scale, scope, complexity, and length while team sizes were also the same in both cases and team members seated similarly closely together, with the main difference being people being individually seated at a PC in the computer case and seated round a table in the manual case. The test subjects were available and willing samples of those people taking these four courses. Each course ran each game once, with two courses running manual first and two running computer first. Questionnaire response was optional, and sometimes on different days, so that although the same people were playing both games, a paired analysis was not possible because not everybody responded to both. There is also a risk of non-response being indicative of non-engagement.
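Because not every student answered the questionnaire for both games, the responses have to be compared as independent samples rather than as matched pairs. A standard unpaired approach for ordinal survey data of this kind is a rank-based comparison such as the Mann-Whitney U statistic; the sketch below is a minimal illustration with invented scores, not the study’s actual data or method.

```python
def mann_whitney_u(sample_a, sample_b):
    """Mann-Whitney U statistic for two independent samples (midranks for ties)."""
    combined = sorted(sample_a + sample_b)

    def midrank(v):
        # 1-based ranks; tied values share the average of their rank positions.
        first = combined.index(v) + 1
        last = first + combined.count(v) - 1
        return (first + last) / 2

    rank_sum_a = sum(midrank(v) for v in sample_a)
    n_a, n_b = len(sample_a), len(sample_b)
    u_a = rank_sum_a - n_a * (n_a + 1) / 2
    return min(u_a, n_a * n_b - u_a)

# Invented 5-point engagement scores for a manual vs a computer game;
# a small U relative to n_a * n_b suggests the two samples differ.
manual = [5, 4, 5, 4, 3, 5, 4]
computer = [3, 3, 4, 2, 3, 4, 2]
print(mann_whitney_u(manual, computer))
```

In practice one would also compute a p-value (e.g. via `scipy.stats.mannwhitneyu`) and consider the non-response bias the authors themselves flag.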
The students “varied across serving and retired military officers and other ranks, defense-related civilians, and non-defense-related civilians.” The manual game was “a very simple introductory tactical military game” developed at Cranfield University. The computer wargame used was CONTACT, “a computer-based wargame developed in the United Kingdom and used by UK MoD and several overseas military nations” used here as “a simple introduction to a computer-based military wargame, so that only a limited set of its functionality is exposed and used.”
The results showed that the manual game reported somewhat higher levels of personal engagement and much higher levels of engagement with others. (The “experience” row actually shows the number of respondents, 45 and 34 respectively).
The authors note some limitations to their study. Not everyone completed a questionnaire for both games, and there is no way of knowing whether the “missing” responses might have systematically biased the results. They also used different games, so it is possible that the game designs were a significant factor in student evaluations. It might also be noted that while engagement is generally a desirable characteristic of all serious games, it is possible to be so engaged and so eager to win that players learn the wrong lessons—something that Anders Frank has referred to as “gamer mode.”
To date, academic research on digital vs manual gameplay has largely been focused on hobby games (for example, some excellent research on the effects of automating the board game Pandemic here and here). Much of it has also been by digital enthusiasts. Smith, Ringrose, and Barker have performed a service by focusing attention on wargaming in particular.
We would be thrilled to have you join us for our inaugural semi-annual members’ meeting on Saturday, January 21st of 2023. The meeting will be held in-person at IDA Headquarters in Potomac Yard, VA.
Members will hear a progress report, financial report, and presentations from each of WWN’s committee leads. We will also share an update on fundraising and engage the group in filming a pitch video for prospective donors and WWN members.
In casting our vision for 2023, we will discuss WWN’s collaboration with RAND on Hegemony, a prospective collaboration with the U.N., and the process of building a Board of Trustees.
We will also, of course, play a wargame together!
RSVP to let us know that you are coming, and stay tuned for additional details via email!
If you are a foreign national, please let us know that you plan to attend the event by January 17th.
If you are a US citizen, please complete the form by January 20th.
The Military Operations Research Society has launched a new MORS Journal of Wargaming, edited by Dr. Ed McGrady (Adjunct Senior Fellow at the Center for New American Security) and Dr. John Curry (Senior Lecturer in games development and cybersecurity, Bath Spa University).
The MORS Journal of Wargaming is the premier research publication for articles on the art, practice, and science of professional gaming and related fields. It is peer-reviewed and broad-based. Our goal is to advance the field of professional games, which we define as games played by those with a professional stake in the subject of the game.
While the title of the Journal is “wargaming”, we do not limit discussion of professional games by either their type or purpose. Topics can range from education to analyses, and games can range from board games to conference-scale policy games. Articles do not have to involve a defense or military subject. The Journal seeks any and all articles that develop the art and science of professional games, to include articles on the design, development, production, play and analysis of games. We welcome articles on how a game integrates narrative into its design as well as articles analyzing the statistical outcomes of a series of educational games. Submissions that describe the play and results of a particular game are also welcome; we refer to these as Game Reports.
Submissions should be clear and in plain English, logical and well argued, with supporting references and specifics on game design, outcome, or analysis. Articles can include suggestions for further reading.
The Journal will be published online twice a year, but may expand depending on demand and the number of submissions.