PAXsims

Conflict simulation, peacebuilding, and development

“Wargaming doesn’t work”

Earlier this month the Washington Post published a lengthy, two-part article on the planning and execution of Ukraine’s 2023 offensive (here and here). The two parts explore differences between the US and Ukraine over how narrow the primary focus of the planned offensive should be, issues regarding weapons supply and training, the challenge of breaching defensive positions and fortifications, and the effect of UAVs, mines, plentiful ATGMs, air power, and other weapons systems on the modern battlefield.

The articles also address wargaming. According to the Washington Post,

The sequence of eight high-level tabletop exercises formed the backbone for the U.S.-enabled effort to hone a viable, detailed campaign plan, and to determine what Western nations would need to provide to give it the means to succeed.

“We brought all the allies and partners together and really squeezed them hard to get additional mechanized vehicles,” a senior U.S. defense official said.

During the simulations, each of which lasted several days, participants were designated to play the part either of Russian forces — whose capabilities and behavior were informed by Ukrainian and allied intelligence — or Ukrainian troops and commanders, whose performance was bound by the reality that they would be facing serious constraints in manpower and ammunition.

The planners ran the exercises using specialized war-gaming software and Excel spreadsheets — and, sometimes, simply by moving pieces around on a map. The simulations included smaller component exercises that each focused on a particular element of the fight — offensive operations or logistics. The conclusions were then fed back into the evolving campaign plan.

Top officials including Gen. Mark A. Milley, then chairman of the U.S. Joint Chiefs of Staff, and Col. Gen. Oleksandr Syrsky, commander of Ukrainian ground forces, attended several of the simulations and were briefed on the results. …

Ukrainian officials hoped the offensive could re-create the success of the fall of 2022, when they recovered parts of the Kharkiv region in the northeast and the city of Kherson in the south in a campaign that surprised even Ukraine’s biggest backers. Again, their focus would be in more than one place.

But Western officials said the war games affirmed their assessment that Ukraine would be best served by concentrating its forces on a single strategic objective — a massed attack through Russian-held areas to the Sea of Azov, severing the Kremlin’s land route from Russia to Crimea, a critical supply line. …

The rehearsals gave the United States the opportunity to say at several points to the Ukrainians, “I know you really, really, really want to do this, but it’s not going to work,” one former U.S. official said.

At the end of the day, though, it would be Zelensky, Zaluzhny and other Ukrainian leaders who would make the decision, the former official noted.

Officials tried to assign probabilities to different scenarios, including a Russian capitulation — deemed a “really low likelihood” — or a major Ukrainian setback that would create an opening for a major Russian counterattack — also a slim probability.

“Then what you’ve got is the reality in the middle, with degrees of success,” a British official said.

The most optimistic scenario for cutting the land bridge was 60 to 90 days. The exercises also predicted a difficult and bloody fight, with losses of soldiers and equipment as high as 30 to 40 percent, according to U.S. officials.

The numbers “can be sobering,” the senior U.S. defense official said. “But they never are as high as predicted, because we know we have to do things to make sure we don’t.”

U.S. officials also believed that more Ukrainian troops would ultimately be killed if Kyiv failed to mount a decisive assault and the conflict became a drawn-out war of attrition.

But they acknowledged the delicacy of suggesting a strategy that would entail significant losses, no matter the final figure.

“It was easy for us to tell them in a tabletop exercise, ‘Okay, you’ve just got to focus on one place and push really hard,’” a senior U.S. official said. “They were going to lose a lot of people and they were going to lose a lot of the equipment.”

Those choices, the senior official said, become “much harder on the battlefield.”

The Washington Post goes on to note Ukrainian dismay with wargaming as a military planning tool.

On that, a senior Ukrainian military official agreed. War-gaming “doesn’t work,” the official said in retrospect, in part because of the new technology that was transforming the battlefield. Ukrainian soldiers were fighting a war unlike anything NATO forces had experienced: a large conventional conflict, with World War I-style trenches overlaid by omnipresent drones and other futuristic tools — and without the air superiority the U.S. military has had in every modern conflict it has fought.

“All these methods … you can take them neatly and throw them away, you know?” the senior Ukrainian said of the war-game scenarios. “And throw them away because it doesn’t work like that now.”

There are several important takeaways here.

Wargaming can fail—and fail badly—when either data or embedded models are wrong. Perhaps, as some Ukrainians suggest, the wargames failed to account for the dramatic increase in ISR capabilities that tactical drones can provide, or their use as a weapon system? Or perhaps game assumptions about breaching operations were based on less dense defences, a less skilled defender, or a better-trained and better-equipped attacker? Or perhaps the problem was that Russia’s real-world response differed from the way RED responded in games and simulations?

Wargamers like to say that wargames aren’t predictions, and it is certainly true that no wargame or series of wargames can fully address all assumptions and all choices, and hence explore all of a given problem space—even when the underlying data and models are correct. However, in many ways this caveat is also an evasion. Wargames are often asked to anticipate likely outcomes or responses. That is indeed a form of prediction, even if it is highly contingent and rests on sometimes shaky foundations. And military decision-makers may treat wargames as a sort of crystal ball, regardless of whatever caveats are attached.

The fact that the offensive didn’t unfold as the wargames suggested also may have nothing to do with the wargames—which might have been excellent—but rather with divergence between the plan that was gamed and the plan that was executed. We have no way of knowing, for example, whether Ukraine would have been more successful if it had concentrated its forces in a single major thrust as the US and UK apparently preferred. (It should be noted that Ukraine’s multi-prong approach was also driven by political concerns—which a wargame may well not have addressed.) After all, it’s hardly an uncommon human response for a user to blame their tool for disappointing results.

Finally, it is possible that all of these factors, and others besides, may have been at work. Professional wargaming is full of historical lore about the successes and failures of major wargames, recounted in presentations, conversations, and conferences: the gaming of the Schlieffen Plan (1914), Pearl Harbor (1941), and Midway (1942), the interwar wargaming of the US Naval War College, the work of the Western Approaches Tactical Unit during WWII, or the problems encountered by Millennium Challenge (2002). In almost all cases, the reasons for success or failure were far more complex than the lore suggests, or the success or failure was much less clear-cut and absolute than the stories imply.

Rarely does either uncritical (war)gaming evangelism or knee-jerk methodological cynicism illuminate what happened, why, or what we can do better. Instead, what is required is a long and detailed (internal, and inevitably classified) review that accounts for complexity and multi-causality. One hopes that the Ukraine wargames of 2022 will receive just such a thoughtful, sober, and constructive examination. There is likely much to learn.

7 responses to ““Wargaming doesn’t work””

  1. Timothy Smith 18/12/2023 at 11:45 am

    Rex, thanks & well said. Agree that the wargame models (tabletop and KORA) might actually have been valid for their extremely important intended use. But it’s also possible that the model understated the degree/effects of the pervasive, timely recon provided by the ‘halo of flies’ — UAVs — over the battlefield, and that massive concentration could simply have incurred massive losses. C4ISR is hard to represent in face-to-face tabletop simulation, and KORA’s ‘Synthetic Wargame and Crisis-Response System’ might not have been modified sufficiently or in time. I hope the modellers are learning and adjusting their models. Very hard to learn while trying to use one’s analytics for decision support, but in times of rapid change, Argyris’s ‘double-loop’ learning is the only way to propel Perla’s ‘cycle of research’.
    (P.S. KORA: from TWP’s link. And ‘Halo of Flies’ — Alice Cooper! Yes, I know Skunk Baxter would be a more credible source.)

  2. Rex Brynen 18/12/2023 at 9:01 am

    @Dr. M – Indeed, that’s why I noted “The fact that the offensive didn’t unfold as the wargames suggested also may have nothing to do with the wargames—which might have been excellent—but rather with divergence between the plan that was gamed and the plan that was executed. We have no way of knowing, for example, whether Ukraine would have been more successful if it had concentrated its forces in a single major thrust as the US and UK apparently preferred.” However, if a client (with extensive battlefield experience) feels that the games don’t accurately model battlefield dynamics, surely that’s a good reason to go back and look at the game in the light of subsequent operational data?

  3. Dr. M 18/12/2023 at 7:27 am

    I’m seeing something slightly different here. Though I haven’t read the full article, it seems like the war games in fact succeeded in demonstrating that the course of action the Ukrainians ultimately chose was unlikely to work.

    “But Western officials said the war games affirmed their assessment that Ukraine would be best served by concentrating its forces on a single strategic objective — a massed attack through Russian-held areas to the Sea of Azov, severing the Kremlin’s land route from Russia to Crimea, a critical supply line. …

    The rehearsals gave the United States the opportunity to say at several points to the Ukrainians, ‘I know you really, really, really want to do this, but it’s not going to work,’ one former U.S. official said.

    At the end of the day, though, it would be Zelensky, Zaluzhny and other Ukrainian leaders who would make the decision, the former official noted.”

    In this context, the Ukrainian official’s statement that war gaming “doesn’t work” was used as a justification to ignore the results of the war games conducted prior to the operation. Conversely, had the Ukrainians given more credence to the war games’ results, they could have chosen a course that might have produced better results.

  4. Rex Brynen 17/12/2023 at 8:55 pm

    @Cole – Agreed, although I still think it is important to ask: (1) why the games were misleading (if they were–it’s not clear to me they were), and (2) why the Ukrainians (or at least some of them) didn’t think they were very helpful.

    I think both questions help you refine your wargames and their underlying models. More broadly, I think it should be standard practice to revisit major games at a much later date for a sort of post mortem, to see whether the analysis done at the time stood the test of time, whether something important was missed, and so forth.

  5. rjayroland 17/12/2023 at 6:20 pm

    Hi Rex. I am sure you will agree with me that wargaming (as in a simulation) does work, but that it depends on a number of criteria, including the choice of the simulation and its AAR capability, the simulation provider, the user, user training, and the data. Without insight into how each of these criteria was applied, no plausible criticism or recommendation is possible.

    Given there was a substantial divergence between the wargame results and the actual combat results, it’s a knee-jerk reaction to conclude that the value of “wargaming” is very low. What say you?

    Jay



  6. miquelramirez 17/12/2023 at 5:56 pm

    This was a very insightful article. Thanks for sharing.

    I completely agree that in order to “fix” a problem one first needs to diagnose and determine which inputs lead to the observed issue. We do this with software and hardware platforms all the time, and wargames – which are eminently hybrid systems that integrate human decision-makers with a variety of software systems – are no different.

  7. Cole Petersen 17/12/2023 at 5:44 pm

    All models are wrong, but some models are useful. Instead of asking why the wargaming didn’t predict the outcome of the Ukrainian offensive, we should be asking what useful observations wargaming provided to help refine the model for future iterations.
