PAXsims

Conflict simulation, peacebuilding, and development

The BRRRT problem in scenario design

The US Air Force has faced loud and frequent criticism for its decision to retire the (old, slow, ungainly, heavily armed and armoured, and much beloved) A-10 “Warthog” ground attack aircraft, and replace it in the close air support role with the (stealthy, modern, very expensive, fragile, and problem-plagued) F-35 Lightning II. Consequently, it decided earlier this year to run head-to-head tests of the two aircraft, to determine whether the switch will create any gaps in USAF CAS capabilities. The decision has been welcomed by A-10 supporters. However, as War on the Rocks has reported, it has also generated criticism that a head-to-head comparison of CAS performance may not be a fair or appropriate test.

What does all this have to do with analytical gaming? The answer is to be found in the fundamental importance of the situations or scenarios selected for the test.

It is quite easy to design a scenario in which the A-10 shines: in a situation where the US enjoys air superiority and the enemy has no modern air defences; in hilly terrain, where the Warthog’s ability to fly low and slow to acquire targets and avoid anti-aircraft fire is maximized; facing small calibre ground fire, when its armour can prove a life-saver; against targets for which the aircraft’s famous 30 mm GAU-8 Avenger rotary cannon (known for the loud BRRRRRT noise it makes when it fires) is particularly effective.

It is equally easy to design a scenario where the A-10 cannot function effectively, and the F-35 is the superior platform: in contested airspace; in a modern air defence environment, especially in flat terrain, where stealth is important; in a situation where sensor integration and digital communications capability are essential.

Similarly, one could factor in a requirement to operate from an austere forward airstrip (advantage: A-10), a need to operate as an ISR platform or in an air combat role (advantage: F-35), current airframe life expectancy (advantage: F-35), operating costs (advantage: A-10), and so forth. In short, the outcome of the test—much like the result of a wargame or other analytical game—is heavily influenced by the way the scenario and its inherent challenges are structured. Constructing a “fair” test is a difficult task, especially in an institutional or political context where the game’s sponsor may be looking to vindicate a particular concept, platform, or approach.
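
To see how much this matters, here is a minimal sketch in Python (with entirely hypothetical scores and weights, not real performance data for either aircraft) of how the “winner” of a scored comparison flips depending on which requirements a scenario emphasizes:

```python
# Illustrative only: scores and weights are invented for this sketch,
# not drawn from any real evaluation of either aircraft.

# Notional 0-10 scores on a few of the dimensions discussed above.
scores = {
    "A-10": {"low_and_slow": 9, "survivability_vs_iads": 2,
             "gun_effectiveness": 9, "sensor_integration": 3},
    "F-35": {"low_and_slow": 3, "survivability_vs_iads": 9,
             "gun_effectiveness": 4, "sensor_integration": 9},
}

def weighted_score(aircraft, weights):
    """Sum of score * weight across scenario dimensions."""
    return sum(scores[aircraft][dim] * w for dim, w in weights.items())

# A permissive-airspace, hilly-terrain scenario...
permissive = {"low_and_slow": 0.4, "survivability_vs_iads": 0.1,
              "gun_effectiveness": 0.4, "sensor_integration": 0.1}
# ...versus a contested, modern-air-defence scenario.
contested = {"low_and_slow": 0.1, "survivability_vs_iads": 0.4,
             "gun_effectiveness": 0.1, "sensor_integration": 0.4}

for name, weights in [("permissive", permissive), ("contested", contested)]:
    ranked = sorted(scores, key=lambda a: weighted_score(a, weights), reverse=True)
    print(f"{name} scenario winner: {ranked[0]}")
# The "winner" flips with the weights -- the scenario design decides the test.
```

Nothing here depends on the particular numbers; the point is simply that the ranking is a function of the weights, and the weights are a scenario-design choice.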

I’ve run into this same issue recently as some colleagues and I consider the design for a possible crisis game that would explore the impact of a particular diplomatic and military approach. Running the game once would hardly tell us much about the effects of “Strategy X” since we would have no baseline against which to compare it. Moreover, this would risk becoming an exercise in confirming the prior views of the participants.

We therefore really need to run two games, much as in the A-10 vs F-35 tests—one using current strategy, and another where the new strategy is adopted instead. This in itself generates new challenges: do we run it twice with the same players, thereby creating a risk that the first game contaminates the second? Or do we run the two games with separate players, creating a risk (common in crisis games) that the outcomes are due more to idiosyncratic differences between players and teams than the differences in strategic approach?
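
The second risk is easy to illustrate with a toy Monte Carlo sketch (all numbers here are invented for illustration, not estimates from any real game): if team-to-team variation is large relative to the true difference between strategies, a single game per condition will often point the wrong way.

```python
import random

random.seed(1)

# Hypothetical parameters: a modest true advantage for Strategy X,
# and large idiosyncratic variation between player teams.
TRUE_EFFECT = 1.0      # mean outcome gain from the new strategy
TEAM_SPREAD = 3.0      # std dev of team-to-team idiosyncrasy

def one_game(strategy_bonus):
    """Outcome of a single game with a randomly drawn team."""
    return random.gauss(0, TEAM_SPREAD) + strategy_bonus

trials = 10_000
wrong = sum(
    one_game(TRUE_EFFECT) <= one_game(0.0)  # new strategy loses the head-to-head
    for _ in range(trials)
)
print(f"Single-pair comparisons that point the wrong way: {wrong / trials:.0%}")
# With these made-up numbers, roughly 40% of one-off comparisons would favour
# the *old* strategy even though the new one is genuinely better.
```

The usual remedies (more runs per condition, or the same teams playing both conditions) are exactly the trade-offs described above.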

Also—and here we get back to the scenario design question—how do we pick a scenario for the game? It is easy to imagine situations in which the Old Strategy might work better, and others where the New Strategy would seem to have an advantage. If we go ahead with the game we’ll have to try to construct a scenario which seems fair to both approaches, and which allows us to tease out the differences between them when applied to particular problems and situations. Of course, doing so will necessarily engage our own preexisting views of these strategies and their respective strengths and weaknesses. This cannot be avoided, but prior awareness of these pitfalls should help to reduce the risk of unconscious bias, or of designing a game that simply recreates our own preferences and presumptions. Our findings will certainly have more weight if we are seen to have been fair to both approaches and to have designed a test that was not biased in one direction or the other.

If the game we are working on goes ahead later this summer you will certainly read about it in PAXsims. In the meantime it will be interesting to see how the A-10 vs F-35 tests are conducted, what the results are, and how fair they are seen to be by the committed CAS platform partisans on both sides.

One response to “The BRRRT problem in scenario design”

  1. Peter Perla 09/05/2016 at 6:32 pm

    Rex,
    Billy Mitchell anyone? This sounds similar in some ways to the Mitchell bombing of the old German battleships. Now, like then, a “fair test” is virtually impossible because the opposing camps have such different definitions of fair. If you want to include all the various elements you describe in your piece (such as terrain, air defenses, targets and other things) and somehow objectively score each aircraft, the likely result is that neither is effective enough to provide the CAS the Army needs, or at least wants, in all situations. The USAF problem is that they are unwilling to spend money on A-10s that they would rather spend on fast flyers like the F-35. And they are unwilling to turn the A-10s over to the Army (an idea proposed once, though I don’t know if the Army is still interested in it). A single test scenario is never going to be fair because the disagreements over the issues reflect a fundamentally different value system. The fighter mafia rightly values high performance against a sophisticated opposition air defense system. The CAS champions value firepower on target once the sophisticated fighters establish air superiority. Both are necessary. But if we cannot afford both, how do you proceed? Devising competing scenarios that are fair to both sides in such a situation seems unlikely. Ultimately, it is a political issue, not really an analytical one. Just as Mitchell’s test proved little more than the fact that close-aboard bomb explosions could sink an unmanned battleship, any test competition between the two aircraft is most likely to prove only which side was most effective at dictating the conditions of the test.

    Cynically yours,

    Peter
