The US Air Force has faced loud and frequent criticism for its decision to retire the (old, slow, ungainly, heavily armed and armoured, and much beloved) A-10 “Warthog” ground attack aircraft and replace it in the close air support role with the (stealthy, modern, very expensive, fragile, and problem-plagued) F-35 Lightning II. Consequently, it decided earlier this year to run head-to-head tests of the two aircraft, in order to determine whether the switch will create any gaps in USAF CAS capabilities. The decision has been welcomed by A-10 supporters. However, as War on the Rocks has reported, it has also generated criticism that a head-to-head comparison of CAS performance may not be a fair or appropriate test.
What does all this have to do with analytical gaming? The answer is to be found in the fundamental importance of the situations or scenarios selected for the test.
It is quite easy to design a scenario in which the A-10 shines: in a situation where the US enjoys air superiority and the enemy has no modern air defences; in hilly terrain, where the Warthog’s ability to fly low and slow to acquire targets and avoid anti-aircraft fire is maximized; facing small calibre ground fire, when its armour can prove a life-saver; against targets for which the aircraft’s famous 30 mm GAU-8 Avenger rotary cannon (known for the loud BRRRRRT noise it makes when it fires) is particularly effective.
It is equally easy to design a scenario where the A-10 cannot effectively function, and the F-35 is the superior platform: in contested airspace; in a modern air defence environment, especially in flat terrain, where stealth is important; in a situation where sensor integration and digital communications capabilities are essential.
Similarly, one could factor in a requirement to operate from an austere forward airstrip (advantage: A-10), a need to operate as an ISR platform or in an air combat role (advantage: F-35), current airframe life expectancy (advantage: F-35), operating costs (advantage: A-10), and so forth. In short, the outcome of the test, much like the result of a wargame or other analytical game, is heavily influenced by the way the scenario and its inherent challenges are structured. Constructing a “fair” test is a difficult task, especially in an institutional or political context where the game’s sponsor may be looking to vindicate a particular concept, platform, or approach.
I’ve run into this same issue recently as some colleagues and I consider the design for a possible crisis game that would explore the impact of a particular diplomatic and military approach. Running the game once would tell us little about the effects of “Strategy X,” since we would have no baseline against which to compare it. Moreover, a single run would risk becoming an exercise in confirming the prior views of the participants.
We therefore really need to run two games, much as in the A-10 vs F-35 tests: one using the current strategy, and another in which the new strategy is adopted instead. This in itself generates new challenges. Do we run the game twice with the same players, thereby creating a risk that the first game contaminates the second? Or do we run the two games with separate players, creating a risk (common in crisis games) that the outcomes are due more to idiosyncratic differences between players and teams than to the differences in strategic approach?
Also, and here we get back to the scenario design question, how do we pick a scenario for the game? It is easy to imagine situations in which the Old Strategy might work better, and others where the New Strategy would seem to have an advantage. If we go ahead with the game we’ll have to try to construct a scenario which seems fair to both approaches, and which allows us to tease out the differences between them when applied to particular problems and situations. Of course, doing so will necessarily engage our own preexisting views of these strategies and their respective strengths and weaknesses. This cannot be avoided entirely, but prior awareness of the pitfall should help to reduce the risk of unconscious bias, or of designing a game that simply recreates our own preferences and presumptions. Our findings will certainly carry more weight if we are seen to have been fair to both approaches and to have designed a test that was not biased in one direction or another.
If the game we are working on goes ahead later this summer you will certainly read about it in PAXsims. In the meantime it will be interesting to see how the A-10 vs F-35 tests are conducted, what the results are, and how fair they are seen to be by the committed CAS platform partisans on both sides.