Last week I was invited to participate in a demonstration and playtest of the Rapid Campaign Analysis Toolset (RCAT) in Ottawa. RCAT has been developed by the (UK) Defence Science and Technology Laboratory and Cranfield University, and is intended as a flexible, low-overhead wargaming system for military planning and analysis. You’ll find more on RCAT here and here (from Connections UK 2013), here (Falklands war operational commanders test, via the LBS blog), and here (in conjunction with a digital simulation, again from LBS).
Defence Research and Development Canada are interested in seeing whether RCAT might be used to help refine the scenarios used for capability-based planning within the Department of National Defence. These scenarios aren’t based on current events, nor are they meant to represent actual planned operations. Instead they are intended to be broadly representative of the sorts of missions that the Canadian Armed Forces might be called upon to perform. They are thus intended to provide the Joint Capability Planning Team with plausible problems that might be addressed by military means, enabling the identification and validation of various military capabilities.
To this end, the visiting RCAT team (Colin Marston of Dstl, Jeremy D. Smith from Cranfield University, and Graham Longley-Brown of LBS) had developed a version of RCAT that addressed an existing force development scenario—specifically, a hybrid warfare scenario that explored the ability of Canadian forces to operate as part of a larger coalition in a complex conflict environment running the gamut from high-intensity combat to later stabilization operations.
RCAT design process
I headed up the Red Team, and proceeded to throw every plausible curve I could think of at both the Blue and Green players and the RCAT system itself. The sessions were very much a participatory seminar on the game’s design, as we discussed how RCAT modelled various kinetic and non-kinetic effects, how the system might be modified, and the extent to which it might offer insight into scenario design and capability issues. To this end, we gamed a few turns of everything from major campaign moves (days/weeks/months), through to tactical/operational vignettes (hours)—the former including one major surprise by me, and the latter including a very successful urban operation and airborne insertion by my opponents.
RCAT turn sequence (with apologies for the creases).
What impressions did I draw from all this?
I was impressed with RCAT. It is flexible and easy to understand, and can be easily modified (even during a game) to address issues and needs as they arise. The military outcomes all seemed highly plausible. I thought the combat components worked better than the stabilization model, but then again the scenario was a challenging one. Moreover, the political, social, and economic dynamics of stabilization are, in my view, much more complicated and much less well understood than the art and science of conventional military operations.
RCAT’s design lends itself to both training and analytical use—and possibly both at once. Many professional wargamers would suggest that analytical and training games are quite different things, and one should design a game to serve either one purpose or the other. I certainly accept that a game’s experimental design might be compromised by training requirements, and vice-versa. However, I do think there are cases where one can get two (simulated) bangs for one (very real) buck. Because of its elegant design it is easy to imagine RCAT being run as part of professional military education, while analysts use player behaviours to explore research questions of interest.
Game design and playtest sessions can themselves generate useful experimental data. The usual practice with many analytical wargames is to develop the game, playtest it to identify shortcomings, and refine the design. Having done this, the final wargame is conducted—and only then is data systematically recorded regarding the research question being examined. However, our RCAT discussions, although intended simply as introduction and game development sessions, themselves produced substantive findings relating to both scenario development and future Canadian Forces capability requirements. This suggests that we need to think about more systematically identifying insights generated by game design processes.
Scenario designers need to think seriously about politics. There were a few times in the force development scenario we were using where politically appropriate behaviour by scenario actors threatened to compromise the ability of the scenario to fully explore the intended research questions. While RCAT is certainly not a role-playing or negotiation game, the adversarial (and coalition) nature of game play did force players to think critically about their interests and motivations.
Game facilitation skills matter—a lot. The RCAT team knew exactly when to play the rules-as-written, and when to tweak the system on the fly to best model the unfolding situation. They also had the wisdom and experience to keep the game flowing despite potential distractions (including incessant comments and suggestions from me!)—and, conversely, also knew when to slow things down to allow for a deeper-dive or extended discussion.
Such facilitation skills are not necessarily intrinsic to all wargamers. Indeed, if anything they’re more common among role-playing gamers, especially experienced dungeon/gamemasters, than among “grognard” conflict simulationists. That, however, is a PAXsims post for another day.