Conflict simulation, peacebuilding, and development

Playtesting RCAT


Last week I was invited to participate in a demonstration and playtest of the Rapid Campaign Analysis Toolset (RCAT) in Ottawa. RCAT has been developed by the (UK) Defence Science and Technology Laboratory and Cranfield University, and is intended as a flexible, low-overhead wargaming system for military planning and analysis. You’ll find more on RCAT here and here (from Connections UK 2013), here (Falklands war operational commanders test, via the LBS blog), and here (in conjunction with a digital simulation, again from LBS).

Defence Research and Development Canada are interested in seeing whether RCAT might be used to help refine the scenarios used for capability-based planning within the Department of National Defence. These scenarios aren’t based on current events, nor are they meant to represent actual planned operations. Instead they are intended to be broadly representative of the sorts of missions that the Canadian Armed Forces might be called upon to perform. They are thus intended to provide the Joint Capability Planning Team with plausible problems that might be addressed by military means, enabling the identification and validation of various military capabilities.

To this end, the visiting RCAT team (Colin Marston of Dstl, Jeremy D. Smith from Cranfield University, and Graham Longley-Brown of LBS) had developed a version of RCAT that addressed an existing force development scenario—specifically, a hybrid warfare scenario that explored the ability of Canadian forces to operate as part of a larger coalition in a complex conflict environment running the gamut from high intensity combat to later stabilization operations.


RCAT design process


I headed up the Red Team, and proceeded to throw every plausible curve I could think of at both the Blue and Green players and the RCAT system itself. The sessions were very much a participatory seminar on the game’s design, as we discussed how RCAT modelled various kinetic and non-kinetic effects, how the system might be modified, and the extent to which it might offer insight into scenario design and capability issues. To this end, we gamed a few turns of everything from major campaign moves (days/weeks/months), through to tactical/operational vignettes (hours)—the former including one major surprise by me, and the latter including a very successful urban operation and airborne insertion by my opponents.


RCAT turn sequence (with apologies for the creases).


What impressions did I draw from all this?

I was impressed with RCAT. It is flexible and easy to understand, and can be easily modified (even during a game) to address issues and needs as they arise. The military outcomes all seemed highly plausible. I thought the combat components worked better than the stabilization model, but then again the scenario was a challenging one. Moreover, the political, social, and economic dynamics of stabilization are, in my view, much more complicated and much less well understood than the art and science of conventional military operations.

RCAT’s design lends itself to both training and analytical use—and possibly both at once. Many professional wargamers would suggest that analytical and training games are quite different things, and one should design a game to serve either one purpose or the other. I certainly accept that a game’s experimental design might be compromised by training requirements, and vice-versa. However, I do think there are cases where one can get two (simulated) bangs for one (very real) buck. Because of its elegant design it is easy to imagine RCAT being run as part of professional military education, while analysts use player behaviours to explore research questions of interest.

Game design and playtest sessions can themselves generate useful experimental data. The usual practice with many analytical wargames is to develop the game, playtest it to identify shortcomings, and refine the design. Having done this, the final wargame is conducted—and only then is data systematically recorded regarding the research question being examined. However, our RCAT discussions, although intended simply as introduction and game development sessions, themselves produced substantive findings relating to both scenario development and future Canadian Forces capability requirements. This suggests that we need to think about more systematically identifying insights generated by game design processes.

Scenario designers need to think seriously about politics. There were a few times in the force development scenario we were using where politically-appropriate behaviour by scenario actors threatened to compromise the ability of the scenario to fully explore the intended research questions. While RCAT is certainly not a role-playing or negotiation game, the adversarial (and coalition) nature of game play did force players to think critically about their interests and motivations.

Game facilitation skills matter—a lot. The RCAT team knew exactly when to play the rules-as-written, and when to tweak the system on the fly to best model the unfolding situation. They also had the wisdom and experience to keep the game flowing despite potential distractions (including incessant comments and suggestions from me!)—and, conversely, also knew when to slow things down to allow for a deeper-dive or extended discussion.

Such facilitation skills are not necessarily intrinsic to all wargamers. Indeed, if anything they’re more common among role-playing gamers, especially experienced dungeon/gamemasters, than among “grognard” conflict simulationists. That, however, is a PAXsims post for another day.

One response to “Playtesting RCAT”

  1. Graham Longley-Brown 01/12/2015 at 3:06 am


    Thanks for this, and your invaluable comments during the week. We’re up to 30 so far and have yet to consolidate our notes or incorporate Johns Hopkins’ Todd Kauderer’s! The chance to hold a detailed ‘under the hood’ examination of RCAT in a real context was a wonderful opportunity. Could I please add a point of amplification and one of explanation.

    On the ‘Game design and playtest sessions can themselves generate useful experimental data’ point. The aim of the trip was to demonstrate the utility of RCAT to: 1. Assess a scenario to see if it is fit for purpose; and 2. Identify aspects of future Canadian capability requirements. But we also wanted to reveal the ‘inside of the sausage factory’; the inner workings of RCAT development to prompt discussion and enable skills transfer. Hence we deliberately tabled a partially finished version of an RCAT scenario to develop iteratively during the week with our Canadian hosts and Rex and Todd. This gave us, for the first time, the opportunity to examine the scenario and develop supporting RCAT mechanisms with the customer from the very outset. Usually we will put a near-complete and playtested version on the table for general review only when confident that it will work and just needs tweaking. Working with the customer from the outset with the hood wide open illustrated Rex’s point: within minutes of throwing a map and some counters on the table and assigning people to roles we were identifying both gaps in the scenario provided and capability insights. Hence Rex’s comment that we should formalise the identification of insights generated from the very start of the game design process itself.

    The minor point of explanation. The comparative weakness of the stabilisation test turn was due to the deliberately half-formed version of RCAT used. We literally cut and pasted the mechanisms from a previous playtest of Iraq 2003–2008 onto the provided Canadian scenario. I don’t think the RCAT mechanisms need a major overhaul (though they can always be improved); it was more a case of needing more (some!) consideration and calibration. Do framework patrols have a positive or negative effect on local population support for Coalition troops? How much activity can an insurgent group conduct in a month-long turn period? These kinds of consideration and calibration were deliberately not done prior to the Canada trip in order to prompt discussion.

    Finally, please excuse the typo in the central green box on the RCAT Turn Sequence. This should read ‘Red Teaming assumptions’, in the sense of challenging assumptions or playing ‘Devil’s Advocate’ – as opposed to the Red Cell, which plays the adversary(s).

    Thanks, Rex!

    Graham LB
