PAXsims

Conflict simulation, peacebuilding, and development

Daily Archives: 18/02/2016

Connections North (Ottawa, February 22)

This is rather last-minute notice—probably because we’ve put it together at the very last minute—but Canadian readers will be interested to learn that Defence Research and Development Canada (DRDC) and PAXsims will be hosting Connections North, a one-day miniconference on professional wargaming, in Ottawa on Monday, 22 February 2016. Among the presenters will be Jim Wallman (Past Perspectives), who is visiting the colonies this week.

Ottawa—deliberately located so as to be beyond the immediate reach of a US invasion, and with its scenic Rideau Canal providing strategic mobility for Imperial troops in the event of American aggression—is obviously the perfect place for such a get-together. You can’t be too careful.

Update—further details

We’ll be meeting at the Lord Elgin Hotel (100 Elgin St, Ottawa). The agenda for the day is below.

0930 Arrive at meeting location and set up
0945 General Introductions: Workshop Overview and Objectives
1000 Presentation on DND matrix game trials

Dr. Murray Dixson (DRDC)

1015 Break
1030 Presentation on DND’s Rapid Campaign Analysis Toolset (RCAT) trial

Paul Massel (DRDC)

1100 Gaming the Semi-Cooperative: Peace Operations, HADR, and Beyond

Prof. Rex Brynen (McGill University)

1130 Perspectives on Wargaming

Jim Wallman (Past Perspectives)

1200 Lunch
1315 Round-table discussion on the application of wargaming to defence analysis and to supporting strategic decision-making
1400 Demonstration of AFTERSHOCK: A Humanitarian Crisis Game

Rex Brynen (McGill University) and Thomas Fisher (Imaginetic)

1530 Final Points and Conclusion

We hope to see members of the Ottawa wargaming community there, as well as those working on national security issues more broadly and others interested in the use of serious games for education and policy analysis. Pass it on!

Stephen Downes-Martin: The diagram

At the most recent MORS Wargaming Community of Practice meeting/teleconference, Stephen Downes-Martin (US Naval War College) presented on the topic of “Adjudication: The Diabolus in Machina of Wargaming.” I wasn’t able to attend since I have a (not-so-serious) game to organize this weekend, but much of what he had to say was based on an important article on the same topic he published in Naval War College Review 66, 3 (Summer 2013).

What was new to me, however, was this very useful diagram summarizing his arguments, which we’re happy to present to PAXsims readers:

[Diagram: “Adjudicating Discovery Wargames,” Stephen Downes-Martin]

Indeed, I’m almost tempted to make a game of it (“roll d6: on a 4+ your adjudicators believe they already know the answer, and ignore potential insights from the game that run counter to their preconceptions”).
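Purely for fun, that house rule is simple enough to sketch in a few lines of Python. The function name and the 4+ threshold below are just the ones invented in the joke above, not anything drawn from Downes-Martin’s actual framework:

```python
import random

def adjudicate(insight_contradicts_preconceptions: bool) -> str:
    """Tongue-in-cheek 'adjudicator bias' house rule: roll a d6, and on a
    4+ the adjudicators decide they already know the answer and ignore
    any insight that runs counter to their preconceptions."""
    roll = random.randint(1, 6)  # the d6 roll
    if insight_contradicts_preconceptions and roll >= 4:
        return f"Rolled {roll}: insight ignored by the adjudication cell."
    return f"Rolled {roll}: insight survives adjudication."

# Ten counter-intuitive player insights meet the adjudication cell.
for _ in range(10):
    print(adjudicate(insight_contradicts_preconceptions=True))
```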

Baranowski and Weir on political simulations: What we think we know, and what we still need to know

The latest issue of the Journal of Political Science Education 11, 4 (October-December 2015) contains an article by Michael Baranowski and Kimberly Weir (Northern Kentucky University) on “Political Simulations: What We Know, What We Think We Know, and What We Still Need to Know.” It is a very useful reminder of the need for us to be more methodologically sophisticated in examining the issue.

For political scientists looking for creative ways to engage students, simulations might be the answer. The common conception is that because this type of activity offers a unique way to convey information through active learning, student learning will consequently increase. In order to evaluate this claim, we conducted a meta-analysis reviewing relevant simulation articles published in the Journal of Political Science Education from its inception through 2013. This systematic approach examines not just whether simulations prove engaging but, more importantly, whether they are valuable learning tools. We found that the discipline needs to conduct a more rigorous assessment of learning outcomes to move beyond the “Show and Tell” approach to evaluating simulations. Upon reviewing the articles, we are able to identify how a few changes can offer better information about the pedagogical value of simulations.

They are critical of some of the assessment mechanisms used to measure the learning impact of simulations:

The good news is that most of the simulations we examined did employ some sort of empirical evaluation method. However, this is only in a very broad sense and includes essentially any sort of measurement of student engagement and learning, including student reaction papers, course evaluations, exams, and final course grades. As one might reasonably expect, in every instance except one (Raymond 2012, discussed below), the authors concluded that their evidence demonstrated the effectiveness of the simulation to some extent.

Unfortunately, much of this empirical evidence was not as convincing to us as it often seemed to be to the authors. The fundamental problem with exams, final grades, and course evaluations as measures of simulation effectiveness is fairly obvious: It is extraordinarily difficult to isolate the effect of the simulation on student learning and/or engagement. Most of us are familiar with the feeling that a simulation or some other technique really helped students “get it” in a way reading and lectures did not, but general evaluations that do not focus specifically on the simulation itself cannot really tell us if that is the case.

While it is common for instructors to set aside time after a simulation for an in-class debriefing session, it is difficult to carefully evaluate this sort of evidence and even more difficult to convey it with any precision to anyone not present for the debriefing session. This is not to suggest that postsimulation debriefings are without merit as they can provide a wealth of potentially useful information to instructors. But alone they cannot provide sufficient evidence of the success of a simulation.

For the reasons outlined above, we do not consider simulations that solely rely on grades, course evaluations, or impressionistic debriefings to provide much in the way of strong empirical evidence….

Overall, they argue that the evidence on simulation effectiveness is positive, but that more effort is needed to assess this:

Our review confirmed that, while instructors struggle to systematically evaluate simulations, a small but growing body of evidence lends support to the contention that students who participate in simulations do in fact learn more than students not taking part in such exercises.

The literature has done a better job of identifying qualitative ways that students gain from participating in simulations. The fact that students are more enthusiastic about learning increases the likelihood that they might more regularly attend classes, as noted by Gorton and Havercroft (2012). While enthusiasm can only help to engage students, it does not necessarily lead to learning. That being said, rigorous research in which the effects of simulations can be isolated and measured is not as prevalent in the literature as we hope it one day will be. In part, this may be due to the manner in which pedagogical research is designed. While none of the authors we reviewed wrote anything like “I ran this simulation and then thought I should write it up,” some of the studies led us to suspect that is how things happened. While we are glad that the results of these efforts can be shared with the larger community, seeking rigor in the discipline necessitates planning on the part of the instructor to incorporate elements such as pretests and control groups rather than including them as an afterthought.

As Baranowski and Weir note, student surveys and self-reported learning may be a better gauge of how much students have enjoyed the simulation than of what they have actually learned (or, for that matter, whether they’ve even learned the right things, since simulations may also be especially vulnerable to generating misleading conclusions). They recognize, however, that fully experimental methods—using control and treatment groups, and random assignment to these—are often not feasible. Certainly I know my POLI 450 students would riot if half of them were told they weren’t participating in the Brynania simulation. However, in the absence of a control group there is no reliable way of determining whether the opportunity cost of a simulation was really worth it, or whether students would have learned just as much through more traditional means such as lectures, assigned readings, or course discussions.

They briefly discuss some of the problems with pre/post-test assessments of learning, although I think they understate the problems of prompting, sensitization, and consequent bias. The article also focuses largely on traditional learning outcomes (knowledge retention, for example) rather than on other skills that simulations may develop (diplomacy, leadership, communication, self-confidence, stress management).
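To make the underlying design concrete, here is a minimal, entirely hypothetical sketch (in Python) of the kind of assessment Baranowski and Weir are calling for: random assignment to simulation and lecture-only groups, a common pre/post instrument, and a comparison of mean gains. All names and scores below are invented placeholders, not data from the article or from any real course.

```python
import random
from statistics import mean

rng = random.Random(0)  # seeded so the illustration is reproducible

# Hypothetical roster; every name and score below is an invented placeholder.
students = [f"student_{i:02d}" for i in range(24)]

# Random assignment to a simulation (treatment) or lecture-only (control) group.
rng.shuffle(students)
treatment, control = students[:12], students[12:]

def placeholder_scores(group, boost):
    """Generate fake pre/post test scores out of 100, for illustration only."""
    pre = {s: rng.randint(50, 70) for s in group}
    post = {s: min(100, pre[s] + rng.randint(0, 10) + boost) for s in group}
    return pre, post

# Pretend the simulation adds some learning gain; an assumption, not a finding.
sim_pre, sim_post = placeholder_scores(treatment, boost=8)
ctl_pre, ctl_post = placeholder_scores(control, boost=0)

def mean_gain(pre, post):
    """Average improvement on the common pre/post instrument."""
    return mean(post[s] - pre[s] for s in pre)

print(f"Simulation group mean gain:   {mean_gain(sim_pre, sim_post):.1f}")
print(f"Lecture-only group mean gain: {mean_gain(ctl_pre, ctl_post):.1f}")
# A real study would test whether the difference in gains is statistically
# significant, and would still have to worry that the pretest itself
# sensitizes students to what the simulation is 'about'.
```

Even this toy version makes the methodological point: without the control group, the simulation group’s gain on its own tells us little about whether the class time could have been better spent.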

Finally, it seems to me quite possible that simulation articles in general, and those that include explicit attention to assessment mechanisms in particular, are an unrepresentative sample of simulation use more broadly. Almost by definition they are written by instructors with a particular interest in simulation methods, who may therefore be much more effective at designing and implementing simulations, as well as at integrating them into their course curricula.

All in all, the piece is a welcome contribution to the political science literature on simulations and learning.