The latest issue of the Journal of Political Science Education 11, 4 (October-December 2015) contains an article by Michael Baranowski and Kimberly Weir (Northern Kentucky University) on “Political Simulations: What We Know, What We Think We Know, and What We Still Need to Know.” It is a very useful reminder that we need to be more methodologically sophisticated in assessing what simulations actually teach.
For political scientists looking for creative ways to engage students, simulations might be the answer. The common conception is that because this type of activity offers a unique way to convey information through active learning, student learning will consequently increase. In order to evaluate this claim, we conducted a meta-analysis reviewing relevant simulation articles published in the Journal of Political Science Education from its inception through 2013. This systematic approach examines not just whether simulations prove engaging but, more importantly, whether they are valuable learning tools. We found that the discipline needs to conduct a more rigorous assessment of learning outcomes to move beyond the “Show and Tell” approach to evaluating simulations. Upon reviewing the articles, we are able to identify how a few changes can offer better information about the pedagogical value of simulations.
They are critical of some of the assessment mechanisms used to measure the learning impact of simulations:
The good news is that most of the simulations we examined did employ some sort of empirical evaluation method. However, this is only in a very broad sense and includes essentially any sort of measurement of student engagement and learning, including student reaction papers, course evaluations, exams, and final course grades. As one might reasonably expect, in every instance except one (Raymond 2012, discussed below), the authors concluded that their evidence demonstrated the effectiveness of the simulation to some extent.
Unfortunately, much of this empirical evidence was not as convincing to us as it often seemed to be to the authors. The fundamental problem with exams, final grades, and course evaluations as measures of simulation effectiveness is fairly obvious: It is extraordinarily difficult to isolate the effect of the simulation on student learning and/or engagement. Most of us are familiar with the feeling that a simulation or some other technique really helped students “get it” in a way reading and lectures did not, but general evaluations that do not focus specifically on the simulation itself cannot really tell us if that is the case.
While it is common for instructors to set aside time after a simulation for an in-class debriefing session, it is difficult to carefully evaluate this sort of evidence and even more difficult to convey it with any precision to anyone not present for the debriefing session. This is not to suggest that postsimulation debriefings are without merit as they can provide a wealth of potentially useful information to instructors. But alone they cannot provide sufficient evidence of the success of a simulation.
For the reasons outlined above, we do not consider simulations that solely rely on grades, course evaluations, or impressionistic debriefings to provide much in the way of strong empirical evidence….
Overall, they argue that the evidence on simulation effectiveness is positive, but that more effort is needed to assess this:
Our review confirmed that, while instructors struggle to systematically evaluate simulations, a small but growing body of evidence lends support to the contention that students who participate in simulations do in fact learn more than students not taking part in such exercises.
The literature has done a better job of identifying qualitative ways that students gain from participating in simulations. The fact that students are more enthusiastic about learning increases the likelihood that they might more regularly attend classes, as noted by Gorton and Havercroft (2012). While enthusiasm can only help to engage students, it does not necessarily lead to learning. That being said, rigorous research in which the effects of simulations can be isolated and measured is not as prevalent in the literature as we hope it one day will be. In part, this may be due to the manner in which pedagogical research is designed. While none of the authors we reviewed wrote anything like “I ran this simulation and then thought I should write it up,” some of the studies led us to suspect that is how things happened. While we are glad that the results of these efforts can be shared with the larger community, seeking rigor in the discipline necessitates planning on the part of the instructor to incorporate elements such as pretests and control groups rather than including them as an afterthought.
As Baranowski and Weir note, student surveys and self-reported learning may be a better gauge of how much students have enjoyed the simulation than of what they have actually learned (or, for that matter, whether they’ve even learned the right things, since simulations may also be especially vulnerable to generating misleading conclusions). They recognize, however, that fully experimental methods—using control and treatment groups, with random assignment to each—are often not feasible. Certainly I know my POLI 450 students would riot if half of them were told they weren’t participating in the Brynania simulation. However, in the absence of a control group there’s no reliable way of determining whether the opportunity cost of a simulation was really worth it, or whether students would have learned just as much through more traditional means like lectures, assigned readings, or course discussions.
They briefly discuss some of the problems with pre/post-test assessments of learning, although I think they understate the problems of prompting, sensitization, and consequent bias. The article largely focuses on traditional learning outcomes (knowledge retention, for example), and not necessarily on other learned skills (diplomatic skills, leadership, communication, self-confidence, stress management).
Finally, it seems to me quite possible that simulation articles in general, and those that include explicit attention to assessment mechanisms in particular, are an unrepresentative sample of simulation use more broadly. Almost by definition they are written by instructors with a particular interest in simulation methods, who might therefore be much more effective at designing and implementing simulations, as well as integrating them into the course curriculum.
All in all, the piece is a welcome contribution to the political science literature on simulations and learning.