SIMULATIONS AND GAMES: APPLICATIONS
George A. Waller, University of Wisconsin Colleges, Fox Valley
Adam Wunische, Boston College
One particular problem highlighted at this year’s conference was the difficulty of measuring the effectiveness of simulations for enhancing student learning. For several of the papers presented, success was measured by either student self-assessment or student satisfaction: if students reported that they felt the simulation enhanced their understanding of course material, the simulation was judged to be effective. It was noted that, while these data are important and help make the case for the use of simulations in political science classrooms, administrators and detractors often need more convincing, objective evidence that simulations do, in fact, generate greater student understanding of course content. Discussion of this topic during the paper presentations centered on three points: 1) the major advantages of simulations include the development of soft skills such as empathy and active learning, which are difficult to measure in the first place; 2) subjective student assessments of simulation effectiveness may not equate to objective measures of student learning; and 3) the measurable impacts may be longer term and observable only in subsequent classes, beyond the normal timeframe for a posttest. Although presenters and attendees agreed that simulations are fun and appear to enhance student engagement and satisfaction, more work needs to be done to explore ways to measure the actual impact of simulations on student learning.
A pivotal question, taken up by several of this year’s papers, was when and how simulations might best be utilized. Some types of simulations are perhaps best employed early in a semester or term, while others may be most effective near midterm or the end of a course. Simulation placement is an important consideration for instructors and should be tied directly to learning objectives for the course and for particular units or modules of the course. One paper proposed that simulations might be more effective for entry-level students, or for students in introductory courses, than for advanced students who have presumably already developed the soft skills and abilities that simulations are thought to foster. Another paper addressed this by adjusting the rules, roles, and sophistication for higher-level students to ensure the simulation remains challenging. Explaining the purpose of the simulation or game also helps convince older or nontraditional students that the games have academic value, increasing willingness to participate and the likelihood of successful outcomes.
It was also noted that increases in length and sophistication can deter both students and instructors. More complex simulations demand considerably more care in design, planning, and setup from the instructors who use them and often require a significant amount of “troubleshooting” when things do not go according to plan. Simulations that take place over multiple class sessions (sometimes even multiple weeks) must be carefully monitored and adjusted when problems arise or when expectations or objectives are not being realized. Some instructors address this by dividing simulation work into iterations to give students, and the instructor, time to reset and reflect. Others use social networking software to streamline interactions and reduce the instructor’s workload. While longer, more complex simulations can be very well designed and implemented in some courses, resulting in significant student and instructor satisfaction, shorter, simpler simulations can be quite effective for reinforcing important course concepts while requiring less time and presenting fewer opportunities for unexpected developments. In any case, whether to use longer, more complex simulations or shorter, simpler ones is an important consideration that needs to align with course (or course-unit) learning objectives.
Iteration and variety were employed by a number of track presenters. In longer, extended simulations, the tasks and events were spread out over time. In one paper on a campaign budget activity, events in the activity are spread throughout the term and supplemented by traditional lecture techniques. Other strategies include short, simple icebreaker-type simulations. These help students, who otherwise might be accustomed only to nonactive teaching methods, become familiar with the process before being overwhelmed by a resource-intensive, longer, or more complex simulation. These shorter simulations can also serve multiple functions: students can learn how multiple iterations of the prisoner’s dilemma change the outcome while also meeting and working closely with their fellow students, thus building the foundations for future group work.
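As an illustrative aside, not drawn from any of the conference papers: the way repetition changes the prisoner’s dilemma can be sketched in a few lines of Python. The payoff values and strategy names below are standard textbook conventions, not details from the track.

```python
# Payoffs (row player, column player); C = cooperate, D = defect.
# Standard ordering: mutual cooperation beats mutual defection,
# but unilateral defection pays best in a single round.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def always_defect(opponent_history):
    # The dominant strategy in a one-shot game.
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def play(strategy_a, strategy_b, rounds=10):
    """Play a repeated game and return the cumulative scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each side reacts to the other's past
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# One-shot logic repeated blindly: mutual defection, 1 point per round.
print(play(always_defect, always_defect))  # (10, 10)
# Conditional cooperation sustains the better mutual outcome.
print(play(tit_for_tat, tit_for_tat))      # (30, 30)
```

Running the two comparisons shows why iteration matters: the same payoff matrix that makes defection dominant in a single round rewards sustained cooperation once players can respond to each other’s past behavior.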
Debriefing is an essential component of simulation pedagogy. Debriefs were mentioned as a way both to mitigate some of the possible negative effects of simulations and to consolidate student learning. Possible negative effects include students losing “the game” and being upset or angry about it, or the simulation failing to achieve the desired learning outcome. Debriefs can highlight the learning opportunities that come from both negative and positive outcomes and the lessons that should have been learned. They can also help shift a student’s focus from personal shortcomings to the actual learning objectives of the simulation. Debriefs should clarify how the simulations are connected to course learning objectives and what that means for the broader course curriculum.
SIMULATIONS AND GAMES: EVALUATIONS
Joseph W. Roberts, Roger Williams University
Nancy E. Wright, Long Island University, Brooklyn
Simulations and games have long been a key element of the university classroom. These active learning tools are designed to engage and motivate students. Complex topics that may not be as clear in assigned readings are presented in ways that encourage students to think critically, solve problems, and ask deeper questions. The key question is: how do we, as educators, know that simulations are doing what we expect them to do? In 2016, as in previous years, the track was a lively mix of discussion and practice. Four critical themes emerged from the discussions: 1) What does success mean?; 2) Context matters for simulations; 3) Tradeoffs of using real versus imaginary simulations; and 4) Rigorous assessment is needed, but that does not mean only quantitative assessment.
What Is a Successful Simulation or Game?
If we ask the question “Do simulations work?” we may or may not get a useful answer. In fact, this may not be the best question to ask, because different learning objectives, classroom configurations, time and other resources, instructor skill, and other factors may affect the success of a simulation or game. For Simon Usherwood, the better questions are “How do you design effective simulations?” and “What are effective implementations of simulations?” The key is building pedagogical tools and teaching simulation design to improve learning. Moreover, there is a need to bring the body of literature on teaching and learning to bear in planning and implementing high-impact learning tools. Both of these questions relate to the real versus imaginary question below. Michelle Allendoerfer discussed outsourcing the design process to two upper-level undergraduate students, Tianshan Fullop and Jacob Warwick, in an independent study. The simulation was then used in Allendoerfer’s comparative politics class. This is an incredibly rich opportunity to develop deeper student knowledge of the issues (for both groups of students, but particularly the two designers) and to show students collaborative work between professor and students. The success of a simulation must be thought of in terms of learning outcomes. Erin Baumann and John FitzGibbon discussed the use of crisis simulations and approached the issues of effectiveness and motivation from the perspectives of both the scholarship of teaching and learning and cognitive psychology. For Baumann and FitzGibbon, the design of simulations must work within the broader context of learning outcomes. Including this different and important body of literature enhances the discussion of fidelity (closeness to reality) and systematization (increasing the regularity of interactions even in a crisis environment).
Amanda Rosen and Nina Kollars explored ways to implement active learning and simulations in a methods classroom. The traditional laments of students in methods courses are that the material is abstract, boring, and difficult. Rosen and Kollars address this by taking a local restaurant’s simple claim to have the best breakfast in town and asking students to determine where the best breakfast is using the methods of political science. Students operationalize definitions, collect data, analyze the data, and complete a final paper. Rosen and Kollars do not have clear data on the effectiveness of the project apart from course evaluations and expressed student interest (see below). There is no one best way to judge effectiveness.
Context Matters
When using simulations in class, there are many issues a professor needs to consider. Who are the students? What do they bring to the table? What type of simulation or game (i.e., low skill versus high skill; long simulations versus short simulations versus games; in-class versus online versus hybrid) meets the learning-outcome needs of the professor and the students? The participants used different kinds of simulations or games to reach students in different ways. Victor Asal, Josh Caldon, Andrew Vitek, and Susan Bitter demonstrated and discussed a game taking no more than 10 minutes to play, the Running Game. Depending on the classroom or even the university, students will have wildly different starting points in their understanding of inequality. This short game is extremely effective at getting students to understand the concepts of inequality and structure, particularly in places where some forms of diversity might be more limited. In contrast, Joseph W. Roberts employed a multi-day simulation of the Israel-Palestine conflict. Given the breadth and depth of the issues in the conflict, the simulation is, by necessity, larger and more complex. However, it was extremely effective in the small course (20 students) in which it was used because the number of roles for students was limited; a significantly larger classroom environment would be much more difficult. A third model of simulation size appears in the paper by Andrew Schlewitz on the Washington Model Organization of American States (WMOAS). Any large-scale simulation of international relations (WMOAS, MUN, Model Arab League, Model EU, etc.) will have a real impact on learners from multiple institutions. With extensive survey data from student participants, Schlewitz showed real learning, but in a largely extracurricular role that supplements rather than supplants coursework.
Gretchen Knudsen Gee showed the unique challenges professors in larger classes face in getting and keeping students engaged. Simulations allow for greater involvement of students in and across large multi-section courses and may also provide some continuity between sections. However, active learning techniques require more confident instructors, and there may be a real fear of trying new things. Moreover, Gee’s paper shows that the resources available for creating simulations are important to providing more realistic experiences.
Chad Raymond and Sally Gomaa provided a cautionary tale about context. The authors showed the pitfalls of using online tools for simulations in classrooms. In this case, the use of Flash video caused problems because of security settings, removal of Flash from computers, or other issues. Moreover, the original plan for the simulation experiment failed because the planned site was removed from the Internet. When planning a simulation, it is important to have backup plans and to test the systems well in advance.
Tradeoffs of Real vs. Imaginary Scenarios
Most participants agreed that both approaches are valuable in different ways. On the one hand, developing simulations around actual events gives students the opportunity and motivation to conduct research outside the classroom to learn more about the simulation’s assigned countries and events (see Gee, Roberts, Rosen and Kollars, or Schlewitz, for example). On the other hand, students less familiar than others with the region of the world where the simulation takes place may be intimidated, especially if others are familiar with it. Moreover, focusing on actual events, especially current ones, can draw students so deeply into the day-to-day progression of what is taking place that they may overlook the broader significance (e.g., the acquisition of negotiating skills or empathy) that is the purpose of the simulation itself.
Nancy Wright combined elements of both the real and the imagined: the former in the cases of a project to harness electricity from methane gas in Lake Kivu, Rwanda, and of indigenous displacement in the Central African Republic; the latter in scenarios of the pre-colonial era in each of those countries, for which, especially in the case of the Central African Republic, very little data are available. One of Wright’s key findings is that students can harness facts to place specific issues and events in a larger context and, where data are scarce, can harness their imaginations to re-create historical situations and then reflect on why they imagine them the way they do. Wright also points out that understanding students’ preconceived ideas can inform simulation design and operation, particularly to counter the tendency to link a country solely to a particular crisis or atrocity.
Rigorous Assessment Does Not Have to Be Quantitative
The increasingly established trend of equating rigor with quantitative assessment is likely to obscure the evaluation of rigor in other, equally meaningful ways. This is true for two reasons: quantitative analysis cannot explain everything, and it depends on data that may not always be available. There are other ways to assess the value of simulations and games beyond mere quantification. For example, Rosen and Kollars noted that, while reliable data actually measuring the effectiveness of games on learning were not available, course evaluations, often cited as low for methods courses compared with other courses, were consistently high in the methods course that employed several illustrative games; in fact, a significant number of students wished for a second methods course, an outcome attributed to the use of games. Similarly, Roberts used the knowledge domains assessment model (Pettenger, West, and Young 2014), which is based on learning outcomes. By focusing on learning outcomes, the assessment better reflects the goals of the course, though such means of evaluation would not necessarily count in the context of traditional empirical assessment.
Nicholas Vaccaro is critical of the experimental and overtly empirical assessment models proposed by Baranowski and Weir (2015). Vaccaro notes that their use of “show and tell” infantilizes the process of disseminating useful and helpful pedagogical tools; description has value, and this should not be overlooked. Discussion of potential flaws in experimental design methodologies is critical. Is a pre/post or control/test group model necessary to show learning? Is it fair for one group to engage in high-impact practices while another does not? Does the proposed model limit assessment to environments that can establish two or more test groups? For example, at a small liberal arts university, a course might be taught only biennially, which does not lend itself to testing the effectiveness of a technique across years. Ultimately, the issue comes down to the question “Is the medical clinical-trial model an appropriate model for social science research?” The general consensus is that it is not always a useful model for research on teaching and learning.
References
Baranowski, Michael K., and Kimberley A. Weir. 2015. “Political Simulations: What We Know, What We Think We Know, and What We Still Need to Know.” Journal of Political Science Education 11(4): 391–403.
Pettenger, Mary, Douglas West, and Niki Young. 2014. “Assessing the Impact of Role Play Simulations on Learning in Canadian and US Classrooms.” International Studies Perspectives 15(4): 491–508. doi: 10.1111/insp.12063.