The following piece was submitted to PAXsims by an anonymous contributor.
Rep. Mike Gallagher claims in a recent War on the Rocks piece (with commentary by Rex Brynen and others here) that the US Congress needs to take a trip to the Naval War College to participate in a wargame showcasing Battle Force 2045, the Department of Defense’s recently announced plan for a 500-ship Navy. In order for “naval advocates in the executive branch … to sell a simple vision of integrated American seapower to the legislative branch”, he claims, they should participate in a wargame to understand the “assumptions, vulnerabilities, unknowns, and risks being assumed in the absence of change.” But selling concepts is a dangerous place for wargames to tread.
Rep. Gallagher acknowledges this, saying that “wargames could be rigged to put a positive outcome in front of lawmakers.” He’s very right. Skilled interpretation of wargames takes experience and an understanding of the craft. You don’t need to have been in the wargaming profession very long to see, or at the very least hear, a story of DoD leaders misinterpreting or over-interpreting the results of a wargame in support of their preferred concept or program. But wargames provide valuable insights for those willing to put in the effort. Congress should be wargaming – but at the strategic level, and with representatives from the entire interagency, to understand how best they can legislate, provide oversight, declare war, and wield the power of the purse for the benefit of our nation and its citizens.
Battle Force 2045, like all military plans, concepts, or proposed force structures, should be wargamed (and I’m sure it has been). Wargames, together with the rest of the cycle of research, give the planners, concept builders, and force structure assessors the information that they need to build a better plan, concept, or force structure. But that’s the job of the Department of Defense, not Congress.
Congress needs to be informed about the threats, the risks, and the opportunities afforded by everything that they legislate. When it comes to the military, it’s the DoD’s job to provide them with a clear and accurate articulation of the problem. When I brief the results of a wargame to leaders in the military, I don’t run a wargame for them. I use the insights that we learned in the wargame to provide actionable information relevant to the decisions that those leaders need to make. I don’t run a wargame for them to watch; I run a wargame to help me (and my analysis team) understand the problem, which helps me articulate the situation to those decision makers. If the DoD cannot articulate the situation to Congress and the White House, then perhaps it is they who need to go back to the wargaming table (and the analytic reports, and the exercise schedule).
What are the problems that Congress needs to understand?
Rep. Gallagher and the bipartisan colleagues he references are right in saying that Congress should spend some time wargaming. There are many problems that wargames can and should help understand, not the least of which is the U.S.’s current relationship with China. But my experience across many wargames in recent years, from tactical to operational to strategic, has made one thing very clear: competition and conflict with China will rely on much more than Battle Force 2045 or any other force structure that the U.S. military will propose.
International conflict with peer competitors like China will require a robust response from all the pieces of the federal government. The Department of Defense must clearly be ready to deter, and if necessary defeat, aggression against the US or its interests abroad. The Department of State must be able to negotiate with China and come to a clear understanding about red lines, interests, national objectives, and international relationships. State must also be engaged with our allies and partners, exploring not only issues of access, basing, and overflight for our military, but also economic, social, and (dis)information issues that are critical to the US building a coalition of like-minded nations. The Department of Treasury must be engaged with our allies and partners to ensure our and their domestic security and quality of life, which is critical to supporting national will during a contest with one of our major trading partners. The Departments of Agriculture, Energy, Education, and even Transportation have an opportunity to be engaged in the escalating tensions with other global superpowers.
The DoD spends a good deal of money, and quite a lot of time, wargaming a conflict with competitors across the globe. But rarely do those wargames include representatives from the interagency, for a very good reason: that’s not the DoD’s job. Congress, on the other hand, has the ability to legislate issues surrounding all of these Departments. However, a myopic exploration of any one of them is likely to give a skewed perception of the importance of that line of effort. If Congress were to declare war against a global superpower, then they must have a holistic view of the interagency problem and understand the broad ramifications – or at least that there are broad ramifications – of that act. Wargaming is a very effective way to do that.
At War on the Rocks today, Rep. Mike Gallagher (R-WI8) of the House Armed Services Committee argues that the US Navy and Department of Defense need to do a better job of selling their proposed naval force structure (Battle Force 2045) to members of Congress. The way this could be done, he suggests, is through a wargame:
Naval advocates in the executive branch need to sell a simple vision of integrated American seapower to the legislative branch in order to get budgetary buy-in. This will require the Pentagon to step out of its comfort zone.
This should start with a three-day trip, a short congressional delegation. Regardless of who is president and secretary of defense in 2021, this delegation should occur as soon as possible next year, as it may well be the most important government trip that will occur in the next decade. Pentagon leadership should gather congressional defense leaders, interested members, authorizers, and appropriators in the Mecca of seapower and wargaming at the Naval War College in Newport, RI. Over the course of 72 hours the department should walk Congress through a wargame that demonstrates the forces it needs, and how Battle Force 2045 will deny Chinese objectives in the Indo-Pacific generally and the first island chain specifically. The Pentagon needs to put it all out there: assumptions, vulnerabilities, unknowns, and risks being assumed in the absence of change, for legislators to understand and debate.
This idea of wargaming with Congress should have bipartisan support, if for no other reason than I stole it from Democrats. In an op-ed earlier this year with Gabrielle Chefitz, [Michèle] Flournoy argued that the Pentagon should invite members of Congress to observe its wargames in order to provide them with the context behind its budgetary proposals. This makes a lot of sense to me as a defense authorizer. The standard congressional hearings with the department are important, but are suboptimal forums for candid conversations, as neither members of Congress nor defense officials want to embarrass themselves on television and even classified discussions are frequently limited by time. A three-day wargame at Newport, on the other hand, would give members of Congress a rare glimpse behind the curtain of defense planning, allow members to ask stupid questions without generating negative press, and allow defense leaders to admit their intellectual or doctrinal blind spots without getting fired.
This does not need to be fancy. Congress just needs a map of the Indo-Pacific and a secure room filled with the Pentagon’s smartest people who can explain to members in simple terms the Chinese military threat, the blue force structure and capabilities needed to deter the People’s Liberation Army or defeat it in war should deterrence fail, and a clear understanding of what American allies bring to the fight. Defense officials should walk congressional leaders through how the current force structure in the Indo-Pacific is inadequate and how Battle Force 2045, in concert with the rest of the joint force, will turn an unfavorable military balance around and lead to victory. Armed with the analytical and tactical context behind the Future Naval Force Structure and the 30-Year Shipbuilding Plan, congressional leaders would then be in a position, despite budgetary headwinds, to make tough choices and convince their colleagues and the public to go along with them.
The idea has already received some pushback from those who fear that wargames can overemphasize military solutions to diplomatic problems.
This is a legitimate concern, although it is possible to run policy games on South and East Asia issues that don’t presume military solutions—as we did for Global Affairs Canada in our South China Sea game.
A bigger concern, I think, is that of “gamewashing”—that is, designing and running a game to reach a preconceived conclusion. This is the issue that Jacquelyn Schneider raises:
Moreover, methodologically, it is simply impossible for a single wargame to “prove” the superiority of a particular force structure or set of defence investments, both because game outcomes depend (or should depend) on decisions made in the game and because you also need to test out alternatives. Did the US Navy emerge victorious in the wargame because of Battle Force 2045, or because of brilliant US game play (regardless of the asset mix), or because the Chinese side played poorly? Did eight nuclear aircraft carriers and six light carriers prove to be the key to victory, or would the US have done even better with fewer aircraft carriers and more investment in submarines, UAVs, or something else? How much advantage is gained from investing in Navy versus Air Force capabilities? Would the asset mix that proves most effective in defending Taiwan also be the most effective in other scenarios? And so on.
What you risk ending up with is wargame theatre—slickly-produced to engage and convince the audience, but telling only one possible story.
All that being said, I do think there is value in engaging legislators (and legislative staff) in games—largely to educate, to build the foundations for cooperation in times of crisis, and to seek their input into the political dimensions of policy analysis.
One approach to dealing with this constraint is open adjudication, where the players participate with the adjudicators in determining the outcomes of interactions. The wargame becomes a structure within which the participants explore the novel scenario as they decide about novel warfighting concepts. The structure forces decision-making within a competitive environment followed by a cooperative exploration of possible outcomes, and this sequence is repeated as the game progresses.
The requirement can be satisfied by many small games run in parallel, with each game repeated multiple times and its design modified between iterations based on insights generated by the previous iteration. The iterations spawn multiple trajectories and create breadth across the decision and outcome space. Both Matrix Game and map/board-based Hobby Game techniques can satisfy these requirements.
Each small wargame has one player per side and one adjudicator who also acts as a data recorder. Each subgame is played many times with players rotating between sides and the adjudication position. Rotating roles is critical for games that explore novel situations, as it forces players and adjudicators to see the situation from different perspectives and be innovative about adjudication. Repetition forces the players to think harder about how to win as they face players who have seen their previous attempts. Whether players stay in the same groups for all the subgames or are shuffled between subgames is an open question. I call this “Swarm Gaming” (not wargaming swarms, that is a different topic).
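The rotation scheme described above can be sketched as a simple scheduling function. This is purely an illustrative aid, not part of any formal “Swarm Gaming” method: the role names, player names, and one-step rotation rule are assumptions made for the example.

```python
def rotation_schedule(players, iterations):
    """Assign each player to one of the three positions (two sides plus
    adjudicator/recorder), rotating the assignments each iteration so that
    everyone eventually sees the game from every perspective."""
    roles = ["Red", "Blue", "Adjudicator"]
    schedule = []
    for i in range(iterations):
        # Rotate the player list by one position per iteration.
        shift = i % len(players)
        rotated = players[shift:] + players[:shift]
        schedule.append(dict(zip(roles, rotated)))
    return schedule

# Example: three participants cycling through all three roles.
for i, game in enumerate(rotation_schedule(["Ana", "Ben", "Kim"], 3), 1):
    print(f"Iteration {i}: {game}")
```

With three participants and three iterations, each person plays Red once, Blue once, and adjudicates once; whether to shuffle groups between subgames remains, as the author notes, an open question.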
Ben Stevens is an expert in group facilitation and education via non-traditional media, with a growing portfolio in learning game development. He joined LLST as a Project Assistant in September 2020.
Like the rest of society, over the past six months serious gamers have scrambled to move our profession online. New methods have proliferated to adapt our favourite mechanics to online platforms, and even those designers most reluctant to leave behind face-to-face gaming (myself included) have been forced to experiment with this new digital medium.
Shortly before COVID-19 changed the way we work, Imaginetic and Lessons Learned carried out research for Save the Children UK in Kenya, Jordan, and Canada on the potential uses and effectiveness of learning games in humanitarian training. That work feels especially timely now, as our study included an examination of the differences between digital games and face-to-face exercises. The main thrust of the findings will cause many long-time serious gamers to nod in agreement: face-to-face learning games were much more engaging, enjoyable, and effective than their digital counterparts.
But hold on: the reality might be more complicated. In the early days of the lockdown, the Lessons Learned team spent some time gaming out pathways to a best-case digital future. Here are some of the key takeaways we identified.
What Makes a Good Digital Game?
A key recommendation from our pre-COVID-19 research was that, for a learning game to be successful, the form of the game should be dictated by the learning goals. Genre, mechanics, and theme should all mirror function. A game about information flow, micro-frictions within teams, or inter-agency coordination should require players with different perspectives to discuss their actions face-to-face. If we want players to learn empathy for others, they should be emulating the decision-making processes and emotional states of others with as much accuracy as possible. Conversely, an action side-scroller makes for a poor tool to teach about a crisis case study if players are paying more attention to the nuances of the controls and the gaps they have to leap over than to the artificially injected learning moments—assuming they have the skill to pass the obstacles at all.
One corollary of our findings is that digital and tabletop games are fundamentally different learning tools. They do different things well and, similarly, are limited in different ways. As experienced tabletop game designers, we are experts in designing with the strengths and limitations of our medium in mind. We know that fog of war is hard, so if it’s needed we make that a central design feature. We know that buy-in is difficult, and so our games should be quick to set up, quick to learn, and quick to start. In particular, we know that our games should involve people with different points of view collaborating on a plan around a table because that is something our medium does exceptionally well.
But are we keeping these principles in mind as we pivot to the digital environment? For many of us (myself included!), pivoting to digital has simply meant running our tabletop learning games over Zoom. After examining my own experiences, hearing about the experiences of others, and playing a lot of games, I think that to succeed in a digital future we need to get back to basics.
Accepting and Avoiding Digital Limitations
Try playing your favourite board game online, and you’ll quickly notice that the components we use just don’t work as well in the digital space. Moving digital pieces on a digital board feels disconnected. Virtual decks of cards can be confusing. Where can I put my tokens and why? How tall is this stack of cards? Did we shuffle or not? What deck did this draw come from?
This isn’t to say that the same mechanics can’t be used, but we should not assume that the tactile user interface we employ via units, tokens, decks of cards, and dice in a tabletop exercise will translate directly to a computer screen.
Digital games do not allow for the type of fluid, dynamic conversation that we rely on in tabletop learning games. After six months of remote work, we are well aware that Zoom calls and forum threads are less efficient than face-to-face meetings. Of course, that principle is equally true when we are engaged in a serious game. Conversation, debate, coordination, and group goal setting—the bread and butter of our tabletop designs—are all bottlenecked by the limitations of online meetings.
If these classic tabletop features don’t adapt well to the digital environment, is that the fault of the medium itself? Or should we as designers be changing our approach?
Embracing the Digital Environment
Even before COVID-19 curtailed our ability to meet face-to-face, we used the digital medium to communicate in a bewildering assortment of ways: emails, WhatsApp messages, Slack groups, social media, shared documents, video calls, SMS—the list goes on and on. Instead of using these tools as imperfect facsimiles of in-person interactions, why not build our digital game designs around digital communication itself?
What many of these tools have in common is that they are asynchronous. Digital conversations don’t happen all at once. Even an urgent email takes time to draft and revise. The slow pace of digital conversations has a serious impact on one of the most ubiquitous game structures: turns. If each turn requires communication between players, we can expect those turns to play out like an uphill slog through mud. It is becoming clear that our digital game designs might be more effective if we made clever use of asynchronicity instead of struggling against it.
I’ll go one step further: that list of digital communications software gives us an opportunity to exploit as digital designers. We already have a magnificent suite of tools at our disposal that our participants use every day. If we are deliberate about the tools we use to host our designs, we will not have to teach participants the mechanics from scratch. They already know how to send emails, manipulate spreadsheets, and participate in Slack threads. With well-fitted digital designs, we can offload the unfamiliar elements of running the game onto the control team, leaving participants to work in ways which already feel natural to them. Since we know that cards and small components often do not translate to digital space, where we can’t pick them up and look at them, it becomes much easier to present information via email, chat, spreadsheet, PDF, image, webpage, or any of the other myriad digital options. These tools also make digital games fantastic for concealing information. As designers, we have much more control over what players see and do not see in digital space.
Another great opportunity presented by the digital environment is that digital games do not require a physical space. We don’t have to book a meeting room to set up the board. Participants don’t have to meet to debate their strategy or submit their actions. When combined with asynchronous methods of communication, this flexibility gives us the opportunity to build games which run over longer periods of time but require less frequent input: fifteen minutes a day over the course of a week or five minutes out of every hour spread across a three-day conference, all largely run over familiar digital office tools (the archetype of this structure is, of course, PAXsims’ own Brynania civil war simulation).
Digital games are much more easily automated, allowing for much more complicated rules and mathematics. This automation could be as simple as a facilitator copy-and-pasting data into spreadsheets and emailing participants the results at the end of every round. Or it could be as complex as scripting fully automated solo experiences in powerful digital game design tools such as Unity. However, in these cases, we have to be cognizant of the drawbacks of offloading the processes from the player. If a player does not understand what is happening or why, they will struggle to connect with the learning objective. If players do disconnect, the more automated a game, the less opportunity a facilitator has to intervene in order to keep it on track.
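At the simpler end of that automation spectrum, a facilitator’s round-end bookkeeping can be scripted rather than hand-copied into spreadsheets. The sketch below is hypothetical — the order format, resource rule, and message wording are invented for illustration, not taken from any particular game:

```python
def adjudicate_round(orders, resources):
    """Apply each player's submitted order to their resource pool and
    return per-player summaries ready to paste into round-end emails."""
    results = {}
    for player, order in orders.items():
        # Players cannot spend more than they hold; clamp the order.
        spent = min(order.get("spend", 0), resources.get(player, 0))
        resources[player] = resources.get(player, 0) - spent
        results[player] = (
            f"Round results for {player}: spent {spent}, "
            f"{resources[player]} points remaining."
        )
    return results

# Example round: Blue spends within budget, Red over-commits.
orders = {"Blue": {"spend": 3}, "Red": {"spend": 5}}
resources = {"Blue": 10, "Red": 4}
for summary in adjudicate_round(orders, resources).values():
    print(summary)
```

Even a sketch this small shows the trade-off discussed above: the clamping rule is invisible to players, so the facilitator must be ready to explain why an over-committed order produced a smaller result than expected.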
Because digital games can be automated, played in shorter chunks of time, and do not take up physical space, they can be much more easily repeated than their tabletop counterparts. The potential for repetition is a major opportunity for all kinds of reasons. We know that repetition is a powerful tool for learning—and how often have we railed against the “n=1 problem” in analytical games?
Back to Basics
Of course, we should not consider this an exhaustive list of the strengths and weaknesses of the digital design environment (nor should we think of that environment as being homogeneous). But I think it’s safe to say that, in making the move from tabletop to digital learning games, we will need to go back to basics in our designs. We need to return to the desired outcomes of the project, and we need to search for new game mechanics that maximize the opportunities of the medium while avoiding its pitfalls. For many of us (again, myself included!), this process is going to seem frustrating and limiting as we grapple with basic problems in ways we have not experienced since early in our careers. In some cases, we may have to completely re-examine our assumptions about what a learning game can do.
It’s clear that, as responsible designers, we can’t just force what we’ve been doing into a new shape. If we want our learning game designs to make the transition online, we need to treat digital learning game design as a new art and invest time in learning how to do it well.
Pete Pellegrino is a retired USN commander and former Naval Flight Officer, currently employed by Valiant Integrated Services supporting the US Naval War College’s War Gaming Department as lead for game design and adjudication and lecturing on game related topics for the department’s war gaming courses. In addition to his work at the college since 2004, Pete has also conducted business games for Fortune 500 companies and consulted for major toy and game companies. The views expressed are those of the author and do not represent the official policy or position of any agency, organization, employer or company.
The various Excel tools mentioned in the lecture can be found here.
The latest issue of War on the Rocks features a piece by Benjamin Schechter (US Naval War College) on wargaming cyber security.
“Wargames can save lives” is axiomatic in the wargame community. But can they save your network? As modern conflict has become increasingly digital, cyber wargaming has emerged as an increasingly distinct and significant activity. Moreover, it’s doing double duty. In addition to its application to national defense, it’s also helping protect the economy and critical infrastructure. Wargaming is a military tool used to gain an advantage on the battlefield. However, it has also found a home beyond national security, frequently used in the private sector. Cyber security straddles the battlefield and the boardroom. As a result, it is not surprising that cyber wargaming is increasingly common across both the public and private sectors. As cyber security concerns intensify, so too does the attention given to cyber wargaming.
Designed well and used appropriately, cyber wargames are a powerful tool for cyber research and education. However, misconceptions about what cyber wargames are, their uses, and potential abuses pose challenges to the development of cyber wargaming.
He offers some useful insight into how to do this well—and some equally useful comments on what to avoid:
Cottage industries have emerged that cater to every type of cyber security need. A variety of contractors, consultants, and specialists offer bespoke cyber wargames, support services, and wargaming tools. Often, they provide valuable services during a time when people are grasping for insights and solutions. Yet there are also potentially troubling challenges and conflicts of interest. Wargame sponsors and participants sometimes lack the social and technical ability to assess the wargame product they receive critically. Alternatively, the need for immediate, easy answers for hard cyber problems encourages problematic cyber wargames. Whatever the source, and there can be many, the potential problems and pathologies with cyber wargames go beyond the purely technical or conceptual.
In a world of new tech, vaporware, and buzzwords, cyber wargames can be used to sell other products, services, or ideas. The marketplace for cyber security may encourage using wargames as a sales pitch, leveraging the emotional and intellectual intensity of wargames for influence. One example is using cyber wargames to create anxiety or fear with “cyber doom scenarios.” While this may be appropriate in some specific instances, more often than not, it’s threat inflation to advance a program, advocate for an idea, or sell a product. This is not a new problem, nor is it limited to cyber or wargaming. Bureaucratic politics and defense procurement raise the specter of ulterior motives in wargames for the Department of Defense. The risks are significant for Fortune 500 companies as well as government agencies.
There’s also the problem of cyber wargames that don’t produce anything of value, either by design or by error. The most meaningless and infamous wargames are BOGSATs (a bunch of guys/gals sitting around a table). Cyber BOGSATs are common. These games may appear promising, with distinguished participants and institutions. But they lack clear objectives or game design leading to no substantial finding or benefit. BOGSATs occur when a wargame is not the best tool for the problem, is window dressing for something else, or is just poorly designed.
Particularly egregious are cyber wargames that actively cause harm by teaching the wrong lessons or creating false knowledge. Unfortunately, this is not a new or uncommon phenomenon. Common causes are ill-designed or unrealistic cyber elements and gameplay, poorly specified cyber objectives, and poor communication. A cyber wargame about a high-intensity conflict where cyberspace operations are consistently and catastrophically effective might lead to some skewed perspectives on cyberspace operations. Alternatively, poorly abstracted networks and computer systems may artificially limit player creativity or instill a false sense of security. Finally, and most fundamentally, they might fail to articulate how cyberspace has been abstracted or will be used within the game. Because cyberspace is synthetic, its representation can vary significantly and in different ways from other domains. In any case, poor design will result in games that fail to meet their objectives. Worse yet, they teach the wrong lessons, skew analysis, or stifle new or innovative ideas. My colleague, Dr. Nina Kollars, and I discuss these and related cyber wargaming challenges and pathologies in an upcoming Atlantic Council article.
You can read the full article at the link above.
Episode 67 of the CNA Talks podcast addresses the topic of diversity and inclusion in wargaming.
On this episode of CNA Talks, Dr. Chris Ma discusses the Derby House Principles on Diversity and Inclusion in Professional Wargaming with their creators: Dr. Yuna Wong of the Institute for Defense Analyses, Professor Rex Brynen of McGill University, and Sally Davis of the UK Ministry of Defence.
Chris Ma, PhD, directs CNA’s Gaming and Integration Team.
Yuna Huh Wong, PhD, is a defense analyst at the Institute for Defense Analyses (IDA). She is a frequent organizer for the Connections Wargaming Conference series, and co-chaired the 2016 and 2017 Military Operations Research Society (MORS) special meetings on wargaming.
Sally Davis is a senior analyst at the Defence Science and Technology Laboratory, part of the UK Ministry of Defence. She writes software in support of analysis, simulation, and wargaming.
Rex Brynen is professor of political science at McGill University, where he specializes in Middle East politics, complex peace and humanitarian operations, and serious games. He is senior editor of the conflict simulation website PAXsims (http://www.paxsims.org).
Many facets of public life have lately been touched by discussions of diversity and representation, and gaming has been no different. From the cancellation of Origins Online to the Twitter mob stalking designer Eric Lang to GAMA’s comms director quitting to the Diana Jones Awards at GenCon, there’s been a non-stop list of game-industry headlines all summer long.
Enter, The Derby House Principles, promoting diversity & inclusion in professional wargaming. Focused on the practitioner community that designs, executes, evaluates, and teaches the art & science of wargaming in the realms of defense & security policy, national defense, emergency preparedness, and the intelligence communities, the Derby House Principles have been endorsed by a wide array of government and government-adjacent organizations.
While the professional wargaming community is not our focus, it is still an area of interest for much of our audience. Some of The Dragoons have worked in both the hobby and professional communities, and some professionals will look to hobby sites like us for information on the current practices of the hobby community, or creative approaches to wargaming events.
With that in mind, we reached out to some folks in the professional wargaming world who were well-positioned to discuss and describe not only their own experiences as under-represented minorities in professional wargaming, but also their thoughts on the operationalization of the Derby House Principles. While neither was officially representing any agency or organization, both Yuna Wong and Sally Davis have long resumes of experience in the professional wargaming world, and their insights made for a fascinating podcast. Rex Brynen also stops by at the start of the episode to discuss the genesis of the principles and their initial spread among the professional community.
This is a pretty long episode folks – well over an hour – but we didn’t want to cut the discussion short.
You can listen to it at the link at the top. Our thanks go out to Brant Guillory for recording and facilitating the discussion, and his strong support for a more diverse and inclusive hobby and profession.
The following piece was written for PAXsims by Thomas Barnett and Lea Culver.
Thomas P.M. Barnett, Director of Research at Creek Technologies, is a NYT/WAPO bestselling author of multiple books on global affairs and US global leadership (e.g., The Pentagon’s New Map). He has served in the Office of the Secretary of Defense following 9/11, at the U.S. Naval War College as a Senior Strategic Researcher/Professor, and at Oak Ridge National Lab as a Visiting Strategist.
Lea Culver is the Founder/President/CEO of Creek Technologies, a former Army Intelligence Officer, and a doctoral candidate with Franklin University. Creek Technologies specializes in Information Technology and Education Support Services across the government.
Comments are welcome below.
On May 1st, the nation’s war colleges received a brutal – if pre-emptive – failing grade from the Joint Chiefs, who declared that Joint Professional Military Education schools are not producing military commanders “who can achieve intellectual overmatch against adversaries.” Because China increasingly matches our “mass” and “best technology,” the Joint Chiefs argue that America will prevail in future conflicts primarily by having more capable officers. As for those “emerging requirements” that “have not been the focus of our current leadership development enterprise” (e.g., integrating national instruments, critical thinking, creative approaches to joint warfighting, understanding disruptive technologies), please raise your hand when you hear something new.
Brutal and timely.
China’s rising naval power compelled the Joint Chiefs to identify the leadership margin between defeating, or yielding to, the People’s Liberation Army, and they judged the Defense Department’s educational institutions as presently not providing it.
So where does Joint Professional Military Education go from here? The Joint Chiefs of Staff were very clear: comprehensively integrate wargaming into a “talent management system” that produces officers who can “apply our capabilities better and more creatively” than our peer competitors. How comprehensively? Enough for future commanders to hone these skills for “thousands of hours of deliberate practice, pushing cognitive limits and intellectual performance.”
The Chief of Naval Operations’ response? Slot the Naval War College under a new Warfighting Development Directorate established within his office – specifically in Warfighting Development (N7), moving it from its traditional spot in Manpower, Personnel, Training, and Education (N1). The institutional signal here is clear: Forge a far more direct link between education and warfighting – a bridge best captured by wargaming.
True, we have witnessed some bureaucratic waffling since then, most notably in the announced “Education for Seapower” program review by the new Secretary of Navy, but that sort of institutional pushback is to be expected during a tectonic shift. Serious money remains slated for future naval education efforts ($350M annually), and, while that probably will not be enough to stand up the proposed U.S. Naval Community College, it is more than enough for the College to upgrade its wargaming program in response to the Joint Chiefs’ urgent mandate.
The Naval War College annually conducts 50-plus wargames, which is impressive, but these simulations are decidedly platform/network-centric, resulting in “quick-look” reports of high immediate interest only to Office of the Chief of Naval Operations’ sponsors. That is not Newport’s fault: it was simply responding to enduring market demand and the Chiefs just radically redefined that. The good news? The tools, technologies, and techniques that the College now needs to recast wargaming as a learner-centric enterprise are readily available – and at reasonably modest cost.
Since the birth of Network-Centric Warfare in the mid-1990s, defense firms have amassed an impressive array of capabilities under the human performance engineering rubric (oftentimes called human-centric engineering), which addresses the third dimension of modern warfare (see below) – namely, the interface between commanders and that “best technology” (systems) controlling our military “mass” (platforms). While traditional wargaming has amply explored strategy (officer-platform interface) and modern simulations plumb the depths of networked warfare (system-platform interface), human performance engineering truly completes that operational triad by rebalancing attention on the officer/system interface, in turn enhancing individual/team cognitive skills while optimizing command architectures. This is exactly what the Joint Chiefs want: systemic overmatch in cognitive skills and decision-making structures.
This vision mirrors the predominant logic coming out of Silicon Valley on the future of machine learning and artificial intelligence: both are best employed in combination with human decision-making in the so-called centaur model. So, again, China eventually matches us on platforms and systems, but we stay ahead thanks to our officers’ superior command skills augmented by cognitive computing. This is how the Joint Chiefs see Joint Professional Military Education becoming a true “strategic asset” – i.e., our winning edge in future warfare.
Such ambition compels the Naval War College to rebalance its wargaming – long skewed toward problem-centric designs – with a learner-centric emphasis on decision-making competencies. This begins by introducing advanced human performance engineering capabilities to assess officer development.
Yes, the War College has long structured its wargames to test out competing command-and-control structures. But it has done so to ensure that students know how to use those systems as designed within a single domain context (e.g., surface, sub-surface, air), when what the Joint Chiefs now desire are commanders capable of routinely achieving combined effects across domains (air, land, sea, subsea, cyber, space) – suggesting a “multiverse” of possible command-and-control structures applied fluidly across the conflict spectrum. In effect, the Joint Chiefs seek the equivalent of “multilingual” officers capable of creatively commanding across domains. Ambitious yet achievable, this goal requires a sophisticated, orchestrated application of assets and technologies from multiple domains to effect an outcome that would otherwise be impossible within a single domain.
In sum, it is not enough to train officers on how to effectively communicate and coordinate actions in a joint command-and-control environment where the primary decisions involve choosing which tasks (and where and when) to hand off to other services. They need to be able to adeptly select combinations of resources from across all services to achieve those desired effects across all domains.
Instilling this sort of cross-domain ingenuity starts with more effectively data-mining joint exercises. These complex wargames generate troves of human-learning data available for capture and systematic analysis. However, the live and post-game analytic tools currently employed at Newport do not come close to comprehensively processing all available data, resulting in final reports that arrive too late to allow for a rapid and robust game-sequencing that builds upon – and integrates – previous learning and outcomes.
By promising systematic feedback on systemic performance across all three wargaming dimensions (officers, platforms, systems), human performance engineering incentivizes schools to pervasively instrument simulation environments with innovative measurement technologies (right down to player-worn sensors) of sufficient sophistication to decode cognitive processes (i.e., decision making) – applying artificial intelligence not so much to the play as to the players, because that is where “talent management” naturally applies.
In capturing and exploiting wargaming’s big data “exhaust,” Joint Professional Military Education faculty, wargamers, and research staff can “incorporate active and experiential learning to develop the practical and critical thinking skills our warfighters require.” Since human performance engineering expertise is not presently resident at military schools, there must be an infusion of private-sector talent to continuously refresh staff skills, knowledge, and innovation.
For the “Navy’s Home of Thought,” it is time to go big or go home.
The Joint Chiefs’ guidance mirrors what Naval War College researchers have argued for years: namely, the utility of teaching integrated with gaming. The most cogent expression of this was put forth by the 2015 cohort of the Chief of Naval Operations’ Strategic Studies Group, whose work on talent management accurately presaged the Joint Chiefs’ May mandate to finally move ahead. Now, the addition of subject-matter experts steeped in human performance engineering starts that ball rolling by asking: Which new data can be captured in a wargame? Wargaming professionals can then answer the question: What do we learn from that data? Finally, and in a reach-out to research and teaching faculty, the Naval War College as a whole asks: What should we now teach based on this new understanding?
And yes, this is yet again one of those instances where innovation within the defense community can and should spill over into similar advances across the commercial sector, where the globalization of technologies and capital has largely eliminated the West’s historical advantages over the “Rising Rest.” We either field more creative executives who can tilt that now-level playing field back to our advantage, or we learn to consistently lose market share across an emerging global middle class hungry for consumption. Gamifying our educational systems to instill cross-domain creativity is the way ahead, particularly in processing generational cohorts (e.g., Millennials, GenZs) who have grown up with gaming as a way of life.
By systematically introducing human performance engineering to wargaming, the Naval War College establishes itself as a central repository to shape and ultimately drive future joint exercises across the Defense Department’s Joint Professional Military Education enterprise. America employed similar institutional dynamics to leave the Soviets behind in the Information Age, and this is how we do the same to China in the Age of Artificial Intelligence: moving the goal posts on command performance.
The Naval War College knows how to go big on wargaming, having done so in the past to global effect. It is time to do so again.
Yesterday, Tom Fisher (PAXsims and Imaginetics) and Matt Stevens (Lessons Learned Simulation and Training) spoke about their work on serious games for humanitarian training. If you missed it, the Georgetown University Wargaming Society has posted the video of the event to their YouTube channel.
The following was written for PAXsims by Dr. James Sterrett, Directorate of Simulation Education (DSE), U. S. Army University.
The Directorate of Simulation Education (DSE) at the Command and General Staff College (CGSC), U.S. Army University, spent mid-March through early June 2020 preparing for, and then conducting or supporting, three elective courses online using commercial wargames. This article outlines our key lessons learned, and then discusses some details of what we did.
The class events we ran comprised 10 different games, each running from 2.5 to 8 hours, and each preceded by at least one 3-hour preparation session. In addition, many of these involved numerous internal trainup sessions with each game, plus many trial runs to assess games’ suitability for use or to test the VASSAL modules we built for some of them. For around 9 weeks, from 30 March through 2 June, we averaged one 3-hour online wargame session a day for testing, preparation, or classes.
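As a rough check on that pace, the dates above can be run through Python’s standard date arithmetic (a sketch using only the dates stated in the text):

```python
from datetime import date

# Span of the roughly daily online session schedule described above.
span = date(2020, 6, 2) - date(2020, 3, 30)
print(span.days)                 # 64 days
print(round(span.days / 7, 1))   # just over nine weeks of near-daily sessions
```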
We ran wargames for 3 different courses:
Bitter Woods for the Art of War Scholars Program (2x 4 hour classes)
Aftershock for the Defense Support to Civil Authorities elective (1x 3 hour class)
Eight games for History in Action, which we teach in collaboration with the Department of Military History. (8x 3 hour classes)
Top lesson 1: Online wargaming works, but it’s harder than live. Compared to running wargames live, it requires more manpower, time, effort, and technology from both students and faculty.
Top lesson 2: Success requires scaffolding. Don’t assume students are ready with their technology or that they understand the online engine. Plan for on-call tech support during every class. Plan to explicitly teach both the online engine, and the game itself in that engine.
Top lesson 3: VASSAL was our best tool. This was the most surprising outcome for us. Several of us had prior experience with VASSAL and were not very fond of it; we are now converts. VASSAL proved simple, reliable, and effective, made lower demands on computing horsepower and networks – and it is free. In addition, it was an easier and more powerful tool for making new game modules.
(Read the detailed section for a more nuanced view of some of the other options.)
Test your tools in online classroom settings before committing to them.
Our initial impressions of tools were frequently overturned after gaining more extensive experience with them in testing.
Ease of use beats flashy presentation.
The more you can minimize the friction of using the online game tool, the more effort you can put elsewhere. This is why VASSAL became, unexpectedly, our favorite application.
Running a wargame online needs more manpower than running the same game live.
Running wargames live, a skilled facilitator can sometimes run 2 or 3 games. Online, you must have one facilitator per game. When teaching the game, you must have one person doing the instruction while another monitors a chat window for questions and puts them to the instructor at appropriate moments.
In addition, we found we needed to have a separate person as dedicated on-call tech support, every time. Although a few classes did not turn out to need tech support, most did, and dedicated tech support meant that the game facilitators could keep the games running while the students with tech problems got helped.
Running a wargame online requires a higher level of skill across the facilitators than running the same games live in one room.
Running wargames live in one room, one person can be the expert whom the others can rapidly turn to for help. Running online, everyone is necessarily in separate rooms, and even with side-channel communications, the question and answer interchange is much slower. Each facilitator needs to be an expert.
Keeping the game moving is harder online due to the limited communications.
Live, you can see what students are doing. You usually know who is thinking, who is confused and needs help, who is done making a move. Online, you usually have no idea. Is the student silent because they are thinking? Confused and lost? Conferring with their partner? Done but forgot to announce it? Done, and announced it, but failed to activate their microphone or had some technical issue? When do you break in to ask, possibly breaking their concentration and creating more friction?
Everything takes longer online.
Your game is hostage to hardware issues beyond your control.
A bad internet day makes for a bad class day. Students come with widely varying degrees of computer savvy. They also come with widely varying quality of equipment. We had one student whose computer was a low-powered laptop around a decade old, which created frequent technical issues. Another used a Surface tablet, which had no direct technical issues, but the small screen caused usability problems.
Ideally, each participant should have at least 2 large monitors.
A reasonably modern computer, preferably with at least one large monitor, and, ideally, with two or more large monitors, definitely worked best. Multiple monitors enabled placing documentation and chat windows on one screen while placing the main game map on the other.
Those with only one monitor, especially if on a small screen, found themselves constantly paging between windows and struggling to manage limited screen space.
Some students and faculty took to using a high definition TV as a second monitor, which worked well.
Technology in More Detail
Ideally, we would have done extensive R&D into both a wargame engine and into a communications solution. However, we rapidly determined that Blackboard, which the Army already had on contract, provided a communications system that was both sufficient for our purposes and that students already knew how to use. While not perfect (the interface for splitting students into small groups can be a pain to use), Blackboard worked well for us. Specific features we came to rely on:
The ability to break students into breakout groups, and to have instructors move easily between breakout groups. Each breakout group was one game. Also, we could easily recall all the breakout groups into one room when it came time to return to group discussion.
Screen sharing to assist in teaching the games. While the shared screens were sometimes very fuzzy (which we worked around by zooming in when details were important), the shared screen allowed us to direct people’s attention to the item currently under discussion. In a perfect world, the game engine itself would provide a means of directing attention.
Multiple chat lines: Direct 1 to 1 chat, alongside breakout room chat, alongside group discussion chat, all at the same time. The major feature we wanted, and did not have, was a direct chat line between any subset of people without creating a new breakout room – so that 3 or 4 people on the same side could coordinate their strategy and tactics, for example. We worked around this by having students use their cell phones.
We spent several weeks testing online game engines, both for running games and our ability to modify or create new games.
As noted above, several of us had prior experience with VASSAL and did not have a high opinion of it. However, those opinions were based on the state of VASSAL in the late 1990s, when it was relatively new. VASSAL has improved a lot in the last 20 years, and those improvements are a great credit to its volunteer coding team.
VASSAL is not the prettiest or slickest engine out there. However, it had several decisive advantages:
Highly reliable, it worked on all the equipment students brought into the classes.
Free, while every other solution required either the instructors, or everyone, to buy software.
Easier for students to learn than other systems.
It was significantly easier for our team to make new or modified modules in VASSAL than in other systems.
Presented the widest variety of ready-to-go games relevant to our courses.
Because it is built from the ground up to support wargames, VASSAL’s standard interaction set is tailored to supporting wargames. The other engines seemed, to us, to have standard interactions best suited to running Euro games or role-playing games (which those other engines chase because those are much larger markets!)
VASSAL doesn’t enforce the rules. We thought this would be a weakness, but when the computer enforces the rules, it prevents the facilitator from fixing mistakes – and with first-time players, it’s very handy to let the facilitator see and do anything they want.
Two key workarounds we used with VASSAL:
Normally only one player can join a specific role. However, if everyone who is going to join that role does so simultaneously, you can pack many players into one role, permitting a small team of students to play the same side while maintaining fog of war. Note that this feature is not officially supported.
Most modules that had fog of war also included a “Solo” player who could see everything, so we used this as a facilitator role. We modified the Triumph & Tragedy module to include this as well. Without the ability to see through the fog of war, the facilitator cannot effectively answer questions and solve problems.
Tabletopia was our initial favorite, with a slick interface and great presentation. Our favorite feature is the ability to see the “hands” of the other players, which makes it really easy to direct attention – “Look at the Blue Hand”. Tabletopia is browser-driven and thus is platform independent, which is a great plus. It is also the only way to play 1944: Race to the Rhine online, which we very much wanted to include in our history course.
However, Tabletopia also had some problems. Running a multiplayer game requires that at least one player has a paid account ($9.99/month), and the Terms of Service for game creation included language that we were wary of. In testing, it was much more difficult to make a new game in Tabletopia than in VASSAL, and essentially impossible to modify an existing game we had not made. We could not figure out how to enforce fog of war in a blocks game in Tabletopia.
The great surprise came when we used it in class. We expected students would find the interface simple. However, students found Tabletopia confusing to use and said they preferred VASSAL. Students with weaker computer hardware or slower internet connections found Tabletopia crashed or refused to start.
While we may use Tabletopia again in order to use the excellent Race to the Rhine, we also know we need to figure out how to work through its issues first.
Tabletop Simulator (TTS) has a very large following, but we wound up bouncing off it. The large number of possible interactions means it also has a large number of controls and possible customizations. We found it confusing, and the physics model got in the way of ease of use as pieces bumped into each other. A friend who likes it admitted it takes at least 10 hours to get comfortable with TTS, which is longer than we can afford to spend for classes. In addition to these issues, TTS is a $20 purchase.
Roll20 is built to support role-playing games. Unlike the other options mentioned here, Roll20 includes fairly robust voice and chat communications. It’s reasonably simple to set up a new game in Roll20 as well.
Roll20 fared well in initial testing, and thus became a strong candidate for running Matrix games. However, in full testing, its communications fell apart under the load of around a dozen people. In addition, we ran into significant issues with allocating permissions to move pieces; as far as we could tell, players needed to join so they were known to the game room, then leave, so the GM could make permissions changes, then rejoin, which seemed like an overly complex dance to go through under time pressure in a class with students.
We suspect that our inexperience with the tool is key in some of these problems and intend to retest Roll20 in the summer. In addition, we know of others who have used Microsoft Teams and Google Sheets to run Matrix games.
No Computer Games – Why?
We avoided computer games for several reasons:
Students would need to buy them, and potentially need to buy many games for one class.
Many games of interest run on only a subset of student computers (only Windows, or only high-end Windows computers, for example).
Each computer game has its own interface to learn, on top of learning the game system, increasing the training overhead needed to get to the learning for the class; this is particularly an issue for our history class.
In many cases, understanding the games’ models is an essential component to learning the wider lessons of the class. In our experience, this is harder to do with computer games, whose models are obscured in comparison to manual games. (This is the price paid for the computer doing the heavy lifting of the model; the payoff of the computer is that it does that work.)
We are not adamantly opposed to computer wargames; we use them in our Simulations Lab during live instruction, and are investigating using them in some courses this fall in DL. However, in the short timeframe we had, the above complications were sufficient to rule them out.
Teaching the Games
In all cases, we learned that it works best to:
Provide a 15-minute introduction to the game at the end of the prior class. Students won’t learn the game from this, but the overview helps them learn better from the rules and videos in step 2.
Provide the rules and tutorials as homework. YouTube tutorials were very popular with students, when they existed. Students will not learn the game from these but they will come armed to steps 3 and 4 with a better framework.
Provide a practice session. We routinely ran a practice session the afternoon before class. These lasted 3 hours (the same duration as the class) and included the full teaching script plus playing the game. We warned students that this was partly internal trainup, so they knew to be patient with periodic digressions as we worked out unexpected wrinkles. Because they actually play the game, students learn the game in these. If you control the groups, distribute the students who came to the Practice session across the class day student groups. As time went on, we learned to have internal trainup sessions before the official Practice session, so that our people were ready to run a game on their own in the Practice session.
Teach the game at the beginning of class. We find it always helps to begin by identifying the sides and their victory conditions, because you can tie all the game mechanics in the game back to them.
We establish up front that we will not teach all the details of the game, and thus many of these will pop up as they become relevant. We try to warn people if they are going to hit a special case, and if somebody winds up in a bad position because of a rule not previously explained, we will try to come to a reasonably fair adjustment so they are not unfairly punished by an unknown rule.
Doing all this requires facilitators who are experts on the game, as noted earlier.
We find that putting students into pairs on a given side works well in most cases. Two students will tend to plan together, each can compensate for the places the other finds confusing, and each can provide moral support when the other would otherwise feel confused and alone. Three on a team, however, sometimes means one gets left out.
Teaching the Courses
Bitter Woods for the Art of War Scholars Program
The Art of War Scholars Program is a highly select group of CGSC students who engage in a wider-ranging and academically more rigorous course of study, focused on studying the art of warfighting through a combination of seminars and research focused on the operational and strategic military history of the past century. Each student must write a thesis in the CGSC Master of Military Art and Science program.
Dr. Dean A. Nowowiejski, the instructor for the Art of War Scholars Program, wanted the wargame to do three things: introduce the students to wargaming, introduce the terrain of the Battle of the Bulge to students for a follow-on virtual staff ride, and to examine the dilemmas facing the Allied forces in reducing the Bulge.
To support this, we needed a game simple enough for new wargamers to play effectively, that covered the Bulge in enough detail to gain an appreciation for the terrain and forces involved, and that could be made to start later in the battle in order to cover the reduction of the Bulge.
We selected Bitter Woods for having the best balance of both a simple system (using only the basic rules) and the ability to run the Battle of the Bulge into January 1945. The runners-up were GMT’s Ardennes ‘44 and MMP’s Ardennes. Ardennes ’44 is more complex and Ardennes is out of print, the latter being a key criterion when we made the selection in January 2020 and expected to run the event live.
In order to highlight the dilemmas in reducing the Bulge, we created a scenario that began on 27 December 1944, and also modified the existing Bitter Woods 22 December ’44 start point to cover the entire map, both accomplished with assistance from LTC William Nance, PhD, of the CGSC Department of Military History. After testing both of these, we concluded that the dilemmas showed up best on 22 December, as Patton’s forces begin to arrive. This start point also made a better set of dilemmas for the Germans, as their offensive is not out of steam on 22 December, leaving them with difficult choices about how to protect their flanks while aiming for victory. We divided the twelve students into three separate game groups that executed simultaneously. We had teams of 2 on each side in each game, and each team was split between a northern and a southern command.
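The grouping described above – twelve students split across three parallel games, with a two-person team per side and each team divided into a northern and a southern command – amounts to a simple slot assignment. A minimal sketch (the student names, side labels, and command labels are illustrative, not the actual roster):

```python
def assign_games(students, n_games=3, sides=("Allied", "German"),
                 commands=("North", "South")):
    """Assign one student to each (game, side, command) slot.

    A sketch of the structure described above: each game has two sides,
    each side a two-person team split between a northern and a southern
    command. All labels here are illustrative.
    """
    slots = [(g, side, cmd)
             for g in range(1, n_games + 1)
             for side in sides
             for cmd in commands]
    if len(students) != len(slots):
        raise ValueError("roster size must match available command slots")
    return {slot: student for slot, student in zip(slots, students)}

# Twelve hypothetical students fill 3 games x 2 sides x 2 commands = 12 slots.
roster = [f"Student{i}" for i in range(1, 13)]
assignment = assign_games(roster)
# e.g. assignment[(1, "Allied", "North")] == "Student1"
```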
Dr. Nowowiejski told us that the Art of War Scholars students would be prepared, and he proved correct. This group of top-flight students, all very comfortable with technology, had no technical issues. In addition, while we ran the game, LTC William Nance moved through the 3 game rooms, offering both historical commentary and acting as the high command for both sides to ping students with questions about their plans in order to ground those in the wider concerns of their historical counterparts. This left Dr. Nowowiejski free to circulate through the groups, observe the students, and discuss wider points with them.
Dr. Nowowiejski had students discuss their plans and operational assessments with the entire class at the end of each of the two 4 hour classes, for a mid-point and final AAR. As the students in the various Allied and German teams uncorked radically different plans, this provided a chance to compare possible courses of action and outcomes for both sides. Students did find they had more units than they could easily control, but this produced useful discussions on the difficulty of integrating tactics into operations. Overall, Dr. Nowowiejski judged the event “very successful” and hopes to have us run it, live or on VASSAL, next year.
Aftershock for the Homeland Security Planner’s Elective
We have run Aftershock in person several times in the past for Clay Easterling and Joseph Krebs’ Department of Joint & Multinational Operations Homeland Security Planner elective course. Much of the course examines higher level legal and policy issues. Playing Aftershock in the middle breaks this up, and also serves as a reminder of the practical impact of the plans and policies they are discussing. Students regularly name it their favorite part of the course. Now we needed to run it electronically…!
No computer version of Aftershock existed. The designers, Dr. Rex Brynen and Thomas Fisher, readily granted us permission to create a version in VASSAL, and Curt Pangracs of DSE spent around two weeks creating and testing the module in time for the course.
There were 33 students in this elective, divided into pairs for each of the 4 teams in the game, making a total of 4 games run in parallel. Four of us from DSE ran the games, while a fifth stood by for technical support, ensuring the two instructors could circulate among the four sessions to observe and discuss.
We knew that this course tended to have a solid proportion of officers with low levels of experience with computers. Because of this, we set the Aftershock module up with two participant roles: the Facilitator, who controlled everything on the board; and the Observers, who could not change anything, but could see everything and call up the supporting documentation. This matched the way we often run the game in person, where the facilitator can keep the game moving by running the board and presenting the players with the next decision. We figured that with some of the students being less technical, making the students Observers would allow them to concentrate on making decisions instead of trying to puzzle out how to make the game execute their intended course of action.
We had far more technical issues than we expected, possibly because the larger number of students – nearly three times the number in any of our other groups – meant there were more opportunities for problems. As a result, in each of the four games, the facilitators wound up using the backup plan of streaming their VASSAL screen of Aftershock out to some of the students who could not otherwise see the VASSAL screen. This is far from ideal, as those students reliant on the stream could not control the view, and the Blackboard shared screen is often fuzzy, but it was better than not seeing the screen at all.
Despite the technical issues, students found the exercise very useful, and the instructors named it “a highlight of the course”. As one student wrote in their AAR, “Finally a time at CGSC where we are truly talking with one another to get something done and seeing the results of our decision”.
However, a key lesson here is that the event would have gone a lot more smoothly if we had conducted a readiness check at the end of the prior class session, just to make sure that everybody had VASSAL installed, could load the Aftershock module, and could join the online session – and then to help those who could not, so their troubles were fixed before the main event.
History in Action

History in Action is a joint elective taught by DSE and the Department of Military History, run with the aim of teaching military history through wargaming, and also teaching a better understanding of wargaming through learning the history. Knowledge of history should inform both playing and assessing the game. Equally, playing the game should help better understand the history; while wargaming can’t let you walk a mile in someone’s shoes, it can let you walk ten feet in their socks. In prior years, DSE’s partner in this class was Dr. Greg Hospodor, but he moved away and we now partner with Dr. Jonathan Abel, and were also assisted by LTC William Nance, PhD, when he was available.
To be selected for this course, a game has to pass all of these tests:
It has to be a good game – fun is the hook, though it isn’t the point.
It has to be available for our use (some games that pass the other criteria are out of print or, for an online course, have no online implementation).
We have to be able to teach and run it within the 3-hour class time while leaving time for discussion.
It must be dripping with history. It has to highlight unique aspects of the historical event it covers, so it both helps teach that history directly, and further helps teach when compared to the other games in the course. This tends to rule out many less complex games because they wind up being functionally generic. For example, if the game system doesn’t help drive home the difference between commanding World War 2 armor divisions and Napoleonic cavalry divisions, or treats the employment of the Roman manipular legion as little different from that of the Macedonian phalanx, then it doesn’t drive the learning we are looking for.
While in past years we tried to sequence the games according to a theme, a timeline, or the scale of the actions, our test sessions in early April convinced us that we should sequence the games in order of their probable complexity to students. We began with the list of games we use when teaching the course live, but some of those were not available online, while others we wanted to use were. We used, in order:
Battle for Moscow (The 1941 drive on Moscow)
Napoleon 1806 (The Jena/Auerstadt campaign)
1944: Race to the Rhine (The Allied drive across France, with a logistics focus)
Drive on Paris (Schlieffen Plan and Plan XVII in 1914)
Strike of the Eagle (1920 Soviet-Polish War)
Triumph & Tragedy (The struggle for Europe, 1936-1945)
Fire in the Lake (Vietnam War)
Nevsky (Teutonic Knights vs Novgorod Rus in 1240-1242)
In each 3-hour class, we began by teaching the game, then we ran it in parallel student groups until there were 45 minutes remaining. The next 30 minutes or so were spent in discussions, and the final 15 minutes or so were spent introducing the next game in the class. Between classes, students were assigned material on the history behind the next game, rulebooks and tutorials to learn the next game, and a graded AAR sheet to fill out on the game just played. The AAR sheet asks for paragraph-length answers to these questions:
What was your plan/COA going into the game?
How did your plan/COA work?
How did the game illustrate the specific contextual elements of the period?
Was the game effective in conveying these contextual elements? How or how not?
What did you learn about warfare in the game’s time period? What surprised you?
What specific lessons can you draw from this game to apply to the future?
We were very pleased with student learning in the class. Student AAR papers were full of observations on what they had learned about history and about wargaming, and on lessons they could carry forward to future assignments. As one student wrote in their end-of-course feedback, "more than anything the course provided context and examples that I can use in the future when explaining the challenges at the operational level of warfare". Success! However, we did have to overcome various issues along the way.
We intentionally began with Battle for Moscow, the simplest game, to ensure we could also teach VASSAL in the same class. This generally paid off: each subsequent game used, at most, a few more features of VASSAL, so the learning curve was well controlled and students seemed comfortable with VASSAL most of the time. The process worked poorly when we jumped to Tabletopia for Race to the Rhine in class session 3, and then back to VASSAL for session 4 and beyond. Some of our issues with Tabletopia likely stemmed from our assumption that its interface was easy enough to need little direct training, and from the ways in which it differs from VASSAL. Equally, we had a slight uptick in trouble with the VASSAL interface in class session 4, perhaps because the students had been away from it for a time.
We began inviting students to our internal prep sessions once we realized they might be able to attend. Students who had the time to attend these were normally much better versed in the game than their peers. We, in turn, had to recall that those unable to attend the optional prep session should be assumed to have a good reason! We also learned to spread the students who attended the prep sessions across the student groups. Arranging the student teams ahead of time, and publishing them for students, also helped, as some student teams would strategize ahead of the class.
This course charted the middle ground in the level of technical issues. All the students were comfortable with technology, but some had poor internet connections or weak computers, including the roughly ten-year-old laptop mentioned earlier, which caused those students to lose their connections to VASSAL or Blackboard. With Tabletopia, the weaker internet connections and weaker computers failed completely. Just as we have all learned that internet meetings go better when everybody turns off their video feed, opting for systems, such as VASSAL, that make less intensive use of network and computing power proved better in practice.
Online wargaming works, but it takes more effort than running games in person. Our key lessons:
Test your technology thoroughly and ensure you have support on hand to run it.
Running wargames online requires a higher level of expertise from all of your facilitators, in both the technology and the games.
Running wargames online requires more preparation from students, both in learning the game and in ensuring their technology is ready.
BoardGameGeek (description) and VASSAL (module) links for all the games mentioned:
You might be sat there thinking: the Derby House Principles look great, but in all honesty our organisation is a bunch of guys, nobody but guys apply to work with us, and it would feel hypocritical to sign up. Here's a different way to think about it:
By putting out inclusive content—not just the characters and story, but the interface as well—a whole generation of diverse gamers and game-makers will come knocking at your door wanting a piece of the action.
Change begins with making content that says everyone is welcome here.
It’s the simple things, like allowing users to remap the controls in your game, that can make a huge difference.
Microsoft’s approach to disability access is really interesting: There are (approximately) 100,000 people in America with an upper limb deficiency. That’s not a commercially viable market. But six million people break their arm every year in the US, putting them temporarily in the same category. And parents are juggling children and laptops every other second in lockdown, putting them situationally in the same category. When you frame it like that, something that allows you to drive Windows and your Xbox one-handed is a mainstream need.
Disability is mismatched human interactions. That’s all.
So here’s a public service announcement ahead of the Connections 2020 games fair:
The macOS screen-reader can't get hold of content in Google Docs in Safari, so all the distributed wargaming I've been doing in the pandemic has been with rules and player stats and shared intent slides that I can't read.
It can’t be that hard, surely? You have a degree and everything!
Too easy? How about this:
Sure, you can pick your way through it eventually, but do you remember anything you just read? How much gameplay will you miss wading through the mud to check a rule here and there? Could you even decipher that text while you have other players talking in your ear on Zoom?
Pop quiz: what’s provided in the slide deck…?
If you are running a distributed game at Connections please consider including a very simple statement on your sign-up sheet:
Please let us know if you have any accessibility needs so we can figure out what will work for you.
The popularity of miniature wargames (MWGs) has recently been on the rise. We aimed to identify the personality characteristics of people who play MWGs. Whereas the popular media have suspected that fantasy role-playing and war-related games cause antisocial behavior, past research on tabletop role-playing has shown that gamers are creative and empathetic individuals. Previous studies have investigated pen-and-paper tabletop games, which require imagination and cooperation between players. Tabletop MWGs are somewhat different because players compete against each other, and there is a strong focus on war-related actions. Thus, people have voiced the suspicion that players of this type of game may be rather aggressive. In the present study, 250 male MWG players completed questionnaires on the Big Five, authoritarianism, risk-orientation, and motives as well as an intelligence test. The same measures were administered to non-gamers, tabletop role-playing gamers, and first-person shooter gamers.
Their findings? Tabletop wargamers are a lot like other gamers* and don’t fit the anti-social stereotype very well:
In the present study, we analyzed differences in intelligence, risk-orientation, authoritarianism, as well as other motives and personality traits between players of MWGs and comparison samples comprised of people who played other types of games and the general population. When compared with the GP, MWG players reported higher openness, higher extraversion, and lower conscientiousness. The same pattern was found when comparing tabletop RPG players with the GP, suggesting that MWG players and RPG players resemble each other. Both types of gamers also reported more openness than FPS gamers. MWG players and RPG players also reported lower conscientiousness than the GP, which may be surprising as painting little soldiers or familiarizing oneself with complex rule-sets are activities that require exactness and a focus on detail. It is possible that the gamers do not view themselves as conscientious in everyday life, but when they engage in gaming activities, they may be rather thorough. Hence, follow-up studies could compare how gamers describe themselves with respect to their everyday activities and their gaming behavior.
No differences between the groups were found for neuroticism and agreeableness. Thus, gamers cannot be regarded as emotionally unstable or disagreeable individuals – as some stereotypes claim. With regard to reasoning ability, all players scored higher than participants from the GP. Results also indicated significant differences with respect to conventionalism, authoritarian submission, and authoritarian aggression such that all three groups of gamers described themselves as less authoritarian than participants from the GP did. Of the groups of gamers, RPG players reported the least authoritarian attitude.
With respect to everyday risk-orientation, MWG players' self-reports were similar to those of RPG players, and both types of gamers reported less risk-orientation than non-gamers. FPS gamers reported a similar risk-orientation as the GP. Interestingly, MWG (and RPG and FPS) players described themselves as acting in a significantly more risk-oriented way during gaming than in their everyday lives. Apparently, gaming behavior does not transfer to everyday behavior. Alternatively, gaming could actually compensate for everyday behavior (i.e., cautious people might like to take risks in a context where no real danger exists).
Regarding motives, MWG players had higher affiliation values than individuals from the GP and the RPG sample. No differences between MWG players and others were found on the power, achievement, and fear motives. With respect to intimacy motives, MWG players scored higher than RPG players did. Apparently, MWG players appreciate close interpersonal relationships.
To summarize, in line with our second hypothesis, MWG players may be seen as open-minded, empathetic, non-authoritarian individuals. The competing hypothesis that described MWG players as war-loving, power-oriented, and irreconcilable was not supported by players' self-reports.
Further, people will only engage in these games during their leisure time if they experience MWG activities as pleasant. The sample of MWG players was high in openness, intelligence, and affiliation. This suggests that the ludological concept of enjoying a pastime may well describe the background of MWGs. Only people who perceive these complex and sociable games that require strategic thinking as a pleasant pastime will be attracted by these games.
Overall, the stereotypes that gamers are antisocial (DeRenard & Kline, 1990) as claimed by the media from the 1980s and 1990s to the present day (Curran, 2011) were not supported. Instead, the present results fit into the RPG literature that portrays RPG gamers as empathetic and socially skilled (Curran, 2011; Meriläinen, 2012). However, the stereotype of gamers as nerdy and sharp-minded does seem to have a kernel of truth, because reasoning scores were high in all three samples of gamers. And as reasoning ability is a key predictor of academic and occupational success (Kramer, 2009), MWG players cannot easily be dismissed as acting in a dysfunctional manner.
You'll notice, however, that the entire subject sample (n=250) is male—underscoring the lack of diversity in hobby wargaming.
The sample group is also German-speaking, leaving open the possibility that there are differences across national gaming communities. Almost one-third of the sample were Warhammer 40K players. While the Warhammer community harbours a significant racist and misogynist subcommunity attracted by the dark dystopian militarism of the 40K game universe, other parts of it are also extremely diverse and open.
In terms of future research, the authors note:
This study provides initial insights into personality differences between MWG players and others. In future investigations, it will be fruitful to use experimental or longitudinal designs to draw conclusions about causality and answer questions such as: Can MWGs improve participants' social skills? Can creativity and intelligence be enhanced by engaging in MWGs? Furthermore, observer ratings or informant reports could be included to provide information beyond self-reports. Another interesting question would be whether personality traits predict certain motives to play MWGs (see Graham & Gosling, 2013). All in all, further psychological and transdisciplinary research in the field of MWGs may help us understand the roles of games and playing in forming psychological attitudes and abilities.
As we showed, MWG players are a distinct sample that has a specific personality pattern. Commanding little soldiers and fighting other gamers with the help of these soldiers seems to be an activity that is preferred by open, unconventional people with a high affiliation motive – and it is even possible that the activity may be suitable for developing social skills such as negotiating. Why not engage in MWGs?
*MWG: miniature wargame(rs); FPS: first-person shooter(s); RPG: role-playing game(rs); GP: general population.
Now that we are becoming interested again in the Russian military, the collection of the "Soviet Military Thought" series of books translated from Russian by the US Air Force might be of interest. I have identified 22 books in the series and can find online texts for all but two of those on books.google.com. Some of the PDF versions are badly scanned: although readable by the human eye, text search of the files is unreliable.
If you know of additional volumes beyond No. 22, or if you have links or access to decent (OCR'd) versions, please respond to this post. Thanks.
I’ll update this list with clean OCR’d versions as I get to them.