PAXsims

Conflict simulation, peacebuilding, and development

Gaming knowledge-brokering and policy innovation

The central theme of last month’s Connections 2015 interdisciplinary wargaming conference was wargaming for innovation (you’ll find a summary here, here, here, and here). Part of the discussion focussed on how games could be used to explore innovative approaches to national security challenges. There was also considerable discussion of how gaming could help develop innovation skills. In some of my own comments I made the point that, in addition to encouraging the original thinkers who generate new and innovative ideas, we also need to equip individuals with the communicative, analytic, and bureaucratic skills necessary to move innovation through an often ponderous and unresponsive policy process. After all, it isn’t just about clever ideas, it is also about making clever ideas happen. This was also one of the main conclusions of conference Working Group 1 on educational wargaming, as you’ll see below.

Little did I know at the time that a week or so before—and about 11,000 km away—Karol Olejniczak (University of Warsaw), Tomasz Kupiec, and Igor Widawski were presenting a paper at the annual conference of the International Simulation and Gaming Association in Kyoto, Japan on “Knowledge brokers in action: A game-based approach for strengthening evidence-based policies.”

Public policies need research results in order to address complex socio-economic challenges effectively (so-called evidence-based policies). However, there is a clear gap between producing scientific expertise and using it in public decision-making. This “know-do” gap is common in all policy areas. Knowledge brokering is a new and promising practice for tackling the challenge of evidence use. It means that selected civil servants play the role of intermediaries who steer the flow of knowledge between its producers (experts and researchers) and users (decision makers and public managers). Knowledge brokering requires a specific combination of skills that can be learnt effectively only by experience. However, this is very challenging in the public sector. Experiential learning requires learning from one’s own actions – often one’s own mistakes – while public institutions tend to avoid risk and are naturally concerned with the costs of potential errors. Therefore, a special approach is required to teach civil servants.

This article addresses the question of how to develop knowledge brokering skills for civil servants working in analytical units. It reports on the application of a simulation game to teach civil servants through experiential learning in a risk-free environment. The article (1) introduces the concept of knowledge brokering, (2) shows how it was translated into a game design and applied in the teaching process of civil servants, and (3) reflects on further improvements. It concludes that serious game simulation is a promising tool for teaching knowledge brokering to public policy practitioners.

While policy innovation involves more than simply developing evidence and communicating it in the right ways to the right people, such knowledge-brokering is undoubtedly a very important part of the process. In the game that Olejniczak, Kupiec, and Widawski describe:

Participants are divided into 6 groups. Each group manages an analytical unit in a region. Their mission is to support decision-makers with expertise in implementing four types of socio-economic interventions. These are: combating single mothers’ unemployment, developing a health care network, revitalizing a downtown area, and developing a public transportation system for a metropolitan area.

Over the course of the game players have to react to 19 different knowledge needs, often appearing simultaneously in different public interventions. Players have to: (1) contract out studies with an appropriate research design, (2) choose key users of the study, and (3) choose methods for feeding knowledge to users.
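
For concreteness, that per-need decision structure might be represented along the following lines. This is purely my own hypothetical Python sketch of what the paper describes; the field names and example options are mine, not the authors’:

```python
from dataclasses import dataclass

@dataclass
class KnowledgeNeed:
    """One of the 19 knowledge needs arising during the game."""
    intervention: str  # e.g. "public transportation system"
    question: str      # what the decision maker needs to know

@dataclass
class TeamResponse:
    """The three choices a team makes for each knowledge need."""
    research_design: str   # (1) how the contracted study is designed
    key_users: list[str]   # (2) which policy actors receive the findings
    feeding_method: str    # (3) how knowledge is fed to those users
```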

The choices of players are determined by the resources available to them: the number of staff in their units and the time required to complete each task. Players can be proactive and invest their resources in networking (to discover knowledge needs in advance) or archive searching (to find already existing studies). Players delegate staff members to these tasks. A staff member engaged in networking or archive searching cannot take on any other activity during the current round (e.g. report preparation).
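
The staffing constraint could be sketched in the same hypothetical fashion; again, none of these names come from the paper, and the real game may handle delegation quite differently:

```python
from enum import Enum

class Task(Enum):
    NETWORKING = "networking"          # discover knowledge needs in advance
    ARCHIVE_SEARCH = "archive search"  # find already existing studies
    REPORT_PREP = "report preparation"

class StaffMember:
    def __init__(self, name: str) -> None:
        self.name = name
        self.task: Task | None = None  # None means available this round

    def assign(self, task: Task) -> None:
        # One task per round: a member who is networking or searching
        # the archive cannot also prepare a report.
        if self.task is not None:
            raise ValueError(f"{self.name} is already busy: {self.task.value}")
        self.task = task

    def new_round(self) -> None:
        self.task = None
```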

After each turn, each group receives detailed feedback that includes three elements: (1) a percentage score for how well the team matched research designs to knowledge needs and feeding methods to users; the higher the match, the higher the chances that knowledge will be used by decision-makers, (2) information on the final effect: whether a policy actor made a decision based on delivered knowledge or on other premises (e.g. political rationale), (3) hints on good research designs, types of users, and feeding methods for future turns.
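
The paper doesn’t spell out the matching formula, but a simple equal-weighted version of that feedback calculation might look like this (the equal weighting is my assumption, not the authors’ design):

```python
def match_score(response: dict, ideal: dict) -> int:
    """Return a 0-100 score for how well a team's three choices fit a
    knowledge need, assuming (hypothetically) equal weights per choice."""
    criteria = ("research_design", "key_users", "feeding_method")
    hits = sum(response[c] == ideal[c] for c in criteria)
    return round(100 * hits / len(criteria))
```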

Groups of players compete with each other. Depending on how well they match research designs, users, and feeding methods, they receive up to 100 points per knowledge need. Teams accumulate points throughout the game and the winning team is the one with the highest score. However, there is also another way to assess players’ performance. Each result for an individual knowledge need (ranging from 0 to 100) is a probability rate that determines the chance that the report will actually be used by the decision maker. The algorithm checks, based on this probability, whether a particular report will be used by a decision maker and then notes it in a different section of the team score. In effect, every team has two types of score: the first based on the accumulation of points throughout the game, and the second informing players how many reports were actually used. The second type of scoring involves a strong element of randomness and luck (a team might succeed even if a report was worth only 20 points – which gives it a 0.2 chance of being used), while the first one reflects how well players can prepare reports. That is why facilitators put more emphasis on the first type of scoring, but at the same time they also remind participants that there is always an element of luck and randomness in decision makers’ use of reports in policy processes.
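
That dual-scoring logic is straightforward to illustrate. In the sketch below (again my own reconstruction, not the authors’ code) the accumulated points measure matching skill, while the count of reports actually used folds in the luck of the probability roll:

```python
import random

def report_is_used(match_points: int) -> bool:
    """Treat the 0-100 match score as a probability: a 20-point
    report has a 0.2 chance of actually being used."""
    return random.random() < match_points / 100

class TeamScore:
    def __init__(self) -> None:
        self.accumulated_points = 0  # first score: skill in matching
        self.reports_used = 0        # second score: skill plus luck

    def record(self, match_points: int) -> None:
        self.accumulated_points += match_points
        if report_is_used(match_points):
            self.reports_used += 1

# A weak 20-point report still succeeds about one time in five.
team = TeamScore()
for points in (80, 20, 95, 40):
    team.record(points)
print(team.accumulated_points, team.reports_used)
```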

I haven’t played it, but it looks brilliant. I strongly advise everyone to read the full paper, which apparently received the Best Paper Award at the 46th ISAGA Conference.