Game theory, role-playing, and forecasting

Things are rather busy at the moment for your PAXsims editors, and we haven’t had a chance to post much in the way of simulation evaluations, book reviews, or even news of late. So that our regular readers don’t get bored, I thought I would instead flag an interesting article that appeared a few years ago in the International Journal of Forecasting 18, 3 (July-September 2002), an issue devoted to forecasting in conflict situations. In it, Kesten Green describes an experimental assessment of the relative predictive effectiveness of unaided judgement, role-playing simulation, and game theory:

Can game theory aid in forecasting the decision making of parties in a conflict? A review of the literature revealed diverse opinions but no empirical evidence on this question. When put to the test, game theorists’ predictions were more accurate than those from unaided judgement but not as accurate as role-play forecasts. Twenty-one game theorists made 99 forecasts of decisions for six conflict situations. The same situations were described to 290 research participants, who made 207 forecasts using unaided judgement, and to 933 participants, who made 158 forecasts in active role-playing. Averaged across the six situations, 37 percent of the game theorists’ forecasts, 28 percent of the unaided-judgement forecasts, and 64 percent of the role-play forecasts were correct.

You’ll find the full article here. These are interesting findings indeed, and you’ll find much discussion of them by other contributors in the journal (you’ll need an institutional subscription to access it all, however). In particular, George Wright, commenting on Green’s findings, suggests that the immersive and iterative dynamic of role-playing tends to bring out a range of experiential and prior knowledge, and to provide the kind of feedback that improves forecasting accuracy.

The poor showing of unaided judgement might improve, of course, if forecasters were making predictions within their own areas of expertise. However, as Philip Tetlock showed in his 2005 book Expert Political Judgment, experts have a remarkably poor record of accuracy in their predictions, not much better, in fact, than dart-throwing (or wheel-spinning) monkeys.

Tetlock first discusses arguments about whether the world is too complex for people to find the tools to understand political phenomena, let alone predict the future. He evaluates predictions from experts in different fields, comparing them to predictions by well-informed laity or those based on simple extrapolation from current trends. He goes on to analyze which styles of thinking are more successful in forecasting. Classifying thinking styles using Isaiah Berlin’s prototypes of the fox and the hedgehog, Tetlock contends that the fox (the thinker who knows many little things, draws from an eclectic array of traditions, and is better able to improvise in response to changing events) is more successful in predicting the future than the hedgehog, who knows one big thing, toils devotedly within one tradition, and imposes formulaic solutions on ill-defined problems. He notes a perversely inverse relationship between the best scientific indicators of good judgement and the qualities that the media most prizes in pundits: the single-minded determination required to prevail in ideological combat.

Having just spent last summer reviewing the predictive accuracy of one particular group of forecasters (intelligence analysts), I found that this group, at least, shows better discrimination and calibration than much of the “expert political judgement” Tetlock examines. Part of this might be the thinking styles that the intelligence community favours. Part of it, however, is probably also the collegial process of producing assessments, as well as the iterative process of refining them.
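
For readers unfamiliar with the jargon, “calibration” asks whether stated probabilities match how often the predicted events actually occur, while “discrimination” (or resolution) asks whether the forecasts separate events that happen from those that do not. The sketch below is purely illustrative, with invented probabilities and outcomes rather than anything from the analyst review mentioned above; it shows one standard way of scoring both properties, via the Murphy decomposition of the Brier score.

```python
# Illustrative sketch only: the probabilities and outcomes below are invented
# placeholders, not data from the analyst review discussed in the post.
from collections import defaultdict

def brier_decomposition(forecasts, outcomes, bins=10):
    """Return (brier, reliability, resolution, uncertainty).

    reliability -- calibration term: lower means stated probabilities
                   track observed frequencies more closely.
    resolution  -- discrimination term: higher means forecasts separate
                   events from non-events better than the base rate alone.
    """
    n = len(forecasts)
    base_rate = sum(outcomes) / n
    brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / n

    # Group forecasts into probability bins and compare each bin's mean
    # forecast with the frequency of events actually observed in that bin.
    grouped = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        grouped[min(int(f * bins), bins - 1)].append((f, o))

    reliability = resolution = 0.0
    for pairs in grouped.values():
        k = len(pairs)
        mean_forecast = sum(f for f, _ in pairs) / k
        observed_freq = sum(o for _, o in pairs) / k
        reliability += k * (mean_forecast - observed_freq) ** 2
        resolution += k * (observed_freq - base_rate) ** 2
    reliability /= n
    resolution /= n
    uncertainty = base_rate * (1 - base_rate)
    return brier, reliability, resolution, uncertainty

# Hypothetical forecasts (probability an event occurs) and outcomes (1 = it occurred).
probs = [0.9, 0.8, 0.7, 0.6, 0.3, 0.2, 0.1, 0.4, 0.75, 0.25]
truth = [1,   1,   1,   0,   0,   0,   0,   1,   1,    0]
print(brier_decomposition(probs, truth))
```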

Bringing this all back to simulation design, the research suggests the importance of maximizing feedback, interaction, and iteration in game mechanics. From a forecasting and training perspective alike, a “busy” simulation with a range of interaction experiences is likely to produce more realistic or accurate results than one which reduces its players to a limited number of rigid turns of strategic decision-making (of the Decision A/Result A1/Decision B/Result B2/Game Over variety). The toy sketch below illustrates the contrast.
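
The following is a deliberately simple, entirely hypothetical sketch, not drawn from any existing game: the rigid version walks a scripted decision tree once, while the iterative version loops, feeding each turn’s shared result back into the players’ next decisions.

```python
# Hypothetical contrast between a rigid branching turn structure and an
# iterative structure with feedback; all names and numbers are invented.
import random

def rigid_game(decision):
    # Decision A / Result A1-style structure: one choice, one scripted result, game over.
    outcomes = {"A": "Result A1", "B": "Result B2"}
    return outcomes[decision]

def iterative_game(players, turns=6):
    # Each turn every player acts, all players see the shared result,
    # and that result shapes their choices on the following turn.
    state = 0.0
    history = []
    for _ in range(turns):
        moves = {name: policy(state) for name, policy in players.items()}
        state += sum(moves.values()) / len(moves) + random.uniform(-0.1, 0.1)
        history.append((moves, round(state, 2)))  # feedback visible to everyone
    return history

# Two toy players: one escalates when the situation deteriorates, one always de-escalates.
players = {
    "reactive": lambda state: 1.0 if state < 0 else -0.5,
    "dovish": lambda state: -0.5,
}

print(rigid_game("A"))
for turn in iterative_game(players):
    print(turn)
```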

On a different simulation-related note, the poor performance of game theorists in Green’s experiment may also sound a cautionary note about our ability to use such models in computational simulation (although, in fairness, Green is testing game theorists as much as game theory, which may well be quite different things).
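
For illustration, the sketch below shows the kind of game-theoretic model one might embed in a computational simulation: a generic two-player escalation game, with payoffs invented for the example rather than taken from Green’s study, whose pure-strategy Nash equilibria are found by checking best responses.

```python
# Minimal illustrative sketch of a 2x2 normal-form game; the payoffs are an
# arbitrary example (a simple escalation/chicken-style game), not from Green's paper.
from itertools import product

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("concede", "concede"): (2, 2),
    ("concede", "escalate"): (0, 3),
    ("escalate", "concede"): (3, 0),
    ("escalate", "escalate"): (-1, -1),
}
actions = ["concede", "escalate"]

def pure_nash(payoffs, actions):
    # A profile is an equilibrium if neither player can gain by deviating unilaterally.
    equilibria = []
    for r, c in product(actions, actions):
        row_best = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in actions)
        col_best = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in actions)
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

print(pure_nash(payoffs, actions))  # the model's "forecast" of the players' choices
```

Even this trivial example returns two equilibria rather than a single prediction, which hints at one reason why turning game-theoretic analysis into point forecasts of real conflicts is not straightforward.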

Kesten Green has a follow-up paper on the value of “role thinking” that simulation designers will also find useful. You can find a pre-publication copy of it here.
