I was in New York yesterday for a panel discussion that accompanied the launch of PRIO’s new Forum for Peacebuilding Ethics. During the afternoon, one issue that came up time and time again was the need for practitioners to constantly weigh a complex set of priorities when programming in conflict-affected countries, many of which involve thorny sets of moral trade-offs and difficult ethical choices. Indeed, the problems that we grapple with are often what social planners sometimes refer to as wicked problems—issues where you never have all the required information, never understand all the causal relationships, where efforts to achieve changes in one area can lead to deterioration in other areas, and where to a large extent each problem is unique. There is no solution, in a mathematical/engineering sense—just a good faith effort to maximize gains and minimize harms.
One of the questions that came up concerned the training and human resource management implications of all this. Do we do an adequate job of preparing staff for these issues? How does one prepare them for the ethical minefield that is peacebuilding? How does one prepare them for the almost-inevitable misjudgments?
Much of the time we train around best practices. There are good reasons for this—after all, we want agencies and their staffs to learn from what has (and has not) worked in the past. But the very language of “best practices” implies that there is an appropriate solution, rather than a number of potentially problematic approaches, each involving costs and benefits.
Which, in turn, brings me to the Kobayashi Maru.
Science fiction fans will immediately recognize this as a reference to a Starfleet training simulation featured in the movie Star Trek II: The Wrath of Khan. In the Kobayashi Maru scenario, trainees were faced with impossible choices in an unwinnable situation: did they answer a distress call from a damaged freighter, only to find their ship destroyed? Or did they ignore the call, only to see the freighter destroyed? It was meant to be a test of character, and an evaluation of how would-be officers confronted such dilemmas.
Now, readers who haven’t the slightest interest in Star Trek needn’t worry that this blog post will slip into excessive Trekkism. Rather, it occurs to me that there may be some value in designing peacebuilding/humanitarian assistance operations in which participants are confronted with situations that truly have no good answers. I mean this, moreover, not simply in that they face resource shortages and hence opportunity costs associated with actions (something that the Carana simulation does very well), but rather that no matter what they do, they are forced to confront gut-wrenching moral choices.
- Does one—for example—pull humanitarian workers out of a dangerous area, knowing locals will die? Or does one keep them there, knowing that no matter what security precautions are taken there is a significant risk of staff being killed?
- Do you authorize an airstrike against a high-value insurgent leader, knowing that there is a near-certainty of significant civilian casualties?
- Do you pay “taxes” to a local militia to enable access to a needy population—knowing that doing so strengthens their capacity to engage in such predatory activities?
- Do peacekeepers fight to protect civilians from massacre, even if they believe they lack the capability to win and might thereby be slaughtered as well? (Yes, I’m thinking here of Srebrenica, although it could equally be applied to some of the choices that MONUC has made in DR Congo.)
…and so forth. The point would be not so much which particular choice was made, but how it was made—providing an opportunity for participants to reflect on the moral and practical calculus involved.