Review of: Micah Zenko, Red Team: How to Succeed by Thinking Like the Enemy (New York: Basic Books, 2015). 298pp. USD$26.99 hc.
“Red teaming” is the practice of assuming the role of a potential adversary so as to expose vulnerabilities, stress-test plans, or anticipate some of an opponent’s possible actions. In this very useful book, Micah Zenko explores the application of red teaming in the context of military planning, intelligence analysis, homeland security, and the private sector. In doing so he goes well beyond describing red-teamers and what they do to offer his views on the strengths, weaknesses, and best practices of the approach.
Many readers of PAXsims will be particularly interested in Zenko’s take on military wargaming. A major portion of the relevant chapter is devoted to the infamous Millennium Challenge 2002 wargame, in which Blue’s forces were resurrected following an innovative and devastating surprise attack by Red, and gameplay then resumed along largely scripted lines. (An excerpt from Zenko’s discussion of this was recently published at War on the Rocks, and can be found here.) I’ve previously argued that the shortcomings of Millennium Challenge were a little more complicated than he suggests, and Ellie Bartels has also taken up the issue of wargames and experimental design. More generally, Title X and similar large-scale doctrinal games (such as Millennium Challenge) are not the best examples of truly adversarial gaming to be found in the US Department of Defense. On the other hand, it is clear that many US wargames are not very innovative or challenging, a shortcoming that has been taken up extensively in the past year by both senior officials and the professional wargaming community. Zenko doesn’t address any of this, although in fairness much of it has come since he likely finalized the book manuscript.
Having done both academic and policy work on intelligence assessment, I was also particularly interested in what Zenko has to say about the intelligence community. His focus here, as elsewhere in the book, is on explicit red teaming, wherein analysts are tasked with the devil’s advocate role of producing assessments that challenge conventional interpretive wisdom. His discussion of this is good. However, efforts to counter cognitive closure run much broader than red teaming alone, and include a variety of alternative analytical methods. Moreover, in my own experience some of the most effective red teaming is not that generated by dedicated red team groups as a stand-alone exercise, but rather that arising from the internal debates of a well-managed intelligence shop, where analysts are actively encouraged to assertively challenge their own work and that of their colleagues—regardless of seniority or conventional wisdom—in order to see whether other conclusions are possible from the same (or other) data. The quality and attributes of senior- and mid-level intelligence managers, and the institutional culture within the organization, are key to making this happen.
Overall, Zenko identifies six sets of best practices for red teams. I would have liked to have seen this discussion a little more deeply grounded in the growing research on predictive judgment, notably from psychology and decision science—neither Richards Heuer’s classic work nor the seminal research of Philip Tetlock and the Good Judgment Project on how individuals and groups predict the future is mentioned at all—but the ideas he puts forward are nonetheless valuable ones. Specifically, he argues that: there must be buy-in for the process from above; red-teamers must be outside regular analytical structures so as to maintain objectivity, yet inside enough to be aware and accepted; they must be fearless sceptics who know how to deliver their analyses with finesse and tact; they should be eclectic and unpredictable (“have a big bag of tricks”); senior officials must be prepared to hear bad news (or contrary analyses) and act on them; and one should red team enough, but not so much that it excessively demoralizes and distracts. Finally, he suggests that “the overarching best practice is to be flexible in the adaption of best practices”—a very, very important point indeed.
I equally liked his explicit discussion of red teaming malpractices, although I might have framed some a little differently. He cautions against ad hoc devil’s advocacy that is little more than token dissent; warns against mistaking red team outputs for policy; is critical of irresponsible freelance red teaming; and highlights the dangers of shooting the red team messenger when they deliver contrary views. He also stresses that red teams should inform, but not set, policy—that is, they should be but one input and perspective in the policy process. He concludes by making several recommendations for government, namely that big decisions should be red-teamed; red team efforts should be compiled to enable learning and sharing; red team instruction should be expanded, and military red team methods should be reviewed; and that red-teaming should be made more meaningful, and not simply a rubber stamp.
In sum, this book is a useful survey of the field. While primarily intended to introduce the topic to a general audience, even experienced red-teamers will find Red Team to be of considerable value.