Barzashka: AI, wargaming, and ethical oversight

At the Bulletin of the Atomic Scientists, Ivanka Barzashka notes the many potential contributions of artificial intelligence to wargaming—but also the ethical dangers:

AI’s integration into wargames can subtly influence leadership decisions on war and peace—and possibly lead to existential risks. The current landscape of human-centric wargaming, combined with AI algorithms, faces a notable “black box” challenge, where the reasoning behind certain outcomes remains unclear. This obscurity, alongside potential biases in AI training data and wargame design, highlights the urgent need for ethical governance and accountability in this evolving domain. Exploring these issues can shed light on the imperative for responsible oversight in the merging of AI with wargaming, a fusion that could decide future conflicts.

She raises some important points, consistent with the thoughtful arguments she has made before about academic standards and research ethics in analytical wargaming.


In my view, however, the issue of how one does ethical review, and indeed whether one should do an ethical review at all, is perhaps even more complex and fraught.

University-based research involving human subjects almost universally requires ethics approval by an Institutional Review Board (US), Research Ethics Board (Canada), Research Ethics Committee (UK), or similar. However, having served for many years as chair of an REB, I can say that such review is almost entirely focused on the protection of human subjects. Except in unusual circumstances, it does not protect anyone from research that might be distorted or put to use for unethical purposes. Indeed, there's generally no accepted standard for what such protection might entail. I've certainly self-censored research findings because I thought they could be misused. Some of my colleagues, however, might consider such self-censorship a violation of a broader academic commitment to knowledge. After all, aren't I thereby applying an ethical (and possibly political) filter to my work?

Ethics reviews in universities also provide, at best, minimal protection against research designs that might be “unethical” in that they are intended to produce desired results, and they certainly provide no protection against misinterpretation of data. The academic enterprise assumes that the safeguards here are embodied in peer review processes before publication, but peer review itself is neither perfect nor free from its own biases.

Also, very little government policy development is subject to research ethics restrictions or formal oversight. Indeed, it is often specifically excluded from national-level research ethics rules, in part because they would be impossible to apply. How would one apply the rules for academic research to meetings with stakeholders, decision-informing conversations with colleagues, and the like? The ethics of intelligence collection are even murkier, yet intelligence usually has a far greater effect on national security decision-making than any wargame ever does.

This is not to say that we cannot identify potential dangers and pitfalls, as a way of guarding against them. However, implementing formal oversight (on a case-by-case basis) might be challenging indeed.

The discussion continues.