The Army Lawyer | Issue 4 2021

Book Review: The Centaur’s Dilemma



[Artificial Intelligence (AI)] technology will change much about the battlefield of the future, but nothing will change America’s steadfast record of honorable military service, individual accountability, and our military’s commitment to lawful and ethical behavior. Our focus on AI follows from our long history of making investments to preserve our most precious asset, our people, and to limit danger to innocent civilians. All of the AI systems that we field will have compliance with the law as a key priority from the first moment of requirements setting through the last step of rigorous testing.1

In his book, The Centaur’s Dilemma,2 renowned national security practitioner and former U.S. Court of Appeals for the Armed Forces Chief Judge James E. Baker contextualizes the expansive development of AI and its emerging legal structure. As in his previous works,3 Baker carefully frames his terms and issues, the most pertinent of which is the “Centaur’s Dilemma” itself: “how to reap the benefit of AI for national security purposes without losing control of the consequences.”4 In other words, the dilemma is how to gain the advantage of rapid AI processing speed while preserving the value of human input and control.

Modeled on chess,5 this “centaur” human-algorithm concept, in which machines provide processing power and humans provide oversight, gained wide currency through a Department of Defense (DoD) policy speech on keeping a human in or on the loop for weapons using AI.6 Baker’s book is “intended to make AI and the law accessible to national security policy and legal generalists so that they can make wise and strategic decisions about regulating the security uses of AI.”7 Baker describes the broad contours and normative aims of his text as “[identifying] law, or principles of law, that might, do, or should apply to AI by implication or analogy,” and he proposes looking to either the law of armed conflict or arms control to provide a framework for AI, with constitutional law as a gap filler.8

To provide a roadmap for his readers, Baker divides the book into two parts.9 The first part “describes AI, its security uses, and risks,” with chapters on the history, components, and potential of AI; relevant military and intelligence issues concerning AI; and the risks AI poses to security, such as creating unintended consequences, hardening authoritarian governments, triggering a technology arms race, lowering the cost of conflict (and thus generating more of it), and exposing national security decision-making pathologies.10 The second part takes up the “central” and normative question of “how, if at all, should we, might we, regulate the national security uses of AI.”11 Baker examines the principles underlying national security law, constitutional law, and statutory authorities, and how they may apply to AI.12 He also considers how existing frameworks, including arms control and the law of armed conflict, might extend to AI.13 Baker concludes by reviewing regulatory mechanisms outside the legal process, such as ethical codes of conduct, internal review boards, and corporate social responsibility.14

Baker is an analytical writer in the truest sense of the word: he provides context for his subject matter, such as the historical currents underlying AI and its pending implications, and he clearly frames his terms by noting common characteristics drawn from competing definitions. He also keeps the audience engaged with real-world examples that illustrate his concepts, such as the technology implications of the dispute between Apple and the Federal Bureau of Investigation following the San Bernardino shooting,15 and with concise explanations of constitutional law and statutory analysis, including an in-depth review of the principles of the Youngstown case.16 Perhaps best of all, in takeaways at the end of each chapter, Baker lays out the critical questions that legal generalists and policymakers should weigh as they work toward a normative legal framework for AI.

As global affairs make cyber law an increasingly visible and widely discussed facet of national security law,17 such a text is extremely timely and may provide normative and historical guideposts for lawyers and policymakers as they navigate new opportunities and threats. National security scholars have noted, for instance, that the latest technologies, including AI, present an opportunity for Congress and the DoD to address the attendant risks.18 One example is the National Security Commission on Artificial Intelligence, which recently promulgated its report on “winning the artificial intelligence era.”19 Another is the Joint Artificial Intelligence Center, set to execute the DoD’s 2018 Artificial Intelligence Strategy.20 This text may thus guide such newly established bureaucracies as they navigate the legal issues surrounding AI.

Judge advocates (JAs) should take particular notice of this book because it introduces AI and the legal frameworks being used and developed around it and, most importantly, reinforces the purposes and advancement of national security law as a whole. Baker describes the impact of AI on national security as “a military force multiplier,” with an emphasis on the intelligence cycle.21 He also places AI in the broader, changing strategic context, likening its arrival to the advent of the modern battleship, which should enable JAs to better brief and advise commanders and staff personnel.22 Baker also discerns the three purposes of national security law, which include “provid[ing] the substantive authority to act, as well as the left and right boundaries of that action,”23 governing the process of its application,24 and “provid[ing] for, protect[ing], and preserv[ing] our essential legal values.”25 These purposes go beyond AI and serve JAs as a normative guide in any legal practice. The book also offers a concise and precise review of constitutional national security law and its connections to AI that reads like an accessible law school hornbook; JAs will welcome this review.26 It will enable them to better process national security legal issues and to inform their commanders on this developing area of law.

Ultimately, The Centaur’s Dilemma achieves what it sets out to do: it provides a framework for national security policy attorneys and legal generalists while raising critical questions and potential solutions as AI develops. The book is similar to P.W. Singer’s Wired for War,27 insofar as both texts break down complex topics surrounding AI and make them digestible for a relatively novice audience through easy-to-read prose and real-world vignettes straight out of the headlines. Such a book is appropriate for those who are unfamiliar with the topic or who are just starting to build their national security law framework, such as recently commissioned lieutenants at The Judge Advocate General’s Legal Center and School (who trained with the new Cyber Corps direct commissioned officers at the Direct Commission Course). As such, there is no dilemma in reading The Centaur’s Dilemma as a guide to the legal and national security implications of AI. TAL


1LT Rovito is an Operational Law Judge Advocate for the 371st Sustainment Brigade in the Ohio National Guard in Springfield, Ohio.


Notes

1. About the JAIC: The JAIC Story, JAIC, https://www.ai.mil/about.html (last visited July 25, 2021) [hereinafter About the JAIC].

2. James E. Baker, The Centaur’s Dilemma: National Security Law for the Coming AI Revolution (2021).

3. See James E. Baker, In the Common Defense: National Security Law for Perilous Times (2007); W. Michael Reisman & James E. Baker, Regulating Covert Action (2011).

4. Baker, supra note 2, at 4.

5. Andrew Lohn, What Chess Can Teach Us About the Future of AI and War, War on the Rocks (Jan. 3, 2020), https://warontherocks.com/2020/01/what-chess-can-teach-us-about-the-future-of-ai-and-war/ (noting the interplay of chess with AI and human input).

6. Baker, supra note 2, at 4. Human in the loop generally means that a human is the initial decisionmaker for an action. Human on the loop means that the human generally supervises the AI’s use with the action ongoing. See id. at 41. For more on the background of the “centaur,” see Paul Scharre, Ctr. for a New Am. Sec., Autonomous Weapons and Operational Risk: Ethical Autonomy Project ch. IX (2016) (discussing centaur warfighting); Adam Elkus, Man, the Machine, and War, War on the Rocks (Nov. 11, 2015), https://warontherocks.com/2015/11/man-the-machine-and-war/. For more on the policy background, consult Matthew Rosenberg & John Markoff, The Pentagon’s ‘Terminator Conundrum’: Robots That Could Kill on Their Own, N.Y. Times (Oct. 25, 2016), https://www.nytimes.com/2016/10/26/us/pentagon-artificial-intelligence-terminator.html. For more on “human in the loop” or “human on the loop,” see U.S. Dep’t of Def., Dir. 3000.09, Autonomy in Weapon Systems (Nov. 21, 2012) (C1, May 8, 2017); Kelley M. Sayler, Cong. Rsch. Serv., IF11150, Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems (2020).

7. Baker, supra note 2, at 5.

8. Id. at 5–6.

9. Id. at 6.

10. Id. at 5–6.

11. Id. at 7.

12. Id.

13. Id. at 8.

14. Id.

15. Id. at 96–100.

16. Youngstown Sheet & Tube Co. v. Sawyer, 343 U.S. 579 (1952); Baker, supra note 2, at 134–42.

17. For instance, consider the level of coverage for the 2021 U.S. Cyber Command Legal Conference, which was streamed online. General Paul Nakasone et al., U.S. Army, 2021 U.S. Cyber Command Legal Conference (Mar. 4, 2021). Also consider The Cipher Brief’s symposium on “The Mission to Integrate Artificial Intelligence into the Military’s Future Battle Rhythm,” featuring Lieutenant General Michael S. Groen (Director of the Joint Artificial Intelligence Center), the Honorable Katharina McFarland (Commissioner, National Security Commission on Artificial Intelligence), and Alon Jaffe (Director, National Intelligence Division, National Security Group, Microsoft Federal). Lieutenant General Michael S. Groen et al., The Cipher Brief Symposium: The Mission to Integrate Artificial Intelligence into the Military’s Future Battle Rhythm (Mar. 10, 2021), https://www.thecipherbrief.com/column_article/the-mission-to-integrate-artificial-intelligence-into-the-militarys-future-battle-rhythm. Lieutenant General Groen has echoed several of Baker’s normative points, including in his recent statements at the Yale Special Operations Conference:

It’s not just about tech, it’s about the process, it’s about the function. . . . It’s enormously educational when you really start asking folks, “Okay, how do you actually make that decision. What data do you use? What data should you be using? How is that data presented to you? Could it be presented in a different way? Who actually owns that data?”

Sydney J. Freedberg Jr., Frontline Geek Squads: SOCOM’s Secret Weapon, Breaking Def. (Mar. 8, 2021, 11:47 AM), https://breakingdefense.com/2021/03/frontline-geek-squads-socoms-secret-weapon/.

18. Elaine McCusker & Emily Coletta, Who Will Lead the World in Artificial Intelligence?, C4ISRNET (Mar. 1, 2021, 12:48 PM), https://www.c4isrnet.com/opinion/2021/03/01/who-will-lead-the-world-in-artificial-intelligence/.

19. Nat’l Sec. Comm’n on A.I., Final Report (2021).

20. About the JAIC, supra note 1; U.S. Dep’t of Def., Summary of the 2018 Department of Defense Artificial Intelligence Strategy (2019).

21. Baker, supra note 2, at 30–45.

22. Id. at 46–65.

23. Id. at 70.

24. Id. at 74.

25. Id. at 89.

26. Id. at 95–142.

27. P.W. Singer, Wired for War: The Robotics Revolution and Conflict in the 21st Century (2009).