
Predictions

We do like to claim we have predicted the casualty rates correctly in three wars (operations): 1) the 1991 Gulf War, 2) the 1995 Bosnia intervention, and 3) the Iraq insurgency. Furthermore, these were predictions made for three very different types of operations: a conventional war, an “operation other than war” (OOTW), and an insurgency.

The Gulf War prediction was made in public testimony by Trevor Dupuy to Congress and published in his book If War Comes: How to Defeat Saddam Hussein. It is discussed in my book America’s Modern Wars (AMW) pages 51-52 and in some blog posts here.

The Bosnia intervention prediction is discussed in Appendix II of AMW and the Iraq casualty estimate is Chapter 1 and Appendix I.

We like to claim that we are three for three on these predictions. What does that really mean? If the odds of making a correct prediction are 50/50 (the same as a coin toss), then the odds of getting three correct predictions in a row are 12.5%. We may not be particularly clever, just a little lucky.

On the other hand, some might argue that these predictions were not that hard to make, and that knowledgeable experts would certainly predict correctly at least two-thirds of the time. In that case the odds of getting three correct predictions in a row are about 30%.

Still, one notes that there were a number of predictions concerning the Gulf War that were higher than Trevor Dupuy’s. In the case of Bosnia, the Joint Staff was informed by a senior OR (Operations Research) officer in the Army that there was no methodology for predicting losses in an “operation other than war” (AMW, page 309). In the case of the Iraq casualty estimate, we were informed by the director of an OR organization that our estimate was too high, and that the U.S. would suffer fewer than 2,000 killed and be withdrawn in a couple of years (Shawn was at that meeting). I think I left that out of my book in its more neutered final draft….my first draft was more detailed and maybe a little too “angry.” So maybe predicting casualties in military operations is a little tricky. If the odds of a correct prediction were only one-in-three, then the odds of getting three correct predictions in a row are only about 4%. For marketing purposes, we like this argument better 😉
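The arithmetic behind all three scenarios is the same: if each prediction is treated as an independent trial with success probability p, the chance of three correct calls in a row is simply p cubed. A quick sketch (purely illustrative; the probability labels are my own shorthand for the scenarios above):

```python
# Probability of getting three correct predictions in a row,
# assuming each prediction is an independent trial with success probability p.
def three_in_a_row(p: float) -> float:
    return p ** 3

for label, p in [("coin toss (1/2)", 1 / 2),
                 ("knowledgeable expert (2/3)", 2 / 3),
                 ("genuinely hard problem (1/3)", 1 / 3)]:
    print(f"{label}: {three_in_a_row(p):.1%}")
# coin toss: 12.5%; expert: 29.6% (~30%); hard problem: 3.7% (~4%)
```

The exact figures are 12.5%, 29.6%, and 3.7%, which is where the 12.5%, “more like 30%,” and “only 4%” numbers come from.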

It is hard to say what the odds of making a correct prediction are. The only war that had multiple public predictions (and, of course, several private and classified ones) was the 1991 Gulf War. A number of predictions were made, and we believe most were pretty high. There were no other predictions we are aware of for Bosnia in 1995, other than the “it could turn into another Vietnam” ones. There were no other predictions we are aware of for Iraq in 2004, although lots of people were expressing opinions on the subject. So it is hard to say how difficult it is to make a correct prediction in these cases.

P.S.: Yes, this post was inspired by my previous post on the Stanley Cup play-offs.


Forecasting U.S. Casualties in Bosnia

Photo by Ssgt. Lisa Zunzanyika-Carpenter, 1st Combat Camera, Charleston AFB, SC

In previous posts, I highlighted a call for more prediction and accountability in the field of security studies, and detailed Trevor N. Dupuy’s forecasts for the 1990-1991 Gulf War. Today, I will look at The Dupuy Institute’s 1995 estimate of potential casualties in Operation JOINT ENDEAVOR, the U.S. contribution to the North Atlantic Treaty Organization (NATO) peacekeeping effort in Bosnia and Herzegovina.

On 1 November 1995, the leaders of Serbia, Croatia, and Bosnia, rump states left from the breakup of Yugoslavia, along with representatives from the United States, European Union, and Russia, convened in Dayton, Ohio to negotiate an end to a three-year civil war. The conference resulted from Operation DELIBERATE FORCE, a 21-day air campaign conducted by NATO in August and September against Bosnian Serb forces in Bosnia.

A key component of the negotiation involved deployment of a NATO-led Implementation Force (IFOR) to replace United Nations troops charged with keeping the peace between the warring factions. U.S. European Command (USEUCOM) and NATO had been evaluating potential military involvement in the former Yugoslavia since 1992, and U.S. Army planners started operational planning for a ground mission in Bosnia in August 1995. The Joint Chiefs of Staff alerted USEUCOM for a possible deployment to Bosnia on 2 November.[1]

Up to that point, U.S. President Bill Clinton had been reluctant to commit U.S. ground forces to the conflict and had not yet agreed to do so as part of the Dayton negotiations. As part of the planning process, Joint Staff planners contacted the Deputy Undersecretary of the Army for Operations Research for assistance in developing an estimate of potential U.S. casualties in a peacekeeping operation. The planners were told that no methodology existed for forecasting losses in such non-combat contingency operations.[2]

On the same day the Dayton negotiation began, the Joint Chiefs contracted The Dupuy Institute to use its historical expertise on combat casualties to produce an estimate within three weeks for likely losses in a commitment of 20,000 U.S. troops to a 12-month peacekeeping mission in Bosnia. Under the overall direction of Nicholas Krawciw (Major General, USA, ret.), then President of The Dupuy Institute, a two-track analytical effort began.

One line of effort analyzed the different phases of the mission and compiled a list of potential lethal threats for each, including non-hostile accidents. Losses were forecasted using The Dupuy Institute’s combat model, the Tactical Numerical Deterministic Model (TNDM), along with estimates of the lethality and frequency of specific events. This analysis yielded a probabilistic range of possible casualties.

The second line of inquiry looked at data on 144 historical cases of counterinsurgency and peacekeeping operations compiled for a 1985 study by The Dupuy Institute’s predecessor, the Historical Evaluation and Research Organization (HERO), and other sources. Analysis of 90 of these cases, including all 38 United Nations peacekeeping operations to that date, yielded sufficient data to establish baseline casualty estimates related to force size and duration.

Coincidentally and fortuitously, both lines of effort produced estimates that overlapped, reinforcing confidence in their validity. The Dupuy Institute delivered its forecast to the Joint Chiefs of Staff within two weeks. It estimated possible U.S. casualties for two scenarios, one a minimal deployment intended to limit risk, and the other for an extended year-long mission.

For the first scenario, The Dupuy Institute estimated 11 to 29 likely U.S. fatalities with a pessimistic potential for 17 to 42 fatalities. There was also the real possibility for a high-casualty event, such as a transport plane crash. For the 12-month deployment, The Dupuy Institute forecasted a 50% chance that U.S. killed from all causes would be below 17 (12 combat deaths and 5 non-combat fatalities) and a 90% chance that total U.S. fatalities would be below 25.
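The 12-month forecast is stated as two percentiles: a 50% chance of fewer than 17 fatalities and a 90% chance of fewer than 25. As a purely illustrative exercise — this is my own toy reconstruction, not The Dupuy Institute’s actual methodology, and the rate parameter is an assumption — one can check that those two percentiles are roughly mutually consistent with a simple Poisson model of the annual fatality count:

```python
import math

def poisson_cdf(k: int, lam: float) -> float:
    """P(X <= k) for X ~ Poisson(lam), computed from the definition."""
    return math.exp(-lam) * sum(lam ** i / math.factorial(i) for i in range(k + 1))

# Hypothetical rate chosen so the median sits near 17 fatalities.
lam = 17.5
print(f"P(X <= 17) = {poisson_cdf(17, lam):.2f}")  # close to 0.5
print(f"P(X <= 25) = {poisson_cdf(25, lam):.2f}")  # well above 0.9
```

Under this toy model the 25-fatality threshold comes out somewhat above the 90th percentile, which suggests the Institute’s published 90% figure was, if anything, conservative relative to a pure Poisson assumption.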

Chairman of the Joint Chiefs of Staff General John Shalikashvili carried The Dupuy Institute’s casualty estimate with him during the meeting in which President Clinton decided to commit U.S. forces to the peacekeeping mission. The participants at Dayton reached agreement on 17 November and an accord was signed on 14 December. Operation JOINT ENDEAVOR began on 2 December with 20,000 U.S. and 60,000 NATO troops moving into Bosnia to keep the peace. NATO’s commitment in Bosnia lasted until 2004 when European Union forces assumed responsibility for the mission.

There were six U.S. casualties from all causes and no combat deaths during JOINT ENDEAVOR.

NOTES

[1] Details of U.S. military involvement in Bosnia peacekeeping can be found in Robert F. Baumann, George W. Gawrych, and Walter E. Kretchik, Armed Peacekeepers in Bosnia (Fort Leavenworth, KS: Combat Studies Institute Press, 2004); R. Cody Phillips, Bosnia-Herzegovina: The U.S. Army’s Role in Peace Enforcement Operations, 1995-2004 (Washington, D.C.: U.S. Army Center of Military History, 2005); Harold E. Raugh, Jr., ed., Operation JOINT ENDEAVOR: V Corps in Bosnia-Herzegovina, 1995-1996: An Oral History (Fort Leavenworth, KS: Combat Studies Institute Press, 2010).

[2] The Dupuy Institute’s Bosnia casualty estimate is detailed in Christopher A. Lawrence, America’s Modern Wars: Understanding Iraq, Afghanistan and Vietnam (Philadelphia, PA: Casemate, 2015); and Christopher A. Lawrence, “How Military Historians Are Using Quantitative Analysis — And You Can Too,” History News Network, 15 March 2015.

Screw Theory! We Need More Prediction in Security Studies!

Johnny Carson as Carnac the Magnificent; taken from the January 24, 2005 broadcast of The Tonight Show.

My previous post touched on the apparent analytical bankruptcy underlying the U.S. government’s approach to counterterrorism policy. While many fingers were pointed at the government for this state of affairs, at least one scholar admitted that “the leading [academic] terrorism research was mostly just political theory and anecdotes” which has “left policy makers to design counterterrorism strategies without the benefit of facts.”

So what can be done about this? Well, Michael D. Ward, a Professor of Political Science at Duke University, has suggested a radical solution: test the theories to see if they can accurately predict real world outcomes. Ward recently published an article in the Journal of Global Security Studies (read it now before it goes behind the paywall) arguing in favor of less theory and more prediction in the fields of international relations and security studies.

[W]e need less theory because most theory is an attempt to rescue or adapt extant theory. We need more predictions in order to keep track of how well we understand the world around us. They will tell us how good our theories are and where we need better explanations.

As Ward explained,

[P]rediction is deeply embedded in the philosophy of science… The argument is that if you can develop models that provide an understanding—without a teleology of why things happen—you should be able to generate predictions that will not only be accurate, but may also be useful in a larger societal context.

Ward argued that “until very recently, most of this thread of work in security studies had been lost, or if not lost, at least abandoned.” The reason for this was the existence of a longstanding epistemological disagreement: “Many social scientists see a sharp distinction between explanation on the one hand and prediction on the other. Indeed, this distinction is often sharp enough that it is argued that doing one of these things cuts you out of doing the other.”

For the most part, Ward asserted, the theorists have won out over the empiricists.

[M]any scholars (but few others) will tell you that we need more theory. Doubtless they are right. Few of them really mean “theory” in the sense that I reserve for the term. Few of them mean “theory” in the sense of analytical narratives. Many of them mean “detailed, plausible stories” about how stuff occurs.

In light of the uncomfortable conclusion that more detailed, plausible stories about how stuff occurs does not actually yield more insight, Ward has adopted a decidedly contrarian stance.

I am here to suggest that less is more. Thus, let me be the first to call for less theory in security studies. We should winnow the many, many such “theories” that occupy the world of security studies.

Instead, we need more predictions.

He went on to detail his argument.

We need these predictions for four reasons. First, we need these predictions to help us make relevant statements about the world around us. We also need these predictions to help us throw out the bad “theories” that continue to flourish. These predictions will help drive our research into new areas, away from moribund approaches that have been followed for many decades. Finally, and perhaps most important, predictions will force us to keep on track.

But making predictions is only part of the process. Tracking them and accounting for their accuracy is the vital corollary to improving both accuracy and theory. As Ward pointed out, “One reason that many hate predictions is that talking heads make many predictions in the media, but few of them ever keep track of how well they are doing.” Most, in fact, are wrong; few are held accountable for it.

Of course, the use of empirical methods to predict the outcomes of future events animated much of Trevor N. Dupuy’s approach to historical analysis and is at the heart of what The Dupuy Institute carries on doing today. Both have made well-documented predictions that have also been remarkably accurate. More about those in the next post.