
Should Defense Department Campaign-Level Combat Modeling Be Reinstated?

Airmen of the New York Air National Guard’s 152nd Air Operations Group man their stations during Virtual Flag, a computer wargame held Feb. 18-26 from Hancock Field Air National Guard Base. The computer hookup allowed the air war planners of the 152nd to interact with other Air Force units around the country and in Europe. U.S. Air National Guard photo by Master Sgt. Eric Miller

In 2011, the Office of the Secretary of Defense’s (OSD) Cost Assessment and Program Evaluation (CAPE) disbanded its campaign-level modeling capabilities and reduced its role in the Department of Defense’s Support for Strategic Analysis (SSA) process. CAPE, which was originally created in 1961 as the Office of Systems Analysis, “reports directly to the Secretary and Deputy Secretary of Defense, providing independent analytic advice on all aspects of the defense program, including alternative weapon systems and force structures, the development and evaluation of defense program alternatives, and the cost-effectiveness of defense systems.”

According to RAND’s Paul K. Davis, CAPE’s decision was controversial within DOD, due in no small part to general dissatisfaction with the overall quality of strategic analysis supporting decision-making.

CAPE’s decision reflected a conclusion, accepted by the Secretary of Defense and some other senior leaders, that the SSA process had not helped decisionmakers confront their most-difficult problems. The activity had previously been criticized for having been mired in traditional analysis of kinetic wars rather than counterterrorism, intervention, and other “soft” problems. The actual criticism was broader: Critics found SSA’s traditional analysis to be slow, manpower-intensive, opaque, difficult to explain because of its dependence on complex models, inflexible, and weak in dealing with uncertainty. They also concluded that SSA’s campaign-analysis focus was distracting from more-pressing issues requiring mission-level analysis (e.g., how to defeat or avoid integrated air defenses, how to defend aircraft carriers, and how to secure nuclear weapons in a chaotic situation).

CAPE took the criticism to heart.

CAPE felt that the focus on analytic baselines was reducing its ability to provide independent analysis to the secretary. The campaign-modeling activity was disbanded, and CAPE stopped developing the corresponding detailed analytic baselines that illustrated, in detail, how forces could be employed to execute a defense-planning scenario that represented strategy.

However, CAPE’s solution to the problem may have created another. “During the secretary’s reviews for fiscal years 2012 and 2014, CAPE instead used extrapolated versions of combatant commander plans as a starting point for evaluating strategy and programs.”

As Davis related, many disagreed with CAPE’s decision at the time because of the service-independent perspective the SSA process had provided.

Some senior officials believed from personal experience that SSA had been very useful for behind-the-scenes infrastructure (e.g., a source of expertise and analytic capability) and essential for supporting DoD’s strategic planning (i.e., in assessing the executability of force-sizing strategy). These officials saw the loss of joint campaign-analysis capability as hindering the ability and willingness of the services to work jointly. The officials also disagreed with using combatant commander plans instead of scenarios as starting points for review of midterm programs, because such plans are too strongly tied to present-day thinking. (Emphasis added)

Five years later, as DOD gears up to implement the new Third Offset Strategy, it appears that the changes implemented in SSA in 2011 have not necessarily improved the quality of strategic analysis. DOD’s lack of an independent joint campaign-level modeling capability is apparently hampering the ability of senior decision-makers to critically evaluate analysis provided to them by the services and combatant commanders.

In the current edition of Joint Force Quarterly, the Chairman of the Joint Chiefs of Staff’s military and security studies journal, Timothy A. Walton, a Fellow at the Center for Strategic and Budgetary Assessments, recommended that in support of “the Third Offset Strategy, the next Secretary of Defense should reform analytical processes informing force planning decisions.” He suggested that “Efforts to shape assumptions in unrealistic or imprudent ways that favor outcomes for particular Services should be repudiated.”

As part of the reforms, Walton made a strong and detailed case for reinstating CAPE’s campaign-level combat modeling.

In terms of assessments, the Secretary of Defense should direct the Director of Cost Assessment and Program Evaluation to reinstate the ability to conduct OSD campaign-level modeling, which was eliminated in 2011. Campaign-level modeling consists of the use of large-scale computer simulations to examine the performance of a full fielded military in planning scenarios. It takes the results of focused DOD wargaming activities, as well as inputs from more detailed tactical modeling, to better represent the effects of large-scale forces on a battlefield. Campaign-level modeling is essential in developing insights on the performance of the entire joint force and in revealing key dynamic relationships and interdependencies. These insights are instrumental in properly analyzing complex factors necessary to judge the adequacy of the joint force to meet capacity requirements, such as the two-war construct, and to make sensible, informed trades between solutions. Campaign-level modeling is essential to the force planning process, and although the Services have their own campaign-level modeling capabilities, OSD should once more be able to conduct its own analysis to provide objective, transparent assessments to senior decisionmakers. (Emphasis added)

So, it appears that DOD can’t quit combat modeling. But that raises a question: if CAPE does resume such activities, will it pick up where it left off in 2011, or will it do things differently? I will explore that in a future post.
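In the meantime, to make concrete what Walton is describing, here is a deliberately toy sketch of the core mechanic of campaign-level modeling: adjudicating attrition in several theaters at once and rolling the results up into a joint, campaign-level picture. This is not any actual CAPE or Service model. Simple Lanchester-square attrition stands in for the adjudication a real model would draw from detailed tactical modeling, and every force level and coefficient below is invented for illustration.

```python
# Toy campaign-level attrition loop (illustrative only; not any actual
# CAPE or Service model). All numbers below are invented.

def lanchester_step(blue, red, blue_eff, red_eff, dt=1.0):
    """One time step of Lanchester-square attrition: each side's losses
    scale with the opposing side's strength and per-unit effectiveness."""
    blue_losses = red_eff * red * dt
    red_losses = blue_eff * blue * dt
    return max(blue - blue_losses, 0.0), max(red - red_losses, 0.0)

def run_campaign(theaters, days=30):
    """Adjudicate each theater daily, then roll results up to joint totals."""
    for day in range(1, days + 1):
        for t in theaters:
            t["blue"], t["red"] = lanchester_step(
                t["blue"], t["red"], t["blue_eff"], t["red_eff"])
        if day % 10 == 0:
            blue_total = sum(t["blue"] for t in theaters)
            red_total = sum(t["red"] for t in theaters)
            print(f"Day {day}: blue={blue_total:,.0f}, red={red_total:,.0f}")

# Invented starting strengths; in a real analysis the effectiveness
# coefficients would come from tactical-level models or historical data.
theaters = [
    {"name": "air",    "blue": 1000, "red": 1200, "blue_eff": 0.012, "red_eff": 0.008},
    {"name": "ground", "blue": 5000, "red": 6000, "blue_eff": 0.010, "red_eff": 0.009},
]
run_campaign(theaters)
```

Even a toy like this hints at why Walton wants the capability back in OSD: the joint, campaign-level outcome emerges from interactions across theaters and forces that no single Service’s mission-level analysis captures on its own.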

Studying The Conduct of War: “We Surely Must Do Better”

"The Ultimate Sand Castle" [Flickr, Jon]

Chris and I have both previously discussed the apparent waning interest on the part of the Department of Defense in sponsoring empirical research on the basic phenomena of modern warfare. The U.S. government’s boom-or-bust approach to this is long-standing, extending back at least to the Vietnam War. Recent criticism of the Department of Defense’s Office of Net Assessment (OSD/NA) is unlikely to help. Established in 1973 and led by the legendary Andrew “Yoda” Marshall until 2015, OSD/NA plays an important role in funding basic research on topics of crucial importance to the art of net assessment. Critics of the office appear to be unaware of just how thin the actual base of empirical knowledge on the conduct of war is. Marshall understood that the net result of a net assessment based mostly on guesswork was likely to be useless, or worse, misleadingly wrong.

This lack of attention to the actual conduct of war extends beyond government sponsored research. In 2004, Stephen Biddle, a professor of political science at George Washington University and a well-regarded defense and foreign policy analyst, published Military Power: Explaining Victory and Defeat in Modern Battle. The book focused on a very basic question: what causes victory and defeat in battle? Using a comparative approach that incorporated quantitative and qualitative methods, he effectively argued that success in contemporary combat was due to the mastery of what he called the “modern system.” (I won’t go into detail here, but I heartily recommend the book to anyone interested in the topic.)

Military Power was critically acclaimed and received multiple awards from academic, foreign policy, military, operations research, and strategic studies organizations. For all the accolades, however, Biddle was quite aware just how neglected the study of war has become in U.S. academic and professional communities. He concluded the book with a very straightforward assessment:

[F]or at least a generation, the study of war’s conduct has fallen between the stools of the institutional structure of modern academia and government. Political scientists often treat war itself as outside their subject matter; while its causes are seen as political and hence legitimate subjects of study, its conduct and outcomes are more often excluded. Since the 1970s, historians have turned away from the conduct of operations to focus on war’s effects on social, economic, and political structures. Military officers have deep subject matter knowledge but are rarely trained as theoreticians and have pressing operational demands on their professional attention. Policy analysts and operations researchers focus so tightly on short-deadline decision analysis (should the government buy the F-22 or cancel it? Should the Army have 10 divisions or 8?) that underlying issues of cause and effect are often overlooked—even when the decisions under analysis turn on embedded assumptions about the causes of military outcomes. Operations research has also gradually lost much of its original empirical focus; modeling is now a chiefly deductive undertaking, with little systematic effort to test deductive claims against real world evidence. Over forty years ago, Thomas Schelling and Bernard Brodie argued that without an academic discipline of military science, the study of the conduct of war had languished; the passage of time has done little to overturn their assessment. Yet the subject is simply too important to treat by proxy and assumption on the margins of other questions. In the absence of an institutional home for the study of warfare, it is all the more essential that analysts in existing disciplines recognize its importance and take up the business of investigating capability and its causes directly and rigorously. Few subjects are more important—or less studied by theoretical social scientists. With so much at stake, we surely must do better. [pp. 207-208]

Biddle published Military Power 12 years ago, in 2004. Has anything changed substantially? Have we done better?
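One modest response to Biddle’s observation that modeling has become “a chiefly deductive undertaking, with little systematic effort to test deductive claims against real world evidence” is routine validation: scoring a model’s predictions against historical engagements. Below is a minimal sketch of what that kind of exercise looks like. Both the battle data and the “model” here are invented placeholders; a real validation, such as the TNDM validation efforts discussed elsewhere on this blog, would draw its engagements from a historical combat database.

```python
# Minimal sketch of validating a combat model against historical outcomes.
# The model and the battle data below are invented placeholders.

battles = [
    # (attacker strength, defender strength, observed attacker casualties)
    (10000, 6000, 900),
    (8000,  8000, 1300),
    (15000, 5000, 700),
]

def predicted_casualties(attacker, defender):
    """Placeholder deductive model: attacker casualties proportional to
    defender strength, damped by the force ratio."""
    return 0.15 * defender * (defender / attacker)

errors = []
for attacker, defender, observed in battles:
    predicted = predicted_casualties(attacker, defender)
    pct_error = abs(predicted - observed) / observed
    errors.append(pct_error)
    print(f"predicted {predicted:,.0f} vs. observed {observed:,} "
          f"({100 * pct_error:.0f}% error)")

# A systematic empirical program would track this statistic across many
# engagements and revise the model wherever it fails.
print(f"Mean absolute percentage error: {100 * sum(errors) / len(errors):.0f}%")
```

The point is not this particular error metric; it is that the test happens at all, and that its results feed back into the model.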

TMCI Agenda

The Military Conflict Institute’s (TMCI) latest agenda shows me making two presentations on Monday, October 3: “War by Numbers” and “Data for Wargames.” Dr. Shawn Woodford is presenting on Tuesday with “Studying Combat: Where To Go From Here?”

There are also presentations by Rosser Bobbitt, Roger Mickelson, Gene Visco, John Brinkerhoff, Chuck Hawkins, Alenka Brown, J. Michael Waller, Seyed Rizi, Russ Vane and probably a couple of others.

Contact Roger Mickelson at TMCI6@aol.com for a copy of the agenda.

Three Presentations

I will be giving two presentations at the October meeting of The Military Conflict Institute (TMCI) and Shawn will be making one presentation there.

On Monday, 3 October, I will be doing a presentation on my book War by Numbers: Understanding Conventional Combat, which is going to be published in June/August 2017. This presentation will describe the book. In addition, I will be discussing four or five other book projects that are ongoing or under consideration.

The same day, I will be making a presentation called “Data for Wargames.” This was a course developed for a USMC White Team for a wargaming exercise.

On Tuesday, Shawn Woodford will be presenting “Studying Combat: Where To Go From Here?” As he describes it:

Studying Combat: Where To Go From Here?

With Deputy Secretary of Defense Robert Work’s recent call for a revitalized wargaming effort to support development of a new national military strategy, it is worth taking stock of the present state of empirical research on combat. I propose to briefly survey work on the subject across relevant fields to get a sense of how much progress has been made since TMCI published The Concise Theory of Combat in 1997. This is intended to frame a discussion of where the next steps should be taken and of possibilities for promoting work on this subject in the defense and academic communities.

The Military Conflict Institute (TMCI) Will Meet in October

The Military Conflict Institute (the website has not been recently updated) will hold its 58th General Working Meeting from 3-5 October 2016, hosted by the Institute for Defense Analyses in Alexandria, Virginia. It will feature discussions and presentations focused on war termination in likely areas of conflict in the near future, such as Egypt, Turkey, North Korea, Iran, Saudi Arabia, Kurdistan, and Israel. There will also be presentations on related and general military topics.

TMCI was founded in 1979 by Dr. Donald S. Marshall and Trevor Dupuy. They were concerned by the inability of existing Defense Department combat models to produce results that were consistent or rooted in historical experience. The organization is a non-profit, interdisciplinary, informal group that avoids government or institutional affiliation in order to maintain an independent perspective and voice. Its objective is to advance public understanding of organized warfare in all its aspects. Most of the initial members were drawn from the ranks of operations analysts experienced in quantitative historical study and military operations research, but it has grown to include a diverse group of scholars, historians, students of war, soldiers, sailors, marines, airmen, and scientists. Member disciplines range from military science to diplomacy and philosophy.

For agenda information, contact Roger Mickelson at TMCI6@aol.com. For joining instructions, contact Rosser Bobbitt at rbobbitt@ida.org. Attendance is subject to approval.

Book Review

I have not posted book reviews to this site, and do not really plan to in the future. But there was a book review of America’s Modern Wars in Military Review by Brig. Gen. John C. Hanley, with whom I am not familiar. The review ended with a paragraph that I thought was meaningful. He said:

Lawrence’s book shows that reliable outcome estimates are determined through quantitative reasoning. Being able to anticipate the outcomes of any military operation, through reliable means, can greatly assist in strategic and operational level leaders’ decision-making processes. These results are what the book brings to light for military leaders and their staffs. Staff members who develop course-of-action recommendations can use the techniques described by Lawrence to provide quality analysis. Commanders will have the confidence from their staff estimates to choose the best courses of action for future military operations. Logically estimating the outcomes of future military operations, as the author writes, is what U.S. citizens should expect and demand from their leaders who take this country to war.

Anyhow, the link to his review is:

Military Review

His review is back on page 131.

 

P.S. Then there was the book review that started: “An excel spreadsheet masquerading as a book”

History News Network (HNN)


We do have a half-dozen links listed down at the bottom of the right-hand column of this blog. One is the History News Network.

I have five articles posted on HNN, two of them being posts from this blog. They are:

How Military Historians Are Using Quantitative Analysis

Did the Pentagon Learn from Vietnam?

Defeating an Insurgency by Air

Did I Just Write the Largest History Book Ever?

Are Russians Really Long-Suffering

Now, they do choose the headlines, and sometimes that gives a different feel to the article. For example, one of my blog posts was titled “Russian Revolutions.” The exact same article on HNN is titled “Are Russians Really Long-Suffering.” This apparently got a couple of people up in arms because the article did not talk about all the famines and oppression in Russia and the Soviet Union. It did not, because it was about revolutions, and in particular about revolutions that succeeded. The famines in the 1890s, 1920s, and 1930s did not directly lead to a successful revolution (a point that I think is pretty significant).

The article “Did I Just Write…” is actually a shorter version of an article I posted on the Aberdeen Bookstore website: Long version of “Did I Just Write…” Part of the reason I wrote that article was to see if someone would come out of the woodwork and post that a larger book had been published (usually these postings start with something like “the author is an idiot because….”). I did not get that for this article. This does sort of confirm my suspicion that this is indeed the largest single-volume history book ever written (no disrespect intended for the 11 volumes done by the Durants, which were four million words and 10,000 pages). I wonder if this is something I should submit to the Guinness Book of World Records? Will I get free beer for that?

 

Quote from America’s Modern Wars

On Amazon.com

Just to reinforce Shawn Woodford’s point below, let me quote from Chapter Twenty-Four, pages 294-295, of my book America’s Modern Wars: Understanding Iraq, Afghanistan and Vietnam:

Many years ago, I had the pleasure of having a series of meetings with Professor Ivo Feierabend. I was taking a graduate course in Econometrics at San Diego State University (SDSU). I decided that for my class paper, I would do something on the causes of revolution. The two leading efforts on this, both done in the 1960s, were by Ted Gurr and the husband and wife team of Feierabend and Feierabend. I reviewed their work, and for a variety of reasons, got interested in the measurements and analyses done by the Feierabends, vice the more known work by Ted Gurr. This eventually led me to Dr. Feierabend, who still happened to be at San Diego State University much to my surprise. This was some 20 years after he had done what I consider to be ground-breaking work on revolutions. I looked him up and had several useful and productive meetings with him.

In the 1960s, he had an entire team doing this work. Several professors were involved, and he had a large number of graduate students coding events of political violence. In addition, he had access to mainframe computers, offices, etc. The entire effort was shut down in the 1960s, and he had not done anything further on this in almost 20 years. I eventually asked him why he didn’t continue his work. His answer, short and succinct was, “I had no budget.”

This was a difficult answer for a college student to understand. But, it is entirely understood by me now. To do these types of analytical projects requires staff, resources, facilities, etc. They cannot be done by one person, and even if they could, that one person usually needs a paycheck. So, the only way one could conduct one of these large analytical projects is to be funded. In the case of the Feierabends, that funding came from the government, as did ours. Their funding ended after a few years, as has ours. Their work could be described as a good start, but there was so much more that needed to be done. Intellectually, one is mystified why someone would not make sure that this work was continued. Yet, in the cases of Ted Gurr and the Feierabends, it did not.

The problem lies in that the government (or at least the parts that I dealt with) sometimes has the attention span of a two-year-old. Not only that, it also has the need for instant gratification, very much like a two-year-old. Practically, what that means is that projects that can answer an immediate question get funding (like the Bosnia and Iraq casualty estimates). Larger research efforts that will produce an answer or a product in two to three years can also get funding. On the other hand, projects that produce a preliminary answer in two to three years and then need several more years of funding to refine, check, correct and develop that work, tend to die. This has happened repeatedly. The analytical community is littered with many clever, well thought-out reports that look to be good starts. What is missing is a complete body of analysis on a subject.

Why Are We Still Wondering Why Men (And Women) Rebel?

The New York Times published a very interesting article addressing the inability of government-sponsored scholars and researchers to provide policymakers with an analytical basis for identifying potential terrorists. For anyone who has worked with U.S. government patrons on basic research, much of this will sound familiar.

“After all this funding and this flurry of publications, with each new terrorist incident we realize that we are no closer to answering our original question about what leads people to turn to political violence,” Marc Sageman, a psychologist and a longtime government consultant, wrote in the journal Terrorism and Political Violence in 2014. “The same worn-out questions are raised over and over again, and we still have no compelling answers.”

Ample government resourcing and plenty of research attention appear to yield little in advanced knowledge and insight. Why is this? For some, the way the government responds to research findings is the problem.

When researchers do come up with possible answers, the government often disregards them. Not long after the attacks of Sept. 11, 2001, for instance, Alan B. Krueger, the Princeton economist, tested the widespread assumption that poverty was a key factor in the making of a terrorist. Mr. Krueger’s analysis of economic figures, polls, and data on suicide bombers and hate groups found no link between economic distress and terrorism.

More than a decade later, law enforcement officials and government-funded community groups still regard money problems as an indicator of radicalization.

There is also the demand for simple, definitive answers to immediately pressing questions (also known as The Church of What’s Happening Now).

Researchers, too, say they have been frustrated by both the Bush and Obama administrations because of what they say is a preoccupation with research that can be distilled into simple checklists… “They want to be able to do things right now,” said Clark R. McCauley Jr., a professor of psychology at Bryn Mawr College who has conducted government-funded terrorism research for years. “Anybody who offers them something right now, like to go around with a checklist — right now — is going to have their attention.”

“It’s demand driven,” he continued. “The people with guns and badges are so eager to have something. The fact that they could actually do harm? This doesn’t deter them.”

There is also the problem of research that leads to conclusions that are at odds with the prevailing political sentiment or run contrary to institutional interests.

Mr. McCauley said many of his colleagues and peers conducted smart research and drew narrow conclusions. The problem, he said, is that studies get the most attention when they suggest warning signs. Research linking terrorism to American policies, meanwhile, is ignored.

However, the more honest researchers admit that their inability to develop effective modes of inquiry into what are certainly complicated problems plays a role as well.

In 2005, Jeff Victoroff, a University of Southern California psychologist, concluded that the leading terrorism research was mostly just political theory and anecdotes. “A lack of systematic scholarly investigation has left policy makers to design counterterrorism strategies without the benefit of facts,” he wrote in The Journal of Conflict Resolution.

This state of affairs would be problematic enough considering it has been a decade-and-a-half since the events of 11 September 2001 made understanding political violence a national imperative. But it is even more perplexing given that the U.S. government began sponsoring basic research on this topic in the 1950s and 60s. The pioneering work of scholars Ted Gurr and Ivo and Rosalind Feierabend started with U.S. government funding. Gurr published his seminal work Why Men Rebel in 1970. Nearly a half century later, why are we still asking the same questions?

War by Numbers III

The table of contents for the book:

—             Preface                                                                                    6
One          Understanding War                                                                 8
Two          Force Ratios                                                                          15
Three       Attacker versus Defender                                                      22
Four         Human Factors                                                                      24
Five          Measuring Human Factors in Combat: Italy                          27
Six            Measuring Human Factors in Combat: Ardennes & Kursk   40
Seven       Measuring Human Factors in Combat: Modern Wars          55
Eight         Outcome of Battles                                                               67
Nine          Exchange Ratios                                                                  75
Ten           The Combat Value of Superior Situational Awareness        83
Eleven      The Combat Value of Surprise                                           113
Twelve      The Nature of Lower Level Combat                                   135
Thirteen    The Effects of Dispersion on Combat                                150
Fourteen   Advance Rates                                                                  164
Fifteen       Casualties                                                                         171
Sixteen      Urban Legends                                                                 197
Seventeen The Use of Case Studies                                                 248
Eighteen    Modeling Warfare                                                             270
Nineteen    Validation of the TNDM                                                    286
Twenty       Conclusions                                                                     313

Appendix I:   Dupuy’s Timeless Verities of Combat                           317
Appendix II:  Dupuy’s Combat Advance Rate Verities                       322
Appendix III: Dupuy’s Combat Attrition Verities                                 326

Bibliography                                                                                       331

Page numbers are based upon the manuscript and will certainly change. The book is 342 pages and 121,095 words. Definitely a lot shorter than the Kursk book.