
TDI Friday Read: Engaging The Phalanx

The December 2018 issue of Phalanx, a journal published by the Military Operations Research Society (MORS), contains an article by Jonathan K. Alt, Christopher Morey, and Larry Larimer entitled “Perspectives on Combat Modeling.” (The article is paywalled, but limited public access is available via JSTOR.)

Their article was written partly as a critical rebuttal to a TDI blog post originally published in April 2017, which discussed an issue the combat modeling and simulation community has long been aware of but slow to address, known as the “Base of Sand” problem.

Wargaming Multi-Domain Battle: The Base Of Sand Problem

In short, because so little is empirically known about the real-world structure of combat processes and their interactions, modelers have been forced to rely on the judgment of subject matter experts (SMEs) to fill in the blanks. No one really knows whether this blend of empirical data and SME judgment accurately represents combat, because the modeling community has been reluctant to test its models against real-world operational data, a process known as validation.
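The validation loop at issue is simple in principle, even if doing it well is not. As a purely hypothetical sketch (the toy model, field names, and error metric are all invented for illustration, not drawn from any real simulation), it amounts to: run the model on each historical engagement's starting conditions, compare its predicted outcome to the recorded one, and aggregate the error.

```python
# Hypothetical sketch of validating a combat model against historical
# data. The "model" and engagement fields are illustrative stand-ins,
# not any real simulation's API or data.

def validate(model, engagements):
    """Run the model on each historical engagement's starting
    conditions and return the mean absolute percentage error of its
    casualty predictions against the recorded outcomes."""
    errors = []
    for e in engagements:
        predicted = model(e["attacker_strength"], e["defender_strength"])
        actual = e["actual_casualties"]
        errors.append(abs(predicted - actual) / actual)
    return sum(errors) / len(errors)

# Toy "model": assume 2% of the attacking force becomes casualties.
toy_model = lambda atk, dfd: 0.02 * atk

# Invented historical engagements for illustration only.
history = [
    {"attacker_strength": 10000, "defender_strength": 6000, "actual_casualties": 250},
    {"attacker_strength": 5000, "defender_strength": 5000, "actual_casualties": 90},
]

mape = validate(toy_model, history)
```

The hard part, of course, is not the arithmetic but assembling reliable historical engagement data to feed into such a loop in the first place.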

TDI President Chris Lawrence subsequently published a series of blog posts responding to the specific comments and criticisms leveled by Alt, Morey, and Larimer.

How are combat models and simulations tested to see if they portray real-world combat accurately? Are they actually tested?

Engaging the Phalanx

How can we know if combat simulations adhere to strict standards established by the DoD regarding validation? Perhaps the validation reports can be released for peer review.

Validation

Some claim that models of complex combat behavior cannot really be tested against real-world operational experience, but this has already been done. Several times.

Validating Attrition

If only the “physics-based aspects” of combat models are empirically tested, do those models reliably represent real-world combat with humans or only the interactions of weapons systems?

Physics-based Aspects of Combat

Is real-world historical operational combat experience useful only for demonstrating the capabilities of combat models, or is it something the models should be able to reliably replicate?

Historical Demonstrations?

If a Subject Matter Expert (SME) can be substituted for a proper combat model validation effort, then could not an SME simply be substituted for the model? Should not all such models be considered expert judgment quantified?

SMEs

What should be done about the “Base of Sand” problem? Here are some suggestions.

Engaging the Phalanx (part 7 of 7)

Persuading the military operations research community of the importance of research on real-world combat experience in modeling has been an uphill battle with a long history.

Diddlysquat

And the debate continues…

Historians and the Early Era of U.S. Army Operations Research

While perusing Charles Shrader’s fascinating history of the U.S. Army’s experience with operations research (OR), I came across several references to the part played by historians and historical analysis in the early era of that effort.

The ground forces were the last branch of the Army to incorporate OR into their efforts during World War II, lagging behind the Army Air Forces, the technical services, and the Navy. Where the Army was a step ahead, however, was in creating a robust wartime field history documentation program. (After the war, this enabled publication of the U.S. Army in World War II series, known as the “Green Books,” which set a new standard for government-sponsored military histories.)

As Shrader related, the first OR personnel the Army deployed forward in 1944-45 often crossed paths with War Department General Staff Historical Branch field historian detachments. They both engaged in similar activities: collecting data on real-world combat operations, which was then analyzed and used for studies and reports written for the use of the commands to which they were assigned. The only significant difference was in their respective methodologies, with the historians using historical methods and the OR analysts using mathematical and scientific tools.

History and OR after World War II

The usefulness of historical approaches to collecting operational data did not go unnoticed by the OR practitioners, according to Shrader. When the Army established the Operations Research Office (ORO) in 1948, it hired a contingent of historians specifically for the purpose of facilitating research and analysis using WWII Army records, “the most likely source for data on operational matters.”

When the Korean War broke out in 1950, ORO sent eight multi-disciplinary teams, including the historians, to collect operational data and provide analytical support for U.S. forces. By 1953, half of ORO’s personnel had spent time in combat zones. Throughout the 1950s, about 40-43% of ORO’s staff consisted of specialists in the social sciences, history, business, literature, and law. Shrader quoted one leading ORO analyst as noting that “there is reason to believe that the lawyer, social scientist or historian is better equipped professionally to evaluate evidence which is derived from the mind and experience of the human species.”

Among the notable historians who worked at or with ORO was Dr. Hugh M. Cole, an Army officer who had served as a staff historian for General George Patton during World War II. Cole rose to become a senior manager at ORO and later served as vice-president and president of ORO’s successor, the Research Analysis Corporation (RAC). Cole brought in WWII colleague Forrest C. Pogue (best known as the biographer of General George C. Marshall) and Charles B. MacDonald. ORO also employed another WWII field historian, the controversial S. L. A. Marshall, as a consultant during the Korean War. Dorothy Kneeland Clark did pioneering historical analysis on combat phenomena while at ORO.

The Demise of ORO…and Historical Combat Analysis?

By the late 1950s, considerable institutional friction had developed between ORO, the Johns Hopkins University (JHU)—ORO’s institutional owner—and the Army. According to Shrader,

Continued distrust of operations analysts by Army personnel, questions about the timeliness and focus of ORO studies, the ever-expanding scope of ORO interests, and, above all, [ORO director] Ellis Johnson’s irascible personality caused tensions that led in August 1961 to the cancellation of the Army’s contract with JHU and the replacement of ORO with a new, independent research organization, the Research Analysis Corporation [RAC].

RAC inherited ORO’s research agenda and most of its personnel, but changing events and circumstances led Army OR to shift its priorities away from field collection and empirical research on operational combat data in favor of modeling and wargaming in its analyses. As Chris Lawrence described in his history of federally funded Defense Department “think tanks,” the rise and fall of scientific management in DOD, the Vietnam War, social and congressional criticism, and unhappiness among the military services with the analyses led to a retrenchment in military OR by the end of the 1960s. The Army sold RAC and created its own in-house Concepts Analysis Agency (CAA; now known as the Center for Army Analysis).

By the early 1970s, analysts such as RAND’s Martin Shubik and Gary Brewer, as well as John Stockfisch, began to note that the relationships and processes being modeled in the Army’s combat simulations were not based on real-world data, and that empirical research on combat phenomena by the Army OR community had languished. In 1991, Paul Davis and Donald Blumenthal gave this problem a name: the “Base of Sand.”

Validating Attrition

Continuing my comments on the article by Alt, Morey, and Larimer in the December 2018 issue of Phalanx (this is part 3 of 7; see Part 1 and Part 2).

On the first page (page 28), in the third column, they state that:

Models of complex systems, especially those that incorporate human behavior, such as that demonstrated in combat, do not often lend themselves to empirical validation of output measures, such as attrition.

Really? Why not? In fact, isn’t that exactly what you should be validating?

More to the point, people have validated attrition models. Let me list a few cases (this list is not exhaustive):

1. Done by the Center for Army Analysis (CAA) for the Concepts Evaluation Model (CEM) using Ardennes Campaign Simulation Study (ARCAS) data. Take a look at this study done for the Stochastic CEM (STOCEM): https://apps.dtic.mil/dtic/tr/fulltext/u2/a489349.pdf

2. Done in 2005 by The Dupuy Institute for six different casualty estimation methodologies as part of Casualty Estimation Methodologies Studies. This was work done for the Army Medical Department and funded by DUSA (OR). It is listed here as report CE-1: http://www.dupuyinstitute.org/tdipub3.htm

3. Done in 2006 by The Dupuy Institute for the TNDM (Tactical Numerical Deterministic Model) using corps- and division-level data. This effort was funded by Boeing, not the U.S. government. It is discussed in depth in Chapter 19 of my book War by Numbers (pages 299-324), where we show 20 charts from the effort. Let me show you one from page 315:

[Chart: TNDM validation results, War by Numbers, p. 315]

So, this is something that multiple people have done on multiple occasions. It was not too difficult for The Dupuy Institute to do. TRADOC is an organization with around 38,000 military and civilian employees, plus who knows how many contractors. I think this is something they could also do if they had the desire.
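To make concrete the kind of comparison such a validation chart summarizes, here is a hypothetical sketch (the casualty figures are invented for illustration, not taken from any of the studies above): predicted versus recorded casualties for a handful of engagements, reduced to a root-mean-square error and a Pearson correlation coefficient.

```python
import math

# Invented predicted vs. recorded casualty figures for five
# engagements -- illustrative numbers only, not data from any study.
predicted = [1200, 450, 3100, 800, 1500]
actual    = [1000, 500, 2800, 950, 1400]
n = len(predicted)

# Root-mean-square error: typical size of the prediction miss.
rmse = math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)

# Pearson correlation: does the model at least rank engagements
# correctly, even if its absolute numbers are off?
mean_p = sum(predicted) / n
mean_a = sum(actual) / n
cov = sum((p - mean_p) * (a - mean_a) for p, a in zip(predicted, actual)) / n
std_p = math.sqrt(sum((p - mean_p) ** 2 for p in predicted) / n)
std_a = math.sqrt(sum((a - mean_a) ** 2 for a in actual) / n)
r = cov / (std_p * std_a)
```

A high correlation with a large RMSE would tell you the model gets the ordering of outcomes right but is systematically miscalibrated; a low correlation is the more damning result, since no recalibration can fix it.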


What Multi-Domain Operations Wargames Are You Playing? [Updated]

Source: David A. Shlapak and Michael Johnson. Reinforcing Deterrence on NATO’s Eastern Flank: Wargaming the Defense of the Baltics. Santa Monica, CA: RAND Corporation, 2016.


[UPDATE] Several readers have recommended games they have used, or that would be suitable, for simulating Multi-Domain Battle and Operations (MDB/MDO) concepts. These include several classic campaign-level board wargames:

The Next War (SPI, 1976)

NATO: The Next War in Europe (Victory Games, 1983)

For tactical level combat, there is Steel Panthers: Main Battle Tank (SSI/Shrapnel Games, 1996- )

There were also a couple of naval/air oriented games:

Asian Fleet (Kokusai-Tsushin Co., Ltd. (国際通信社) 2007, 2010)

Command: Modern Air Naval Operations (Matrix Games, 2014)

Are there any others folks are using out there?


A Mystics & Statistics reader wants to know what wargames are being used to simulate and explore Multi-Domain Battle and Operations (MDB/MDO) concepts.

There is a lot of MDB/MDO wargaming going on at all levels of the U.S. Department of Defense. Much of it appears to use existing models, simulations, and wargames, such as the U.S. Army Center for Army Analysis’s unclassified Wargaming Analysis Model (C-WAM).

Chris Lawrence recently looked at C-WAM and found that it uses many traditional board wargaming elements, including methodologies for determining combat results, casualties, and breakpoints that have been shown unable to replicate real-world outcomes (a.k.a. the “Base of Sand” problem).

C-WAM 1

C-WAM 2

C-WAM 3

C-WAM 4 (Breakpoints)

There is also the wargame RAND used to examine scenarios for a potential Russian invasion of the Baltic States.

Wargaming the Defense of the Baltics

Wargaming at RAND

What other wargames, models, and simulations are being used out there? Are there any commercial wargames incorporating MDB/MDO elements into their gameplay? What methodologies are being used to portray MDB/MDO effects?