An article appeared this week on the Strategy Page that, while a little rambling and unfocused, hits on a few points of importance to us. The article is here: Murphy’s Law: What is Real on the Battlefield. Not sure of the author. But let me make a few rambling and unfocused comments of my own on the article.
First they name-checked Trevor Dupuy. As they note: “Some post World War II historians had noted and measured the qualitative differences but their results were not widely recognized. One notable practitioner of this was military historian and World War II artillery officer Trevor Dupuy.”
“Not widely recognized” is kind of an understatement. In many cases, his work was actively resisted, drew considerable criticism (some of it outright false), and was often arrogantly dismissed out of hand by people who apparently knew better. This is why four chapters of my book War by Numbers focus on measuring human factors.
I never understood the arguments from combat analysts and modelers who did not want to measure the qualitative differences between military forces. I would welcome anyone who does not think this is useful to make that argument on this blog or at our historical analysis conference. The fact of the matter is that Trevor Dupuy’s work was underfunded and under-resourced throughout the 33 years he pursued this research. His companies were always on the verge of extinction, kept going only by his force of will.
Second, they discussed validation and the U.S. DOD’s failure to take it into account. As they put it: “But, in general, validation was not a high priority and avoided as much as possible during peacetime.” They discuss this as the case in the 1970s, but it was also true in the 1980s, the 1990s, and into the current century. In my first meeting at CAA (the U.S. Army Concepts Analysis Agency) in early 1987, a group of analysts showed up for the purpose of getting the Ardennes Campaign Simulation Data Base (ACSDB) cancelled. At that time there was open hostility within the analytical community to even assembling the data needed to conduct a validation. We have discussed the need for validation a few times before, here:
Summation of our Validation Posts | Mystics & Statistics (dupuyinstitute.org)
TDI Friday Read: Engaging The Phalanx | Mystics & Statistics (dupuyinstitute.org)
TDI Friday Read: Battalion-Level Combat Model Validation | Mystics & Statistics (dupuyinstitute.org)
No Action on Validation In the 2020 National Defense Act Authorization | Mystics & Statistics (dupuyinstitute.org)
and in Chapters 18 and 19 of War by Numbers.
Nominally, I am somewhat of a validation expert. I have created four-plus large validation databases: the Ardennes Campaign Simulation Data Base, the Kursk Data Base, the Battle of Britain Data Base (primarily done by Richard Anderson), and the expansion of the various DuWar databases. I have also conducted three validations: the fully documented battalion-level validation of the TNDM (see International TNDM Newsletters, Volume I, numbers 2 – 6, at http://www.dupuyinstitute.org/tdipub4.htm); the fully documented test of various models in our report CE-1, Casualty Estimation Methodologies Study (May 2005), at http://www.dupuyinstitute.org/tdipub3.htm; and the fully documented test of division- and corps-level combat at Kursk using the TNDM (see Chapter 19 of War by Numbers and reports FCS-1 and FCS-2 here: http://www.dupuyinstitute.org/tdipub3.htm). That said, no one in DOD has ever invited me to discuss validation. I don’t think they would really agree with what I had to say. On the other hand, if there have been some solid, documented validations conducted recently by DOD, then I certainly would invite them to post about it on our blog or present them at our Historical Analysis conference. There has been a tendency for certain agencies to claim they have done VV&A (verification, validation, and accreditation) and sensitivity tests, but one never seems to find a detailed description of the validation they have actually conducted.
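To make concrete what I mean by a validation, running a model against historical engagements and comparing its predictions to the recorded outcomes, here is a minimal sketch. The engagement names, the predicted and recorded casualty figures, and the factor-of-two scoring band are all invented for illustration; they are not the TNDM's actual inputs, outputs, or methodology.

```python
# A minimal sketch of the core of a model validation: compare model
# predictions against recorded historical outcomes over a set of
# engagements. All numbers below are invented for illustration.

engagements = [
    # (name, predicted attacker casualties, recorded attacker casualties)
    ("Engagement A", 120, 95),
    ("Engagement B", 300, 410),
    ("Engagement C", 75, 70),
]

def within_factor(predicted: float, actual: float, factor: float = 2.0) -> bool:
    """True if the prediction falls within a factor-of-`factor` band of history."""
    return actual / factor <= predicted <= actual * factor

# Summary statistics of the kind a documented validation would report.
errors = [abs(p - a) / a for _, p, a in engagements]
hits = sum(within_factor(p, a) for _, p, a in engagements)

print(f"Mean absolute percentage error: {100 * sum(errors) / len(errors):.1f}%")
print(f"Predictions within a factor of two: {hits}/{len(engagements)}")
```

The point of a fully documented validation is that both the underlying historical data and summary statistics like these are published, so others can check, reproduce, or challenge the comparison.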
I will not be specifically discussing these databases or validation at the Historical Analysis conference, but my discussion of the subject can be found in War by Numbers and in over 40 posts on this blog.
Fascinating stuff. This prompts a question that’s been in the back of my mind since your “A Second Independent Effort to use the QJM/TNDM to Analyze the War in Ukraine” post. It seems to me that North American perspectives on Russian threats in Eastern Europe are heavily influenced by the 2015 RAND report “Reinforcing Deterrence on NATO’s Eastern Flank: Wargaming the Defense of the Baltics.” (Also by the 2015 DRAFT Karber Report. I’ve never seen a final version, but plenty of references to the draft, rarely with any corroborating citations; regardless, it offers bogeyman anecdotes that have been easy to latch onto, whether they’re actually accurate or not.) Anyway, RAND uses some kind of wargame modeling, which gets to my question: Do you know if RAND’s methods are at all similar to your Tactical Numerical Deterministic Model (TNDM)? If not, have you ever run something similar to RAND’s analysis concerning the Baltics?
Let me answer this backwards:
1. We have never conducted an analysis of the situation in the Baltics. This would take some effort, and we are not funded for such an effort.
2. We have discussed this RAND effort before:
Here: https://dupuyinstitute.org/2016/02/05/wargaming-the-defense-of-the-baltics/
and here:
https://dupuyinstitute.org/2016/06/14/wargaming-the-defense-of-the-baltics-the-debate-continues/
3. RAND’s model JICM was inspired by the TNDM. This is discussed by its designer, Paul Davis, in this International TNDM Newsletter article:
http://www.dupuyinstitute.org/pdf/v2n4.pdf
4. I do not think that JICM was used for the RAND modeling effort in the Baltics. The “JICM map” is not detailed enough for such a tactical analysis. Instead, they conducted their analysis with what looks like an old AH/SPI-style hex boardgame. Not sure what they used for attrition methodology.
Not long ago, I asked Dave Ochmanek about the wargaming methodology RAND used back in 2016 to game a Russian invasion of the Baltics (https://www.rand.org/content/dam/rand/pubs/research_reports/RR1200/RR1253/RAND_RR1253.pdf). At the time, they said they would provide more detail on it, but I guess they never got around to it. Anyway, Ochmanek said that the game used was “home-brewed,” using the standard board-game methodology, with counters, hexes, dice rolls, etc. The combat results tables (CRTs) were all developed by in-house subject-matter experts (SMEs) for each type of combat (ground, air, space, etc.), using classified sources.
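For readers who have never pushed cardboard counters around a hex map, the core of that methodology is an odds-ratio CRT resolved with a die roll. Here is a minimal sketch; the odds columns and outcomes below are invented for illustration and are not RAND's classified tables.

```python
import random

# A toy combat results table (CRT) of the classic AH/SPI type: compute the
# attacker:defender odds ratio, round down to a table column, roll a die,
# and read off the result. Columns and outcomes are invented for illustration.

CRT = {
    # odds column: results indexed by die roll 1-6
    "1:2": ["AE", "AE", "AR", "AR", "EX", "DR"],
    "1:1": ["AE", "AR", "AR", "EX", "DR", "DR"],
    "2:1": ["AR", "EX", "EX", "DR", "DR", "DE"],
    "3:1": ["EX", "DR", "DR", "DR", "DE", "DE"],
}
# AE/AR = attacker eliminated/retreats, EX = exchange,
# DR/DE = defender retreats/eliminated.

def resolve(attack_strength: float, defense_strength: float, rng=random) -> str:
    """Resolve one attack: pick the odds column, roll 1d6, look up the result."""
    ratio = attack_strength / defense_strength
    if ratio >= 3:
        column = "3:1"
    elif ratio >= 2:
        column = "2:1"
    elif ratio >= 1:
        column = "1:1"
    else:
        column = "1:2"
    roll = rng.randint(1, 6)
    return CRT[column][roll - 1]

print(resolve(9, 4))  # a 2:1 attack; the outcome depends on the die roll
```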
As far as combat modeling goes, the Army continues to use JICM for campaign analysis, but the Navy, Marines, Air Force, and DOD CAPE use STORM. Both JICM and STORM use COSAGE 2.0 for ground combat attrition calculation.
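I have not seen COSAGE’s internals published, so, purely as a generic illustration of the kind of attrition step a module like that hands to a campaign model, here is a simple Lanchester-style calculation. The kill rates and force sizes are invented, and this is emphatically not COSAGE’s actual method.

```python
# A generic illustration of a ground-attrition time step of the sort a
# campaign model consumes. NOT COSAGE's actual algorithm. Lanchester
# square-law style: each side's losses per step are proportional to the
# opposing side's surviving strength. All rates are invented.

def attrition_step(blue: float, red: float,
                   blue_kill_rate: float = 0.04,
                   red_kill_rate: float = 0.03) -> tuple[float, float]:
    """One time step of mutual attrition; returns surviving strengths."""
    blue_losses = red_kill_rate * red
    red_losses = blue_kill_rate * blue
    return max(blue - blue_losses, 0.0), max(red - red_losses, 0.0)

blue, red = 1000.0, 800.0
for day in range(1, 6):
    blue, red = attrition_step(blue, red)
    print(f"Day {day}: blue={blue:.0f}, red={red:.0f}")
```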