Category: Modeling, Simulation & Wargaming

The One Board Wargame To Rule Them All

The cover of SPI’s monster wargame, The Campaign For North Africa: The Desert War 1940-43 [SPI]
[This post was originally published on 22 September 2017.]

Even as board gaming enjoys a resurgence in the age of ubiquitous computer gaming, it appears, sadly, that table-top wargaming continues its long, slow decline in popularity from its 1970s-80s heyday. Pockets of enthusiasm remain, however, and there is new advocacy for wargaming as a method of professional military education.

Luke Winkie has written an ode to that bygone era through a look at the legacy of The Campaign For North Africa: The Desert War 1940-43, a so-called “monster” wargame created by designer Richard Berg and published by Simulations Publications, Inc. (SPI) in 1979. It represents the entire North African theater of war at the company/battalion level, played on five maps that extend over 10 feet and are accompanied by 70 charts and tables. The rule book encompasses three volumes, and there are over 1,600 cardboard counter playing pieces. As befits the real conflict, the game places a major emphasis on managing logistics and supply, which can either enable or inhibit combat options. The rule book recommends that each side consist of five players: an overall commander, a battlefield commander, an air power commander, one dedicated to managing rear area activities, and one devoted to overseeing logistics.

The game map. [BoardGameGeek]

Given that completing a full game is estimated to require 1,500 hours, actually playing The Campaign For North Africa is something that would appeal only to committed, die-hard wargame enthusiasts (known as grognards, Napoleonic-era slang for “grumblers,” i.e. veteran soldiers). As the game blurb suggests, the infamous monster wargames were an effort to appeal to a desire for a “super detailed, intensive simulation specially designed for maximum realism,” or as realistic as war on a tabletop can be, anyway. Berg admitted that he intentionally designed the game to be “wretched excess.”

Although The Campaign For North Africa was never popular, it did acquire a distinct notoriety not entirely confined to those of us nostalgic for board wargaming’s illustriously nerdy past. It retains a dedicated fanbase. Winkie’s article describes the recent efforts of Jake, a 16-year-old Minnesotan who, unable to afford a second-hand copy of the game priced at $400, printed out the maps and rule book for himself. He and a dedicated group of friends intend to complete a game before Jake heads off to college in two years. Berg himself harbors few romantic sentiments about wargaming or his past work, having sold his own last copy of the game several years ago because a “whole bunch of dollars seemed to be [a] more worthwhile thing to have.” The greatness of SPI’s game offerings has been tempered by the realization that the company died for its business sins.

However, some folks of a certain age relate more to Jake’s youthful enthusiasm and the attraction to a love of structure and complexity embodied in The Campaign For North Africa‘s depth of detail. These elements led many of us on to a scholarly study of war and warfare. Some of us may have discovered the work of Trevor Dupuy in an advertisement for Numbers, Predictions and War: Using History to Evaluate Combat Factors and Predict the Outcome of Battles in the pages of SPI’s legendary Strategy & Tactics magazine, way back in the day.

TDI Friday Read: Engaging The Phalanx

The December 2018 issue of Phalanx, the journal of the Military Operations Research Society (MORS), contains an article by Jonathan K. Alt, Christopher Morey, and Larry Larimer, entitled “Perspectives on Combat Modeling” (the article is paywalled, but limited public access is available via JSTOR).

Their article was written partly as a critical rebuttal to a TDI blog post originally published in April 2017, which discussed an issue of which the combat modeling and simulation community has long been aware but slow to address, known as the “Base of Sand” problem.

Wargaming Multi-Domain Battle: The Base Of Sand Problem

In short, because so little is empirically known about the real-world structures of combat processes and the interactions of these processes, modelers have been forced to rely on the judgement of subject matter experts (SMEs) to fill in the blanks. No one really knows if the blend of empirical data and SME judgement accurately represents combat because the modeling community has been reluctant to test its models against data on real world experience, a process known as validation.

TDI President Chris Lawrence subsequently published a series of blog posts responding to the specific comments and criticisms leveled by Alt, Morey, and Larimer.

How are combat models and simulations tested to see if they portray real-world combat accurately? Are they actually tested?

Engaging the Phalanx

How can we know if combat simulations adhere to strict standards established by the DoD regarding validation? Perhaps the validation reports can be released for peer review.

Validation

Some claim that models of complex combat behavior cannot really be tested against real-world operational experience, but this has already been done. Several times.

Validating Attrition

If only the “physics-based aspects” of combat models are empirically tested, do those models reliably represent real-world combat with humans or only the interactions of weapons systems?

Physics-based Aspects of Combat

Is real-world historical operational combat experience useful only for demonstrating the capabilities of combat models, or is it something the models should be able to reliably replicate?

Historical Demonstrations?

If a Subject Matter Expert (SME) can be substituted for a proper combat model validation effort, then could not a SME simply be substituted for the model? Should not all models be considered expert judgement quantified?

SMEs

What should be done about the “Base of Sand” problem? Here are some suggestions.

Engaging the Phalanx (part 7 of 7)

Persuading the military operations research community of the importance of research on real-world combat experience in modeling has been an uphill battle with a long history.

Diddlysquat

And the debate continues…

Diddlysquat

This blog post was written in response to one of Richard Anderson’s comments on this blog post:

Validating Attrition

Richard Anderson used to work with me at Trevor Dupuy’s company DMSI and later at The Dupuy Institute. He has been involved in this business since 1987, although he has been away from it for over a decade.

His comment was: “Keep fighting the good fight Chris, but it remains an uphill battle.”

It is an uphill battle. For a brief moment, from 1986-1989 it appeared that the community was actually trying to move forward on the model validation and “base of sand” type issues. This is discussed to some extent in Chapter 18 of War by Numbers (pages 295-298).

In 1986 the office of the DUSA (OR)* reviewed the U.S. Army Concepts Analysis Agency’s (CAA) casualty estimation process in their models. This generated considerable comment and criticism of how it was being done. In 1987 CAA, with funding, I gather, from DUSA (OR), issued the contract to develop the Ardennes Campaign Simulation Data Base (ACSDB). I was the program manager for that effort. That same year they issued the contract to study breakpoints (forced changes in posture), in which I was also involved.

So we had the Army conducting an internal review of their models and finding them wanting. They then issued a contract to validate them and another contract to examine the issue of breakpoints, which had not been seriously studied since the 1950s. This was at the initiative of Vandiver and Walt Hollis.

After that, everything kind of fell apart. The U.S. defense budget peaked in 1989 and the budget cuts started. So, even though the breakpoints study got a good start, there was no follow-on contract. The ACSDB ended up being used for a casual top-level validation effort that did not get into the nuts and bolts of the models. The dozens of problems identified in the internal DUSA (OR) report resulted in no corrective action being taken (as far as I know). Basically, the budget was declining and maintaining hardware was more important than studies and analysis.

There was a resurgence of activity in the early 1990s, which is when the Kursk Data Base (KDB) was funded. But that was never even used for a validation effort (although it was used to test Lanchester). But funding was marginal during most of the 1990s, and the modeling community did little to improve their understanding and analysis of combat.
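(As an aside, for readers unfamiliar with what “testing Lanchester” against a campaign database involves, here is a minimal, hypothetical Python sketch, not the actual procedure used with the KDB: estimate the two square-law attrition coefficients from daily strength and loss records, then check how well the fitted equations reproduce the recorded losses. The daily figures below are invented placeholders, not Kursk Data Base values.)

    # Illustrative sketch: fitting Lanchester's square law (dB/dt = -r*R,
    # dR/dt = -b*B) to daily strength and loss records. All numbers are
    # invented placeholders, not values from the Kursk Data Base.

    blue_strength = [30000, 29200, 28500, 27900]   # Blue strength at start of each day
    red_strength  = [24000, 23500, 23100, 22800]   # Red strength at start of each day
    blue_losses   = [800, 700, 600, 550]           # Blue losses suffered each day
    red_losses    = [500, 400, 300, 280]           # Red losses suffered each day

    def fit_coefficient(losses, opponent_strength):
        # Least-squares estimate: under the square law, a side's daily losses
        # are proportional to the opposing side's strength.
        num = sum(l * s for l, s in zip(losses, opponent_strength))
        den = sum(s * s for s in opponent_strength)
        return num / den

    r = fit_coefficient(blue_losses, red_strength)   # Red effectiveness against Blue
    b = fit_coefficient(red_losses, blue_strength)   # Blue effectiveness against Red

    for day, (B, R, bl, rl) in enumerate(zip(blue_strength, red_strength,
                                             blue_losses, red_losses), start=1):
        print(f"Day {day}: predicted Blue losses {r * R:6.0f} (actual {bl}), "
              f"predicted Red losses {b * B:6.0f} (actual {rl})")

Testing the law then amounts to asking whether any choice of coefficients tracks the recorded daily losses across the campaign.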

The nature of the missions changed after 9/11/2001 and The Dupuy Institute ended up focused on insurgencies (see America’s Modern Wars). Budget again started declining in 2009 and then sequestration arrived, killing everything.

The end result was that there was a period from 1986-1989 when the U.S. modeling community appeared to have identified their problems and were taking corrective action. Since 1989, for all practical purposes, diddlysquat.

So…..30 years later…..I am still fighting the “good fight.” But I am not optimistic. Nothing is going to happen unless people at senior levels fund something to happen. For the price of a Stryker or two, a huge amount of productive and useful work could be done. But to date, having an extra Stryker or two has been more important to the army.

For this year and next year the U.S. Army has increasing budgets. If they wanted to take corrective action….now would be the time. I suspect that bureaucratic inertia will have more weight than any intellectual arguments that I can make. Still, I have to give it one last try.

 

* DUSA (OR) = The Deputy Under Secretary of the Army (Operations Research). It was headed by Walt Hollis forever, but was completely shut down in recent times.

Paul Davis (RAND) on Bugaboos

Just scanning the MORS Wargaming Special Meeting, October 2016, Final Report, January 31, 2017. The link to the 95-page report is here:

http://www.mors.org/Portals/23/Docs/Events/2016/Wargaming/MORS%20Wargaming%20Workshop%20Report.pdf?ver=2017-03-01-151418-980

There are a few comments from Dr. Paul Davis (RAND) starting on page 13 that are worth quoting:

I was struck through the workshop by a schism among attendees. One group believes, intuitively and viscerally, that human gaming–although quite powerful–is just a subset of modeling in general. The other group believes, just as intuitively and viscerally, that human gaming is very different….

The impression had deep roots. Writings in the 1950s about defense modeling and systems analysis emphasized being scientific, rigorous, quantitative, and tied to mathematics. This was to be an antidote for hand-waving subjective assertions. That desire translated into an emphasis on “closed” models with no human interactions, which allowed reproducibility. Most DoD-level models have often been at theater or campaign level (e.g., IDAGAM, TACWAR, JICM, Thunder, and Storm). Many represent combat as akin to huge armies grinding each other down, as in the European theaters of World Wars I and II. Such models are quite large, requiring considerable expertise and experience to understand.

Another development was standardized scenarios and data sets, with the term “data” referring to everything from facts to highly uncertain assumptions about scenario, commander decisions, and battle outcomes. Standardization allowed common baselines, which assured that policymakers would receive reports with common assumptions rather than diverse hidden assumptions chosen to favor advocates’ programs. The baselines also promoted joint thinking and assured a level playing field for joint analysis. Such reasons were prominent in DoD’s Analytic Agenda (later called Support for Strategic Analysis). Not surprisingly, however, the tendency was often to be disdainful of such other forms of modeling as the history-based formula models of Trevor Dupuy and the commercial board games of Jim Dunnigan and Mark Herman. These alternative approaches were seen as somehow “lesser,” because they were allegedly less rigorous and scientific. Uncertainty analysis has been seriously inadequate. I have demurred on these matters for many years, as in the “Base of Sand” paper in 1993 and more recent monographs available on the RAND website….

The quantitative/qualitative split is a bugaboo. Many “soft” phenomena can be characterized with meaningful, albeit imprecise, numbers.

The Paul Davis “Base of Sand” paper from 1991 is here: https://www.rand.org/pubs/notes/N3148.html

 

Engaging the Phalanx (part 7 of 7)

Hopefully this is my last post on the subject (but I suspect not, as I expect a public response from the three TRADOC authors). This is in response to the article in the December 2018 issue of the Phalanx by Alt, Morey and Larimer (see Part 1, Part 2, Part 3, Part 4, Part 5, Part 6). The issue here is the “Base of Sand” problem, which is what the original blog post that “inspired” their article was about:

Wargaming Multi-Domain Battle: The Base Of Sand Problem

While the first paragraph of their article addresses this blog post, and they reference Paul Davis’ 1992 Base of Sand paper in their footnotes (but not John Stockfish’s paper, which offers an equally valid criticism), they do not discuss the “Base of Sand” problem further. They do not actually state whether this is or is not a problem. I gather from this notable omission that they do in fact understand that it is a problem, but being employees of TRADOC they are limited as to what they can publicly say. I am not.

I do address the “Base of Sand” problem in my book War by Numbers, Chapter 18. It has also been addressed in a few other posts on this blog. We are critics because we do not see significant improvement in the industry. In some cases, we are seeing regression.

In the end, I think the best solution for the DOD modeling and simulation community is not to “circle the wagons” and defend what they are currently doing, but instead to acknowledge the limitations and problems they have and undertake a corrective action program. This corrective action program would involve: 1) properly addressing how to measure and quantify certain aspects of combat (for example, breakpoints), and 2) validating these aspects, and the combat models they are part of, using real-world combat data. This would be an iterative process, as you develop and then test the model, then further develop it, and then test it again. This moves us forward. It is a more valuable approach than just “circling the wagons.” As these models and simulations are being used to analyze processes that may or may not make us fight better, and may or may not save American service members’ lives, I think it is important enough to do right. That is what we need to be focused on, not squabbling over a blog post (or seven).
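As a rough illustration of what that iterative develop-and-test cycle could look like, here is a hypothetical Python sketch. The “model” is deliberately trivial (a single casualty-rate coefficient) and the engagement records are invented; the point is only the workflow of calibrating on one set of historical engagements and then validating against another.

    # Hypothetical sketch of the develop -> validate -> further develop cycle.
    # The model and the engagement records below are invented placeholders.

    class ToyCasualtyModel:
        def __init__(self, rate=0.05):
            self.rate = rate   # fraction of engaged strength lost per engagement

        def predict_casualties(self, strength):
            return self.rate * strength

        def calibrate(self, engagements):
            # Develop/refine: refit the rate to the development data.
            self.rate = (sum(e["actual_casualties"] for e in engagements) /
                         sum(e["strength"] for e in engagements))

    def validation_error(model, engagements):
        # Validate: mean absolute percentage error against held-out
        # real-world (historical) engagements.
        errors = [abs(model.predict_casualties(e["strength"]) - e["actual_casualties"])
                  / e["actual_casualties"] for e in engagements]
        return sum(errors) / len(errors)

    development_set = [{"strength": 12000, "actual_casualties": 600},
                       {"strength": 8000,  "actual_casualties": 350}]
    validation_set  = [{"strength": 15000, "actual_casualties": 800},
                       {"strength": 6000,  "actual_casualties": 250}]

    model = ToyCasualtyModel()
    model.calibrate(development_set)
    print(f"validation error: {validation_error(model, validation_set):.1%}")

In practice the cycle repeats: extend the model (breakpoints, human factors, whatever is being added), then validate the extended model against further real-world data before relying on its output.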

SMEs

Continuing my comments on the article in the December 2018 issue of the Phalanx by Alt, Morey and Larimer (this is part 6 of 7; see Part 1, Part 2, Part 3, Part 4, Part 5).

SMEs….is a truly odd-sounding acronym that means Subject Matter Experts. The authors talk about them extensively in their article, and with this I have no problem. I do want to make three points related to that:

  1. A SME is not a substitution for validation.
  2. In some respects, the QJM (Quantified Judgment Model) is a quantified and validated SME.
  3. How do you know that the SME is right?

If you can substitute a SME for a proper validation effort, then perhaps you could just substitute the SME for the model. This would save time and money. If your SME is knowledgeable enough to sprinkle holy water on the model and bless its results, why not just skip the model and ask the SME? We could certainly simplify and speed up analysis by removing the models and just asking our favorite SME. The weaknesses of this approach are obvious.

Then there is Trevor N. Dupuy’s Quantified Judgment Model (QJM) and Quantified Judgment Method of Analysis (QJMA). This is, in some respects, a SME quantified. Actually, it was a board of SMEs working from a series of historical studies (the list of studies starts here: http://www.dupuyinstitute.org/tdipubs.htm). These SMEs developed a set of values for different situations and then inserted them into a model. They then validated the model against historical data (also known as real-world combat data). While the QJM has come under considerable criticism from elements of the Operations Research community…..if you are using SMEs, then in fact you are using something akin to, but less rigorous than, Trevor Dupuy’s Quantified Judgment Method of Analysis.
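To make the “quantified SME” point concrete, here is a deliberately simplified, hypothetical Python sketch of a QJM-style calculation. The factor names and multiplier values are invented for illustration; they are not Dupuy’s actual tables or the real QJM formulas.

    # Simplified illustration of a "quantified SME": SME-derived multipliers
    # for situational factors scale a base force strength, and the resulting
    # force ratio is compared with the recorded historical outcome.
    # Factors and values below are invented, not Dupuy's published tables.

    attacker_factors = {"air superiority": 1.1, "surprise": 1.3}
    defender_factors = {"prepared defense": 1.4, "rough terrain": 1.2}

    def combat_power(base_strength, factors):
        power = base_strength
        for multiplier in factors.values():
            power *= multiplier
        return power

    attacker_power = combat_power(30000, attacker_factors)
    defender_power = combat_power(18000, defender_factors)
    ratio = attacker_power / defender_power

    predicted = "attacker success" if ratio > 1.0 else "defender success"
    recorded = "attacker success"   # outcome of the (hypothetical) historical engagement

    print(f"force ratio {ratio:.2f}: predicted {predicted}, recorded {recorded}")

The difference between this and simply asking a SME is the validation step: once the factor values are fixed, the same calculation can be run against hundreds of historical engagements and scored, which is exactly what cannot be done with an unquantified expert opinion.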

This last point, how do we know that the SME is right, is significant. How do you test your SMEs to ensure that what they are saying is correct? With another SME, or a board of SMEs? Maybe a BOGSAT (a “bunch of guys sitting around a table”)? Can you validate SMEs? There are limits to SMEs. In the end, you need a validated model.
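One partial answer to “can you validate SMEs?” is to treat their judgments like any other model output and score them against recorded outcomes, for instance with a Brier score on probability estimates. The sketch below uses invented numbers and is only meant to show that the question is answerable in principle, not that anyone in the community actually does this.

    # Scoring a SME's probability judgments against what actually happened,
    # using the Brier score (lower is better). All numbers are invented.

    sme_forecasts = [0.8, 0.6, 0.9, 0.3]   # SME's estimated probability of attack success
    outcomes      = [1,   0,   1,   0]     # 1 = the attack succeeded, 0 = it failed

    brier = sum((f - o) ** 2 for f, o in zip(sme_forecasts, outcomes)) / len(outcomes)
    print(f"Brier score: {brier:.3f}")
    # 0.0 is perfect; always forecasting 50% would score 0.25.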

 

Historical Demonstrations?

Photo from the 1941 Louisiana Maneuvers

Continuing my comments on the article in the December 2018 issue of the Phalanx by Alt, Morey and Larimer (this is part 5 of 7; see Part 1, Part 2, Part 3, Part 4).

The authors of the Phalanx article then make the snarky statement that:

Combat simulations have been successfully used to replicate historical battles as a demonstration, but this is not a requirement or their primary intended use.

So, in three sentences they say that combat models using human factors are difficult to validate, then that physics-based models are validated, and then that running a battle through a model is a demonstration. Really?

Does such a demonstration show that the model works or does not work? Does such a demonstration show that they can get a reasonable outcome when using real-world data? The definition of validation that they gave on the first page of their article is:

The process of determining the degree to which a model or simulation with its associated data is an accurate representation of the real world from the perspective of its intended use is referred to as validation.

This is a perfectly good definition of validation. So where does one get that real-world data? If you are using the model to measure combat effects (as opposed to physical effects), then you probably need to validate it against real-world combat data. This means historical combat data, whether it is from 3,400 years ago or 1 second ago. You need to assemble the data from a (preferably recent) combat situation and run it through the model.
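Concretely, “assembling the data” means reducing each historical engagement to a structured record of the inputs a model needs and the outcomes it is supposed to reproduce. The record below is a hypothetical illustration of such a structure in Python; it is not the actual schema of the ACSDB, the KDB, or any other TDI database.

    # Hypothetical structure for one engagement record in a validation database.
    # Field names and values are illustrative only.

    engagement = {
        "name": "Example engagement, 1944",
        "duration_days": 3,
        "attacker": {"personnel": 14000, "tanks": 120, "artillery_tubes": 96},
        "defender": {"personnel": 9000,  "tanks": 40,  "artillery_tubes": 60},
        "terrain": "rolling, mixed",
        "defender_posture": "prepared defense",
        # Recorded outcomes the model should be able to reproduce:
        "attacker_casualties": 1100,
        "defender_casualties": 1700,
        "attacker_advance_km": 12,
        "outcome": "attacker success",
    }

Validation then consists of feeding the input half of many such records into the model and comparing its outputs with the recorded outcomes.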

This has been done. The Dupuy Institute does not exist in a vacuum. We have assembled four sets of combat data bases for use in validation. They are:

  1. The Ardennes Campaign Simulation Data Base
  2. The Kursk Data Base
  3. The Battle of Britain Data Base
  4. Our various division-level, battalion-level and company-level engagement databases.

Now, the reason we have mostly used World War II data is that you can get detailed data from the unit records of both sides. To date….this is not possible for almost any war since 1945. But, if your high-tech model cannot predict lower-tech combat….then you probably also have a problem modeling high-tech combat. So, it is certainly a good starting point.

More to the point, this was work funded in part by the Center for Army Analysis, the Deputy Under Secretary of the Army (Operations Research), and the Office of the Secretary of Defense, Planning, Analysis and Evaluation. Hundreds of thousands of dollars were spent developing some of these databases. This was not done just for “demonstration.” This was not done as a hobby. If their sentence was meant to belittle the work of TDI, which is how I do interpret it, then it also belittles the work of CAA, DUSA (OR) and OSD PA&E. I am not sure that is the three authors’ intent.

Physics-based Aspects of Combat

Continuing my comments on the article in the December 2018 issue of the Phalanx by Alt, Morey and Larimer (this is part 4 of 7; see Part 1, Part 2, Part 3).

The next sentence in the article is interesting. After saying that validating models incorporating human behavior is difficult (and therefore should not be done?) they then say:

In combat simulations, those model components that lend themselves to empirical validation, such as the physics-based aspects of combat, are developed, validated, and verified using data from an accredited source.

This is good. But the problem is that it limits one to validating only those parts of the model that do not include humans. If one is comparing a weapon system to a weapon system, as they discuss later, this is fine. On the other hand, if one is comparing units in combat to units in combat…then there are invariably humans involved. Even if you are comparing weapon systems versus weapon systems in an operational environment, there are humans involved. Therefore, you have to address human factors. Once you have gone beyond simple weapon-versus-weapon comparisons, you need to use models that are gaming situations involving humans. I gather from the previous sentence (see part 3 of 7) and this one that they are using un-validated models. Their extended discussion of SMEs (Subject Matter Experts) that follows just reinforces that impression.

But, TRADOC is the training and doctrine command. They are clearly modeling something other than just the “physics-based aspect of combat.”

Validating Attrition

Continuing to comment on the article in the December 2018 issue of the Phalanx by Alt, Morey and Larimer (this is part 3 of 7; see Part 1, Part 2)

On the first page (page 28) in the third column they make the statement that:

Models of complex systems, especially those that incorporate human behavior, such as that demonstrated in combat, do not often lend themselves to empirical validation of output measures, such as attrition.

Really? Why can’t you? In fact, isn’t that exactly the model you should be validating?

More to the point, people have validated attrition models. Let me list a few cases (this list is not exhaustive):

1. Done by Center for Army Analysis (CAA) for the CEM (Concepts Evaluation Model) using Ardennes Campaign Simulation Study (ARCAS) data. Take a look at this study done for Stochastic CEM (STOCEM): https://apps.dtic.mil/dtic/tr/fulltext/u2/a489349.pdf

2. Done in 2005 by The Dupuy Institute for six different casualty estimation methodologies as part of Casualty Estimation Methodologies Studies. This was work done for the Army Medical Department and funded by DUSA (OR). It is listed here as report CE-1: http://www.dupuyinstitute.org/tdipub3.htm

3. Done in 2006 by The Dupuy Institute for the TNDM (Tactical Numerical Deterministic Model) using Corps and Division-level data. This effort was funded by Boeing, not the U.S. government. This is discussed in depth in Chapter 19 of my book War by Numbers (pages 299-324) where we show 20 charts from such an effort. Let me show you one from page 315:

[Chart from War by Numbers, page 315]

So, this is something that multiple people have done on multiple occasions. It is not so difficult that The Dupuy Institute was not able to do it. TRADOC is an organization with around 38,000 military and civilian employees, plus who knows how many contractors. I think this is something they could also do if they had the desire.

 

Validation

Continuing to comment on the article in the December 2018 issue of the Phalanx by Jonathan Alt, Christopher Morey and Larry Larimer (this is part 2 of 7; see part 1 here).

On the first page (page 28), at the top of the third column, they make the rather declarative statement that:

The combat simulations used by military operations research and analysis agencies adhere to strict standards established by the DoD regarding verification, validation and accreditation (Department of Defense, 2009).

Now, I have not reviewed what has been done on verification, validation and accreditation since 2009, but I did do a few fairly exhaustive reviews before then. One such review is written up in depth in The International TNDM Newsletter. It is Volume 1, No. 4 (February 1997). You can find it here:

http://www.dupuyinstitute.org/tdipub4.htm

The newsletter includes a letter dated 21 January 1997 from the Scientific Advisor to the CG (Commanding General) at TRADOC (Training and Doctrine Command). This is the same organization that the three gentlemen who wrote the article in the Phalanx work for. The Scientific Advisor sent a letter out to multiple commands to try to flag the issue of validation (the letter is on page 6 of the newsletter). My understanding is that he received few responses (I saw only one; it was from Leavenworth). After that, I gather there was no further action taken. This was a while back, so maybe everything has changed, as I gather they are claiming with that declarative statement. I doubt it.

The issue to me is validation. Verification is often done. Actual validations are a lot rarer. In 1997, this was my list of combat models in the industry that had been validated (the list is on page 7 of the newsletter):

1. Atlas (using 1940 Campaign in the West)

2. Vector (using undocumented turning runs)

3. QJM (by HERO using WWII and Middle-East data)

4. CEM (by CAA using Ardennes Data Base)

5. SIMNET/JANUS (by IDA using 73 Easting data)

 

Now, in 2005 we did a report on Casualty Estimation Methodologies (it is report CE-1, listed here: http://www.dupuyinstitute.org/tdipub3.htm). We reviewed the listing of validation efforts, and from 1997 to 2005…nothing new had been done (except for a battalion-level validation we had done for the TNDM). So am I now to believe that since 2009 they have actively and aggressively pursued validation? Especially as most of this time was in a period of severely declining budgets, I doubt it. One of the arguments against validation made in meetings I attended in 1987 was that they did not have the time or budget to spend on validating. The budget during the Cold War was luxurious by today’s standards.

If there have been meaningful validations done, I would love to see the validation reports. The proof is in the pudding…..send me the validation reports that will resolve all doubts.