A fellow analyst posted an extended comment to two of our threads:

C-WAM 3

and

Military History and Validation of Combat Models

Instead of responding in the comments section, I have decided to respond with another blog post.

As the person points out, most Army simulations exist to “enable students/staff to maintain and improve readiness…improve their staff skills, SOPs, reporting procedures, and planning….”

Yes, this is true, but I argue that this does not obviate the need for accurate simulations. Assuming no change in complexity, I cannot think of a single scenario where having a less accurate model is more desirable than having a more accurate model.

Now what is missing from many of these models that I have seen? Often a realistic unit breakpoint methodology, a proper comparison of force ratios, a proper set of casualty rates, addressing human factors, and many other matters. Many of these things are being done in these simulations already, but are being done incorrectly. Quite simply, they do not realistically portray a range of historical or real combat examples.

He then quotes the 1997-1998 Simulation Handbook of the National Simulations Training Center:

“The algorithms used in training simulations provide sufficient fidelity for training, not validation of war plans. This is due to the fact that important factors (leadership, morale, terrain, weather, level of training of units) and a myriad of human and environmental impacts are not modeled in sufficient detail….”

Let’s take their list, made around 20 years ago. In the last 20 years, what significant quantitative studies have been done on the impact of leadership on combat? Can anyone list them? Can anyone point to even one? The same goes for morale or level of training of units. The Army has TRADOC, the Army Staff, Leavenworth, the War College, CAA and other agencies, and I have not seen a quantitative study done in the last twenty years to address these issues. And what of terrain and weather? They have been around for a long time.

Army simulations have been around since the late 1950s. So at the time these shortfalls were noted in 1997-1998, 40 years had passed. By their own admission, these issues had not been adequately addressed in the previous 40 years. I gather they have not been adequately addressed in the last 20 years either. So the clock is ticking: 60 years of Army modeling and simulation, and no one has yet fully and properly addressed many of these issues. In many cases, they have not even gotten a good start on addressing them.

Anyhow, I have little interest in arguing these issues. My interest is in correcting them.

Christopher A. Lawrence

Christopher A. Lawrence is a professional historian and military analyst. He is the Executive Director and President of The Dupuy Institute, an organization dedicated to scholarly research and objective analysis of historical data related to armed conflict and the resolution of armed conflict. The Dupuy Institute provides independent, historically-based analyses of lessons learned from modern military experience.
...
Mr. Lawrence was the program manager for the Ardennes Campaign Simulation Data Base, the Kursk Data Base, the Modern Insurgency Spread Sheets and for a number of other smaller combat data bases. He has participated in casualty estimation studies (including estimates for Bosnia and Iraq) and studies of air campaign modeling, enemy prisoner of war capture rates, medium weight armor, urban warfare, situational awareness, counterinsurgency and other subjects for the U.S. Army, the Defense Department, the Joint Staff and the U.S. Air Force. He has also directed a number of studies related to the military impact of banning antipersonnel mines for the Joint Staff, Los Alamos National Laboratories and the Vietnam Veterans of America Foundation.
...
His published works include papers and monographs for the Congressional Office of Technology Assessment and the Vietnam Veterans of America Foundation, in addition to over 40 articles written for limited-distribution newsletters and over 60 analytical reports prepared for the Defense Department. He is the author of Kursk: The Battle of Prokhorovka (Aberdeen Books, Sheridan, CO, 2015), America’s Modern Wars: Understanding Iraq, Afghanistan and Vietnam (Casemate Publishers, Philadelphia & Oxford, 2015), War by Numbers: Understanding Conventional Combat (Potomac Books, Lincoln, NE, 2017), The Battle of Prokhorovka (Stackpole Books, Guilford, CT, 2019), The Battle for Kyiv (Frontline Books, Yorkshire, UK, 2023), Aces at Kursk (Air World, Yorkshire, UK, 2024), Hunting Falcon: The Story of WWI German Ace Hans-Joachim Buddecke (Air World, Yorkshire, UK, 2024) and The Siege of Mariupol (Frontline Books, Yorkshire, UK, 2024).
...
Mr. Lawrence lives in northern Virginia, near Washington, D.C., with his wife and son.


One comment

  1. Chris makes a valid point. The military complain about a lack of precision in human factors and other areas, but that is largely due to their own lack of effort in investigating the area. One of the major benefits of quantitative modelling is the way it forces you to ask a lot of questions you would otherwise have ignored. So use modelling as an analytical tool as well as a predictive one.

    I have encountered the same fiercely anti-intellectual bias in the Australian Army’s practice of war-gaming and I do not really understand why they are like this. The DST do not seem any different even though they are supposed to be the intellectual inspiration for the army.

    However if you look into non-military research as well as first hand battle accounts there is a lot of material to use as a basis for good quantitative modelling. You can lead the military to data but you can’t make them think.
