A fellow analyst posted an extended comment to two of our threads. Instead of responding in the comments section, I have decided to respond with another blog post.
As the person points out, most Army simulations exist to “enable students/staff to maintain and improve readiness…improve their staff skills, SOPs, reporting procedures, and planning….”
Yes, this is true, but I argue that it does not obviate the need for accurate simulations. Assuming no change in complexity, I cannot think of a single scenario where a less accurate model is more desirable than a more accurate one.
Now, what is missing from many of the models I have seen? Often a realistic unit breakpoint methodology, a proper comparison of force ratios, a proper set of casualty rates, a treatment of human factors, and many other matters. Many of these things are already being done in these simulations, but they are being done incorrectly. Quite simply, the simulations do not realistically portray a range of historical or real combat examples.
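To make the terms concrete for readers unfamiliar with them: a toy attrition loop can show how force ratios, casualty rates, and a breakpoint rule interact. This is a minimal sketch only, using a textbook Lanchester-style square law with invented coefficients and an arbitrary 70%-strength breakpoint; none of these numbers come from historical data, and a defensible model would calibrate all of them against real combat examples, which is precisely the point of the post.

```python
# Toy attrition model: illustrative only. The attrition coefficients and
# the breakpoint threshold below are hypothetical placeholders, not
# historically derived values.

def simulate(blue, red, blue_coeff=0.05, red_coeff=0.04, breakpoint=0.7):
    """Run daily attrition until one side falls below its breakpoint.

    blue, red   -- starting strengths of each side
    *_coeff     -- fraction of the opposing force each side destroys per day
    breakpoint  -- a side stops fighting when its strength drops below
                   this fraction of its starting strength
    """
    blue0, red0 = blue, red
    day = 0
    while blue > breakpoint * blue0 and red > breakpoint * red0:
        # Lanchester square law: each side's daily losses are
        # proportional to the opposing side's current strength.
        blue_losses = red_coeff * red
        red_losses = blue_coeff * blue
        blue -= blue_losses
        red -= red_losses
        day += 1
    return day, blue, red

days, blue_left, red_left = simulate(1000, 1200)
```

Even this crude sketch forces the modeler to answer the questions the post raises: what casualty rate is realistic, and at what loss level does a unit actually break? In the toy model those answers are simply asserted; in a validated simulation they would have to come from data.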
He then quotes the 1997-1998 Simulation Handbook of the National Simulations Training Center:
“The algorithms used in training simulations provide sufficient fidelity for training, not validation of war plans. This is due to the fact that important factors (leadership, morale, terrain, weather, level of training of units) and a myriad of human and environmental impacts are not modeled in sufficient detail….”
Let’s take their list made around 20 years ago. In the last 20 years, what significant quantitative studies have been done on the impact of leadership on combat? Can anyone list them? Can anyone point to even one? The same with morale or level of training of units. The Army has TRADOC, the Army Staff, Leavenworth, the War College, CAA and other agencies, and I have not seen in the last twenty years a quantitative study done to address these issues. And what of terrain and weather? They have been around for a long time.
Army simulations have been around since the late 1950s, so by the time these shortfalls were noted in 1997-1998, 40 years had passed. By their own admission, these issues had not been adequately addressed in those first 40 years, and I gather they have not been adequately addressed in the last 20 either. So the clock is ticking: 60 years of Army modeling and simulation, and no one has yet fully and properly addressed many of these issues. In many cases, they have not even gotten a good start on them.
Anyhow, I have little interest in arguing these issues. My interest is in correcting them.
Chris makes a valid point. The military complain about a lack of precision in human factors and other areas, but it is largely due to their own lack of effort in investigating the area. One of the major benefits of quantitative modelling is the way it forces you to ask a lot of questions you would have otherwise ignored. So use modelling as an analytical tool as well as a predictive one.
I have encountered the same fiercely anti-intellectual bias in the Australian Army’s practice of war-gaming and I do not really understand why they are like this. The DST do not seem any different even though they are supposed to be the intellectual inspiration for the army.
However, if you look into non-military research as well as first-hand battle accounts, there is a lot of material to use as a basis for good quantitative modelling. You can lead the military to data, but you can’t make them think.