Hadn’t done a blog post in a while. I’ve been focused on getting a book done. Sorry.
There is a rule of thumb often quoted out there and often put in war games that a unit becomes ineffective or reaches a breakpoint at 40% casualties. The basis for this rule is a very limited body of studies and analysis.
First, I have never seen a study on when a unit becomes ineffective. Even though it is now an accepted discussion point, I have not seen a study establishing this relationship and do not think that such a study exists. I am not saying that there is no relationship between casualties and unit effectiveness; what I am saying is that I have never seen a study establishing 1) that this relationship exists, 2) what its measurements are, and 3) what the degree of degradation is.
What has been done is studies on breakpoints, and over time, a rule of thumb that at 40% a unit “breaks” appears to be widely accepted. It appears that this rule has then been transferred to measuring unit effectiveness.
The starting point for breakpoint studies is Dorothy Clark’s study of 43 battalions from World War II, done in 1954. That study showed that the average casualties for these battalions were around 40%, although they ranged from around 1% to near 100%. Her conclusion was that “The statement that a unit can be considered no longer combat effective when it has suffered a specific casualty percentage is a gross oversimplification not supported by combat data.” She also stated “Because of wide variations in data, average loss percentages alone have limited meaning.” We have discussed this before, see: C-WAM 4 (Breakpoints) | Mystics & Statistics (dupuyinstitute.org) and April | 2018 | Mystics & Statistics (dupuyinstitute.org) and Breakpoints in U.S. Army Doctrine | Mystics & Statistics (dupuyinstitute.org) and Response 3 (Breakpoints) | Mystics & Statistics (dupuyinstitute.org).
The next point is the U.S. Army’s Maneuver Control manual (FM 105-5), which in 1964 set the attacker’s breakpoint at around 20 percent casualties and the defender’s breakpoint at around 40 percent at the battalion level. Charts in the 1964 Maneuver Control field manual showed a curve giving the probability of a unit breaking as a function of its percentage of combat casualties. Once a defending unit reached around 40 percent casualties, its chance of breaking approached 100 percent. Once an attacking unit reached around 20 percent casualties, the chance of its halting (a type I break) approached 100 percent, and the chance of its breaking (a type II break) reached 40 percent. These data were for battalion-level combat.
We have never found any studies establishing the data for these Maneuver Control manuals, and we do not think they exist. Something may have been assembled when these manuals were being written, but we have not been able to find any such files. Most likely, the tables were an extension of the Dorothy Clark study, even though she said it should not be applied that way.
Anyhow, that is kind of it. Other work has been published on breakpoints: Helmbold in 1972, McQuie in 1987 (see: Battle Outcomes: Casualty Rates As a Measure of Defeat | Mystics & Statistics (dupuyinstitute.org)) and Dupuy in the late 1980s, but I have not seen anything of significance since, as it appears that most significant studies and analysis work stopped around 1989.
Now, Dr. Richard Harrison, who spends a lot of time translating old Soviet documents, has just sent me this:
“Supposing that for the entire month not a single unit will receive reinforcements, then we will have a weakening of 30%, with 70% of the troops present. This is a significant weakening, but it does not yet deprive the unit of its combat strength; the latter’s fall begins approximately with losses of 40%.”
His source is:
N.N. Movchin, Posledovatel’nye Operatsii po Opytu Marny i Visly (Consecutive Operations on the Experience of the Marne and Vistula) (Moscow and Leningrad: Gosudarstvennoe Izdatel’stvo, 1928), page 99.
So, the U.S. came up with the 40% rule in 1954, disowned it in that same study, and then adopted it in 1964 regardless. And here we have a 1928 Russian work directly applying a 40% rule to unit effectiveness. I have no idea what the analytical basis is for that statement, but it does get my attention.
FWIW, I have that Clark study printed off at home.
C.A.L: “There is a rule of thumb often quoted out there and often put in war games that a unit becomes ineffective or reaches a breakpoint at 40% casualties. The basis for this rule is a very limited body of studies and analysis… I have no idea what the analytical basis is for that statement…”
-I remember hearing a “One-Third” rule in the context of rifle platoons, largely because it normally takes two guys to carry off one dead or seriously wounded guy. Even if you’re willing to leave the dead in place, and some of the wounded are ambulatory and can get back on their own power, or with the help of only one other guy, that’s balanced by the guys who are beat (physically or psychologically). Once you get to 33% casualties, a rifle platoon doesn’t have much left to maintain an attack or hold a position, at least for a while, assuming it was full strength to begin with.
The larger the formation, the larger the percentage of its manpower made up of non-combatant overhead. If a battalion lost 40% of its guys, a disproportionate share of that 40% would come out of the rifle platoons, in which case there wouldn’t be much left.
VR,
James Glick
Clarksville, TN
I have been continuing my efforts in developing wargame software that can simulate battlefield behaviour. The various quantitative sources are useful and do contribute a lot; however, they still fall well short of providing a simulation solution.
My own approach has been to read extensively in the military history of various periods (especially first-hand accounts), in addition to taking note of the quantitative research, and to develop behaviour patterns that reflect historical behaviour. IMHO this has worked reasonably well, but it is not perfect. A lot of people use my software simulations for tabletop wargaming and I do not get adverse comments about unrealistic battlefield behaviour.
The behaviour patterns vary greatly over different periods and different cultures, so I cannot produce one model to reflect all periods of history – far from it. I have ended up with 15 models each for a different period of history. Some periods need 2 or 3 models as there are diverse behaviour patterns in different locations, due mainly to different cultures in different locations. Even cultures having the same origins (e.g. Anglo derived cultures) have different battlefield behaviour patterns.
Within each model there are a number of differentiators that make one common troop type (e.g. infantry) behave differently from the same type in another culture. These seem to tie in closely with the quantitative findings and general observations made by experts, and include orders, fatigue, morale, command and control, and others. Culture seems to dominate, however.
So I think I have gotten further by the simulation route, but it is supported more by observation than by statistical data.
I hope this helps.
It is the fact that they still fall well short that bothers me. It is much more than an issue of producing a good simulation (although that is an issue).
So, in all reality, are the DOD and U.S. Army simulations, though they prefer to deny or ignore that. That is why I mentioned a specific survey on page 295 of War by Numbers: I got tired of the “our simulations are based upon engineering data” bullshit that some people kept spinning out.
Anyhow, the reason I keep hosting historical analysis conferences at my own expense is to try to be someone who actually helps address this issue, as opposed to just complaining about it.
Chris, the issue appears to be well suited to solution through logit analysis.
However, as you’ve stated, one first must define (quantitatively, or yes/no qualitatively) what “ineffective” or “breakpoint” means such that the dependent variable is valued at 1 to represent ineffective or breakpoint reached and 0 if still effective or breakpoint not reached.
Then it is a matter of estimating the equation of a logistic curve for the probability of a unit becoming ineffective, or of reaching a breakpoint, based upon the unit’s casualty percentage (and any other non-correlated independent variables such as strategic situation, tactical situation, unit type, historic time period and culture; or, Clinton, by doing separate estimations for different unit types, historic time periods and cultures, as you have been doing). Chris, what you’ve done for predicting successful advance in river-crossing attacks and the like would provide insights concerning which independent variables (i.e. explanatory variables) should be included in the logit equations to be estimated.
P = 1 / (1 + e^-(a + BX))

where P is the probability of becoming ineffective or reaching the breakpoint,

e is the base of the natural logarithm (i.e. 2.718281828…),

a is the estimated intercept coefficient,

and BX is the estimated coefficient matrix multiplied by the matrix of independent variables’ values (at least including a continuous variable such as casualty percentage).

If you decide on a variety of definitions of “ineffective” or “breakpoint”, then estimate a logit equation for each of the definitions. The purpose of doing that would be to increase the chance of satisfying audiences that are partial to different definitions, not to increase the chance of discovering a statistically significant estimation (i.e. not to attempt the producing of a Shakespearean play by gathering a sufficient number of monkeys and typewriters for a sufficient amount of time : – )
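As a quick numerical illustration of that logistic formula, here is a minimal Python sketch. The coefficient values (a = -4, b = 0.1) are purely hypothetical, chosen only so the curve crosses 50% at 40% casualties; they are not estimated from any combat data.

```python
from math import exp

def break_probability(casualty_pct, a=-4.0, b=0.1):
    """Logistic probability of a break: P = 1 / (1 + e^-(a + b*x)).

    a and b are hypothetical illustrative coefficients, not estimates
    drawn from any study.
    """
    return 1.0 / (1.0 + exp(-(a + b * casualty_pct)))

for pct in (10, 20, 40, 60):
    print(f"{pct:2d}% casualties -> P(break) = {break_probability(pct):.3f}")
```

With these illustrative coefficients, 40% casualties corresponds to exactly a 50% break probability; an actual study would estimate a and b (and any dummy-variable coefficients) from unit records, which is precisely the data problem discussed below.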
Hi Neal,
Thanks for the helpful comment. I think what you say is fine if you have a large file of unit casualties just prior to the unit breaking as an input. I do not have such a file and have not been able to find one. I hope that some military authority may have such data, or at least the block of data from which it can be derived, but I have not seen this.
Do you know of a source of such data? If so I would be happy to do the calculations.
Clinton, if someone has such data then that someone is Chris!
By the way, for the logit analysis, you don’t have to use casualty data over the time of an engagement. You can use end-of-engagement data for each unit. Obtain the casualty percentage (and any other relevant variable values) for each unit and note whether or not the unit became ineffective or reached a breaking point by the end of its engagement. I would think that the biggest data collection problem would be that of finding enough units that had become ineffective or had reached a breaking point. Also, defining the dependent variable is still the first thing to do.
I’m guessing that it will be easier to have enough ineffective/broken units if you estimate the logit equation using data from multiple engagements, wars, historical periods, cultures, etc., and account for those differing environments by including dummy variables in the equation being estimated (just as you would for different types of units in various strategic/tactical situations). Still, you might find enough ineffective/broken units in a well-studied battle such as Gettysburg. Simply (he wrote with a smile), use the end-of-battle casualty percentages and the determination of which units are considered to have broken or become ineffective by the end of the battle (assuming that your definition of ineffective or broken incorporates not having recovered by the end of the battle as a criterion). Again, it will be necessary that you have defined what is meant (or the study author has defined what is meant) by ineffective or broken.
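Since no such file of unit records has turned up, here is a hedged sketch of what the estimation step could look like once end-of-engagement records exist. Everything below is hypothetical: the records are invented placeholders (casualty fraction, an attacker/defender dummy, broke yes/no), and the fitting routine is plain maximum-likelihood gradient ascent written out by hand so the example stays self-contained rather than relying on a statistics package.

```python
import math

# Hypothetical end-of-engagement records, one per unit:
# (casualty fraction, 1 if defender / 0 if attacker, 1 if the unit broke).
# These numbers are invented for illustration, not drawn from any study.
RECORDS = [
    (0.05, 1, 0), (0.12, 1, 0), (0.35, 1, 0),
    (0.42, 1, 1), (0.48, 1, 1), (0.55, 1, 1),
    (0.08, 0, 0), (0.15, 0, 0), (0.18, 0, 1),
    (0.22, 0, 1), (0.25, 0, 1), (0.30, 0, 1),
]

def predict(a, b1, b2, casualties, defender):
    """P(broke) = 1 / (1 + e^-(a + b1*casualties + b2*defender))."""
    return 1.0 / (1.0 + math.exp(-(a + b1 * casualties + b2 * defender)))

def fit_logit(records, lr=0.5, epochs=20000):
    """Estimate intercept a, casualty coefficient b1, and defender-dummy
    coefficient b2 by full-batch gradient ascent on the log-likelihood."""
    a = b1 = b2 = 0.0
    n = len(records)
    for _ in range(epochs):
        ga = g1 = g2 = 0.0
        for x, d, y in records:
            err = y - predict(a, b1, b2, x, d)  # gradient of log-likelihood
            ga += err
            g1 += err * x
            g2 += err * d
        a += lr * ga / n
        b1 += lr * g1 / n
        b2 += lr * g2 / n
    return a, b1, b2

a, b1, b2 = fit_logit(RECORDS)
print(f"a = {a:.2f}, b1 = {b1:.2f}, b2 = {b2:.2f}")
print(f"P(break | 40% casualties, defender) = {predict(a, b1, b2, 0.40, 1):.2f}")
```

On this toy data the defender dummy should come out negative, because the invented records have defenders holding out to higher casualty levels (loosely echoing the 20%/40% attacker/defender breakpoints described above for the 1964 Maneuver Control tables). With real unit records one would add further dummies for historical period, culture, unit type and so on, as discussed in the comment above.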
Happy estimating!