Chaplain (Capt.) Emil Kapaun (right) and Capt. Jerome A. Dolan, a medical officer with the 8th Cavalry Regiment, 1st Cavalry Division, carry an exhausted Soldier off the battlefield in Korea, early in the war. Kapaun was famous for exposing himself to enemy fire. When his battalion was overrun by a Chinese force in November 1950, rather than take an opportunity to escape, Kapaun voluntarily remained behind to minister to the wounded. In 2013, Kapaun posthumously received the Medal of Honor for his actions in the battle and later in a prisoner of war camp, where he died in May 1951. [Photo Credit: Courtesy of the U.S. Army Center of Military History]
[This piece was originally published on 27 June 2017.]
Trevor Dupuy’s theories about warfare were criticized by some who thought his scientific approach neglected the influence of the human element and chance, and amounted to an attempt to reduce war to mathematical equations. Anyone who has read Dupuy’s work knows this is not, in fact, the case.
Moral and behavioral (i.e., human) factors were central to Dupuy’s research and theorizing about combat. He wrote about them in detail in his books. In 1989, he presented a paper titled “The Fundamental Information Base for Modeling Human Behavior in Combat” at a symposium on combat modeling that provided a clear, succinct summary of his thinking on the topic.
He began by concurring with Carl von Clausewitz’s assertion that
[P]assion, emotion, and fear [are] the fundamental characteristics of combat… No one who has participated in combat can disagree with this Clausewitzean emphasis on passion, emotion, and fear. Without doubt, the single most distinctive and pervasive characteristic of combat is fear: fear in a lethal environment.
Despite the ubiquity of fear on the battlefield, Dupuy pointed out that there is no way to study its impact except through the historical record of combat in the real world.
We cannot replicate fear in laboratory experiments. We cannot introduce fear into field tests. We cannot create an environment of fear in training or in field exercises.
So, to study human reaction in a battlefield environment we have no choice but to go to the battlefield, not the laboratory, not the proving ground, not the training reservation. But, because of the nature of the very characteristics of combat which we want to study, we can’t study them during the battle. We can only do so retrospectively.
We have no choice but to rely on military history. This is why military history has been called the laboratory of the soldier.
He also pointed out that using military history analytically has its own pitfalls and must be handled carefully lest it be used to draw misleading or inaccurate conclusions.
I must also make clear my recognition that military history data is far from perfect, and that—even at best—it reflects the actions and interactions of unpredictable human beings. Extreme caution must be exercised when using or analyzing military history. A single historical example can be misleading for either of two reasons: (a) The data is inaccurate, or (b) The example may be true, but also be untypical.
But, when a number of respectable examples from history show consistent patterns of human behavior, then we can have confidence that behavior in accordance with the pattern is typical, and that behavior inconsistent with the pattern is either untypical, or is inaccurately represented.
He then stated very concisely the scientific basis for his method.
My approach to historical analysis is actuarial. We cannot predict the future in any single instance. But, on the basis of a large set of reliable experience data, we can predict what is likely to occur under a given set of circumstances.
Dupuy listed ten combat phenomena that he believed were directly or indirectly related to human behavior. He considered the list comprehensive, if not exhaustive.
Even though offensive action is essential to ultimate combat success, a combat commander opposed by a more powerful enemy has no choice but to assume a defensive posture. Since defensive posture automatically increases the combat power of his force, the defending commander at least partially redresses the imbalance of forces. At a minimum he is able to slow down the advance of the attacking enemy, and he might even beat him. In this way, through negative combat results, the defender may ultimately hope to wear down the attacker to the extent that his initial relative weakness is transformed into relative superiority, thus offering the possibility of eventually assuming the offensive and achieving positive combat results. The Franklin and Nashville Campaign of our Civil War, and the El Alamein Campaign of World War II are examples.
Sometimes the commander of a numerically superior offensive force may reduce the strength of portions of his force in order to achieve decisive superiority for maximum impact on the enemy at some other critical point on the battlefield, with the result that those reduced-strength components are locally outnumbered. A contingent thus reduced in strength may therefore be required to assume a defensive posture, even though the overall operational posture of the marginally superior force is offensive, and the strengthened contingent of the same force is attacking with the advantage of superior combat power. A classic example was the role of Davout at Auerstadt when Napoléon was crushing the Prussians at Jena. Another is the role played by “Stonewall” Jackson’s corps at the Second Battle of Bull Run. [pp. 2-3]
This verity derives both from Dupuy’s belief that the defensive posture is a human reaction to the lethal environment of combat and from his concurrence with Clausewitz’s dictum that the defense is the stronger form of combat. Soldiers in combat will sometimes reach a collective conclusion that they can no longer advance in the face of lethal opposition, and will stop and seek cover and concealment to leverage the power of the defense. Exploiting the multiplying effect of the defensive is also a way for a force with weaker combat power to successfully engage a stronger one.
Minimum essential means must be employed at points other than that of decision. To devote means to unnecessary secondary efforts or to employ excessive means on required secondary efforts is to violate the principle of both mass and the objective. Limited attacks, the defensive, deception, or even retrograde action are used in noncritical areas to achieve mass in the critical area.
These concepts are well ingrained in modern U.S. Army doctrine. FM 3-0 Operations (2017) summarizes the defense this way:
Defensive tasks are conducted to defeat an enemy attack, gain time, economize forces, and develop conditions favorable for offensive or stability tasks. Normally, the defense alone cannot achieve a decisive victory. However, it can set conditions for a counteroffensive or counterattack that enables Army forces to regain and exploit the initiative. Defensive tasks are a counter to enemy offensive actions. They defeat attacks, destroying as much of an attacking enemy as possible. They also preserve and maintain control over land, resources, and populations. The purpose of defensive tasks is to retain key terrain, guard populations, protect lines of communications, and protect critical capabilities against enemy attacks and counterattacks. Commanders can conduct defensive tasks to gain time and economize forces, so offensive tasks can be executed elsewhere. [Para 1-72]
UPDATE: Just as I posted this, out comes a contrarian view from U.S. Army CAPT Brandon Morgan via the Modern War Institute at West Point blog. He argues that the U.S. Army is not placing enough emphasis on preparing to conduct defensive operations:
In his seminal work On War, Carl von Clausewitz famously declared that, in comparison to the offense, “the defensive form of warfare is intrinsically stronger than the offensive.”
This is largely due to the defender’s ability to occupy key terrain before the attack, and is most true when there is sufficient time to prepare the defense. And yet within the doctrinal hierarchy of the four elements of decisive action (offense, defense, stability, and defense support of civil authorities), the US Army prioritizes offensive operations. Ultimately, this has led to training that focuses almost exclusively on offensive operations at the cost of deliberate planning for the defense. But in the context of a combined arms fight against a near-peer adversary, US Army forces will almost assuredly find themselves initially fighting in a defense. Our current neglect of deliberate planning for the defense puts these soldiers who will fight in that defense at grave risk.
Soldiers from Britain’s Royal Artillery train in a “virtual world” during Exercise Steel Sabre, 2015 [Sgt Si Longworth RLC (Phot)/MOD]
Military History and Validation of Combat Models
A Presentation at MORS Mini-Symposium on Validation, 16 Oct 1990
By Trevor N. Dupuy
In the operations research community there is some confusion as to the respective meanings of the words “validation” and “verification.” My definition of validation is as follows:
“To confirm or prove that the output or outputs of a model are consistent with the real-world functioning or operation of the process, procedure, or activity which the model is intended to represent or replicate.”
In this paper the word “validation” with respect to combat models is assumed to mean assurance that a model realistically and reliably represents the real world of combat. Or, in other words, given a set of inputs which reflect the anticipated forces and weapons in a combat encounter between two opponents under a given set of circumstances, the model is validated if we can demonstrate that its outputs are likely to represent what would actually happen in a real-world encounter between these forces under those circumstances.
Thus, in this paper, the word “validation” has nothing to do with the correctness of computer code, or the apparent internal consistency or logic of relationships of model components, or with the soundness of the mathematical relationships or algorithms, or with satisfying the military judgment or experience of one individual.
True validation of combat models is not possible without testing them against modern historical combat experience. And so, in my opinion, a model is validated only when it will consistently replicate a number of military history battle outcomes in terms of: (a) Success-failure; (b) Attrition rates; and (c) Advance rates.
“Why,” you may ask, “use imprecise, doubtful, and outdated history to validate a modern, scientific process? Field tests, experiments, and field exercises can provide data that is often instrumented, and certainly more reliable than any historical data.”
I recognize that military history is imprecise; it is only an approximate, often biased and/or distorted, and frequently inconsistent reflection of what actually happened on historical battlefields. Records are contradictory. I also recognize that there is an element of chance or randomness in human combat which can produce different results in otherwise apparently identical circumstances. I further recognize that history is retrospective, telling us only what has happened in the past. It cannot predict, if only because combat in the future will be fought with different weapons and equipment than were used in historical combat.
Despite these undoubted problems, military history provides more, and more accurate, information about the real world of combat, and how human beings behave and perform under varying circumstances of combat, than is possible to derive or compile from any other source. Despite some discrepancies, patterns are unmistakable and consistent. There is always a logical explanation for any individual deviations from the patterns. Historical examples that are inconsistent, or that are counter-intuitive, must be viewed with suspicion as possibly being poor or false history.
Of course absolute prediction of a future event is practically impossible, although not necessarily so theoretically. Any speculations which we make from tests or experiments must have some basis in terms of projections from past experience.
Training or demonstration exercises, proving ground tests, field experiments, all lack the one most pervasive and most important component of combat: Fear in a lethal environment. There is no way in peacetime, or non-battlefield, exercises, test, or experiments to be sure that the results are consistent with what would have been the behavior or performance of individuals or units or formations facing hostile firepower on a real battlefield.
We know from the writings of the ancients (for instance Sun Tze—pronounced Sun Dzuh—and Thucydides) that have survived to this day that human nature has not changed since the dawn of history. The human factor, the way in which humans respond to stimuli or circumstances, is the most important basis for speculation and prediction. What about the “scientific” approach of those who insist that we can have no confidence in the accuracy or reliability of historical data, that it is therefore unscientific, and therefore that it should be ignored? These people insist that only “scientific” data should be used in modeling.
In fact, every model is based upon fundamental assumptions that are intuitive and unprovable. The first step in the creation of a model is a step away from scientific reality in seeking a basis for an unreal representation of a real phenomenon. I have shown that the unreality is perpetuated when we use other imitations of reality as the basis for representing reality. History is less than perfect, but to ignore it, and to use only data that is bound to be wrong, assures that we will not be able to represent human behavior in real combat.
At the risk of repetition, and even of protesting too much, let me assure you that I am well aware of the shortcomings of military history:
The record which is available to us, which is history, only approximately reflects what actually happened. It is incomplete. It is often biased, it is often distorted. Even when it is accurate, it may be reflecting chance rather than normal processes. It is neither precise nor consistent. But, it provides more, and more accurate, information on the real world of battle than is available from the most thoroughly documented field exercises, proving ground tests, or laboratory or field experiments.
Military history is imperfect. At best it reflects the actions and interactions of unpredictable human beings. We must always realize that a single historical example can be misleading for either of two reasons: (1) The data may be inaccurate, or (2) The data may be accurate, but untypical.
Nevertheless, history is indispensable. I repeat that the most pervasive characteristic of combat is fear in a lethal environment. For all of its imperfections, military history and only military history represents what happens under the environmental condition of fear.
Unfortunately, and somewhat unfairly, the reported findings of S.L.A. Marshall about human behavior in combat, which he reported in Men Against Fire, have been recently discounted by revisionist historians who assert that he never could have physically performed the research on which the book’s findings were supposedly based. This has raised doubts about Marshall’s assertion that 85% of infantry soldiers didn’t fire their weapons in combat in World War II. That dramatic and surprising assertion was first challenged in a New Zealand study which found, on the basis of painstaking interviews, that most New Zealanders fired their weapons in combat. Thus, either Americans were different from New Zealanders, or Marshall was wrong. And now American historians have demonstrated that Marshall had neither the time nor the opportunity to conduct the battlefield interviews he claimed were the basis for his findings.
I knew Marshall moderately well. I was fully as aware of his weaknesses as of his strengths. He was not a historian. I deplored the imprecision and lack of documentation in Men Against Fire. But the revisionist historians have underestimated the shrewd journalistic assessment capability of “SLAM” Marshall. His observations may not have been scientifically precise, but they were generally sound, and his assessment has been shared by many American infantry officers whose judgments I also respect. As to the New Zealand study, how many people will, after the war, admit that they didn’t fire their weapons?
Perhaps most important, however, in judging the assessments of SLAM Marshall, is a recent study by a highly-respected British operations research analyst, David Rowland. Using impeccable OR methods Rowland has demonstrated that Marshall’s assessment of the inefficient performance, or non-performance, of most soldiers in combat was essentially correct. An unclassified version of Rowland’s study, “Assessments of Combat Degradation,” appeared in the June 1986 issue of the Royal United Services Institution Journal.
Rowland was led to his investigations by the fact that soldier performance in field training exercises, using the British version of MILES technology, was not consistent with historical experience. Even after allowances for degradation from theoretical proving ground capability of weapons, defensive rifle fire almost invariably stopped any attack in these field trials. But history showed that attacks were, in fact, often successful. He therefore began a study in which he made both imaginative and scientific use of historical data from over 100 small unit battles in the Boer War and the two World Wars. He demonstrated that when troops are under fire in actual combat, there is an additional degradation of performance by a factor ranging between 7 and 10. A degradation virtually of an order of magnitude! And this, mind you, on top of a comparable built-in degradation to allow for the difference between field conditions and proving ground conditions.
Not only does Rowland’s study corroborate SLAM Marshall’s observations, it also shows conclusively that field exercises, training competitions, and demonstrations give results so different from real battlefield performance as to render them useless for validation purposes.
Which brings us back to military history. For all of the imprecision, internal contradictions, and inaccuracies inherent in historical data, at worst the deviations are generally far less than a factor of 2.0. This is at least four times more reliable than field test or exercise results.
I do not believe that history can ever repeat itself. The conditions of an event at one time can never be precisely duplicated later. But, bolstered by the Rowland study, I am confident that history paraphrases itself.
If large bodies of historical data are compiled, the patterns are clear and unmistakable, even if slightly fuzzy around the edges. Behavior in accordance with this pattern is therefore typical. As we have already agreed, sometimes behavior can be different from the pattern, but we know that it is untypical, and we can then seek the reason, which invariably can be discovered.
This permits what I call an actuarial approach to data analysis. We can never predict precisely what will happen under any circumstances. But the actuarial approach, with ample data, provides confidence that the patterns reveal what is likely to happen under those circumstances, even if the actual results in individual instances vary to some extent from this “norm” (to use the Soviet military historical expression).
It is relatively easy to take into account the differences in performance resulting from new weapons and equipment. The characteristics of the historical weapons and the current (or projected) weapons can be readily compared, and adjustments made accordingly in the validation procedure.
In the early 1960s an effort was made at SHAPE Headquarters to test the ATLAS Model against World War II data for the German invasion of Western Europe in May, 1940. The first excursion had the Allies ending up on the Rhine River. This was apparently quite reasonable: the Allies substantially outnumbered the Germans, they had more tanks, and their tanks were better. However, despite these Allied advantages, the actual events in 1940 had not matched what ATLAS was now predicting. So the analysts did a little “fine tuning,” (a splendid term for fudging). After the so-called adjustments, they tried again, and ran another excursion. This time the model had the Allies ending up in Berlin. The analysts (may the Lord forgive them!) were quite satisfied with the ability of ATLAS to represent modern combat. (Or at least they said so.) Their official conclusion was that the historical example was worthless, since weapons and equipment had changed so much in the preceding 20 years!
As I demonstrated in my book, Options of Command, the problem was that the model was unable to represent the German strategy, or to reflect the relative combat effectiveness of the opponents. The analysts should have reached a different conclusion: ATLAS had failed validation, because a model that cannot with reasonable faithfulness and consistency replicate historical combat experience will certainly be unable to validly reflect current or future combat.
How then, do we account for what I have said about the fuzziness of patterns, and the fact that individual historical examples may not fit the patterns? I will give you my rules of thumb:
The battle outcome should reflect historical success-failure experience about four times out of five.
For attrition rates, the model average of five historical scenarios should be consistent with the historical average within a factor of about 1.5.
For the advance rates, the model average of five historical scenarios should be consistent with the historical average within a factor of about 1.5.
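Taken together, these three rules of thumb amount to a simple acceptance test that can be run over a batch of historical scenarios. The sketch below is an illustrative implementation, not Dupuy's own; the dictionary keys, the function name, and the use of simple averages are assumptions made for the example:

```python
def validate_model(results):
    """Check a batch of model runs against Dupuy's rules of thumb.

    results -- list of dicts, one per historical scenario, each with:
      'outcome_match'   -- True if the model's winner matched history
      'model_attrition', 'hist_attrition' -- average daily casualty rates
      'model_advance',  'hist_advance'    -- average advance rates
    (These keys are assumptions for this sketch, not Dupuy's notation.)
    """
    n = len(results)

    # Rule 1: success/failure should match about four times out of five.
    outcome_ok = sum(r['outcome_match'] for r in results) / n >= 0.8

    # Rules 2 and 3: the model averages should lie within a factor of
    # about 1.5 of the historical averages, in either direction.
    def within_factor(model_key, hist_key, factor=1.5):
        model_avg = sum(r[model_key] for r in results) / n
        hist_avg = sum(r[hist_key] for r in results) / n
        ratio = model_avg / hist_avg
        return 1 / factor <= ratio <= factor

    attrition_ok = within_factor('model_attrition', 'hist_attrition')
    advance_ok = within_factor('model_advance', 'hist_advance')
    return outcome_ok and attrition_ok and advance_ok
```

The factor-of-1.5 band is applied symmetrically here, so a model that consistently undershoots history fails just as one that overshoots does.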
Just as the heavens are the laboratory of the astronomer, so military history is the laboratory of the soldier and the military operations research analyst. The scientific basis for both astronomy and military science is the recording of the movements and relationships of bodies, and then analysis of those movements. (In the one case the bodies are heavenly, in the other they are very terrestrial.)
I repeat: Military history is the laboratory of the soldier. Failure of the analyst to use this laboratory will doom him to live with the scientific equivalent of Ptolemaic astronomy, whereas he could use the evidence available in his laboratory to progress to the military science equivalent of Copernican astronomy.
An Israeli tank unit crosses the Sinai, heading for the Suez Canal, during the 1973 Arab-Israeli War [Israeli Government Press Office/HistoryNet]
It has been noted throughout the history of human conflict that some armies have consistently fought more effectively on the battlefield than others. The armies of Sparta in ancient Greece, for example, have come to epitomize the warrior ideal in Western societies. Rome’s legions have acquired a similar legendary reputation. Within armies too, some units are known to be better combatants than others. The U.S. 1st Infantry Division, the British Expeditionary Force of 1914, Japan’s Special Naval Landing Forces, the U.S. Marine Corps, the German 7th Panzer Division, and the Soviet Guards divisions are among the many superior fighting forces from history.
Trevor Dupuy found empirical substantiation of this in his analysis of historical combat data. He discovered that in 1943-1944 during World War II, after accounting for environmental and operational factors, the German Army consistently performed more effectively in ground combat than the U.S. and British armies. This advantage—measured in terms of casualty exchanges, terrain held or lost, and mission accomplishment—manifested whether the Germans were attacking or defending, or winning or losing. Dupuy observed that the Germans demonstrated an even more marked effectiveness in battle against the Soviet Army throughout the war.
He found the same disparity in battlefield effectiveness in combat data on the 1967 and 1973 Arab-Israeli wars. The Israeli Army performed uniformly better in ground combat than all of the Arab armies it faced in both conflicts, regardless of posture or outcome.
The clear and consistent patterns in the historical data led Dupuy to conclude that superior combat effectiveness on the battlefield was attributable to moral and behavioral (i.e., human) factors. The factors he believed were the most important contributors to combat effectiveness were:
Leadership
Training or Experience
Morale, which may or may not include
Cohesion
Although the influence of human factors on combat effectiveness was identifiable and measurable in the aggregate, Dupuy was skeptical whether all of the individual moral and behavioral intangibles could be discretely quantified. He thought this particularly true for a set of factors that also contributed to combat effectiveness, but were a blend of human and operational factors. These include:
Logistical effectiveness
Time and Space
Momentum
Technical Command, Control, Communications
Intelligence
Initiative
Chance
Dupuy grouped all of these intangibles together into a composite factor he designated as relative combat effectiveness value, or CEV. The CEV and the environmental and operational factors (Vf) comprise the Circumstantial Variables of Combat, which, when multiplied by force strength (S), determine the combat power (P) of a military force in Dupuy’s formulation.
P = S x Vf x CEV
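As a numerical illustration of this relationship, the sketch below uses hypothetical values; the specific strengths and multipliers are invented for the example and do not come from Dupuy's data:

```python
def combat_power(strength, vf, cev):
    """Dupuy's combat power relationship: P = S x Vf x CEV.

    strength -- force strength S
    vf       -- combined environmental and operational variables (Vf)
    cev      -- relative combat effectiveness value (CEV)
    """
    return strength * vf * cev

# Hypothetical example: a defender with a favorable Vf (e.g., posture
# and terrain) and a modest CEV advantage offsets a numerical deficit.
attacker = combat_power(strength=15000, vf=1.0, cev=1.0)
defender = combat_power(strength=10000, vf=1.3, cev=1.2)
print(defender > attacker)  # the smaller but more effective force prevails
```

This is the sense in which human factors enter the formulation: a CEV above 1.0 multiplies the entire force, not any single component of it.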
Dupuy did not believe that CEVs were static values. As with human behavior, they vary somewhat from engagement to engagement. He did think that human factors were the most substantial of the combat variables. Therefore any model or theory of combat that failed to account for them would invariably be inaccurate.
The Prussian military philosopher Carl von Clausewitz identified the concept of friction in warfare in his book On War, published in 1832.
Everything in war is very simple, but the simplest thing is difficult. The difficulties accumulate and end by producing a kind of friction that is inconceivable unless one has experienced war… Countless minor incidents—the kind you can never really foresee—combine to lower the general level of performance, so that one always falls far short of the intended goal… Friction is the only concept that more or less corresponds to the factors that distinguish real war from war on paper… None of [the military machine’s] components is of one piece: each part is composed of individuals, every one of whom retains his potential of friction [and] the least important of whom may chance to delay things or somehow make them go wrong…
While recognizing this hugely significant intangible element, Clausewitz also asserted that “[F]riction…brings about effects that cannot be measured, just because they are largely due to chance.” Nevertheless, the clearly self-evident nature of friction in warfare subsequently led to the assimilation of the concept into the thinking of most military theorists and practitioners.
Flash forward 140 years or so. While listening to a lecture on combat simulation, Trevor Dupuy had a flash of insight that led him to conclude that it was indeed possible to measure the effects of friction.[1] Based on his work with historical combat data, Dupuy knew that smaller-sized combat forces suffer higher casualty rates than do larger-sized forces. As the diagram at the top demonstrates, this is partly explained by the fact that small units have a much higher proportion of their front line troops exposed to hostile fire than large units.
However, this relationship can account for only a fraction of friction’s total effect. The average exposure of a company of 200 soldiers is about seven times greater than that of an army group of 100,000. Yet, casualty rates for a company in intensive combat can be up to 70 times greater than those of an army group. This discrepancy clearly shows the influence of another factor at work.
Dupuy hypothesized that this discrepancy reflected the relationship between dispersion, deployment, and friction in combat. As friction accumulates through the aggregation of soldiers into larger units, its effects degrade the lethal effects of weapons from their theoretical maximum. Dupuy calculated that friction affects a force of 100,000 ten times more than it does a unit of 200. And because friction is an ambient, human factor on the battlefield, higher quality forces manage its effects better than lower quality ones do.
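The arithmetic behind this inference can be made explicit. The figures are the ones quoted above; attributing the residual to friction is Dupuy's hypothesis as described here:

```python
# Figures quoted in the text above.
exposure_ratio = 7   # a company's front-line exposure vs. an army group's
casualty_ratio = 70  # a company's intensive-combat casualty rate vs. an army group's

# If exposure alone explained the difference, the casualty ratio would
# equal the exposure ratio. The residual is attributed to friction
# accumulating as soldiers aggregate into larger units.
friction_factor = casualty_ratio / exposure_ratio
print(friction_factor)  # 10.0 -- friction affects the larger force ~10x more
```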
After looking at World War II combat casualty data to calculate the effect of friction on combat, Dupuy looked at casualty rates from earlier eras and found a steady correlation, which he believed further validated his hypothesis.
Despite the consistent fit of the data, Dupuy felt that his work was only the beginning of a proper investigation into the phenomenon.
During the periods of actual combat, the lower the level, the closer the loss rates will approach the theoretical lethalities of the weapons in the hands of the opposing combatants. But there will never be a very close relationship of such rates with the theoretical lethalities. War does not consist merely of a number of duels. Duels, in fact, are only a very small—though integral—part of combat. Combat is a complex process involving interaction over time of many men and numerous weapons combined in a great number of different, and differently organized, units. This process cannot be understood completely by considering the theoretical interactions of individual men and weapons. Complete understanding requires knowing how to structure such interactions and fit them together. Learning how to structure these interactions must be based on scientific analysis of real combat data.
Tom Lea, “The 2,000 Yard Stare” 1944 [Oil on canvas, 36 x 28 Life Collection of Art WWII, U.S. Army Center of Military History, Fort Belvoir, Virginia]
The idea that fatigue is a human factor in combat seems relatively uncontroversial. Military history is replete with examples of how the limits of human physical and mental endurance have affected the character of fighting and the outcome of battles. Perhaps the most salient aspect of military training is preparing soldiers to deal with the rigors of warfare.
Trevor Dupuy was aware that fatigue has a degrading effect on the effectiveness of troops in combat, but he was never able to study the topic specifically himself. He was, however, aware of other examinations of historical experience relevant to the issue.
An approximate value for the daily effect of fatigue upon the effectiveness of weapons employment emerged from a HERO study several years ago. There is no question that fatigue has a comparable degrading effect upon the ability of a force to advance. I know of no research to ascertain that effect. Until such research is performed, I have arbitrarily assumed that the degrading effect of fatigue upon advance rates is the same as its degrading effect upon weapons effectiveness. To those who might be shocked at such an assumption, my response is: We know there is an effect; it is better to use a crude approximation of that effect than to ignore it…
During World War II when Colonel S.L.A. Marshall was the Chief Historian of the US European Theater of Operations, he undertook a number of interviews of units just after they had been in combat. After the war, in his book Men Against Fire, Marshall asserted that his interviews revealed that only 15% of US infantry soldiers fired their small arms weapons in combat. This revelation created something of a sensation at the time.
It has since been demonstrated that Marshall did not really have solid, scientific data for his assertion. But those who criticize Marshall for unscholarly, unscientific work should realize that in private life he was an exceptionally good newspaper reporter. His conclusions, based upon his observations, may have been largely intuitive, but I am convinced that they were generally, if not specifically, sound…
One of the few examples of the use of military history in the West in recent years was an important study done at the British Defence Operational Analysis Establishment (DOAE) by David Rowland. An unclassified condensation of that study was published in the June 1986 issue of the Journal of the Royal United Services Institution (RUSI). The article, “Assessments of Combat Degradation,” demonstrates conclusively that, in historical combat, small arms weapons have had only one-seventh to one-tenth of their theoretical effectiveness. Rowland does not attempt to say why this is so, but it is interesting that his value of one-seventh is very close to the S. L. A. Marshall 15% figure. Both values translate into casualty effects very similar to those that have emerged from my own research.
The intent of this post is not to rehash the debate on Marshall. As Dupuy noted above, even if Marshall’s conclusions were not based on empirical evidence, his observations on combat were nevertheless on to something important. (Details on the Marshall debate can be easily found with a Google search. A brief discussion took place on the old TDI Forum in 2007.)
1. The exhaustion factor (ex) of a fresh unit is 1.0; this is the maximum ex value.
2. At the conclusion of an engagement, a new ex factor will be calculated for each side.
3. A unit in normal offensive or defensive combat has its ex factor reduced by 0.05 for each consecutive day of combat; the ex factor cannot be less than 0.5.
4. An attacking unit opposed by delaying tactics has its ex factor reduced by 0.05 per day.
5. A defending unit in delay posture neither loses nor gains in its ex factor.
6. A withdrawing unit, not seriously engaged, has its ex factor augmented at the rate of 0.05 per day.
7. An advancing unit in pursuit, and not seriously delayed, neither loses nor gains in its ex factor.
8. For a unit in reserve, or in non-active posture, an exhaustion factor of less than 1.0 is augmented at the rate of 0.1 per day.
9. When a unit in combat, or recently in combat, is reinforced by a unit at least half of its size (in numbers of men), it adopts the ex factor of the reinforcing unit or—if the ex factor of the reinforcing unit is the same or lower than that of the reinforced—both adopt an ex factor 0.1 higher than that of the reinforced unit at the time of reinforcement, save that an ex factor cannot be greater than 1.0.
10. When a unit in combat, or recently in combat, is reinforced by a unit less than half its size, but not less than one quarter its size, augmentations or modifications of ex factors will be 0.5 times those provided for in paragraph 9, above. When the reinforcing unit is less than one-quarter the size of the reinforced unit, but not less than one-tenth its size, augmentations or modifications of ex factors will be 0.25 times those provided for in paragraph 9, above.
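Treated as a once-per-day state update, these rules can be sketched in code. This is a minimal illustration, not Dupuy's implementation; the posture labels and function names are mine, and the handling of reinforcements smaller than one-tenth of the reinforced unit (unspecified in the rules) is an assumption.

```python
# Sketch of Dupuy's exhaustion-factor (ex) bookkeeping, per the rules above.
# Posture labels and function names are illustrative, not from the source.

EX_MIN, EX_MAX = 0.5, 1.0

# Daily change to ex by posture.
DAILY_DELTA = {
    "offense": -0.05,          # normal offensive combat
    "defense": -0.05,          # normal defensive combat
    "attack_vs_delay": -0.05,  # attacker opposed by delaying tactics
    "delay": 0.0,              # defender in delay posture
    "withdrawal": +0.05,       # withdrawing, not seriously engaged
    "pursuit": 0.0,            # advancing in pursuit, not seriously delayed
    "reserve": +0.1,           # reserve or non-active posture
}

def update_ex(ex: float, posture: str, days: int = 1) -> float:
    """Apply the daily exhaustion change for `days` consecutive days."""
    for _ in range(days):
        ex = min(EX_MAX, max(EX_MIN, ex + DAILY_DELTA[posture]))
    return ex

def reinforce(ex: float, ex_reinf: float, size_ratio: float) -> float:
    """Adjust ex on reinforcement (size_ratio = reinforcing / reinforced size)."""
    if ex_reinf > ex:
        target = ex_reinf   # adopt the (higher) reinforcing unit's ex
    else:
        target = ex + 0.1   # both move 0.1 above the reinforced unit's ex
    if size_ratio >= 0.5:
        scale = 1.0
    elif size_ratio >= 0.25:
        scale = 0.5
    elif size_ratio >= 0.1:
        scale = 0.25
    else:
        return ex           # assumed no effect (not specified in the rules)
    return min(EX_MAX, ex + scale * (target - ex))
```

The clamping to the 0.5–1.0 range reflects the stated minimum and maximum ex values.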
* Approximate reflection of preliminary QJM assessment of effects of casualty and fatigue, WWII engagements. These rates are for division or smaller size; for corps and larger units exhaustion rates are calculated for component divisions and smaller separate units.
EXAMPLES OF APPLICATION
A division in continuous offensive combat for five days stays in the line in inactive posture for two days, then resumes the offensive:
Combat exhaustion effect: 1 – (5 x 0.05) = 0.75;
Recuperation effect: 0.75 + (2 x 0.1) = 0.95.
A division in defensive posture for fifteen days is ordered to undertake a counterattack:
Combat exhaustion effect: 1 – (15 x 0.05) = 0.25; this is below the minimum ex factor, which therefore applies: 0.5;
Recuperation effect: None; ex factor is 0.5.
A division in offensive posture for three days is reinforced by two fresh brigades:
Combat exhaustion effect: 1 – (3 x 0.05) = 0.85;
Reinforcement effect: Augmentation from 0.85 to 1.0.
A division in offensive posture for three days is reinforced by one fresh brigade:
Combat exhaustion effect: 1 – (3 x 0.05) = 0.85;
Reinforcement effect: 0.5 x augmentation from 0.85 to 1.0 = 0.93.
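The arithmetic in the four examples can be checked directly. This standalone sketch simply applies the daily increments and the 0.5–1.0 clamp from the rules quoted earlier:

```python
# Verify the four worked examples above (values rounded as in the text).

def clamp(ex: float) -> float:
    """Hold ex within its stated minimum (0.5) and maximum (1.0)."""
    return min(1.0, max(0.5, ex))

# 1. Five days offense, two days inactive, then resume:
ex1 = clamp(1 - 5 * 0.05)             # 0.75
ex1 = clamp(ex1 + 2 * 0.1)            # 0.95

# 2. Fifteen days defense: the raw value 0.25 falls below the 0.5 floor:
ex2 = clamp(1 - 15 * 0.05)            # 0.5

# 3. Three days offense, reinforced by two fresh brigades (>= half size):
ex3 = clamp(1 - 3 * 0.05)             # 0.85
ex3 = clamp(ex3 + 1.0 * (1.0 - ex3))  # 1.0

# 4. Same, reinforced by one fresh brigade (between 1/4 and 1/2 size):
ex4 = clamp(1 - 3 * 0.05)             # 0.85
ex4 = clamp(ex4 + 0.5 * (1.0 - ex4))  # 0.925, i.e. ~0.93
```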
Battle of Spotsylvania by Thure de Thulstrup (1886) [Library of Congress]
Trevor Dupuy considered intensity to be another combat phenomenon influenced by human factors. The variation in the intensity of combat is an aspect of battle that is widely acknowledged but little studied.
No one who has paid any attention at all to historical combat statistics can have failed to notice that some battles have been very bloody and hard-fought, while others—often under circumstances superficially similar—have reached a conclusion with relatively light casualties on one or both sides. I don’t believe that it is terribly important to find a quantitative reason for such differences, mainly because I don’t think there is any quantitative reason. The differences are usually due to such things as the general circumstances existing when the battles are fought, the personalities of the commanders, and the natures of the missions or objectives of one or both of the hostile forces, and the interactions of these personalities and missions.
From my standpoint the principal reason for trying to quantify the intensity of a battle is for purposes of comparative analysis. Just because casualties are relatively low on one or both sides does not necessarily mean that the battle was not intensive. And if the casualty rates are misinterpreted, then the analysis of the outcome can be distorted. For instance, a battle fought on a flat plain between two military forces will almost invariably have higher casualty rates for both sides than will a battle between those same two forces in mountainous terrain. A battle between those two forces in a heavy downpour, or in cold, wintry weather, will have lower casualties than when the forces are opposed to each other, under otherwise identical circumstances, in good weather. Casualty rates for small forces in a given set of circumstances are invariably higher than the rates for larger forces under otherwise identical circumstances.
If all of these things are taken into consideration, then it is possible to assess combat intensity fairly consistently. The formula I use is as follows:
CI = CR / (sz’ x rc x hc)
Where: CI = Combat Intensity Measure
CR = Casualty rate in percent per day
sz’ = Square root of sz, a factor reflecting the effect of size upon casualty rates, derived from historical experience
rc = The effect of terrain on casualty rates, derived from historical experience
hc = The effect of weather on casualty rates, derived from historical experience
I then (somewhat arbitrarily) identify seven levels of intensity:
0.00 to 0.49 Very low intensity (1)
0.50 to 0.99 Low intensity (56)
1.00 to 1.99 Normal intensity (213)
2.00 to 2.99 High intensity (101)
3.00 to 3.99 Very high intensity (30)
4.00 to 5.00 Extremely high intensity (17)
Over 5.00 Catastrophic outcome (20)
The numbers in parentheses show the distribution of intensity on each side in 219 battles in DMSi’s QJM data base. The catastrophic battles include: the Russians in the Battles of Tannenberg and Gorlice-Tarnow on the Eastern Front in World War I; the Russians on the first day of the Battle of Kursk in July 1943; a British defeat in Malaya in December 1941; and 16 Japanese defeats on Okinawa. Each of these catastrophic instances, quantitatively identified, is consistent with a qualitative assessment of the outcome.
[UPDATE]
As Clinton Reilly pointed out in the comments, this works better when the equation variables are provided. These are from Trevor N. Dupuy, Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War (Falls Church, VA: NOVA Publications, 1995), pp. 146, 147, 149.
A military force that is surprised is severely disrupted, and its fighting capability is severely degraded. Surprise is usually achieved by the side that has the initiative, and that is attacking. However, it can be achieved by a defending force. The most common example of defensive surprise is the ambush.
Perhaps the best example of surprise achieved by a defender was that which Hannibal gained over the Romans at the Battle of Cannae, 216 BC, in which the Romans were surprised by the unexpected defensive maneuver of the Carthaginians. This permitted the outnumbered force, aided by the multiplying effect of surprise, to achieve a double envelopment of their numerically stronger force.
It has been hypothesized, and the hypothesis rather conclusively substantiated, that surprise can be quantified in terms of the enhanced mobility (quantifiable) which surprise provides to the surprising force, by the reduced vulnerability (quantifiable) of the surpriser, and the increased vulnerability (quantifiable) of the side that is surprised.
When men believe that their chances of survival in a combat situation become less than some value (which is probably quantifiable, and is unquestionably related to a strength ratio or a power ratio), they cannot and will not advance. They take cover so as to obtain some protection, and by so doing they redress the strength or power imbalance. A force with strength y (a strength less than opponent’s strength x) has its strength multiplied by the effect of defensive posture (let’s give it the symbol p) to a greater power value, so that power py approaches, equals, or exceeds x, the unenhanced power value of the force with the greater strength x. It was because of this that [Carl von] Clausewitz–who considered that battle outcome was the result of a mathematical equation[1]–wrote that “defense is a stronger form of fighting than attack.”[2] There is no question that he considered that defensive posture was a combat multiplier in this equation. It is obvious that the phenomenon of the strengthening effect of defensive posture is a combination of physical and human factors.
Dupuy elaborated on his understanding of Clausewitz’s comparison of the impact of the defensive and offensive posture in combat in his book Understanding War.
The statement [that the defensive is the stronger form of combat] implies a comparison of relative strength. It is essentially scalar and thus ultimately quantitative. Clausewitz did not attempt to define the scale of his comparison. However, by following his conceptual approach it is possible to establish quantities for this comparison. Depending upon the extent to which the defender has had the time and capability to prepare for defensive combat, and depending also upon such considerations as the nature of the terrain which he is able to utilize for defense, my research tells me that the comparative strength of defense to offense can range from a factor with a minimum value of about 1.3 to maximum value of more than 3.0.[3]
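The relationship Dupuy describes can be illustrated with a trivial calculation. This is a sketch only: the 1.3 to 3.0+ range for the posture factor p comes from the quote above, but the scenario strengths are invented.

```python
# Illustration of Dupuy's p * y vs. x relation: a defender of strength y,
# multiplied by a posture factor p, can match an attacker of strength x.
# The scenario numbers below are invented for illustration.

def min_posture_factor(x: float, y: float) -> float:
    """Smallest p for which the defender's power p * y equals the attacker's x."""
    return x / y

# A defender outnumbered 2:1 needs p = 2.0, which falls within the
# roughly 1.3 to 3.0+ range Dupuy cites for the strength of the defense.
p_needed = min_posture_factor(x=100.0, y=50.0)
```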
NOTES
[1] Dupuy believed Clausewitz articulated a fundamental law for combat theory, which Dupuy termed the “Law of Numbers.” One should bear in mind this concept of a theory of combat is something different than a fundamental law of war or warfare. Dupuy’s interpretation of Clausewitz’s work can be found in Understanding War: History and Theory of Combat (New York: Paragon House, 1987), 21-30.