Gerard credited Trevor Dupuy and his colleagues at the Historical Evaluation Research Organization (HERO) with codifying “the military appropriation of the concept” of lethality, which was defined as: “the inherent capability of a given weapon to kill personnel or make materiel ineffective in a given period, where capability includes the factors of weapon range, rate of fire, accuracy, radius of effects, and battlefield mobility.”
It is gratifying that Gerard attributes this to Dupuy and HERO, but some clarification is needed. The definition she quoted was, in fact, one provided to HERO for the purposes of a study sponsored by the Advanced Tactics Project (AVTAC) of the U.S. Army Combat Developments Command. The 1964 study report, Historical Trends Related to Weapon Lethality, provided the starting point for Dupuy’s subsequent theorizing about combat.
In his own works, Dupuy used a simpler definition of lethality:
“Lethality—the ability to injure and if possible to kill people.” [The Evolution of Weapons and Warfare (Indianapolis, IN: The Bobbs-Merrill Company, Inc., 1980), p. 286]
“All weapons have at least one common characteristic: lethality. This is the ability to injure and, if possible, to kill people.” [Attrition: Forecasting Battle Casualties and Equipment Losses in Modern War (Falls Church, VA: NOVA Publications, 1995), p. 25, which was drawn from earlier HERO reports].
He also used the terms lethality and firepower interchangeably in his writings. The wording of the original 1964 AVTAC definition tracks closely with the lethality scoring methodology Dupuy and his HERO colleagues developed for the study, known as the Theoretical Lethality Index/Operational Lethality Index (TLI/OLI). The original purpose of this construct was to permit some measurement of lethality by which weapons could be compared to each other (TLI), and to each other through history (OLI). It worked well enough that he incorporated it into his combat models, the Quantified Judgement Model (QJM) and Tactical Numerical Deterministic Model (TNDM).
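To make the TLI/OLI construct concrete, here is a minimal sketch of how such a score might be computed. The multiplicative structure mirrors the factors named in the 1964 AVTAC definition (rate of fire, range, accuracy, radius of effects, and battlefield mobility), and the dispersion adjustment reflects the OLI’s role in cross-era comparison; the field names, numbers, and exact functional form are illustrative assumptions on my part, not Dupuy’s published scoring tables.

```python
from dataclasses import dataclass

# Illustrative TLI/OLI-style lethality score. The factors mirror those in
# the 1964 AVTAC definition; the values and the exact functional form are
# hypothetical, not Dupuy's actual methodology.

@dataclass
class Weapon:
    name: str
    rate_of_fire: float       # relative rounds per unit time
    range_factor: float       # relative effective range
    accuracy: float           # fraction of rounds on target, 0..1
    radius_of_effects: float  # relative area affected per round
    mobility: float           # relative battlefield mobility

def tli(w: Weapon) -> float:
    """Theoretical Lethality Index: a weapon's inherent killing capacity,
    here taken as the product of its characteristic factors."""
    return (w.rate_of_fire * w.range_factor * w.accuracy
            * w.radius_of_effects * w.mobility)

def oli(w: Weapon, era_dispersion: float) -> float:
    """Operational Lethality Index: the TLI discounted by the dispersion
    of targets typical of the weapon's era, enabling cross-era comparison."""
    return tli(w) / era_dispersion

musket = Weapon("flintlock musket", rate_of_fire=3, range_factor=1,
                accuracy=0.1, radius_of_effects=1, mobility=1)
rifle = Weapon("assault rifle", rate_of_fire=100, range_factor=4,
               accuracy=0.3, radius_of_effects=1, mobility=1)

print(tli(rifle) / tli(musket))          # inherent lethality ratio: 400x
print(oli(rifle, 200) / oli(musket, 1))  # 2x once era dispersion is applied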
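```

The point of the two-index structure is visible in the output: the theoretical gap between the weapons is enormous, but the operational gap shrinks drastically once the dispersion of targets typical of each era is taken into account.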
This is amply demonstrated by the preceding [verities]. All writers on military affairs (including this one) need periodically to remind themselves of this. In military analysis it is often necessary to focus on some particular aspect of combat. However, the results of such closely focused analyses must then be evaluated in the context of the brutal, multifarious, overlapping realities of war.
Trevor Dupuy was sometimes accused of attempting to reduce war to a mathematical equation. A casual reading of his writings might give that impression, but anyone who honestly engages with his ideas quickly finds this to be an erroneous conclusion. Yet Dupuy believed the temptation to simplify and abstract combat and warfare was common enough that he embedded a warning against doing so into his basic theory on the subject. He firmly believed that human behavior comprises the most important aspect of combat, yet it is all too easy to lose sight of the human experience of war amid reckonings of who won or lost and why, and counts of weapons, people, and casualties. As a military historian, he was keenly aware that the human stories behind the numbers—however imperfectly recorded and told—tell us more about the reality of war than mere numbers on their own ever will.
Military history demonstrates that whenever an outnumbered force was successful, its combat power was greater than that of the loser. All other things being equal, God has always been on the side of the heaviest battalions and always will be.
In recent years two or three surveys of modern historical experience have led to the finding that relative strength is not a conclusive factor in battle outcome. As we have seen, a superficial analysis of historical combat could support this conclusion. There are a number of examples of battles won by the side with inferior numbers. In many battles, outnumbered attackers were successful.
These examples are not meaningful, however, until the comparison includes the circumstances of the battles and opposing forces. If one takes into consideration surprise (when present), relative combat effectiveness of the opponents, terrain features, and the advantage of defensive posture, the result may be different. When all of the circumstances are quantified and applied to the numbers of troops and weapons, the side with the greater combat power on the battlefield is always seen to prevail.
The concept of combat power is foundational to Dupuy’s theory of combat. He did not originate it; the notion that battle encompasses something more than just “physics-based” aspects likely originated with British theorist J.F.C. Fuller during World War I and migrated into U.S. Army thinking via post-war doctrinal revision. Dupuy refined and sharpened the Army’s vague conceptualization of it in the first iterations of his Quantified Judgement Model (QJM) developed in the 1970s.
Dupuy initially defined his idea of combat power in formal terms, as an equation in the QJM:
P = (S x V x CEV)
Where:
P = Combat Power
S = Force Strength
V = Environmental and Operational Variable Factors
CEV = Combat Effectiveness Value
Essentially, combat power is the product of three elements (see the sketch after this list):
force strength as measured in his models through the Theoretical/Operational Lethality Index (TLI/OLI), a firepower scoring method for comparing the lethality of weapons relative to each other;
the intangible environmental and operational variables that affect each circumstance of combat; and
the intangible human behavioral (or moral) factors that determine the fighting quality of a combat force.
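As a concrete illustration of the formula, here is a minimal Python sketch computing P = S × V × CEV for a hypothetical engagement. Treating V as a simple product of individual variable factors, and the specific values used, are assumptions for illustration only; the QJM’s actual variable tables and adjustment procedures are far more detailed.

```python
# Minimal sketch of the QJM combat power relationship P = S x V x CEV.
# Factor values are hypothetical placeholders, not Dupuy's published tables.

def combat_power(force_strength: float,
                 variable_factors: dict[str, float],
                 cev: float) -> float:
    """Combat power as the product of force strength (an OLI-based total),
    the combined environmental/operational variables, and the combat
    effectiveness value (CEV)."""
    v = 1.0
    for factor in variable_factors.values():
        v *= factor  # combine circumstance factors multiplicatively
    return force_strength * v * cev

# Hypothetical engagement: the defender is outnumbered but holds posture
# and terrain advantages and fields a more effective force.
attacker = combat_power(100_000, {"surprise": 1.2}, cev=1.0)
defender = combat_power(60_000, {"posture": 1.5, "terrain": 1.3}, cev=1.2)
print(f"Attacker P = {attacker:,.0f}, Defender P = {defender:,.0f}")
```

Run with these made-up inputs, the outnumbered defender’s combat power (140,400) exceeds the attacker’s (120,000), which is exactly the dynamic behind the verity quoted above: when an outnumbered force wins, its combat power was nonetheless the greater.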
Dupuy’s theory of combat power and its functional realization in his models have two virtues. First, unlike most existing combat models, it incorporates the effects of those intangible factors unique to each engagement or battle that influence combat outcomes, but are not readily measured in physical terms. As Dupuy argued, combat consists of more than duels between weapons systems. A list of those factors can be found below.
Second, the analytical research on real-world combat data done by him and his colleagues allowed him to begin establishing the specific nature of combat processes and their interaction, which are only abstracted in other combat theories and models. Those factors and processes for which he had developed a quantification hypothesis are denoted by an asterisk below.
This is the phenomenon that Clausewitz called “friction in war.” Friction is largely due to the disruptive, suppressive, and dispersal effects of firepower upon an aggregation of people. The pace of actual combat operations will be much slower than the progress of field tests and training exercises, even highly realistic ones. Tests and exercises are not truly realistic portrayals of combat, because they lack the element of fear in a lethal environment, present only in real combat. Allowances must be made in planning and execution for the effects of friction, including mistakes, breakdowns, and confusion.
While Clausewitz asserted that the effects of friction on the battlefield could not be measured because they were largely due to chance, Dupuy believed that its influence could, in fact, be gauged and quantified. He identified at least two distinct combat phenomena he thought reflected measurable effects of friction: the differences in casualty rates between large and small sized forces, and diminishing returns from adding extra combat power beyond a certain point in battle. He also believed much more research would be necessary to fully understand and account for this.
Dupuy was skeptical of the accuracy of combat models that failed to account for this interaction between operational and human factors on the battlefield. He was particularly doubtful about approaches that started by calculating the outcomes of combat between individual small units or weapons platforms based on the Lanchester equations or “physics-based” estimates, then used these as inputs for brigade- and division-level battles, the results of which in turn were used as the basis for determining the consequences of theater-level campaigns. He thought that such models, known as “bottom-up,” hierarchical, or aggregated approaches (the prevailing method of campaign combat modeling in the U.S.), would be incapable of accurately capturing and simulating the effects of friction.
It is doubtful if any of the people who are today writing on the effect of technology on warfare would consciously disagree with this statement. Yet, many of them tend to ignore the impact of firepower on dispersion, and as a consequence they have come to believe that the more lethal the firepower, the more deaths, disruption, and suppression it will cause. In fact, as weapons have become more lethal intrinsically, their casualty-causing capability has either declined or remained about the same because of greater dispersion of targets. Personnel and tank loss rates of the 1973 Arab-Israeli War, for example, were quite similar to those of intensive battles of World War II and the casualty rates in both of these wars were less than in World War I. (p. 7)
Research and analysis of real-world historical combat data by Dupuy and TDI has identified at least four distinct combat effects of firepower: infliction of casualties (lethality), disruption, suppression, and dispersion. All of them were found to be heavily influenced—if not determined—by moral (human) factors.
Again, I have written extensively on this blog about Dupuy’s theory of the historical relationship between weapon lethality, dispersion on the battlefield, and the long-term decline in average daily combat casualty rates. TDI President Chris Lawrence has done further work on the subject as well.
Successful defense requires depth and reserves. It has been asserted that outnumbered military forces cannot afford to withhold valuable firepower from ongoing defensive operations and keep it idle in reserve posture. History demonstrates that this is specious logic, and that linear defense is disastrously vulnerable. Napoleon’s crossing of the Po in his first campaign in 1796 is perhaps the classic demonstration of the fallacy of linear (or cordon) defense.
The defender may have all of his firepower committed to the anticipated operational area, but the attacker’s advantage in having the initiative can always render much of that defensive firepower useless. Anyone who suggests that modern technology will facilitate the shifting of engaged firepower in battle overlooks three considerations: (a) the attacker can inhibit or prevent such movement by both direct and indirect means, (b) a defender engaged in a fruitless firefight against limited attacks by numerically inferior attackers is neither physically nor psychologically attuned to making lateral movements even if the enemy does not prevent or inhibit it, and (c) withdrawal of forces from the line (even if possible) provides an alert attacker with an opportunity for shifting the thrust of his offensive to the newly created gap in the defenses.
Napoleon recognized that hard-fought combat is usually won by the side committing the last reserves. Marengo, Borodino, and Ligny are typical examples of Napoleonic victories that demonstrated the importance of having resources available to tip the scales. His two greatest defeats, Leipzig and Waterloo, were suffered because his enemies still had reserves after his were all committed. The importance of committing the last reserves was demonstrated with particular poignancy at Antietam in the American Civil War. In World War II there is no better example than that of Kursk. [pp. 5-6]
Dupuy’s observations about the need for depth and reserves for a successful defense take on even greater salience in light of the probable character of the near-future battlefield. Terrain lost by an unsuccessful defense may be extremely difficult to regain under prevailing circumstances.
The interaction of increasing weapon lethality and the operational and human circumstantial variables of combat continues to drive the long-term trend toward greater dispersion of combat forces in frontage and depth.
As during the Cold War, the stability of alliances may depend on a willingness to defend forward in the teeth of effective anti-access/area denial (A2/AD) regimes that will make the strategic and operational deployment of reserves risky as well. The successful suppression of A2/AD networks might court a nuclear response, however.
Finding an effective solution for enabling a successful defense-in-depth in the future will be a task of great difficulty.
Trevor Dupuy was skeptical about the role of technology in determining outcomes in warfare. While he did believe technological innovation was crucial, he did not think that technology by itself decided success or failure on the battlefield. As he wrote in an article published posthumously in 1997,
I am a humanist, who is also convinced that technology is as important today in war as it ever was (and it has always been important), and that any national or military leader who neglects military technology does so to his peril and that of his country. But, paradoxically, perhaps to an extent even greater than ever before, the quality of military men is what wins wars and preserves nations. (emphasis added)
His conclusion was largely based upon his quantitative approach to studying military history, particularly the way humans have historically responded to the relentless trend of increasingly lethal military technology.
The Historical Relationship Between Weapon Lethality and Battle Casualty Rates
Based on a 1964 study for the U.S. Army, Dupuy identified a long-term historical relationship between increasing weapon lethality and decreasing average daily casualty rates in battle. (He summarized these findings in his book, The Evolution of Weapons and Warfare (1980). The quotes below are taken from it.)
Since antiquity, military technological development has produced weapons of ever increasing lethality. The increase in lethality has been particularly dramatic since the mid-19th century.
In contrast, however, the average daily casualty rate in combat has been in decline since 1600. With notable exceptions during the 19th century, casualty rates have continued to fall through the late 20th century. If technological innovation has produced vastly more lethal weapons, why have there been fewer average daily casualties in battle?
Dupuy’s answer was human adaptation to ever more lethal weapons (the toy calculation after this list illustrates the offsetting effect), chiefly:
greater dispersion of forces on the battlefield;
the granting of greater freedom to maneuver through decentralized decision-making and enhanced mobility; and
improved use of combined arms and interservice coordination.
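The offsetting arithmetic can be shown with a toy calculation. The numbers below are deliberately made up (Dupuy’s study measured actual dispersion and casualty figures): if weapons become 100 times more lethal while troops disperse across 200 times the area, the daily casualty rate falls even as intrinsic lethality soars.

```python
# Toy illustration with hypothetical numbers, not Dupuy's measured values.
# The daily casualty rate is modeled as proportional to lethality density:
# aggregate weapon lethality divided by the area over which targets disperse.

def casualty_rate_index(total_lethality: float, area_occupied: float) -> float:
    return total_lethality / area_occupied

earlier_era = casualty_rate_index(total_lethality=1.0, area_occupied=1.0)
later_era = casualty_rate_index(total_lethality=100.0, area_occupied=200.0)

print(later_era / earlier_era)  # 0.5: daily rate halves despite 100x lethality
```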
Technological Innovation and Organizational Assimilation
Dupuy noted that the historical correlation between weapons development and their use in combat has not been linear because the pace of integration has been largely determined by military leaders, not the rate of technological innovation. “The process of doctrinal assimilation of new weapons into compatible tactical and organizational systems has proved to be much more significant than invention of a weapon or adoption of a prototype, regardless of the dimensions of the advance in lethality.” [p. 337]
As a result, the history of warfare has been exemplified more often by a discontinuity between weapons and tactical systems than effective continuity.
During most of military history there have been marked and observable imbalances between military efforts and military results, an imbalance particularly manifested by inconclusive battles and high combat casualties. More often than not this imbalance seems to be the result of incompatibility, or incongruence, between the weapons of warfare available and the means and/or tactics employing the weapons. [p. 341]
In short, military organizations typically have not been fully effective at exploiting new weapons technology to advantage on the battlefield. Truly decisive alignment between weapons and systems for their employment has been exceptionally rare. Dupuy asserted that
There have been six important tactical systems in military history in which weapons and tactics were in obvious congruence, and which were able to achieve decisive results at small casualty costs while inflicting disproportionate numbers of casualties. These systems were:
the Macedonian system of Alexander the Great, ca. 340 B.C.
the Roman system of Scipio and Flaminius, ca. 200 B.C.
the Mongol system of Genghis Khan, ca. A.D. 1200
the English system of Edward I, Edward III, and Henry V, ca. A.D. 1350
the French system of Napoleon, ca. A.D. 1800
the German blitzkrieg system, ca. A.D. 1940 [p. 341]
With one caveat, Dupuy could not identify any single weapon that had decisively changed warfare in and of itself, without a corresponding human adaptation in its use on the battlefield.
Save for the recent significant exception of strategic nuclear weapons, there have been no historical instances in which new and lethal weapons have, of themselves, altered the conduct of war or the balance of power until they have been incorporated into a new tactical system exploiting their lethality and permitting their coordination with other weapons; the full significance of this one exception is not yet clear, since the changes it has caused in warfare and the influence it has exerted on international relations have yet to be tested in war.
Until the present time, the application of sound, imaginative thinking to the problem of warfare (on either an individual or an institutional basis) has been more significant than any new weapon; such thinking is necessary to real assimilation of weaponry; it can also alter the course of human affairs without new weapons. [p. 340]
Technological Superiority and Offset Strategies
Will new technologies like robotics and artificial intelligence provide the basis for a seventh tactical system where weapons and their use align with decisive battlefield results? Maybe. If Dupuy’s analysis is accurate, however, it is more likely that future increases in weapon lethality will continue to be counterbalanced by human ingenuity in how those weapons are used, yielding indeterminate—perhaps costly and indecisive—battlefield outcomes.
Genuinely effective congruence between weapons and force employment continues to be difficult to achieve. Dupuy believed the preconditions necessary for successful technological assimilation since the mid-19th century have been a combination of conducive military leadership; effective coordination of national economic, technological-scientific, and military resources; and the opportunity to evaluate and analyze battlefield experience.
Can the U.S. meet these preconditions? That certainly seemed to be the goal of the so-called Third Offset Strategy, articulated in 2014 by the Obama administration. It called for maintaining “U.S. military superiority over capable adversaries through the development of novel capabilities and concepts.” Although the Trump administration has stopped using the term, it has made “maximizing lethality” the cornerstone of the 2018 National Defense Strategy, with increased funding for the Defense Department’s modernization priorities in FY2019 (though perhaps not in FY2020).
Dupuy’s original work on weapon lethality in the 1960s coincided with development in the U.S. of what advocates of a “revolution in military affairs” (RMA) have termed the “First Offset Strategy,” which involved the potential use of nuclear weapons to balance Soviet superiority in manpower and material. RMA proponents pointed to the lopsided victory of the U.S. and its allies over Iraq in the 1991 Gulf War as proof of the success of a “Second Offset Strategy,” which exploited U.S. precision-guided munitions, stealth, and intelligence, surveillance, and reconnaissance systems developed to counter the Soviet Army in Germany in the 1980s. Dupuy was one of the few to attribute the decisiveness of the Gulf War both to airpower and to the superior effectiveness of U.S. combat forces.
Trevor Dupuy certainly was not an anti-technology Luddite. He recognized the importance of military technological advances and the need to invest in them. But he believed that the human element has always been more important on the battlefield. Most wars in history have been fought without a clear-cut technological advantage for one side; some have been bloody and pointless, while others have been decisive for reasons other than technology. While the future is certainly unknown and past performance is not a guarantor of future results, it would be a gamble to rely on technological superiority alone to provide the margin of success in future warfare.
In an insightful essay over at The Strategy Bridge, “Lethality: An Inquiry,” Marine Corps officer Olivia Gerard accomplishes one of the most important, yet most often overlooked, aspects of successfully thinking about and planning for war: questioning a basic assumption. She achieves this by posing a simple question: “What is lethality?”
Gerard notes that the current U.S. National Defense Strategy is predicated on lethality; as it states: “A more lethal, resilient, and rapidly innovating Joint Force, combined with a robust constellation of allies and partners, will sustain American influence and ensure favorable balances of power that safeguard the free and open international order.” She also identifies the linkage in the strategy between lethality and deterrence via a supporting statement from Deputy Secretary of Defense Patrick Shanahan: “Everything we do is geared toward one goal: maximizing lethality. A lethal force is the strongest deterrent to war.”
After pointing out that the strategy does not define the concept of lethality, Gerard responds to Shanahan’s statement by asking “why?”
She uses this as a jumping off point to examine the meaning of lethality in warfare. Starting from the traditional understanding of lethality as a tactical concept, Gerard walks through the way it has been understood historically. From this, she formulates a construct for understanding the relationship between lethality and strategy:
Organizational lethality emerges from tactical lethality that is institutionally codified. Tactical lethality is nested within organizational lethality, which is nested within strategic lethality. Plugging these terms into an implicit calculus, we can rewrite strategic lethality as the efficacy with which we can form intentional deadly relationships towards targets that can be actualized towards political ends.
To this, Gerard appends two interesting caveats: “Notice first that the organizational component becomes implicit. What remains outside, however, is the intention–a meta-intention–to form these potential deadly relationships in the first place.”
It is the second of these caveats—the intent to connect lethality to a strategic end—that informs Gerard’s conclusion. While the National Defense Strategy does not define the term, she observes that by explicitly leveraging the threat to use lethality to bolster deterrence, it supplies the credibility needed to make deterrence viable. “Proclaiming lethality a core tenet, especially in a public strategic document, is the communication of the threat.”
Gerard’s exploration of lethality and her proposed framework for understanding it provide a very useful way of thinking about the way it relates to warfare. It is definitely worth your time to read.
What might be just as interesting, however, are the caveats to her construct because they encompass a lot of what is problematic about the way the U.S. military thinks—explicitly and implicitly—about tactical lethality and how it is codified into concepts of organizational lethality. (While I have touched on some of those already, Gerard gives more to reflect on. More on that later.)
Gerard also references the definition of lethality Trevor Dupuy developed for his 1964 study of historical trends in weapon lethality. She notes that his definition was too narrow for the purposes of her inquiry, but the historical relationship between lethality, casualties, and dispersion on the battlefield that Dupuy found in that study formed the basis for his subsequent theories of warfare and models of combat. (I will write more about those in the future as well.)
Strategic strike? The Army needs to worry about increasing tubes (more than just 155s) and less on fancy munitions. Quantity is a quality all its own in tactical and operational level fires.
— Schrödinger’s Strategist (@barefootboomer) October 1, 2018
@barefootboomer makes a fair point. It appears that the majority of the U.S. Army’s current efforts to improve its artillery capabilities are aimed at increasing the lethality and capability of individual systems, not at adding more guns to the force structure.
Are Army combat units undergunned in the era of multi-domain battle? The Mobile Protected Firepower program is intended to provide light tanks with high-caliber direct-fire guns to the Infantry Brigade Combat Teams. In his recent piece at West Point’s Modern War Institute blog, Captain Brandon Morgan recommended increasing the proportion of U.S. corps rocket artillery to tube artillery systems from roughly 1:4 to something closer to the current Russian Army ratio of 3:4.
Should the Army be adding additional direct or indirect fire systems to its combat forces? What types, and at what levels? More tubes per battery? More batteries? More battalions?
What do you think?
UPDATE: I got a few responses to my queries. On balance, they reflected this view:
Quantity has a quality all its own until it’s outranged, then it has none at all. The Army shouldn’t seek range, precision, responsiveness, and capacity in isolation, but holistically.
More is always better when it comes to Indirect fires. We’ve shifted to reliance on Joint fires and reduced our organic capability, in number of tubes and battalions. All our potential peer/near-peer adversaries outrange and out gun us. We need to fix that.
— Schrödinger’s Strategist (@barefootboomer) October 5, 2018
There were not many specific suggestions about changes to the existing force structure, except for this one:
More mortars of all types (light, medium, heavy) at battalion and below.
Are there any other thoughts or suggestions out there about this, or is the consensus that the Army is already pretty much on the right course toward fixing its fires problems?
The U.S. Army Long Range Fires Cross Functional Team
A recent article in Army Times by Todd South looked at some of the changes being implemented by the U.S. Army cross functional team charged with prioritizing improvements in the service’s long range fires capabilities. To meet a requirement to double the ranges of its artillery systems within five years, “the Army has embarked upon three tiers of focus, from upgrading old school artillery cannons, to swapping out its missile system to double the distance it can fire, and giving the Army a way to fire surface-to-surface missiles at ranges of 1,400 miles.”
The Extended Range Cannon Artillery program is working on rocket-assisted munitions to double the range of the Army’s workhorse 155mm guns to 24 miles, with some special rounds capable of reaching targets up to 44 miles away. As I touched on recently, the Army is also looking into ramjet rounds that could potentially increase striking range to 62 miles.
To develop the capability for even longer range fires, the Army implemented a Strategic Strike Cannon Artillery program for targets up to nearly 1,000 miles, and a Strategic Fires Missile effort enabling targeting out to 1,400 miles.
The Army is also emphasizing retention of trained artillery personnel and an improved training regime that includes large-scale joint exercises and increased live-fire opportunities.
Morgan’s specific recommendations in his Modern War Institute piece (cited above) included:
Increasing the proportion of U.S. corps rocket artillery to tube artillery systems from roughly 1:4 to something closer to the current Russian Army ratio of 3:4.
Fielding a tube artillery system capable of meeting or surpassing the German-made PzH 2000, which can strike targets out to 30 kilometers with regular rounds, sustain a firing rate of 10 rounds per minute, and strike targets with five rounds simultaneously.
Focusing on integrating tube and rocket artillery with a multi-domain, joint force to enable the destruction of the majority of enemy maneuver forces before friendly ground forces reach direct-fire range.
Allowing tube artillery to be task-organized below the brigade level to provide indirect fires capabilities to maneuver battalions, and making rocket artillery available to division and brigade commanders. (Morgan contends that the allocation of indirect fires capabilities to maneuver battalions ended with the disbanding of the Army’s armored cavalry regiments in 2011.)
Increasing training in the use of unmanned aerial vehicle (UAV) assets at the tactical level to locate, target, and observe fires.
U.S. Air Force and U.S. Navy Face Long Range Penetrating Strike Challenges
The Army’s emphasis on improving long range fires appears timely in light of the challenges the U.S. Air Force and U.S. Navy face in conducting long range penetrating strike missions in the A2/AD environment. A fascinating analysis by Jerry Hendrix for the Center for a New American Security traces the current strategic problems to U.S. policy decisions taken in the early 1990s following the end of the Cold War.
In an effort to generate a “peace dividend” from the fall of the Soviet Union, the Clinton administration elected to simplify the U.S. military force structure for conducting long range air attacks by relieving the Navy of its associated responsibilities and assigning the mission solely to the Air Force. The Navy no longer needed to replace its aging carrier-based medium range bombers and the Air Force pushed replacements for its aging B-52 and B-1 bombers into the future.
Both the Air Force and Navy emphasized development and acquisition of short range tactical aircraft which proved highly suitable for the regional contingencies and irregular conflicts of the 1990s and early 2000s. Impressed with U.S. capabilities displayed in those conflicts, China, Russia, and Iran invested in air defense and ballistic missile technologies specifically designed to counter American advantages.
The U.S. now faces a strategic environment where its long range strike platforms lack the range and operational and technological capability to operate within these A2/AD “bubbles.” The Air Force has far too few long range bombers with stealth capability, and neither the Air Force nor Navy tactical stealth aircraft can carry long range strike missiles. The missiles themselves lack stealth capability. The short range of the Navy’s aircraft and insufficient numbers of screening vessels leave its aircraft carriers vulnerable to ballistic missile attack.
Remedying this state of affairs will take time and major investments in new weapons and technological upgrades. However, with certain upgrades, Hendrix sees the current Air Force and Navy force structures as capable of providing the basis for a long range penetrating strike operational concept effective against A2/AD defenses. The unanswered question is whether these upgrades will be implemented at all.