
Wargaming Multi-Domain Battle: The Base Of Sand Problem

“JTLS Overview Movie by Rolands & Associates” [YouTube]

[This piece was originally posted on 10 April 2017.]

As the U.S. Army and U.S. Marine Corps work together to develop their joint Multi-Domain Battle concept, wargaming and simulation will play a significant role. Aspects of the construct have already been explored through the Army’s Unified Challenge, Joint Warfighting Assessment, and Austere Challenge exercises, and will be examined further in upcoming Unified Quest and U.S. Army, Pacific war games and exercises. U.S. Pacific Command and U.S. European Command also have simulations and exercises scheduled.

A great deal of importance has been placed on the knowledge derived from these activities. As the U.S. Army Training and Doctrine Command recently stated,

Concept analysis informed by joint and multinational learning events…will yield the capabilities required of multi-domain battle. Resulting doctrine, organization, training, materiel, leadership, personnel and facilities solutions will increase the capacity and capability of the future force while incorporating new formations and organizations.

There is, however, a problem afflicting the Defense Department’s wargames, of which the military operations research and models and simulations communities have long been aware, but have been slow to address: their models are built on a thin foundation of empirical knowledge about the phenomenon of combat. None have proven the ability to replicate real-world battle experience. This is known as the “base of sand” problem.

A Brief History of The Base of Sand

All combat models and simulations are abstracted theories of how combat works. Combat modeling in the United States began in the early 1950s as an extension of the military operations research that emerged during World War II. Early model designers did not have a large base of empirical combat data from which to derive their models. Although a start had been made during World War II and the Korean War to collect real-world battlefield data from observation and military unit records, an effort that provided useful initial insights, no systematic effort has ever been made to identify and assemble such information. In the absence of extensive empirical combat data, model designers turned instead to concepts of combat drawn from official military doctrine (usually of uncertain provenance), subject matter expertise, historians and theorists, the physical sciences, or their own best guesses.

As the U.S. government’s interest in scientific management methods blossomed in the late 1950s and 1960s, the Defense Department’s support for operations research and use of combat modeling in planning and analysis grew as well. By the early 1970s, it became evident that basic research on combat had not kept pace. A survey of existing combat models by Martin Shubik and Garry Brewer for RAND in 1972 concluded that

Basic research and knowledge is lacking. The majority of the MSGs [models, simulations and games] sampled are living off a very slender intellectual investment in fundamental knowledge…. [T]he need for basic research is so critical that if no other funding were available we would favor a plan to reduce by a significant proportion all current expenditures for MSGs and to use the saving for basic research.

In 1975, Jacob Stockfisch took a direct look, for RAND, at the use of data and combat models for managing decisions regarding conventional military forces. He emphatically stated that “[T]he need for better and more empirical work, including operational testing, is of such a magnitude that a major reallocating of talent from model building to fundamental empirical work is called for.”

In 1991, Paul K. Davis, an analyst for RAND, and Donald Blumenthal, a consultant to the Lawrence Livermore National Laboratory, published an assessment of the state of Defense Department combat modeling. It began as a discussion among senior scientists and analysts from RAND, Livermore, and the NASA Jet Propulsion Laboratory; the Defense Advanced Research Projects Agency (DARPA) sponsored the resulting report, The Base of Sand Problem: A White Paper on the State of Military Combat Modeling.

Davis and Blumenthal contended

The [Defense Department] is becoming critically dependent on combat models (including simulations and war games)—even more dependent than in the past. There is considerable activity to improve model interoperability and capabilities for distributed war gaming. In contrast to this interest in model-related technology, there has been far too little interest in the substance of the models and the validity of the lessons learned from using them. In our view, the DoD does not appreciate that in many cases the models are built on a base of sand…

[T]he DoD’s approach in developing and using combat models, including simulations and war games, is fatally flawed—so flawed that it cannot be corrected with anything less than structural changes in management and concept. [Original emphasis]

As a remedy, the authors recommended that the Defense Department create an office to stimulate a national military science program. This Office of Military Science would promote and sponsor basic research on war and warfare while still relying on the military services and other agencies for most research and analysis.

Davis and Blumenthal initially drafted their white paper before the 1991 Gulf War, but the performance of the Defense Department’s models and simulations in that conflict underscored the very problems they described. Defense Department wargames during initial planning for the conflict reportedly predicted tens of thousands of U.S. combat casualties. These simulations were said to have led to major changes in U.S. Central Command’s operational plan. When the casualty estimates leaked, they caused great public consternation and inevitable Congressional hearings.

While all pre-conflict estimates of U.S. casualties in the Gulf War turned out to be too high, the Defense Department’s predictions were the most inaccurate, by several orders of magnitude. This performance, along with Davis and Blumenthal’s scathing critique, should have called the Defense Department’s entire modeling and simulation effort into question. But it did not.

The Problem Persists

The Defense Department’s current generation of models and simulations harbor the same weaknesses as the ones in use in the 1990s. Some are new iterations of old models with updated graphics and code, but using the same theoretical assumptions about combat. In most cases, no one other than the designers knows exactly what data and concepts the models are based upon. This practice is known in the technology world as black boxing. While black boxing may be an essential business practice in the competitive world of government consulting, it makes independently evaluating the validity of combat models and simulations nearly impossible. This should be of major concern because many models and simulations in use today contain known flaws.

Some, such as the Joint Theater Level Simulation (JTLS), use the Lanchester equations for calculating attrition in ground combat. However, multiple studies have shown that these equations are incapable of replicating real-world combat. British engineer Frederick W. Lanchester developed and published them in 1916 as an abstract conceptualization of aerial combat, stating himself that he did not believe they were applicable to ground combat. If Lanchester-based models cannot accurately represent historical combat, how can there be any confidence that they realistically predict future combat?
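To make the critique concrete, here is a minimal sketch of the attrition calculation that Lanchester’s “square law” (aimed-fire) equations describe, written as a short Python function. It illustrates the mathematical form only; it is not code from JTLS or any other fielded model, and the strengths and coefficients are arbitrary.

```python
# Minimal sketch of Lanchester "square law" (aimed fire) attrition:
#   dB/dt = -r * R   (Blue's losses are proportional to Red's strength)
#   dR/dt = -b * B   (Red's losses are proportional to Blue's strength)
# The coefficients r and b are the only place combat "quality" enters;
# terrain, posture, morale, logistics, and command are all absent.

def lanchester_square(blue, red, blue_kill_rate, red_kill_rate, dt=0.01):
    """Step the coupled equations forward until one side is annihilated."""
    while blue > 0 and red > 0:
        blue, red = (blue - red_kill_rate * red * dt,
                     red - blue_kill_rate * blue * dt)
    return max(blue, 0.0), max(red, 0.0)

# Equal per-unit effectiveness, 2-to-1 numerical advantage: the square law
# predicts the larger force wins with roughly sqrt(2000**2 - 1000**2), about
# 1,732 survivors, a deterministic outcome driven entirely by the starting
# numbers and two scalar coefficients.
print(lanchester_square(blue=2000, red=1000, blue_kill_rate=0.05, red_kill_rate=0.05))
```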

Others, such as the Joint Conflict And Tactical Simulation (JCATS), MAGTF Tactical Warfare System (MTWS), and Warfighters’ Simulation (WARSIM), adjudicate ground combat using probability of hit/probability of kill (pH/pK) algorithms. Corps Battle Simulation (CBS) uses pH/pK for direct fire attrition and a modified version of Lanchester for indirect fire. While these probabilities are developed from real-world weapon system proving ground data, their application in the models is combined with inputs from subjective sources, such as outputs from other combat models, which are likely not based on real-world data. Multiplying an empirically derived figure by a judgment-based coefficient yields a judgment-based estimate, which might be accurate or might not. No one really knows.
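The concern about mixing empirical and judgment-based inputs can be illustrated with a toy calculation. Everything below is hypothetical: the function, parameter names, and values are not drawn from JCATS, MTWS, WARSIM, CBS, or any other model.

```python
# Toy pH/pK adjudication chain. p_hit and p_kill_given_hit might come from
# instrumented proving-ground tests; posture_factor and suppression_factor
# stand in for the analyst- or model-supplied modifiers that get multiplied
# in alongside them.

def expected_kills(shots, p_hit, p_kill_given_hit, posture_factor, suppression_factor):
    """Expected kills per engagement from a simple multiplicative chain."""
    return shots * p_hit * p_kill_given_hit * posture_factor * suppression_factor

# 10 rounds fired; empirically derived probabilities; two subjective multipliers.
# The product inherits the weakest link: an empirical probability times a
# judgment-based coefficient is still a judgment-based estimate.
print(expected_kills(shots=10, p_hit=0.6, p_kill_given_hit=0.7,
                     posture_factor=0.5, suppression_factor=0.8))  # 1.68
```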

Potential Remedies

One way of assessing the accuracy of these models and simulations would be to test them against real-world combat data, which does exist. In theory, Defense Department models and simulations are supposed to be subjected to validation, verification, and accreditation, but in reality this is seldom, if ever, rigorously done. Combat modelers could also open the underlying theories and data behind their models and simulations for peer review.
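As an illustration of what even a simple test against historical data might look like, the sketch below compares hypothetical model predictions with (equally hypothetical) recorded casualties for a handful of engagements and summarizes the error. It is not drawn from any actual validation effort.

```python
# (engagement, model-predicted casualties, historical casualties) -- placeholder values
results = [
    ("Engagement A", 4200, 1500),
    ("Engagement B", 900, 1100),
    ("Engagement C", 2500, 800),
]

# Mean absolute percentage error: one simple summary of how far the model's
# predictions sit from the historical record.
errors = [abs(predicted - actual) / actual for _, predicted, actual in results]
mape = 100 * sum(errors) / len(errors)
print(f"Mean absolute percentage error across {len(results)} engagements: {mape:.0f}%")
```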

The problem is not confined to government-sponsored research and development. In his award-winning 2004 book examining the bases for victory and defeat in battle, Military Power: Explaining Victory and Defeat in Modern Battle, analyst Stephen Biddle noted that the study of military science had been neglected in the academic world as well. “[F]or at least a generation, the study of war’s conduct has fallen between the stools of the institutional structure of modern academia and government,” he wrote.

This state of affairs seems remarkable given the enormous stakes that are being placed on the output of the Defense Department’s modeling and simulation activities. After decades of neglect, remedying this would require a dedicated commitment to sustained basic research on the military science of combat and warfare, with no promise of a tangible short-term return on investment. Yet, as Biddle pointed out, “With so much at stake, we surely must do better.”

[NOTE: The attrition methodologies used in CBS and WARSIM have been corrected since this post was originally published per comments provided by their developers.]

TDI Friday Read: Multi-Domain Battle/Operations Doctrine

With the December 2018 update of the U.S. Army’s Multi-Domain Operations (MDO) concept, this seems like a good time to review the evolution of doctrinal thinking about it. We will start with the event that sparked the Army’s thinking about the subject: the 2014 rocket artillery barrage fired from Russian territory that devastated Ukrainian Army forces near the village of Zelenopillya. From there we will look at the evolution of Army thinking beginning with the initial draft of an operating concept for Multi-Domain Battle (MDB) in 2017. To conclude, we will re-up two articles expressing misgivings over the manner with which these doctrinal concepts are being developed, and the direction they are taking.

The Russian Artillery Strike That Spooked The U.S. Army

Army And Marine Corps Join Forces To Define Multi-Domain Battle Concept

Army/Marine Multi-Domain Battle White Paper Available

What Would An Army Optimized For Multi-Domain Battle Look Like?

Sketching Out Multi-Domain Battle Operational Doctrine

U.S. Army Updates Draft Multi-Domain Battle Operating Concept

U.S. Army Multi-Domain Operations Concept Continues Evolving

U.S. Army Doctrine and Future Warfare

 

U.S. Army Doctrine and Future Warfare

Pre-war U.S. Army warfighting doctrine led to fielding the M10, M18 and M36 tank destroyers to counter enemy tanks. Their relatively ineffective performance against German panzers in Europe during World War II has been seen as the result of flawed thinking about tank warfare. [Wikimedia]

Two recently published articles on current U.S. Army doctrine development and the future of warfare deserve to be widely read:

“An Army Caught in the Middle Between Luddites, Luminaries, and the Occasional Looney”

The first, by RAND’s David Johnson, is titled “An Army Caught in the Middle Between Luddites, Luminaries, and the Occasional Looney,” published by War on the Rocks.

Johnson begins with an interesting argument:

Contrary to what it says, the Army has always been a concepts-based, rather than a doctrine-based, institution. Concepts about future war generate the requirements for capabilities to realize them… Unfortunately, the Army’s doctrinal solutions evolve in war only after the failure of its concepts in its first battles, which the Army has historically lost since the Revolutionary War.

The reason the Army fails in its first battles is because its concepts are initially — until tested in combat — a statement of how the Army “wants to fight” and rarely an analytical assessment of how it “will have to fight.”

Starting with the Army’s failure to develop its own version of “blitzkrieg” after World War I, Johnson identified conservative organizational politics, misreading technological advances, and a stubborn refusal to account for the capabilities of potential adversaries as common causes for the inferior battlefield weapons and warfighting methods that contributed to its impressive string of lost “first battles.”

Conversely, Johnson credited the Army’s novel 1980s AirLand Battle doctrine as the product of an honest assessment of potential enemy capabilities and the development of effective weapon systems that were “based on known, proven technologies that minimized the risk of major program failures.”

“The principal lesson in all of this,” he concluded, “is that the U.S. military should have a clear problem that it is trying to solve to enable it to innovate, and it should realize that innovation is generally not invention.” There are “also important lessons from the U.S. Army’s renaissance in the 1970s, which also resulted in close cooperation between the Army and the Air Force to solve the shared problem of the defense of Western Europe against Soviet aggression that neither could solve independently.”

“The US Army is Wrong on Future War”

The other article, provocatively titled “The US Army is Wrong on Future War,” was published by West Point’s Modern War Institute. It was co-authored by Nathan Jennings, Amos Fox, and Adam Taliaferro, all graduates of the School of Advanced Military Studies, veterans of Iraq and Afghanistan, and currently serving U.S. Army officers.

They argue that

the US Army is mistakenly structuring for offensive clashes of mass and scale reminiscent of 1944 while competitors like Russia and China have adapted to twenty-first-century reality. This new paradigm—which favors fait accompli acquisitions, projection from sovereign sanctuary, and indirect proxy wars—combines incremental military actions with weaponized political, informational, and economic agendas under the protection of nuclear-fires complexes to advance territorial influence. The Army’s failure to conceptualize these features of the future battlefield is a dangerous mistake…

Instead, they assert that the current strategic and operational realities dictate a far different approach:

Failure to recognize the ascendancy of nuclear-based defense—with the consequent potential for only limited maneuver, as in the seventeenth century—incurs risk for expeditionary forces. Even as it idealizes Patton’s Third Army with ambiguous “multi-domain” cyber and space enhancements, the US Army’s fixation with massive counter-offensives to defeat unrealistic Russian and Chinese conquests of Europe and Asia misaligns priorities. Instead of preparing for past wars, the Army should embrace forward positional and proxy engagement within integrated political, economic, and informational strategies to seize and exploit initiative.

The factors they cite that necessitate the adoption of positional warfare include nuclear primacy; sanctuary of sovereignty; integrated fires complexes; limited fait accompli; indirect proxy wars; and political/economic warfare.

“Given these realities,” Jennings, Fox, and Taliaferro assert, “the US Army must adapt and evolve to dominate great-power confrontation in the nuclear age.” As such, they recommend that the U.S. (1) adopt “an approach more reminiscent of the US Army’s Active Defense doctrine of the 1970s than the vaunted AirLand Battle concept of the 1980s,” (2) “dramatically recalibrate its approach to proxy warfare,” and (3) compel “joint, interagency and multinational coordination in order to deliberately align economic, informational, and political agendas in support of military objectives.”

Future U.S. Army Doctrine: How It Wants to Fight or How It Has to Fight?

Readers will find much with which to agree or disagree in each article, but both provide viewpoints that should supply plenty of food for thought. Taken together, they take on a different context. The analysis put forth by Jennings, Fox, and Taliaferro can be read as fulfilling Johnson’s injunction to base doctrine on a sober assessment of the strategic and operational challenges presented by existing enemy capabilities, instead of on an aspirational concept for how the Army would prefer to fight a future war. Whether or not Jennings, et al, have accurately forecasted the future can be debated, but their critique should raise questions as to whether the Army is repeating the past doctrinal development errors identified by Johnson.

Comparing Force Ratios to Casualty Exchange Ratios

“American Marines in Belleau Wood (1918)” by Georges Scott [Wikipedia]

Comparing Force Ratios to Casualty Exchange Ratios
Christopher A. Lawrence

[The article below is reprinted from the Summer 2009 edition of The International TNDM Newsletter.]

There are three versions of rules relating force ratios to casualty exchange ratios, such as the three-to-one rule (3-to-1 rule) as it applies to casualties. The earliest version of the rule as it relates to casualties that we have been able to find comes from the 1958 version of the U.S. Army Maneuver Control manual, which states: “When opposing forces are in contact, casualties are assessed in inverse ratio to combat power. For friendly forces advancing with a combat power superiority of 5 to 1, losses to friendly forces will be about 1/5 of those suffered by the opposing force.”[1]

The RAND version of the rule (1992) states that: “the famous ‘3:1 rule,’ according to which the attacker and defender suffer equal fractional loss rates at a 3:1 force ratio if the battle is in mixed terrain and the defender enjoys ‘prepared’ defenses…” [2]

Finally, there is a version of the rule, dating from the 1967 Maneuver Control manual, that applies only to armor and shows:

As the RAND construct also applies to equipment losses, this formulation is directly comparable to it.

Therefore, we have three basic versions of the 3-to-1 rule as it applies to casualties and/or equipment losses. First, there is a rule that states that there is an even fractional loss ratio at 3-to-1 (the RAND version). Second, there is a rule that states that at 3-to-1, the attacker will suffer one-third the losses of the defender. And third, there is a rule that states that at 3-to-1, the attacker will suffer the same losses as the defender. These versions are highly contradictory: at 3-to-1 the attacker suffers either three times the losses of the defender, the same losses as the defender, or one-third the losses of the defender.
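The contradiction is easy to see if each version is written out as a simple function of the force ratio. The formulas below are plain readings of the three rules as described in this article (the armor version is interpolated from the 3-, 4-, and 5-to-1 points discussed later); they are not taken from the manuals themselves.

```python
# Casualty exchange ratio (attacker losses / defender losses) implied by each
# reading of the rule at a given force ratio (attackers / defenders).

def rand_version(force_ratio):
    # Equal fractional loss rates: an attacker three times as large loses
    # three times as many people in absolute terms.
    return force_ratio

def fm_1958_version(force_ratio):
    # Casualties assessed in inverse ratio to combat power: at 5-to-1 the
    # attacker loses about one-fifth as much as the defender.
    return 1.0 / force_ratio

def fm_1967_armor_version(force_ratio):
    # Interpolation of the armor rule's stated points: 1-to-1 at 3-to-1,
    # 1-to-2 at 4-to-1, 1-to-3 at 5-to-1.
    return 1.0 / (force_ratio - 2.0)

for fr in (3.0, 4.0, 5.0):
    print(fr, rand_version(fr), round(fm_1958_version(fr), 2),
          round(fm_1967_armor_version(fr), 2))
# At 3-to-1 the three readings give exchange ratios of 3.0, 0.33, and 1.0 --
# nearly an order of magnitude apart.
```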

Therefore, what we will examine here is the relationship between force ratios and exchange ratios. In this case, we will first look at The Dupuy Institute’s Battles Database (BaDB), which covers 243 battles from 1600 to 1900. We will chart on the y-axis the force ratio, as measured by a count of the number of people on each side of the forces deployed for battle. The force ratio is the number of attackers divided by the number of defenders. On the x-axis is the exchange ratio, which is measured by a count of the number of people on each side who were killed, wounded, missing, or captured during that battle. It does not include disease and non-battle injuries. Again, it is calculated by dividing the total attacker casualties by the total defender casualties. The results are provided below:

As can be seen, there are a few extreme outliers among these 243 data points. The most extreme, the Battle of Tippermuir (1 Sep 1644), in which a Royalist force under Montrose routed an attack by Scottish Covenanter militia, causing about 3,000 casualties to the Scots in exchange for a single (allegedly self-inflicted) casualty to the Royalists, was removed from the chart. This 3,000-to-1 loss ratio was deemed too great an outlier to be of value in the analysis.
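For readers who want to reproduce this kind of chart from their own engagement lists, the sketch below shows the calculation of the two ratios and the plotting step. The record layout and values are placeholders, not the actual BaDB.

```python
import matplotlib.pyplot as plt

# (attacker strength, defender strength, attacker casualties, defender casualties)
# -- placeholder rows, not battles from the database
battles = [
    (30000, 20000, 4000, 6000),
    (45000, 15000, 2500, 7000),
    (12000, 18000, 5000, 3000),
]

force_ratios = [att / dfn for att, dfn, _, _ in battles]
exchange_ratios = [att_cas / dfn_cas for _, _, att_cas, dfn_cas in battles]

# Force ratio on the y-axis, casualty exchange ratio on the x-axis,
# matching the layout described in the text.
plt.scatter(exchange_ratios, force_ratios)
plt.xlabel("Casualty exchange ratio (attacker casualties / defender casualties)")
plt.ylabel("Force ratio (attacker strength / defender strength)")
plt.title("Force ratio vs. casualty exchange ratio (placeholder data)")
plt.show()
```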

As it is, the vast majority of cases are clumped down into the corner of the graph, with only a few scattered data points outside of that clumping. If one did try to establish some form of curvilinear relationship, one would end up drawing a hyperbola. It is worthwhile to look inside that clump of data to see what it shows. Therefore, we will look at the graph truncated so as to show only force ratios at or below 20-to-1 and exchange ratios at or below 20-to-1.

Again, the data remains clustered in one corner, with the outlying data points again pointing to a hyperbola as the only real fitting curvilinear relationship. Let’s look a little deeper into the data by truncating it at 6-to-1 for both force ratios and exchange ratios. As can be seen, if the RAND version of the 3-to-1 rule is correct, then the data should show at a 3-to-1 force ratio a 3-to-1 casualty exchange ratio. There is only one data point that comes close to this out of the 243 points we examined.

If the FM 105-5 version of the rule as it applies to armor is correct, then the data should show that at a 3-to-1 force ratio there is a 1-to-1 casualty exchange ratio, at a 4-to-1 force ratio a 1-to-2 casualty exchange ratio, and at a 5-to-1 force ratio a 1-to-3 casualty exchange ratio. Of course, there is no armor in these pre-World War I engagements, but again, no such exchange pattern appears.

If the 1958 version of the FM 105-5 rule as it applies to casualties is correct, then the data should show that at a 3-to-1 force ratio there is a 0.33-to-1 casualty exchange ratio, at a 4-to-1 force ratio a 0.25-to-1 casualty exchange ratio, and at a 5-to-1 force ratio a 0.20-to-1 casualty exchange ratio. As can be seen, there is not much indication of this pattern, or for that matter, of any of the three patterns.

Still, such a construct may not be relevant to data from before 1900. For example, Lanchester claimed in 1914, in Chapter V, “The Principle of Concentration,” of his book Aircraft in Warfare, that there is greater advantage to be gained in modern warfare from concentration of fire.[3] Therefore, we will tap our more modern Division-Level Engagement Database (DLEDB) of 675 engagements, of which 628 have force ratios and exchange ratios calculated for them. These 628 cases are then placed on a scattergram to see if we can detect any similar patterns.

Even though this data covers the period from 1904 to 1991, with the vast majority of it coming from engagements after 1940, one again sees the same pattern as with the data from 1600-1900. If there is a curvilinear relationship, it is again a hyperbola. As before, it is useful to look into the mass of data clustered in the corner by truncating the force and exchange ratios at 20-to-1. This produces the following:

Again, one sees the data clustered in the corner, with any curvilinear relationship again being a hyperbola. A look at the data further truncated to a 10-to-1 force or exchange ratio does not yield anything more revealing.

And, if this data is truncated to show only 5-to-1 force ratio and exchange ratios, one again sees:

Again, this data appears to be mostly just noise, with no clear patterns here that support any of the three constructs. In the case of the RAND version of the 3-to-1 rule, there is again only one data point (out of 628) that is anywhere close to the crossover point (even fractional exchange rate) that RAND postulates. In fact, it almost looks like the data conspires to make sure it leaves a noticeable “hole” at that point. The other postulated versions of the 3-to-1 rules are also given no support in these charts.

Also of note is that the relationship between force ratios and exchange ratios does not appear to change significantly for combat during 1600-1900 when compared to combat from 1904-1991. This does not provide much support for the intellectual construct developed by Lanchester to argue for his N-square law.

While we can attempt to torture the data to find a better fit, or can try to argue that the patterns are obscured by various factors that have not been considered, we do not believe that such a clear pattern and relationship exists. More advanced mathematical methods may show such a pattern, but to date such attempts have not ferreted out these alleged patterns. For example, we refer the reader to Janice Fain’s article on Lanchester equations, The Dupuy Institute’s Capture Rate Study, Phase I & II, or any number of other studies that have looked at Lanchester.[4]

The fundamental problem is that there does not appear to be a direct cause and effect between force ratios and exchange ratios. It appears to be an indirect relationship, in the sense that force ratios are one of several independent variables that determine the outcome of an engagement, and the nature of that outcome helps determine the casualties. As such, there is a more complex set of interrelationships that has not yet been fully explored in any study that we know of, although it is briefly addressed in our Capture Rate Study, Phase I & II.

NOTES

[1] FM 105-5, Maneuver Control (1958), 80.

[2] Patrick Allen, “Situational Force Scoring: Accounting for Combined Arms Effects in Aggregate Combat Models,” (N-3423-NA, The RAND Corporation, Santa Monica, CA, 1992), 20.

[3] F. W. Lanchester, Aircraft in Warfare: The Dawn of the Fourth Arm (Lanchester Press Incorporated, Sunnyvale, Calif., 1995), 46-60. One notes that Lanchester provided no data to support these claims, but relied upon an intellectual argument based upon a gross misunderstanding of ancient warfare.

[4] In particular, see page 73 of Janice B. Fain, “The Lanchester Equations and Historical Warfare: An Analysis of Sixty World War II Land Engagements,” Combat Data Subscription Service (HERO, Arlington, Va., Spring 1975).

Trevor Dupuy and Technological Determinism in Digital Age Warfare

Is this the only innovation in weapons technology in history with the ability in itself to change warfare and alter the balance of power? Trevor Dupuy thought it might be. Shot IVY-MIKE, Eniwetok Atoll, 1 November 1952. [Wikimedia]

Trevor Dupuy was skeptical about the role of technology in determining outcomes in warfare. While he did believe technological innovation was crucial, he did not think that technology itself decided success or failure on the battlefield. As he wrote in a passage published posthumously in 1997,

I am a humanist, who is also convinced that technology is as important today in war as it ever was (and it has always been important), and that any national or military leader who neglects military technology does so to his peril and that of his country. But, paradoxically, perhaps to an extent even greater than ever before, the quality of military men is what wins wars and preserves nations. (emphasis added)

His conclusion was largely based upon his quantitative approach to studying military history, particularly the way humans have historically responded to the relentless trend of increasingly lethal military technology.

The Historical Relationship Between Weapon Lethality and Battle Casualty Rates

Based on a 1964 study for the U.S. Army, Dupuy identified a long-term historical relationship between increasing weapon lethality and decreasing average daily casualty rates in battle. (He summarized these findings in his book, The Evolution of Weapons and Warfare (1980). The quotes below are taken from it.)

Since antiquity, military technological development has produced weapons of ever increasing lethality. The rate of increase in lethality has grown particularly dramatically since the mid-19th century.

In contrast, however, the average daily casualty rate in combat has been in decline since 1600. With notable exceptions during the 19th century, casualty rates have continued to fall through the late 20th century. If technological innovation has produced vastly more lethal weapons, why have there been fewer average daily casualties in battle?

The primary cause, Dupuy concluded, was that humans have adapted to increasing weapon lethality by changing the way they fight. He identified three key tactical trends in the modern era that have influenced the relationship between lethality and casualties:

Technological Innovation and Organizational Assimilation

Dupuy noted that the historical correlation between weapons development and their use in combat has not been linear because the pace of integration has been largely determined by military leaders, not the rate of technological innovation. “The process of doctrinal assimilation of new weapons into compatible tactical and organizational systems has proved to be much more significant than invention of a weapon or adoption of a prototype, regardless of the dimensions of the advance in lethality.” [p. 337]

As a result, the history of warfare has been exemplified more often by a discontinuity between weapons and tactical systems than by effective continuity.

During most of military history there have been marked and observable imbalances between military efforts and military results, an imbalance particularly manifested by inconclusive battles and high combat casualties. More often than not this imbalance seems to be the result of incompatibility, or incongruence, between the weapons of warfare available and the means and/or tactics employing the weapons. [p. 341]

In short, military organizations typically have not been fully effective at exploiting new weapons technology to advantage on the battlefield. Truly decisive alignment between weapons and systems for their employment has been exceptionally rare. Dupuy asserted that

There have been six important tactical systems in military history in which weapons and tactics were in obvious congruence, and which were able to achieve decisive results at small casualty costs while inflicting disproportionate numbers of casualties. These systems were:

  • the Macedonian system of Alexander the Great, ca. 340 B.C.
  • the Roman system of Scipio and Flaminius, ca. 200 B.C.
  • the Mongol system of Genghis Khan, ca. A.D. 1200
  • the English system of Edward I, Edward III, and Henry V, ca. A.D. 1350
  • the French system of Napoleon, ca. A.D. 1800
  • the German blitzkrieg system, ca. A.D. 1940 [p. 341]

With one caveat, Dupuy could not identify any single weapon that had decisively changed warfare in and of itself without a corresponding human adaptation in its use on the battlefield.

Save for the recent significant exception of strategic nuclear weapons, there have been no historical instances in which new and lethal weapons have, of themselves, altered the conduct of war or the balance of power until they have been incorporated into a new tactical system exploiting their lethality and permitting their coordination with other weapons; the full significance of this one exception is not yet clear, since the changes it has caused in warfare and the influence it has exerted on international relations have yet to be tested in war.

Until the present time, the application of sound, imaginative thinking to the problem of warfare (on either an individual or an institutional basis) has been more significant than any new weapon; such thinking is necessary to real assimilation of weaponry; it can also alter the course of human affairs without new weapons. [p. 340]

Technological Superiority and Offset Strategies

Will new technologies like robotics and artificial intelligence provide the basis for a seventh tactical system where weapons and their use align with decisive battlefield results? Maybe. If Dupuy’s analysis is accurate, however, it is more likely that future increases in weapon lethality will continue to be counterbalanced by human ingenuity in how those weapons are used, yielding indeterminate—perhaps costly and indecisive—battlefield outcomes.

Genuinely effective congruence between weapons and force employment continues to be difficult to achieve. Dupuy believed the preconditions necessary for successful technological assimilation since the mid-19th century have been a combination of conducive military leadership; effective coordination of national economic, technological-scientific, and military resources; and the opportunity to evaluate and analyze battlefield experience.

Can the U.S. meet these preconditions? That certainly seemed to be the goal of the so-called Third Offset Strategy, articulated in 2014 by the Obama administration. It called for maintaining “U.S. military superiority over capable adversaries through the development of novel capabilities and concepts.” Although the Trump administration has stopped using the term, it has made “maximizing lethality” the cornerstone of the 2018 National Defense Strategy, with increased funding for the Defense Department’s modernization priorities in FY2019 (though perhaps not in FY2020).

Dupuy’s original work on weapon lethality in the 1960s coincided with development in the U.S. of what advocates of a “revolution in military affairs” (RMA) have termed the “First Offset Strategy,” which involved the potential use of nuclear weapons to balance Soviet superiority in manpower and materiel. RMA proponents pointed to the lopsided victory of the U.S. and its allies over Iraq in the 1991 Gulf War as proof of the success of a “Second Offset Strategy,” which exploited U.S. precision-guided munitions, stealth, and intelligence, surveillance, and reconnaissance systems developed to counter the Soviet Army in Germany in the 1980s. Dupuy was one of the few to attribute the decisiveness of the Gulf War both to airpower and to the superior effectiveness of U.S. combat forces.

Trevor Dupuy certainly was not an anti-technology Luddite. He recognized the importance of military technological advances and the need to invest in them. But he believed that the human element has always been more important on the battlefield. Most wars in history have been fought without a clear-cut technological advantage for one side; some have been bloody and pointless, while others have been decisive for reasons other than technology. While the future is certainly unknown and past performance is not a guarantor of future results, it would be a gamble to rely on technological superiority alone to provide the margin of success in future warfare.

Force Ratios in Conventional Combat

American soldiers of the 117th Infantry Regiment, Tennessee National Guard, part of the 30th Infantry Division, move past a destroyed American M5A1 “Stuart” tank on their march to recapture the town of St. Vith during the Battle of the Bulge, January 1945. [Wikipedia]
[This piece was originally posted on 16 May 2017.]

This post is a partial response to questions from one of our readers (Stilzkin). On the subject of force ratios in conventional combat….I know of no detailed discussion on the phenomenon published to date. It was clearly addressed by Clausewitz. For example:

At Leuthen Frederick the Great, with about 30,000 men, defeated 80,000 Austrians; at Rossbach he defeated 50,000 allies with 25,000 men. These however are the only examples of victories over an opponent two or even nearly three times as strong. Charles XII at the battle of Narva is not in the same category. The Russians at that time could hardly be considered as Europeans; moreover, we know too little about the main features of that battle. Bonaparte commanded 120,000 men at Dresden against 220,000—not quite half. At Kolin, Frederick the Great’s 30,000 men could not defeat 50,000 Austrians; similarly, victory eluded Bonaparte at the desperate battle of Leipzig, though with his 160,000 men against 280,000, his opponent was far from being twice as strong.

These examples may show that in modern Europe even the most talented general will find it very difficult to defeat an opponent twice his strength. When we observe that the skill of the greatest commanders may be counterbalanced by a two-to-one ratio in the fighting forces, we cannot doubt that superiority in numbers (it does not have to be more than double) will suffice to assure victory, however adverse the other circumstances.

and:

If we thus strip the engagement of all the variables arising from its purpose and circumstance, and disregard the fighting value of the troops involved (which is a given quantity), we are left with the bare concept of the engagement, a shapeless battle in which the only distinguishing factor is the number of troops on either side.

These numbers, therefore, will determine victory. It is, of course, evident from the mass of abstractions I have made to reach this point that superiority of numbers in a given engagement is only one of the factors that determines victory. Superior numbers, far from contributing everything, or even a substantial part, to victory, may actually be contributing very little, depending on the circumstances.

But superiority varies in degree. It can be two to one, or three or four to one, and so on; it can obviously reach the point where it is overwhelming.

In this sense superiority of numbers admittedly is the most important factor in the outcome of an engagement, as long as it is great enough to counterbalance all other contributing circumstances. It thus follows that as many troops as possible should be brought into the engagement at the decisive point.

And, in relation to making a combat model:

Numerical superiority was a material factor. It was chosen from all elements that make up victory because, by using combinations of time and space, it could be fitted into a mathematical system of laws. It was thought that all other factors could be ignored if they were assumed to be equal on both sides and thus cancelled one another out. That might have been acceptable as a temporary device for the study of the characteristics of this single factor; but to make the device permanent, to accept superiority of numbers as the one and only rule, and to reduce the whole secret of the art of war to a formula of numerical superiority at a certain time and a certain place was an oversimplification that would not have stood up for a moment against the realities of life.

Force ratios were discussed in various versions of FM 105-5 Maneuver Control, but as far as I can tell, this material was not analytically developed. It was a set of rules, pulled together by a group of anonymous writers for the sake of being able to adjudicate wargames.

The only detailed quantification of force ratios was provided in Numbers, Predictions and War by Trevor Dupuy. Again, these were modeling constructs, not something that was analytically developed (although there was significant background research done and the model was validated multiple times). He then discusses the subject in his book Understanding War, which I consider the most significant book of the 90+ that he wrote or co-authored.

The only analytically based discussion of force ratios that I am aware of (or at least can think of at this moment) is my discussion in my upcoming book War by Numbers: Understanding Conventional Combat. It is the second chapter of the book: https://dupuyinstitute.org/2016/02/17/war-by-numbers-iii/

In this book, I assembled the force ratios required to win a battle based upon a large number of cases from World War II division-level combat. For example (page 18 of the manuscript):

I did this for the ETO, for the battles of Kharkov and Kursk (Eastern Front 1943, divided according to whether the Germans or the Soviets were attacking), and for the PTO (Manila and Okinawa 1945).
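For those interested in the mechanics, a tabulation of this kind amounts to binning engagements by force ratio and counting attacker wins in each bin. The sketch below uses placeholder records and arbitrary bands, not the division-level data referenced above.

```python
from collections import defaultdict

# (force ratio, attacker won?) -- placeholder engagements, not real cases
engagements = [(1.2, False), (1.8, True), (2.4, True), (0.9, False), (3.1, True)]

# Force ratio bands; the upper bound of the last band is just a catch-all.
bands = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0), (3.0, 100.0)]
wins, totals = defaultdict(int), defaultdict(int)

for ratio, won in engagements:
    for lo, hi in bands:
        if lo <= ratio < hi:
            totals[(lo, hi)] += 1
            wins[(lo, hi)] += int(won)
            break

for lo, hi in bands:
    count = totals[(lo, hi)]
    if count:
        pct = 100 * wins[(lo, hi)] / count
        print(f"{lo:.1f}-to-1 up to {hi:.1f}-to-1: {pct:.0f}% attacker wins ({count} cases)")
```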

There is more that can be done on this, and we do have the data assembled to do it, but as always, I have not gotten around to it. This is why I am already considering a War by Numbers II, as I am thinking about all the subjects I did not cover in sufficient depth in my first book.

What Does Lethality Mean In Warfare?

In an insightful essay over at The Strategy Bridge, “Lethality: An Inquiry,” Marine Corps officer Olivia Gerard accomplishes one of the most important, yet most often overlooked, aspects of successfully thinking about and planning for war: questioning a basic assumption. She achieves this by posing a simple question: “What is lethality?”

Gerard notes that the current U.S. National Defense Strategy is predicated on lethality; as it states: “A more lethal, resilient, and rapidly innovating Joint Force, combined with a robust constellation of allies and partners, will sustain American influence and ensure favorable balances of power that safeguard the free and open international order.” She also identifies the linkage in the strategy between lethality and deterrence via a supporting statement from Deputy Secretary of Defense Patrick Shanahan: “Everything we do is geared toward one goal: maximizing lethality. A lethal force is the strongest deterrent to war.”

After pointing out that the strategy does not define the concept of lethality, Gerard responds to Shanahan’s statement by asking “why?”

She uses this as a jumping off point to examine the meaning of lethality in warfare. Starting from the traditional understanding of lethality as a tactical concept, Gerard walks through the way it has been understood historically. From this, she formulates a construct for understanding the relationship between lethality and strategy:

Organizational lethality emerges from tactical lethality that is institutionally codified. Tactical lethality is nested within organizational lethality, which is nested within strategic lethality. Plugging these terms into an implicit calculus, we can rewrite strategic lethality as the efficacy with which we can form intentional deadly relationships towards targets that can be actualized towards political ends.

To this, Gerard appends two interesting caveats: “Notice first that the organizational component becomes implicit. What remains outside, however, is the intention–a meta-intention–to form these potential deadly relationships in the first place.”

It is the second of these caveats—the intent to connect lethality to a strategic end—that informs Gerard’s conclusion. While the National Defense Strategy does not define the term, she observes that by explicitly leveraging the threat to use lethality to bolster deterrence, it supplies the necessary credibility needed to make deterrence viable. “Proclaiming lethality a core tenet, especially in a public strategic document, is the communication of the threat.”

Gerard’s exploration of lethality and her proposed framework for understanding it provide a very useful way of thinking about the way it relates to warfare. It is definitely worth your time to read.

What might be just as interesting, however, are the caveats to her construct because they encompass a lot of what is problematic about the way the U.S. military thinks—explicitly and implicitly—about tactical lethality and how it is codified into concepts of organizational lethality. (While I have touched on some of those already, Gerard gives more to reflect on. More on that later.)

Gerard also references the definition of lethality Trevor Dupuy developed for his 1964 study of historical trends in weapon lethality. While noting that his definition was too narrow for the purposes of her inquiry, the historical relationship between lethality, casualties, and dispersion on the battlefield Dupuy found in that study formed the basis for his subsequent theories of warfare and models of combat. (I will write more about those in the future as well.)

Artillery Effectiveness vs. Armor (Part 5-Summary)

U.S. Army 155mm field howitzer in Normandy. [padresteve.com]

[This series of posts is adapted from the article “Artillery Effectiveness vs. Armor,” by Richard C. Anderson, Jr., originally published in the June 1997 edition of the International TNDM Newsletter.]

Posts in the series
Artillery Effectiveness vs. Armor (Part 1)
Artillery Effectiveness vs. Armor (Part 2-Kursk)
Artillery Effectiveness vs. Armor (Part 3-Normandy)
Artillery Effectiveness vs. Armor (Part 4-Ardennes)
Artillery Effectiveness vs. Armor (Part 5-Summary)

Table IX shows the distribution of cause of loss by type of armored vehicle. From the distribution it might be inferred that better-protected armored vehicles may be less vulnerable to artillery attack. Nevertheless, the heavily armored vehicles still suffered a minimum loss of 5.6 percent due to artillery. Unfortunately, the sample size for heavy tanks was very small, 18 of 980 cases, or only 1.8 percent of the total.

The data are limited at this time to the seven cases.[6] Further research is necessary to expand the data sample so as to permit proper statistical analysis of the effectiveness of artillery versus tanks.

NOTES

[18] Heavy armor includes the KV-1, KV-2, Tiger, and Tiger II.

[19] Medium armor includes the T-34, Grant, Panther, and Panzer IV.

[20] Light armor includes the T-60, T-70, Stuart, armored cars, and armored personnel carriers.

Artillery Effectiveness vs. Armor (Part 4-Ardennes)

Knocked-out Panthers in Krinkelt, Belgium, Battle of the Bulge, 17 December 1944. [worldwarphotos.info]

[This series of posts is adapted from the article “Artillery Effectiveness vs. Armor,” by Richard C. Anderson, Jr., originally published in the June 1997 edition of the International TNDM Newsletter.]

Posts in the series
Artillery Effectiveness vs. Armor (Part 1)
Artillery Effectiveness vs. Armor (Part 2-Kursk)
Artillery Effectiveness vs. Armor (Part 3-Normandy)
Artillery Effectiveness vs. Armor (Part 4-Ardennes)
Artillery Effectiveness vs. Armor (Part 5-Summary)

NOTES

[14] From ORS Joint Report No. 1. An estimated total of 300 German armored vehicles were found following the battle.

[15] Data from 38th Infantry After Action Report (including “Sketch showing enemy vehicles destroyed by 38th Inf Regt. and attached units 17-20 Dec. 1944″), from 12th SS PzD strength report dated 8 December 1944, and from strengths indicated on the OKW briefing maps for 17 December (1st [circa 0600 hours], 2d [circa 1200 hours], and 3d [circa 1800 hours] situation), 18 December (1st and 2d situation), 19 December (2d situation), 20 December (3d situation), and 21 December (2d and 3d situation).

[16] Losses include confirmed and probable losses.

[17] Data from Combat Interview “26th Infantry Regiment at Dom Bütgenbach” and from 12th SS PzD, ibid.

Artillery Effectiveness vs. Armor (Part 3-Normandy)

The U.S. Army 333rd Field Artillery Battalion (Colored) in Normandy, July 1944 (US Army Photo/Tom Gregg)

[This series of posts is adapted from the article “Artillery Effectiveness vs. Armor,” by Richard C. Anderson, Jr., originally published in the June 1997 edition of the International TNDM Newsletter.]

Posts in the series
Artillery Effectiveness vs. Armor (Part 1)
Artillery Effectiveness vs. Armor (Part 2-Kursk)
Artillery Effectiveness vs. Armor (Part 3-Normandy)
Artillery Effectiveness vs. Armor (Part 4-Ardennes)
Artillery Effectiveness vs. Armor (Part 5-Summary)

NOTES

[10] From ORS Report No. 17.

[11] Five of the 13 counted as unknown were penetrated by both armor piercing shot and by infantry hollow charge weapons. There was no evidence to indicate which was the original cause of the loss.

[12] From ORS Report No. 17

[13] From ORS Report No. 15. The “Pocket” was the area west of the line Falaise-Argentan and east of the line Vassy-Gets-Domfront in Normandy that was the site in August 1944 of the beginning of the German retreat from France. The German forces were being enveloped from the north and south by Allied ground forces and were under constant, heavy air attack.