
Trevor Dupuy and Technological Determinism in Digital Age Warfare

Is this the only innovation in weapons technology in history with the ability in itself to change warfare and alter the balance of power? Trevor Dupuy thought it might be. Shot IVY-MIKE, Eniwetok Atoll, 1 November 1952. [Wikimedia]

Trevor Dupuy was skeptical about the role of technology in determining outcomes in warfare. While he did believe technological innovation was crucial, he did not think that technology by itself decided success or failure on the battlefield. As he wrote in an article published posthumously in 1997,

I am a humanist, who is also convinced that technology is as important today in war as it ever was (and it has always been important), and that any national or military leader who neglects military technology does so to his peril and that of his country. But, paradoxically, perhaps to an extent even greater than ever before, the quality of military men is what wins wars and preserves nations. (emphasis added)

His conclusion was largely based upon his quantitative approach to studying military history, particularly the way humans have historically responded to the relentless trend of increasingly lethal military technology.

The Historical Relationship Between Weapon Lethality and Battle Casualty Rates

Drawing on a 1964 study for the U.S. Army, Dupuy identified a long-term historical relationship between increasing weapon lethality and decreasing average daily casualty rates in battle. (He summarized these findings in his book, The Evolution of Weapons and Warfare (1980). The quotes below are taken from it.)

Since antiquity, military technological development has produced weapons of ever increasing lethality. The rate of increase in lethality has grown particularly dramatically since the mid-19th century.

In contrast, however, the average daily casualty rate in combat has been in decline since 1600. With notable exceptions during the 19th century, casualty rates have continued to fall through the late 20th century. If technological innovation has produced vastly more lethal weapons, why have there been fewer average daily casualties in battle?

The primary cause, Dupuy concluded, was that humans have adapted to increasing weapon lethality by changing the way they fight. He identified three key tactical trends in the modern era that have influenced the relationship between lethality and casualties.

Technological Innovation and Organizational Assimilation

Dupuy noted that the historical correlation between weapons development and their use in combat has not been linear because the pace of integration has been largely determined by military leaders, not the rate of technological innovation. “The process of doctrinal assimilation of new weapons into compatible tactical and organizational systems has proved to be much more significant than invention of a weapon or adoption of a prototype, regardless of the dimensions of the advance in lethality.” [p. 337]

As a result, the history of warfare has been marked more often by discontinuity between weapons and tactical systems than by effective congruence.

During most of military history there have been marked and observable imbalances between military efforts and military results, an imbalance particularly manifested by inconclusive battles and high combat casualties. More often than not this imbalance seems to be the result of incompatibility, or incongruence, between the weapons of warfare available and the means and/or tactics employing the weapons. [p. 341]

In short, military organizations typically have not been fully effective at exploiting new weapons technology to advantage on the battlefield. Truly decisive alignment between weapons and systems for their employment has been exceptionally rare. Dupuy asserted that

There have been six important tactical systems in military history in which weapons and tactics were in obvious congruence, and which were able to achieve decisive results at small casualty costs while inflicting disproportionate numbers of casualties. These systems were:

  • the Macedonian system of Alexander the Great, ca. 340 B.C.
  • the Roman system of Scipio and Flaminius, ca. 200 B.C.
  • the Mongol system of Genghis Khan, ca. A.D. 1200
  • the English system of Edward I, Edward III, and Henry V, ca. A.D. 1350
  • the French system of Napoleon, ca. A.D. 1800
  • the German blitzkrieg system, ca. A.D. 1940 [p. 341]

With one caveat, Dupuy could not identify any single weapon that had decisively changed warfare in and of itself, without a corresponding human adaptation in its use on the battlefield.

Save for the recent significant exception of strategic nuclear weapons, there have been no historical instances in which new and lethal weapons have, of themselves, altered the conduct of war or the balance of power until they have been incorporated into a new tactical system exploiting their lethality and permitting their coordination with other weapons; the full significance of this one exception is not yet clear, since the changes it has caused in warfare and the influence it has exerted on international relations have yet to be tested in war.

Until the present time, the application of sound, imaginative thinking to the problem of warfare (on either an individual or an institutional basis) has been more significant than any new weapon; such thinking is necessary to real assimilation of weaponry; it can also alter the course of human affairs without new weapons. [p. 340]

Technological Superiority and Offset Strategies

Will new technologies like robotics and artificial intelligence provide the basis for a seventh tactical system where weapons and their use align with decisive battlefield results? Maybe. If Dupuy’s analysis is accurate, however, it is more likely that future increases in weapon lethality will continue to be counterbalanced by human ingenuity in how those weapons are used, yielding indeterminate—perhaps costly and indecisive—battlefield outcomes.

Genuinely effective congruence between weapons and force employment continues to be difficult to achieve. Dupuy believed the preconditions necessary for successful technological assimilation since the mid-19th century have been a combination of conducive military leadership; effective coordination of national economic, technological-scientific, and military resources; and the opportunity to evaluate and analyze battlefield experience.

Can the U.S. meet these preconditions? That certainly seemed to be the goal of the so-called Third Offset Strategy, articulated in 2014 by the Obama administration. It called for maintaining “U.S. military superiority over capable adversaries through the development of novel capabilities and concepts.” Although the Trump administration has stopped using the term, it has made “maximizing lethality” the cornerstone of the 2018 National Defense Strategy, with increased funding for the Defense Department’s modernization priorities in FY2019 (though perhaps not in FY2020).

Dupuy’s original work on weapon lethality in the 1960s coincided with development in the U.S. of what advocates of a “revolution in military affairs” (RMA) have termed the “First Offset Strategy,” which involved the potential use of nuclear weapons to balance Soviet superiority in manpower and matériel. RMA proponents pointed to the lopsided victory of the U.S. and its allies over Iraq in the 1991 Gulf War as proof of the success of a “Second Offset Strategy,” which exploited U.S. precision-guided munitions, stealth, and intelligence, surveillance, and reconnaissance systems developed to counter the Soviet Army in Germany in the 1980s. Dupuy was one of the few to attribute the decisiveness of the Gulf War both to airpower and to the superior effectiveness of U.S. combat forces.

Trevor Dupuy certainly was not an anti-technology Luddite. He recognized the importance of military technological advances and the need to invest in them. But he believed that the human element has always been more important on the battlefield. Most wars in history have been fought without a clear-cut technological advantage for one side; some have been bloody and pointless, while others have been decisive for reasons other than technology. While the future is certainly unknown and past performance is not a guarantor of future results, it would be a gamble to rely on technological superiority alone to provide the margin of success in future warfare.

Are There Only Three Ways of Assessing Military Power?

[This article was originally posted on 11 October 2016]

In 2004, military analyst and academic Stephen Biddle published Military Power: Explaining Victory and Defeat in Modern Battle, a book that addressed the fundamental question of what causes victory and defeat in battle. Biddle took to task the study of the conduct of war, which he asserted was based on “a weak foundation” of empirical knowledge. He surveyed the existing literature on the topic and determined that the plethora of theories of military success or failure fell into one of three analytical categories: numerical preponderance, technological superiority, or force employment.

Numerical preponderance theories explain victory or defeat in terms of material advantage, with the winners possessing greater numbers of troops, populations, economic production, or financial expenditures. Many of these involve gross comparisons of numbers, but some of the more sophisticated analyses involve calculations of force density, force-to-space ratios, or measurements of quality-adjusted “combat power.” Notions of threshold “rules of thumb,” such as the 3-1 rule, arise from this. These sorts of measurements form the basis for many theories of power in the study of international relations.
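
As a toy illustration of how such preponderance metrics get operationalized (the numbers and quality multipliers below are hypothetical, not Biddle’s or anyone’s validated model), a quality-adjusted force ratio and a 3-1 threshold check might look like this:

```python
def combat_power(troops: float, quality: float) -> float:
    """Quality-adjusted combat power: raw troop numbers scaled by an
    assumed effectiveness multiplier (the hard part is estimating it)."""
    return troops * quality

def meets_three_to_one(attacker_power: float, defender_power: float) -> bool:
    """The 3-1 rule of thumb: an attacker is conventionally said to need
    at least 3:1 local superiority for good odds of success."""
    return attacker_power >= 3.0 * defender_power

# Example: 100,000 average attackers vs. 40,000 defenders credited with a
# 1.5x effectiveness edge -> a 100,000 : 60,000 ratio, short of 3:1.
print(meets_three_to_one(combat_power(100_000, 1.0),
                         combat_power(40_000, 1.5)))  # False
```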

The next most influential means of assessment, according to Biddle, involves views on the primacy of technology. One school, systemic technology theory, looks at how technological advances shift balances within the international system. The best example is how the introduction of the machine gun in the late 19th century shifted the advantage in combat to the defender, and the development of the tank in the early 20th century shifted it back to the attacker. Such measures are influential in international relations and political science scholarship.

The other school of technological determinism is dyadic technology theory, which looks at relative advantages between states regardless of posture. This usually involves detailed comparisons of specific weapon systems (tanks, aircraft, infantry weapons, ships, missiles, etc.), with the edge going to the more sophisticated and capable technology. The use of Lanchester theory in operations research and combat modeling is rooted in this thinking.
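
For readers unfamiliar with it, Lanchester’s “square law” models aimed-fire attrition as a pair of coupled differential equations. Here is a minimal sketch with illustrative coefficients, not a validated combat model:

```python
def lanchester_square(a: float, b: float, alpha: float, beta: float,
                      dt: float = 0.001):
    """Lanchester square law: each side's loss rate is proportional to
    the NUMBER of enemy shooters (aimed fire):
        dA/dt = -beta * B,    dB/dt = -alpha * A
    alpha and beta are per-shooter effectiveness coefficients.
    Returns (survivors_a, survivors_b) once one side is annihilated."""
    while a > 0 and b > 0:
        a, b = a - beta * b * dt, b - alpha * a * dt  # Euler step
    return max(a, 0.0), max(b, 0.0)

# With equal effectiveness, a 2:1 numerical edge is devastating: the
# invariant alpha*A^2 - beta*B^2 predicts sqrt(200^2 - 100^2) ~= 173
# survivors for the larger side, which the simulation confirms.
print(lanchester_square(200, 100, 1.0, 1.0))
```

The square-law dynamic, in which numerical superiority pays off quadratically, is exactly the kind of intuitively plausible claim that, as Biddle notes below, has resisted empirical confirmation.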

Biddle identified the third category of assessment as subjective evaluations of force employment based on non-material factors, including tactics, doctrine, skill, experience, morale, and leadership. Analyses along these lines are the stock-in-trade of military staff work, military historians, and strategic studies scholars. However, international relations theorists largely ignore force employment, and operations research combat modelers tend to treat it as a constant or omit it because they believe its effects cannot be measured.

The common weakness of all of these approaches, Biddle argued, is that “there are differing views, each intuitively plausible but none of which can be considered empirically proven.” For example, no one has yet been able to find empirical support substantiating the validity of the 3-1 rule or Lanchester theory. Biddle notes that the track record for predictions based on force employment analyses has also been “poor.” (To be fair, the problem of testing theory to see if it applies to the real world is not limited to assessments of military power; it afflicts security and strategic studies generally.)

So, is Biddle correct? Are there only three ways to assess military outcomes? Are they valid? Can we do better?

Artificial Intelligence (AI) And Warfare

Arnold Schwarzenegger and friend. [Image Credit Jordan Strauss/Invision/AP/File]

Humans are a competitive lot. With machines making such rapid progress (see Moore’s Law), some believe the singularity approaches; see the discussion between Michio Kaku and Ray Kurzweil, two prominent futurologists. The singularity is the “hypothesis that the invention of artificial super intelligence (ASI) will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization” (Wikipedia). This has also been referred to as general artificial intelligence (GAI) by The Economist, and was previously discussed in this blog.

We humans also exhibit a tendency to anthropomorphize, to endow observed objects with human qualities. The image above shows Arnold Schwarzenegger sizing up his robotic doppelgänger. This tendency is further evidenced by statements made about the ability of military networks to spontaneously become self-aware:

The idea behind the Terminator films – specifically, that a Skynet-style military network becomes self-aware, sees humans as the enemy, and attacks – isn’t too far-fetched, one of the nation’s top military officers said this week. Nor is that kind of autonomy the stuff of the distant future. ‘We’re a decade or so away from that capability,’ said Gen. Paul Selva, vice chairman of the Joint Chiefs of Staff.

This exhibits a fundamental fear, and I believe a misconception, about the capabilities of these technologies. This is exemplified by Jay Tuck’s TED talk, “Artificial Intelligence: it will kill us.” His examples of AI in use today include airline and hotel revenue management, aircraft autopilots, and medical imaging. He also holds up the MQ-9 Reaper’s Argus (aka Gorgon Stare) imaging system, as well as the previously discussed X-47B Pegasus, as examples of modern AI and the pinnacle of capability. Among several claims, he states that the X-47B has an optical stealth capability, which is inaccurate:

[X-47B], a descendant of an earlier killer drone with its roots in the late 1990s, is possibly the least stealthy of the competitors, owing to Northrop’s decision to build the drone big, thick and tough. Those qualities help it survive forceful carrier landings, but also make it a big target for enemy radars. Navy Capt. Jamie Engdahl, manager of the drone test program, described it as ‘low-observable relevant,’ a careful choice of words copping to the X-47B’s relative lack of stealth. (Emphasis added).

Such inaccuracies undermine the credibility of these claims. I believe that this is little more than modern fear mongering, playing on ignorance. But Mr. Tuck is not alone. From the forefront of technology, Elon Musk is often held up as an example of commercial success in the field of AI, and he recently addressed the National Governors Association meeting on this topic, specifically on the need for regulation in the commercial sphere.

On the artificial intelligence [AI] front, I have exposure to the most cutting edge AI, and I think people should be really concerned about it. … AI is a rare case, I think we should be proactive in terms of regulation, rather than reactive about it. Because by the time we are reactive about it, it’s too late. … AI is a fundamental risk to human civilization, in a way that car crashes, airplane crashes, faulty drugs or bad food were not. … In space, we get regulated by the FAA. But you know, if you ask the average person, ‘Do you want to get rid of the FAA? Do you want to take a chance on manufacturers not cutting corners on aircraft because profits were down that quarter? Hell no, that sounds terrible.’ Because robots will be able to do everything better than us, and I mean all of us. … We have companies that are racing to build AI; they have to race, otherwise they are going to be made uncompetitive. … When the regulators are convinced it is safe, then we can go, but otherwise, slow down.  [Emphasis added]

Mr. Musk also hinted at American exceptionalism: “America is the distillation of the human spirit of exploration.” Indeed, the link between military technology and commercial applications is an ongoing virtuous cycle. But the kind of regulation that exists in the commercial sphere, under the national, subnational, and local governments of humankind, does not apply so easily in the field of warfare, where no single authority exists. Any agreement to limit technology must be consensus-based, such as a treaty.

The husky was mistakenly classified as a wolf because the classifier learned to use snow as a feature. [Machine Master blog]

In a recent TEDx talk, Peter Haas describes his work in AI and some of the challenges that exist within the state of the art of this technology. As illustrated above, when asked to distinguish between a wolf and a dog, the machine classified the husky in the photo as a wolf. The humans developing the AI system did not know why this happened, so they asked the system to show the regions of the image it used to make the decision; the result is depicted on the right side of the image. The fact that this dog was photographed with snow in the background is a form of bias: the presence of snow in a photo is no conclusive proof that a particular animal is a dog or a wolf.
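
For the curious, the kind of explanation shown in the image can be produced with perturbation methods in the spirit of LIME, the technique used in the original wolf-husky study. The occlusion-based sketch below is a simplified stand-in, where `model` is a hypothetical classifier returning the probability that the image contains a wolf:

```python
import numpy as np

def occlusion_saliency(model, image, patch: int = 16):
    """Mask square patches of the image and record how much the 'wolf'
    score drops; patches whose removal changes the score most are what
    the classifier is actually relying on. `image` is an HxWx3 array."""
    h, w, _ = image.shape
    base_score = model(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0  # black out one patch
            heat[i // patch, j // patch] = base_score - model(masked)
    return heat  # high values mark regions driving the 'wolf' decision
```

If the hot regions fall on the snowy background rather than on the animal, the model has learned “snow,” not “wolf.”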

Right now there are people – doctors, judges, accountants – who are getting information from an AI system and treating it like it was information from a trusted colleague. It is this trust that bothers me. Not because of how often AI gets it wrong; AI researchers pride themselves on the accuracy of results. It is how badly it gets it wrong when it makes a mistake that has me worried. These systems do not fail gracefully.

AI systems clearly have drawbacks, but they also have significant advantages, such as in curating a shared model of the battlefield.

In a paper for the Royal Institute of International Affairs in London, Mary Cummings of Duke University says that an autonomous system perceives the world through its sensors and reconstructs it to give its computer ‘brain’ a model of the world which it can use to make decisions. The key to effective autonomous systems is ‘the fidelity of the world model and the timeliness of its updates.’ [Emphasis added]
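
A toy sketch of that idea (hypothetical types and thresholds, not any fielded system’s design): a world model that records when each sensor report arrived and refuses to reason from tracks whose timeliness has lapsed.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Track:
    position: tuple      # e.g., (lat, lon)
    timestamp: float     # when the sensor report arrived

@dataclass
class WorldModel:
    """Decisions are only as good as the fidelity and timeliness of the
    model they are made against; tracks older than max_age are stale."""
    max_age: float = 5.0  # seconds; illustrative threshold
    tracks: dict = field(default_factory=dict)

    def update(self, track_id: str, position: tuple) -> None:
        self.tracks[track_id] = Track(position, time.time())

    def fresh_tracks(self) -> dict:
        """Only tracks recent enough to be trusted for decision-making."""
        now = time.time()
        return {tid: t for tid, t in self.tracks.items()
                if now - t.timestamp <= self.max_age}
```

A decision made against `fresh_tracks()` degrades gracefully as updates lapse, rather than acting confidently on a stale picture.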

Perhaps AI systems might best be employed in the cyber domain, where their advantages are naturally “at home”? Mr. Haas noted that machines currently have a tough time doing simple tasks, like opening a door. As was covered in this blog, former Deputy Defense Secretary Robert Work noted this same problem, and thus called for man-machine teaming as one of the key areas of pursuit within the Third Offset Strategy.

Just as the previous blog post illustrates, “the quality of military men is what wins wars and preserves nations.” Let’s remember Paul Van Riper’s performance in Millennium Challenge 2002:

Red, commanded by retired Marine Corps Lieutenant General Paul K. Van Riper, adopted an asymmetric strategy, in particular, using old methods to evade Blue’s sophisticated electronic surveillance network. Van Riper used motorcycle messengers to transmit orders to front-line troops and World-War-II-style light signals to launch airplanes without radio communications. Red received an ultimatum from Blue, essentially a surrender document, demanding a response within 24 hours. Thus warned of Blue’s approach, Red used a fleet of small boats to determine the position of Blue’s fleet by the second day of the exercise. In a preemptive strike, Red launched a massive salvo of cruise missiles that overwhelmed the Blue forces’ electronic sensors and destroyed sixteen warships.

We should learn lessons about overreliance on technology. AI systems are incredibly fickle, but they offer incredible capabilities. We should question and inspect the results such systems produce. They do not exhibit emotions; they are not self-aware; they do not spontaneously ask questions unless specifically programmed to do so. We should recognize their significant limitations and use them in conjunction with humans, who will retain command decisions for the foreseeable future.

Robert Work On Recent Chinese Advances In A2/AD Technology

An image of a hypersonic glider-like object broadcast by Chinese state media in October 2017. No known images of the DF-17’s hypersonic glide vehicle exist in the public domain. [CCTV screen capture via East Pendulum/The Diplomat]

Robert Work, former Deputy Secretary of Defense and one of the architects of the Third Offset Strategy, has a very interesting article up over at Task & Purpose detailing the origins of the People’s Republic of China’s (PRC) anti-access/area denial (A2/AD) strategy and the development of military technology to enable it.

According to Work, the PRC government was humiliated by the impunity with which the U.S. was able to sail its aircraft carrier task forces through the waters between China and Taiwan during the Third Taiwan Strait Crisis in 1995-1996. Soon after, the PRC began a process of military modernization that remains in progress. Part of that modernization included technological development along three main “complementary lines of effort.”

  • The objective of the first line of effort was to obtain rough parity with the U.S. in “battle network-guided munitions warfare in the Western Pacific.” This included detailed study of U.S. performance in the 1990-1991 Gulf War and development of a Chinese version of a battle network that features ballistic and guided missiles.
  • The second line of effort resulted in a sophisticated capability to attack U.S. networked military capabilities through “a blend of cyber, electronic warfare, and deception operations.”
  • The third line of effort produced specialized “assassin’s mace” capabilities for attacking specific weapons systems used for projecting U.S. military power overseas, such as aircraft carriers.

Work asserts that “These three lines of effort now enable contemporary Chinese battle networks to contest the U.S. military in every operating domain: sea, air, land, space, and cyberspace.”

He goes on to describe a fourth technological line of effort: the fielding of hypersonic glide vehicles (HGVs). HGVs are winged re-entry vehicles boosted aloft by ballistic missiles. Moving at hypersonic speeds at near-space altitudes (below 100 kilometers) yet maneuverable, HGVs carrying warheads would be exceptionally difficult to intercept even if the U.S. fielded ballistic missile defense systems capable of engaging such targets (which it currently does not). The Chinese have already deployed HGVs on Dong Feng (DF) 17 intermediate-range ballistic missiles, and late last year began operational testing of the DF-21 possessing intercontinental range.
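
Some rough, back-of-the-envelope arithmetic (illustrative numbers only, not sourced performance data) shows why the intercept problem is so hard:

```python
# An HGV travelling at roughly Mach 10 (~3.4 km/s, using the sea-level
# speed of sound as a crude conversion) crosses 1,000 km in about five
# minutes, and unlike a ballistic warhead it can maneuver, so a defender
# cannot precompute an intercept point from the observed trajectory.
SPEED_OF_SOUND = 343.0            # m/s, sea-level approximation
speed = 10 * SPEED_OF_SOUND       # ~3,430 m/s
distance = 1_000_000              # meters (1,000 km)
print(distance / speed / 60)      # ~4.9 minutes of decision time
```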

Work concludes with a stark admonition: “An energetic and robust U.S. response to HGVs is required, including the development of new defenses and offensive hypersonic weapons of our own.”

Strachan On The Changing Character Of War

The Cove, the professional development site for the Australian Army, has posted a link to a 2011 lecture by Professor Sir Hew Strachan. Strachan, a Professor of International Relations at the University of St Andrews in Scotland, is one of the more perceptive and trenchant observers of recent trends in strategy, war, and warfare from a historian’s perspective. I highly recommend his recent book, The Direction of War.

Strachan’s lecture, “The Changing Character of War,” proceeds from Carl von Clausewitz’s discussions in On War on change and continuity in the history of war to look at the trajectories of recent conflicts. Among the topics Strachan’s lecture covers are technological determinism, the irregular conflicts of the early 21st century, political and social mobilization, the spectrum of conflict, the impact of the Second World War on contemporary theorizing about war and warfare, and deterrence.

This is well worth the time to listen to and think about.