When we last checked in with the U.S. Army’s Mobile Protected Firepower (MPF) program—an effort to quickly field a new light tank, a lightweight armored vehicle with a long-range direct fire capability—Requests for Proposals (RFPs) were expected by November 2017 and the first sample vehicles by April 2018. It now appears the first MPF prototypes will not be delivered before mid-2020 at the earliest.
According to a recent report by Kris Osborn on Warrior Maven, “The service expects to award two Engineering Manufacturing and Development (EMD) deals by 2019 as part of an initial step to building prototypes from multiple vendors, service officials said. Army statement said initial prototypes are expected within 14 months of a contract award.”
Part of the delay appears to stem from uncertainty about requirements. As Osborn reported, “For the Army, the [MPF] effort involves what could be described as a dual-pronged acquisition strategy in that it seeks to leverage currently available or fast emerging technology while engineering the vehicle with an architecture such that it can integrate new weapons and systems as they emerge over time.”
Among the technologies the Army will seek to integrate into the MPF are a lightweight, heavy caliber main gun, lightweight armor composites, active protection systems, a new generation of higher-resolution targeting sensors, greater computer automation, and artificial intelligence.
Osborn noted that
the Army’s Communications Electronics Research, Development and Engineering Center (CERDEC) is already building prototype sensors – with this in mind. In particular, this early work is part of a longer-range effort to inform the Army’s emerging Next-Generation Combat Vehicle (NGCV). The NGCV, expected to become an entire fleet of armored vehicles, is now being explored as something to emerge in the late 2020s or early 2030s.
These evolving requirements are already impacting the Army’s approach to fielding MPF. It originally intended to “do acquisition differently to deliver capability quickly.” MPF program director Major General David Bassett declared in October 2017, “We expect to be delivering prototypes off of that program effort within 15 months of contract award…and getting it in the hands of an evaluation unit six months after that — rapid!”
It is now clear the Army won’t be meeting that schedule after all. Stay tuned.
The U.S. National Academies of Sciences, Engineering, and Medicine has issued a new report emphasizing the need to develop countermeasures against multiple small unmanned aircraft systems (sUASs) — organized in coordinated groups, swarms, and collaborative groups — which could be employed much sooner than the U.S. Army anticipates. [There is a summary here.]
National Defense University’s Frank Hoffman has a very good piece in the current edition of Parameters, “Will War’s Nature Change in the Seventh Military Revolution?,” that explores the potential implications of the combinations of robotics, artificial intelligence, and deep learning systems on the character and nature of war.
Humans are a competitive lot. With machines making such rapid progress (see Moore’s Law), the singularity approaches—see the discussion between Michio Kaku and Ray Kurzweil, two prominent futurologists. This is the “hypothesis that the invention of artificial super intelligence (ASI) will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.” (Wikipedia). The Economist has referred to this as general artificial intelligence (GAI), a topic previously discussed on this blog.
We humans also exhibit a tendency to anthropomorphize, or to endow any observed object with human qualities. The image above illustrates Arnold Schwarzenegger sizing up his robotic doppelgänger. This is further evidenced by statements made about the ability of military networks to spontaneously become self-aware:
The idea behind the Terminator films – specifically, that a Skynet-style military network becomes self-aware, sees humans as the enemy, and attacks – isn’t too far-fetched, one of the nation’s top military officers said this week. Nor is that kind of autonomy the stuff of the distant future. ‘We’re a decade or so away from that capability,’ said Gen. Paul Selva, vice chairman of the Joint Chiefs of Staff.
This exhibits a fundamental fear, and I believe a misconception, about the capabilities of these technologies. This is exemplified by Jay Tuck’s TED talk, “Artificial Intelligence: it will kill us.” His examples of AI in use today include airline and hotel revenue management, aircraft autopilot, and medical imaging. He also holds up the MQ-9 Reaper’s Argus (aka Gorgon Stare) imaging system, as well as the previously discussed X-47B Pegasus, as examples of modern AI and the pinnacle of capability. Among several claims, he states that the X-47B has an optical stealth capability, which is inaccurate:
[X-47B], a descendant of an earlier killer drone with its roots in the late 1990s, is possibly the least stealthy of the competitors, owing to Northrop’s decision to build the drone big, thick and tough. Those qualities help it survive forceful carrier landings, but also make it a big target for enemy radars. Navy Capt. Jamie Engdahl, manager of the drone test program, described it as ‘low-observable relevant,’ a careful choice of words copping to the X-47B’s relative lack of stealth. (Emphasis added).
Such errors undermine the veracity of these claims. I believe this is little more than modern fear-mongering, playing on ignorance. But Mr. Tuck is not alone. From the forefront of technology, Elon Musk is often held up as an example of commercial success in the field of AI, and he recently addressed the National Governors Association meeting on this topic, specifically on the need for regulation in the commercial sphere.
On the artificial intelligence [AI] front, I have exposure to the most cutting edge AI, and I think people should be really concerned about it. … AI is a rare case; I think we should be proactive in terms of regulation, rather than reactive about it. Because by the time we are reactive about it, it’s too late. … AI is a fundamental risk to human civilization, in a way that car crashes, airplane crashes, faulty drugs or bad food were not. … In space, we get regulated by the FAA. But you know, if you ask the average person, ‘Do you want to get rid of the FAA? Do you want to take a chance on manufacturers not cutting corners on aircraft because profits were down that quarter? Hell no, that sounds terrible.’ Because robots will be able to do everything better than us, and I mean all of us. … We have companies that are racing to build AI, they have to race otherwise they are going to be made uncompetitive. … When the regulators are convinced it is safe then we can go, but otherwise, slow down. [Emphasis added]
Mr. Musk also hinted at American exceptionalism: “America is the distillation of the human spirit of exploration.” Indeed, the link between military technology and commercial applications is an ongoing virtuous cycle. But the kind of regulation that exists in the commercial sphere, imposed by the national, subnational, and local governments of humankind, does not apply so easily in the field of warfare, where no single authority exists. Any agreement to limit technology must be consensus-based, such as a treaty.
In a recent TEDx talk, Peter Haas describes his work in AI and some of the challenges that exist within the state of the art of this technology. As illustrated above, when asked to distinguish between a wolf and a dog, the machine classified the husky in the above photo as a wolf. The humans developing the AI system did not know why this happened, so they asked the system to show the regions of the image that were used to make the decision; the result is depicted on the right side of the image. The fact that this dog was photographed with snow in the background is a form of bias – the presence of snow in a photo is no conclusive proof that a particular animal is a dog or a wolf.
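To make the idea concrete, here is a minimal sketch of one such interrogation technique, occlusion-based saliency, in which patches of the image are masked one at a time to see which regions drive the classifier’s score. This is an illustration of the general approach, not the specific system Haas describes; `model.predict` is a hypothetical stand-in for any image classifier.

```python
# Minimal sketch of occlusion-based saliency: mask each patch of the image
# and record how much the "wolf" score drops. Big drops mark the regions the
# classifier leaned on (e.g., the snow). `model` is hypothetical.
import numpy as np

def occlusion_saliency(model, image, patch=16):
    base_score = model.predict(image)   # e.g., P(wolf) for the intact image
    h, w = image.shape[:2]
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0  # black out one patch
            # influence of this patch = how much the score falls without it
            heatmap[i // patch, j // patch] = base_score - model.predict(occluded)
    return heatmap
```

If the heatmap lights up on the background rather than the animal, the model has learned the bias in the training data, not the concept.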
Right now there are people – doctors, judges, accountants – who are getting information from an AI system and treating it like it was information from a trusted colleague. It is this trust that bothers me. Not because of how often AI gets it wrong; AI researchers pride themselves on the accuracy of results. It is how badly it gets it wrong when it makes a mistake that has me worried. These systems do not fail gracefully.
AI systems clearly have drawbacks, but they also have significant advantages, such as in the curation of a shared model of the battlefield.
In a paper for the Royal Institute of International Affairs in London, Mary Cummings of Duke University says that an autonomous system perceives the world through its sensors and reconstructs it to give its computer ‘brain’ a model of the world which it can use to make decisions. The key to effective autonomous systems is ‘the fidelity of the world model and the timeliness of its updates.‘ [Emphasis added]
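As a rough illustration of what “fidelity” and “timeliness” might mean in code, consider this minimal sketch of a world model fed by timestamped sensor reports. All names here are hypothetical, not drawn from any fielded system:

```python
# Minimal sketch of a world model built from timestamped sensor reports.
# Decisions should lean only on fresh, high-confidence tracks; stale tracks
# must be treated as unknown. All names are hypothetical.
import time
from dataclasses import dataclass, field

@dataclass
class Track:
    position: tuple       # last fused (x, y) estimate
    confidence: float     # crude fidelity measure, 0..1
    last_update: float    # wall-clock time of the newest report

@dataclass
class WorldModel:
    tracks: dict = field(default_factory=dict)
    max_age: float = 5.0  # seconds before a track is considered stale

    def update(self, track_id, position, confidence):
        self.tracks[track_id] = Track(position, confidence, time.time())

    def usable_tracks(self):
        # Filter out anything too old or too uncertain to act on.
        now = time.time()
        return {tid: t for tid, t in self.tracks.items()
                if now - t.last_update < self.max_age and t.confidence > 0.5}
```

Acting on tracks that fail these checks is precisely where such a system, in Haas’s phrase, “does not fail gracefully.”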
Perhaps AI systems might best be employed in the cyber domain, where their advantages are naturally “at home”? Mr. Haas noted that machines currently have a tough time doing simple physical tasks, like opening a door. As was covered in this blog, former Deputy Defense Secretary Robert Work noted this same problem, and thus called for man-machine teaming as one of the key areas of pursuit within the Third Offset Strategy.
Just as the previous blog post illustrates, “the quality of military men is what wins wars and preserves nations.” Let’s remember Paul Van Riper’s performance in Millennium Challenge 2002:
Red, commanded by retired Marine Corps Lieutenant General Paul K. Van Riper, adopted an asymmetric strategy, in particular, using old methods to evade Blue’s sophisticated electronic surveillance network. Van Riper used motorcycle messengers to transmit orders to front-line troops and World-War-II-style light signals to launch airplanes without radio communications. Red received an ultimatum from Blue, essentially a surrender document, demanding a response within 24 hours. Thus warned of Blue’s approach, Red used a fleet of small boats to determine the position of Blue’s fleet by the second day of the exercise. In a preemptive strike, Red launched a massive salvo of cruise missiles that overwhelmed the Blue forces’ electronic sensors and destroyed sixteen warships.
We should learn lessons about overreliance on technology. AI systems are incredibly fickle, yet offer incredible capabilities. We should question and inspect the results of such systems. They do not exhibit emotions; they are not self-aware; they do not spontaneously ask questions unless specifically programmed to do so. We should recognize their significant limitations and use them in conjunction with humans, who will retain command decisions for the foreseeable future.
My previous post outlined the potential advantages and limitations of current and future drone technology. The real utility of drones in future warfare may lie in a tactic that is at once very old and very new: swarming. “‘This [drone swarm concept] goes all the way back to the tactics of Attila the Hun,’ says Randall Steeb, senior engineer at the Rand Corporation in the US. ‘A light attack force that can defeat more powerful and sophisticated opponents. They come out of nowhere, attack from all sides and then disappear, over and over.'”
In order to be effective, Mr. Steeb’s concept would require drones to be able to speed away from their adversary, or be able to hide. The Huns are described as “preferring to defeat their enemies by deceit, surprise attacks, and cutting off supplies. The Huns brought large numbers of horses to use as replacements and to give the impression of a larger army on campaign.” Also, well before the problems the Huns under Attila caused the Roman Empire (~400 CE), the Scythians used similar tactics, as recorded by Herodotus in the fifth century BCE: “With great mobility, the Scythians could absorb the attacks of more cumbersome foot soldiers and cavalry, just retreating into the steppes. Such tactics wore down their enemies, making them easier to defeat.” These tactics were also used by the Parthians, and resulted in the Roman defeat under Crassus at the Battle of Carrhae, 53 BCE. Clearly, maneuver is as old as warfare itself.
Today, fighter pilots approach warfare like a questing medieval knight. They search for opponents with similar capabilities and defeat them by using technologically superior equipment or better application of individual tactics and techniques. For decades, leading air forces nurtured this dynamic by developing expensive, manned air superiority fighters. This will all soon change. Advances in unmanned combat aerial vehicles (UCAVs) will turn fighter pilots from noble combatants to small-unit leaders and drive the development of new aerial combined arms tactics.
Peter Singer, an expert on future warfare at the New America think-tank, is in no doubt. ‘What we have is a series of technologies that change the game. They’re not science fiction. They raise new questions. What’s possible? What’s proper?’ Mr. Singer is talking about artificial intelligence, machine learning, robotics and big-data analytics. Together they will produce systems and weapons with varying degrees of autonomy, from being able to work under human supervision to ‘thinking’ for themselves. The most decisive factor on the battlefield of the future may be the quality of each side’s algorithms. Combat may speed up so much that humans can no longer keep up. Frank Hoffman, a fellow of the National Defense University who coined the term ‘hybrid warfare’, believes that these new technologies have the potential not just to change the character of war but even possibly its supposedly immutable nature as a contest of wills. For the first time, the human factors that have defined success in war, ‘will, fear, decision-making and even the human spark of genius, may be less evident,’ he says. (emphasis added)
Drones are highly capable, and with increasing autonomy, they themselves may be immune to fear. Technology has been progressing step by step to alter the character of war. Think of the Roman soldier and his personal experience of close-up warfare vs. the modern sniper: each has a different experience of warfare, and fear manifests itself in different ways. Unless we create and deploy fully autonomous systems, with no human in or on the loop, there will be an opportunity for fear and confusion in the human mind to creep into martial matters. And indeed, with so much new technology, friction of some sort is almost assured.
I’m not alone in this assessment. Secretary of Defense James Mattis has said “You go all the way back to Thucydides who wrote the first history and it was of a war and he said it’s fear and honor and interest and those continue to this day. The fundamental nature of war is unchanging. War is a human social phenomenon.”
Aerial combat over the past two decades, though relatively rare, continues to demonstrate the importance of superior SA. The building blocks, however, of superior SA, information acquisition and information denial, seem to be increasingly associated with sensors, signature reduction, and networks. Looking forward, these changes have greatly increased the proportion of BVR [Beyond Visual Range] engagements and likely reduced the utility of traditional fighter aircraft attributes, such as speed and maneuverability, in aerial combat. At the same time, they seem to have increased the importance of other attributes.
[I]t is important to acknowledge that all of the foregoing discussion is based on certain assumptions plus analysis of past trends, and the future of aerial combat might continue to belong to fast, agile aircraft. The alternative vision of future aerial combat presented in Chapter 5 relies heavily on robust LoS [Line of Sight] data links to enable widely distributed aircraft to efficiently share information and act in concert to achieve superior SA and combat effectiveness. Should the links be degraded or denied, the concept put forward here would be difficult or impossible to implement.
Therefore, in the near term, one of the most important capabilities to enable is a secure battle network. This will be required for remotely piloted and autonomous systems alike, and it will be the foundation of information dominance – the acquisition of information for use by friendly forces, and the denial of information to an adversary.
In the recently issued 2018 National Defense Strategy, the United States acknowledged that “long-term strategic competitions with China and Russia are the principal priorities for the Department [of Defense], and require both increased and sustained investment, because of the magnitude of the threats they pose to U.S. security and prosperity today, and the potential for those threats to increase in the future.”
The strategy statement lists technologies that will be focused upon:
The drive to develop new technologies is relentless, expanding to more actors with lower barriers of entry, and moving at accelerating speed. New technologies include advanced computing, “big data” analytics, artificial intelligence, autonomy, robotics, directed energy, hypersonics, and biotechnology—the very technologies that ensure we will be able to fight and win the wars of the future… The Department will invest broadly in military application of autonomy, artificial intelligence, and machine learning, including rapid application of commercial breakthroughs, to gain competitive military advantages. (emphasis added)
Autonomy, robotics, artificial intelligence, and machine learning…these are all related to the concept of “drone swarms.” TDI has reported previously on the idea of drone swarms on land. There is indeed promise in many domains of warfare for such technology. In testimony to the Senate Armed Services Committee on the future of warfare, Mr. Bryan Clark of the Center for Strategic and Budgetary Assessments argued that “America should apply new technologies to four main areas of warfare: undersea, strike, air and electromagnetic.”
Drones have certainly transformed the way the U.S. wages war from the air. The Central Intelligence Agency (CIA) was the first to deploy and fire weapons from drones, against the Taliban in Afghanistan, less than one month after the 9/11 attacks on the U.S. homeland. Most drones today are airborne, partly because it is generally easier to navigate in the air than on land, due to fewer obstacles and more uniform and predictable terrain. The same is largely true of the oceans, at least the blue-water parts.
Aerial Drones and Artificial Intelligence
It is important to note that the drones in active use today by the U.S. military are actually remotely piloted Unmanned Aerial Vehicles (UAVs). With the ability to fire missiles since 2001, one could argue that these crossed the threshold into Unmanned Combat Aerial Vehicles (UCAVs), but nonetheless, they have a pilot—typically a U.S. Air Force (USAF) member, who would very much like to be flying an F-16, rather than sitting in a shipping container in the desert somewhere safe, piloting a UAV in a distant theater of war.
A distinction needs to be made between “narrow” AI, which allows a machine to carry out a specific task much better than a human could, and “general” AI, which has far broader applications. Narrow AI is already in wide use for civilian tasks such as search and translation, spam filters, autonomous vehicles, high-frequency stock trading and chess-playing computers… General AI may still be at least 20 years off. A general AI machine should be able to carry out almost any intellectual task that a human is capable of. (emphasis added)
Thus, it is reasonable to assume that the U.S. military (or others) will not field a fully automated drone, capable of prosecuting a battle without human assistance, until roughly 2038. This means that in the meantime, a human will be somewhere “in” or “on” the loop, making at least some of the decisions, especially those involving deadly force.
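As a concrete illustration of just how narrow today’s deployed AI is, here is a minimal sketch of a naive Bayes spam filter, one of the tasks The Economist lists. It is illustrative only and deliberately simplified (real filters use far larger feature sets and more careful smoothing); it does one task and nothing else.

```python
# Minimal sketch of "narrow" AI: a naive Bayes classifier that does one
# specific task (spam filtering) and nothing else. Deliberately simplified.
import math
from collections import Counter

class NaiveBayesSpamFilter:
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.message_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        # label is "spam" or "ham"
        self.message_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def is_spam(self, text):
        scores = {}
        for label in ("spam", "ham"):
            total = sum(self.word_counts[label].values())
            vocab = len(self.word_counts[label])
            score = math.log(self.message_counts[label] + 1)  # crude log-prior
            for word in text.lower().split():
                # add-one smoothing so unseen words don't zero out the score
                score += math.log((self.word_counts[label][word] + 1)
                                  / (total + vocab + 1))
            scores[label] = score
        return scores["spam"] > scores["ham"]

# Usage:
#   f = NaiveBayesSpamFilter()
#   f.train("win money now", "spam"); f.train("meeting at noon", "ham")
#   f.is_spam("free money")  # -> True
```

A system like this can beat a tired human at sorting mail all day long, yet it cannot answer a single question outside that task; that gap is the distance between narrow and general AI.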
Future Aerial Drone Roles and Missions
The CIA’s initial generation of UAVs was armed in an ad-hoc fashion; further innovation was spurred by the drive to seek out and destroy the 9/11 perpetrators. These early vehicles were designed for intelligence, reconnaissance, and surveillance (ISR) missions. In this role, drones have some big advantages over manned aircraft, including the ability to loiter for long periods. They are not quick, not very maneuverable, and as such are suited to operations in permissive airspace.
The development of UCAVs has allowed their integration into strike (air-to-ground) and air superiority (air-to-air) missions in contested airspace. UCAV strike missions could target and destroy land and sea nodes in command, control, communications, computers, intelligence, surveillance and reconnaissance (C4ISR) networks in an attempt to establish “information dominance.” They might also be targeted against assets like surface-to-air missiles and radars, part of an adversary’s anti-access/area denial (A2/AD) capability.
Given the sophistication of Russian and Chinese A2/AD networks and air forces, some focus should be placed on developing the more capable and advanced drones required to defeat these challenges. One example comes from Kratos, a drone maker, as reported in Popular Science.
The Mako drone pictured above has much higher performance than some other visions of future drone swarms, which look more like paper airplanes. Given their size and numbers, such drones might be difficult to shoot down entirely, and they might be able to operate reasonably well within contested airspace. But they are not well suited for air-to-air combat, as they will not have the weapons or the speed necessary to engage current manned aircraft in use with potential enemy air forces. Left unchecked, an adversary’s current fighters and bombers could easily avoid these types of drones and prosecute their own attacks on vital systems, installations, and facilities.
The real utility of drones may lie in the unique tactic for which they are suited, swarming. More on that in my next post.