Month April 2018

Response 2 (Performance of Armies)

In an exchange, one of our readers raised the possibility of quantifiably assessing the performance of armies and producing a ranking from best to worst. The exchange is here:

The Dupuy Institute Air Model Historical Data Study

We have done some work on this, and are the people who have done the most extensive published work on the subject. Swedish researcher Niklas Zetterling also addresses it in his book Normandy 1944: German Military Organization, Combat Power and Organizational Effectiveness, as he has elsewhere, for example in an article in The International TNDM Newsletter, volume I, no. 6, pages 21-23, called “CEV Calculations in Italy, 1943.” It is here: http://www.dupuyinstitute.org/tdipub4.htm

When it came to measuring the differences in performance of armies, Martin van Creveld referenced Trevor Dupuy in his book Fighting Power: German and U.S. Army Performance, 1939-1945, pages 4-8.

What Trevor Dupuy did was compare the performance of both overall forces and individual divisions based upon his Quantified Judgment Model (QJM). This was done in his book Numbers, Predictions and War: The Use of History to Evaluate and Predict the Outcome of Armed Conflict. I bring the reader’s attention to pages ix, 62-63, Chapter 7: Behavioral Variables in World War II (pages 95-110), Chapter 9: Reliably Representing the Arab-Israeli Wars (pages 118-139), and in particular page 135, and pages 163-165. It was also discussed in Understanding War: History and Theory of Combat, Chapter Ten: Relative Combat Effectiveness (pages 105-123).

I ended up dedicating four chapters in my book War by Numbers: Understanding Conventional Combat to the same issue. One of the problems with Trevor Dupuy’s approach is that you had to accept his combat model as a valid measurement of unit performance. This was a reach for many people, especially those who did not like his conclusions to start with. I chose simply to use the combined statistical comparisons of dozens of division-level engagements, which I think makes the case fairly convincingly without adding a construct to manipulate the data. If someone disagrees with my statistical compilations and the results and conclusions drawn from them, I have yet to hear it. I would recommend looking at Chapter 4: Human Factors (pages 16-18), Chapter 5: Measuring Human Factors in Combat: Italy 1943-1944 (pages 19-31), Chapter 6: Measuring Human Factors in Combat: Ardennes and Kursk (pages 32-48), and Chapter 7: Measuring Human Factors in Combat: Modern Wars (pages 49-59).

Now, I did end up discussing Trevor Dupuy’s model in Chapter 19: Validation of the TNDM and showing the results of the historical validations we have done of his model, but the model was not otherwise used in any of the analysis done in the book.

But… what we (Dupuy and I) have done is a comparison between forces that opposed each other. It is a measurement of combat value relative to each other. It is not an absolute measurement that can be compared across armies in different times and places. Trevor Dupuy toyed with this on page 165 of NPW, but it could only be done by assuming that the combat effectiveness of the U.S. Army in WWII was the same as that of the Israeli Army in 1973.

Anyhow, it is probably impossible to come up with a valid performance measurement that would allow you to rank armies from best to worst. It is possible to come up with a comparative performance measurement of armies that have faced each other. This, I believe, we have done, using different methodologies and different historical databases. I do believe it would be possible to then determine what the different factors are that make up this difference. I do believe it would be possible to assign values or weights to those factors. I believe this would be very useful to know, in light of the potential training and organizational value of this knowledge.

Why is WWI so forgotten?

A view on the U.S. remembrance, or lack thereof, of World War One from the British paper The Guardian:  https://www.theguardian.com/world/2017/apr/06/world-war-1-centennial-us-history-modern-america

We do have World War I engagements in our databases and have included them in some of our analyses. We have done some other research related to World War I (funded by the UK Ministry of Defence, of course):

Captured Records: World War I

We also have a few other blog posts about the war:

Learning From Defeat in World War I

First World War Digital Resources

It was my grandfather’s war, but he was British at the time.

Murmansk

 

The Dupuy Institute Air Model Historical Data Study

British Air Ministry aerial combat diagram that sought to explain how the RAF had fought off the Luftwaffe. [World War II Today]

[The article below is reprinted from the April 1997 edition of The International TNDM Newsletter.]

Air Model Historical Data Study
by Col. Joseph A. Bulger, Jr., USAF, Ret

The Air Model Historical Data Study (AMHDS) was designed to lead to the development of an air campaign model for use by the Air Command and Staff College (ACSC). This model, never completed, became known as the Dupuy Air Campaign Model (DACM). It was a team effort led by Trevor N. Dupuy and included the active participation of Lt. Col. Joseph Bulger, Gen. Nicholas Krawciw, Chris Lawrence, Dave Bongard, Robert Schmaltz, Robert Shaw, Dr. James Taylor, John Kettelle, Dr. George Daoust and Louis Zocchi, among others. After Dupuy’s death, I took over as the project manager.

At the first meeting of the team Dupuy assembled for the study, it became clear that this effort would be a serious challenge. In his own style, Dupuy was careful to provide essential guidance while, at the same time, cultivating a broad investigative approach to the unique demands of modeling air combat. It would have been no surprise if the initial guidance had established a focus on the analytical approach, level of aggregation, and overall philosophy of the QJM [Quantified Judgment Model] and TNDM [Tactical Numerical Deterministic Model]. Instead, it was clear that Trevor had no intention of steering the study into an air combat modeling methodology based directly on the QJM/TNDM. To the contrary, he insisted on a rigorous derivation of the factors that would permit the final choice of model methodology.

At the time of Dupuy’s death in June 1995, the Air Model Historical Data Study had reached a point where a major decision was needed. The early months of the study had been devoted to developing a consensus among the TDI team members with respect to the factors that needed to be included in the model. The discussions tended to highlight three areas of particular interest—factors that had been included in models currently in use, the limitations of these models, and the need for new factors (and relationships) peculiar to the properties and dynamics of the air campaign. Team members formulated a family of relationships and factors, but the model architecture itself was not investigated beyond the surface considerations.

Despite substantial contributions from team members, including analytical demonstrations of selected factors and air combat relationships, no consensus had been achieved. On the contrary, there was a growing sense of need to abandon traditional modeling approaches in favor of a new application of the “Dupuy Method” based on a solid body of air combat data from WWII.

The Dupuy approach to modeling land combat relied heavily on the ratio of force strengths (largely determined by firepower as modified by other factors). After almost a year of investigations by the AMHDS team, it was beginning to appear that air combat differed in a fundamental way from ground combat. The essence of the difference is that in air combat, the outcome of the maneuver battle for platform position must be determined before the firepower relationships may be brought to bear on the battle outcome.

At the time of Dupuy’s death, it was apparent that if the study contract was to yield a meaningful product, an immediate choice of analysis thrust was required. Shortly before Dupuy’s death, other members of the TDI team and I had recommended that we adopt the overall approach, level of aggregation, and analytical complexity that had characterized Dupuy’s models of land combat. We also agreed on the time-sequenced predominance of the maneuver phase of air combat. When I was asked to take the analytical lead for the contract in Dupuy’s absence, I was reasonably confident that there was overall agreement.

In view of the time available to prepare a deliverable product, it was decided to prepare a model using the air combat data we had been evaluating up to that point—June 1995. Fortunately, Robert Shaw had developed a set of preliminary analysis relationships that could be used in an initial assessment of the maneuver/firepower relationship. In view of the analytical, logistic, contractual, and time factors discussed, we decided to complete the contract effort based on the following analytical thrust:

  1. The contract deliverable would be based on the maneuver/firepower analysis approach as currently formulated in Robert Shaw’s performance equations;
  2. A spreadsheet formulation of outcomes for selected (Battle of Britain) engagements would be presented to the customer in August 1995;
  3. To the extent practical, a working model would be provided to the customer with suggestions for further development.

During the following six weeks, the demonstration model was constructed. The model (programmed for a Lotus 1-2-3 style spreadsheet formulation) was developed, mechanized, and demonstrated to ACSC in August 1995. The final report was delivered in September of 1995.

The working model demonstrated to ACSC in August 1995 suggests the following observations:

  • A substantial contribution to the understanding of air combat modeling has been achieved.
  • While relationships developed in the Dupuy Air Combat Model (DACM) are not fully mature, they are analytically significant.
  • The approach embodied in DACM derives its authenticity from the famous “Dupuy Method,” thus ensuring its strong correlation with actual combat data.
  • Although demonstrated only for air combat in the Battle of Britain, the methodology is fully capable of incorporating modern technology contributions to sensor, command and control, and firepower performance.
  • The knowledge base, fundamental performance relationships, and methodology contributions embodied in DACM are worthy of further exploration. They await only the expression of interest and a relatively modest investment to extend the analysis methodology into modern air combat and the engagements anticipated for the 21st Century.

One final observation seems appropriate. The DACM demonstration provided to ACSC in August 1995 should not be dismissed as a perhaps interesting, but largely simplistic, approach to air combat modeling. It is a significant contribution to the understanding of air combat relationships that will prevail in the 21st Century. The Dupuy Institute is convinced that further development of DACM makes eminently good sense. An exploitation of the maneuver and firepower relationships already demonstrated in DACM will provide a valid basis for modeling air combat with modern technology sensors, control mechanisms, and weapons. It is appropriate to include the Dupuy name in the title of this latest in a series of distinguished combat models. Trevor would be pleased.

Why it is difficult to withdraw from (Syria, Iraq, Afghanistan….)

Leaving an unstable country in some regions is an invitation to further international problems. This was the case with Afghanistan in the 1990s, which resulted in Al-Qaeda being hosted there. This was the case with Somalia, which not only hosted elements of Al-Qaeda, but also conducted rampant piracy. This was the case with Iraq/Syria, which gave the Islamic State a huge opening and resulted in them seizing the second largest city in Iraq. It seems a bad idea to ignore these areas, even though there is a cost to not ignoring them.

The cost of not ignoring them is that one must maintain a presence of something like 2,000 to 20,000 or more support troops, Air Force personnel, trainers, advisors, special operations forces, etc. And they must be maintained for a while. It will certainly result in the loss of a few American lives, perhaps even dozens. It will certainly cost hundreds of millions to pay for deployment, conduct security operations, develop the local forces, and rebuild and revitalize these areas. In fact, the bill usually ends up running to billions. Furthermore, these operations go on for a decade or two or more, and the annual cost times 20 years gets considerable. We have never done any studies of “security operations” or “advisory missions”; the focus of our work was on insurgencies. But we have no doubt that these things tend to drag on a while before completion.
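To put some rough numbers on how the annual cost times 20 years gets considerable, here is a back-of-envelope calculation. The troop strength and per-troop figure below are purely illustrative assumptions, not results from any study:

```python
# Back-of-envelope cost of a sustained security/advisory presence.
# Both input figures are illustrative assumptions only.
troops = 5_000                      # hypothetical deployed strength
cost_per_troop_per_year = 100_000  # hypothetical all-in annual cost, in dollars

annual_cost = troops * cost_per_troop_per_year  # $500 million per year
twenty_year_cost = annual_cost * 20             # $10 billion over two decades

print(f"${annual_cost:,} per year -> ${twenty_year_cost:,} over 20 years")
```

Even with these modest assumptions, a mission in the hundreds of millions per year compounds into a multi-billion-dollar commitment over two decades.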

The cost of ignoring these countries may be nothing. If there is no international terror threat and no direct threat to our interests, then there may not be a major cost to withdrawing. On the other hand, the cost of ignoring Somalia was a piracy campaign that started around 2005, in which pirates attacked at least 232 ships and captured over 3,500 seafarers, at least 62 of whom died. The cost of ignoring Afghanistan in the 1990s? Well, was it 9/11? Would 9/11 have occurred anyway if Al-Qaeda had not been free to reside, organize, recruit and train in Afghanistan? I don’t know for sure… but I think it was certainly an enabling factor.

I have never seen a study that analyzes/estimates the cost of these interventions (although some such studies may exist).  Conversely, I have never seen a study that analyzes/estimates the cost of not doing these interventions (and I kind of doubt that such a study exists).

It is hard to analyze the cost of the trade-off if we really don’t know the costs.

 

Syrian Disengagement

The United States has struggled with what to do in Syria. We never had good relations with the dictatorial Assad family. Their civil war started with civil protests on 15 March 2011 as part of the Arab Spring. The protests turned bloody, with over a thousand civilians dead (I have no idea how accurate this number is) and thousands arrested. It had turned into a full civil war by late July 2011. Our initial response was to remain disengaged.

It was only when Assad used chemical weapons against his own population, as Saddam Hussein of Iraq had done, that we finally considered intervening. President Obama announced a “red line” on 20 August 2012 against the use of chemical weapons. Assad’s forces violated it on 17 October 2012 in Salqin, on 23 December 2012 at Al-Bayadah, most notably on 19 March 2013 in Aleppo and in several other locations during March and April, on 29 April 2013 in Saraqib and in a couple more incidents in May, and on 21 August 2013 in Ghouta and several other incidents that August. All of these attacks used the nerve agent Sarin. Instead of a military response, this turned into a coordinated international effort, conducted in conjunction with Russia, to eliminate all of Syria’s chemical weapons. It was not entirely successful, as repeated later incidents would demonstrate.

In my opinion, the United States should have intervened with considerable force in March 2013, if not before. This would have included a significant air campaign, extensive aid to the rebels, and a small number of advisors. It would certainly have entailed some American casualties. Perhaps the overall results would have been no better than in Libya (which has also been in civil war since 2011). But at least in Libya we did get rid of Muammar Gaddafi, in October 2011. Gaddafi had most likely organized a terrorist attack against the United States: the 1988 Lockerbie bombing of Pan Am Flight 103, which killed 270 people, including 190 Americans (and was most likely conducted in response to Reagan’s 1986 U.S. bombing of Libya).

Still, an intervention in Syria at that point may well have ended Assad’s regime and empowered a moderate Sunni Arab force that could control the government. It might also have forestalled the rise of ISIL. Or it might not have… it is hard to say. But what happened over the next eight years, with the rise of ISIL, their seizure of Mosul in Iraq, and the extended civil war, was probably close to a worst-case scenario. This was a case where an early intervention may have led to a more favorable result for us. I suspect that our intervention in Libya probably created a more favorable result than if we had not intervened.

The problem in Syria is that Assad represents a minority government of Shiite Arabs, who make up around 13% of the population (the largest group being Alawites). It rules over a population that is 69-74% Sunni (mostly Arabs, but including Kurds and Turcomans). In the end, given enough decades and enough violence, the majority will eventually rule. It is hard to imagine in this day and age that a minority can continue to rule forever, although Bashar Assad and his father have now ruled over Syria for almost 49 years. Part of what makes that possible is that around 10% of the population of Syria is Christian and 3% Druze. They tend to side with and support the Alawites, as a dominant, non-democratic Sunni rule would be extremely prejudiced against them. Needless to say, something like an Islamic State would be a nightmare scenario for them. So, for all practical purposes, Assad tends to have the support of at least a quarter of the population. From their central position, and armed by Russia, this makes them a significant force.

So, the question becomes: should the United States now disengage from Syria, now that the Islamic State is gone (though as many as 3,000 of their fighters remain)? Right now, we have at least 2,000 troops in and around Syria, with most of them outside of Syria (mostly based with our fellow NATO member Turkey). We have lost a total of two people since this affair started. We are allied with and supporting small moderate Sunni Arab groups and some Kurdish groups (which Turkey opposes and sometimes engages in combat). Turkey is supporting some of its own moderate Sunni Arab groups. Also in Syria are the radical Arab groups: Al-Qaeda, Al-Nusrah and, of course, the Islamic State (whose leader is still at large). So, is it time to leave?

What are the possible outcomes if we leave?

  1. Assad will win the civil war and we will have “peace in our time” (written with irony).
    1. As the moderate Sunni groups are primarily based in Turkey, they may not disappear anytime soon, especially if they are still being given support by Saudi Arabia and other Arab nations, even if the U.S. withdraws its support.
    2. The Kurdish groups are still in Syria and probably not going away soon. They have some support from the Kurds in Iraq.
    3. Al-Qaeda and ISIL and other radical groups are probably not going away as long as Syria is ruled by the Alawites.
    4. There is a border with Iraq that facilitates flow of arms and men in both directions.
  2. The civil war will continue at a low level.
    1. A pretty likely scenario given the points above.
    2. Will this allow for the resurgence of radical Islamist groups?
  3. The civil war will continue at significant intensity for a while.
    1. Hard to say how long people can maintain a civil war, but the war in Lebanon went on for a while (over 15 years, from 1975 to 1990).
    2. This will certainly allow for the resurgence of radical Islamist groups.
  4. We will have a period of relative peace and then there will be a second civil war later.
    1. The conditions that led to the first revolt have not been corrected in any manner.
    2. Syria is still a minority ruled government.
    3. This could allow for the resurgence of radical Islamist groups.
  5. There is a political compromise and joint or shared rule.
    1. I don’t think this was ever on Assad’s agenda before, and it certainly will not be now.
  6. Assad is overthrown.
    1. This is extremely unlikely, but one cannot rule out an internal Alawite coup by a leadership with a significantly different view and approach.
    2. As it is, it does not look like he is going to be defeated militarily any time soon.

So, where does continued U.S. engagement or disengagement help or hinder in these scenarios?

A few related links:

  1. Map of situation in Syria (have no idea how accurate it is): https://www.aljazeera.com/indepth/interactive/2015/05/syria-country-divided-150529144229467.html
  2. Comments by Lindsey Graham on Syria: https://www.yahoo.com/news/republican-senator-graham-warns-against-syria-troop-withdrawal-165314872.html
  3. More Maps: http://www.newsweek.com/russia-says-syria-war-nearly-over-trump-claims-us-leave-very-soon-866770


Response

A fellow analyst posted an extended comment to two of our threads:

C-WAM 3

and

Military History and Validation of Combat Models

Instead of responding in the comments section, I have decided to respond with another blog post.

As the person points out, most Army simulations exist to “enable students/staff to maintain and improve readiness…improve their staff skills, SOPs, reporting procedures, and planning….”

Yes, this is true, but I argue that it does not obviate the need for accurate simulations. Assuming no change in complexity, I cannot think of a single scenario where having a less accurate model is more desirable than having a more accurate one.

Now, what is missing from many of the models I have seen? Often a realistic unit breakpoint methodology, a proper comparison of force ratios, a proper set of casualty rates, a treatment of human factors, and many other matters. Many of these things are already being done in these simulations, but are being done incorrectly. Quite simply, they do not realistically portray a range of historical or real combat examples.

He then quotes the 1997-1998 Simulation Handbook of the National Simulations Training Center:

“The algorithms used in training simulations provide sufficient fidelity for training, not validation of war plans. This is due to the fact that important factors (leadership, morale, terrain, weather, level of training of units) and a myriad of human and environmental impacts are not modeled in sufficient detail….”

Let’s take their list, made around 20 years ago. In the last 20 years, what significant quantitative studies have been done on the impact of leadership on combat? Can anyone list them? Can anyone point to even one? The same goes for morale or the level of training of units. The Army has TRADOC, the Army Staff, Leavenworth, the War College, CAA and other agencies, and I have not seen a quantitative study done in the last twenty years to address these issues. And what of terrain and weather? They have been around for a long time.

Army simulations have been around since the late 1950s. So by the time these shortfalls were noted in 1997-1998, 40 years had passed. By their own admission, these issues had not been adequately addressed in the previous 40 years. I gather they have not been adequately addressed in the last 20 years either. So the clock is ticking: 60 years of Army modeling and simulation, and no one has yet fully and properly addressed many of these issues. In many cases, they have not even gotten a good start on them.

Anyhow, I have little interest in arguing these issues. My interest is in correcting them.

Assessing The Assessments Of The Military Balance In The China Seas

“If we maintain our faith in God, love of freedom, and superior global airpower, the future [of the US] looks good.” — U.S. Air Force General Curtis E. LeMay (Commander, U.S. Strategic Command, 1948-1957)

Curtis LeMay was involved in the formation of the RAND Corporation after World War II. RAND created several models to measure the dynamics of the US-China military balance over time. Since 1996, this has been computed for two scenarios, differing by range from mainland China: one over Taiwan and the other over the Spratly Islands. The results of the model for selected years can be seen in the graphic below.

The capabilities listed in the RAND study are interesting, notably that in the air superiority category rough parity exists as of 2017. Also, the ability to attack air bases has given an advantage to the Chinese forces.

Investigating the methodology does not yield the precise quantitative modeling detail that would be expected in a rigorous academic effort, although there is some mention of statistics, simulation and historical examples.

The analysis presented here necessarily simplifies a great number of conflict characteristics. The emphasis throughout is on developing and assessing metrics in each area that provide a sense of the level of difficulty faced by each side in achieving its objectives. Apart from practical limitations, selectivity is driven largely by the desire to make the work transparent and replicable. Moreover, given the complexities and uncertainties in modern warfare, one could make the case that it is better to capture a handful of important dynamics than to present the illusion of comprehensiveness and precision. All that said, the analysis is grounded in recognized conclusions from a variety of historical sources on modern warfare, from the air war over Korea and Vietnam to the naval conflict in the Falklands and SAM hunting in Kosovo and Iraq. [Emphasis added].

We coded most of the scorecards (nine out of ten) using a five-color stoplight scheme to denote major or minor U.S. advantage, a competitive situation, or major or minor Chinese advantage. Advantage, in this case, means that one side is able to achieve its primary objectives in an operationally relevant time frame while the other side would have trouble in doing so. [Footnote] For example, even if the U.S. military could clear the skies of Chinese escort fighters with minimal friendly losses, the air superiority scorecard could be coded as “Chinese advantage” if the United States cannot prevail while the invasion hangs in the balance. If U.S. forces cannot move on to focus on destroying attacking strike and bomber aircraft, they cannot contribute to the larger mission of protecting Taiwan.

All of the dynamic modeling methodology (which involved a mix of statistical analysis, Monte Carlo simulation, and modified Lanchester equations) is publicly available and widely used by specialists at U.S. and foreign civilian and military universities. [Emphasis added].

As TDI has contended before, the problem with using Lanchester’s equations is that, despite numerous efforts, no one has been able to demonstrate that they accurately represent real-world combat. So, even with statistics and simulation, how good are the results if they rely on factors or force ratios with no relation to actual combat?
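For those who have not seen them, Lanchester’s “square law” equations posit that each side’s losses are proportional to the other side’s strength, which makes numerical superiority count quadratically. A minimal sketch, with arbitrary illustrative coefficients and starting strengths (not validated values):

```python
# Minimal sketch of Lanchester's "square law" attrition equations:
#   dR/dt = -b * B,   dB/dt = -r * R
# The coefficients and starting strengths are illustrative only;
# they are not drawn from any validated data set.

def lanchester_square(red, blue, r_eff, b_eff, dt=0.01):
    """Integrate the square-law equations until one side is destroyed."""
    while red > 0 and blue > 0:
        # Simultaneous update: each side's losses depend on the
        # other side's current strength.
        red, blue = red - b_eff * blue * dt, blue - r_eff * red * dt
    return max(red, 0), max(blue, 0)

# Equal per-unit effectiveness, 2:1 numerical superiority for red.
red_left, blue_left = lanchester_square(red=1000, blue=500, r_eff=0.05, b_eff=0.05)

# The square law predicts the larger force annihilates the smaller with
# roughly sqrt(1000^2 - 500^2), about 866, survivors: numbers count
# quadratically, which is exactly the assumption critics question.
print(round(red_left), round(blue_left))
```

The point of contention is not the arithmetic, which is simple, but whether historical combat outcomes actually follow this quadratic relationship.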

What about new capabilities?

As previously posted, the Kratos Mako Unmanned Combat Aerial Vehicle (UCAV), marketed as the “unmanned wingman,” has recently been cleared for export by the U.S. State Department. The vehicle is specifically oriented toward air-to-air combat and is stated to have unparalleled maneuverability, as it need not abide by the limits imposed by human physiology. The Mako “offers fighter-like performance and is designed to function as a wingman to manned aircraft, as a force multiplier in contested airspace, or to be deployed independently or in groups of UASs. It is capable of carrying both weapons and sensor systems.” In addition, the Mako can be launched independently of a runway, as illustrated below. The price for these vehicles is $3 million each, dropping to $2 million each for an order of at least 100 units. Assuming a cost of $95 million for an F-35A, we can imagine a hypothetical combat scenario pitting two F-35As against 100 of these Mako UCAVs in a drone swarm; a great example of the famous phrase, “quantity has a quality all its own.”
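The arithmetic behind that hypothetical scenario, using only the unit prices quoted above (the two-versus-100 pairing is my hypothetical, not a doctrinal matchup):

```python
# Cost comparison using the unit prices quoted in the text:
# $95M per F-35A, $2M per Mako UCAV at the 100-unit price break.
F35_UNIT_COST = 95_000_000
MAKO_UNIT_COST_BULK = 2_000_000

two_f35s = 2 * F35_UNIT_COST            # cost of the pair of fighters
mako_swarm = 100 * MAKO_UNIT_COST_BULK  # cost of the 100-drone swarm

# Roughly even money: for the exchange to break even on cost, each
# F-35 would have to destroy about 50 drones.
drones_per_fighter_to_break_even = 100 / 2

print(two_f35s, mako_swarm, drones_per_fighter_to_break_even)
```

At near cost parity, the question becomes attrition per engagement rather than procurement budgets, which is where the swarm's numbers begin to tell.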

A battery of Kratos aerial target drones ready for takeoff. One of the advantages of the low-cost Kratos drones is their ability to get into the air quickly. [Kratos Defense]

How to evaluate the effects of these possible UCAV drone swarms?

In building up toward the analysis of all of these capabilities in a full theater, campaign-level conflict, some supplemental wargaming may be useful. One game that takes a good shot at modeling these dynamics is Asian Fleet. This is part of the venerable Fleet Series, published by Victory Games and designed by Joseph Balkoski to model modern (that is, Cold War) naval combat. The game system has been extended in recent years, originally by Command Magazine Japan and later by Technical Term Gaming Company.

Screenshot of Asian Fleet module by Bryan Taylor [vassalengine.org]

More to follow on how this game transpires!

C-WAM 3

Now, in the article by Michael Peck introducing C-WAM, there was a quote that got our attention:

“We tell everybody: Don’t focus on the various tactical outcomes,” Mahoney says. “We know they are wrong. They are just approximations. But they are good enough to say that at the operational level, ‘This is a good idea. This might work. That is a bad idea. Don’t do that.’”

Source: https://www.govtechworks.com/how-a-board-game-helps-dod-win-real-battles/#gs.ifXPm5M

I am sorry, but this line of argument has always bothered me.

While I understand that no model is perfect, accuracy is the goal that modelers should always strive for. If the model is a poor representation of combat, or of parts of combat, then what are you teaching the user? If the user is professional military, is this negative training? Are you teaching them an incorrect understanding of combat? Will that understanding only be corrected after real combat and the loss of American lives? This is not being melodramatic… you fight as you train.

We have seen the argument made elsewhere that some models are only being used for training, so…….

I would like to again bring your attention to the “base of sand” problem:

https://dupuyinstitute.org/2017/04/10/wargaming-multi-domain-battle-the-base-of-sand-problem/

As always, it seems that making the models more accurate takes lower precedence than whatever else is at hand. Validating models tends to never be done. JICM has never been validated. COSAGE and ATCAL, as used in JICM, have never been validated. I don’t think C-WAM has ever been validated.

Just to be annoyingly preachy, I would like to again bring your attention to the issue of validation:

Military History and Validation of Combat Models