Category: War by Numbers

Measuring the Effects of Combat in Cities, Phase III – part 1

Now comes Phase III of this effort. The Phase I report was dated 11 January 2002 and covered the European Theater of Operations (ETO). The Phase II report [Part I and Part II] was dated 30 June 2003 and covered the Eastern Front (the three battles of Kharkov). Phase III was completed on 31 July 2004 and covered the Battle of Manila in the Pacific Theater, post-WWII engagements, and battalion-level engagements. It was a pretty far-ranging effort.

In the case of Manila, this was the first time that we based our analysis on one-sided data (U.S. only). In this case, the Japanese tended to fight to almost the last man. We occupied the field of combat after the battle and picked up their surviving unit records. Among the Japanese, almost all died and only a few were captured by the U.S., so we had fairly good data from the U.S. intelligence files. Regardless, the U.S. battle reports were the best data available on the Japanese. This allowed us to work with one-sided data. The engagements were based upon the daily operations of the U.S. Army’s 37th Infantry Division and the 1st Cavalry Division.

Conclusions (from pages 44-45):

The overall conclusions derived from the data analysis in Phase I were as follows, while those from this Phase III analysis are in bold italics.

  1. Urban combat did not significantly influence the Mission Accomplishment (Outcome) of the engagements. Phase III Conclusion: This conclusion was further supported.
  2. Urban combat may have influenced the casualty rate. If so, it appears that it resulted in a reduction of the attacker casualty rate and a more favorable casualty exchange ratio compared to non-urban warfare. Whether or not these differences are caused by the data selection or by the terrain differences is difficult to say, but regardless, there appears to be no basis to the claim that urban combat is significantly more intense with regards to casualties than is non-urban warfare. Phase III Conclusion: This conclusion was further supported. If urban combat influenced the casualty rate, it appears that it resulted in a reduction of the attacker casualty rate and a more favorable casualty exchange ratio compared to non-urban warfare. There still appears to be no basis to the claim that urban combat is significantly more intense with regards to casualties than is non-urban warfare.
  3. The average advance rate in urban combat should be one-half to one-third that of non-urban combat. Phase III Conclusion: There was strong evidence of a reduction in the advance rates in urban terrain in the PTO data. However, given that this was a single extreme case, TDI still stands by its original conclusion that the average advance rate in urban combat should be about one-half to one-third that of non-urban combat.
  4. Overall, there is little evidence that the presence of urban terrain results in a higher linear density of troops, although the data does seem to trend in that direction. Phase III Conclusion: The PTO data shows the highest densities found in the data sets for all three phases of this study. However, it does not appear that the urban density in the PTO was significantly higher than the non-urban density. So it remains difficult to tell whether or not the higher density was a result of the urban terrain or was simply a consequence of the doctrine adopted to meet the requirements found in the Pacific Theater.
  5. Overall, it appears that the loss of armor in urban terrain is the same as or less than that found in non-urban terrain, and in some cases is significantly lower. Phase III Conclusion: This conclusion was further supported.
  6. Urban combat did not significantly influence the Force Ratio required to achieve success or effectively conduct combat operations. Phase III Conclusion: This conclusion was further supported.
  7. Nothing could be determined from an analysis of the data regarding the Duration of Combat (Time) in urban versus non-urban terrain. Phase III Conclusion: Nothing could be determined from an analysis of the data regarding the Duration of Combat (Time) in urban versus non-urban terrain.

So, in Phase I we compared 46 urban and conurban engagements in the ETO to 91 non-urban engagements. In Phase II, we compared 51 urban and conurban engagements in and around Kharkov to 49 non-urban Kursk engagements. In Phase III, we compared 53 urban and conurban engagements from Manila to 41 non-urban engagements, mostly from Iwo Jima, Okinawa and Manila. The next blog post on urban warfare will discuss our post-WWII data.

P.S. The picture is an aerial view of the destroyed walled city of Intramuros, taken in May 1945.

Measuring the Effects of Combat in Cities, Phase II – part 1

Our first urban warfare report had a big impact. It clearly showed that the intensity of urban warfare was not what some of the “experts” out there were claiming. In particular, it called into question some of the claims being made by RAND. But, the report was based upon Aachen, Cherbourg, and a collection of mop-up operations along the Channel Coast. Although this was a good starting point because of the ease of research and availability of data, we did not feel that this was a fully representative collection of cases. We also did not feel that it was based upon enough cases, although we had already assembled more cases than most “experts” were using. We therefore convinced CAA (Center for Army Analysis) to fund a similar effort for the Eastern Front in World War II.

For this second phase, we again assembled a collection of Eastern Front urban warfare engagements in our DLEDB (Division-level Engagement Data Base) and compared it to Eastern Front non-urban engagements. We had, of course, a considerable collection of non-urban engagements already assembled from the Battle of Kursk in July 1943. We therefore needed a good urban engagement nearby. Kharkov is the nearest major city to where these non-urban engagements occurred and it was fought over three times in 1943. It was taken by the Red Army in February, it was retaken by the German Army in March, and it was taken again by the Red Army in August. Many of the units involved were the same units involved in the Battle of Kursk. This was a good close match. It has the additional advantage that both sides were at times on the offense.

Furthermore, Kharkov was a big city. At the time it was the fourth biggest city in the Soviet Union, being bigger than Stalingrad (as measured by pre-war population). A picture of its Red Square in March 1943, after the Germans retook it, is above.

We did have good German records for 1943 and we were able to get access to Soviet division-level records from February, March and August from the Soviet military archives in Podolsk. Therefore, we were able to assemble all the engagements based upon the unit records of both sides. No secondary sources were used; those that were available were incomplete, usually one-sided, sometimes biased and often riddled with factual errors.

So, we ended up with 51 urban and conurban engagements from the fighting around Kharkov, along with 65 non-urban engagements from Kursk (we have more now).

The Phase II effort was completed on 30 June 2003. The conclusions of Phase II (pages 40-41) were similar to Phase I:

Phase II Conclusions:

  1. Mission Accomplishment: This [Phase I] conclusion was further supported. The data does show a tendency for urban engagements not to generate penetrations.
  2. Casualty Rates: This [Phase I] conclusion was further supported. If urban combat influenced the casualty rate, it appears that it resulted in a reduction of the attacker casualty rate and a more favorable casualty exchange ratio compared to nonurban warfare. There still appears to be no basis to the claim that urban combat is significantly more intense with regards to casualties than is nonurban warfare.
  3. Advance Rates: There is no strong evidence of a reduction in the advance rates in urban terrain in the Eastern Front data. TDI still stands by its original conclusion that the average advance rate in urban combat should be one-half to one-third that of nonurban combat.
  4. Linear Density: Again, there is little evidence that the presence of urban terrain results in a higher linear density of troops, but unlike the ETO data, the data did not show a tendency to trend in that direction.
  5. Armor Losses: This conclusion was further supported (Phase I conclusion was: Overall, it appears that the loss of armor in urban terrain is the same as or less than that found in nonurban terrain, and in some cases is significantly lower.)
  6. Force Ratios: The conclusion was further supported (Phase I conclusion was: Urban combat did not significantly influence the Force Ratio required to achieve success or effectively conduct combat operations).
  7. Duration of Combat: Nothing could be determined from an analysis of the data regarding the Duration of Combat (Time) in urban versus nonurban terrain.

There is a part 2 to this effort that I will pick up in a later post.

The (Missing) Urban Warfare Study

[This post was originally published on 13 December 2017]

And then…..we discovered the existence of a significant missing study that we wanted to see.

Around 2000, the Center for Army Analysis (CAA) contracted The Dupuy Institute to conduct an analysis of how to represent urban warfare in combat models. This was the first work we had ever done on urban warfare, so…….we first started our literature search. While there was a lot of impressionistic stuff gathered from reading about Stalingrad and watching field exercises, there was little hard data or analysis. Simply no one had ever done any analysis of the nature of urban warfare.

But, on the board of directors of The Dupuy Institute was a grand old gentleman called John Kettelle. He had previously been the president of Ketron, an operations research company that he had founded. Kettelle had been around the business for a while, having been an office mate of Kimball, of Morse and Kimball fame (the people who wrote the original U.S. Operations Research “textbook” in 1951: Methods of Operations Research). He is here: https://www.adventfuneral.com/services/john-dunster-kettelle-jr.htm?wpmp_switcher=mobile

John had mentioned several times a massive study on urban warfare that he had done for the U.S. Army in the 1970s. He had mentioned details of it, including that it was worked on by his staff over the course of several years, consisted of several volumes, looked into operations in Stalingrad, was pretty extensive and exhaustive, and had a civil disturbance component to it that he claimed was there at the request of the Nixon White House. John Kettelle sold off his company Ketron in the 1990s and was now semi-retired.

So, I asked John Kettelle where his study was. He said he did not know. He called over to the surviving elements of Ketron and they did not have a copy. Apparently significant parts of the study were classified. In our review of the urban warfare literature around 2000 we found no mention of the study or indications that anyone had seen or drawn any references from it.

This was probably the first extensive study ever done on urban warfare. It employed at least a half-dozen people for multiple years. Clearly the U.S. Army spent several million of our hard-earned tax dollars on it…..yet it was not being used and could not be found. It was not listed in DTIC or NTIS, nor on the web, nor was it in Ketron’s files, and John Kettelle did not have a copy of it. It was lost!!!

So, we proceeded with our urban warfare studies independent of past research and ended up doing three reports on the subject. These studies are discussed in two chapters of my book War by Numbers.

All three studies are listed in our report list: http://www.dupuyinstitute.org/tdipub3.htm

The first one is available on line at:  http://www.dupuyinstitute.org/pdf/urbanwar.pdf

As the Ketron urban warfare study was classified, there were probably copies of it in classified U.S. Army command files in the 1970s. If these files have been properly retired, then these classified files may exist in the archives. At some point, they may be declassified. At some point the study may be re-discovered. But……the U.S. Army, after spending millions for this study, proceeded to obtain no benefit from it in the late 1990s, when a lot of people re-opened the issue of urban warfare. This would have certainly been a useful study, especially as much of what the Army, RAND and others were discussing at the time was not based upon hard data and was often dead wrong.

This may be a case of the U.S. Army having to re-invent the wheel because it has not done a good job of protecting and disseminating its studies and analysis. This seems to particularly be a problem with studies that were done by contractors that have gone out of business. Keep in mind, we were doing our urban warfare work for the Center for Army Analysis. As a minimum, they should have had a copy of it.

Measuring The Effects Of Combat In Cities, Phase I

“Catalina Kid,” an M4 medium tank of Company C, 745th Tank Battalion, U.S. Army, drives through the entrance of the Aachen-Rothe Erde railroad station during the fighting around the city viaduct on Oct. 20, 1944. [Courtesy of First Division Museum/Daily Herald]

In 2002, TDI submitted a report to the U.S. Army Center for Army Analysis (CAA) on the first phase of a study examining the effects of combat in cities, or what was then called “military operations on urbanized terrain,” or MOUT. This first phase of a series of studies on urban warfare focused on the impact of urban terrain on division-level engagements and army-level operations, based on data drawn from TDI’s DuWar database suite.

This included engagements in France during 1944 at the Channel and Brittany port cities of Brest, Boulogne, Le Havre, Calais, and Cherbourg, as well as Paris, and the extended series of battles in and around Aachen in 1944. These were then compared to data on fighting in contrasting non-urban terrain in Western Europe in 1944-45.

The conclusions of Phase I of that study (pp. 85-86) were as follows:

The Effect of Urban Terrain on Outcome

The data appears to support a null hypothesis, that is, that the urban terrain had no significantly measurable influence on the outcome of battle.

The Effect of Urban Terrain on Casualties

Overall, any way the data is sectioned, the attacker casualties in the urban engagements are less than in the non-urban engagements and the casualty exchange ratio favors the attacker as well. Because of the selection of the data, there is some question whether these observations can be extended beyond this data, but it does not provide much support to the notion that urban combat is a more intense environment than non-urban combat.

The Effect of Urban Terrain on Advance Rates

It would appear that one of the primary effects of urban terrain is that it slows opposed advance rates. One can conclude that the average advance rate in urban combat should be one-half to one-third that of non-urban combat.

The Effect of Urban Terrain on Force Density

Overall, there is little evidence that combat operations in urban terrain result in a higher linear density of troops, although the data does seem to trend in that direction.

The Effect of Urban Terrain on Armor

Overall, it appears that armor losses in urban terrain are the same as, or lower than armor losses in non-urban terrain. And in some cases it appears that armor losses are significantly lower in urban than non-urban terrain.

The Effect of Urban Terrain on Force Ratios

Urban terrain did not significantly influence the force ratio required to achieve success or effectively conduct combat operations.

The Effect of Urban Terrain on Stress in Combat

Overall, it appears that urban terrain was no more stressful a combat environment during actual combat operations than was non-urban terrain.

The Effect of Urban Terrain on Logistics

Overall, the evidence appears to be that the expenditure of artillery ammunition in urban operations was not greater than that in non-urban operations. In the two cases where exact comparisons could be made, the average expenditure rates were about one-third to one-quarter the average expenditure rates expected for an attack posture in the European Theater of Operations as a whole.

The evidence regarding the expenditure of other types of ammunition is less conclusive, but again does not appear to be significantly greater than the expenditures in non-urban terrain. Expenditures of specialized ordnance may have been higher, but the total weight expended was a minor fraction of that for all of the ammunition expended.

There is no evidence that the expenditure of other consumable items (rations, water or POL) was significantly different in urban as opposed to non-urban combat.

The Effect of Urban Combat on Time Requirements

It was impossible to draw significant conclusions from the data set as a whole. However, in the five significant urban operations that were carefully studied, the maximum length of time required to secure the urban area was twelve days in the case of Aachen, followed by six days in the case of Brest. But the other operations all required little more than a day to complete (Cherbourg, Boulogne and Calais).

However, since it was found that advance rates in urban combat were significantly reduced, then it is obvious that these two effects (advance rates and time) are interrelated. It does appear that the primary impact of urban combat is to slow the tempo of operations.

This in turn leads to a hypothetical construct, where the reduced tempo of urban operations (reduced casualties, reduced opposed advance rates and increased time) compared to non-urban operations, results in two possible scenarios.

The first is if the urban area is bounded by non-urban terrain. In this case the urban area will tend to be enveloped during combat, since the pace of battle in the non-urban terrain is quicker. Thus, the urban battle becomes more a mopping-up operation, as it historically has usually been, rather than a full-fledged battle.

The alternate scenario is that created by an urban area that cannot be enveloped and must therefore be directly attacked. This may be caused by geography, as in a city on an island or peninsula, by operational requirements, as in the case of Cherbourg, Brest and the Channel Ports, or by political requirements, as in the case of Stalingrad, Suez City and Grozny.

Of course these last three cases are also those usually included as examples of combat in urban terrain that resulted in high casualty rates. However, all three of them had significant political requirements that influenced the nature, tempo and even the simple necessity of conducting the operation. And, in the case of Stalingrad and Suez City, significant geographical limitations affected the operations as well. These may well be better used to quantify the impact of political agendas on casualties, rather than to quantify the effects of urban terrain on casualties.

The effects of urban terrain at the operational level, and the effect of urban terrain on the tempo of operations, will be further addressed in Phase II of this study.

My Response To My 1997 Article

Shawn likes to post up on the blog old articles from The International TNDM Newsletter. The previous blog post was one such article I wrote in 1997 (he posted it under my name…although he put together the post). This is the first time I have read it since say….1997. A few comments:

  1. In fact, we did go back and systematically review and correct all the Italian engagements. This was primarily done by Richard Anderson from German and UK records. All the UK engagements were revised, as were many of the other Italian Campaign records. In fact, we ended up revising at least half of the WWII engagements in the Land Warfare Data Base (LWDB).
  2. We did greatly expand our collection of data, to over 1,200 engagements, including 752 in a division-level engagement database. Basically we doubled the size of the database (and placed it in Access).
  3. Using this more powerful data collection, I then re-shot the analysis of combat effectiveness. I did not use any modeling structure, but simply used basic statistics. This effort again showed a performance difference in combat in Italy between the Germans, the Americans and the British. This is discussed in War by Numbers, pages 19-31.
  4. We did actually re-validate the TNDM. The results of this validation are published in War by Numbers, pages 299-324. They were separately validated at corps-level (WWII), division-level (WWII) and at Battalion-level (WWI, WWII and post-WWII).
  5. War by Numbers also includes a detailed discussion of differences in casualty reporting between nations (pages 202-205) and between services (pages 193-202).
  6. We have never done an analysis of the value of terrain using our larger more robust databases, although this is on my short-list of things to do. This is expected to be part of War by Numbers II, if I get around to writing it.
  7. We have done no significant re-design of the TNDM.

Anyhow, that is some of what we have been doing in the intervening 20 years since I wrote that article.

General McInerney

Lt. General Thomas McInerney has been in the news lately, mostly for saying things that are getting him kicked off of news shows:

https://en.wikipedia.org/wiki/Thomas_McInerney

It is my understanding that he was the person who was responsible for making sure that the DACM (Dupuy Air Campaign Model) was funded by AFSC. He then retired from the Air Force in 1994. We completed the demonstration phase of the DACM and, quite simply, there was no one left in the Air Force who was interested in funding it. So, work stopped. I never met General McInerney and was not involved in the marketing of the initial effort.

The Dupuy Institute Air Model Historical Data Study

The Dupuy Air Campaign Model (DACM)

But, this is typical of the problems with doing business with the Pentagon, where an officer will take an interest in your work and generate funding for it, but by the time the first steps are completed, that officer has moved on to another assignment. This has happened to us with other projects. One of these efforts was a joint research project on casualty rates done by TDI and a former Army surgeon. It was for the J-4 of the Joint Staff. The project officer there was extremely interested and involved in the work, but then moved to another assignment. By the time we got the original effort completed, the division was headed by an Air Force colonel who appeared to be only interested in things that flew. Therefore, the project died (except that parts of it were used for Chapter 15: Casualties, pages 193-198, in War by Numbers).

Our experience in dealing with the U.S. defense establishment is that sometimes research efforts that take longer than a few months will die……because the people interested in them have moved on. This sometimes leads to simple, short-term analysis and fewer properly funded long-term projects.

U.S. Army Force Ratios

People do send me some damn interesting stuff. Someone just sent me a page clipped from U.S. Army FM 3-0 Operations, dated 6 October 2017. There is a discussion in Chapter 7 on “penetration.” This brief discussion in paragraph 7-115 states in part:

“7-115. A penetration is a form of maneuver in which an attacking force seeks to rupture enemy defenses on a narrow front to disrupt the defensive system (FM 3-90-1) ….The First U.S. Army’s Operation Cobra (the breakout from the Normandy lodgment in July 1944) is a classic example of a penetration. Figure 7-10 illustrates potential correlation of forces or combat power for a penetration…..”

This is figure 7-10:

So:

  1. Corps shaping operations: 3:1
  2. Corps decisive operations: 9:1
    1. Lead battalion: 18:1

Now, in contrast, let me pull some material from War by Numbers:

From page 10:

European Theater of Operations (ETO) Data, 1944

 

Force Ratio                  Result                     Percent Failure   Number of Cases

0.55 to 1.01-to-1.00         Attack Fails                    100%                 5
1.15 to 1.88-to-1.00         Attack usually succeeds          21%                48
1.95 to 2.56-to-1.00         Attack usually succeeds          10%                21
2.71-to-1.00 and higher      Attacker Advances                 0%                42

 

Note that these are division-level engagements. I guess I could assemble the same data for corps-level engagements, but I don’t think it would look much different.
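The failure percentages in the table lend themselves to a simple band lookup. The sketch below encodes the four ETO 1944 bands exactly as given above; the function name and the decision to return None for ratios falling in the gaps between bands are my own framing, not anything from the study.

```python
# Historical bands from the ETO 1944 division-level table above:
# (low ratio, high ratio, percent of attacks that failed, number of cases)
ETO_1944 = [
    (0.55, 1.01, 100, 5),
    (1.15, 1.88, 21, 48),
    (1.95, 2.56, 10, 21),
    (2.71, float("inf"), 0, 42),
]

def failure_percent(force_ratio):
    """Return the historical attack-failure percentage for the band
    containing force_ratio, or None if the ratio falls between bands."""
    for low, high, pct_fail, _cases in ETO_1944:
        if low <= force_ratio <= high:
            return pct_fail
    return None

print(failure_percent(1.5))   # 21
print(failure_percent(3.0))   # 0
```

Note that the bands do not tile the ratio line (there were simply no engagements at, say, 1.05-to-1), which is why the gap case has to be handled explicitly rather than interpolated.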

From page 210:

Force Ratio              Cases    Terrain      Result

1.18 to 1.29-to-1          4      Nonurban     Defender penetrated
1.51 to 1.64-to-1          3      Nonurban     Defender penetrated
2.01 to 2.64-to-1          2      Nonurban     Defender penetrated
3.03 to 4.28-to-1          2      Nonurban     Defender penetrated
4.16 to 4.78-to-1          2      Urban        Defender penetrated
6.98 to 8.20-to-1          2      Nonurban     Defender penetrated
6.46 to 11.96-to-1         2      Urban        Defender penetrated

 

These are also division-level engagements from the ETO. One will note that out of 17 cases where the defender was penetrated, only once was the force ratio as high as 9 to 1. The mean force ratio for these 17 cases is 3.77 and the median force ratio is 2.64.
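For what it is worth, those summary statistics are a two-line computation once the individual ratios are in hand. The 17 individual force ratios live in the engagement database and are not reproduced above (the table gives only the low and high ratio of each band), so the interior values in the list below are invented placeholders for illustration; the computed mean will land near, but not necessarily exactly on, the reported 3.77.

```python
from statistics import mean, median

# 17 force ratios for "defender penetrated" cases. Only each band's low
# and high ratios appear in the table above; the interior values here
# are placeholders, not the real DLEDB data.
penetration_ratios = [
    1.18, 1.22, 1.25, 1.29,   # 4 nonurban cases
    1.51, 1.60, 1.64,         # 3 nonurban cases
    2.01, 2.64,               # 2 nonurban cases
    3.03, 4.28,               # 2 nonurban cases
    4.16, 4.78,               # 2 urban cases
    6.98, 8.20,               # 2 nonurban cases
    6.46, 11.96,              # 2 urban cases
]

print(len(penetration_ratios))               # 17
print(round(mean(penetration_ratios), 2))    # near the reported 3.77
print(round(median(penetration_ratios), 2))
```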

Now the other relevant tables in this book are in Chapter 8: Outcome of Battles (pages 60-71). There I have a set of tables looking at the loss rates based upon one of six outcomes. Outcome V is defender penetrated. Unfortunately, as the purpose of the project was to determine prisoner of war capture rates, we did not bother to calculate the average force ratio for each outcome. But, knowing the database well, the average force ratio for defender penetrated results may be less than 3-to-1 and is certainly less than 9-to-1. Maybe I will take a few days at some point and put together a force ratio by outcome table.

Now, the source of the FM 3-0 data is not known to us and is not referenced in the manual. Why they don’t provide such a reference is a mystery to me, as I can point out several examples of this being an issue. On more than one occasion data has appeared in Army manuals that we could neither confirm nor check, and for which we could never find the source. But…it is not referenced. I have not looked at the operation in depth, but I don’t doubt that at some point during Cobra they had a 9:1 force ratio and achieved a penetration. But…..this is different than leaving the impression that a 9:1 force ratio is needed to achieve a penetration. I do not know if that was the author’s intent, but it is something that the casual reader might infer. This probably needs to be clarified.

Response 3 (Breakpoints)

This is in response to a long comment by Clinton Reilly about Breakpoints (Forced Changes in Posture) on this thread:

Breakpoints in U.S. Army Doctrine

Reilly starts with a very nice statement of the issue:

Clearly breakpoints are crucial when modelling battlefield combat. I have read extensively about it using mostly first hand accounts of battles rather than high level summaries. Some of the major factors causing it appear to be loss of leadership (e.g. Harald’s death at Hastings), loss of belief in the unit’s capacity to achieve its objectives (e.g. the retreat of the Old Guard at Waterloo), surprise (which often figured in Mongol successes), over confidence resulting in impetuous attacks which fail dramatically (e.g. French attacks at Agincourt and Crecy), and loss of control over the troops (again Crecy and Agincourt). These are some of the main ones I can think of off hand.

The break-point crisis seems to occur against a background of confusion, disorder, mounting casualties, increasing fatigue and loss of morale. Casualties are part of the background but not usually the actual break point itself.

He then states:

Perhaps a way forward in the short term is to review a number of first hand battle accounts (I am sure you can think of many) and calculate the percentage of times these factors and others appear as breakpoints in the literature.

This has been done. In effect this is what Robert McQuie did in his article and what was the basis for the DMSI breakpoints study.

Battle Outcomes: Casualty Rates As a Measure of Defeat

Mr. Reilly then concludes:

Why wait for the military to do something? You will die of old age before that happens!

That is distinctly possible. If this really was a simple issue that one person working for a year could produce a nice definitive answer for…..it would have already been done!!!

Let us look at the 1988 Breakpoints study. There was some effort leading up to that point. Trevor Dupuy and DMSI had already looked into the issue. This included developing a database of engagements (the Land Warfare Data Base, or LWDB) and using that to examine the nature of breakpoints. The McQuie article was developed from this database, and his article was closely coordinated with Trevor Dupuy. This was part of the effort that led the U.S. Army’s Concepts Analysis Agency (CAA) to issue an RFP (Request for Proposal). It was competitive. I wrote the proposal that won the contract award, but the contract was given to Dr. Janice Fain to lead. My proposal was more quantitative in approach than what she actually did. Her effort was more of an intellectual exploration of the issue. I gather this was done with the assumption that there would be a follow-on contract (there never was). Now, up until that point at least a man-year of effort had been expended, and if you count the time to develop the databases used, it was several man-years.

Now the Breakpoints study was headed up by Dr. Janice B. Fain, who worked on it for the better part of a year. Trevor N. Dupuy worked on it part-time. Gay M. Hammerman conducted the interviews with the veterans. Richard C. Anderson researched and created an additional 24 engagements that had clear breakpoints in them for the study (that is DMSI report 117B). Charles F. Hawkins was involved in analyzing the engagements from the LWDB. There were several other people also involved to some extent. Also, 39 veterans were interviewed for this effort. Many were brought into the office to talk about their experiences (that was truly entertaining). There were also a half-dozen other staff members and consultants involved in the effort, including Lt. Col. James T. Price (USA, ret), Dr. David Segal (sociologist), Dr. Abraham Wolf (a research psychologist), Dr. Peter Shapiro (social psychology) and Col. John R. Brinkerhoff (USA, ret). There were consultant fees, travel costs and other expenses related to that. So, the entire effort took at least three “man-years” of effort. This was what was needed just to get to the point where we were able to take the next step.

This is not something that a single scholar can do. That is why funding is needed.

As to dying of old age before that happens…..that may very well be the case. Right now, I am working on two books, one of them under contract. I sort of need to finish those up before I look at breakpoints again. After that, I will decide whether to work on a follow-on to America’s Modern Wars (called Future American Wars) or work on a follow-on to War by Numbers (called War by Numbers II…being the creative guy that I am). Of course, neither of these books is selling well….so perhaps my time would be better spent writing another Kursk book, or any number of other interesting projects on my plate. Anyhow, if I do War by Numbers II, then I do plan on investing several chapters into addressing breakpoints. This would include using the 1,000+ cases that now populate our combat databases to do some analysis. This is going to take some time. So…….I may get to it next year or the year after that, but I may not. If someone really needs the issue addressed, they really need to contract for it.

C-WAM 4 (Breakpoints)

A breakpoint, or involuntary change in posture, is an essential part of combat modeling. There is a breakpoint methodology in C-WAM. According to slide 18 and rule book section 5.7.2, a ground unit below 50% strength can only defend, and it is removed from play below 30% strength. I gather this is a breakpoint for a brigade.
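As a minimal sketch of how such a rule works, the two thresholds above can be expressed as a simple function. This is only an illustration, not C-WAM’s actual implementation: the function name is mine, and how C-WAM handles a unit at exactly 50% or 30% strength is an assumption.

```python
def cwam_posture(current_strength: float, authorized_strength: float) -> str:
    """Illustrative sketch of the C-WAM rule book section 5.7.2 thresholds:
    a ground unit below 50% strength can only defend, and it is removed
    from play below 30% strength. Behavior at exactly 50%/30% is an
    assumption; this is not C-WAM's actual code."""
    ratio = current_strength / authorized_strength
    if ratio < 0.30:
        return "removed"        # unit taken off the board
    elif ratio < 0.50:
        return "defend only"    # involuntary change of posture
    else:
        return "full capability"
```

Note that this is a strength-based breakpoint, exactly the kind of percent-loss formulation that the studies discussed below found little historical support for.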

C-WAM 2

Let me just quote from Chapter 18 (Modeling Warfare) of my book War by Numbers: Understanding Conventional Combat (pages 288-289):

The original breakpoints study was done in 1954 by Dorothy Clark of ORO [which can be found here].[1] It examined forty-three battalion-level engagements where the units “broke,” including measuring the percentage of losses at the time of the break. Clark correctly determined that casualties were probably not the primary cause of the breakpoint and also declared the need to look at more data. Obviously, forty-three cases of highly variable social science-type data with a large number of variables influencing them are not enough for any form of definitive study. Furthermore, she divided the breakpoints into three categories, resulting in one category based upon only nine observations. Also, as should have been obvious, this data would apply only to battalion-level combat. Clark concluded “The statement that a unit can be considered no longer combat effective when it has suffered a specific casualty percentage is a gross oversimplification not supported by combat data.” She also stated “Because of wide variations in data, average loss percentages alone have limited meaning.”[2]

Yet, even with her clear rejection of a percent loss formulation for breakpoints, the 20 to 40 percent casualty breakpoint figures remained in use by the training and combat modeling community. Charts in the 1964 Maneuver Control field manual showed a curve with the probability of unit break based on percentage of combat casualties.[3] Once a defending unit reached around 40 percent casualties, the chance of breaking approached 100 percent. Once an attacking unit reached around 20 percent casualties, the chance of it halting (type I break) approached 100% and the chance of it breaking (type II break) reached 40 percent. These data were for battalion-level combat. Because they were also applied to combat models, many models established a breakpoint of around 30 or 40 percent casualties for units of any size (and often applied to division-sized units).

To date, we have absolutely no idea where these rule-of-thumb formulations came from and despair of ever discovering their source. These formulations persist despite the fact that in fifteen (35%) of the cases in Clark’s study, the battalions had suffered more than 40 percent casualties before they broke. Furthermore, at the division-level in World War II, only two U.S. Army divisions (and there were ninety-one committed to combat) ever suffered more than 30% casualties in a week![4] Yet, there were many forced changes in combat posture by these divisions well below that casualty threshold.

The next breakpoints study occurred in 1988.[5] There was absolutely nothing of any significance (meaning providing any form of quantitative measurement) in the intervening thirty-five years, yet there were dozens of models in use that offered a breakpoint methodology. The 1988 study was inconclusive, and since then nothing further has been done.[6]

This seemingly extreme case is a fairly typical example. A specific combat phenomenon was studied only twice in the last fifty years, both times with inconclusive results, yet this phenomenon is incorporated in most combat models. Sadly, similar examples can be pulled for virtually each and every phenomenon of combat being modeled. This failure to adequately examine basic combat phenomena is a problem independent of actual combat modeling methodology.
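The Maneuver Control break-probability curves described in the excerpt can be sketched as simple functions of the casualty fraction. In the Python illustration below, only the endpoint values come from the text (defender break chance approaching 100% at ~40% casualties; attacker halt chance approaching 100% and break chance reaching 40% at ~20% casualties); the linear shape between zero and those thresholds, and the function and posture names, are my assumptions, not the manual’s actual curves.

```python
def break_probability(casualty_fraction: float, posture: str) -> float:
    """Illustrative piecewise-linear sketch of the Maneuver Control
    break-probability curves as described in the text. Only the endpoints
    are from the source; the linear interpolation is an assumption."""
    if posture == "defender":         # chance the defending unit breaks
        return min(1.0, casualty_fraction / 0.40)
    elif posture == "attacker_halt":  # type I break: the attack halts
        return min(1.0, casualty_fraction / 0.20)
    elif posture == "attacker_break": # type II break: the attacker breaks
        return min(0.40, 0.40 * casualty_fraction / 0.20)
    raise ValueError(f"unknown posture: {posture}")
```

Sketching it this way makes the criticism in the excerpt concrete: the function’s only input is the casualty percentage, which is precisely the oversimplification Clark rejected in 1954.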

Footnotes:

[1] Dorothy K. Clark, Casualties as a Measure of the Loss of Combat Effectiveness of an Infantry Battalion (Operations Research Office, Johns Hopkins University, 1954).

[2] Ibid., page 34.

[3] Headquarters, Department of the Army, FM 105-5 Maneuver Control (Washington, D.C., December, 1967), pages 128-133.

[4] The two exceptions included the U.S. 106th Infantry Division in December 1944, which incidentally continued fighting in the days after suffering more than 40 percent losses, and the Philippine Division upon its surrender in Bataan on 9 April 1942 suffered 100% losses in one day in addition to very heavy losses in the days leading up to its surrender.

[5] This was HERO Report number 117, Forced Changes of Combat Posture (Breakpoints) (Historical Evaluation and Research Organization, Fairfax, VA., 1988). The intervening years between 1954 and 1988 were not entirely quiet. See HERO Report number 112, Defeat Criteria Seminar, Seminar Papers on the Evaluation of the Criteria for Defeat in Battle (Historical Evaluation and Research Organization, Fairfax, VA., 12 June 1987) and the significant article by Robert McQuie, “Battle Outcomes: Casualty Rates as a Measure of Defeat” in Army, issue 37 (November 1987). Some of the results of the 1988 study were summarized in the book by Trevor N. Dupuy, Understanding Defeat: How to Recover from Loss in Battle to Gain Victory in War (Paragon House Publishers, New York, 1990).

[6] The 1988 study was the basis for Trevor Dupuy’s book: Col. T. N. Dupuy, Understanding Defeat: How to Recover From Loss in Battle to Gain Victory in War (Paragon House Publishers, New York, 1990).

Also see:

Battle Outcomes: Casualty Rates As a Measure of Defeat

[NOTE: Post updated to include link to Dorothy Clark’s original breakpoints study.]

Response 2 (Performance of Armies)

In an exchange with one of our readers, he raised the possibility of quantifiably assessing the performance of armies and producing a ranking from best to worst. The exchange is here:

The Dupuy Institute Air Model Historical Data Study

We have done some work on this, and we are the people who have done the most extensive published work on the subject. Swedish researcher Niklas Zetterling also addresses this subject in his book Normandy 1944: German Military Organization, Combat Power and Organizational Effectiveness, as he has elsewhere, for example in an article in The International TNDM Newsletter, volume I, No. 6, pages 21-23, called “CEV Calculations in Italy, 1943.” It is here: http://www.dupuyinstitute.org/tdipub4.htm

When it came to measuring the differences in performance of armies, Martin van Creveld referenced Trevor Dupuy in his book Fighting Power: German and U.S. Army Performance, 1939-1945, pages 4-8.

What Trevor Dupuy has done is compare the performances of both overall forces and individual divisions based upon his Quantified Judgment Model (QJM). This was done in his book Numbers, Predictions and War: The Use of History to Evaluate and Predict the Outcome of Armed Conflict. I bring the reader’s attention to pages ix, 62-63, Chapter 7: Behavioral Variables in World War II (pages 95-110), Chapter 9: Reliably Representing the Arab-Israeli Wars (pages 118-139), and in particular page 135, and pages 163-165. It was also discussed in Understanding War: History and Theory of Combat, Chapter Ten: Relative Combat Effectiveness (pages 105-123).

I ended up dedicating four chapters in my book War by Numbers: Understanding Conventional Combat to the same issue. One of the problems with Trevor Dupuy’s approach is that you had to accept his combat model as a valid measurement of unit performance. This was a reach for many people, especially those who did not like his conclusions to start with. I chose to simply use the combined statistical comparisons of dozens of division-level engagements, which I think makes the case fairly convincingly without adding a construct to manipulate the data. If someone has a disagreement with my statistical compilations and the results and conclusions drawn from them, I have yet to hear it. I would recommend looking at Chapter 4: Human Factors (pages 16-18), Chapter 5: Measuring Human Factors in Combat: Italy 1943-1944 (pages 19-31), Chapter 6: Measuring Human Factors in Combat: Ardennes and Kursk (pages 32-48), and Chapter 7: Measuring Human Factors in Combat: Modern Wars (pages 49-59).

Now, I did end up discussing Trevor Dupuy’s model in Chapter 19: Validation of the TNDM and showing the results of the historical validations we have done of his model, but the model was not otherwise used in any of the analysis done in the book.

But….what we (Dupuy and I) have done is a comparison between forces that opposed each other. It is a measurement of combat value relative to each other. It is not an absolute measurement that can be compared to other armies in different times and places. Trevor Dupuy toyed with this on page 165 of NPW, but this could only be done by assuming that combat effectiveness of the U.S. Army in WWII was the same as the Israeli Army in 1973.

Anyhow, it is probably impossible to come up with a valid performance measurement that would allow you to rank armies from best to worst. It is possible to come up with a comparative performance measurement of armies that have faced each other. This, I believe, we have done, using different methodologies and different historical databases. I do believe it would be possible to then determine what the different factors are that make up this difference. I do believe it would be possible to assign values or weights to those factors. I believe this would be very useful to know, in light of the potential training and organizational value of this knowledge.