
Some Background on TDI Data Bases

The Dupuy Institute (TDI) is sitting on a number of large combat databases that are unique to us and are company proprietary. For obvious reasons, they will stay that way for the foreseeable future.

The original database of battles came to be called the Land Warfare Data Base (LWDB). It was also called the CHASE database by CAA. It consisted of 601 or 605 engagements from 1600-1973. It covered a lot of periods and a lot of different engagement sizes, ranging from very large battles of hundreds of thousands a side to small company-sized actions. The lengths of the battles range from a day to several months (some of the World War I battles, like the Somme).

From that database, which is publicly available, we created a whole series of databases totaling some 1,200 engagements. These are discussed in some depth in past posts.

Our largest and most developed database is our division-level database, covering combat from 1904-1991 with 752 cases. It is discussed here: The Division Level Engagement Data Base (DLEDB) | Mystics & Statistics

We have a number of other databases. They are discussed here: Other TDI Data Bases | Mystics & Statistics

The cost of independently developing such a database is given here: Cost of Creating a Data Base | Mystics & Statistics

Part of the reason for this post is that I am in a discussion with someone who is doing analysis based upon the much older 601-case database. Considering the degree of expansion and improvement, including corrections to some of the engagements, this does not seem a good use of their time, especially as we have so greatly expanded the number of engagements from 1943 on.

Now, I did use some of these databases for my book War by Numbers. I am also using them for my follow-up book, currently titled More War by Numbers. So the analysis I have done based upon them is available. I have also posted parts of the 192 Kursk engagements in my first Kursk book and 76 of them in my Prokhorovka book. None of these engagements were in the original LWDB. 

If people want to use the TDI databases for their own independent analysis, they will need to find the proper funding so as to purchase or get access to these databases. 

TDI and the TNDM

The Dupuy Institute does occasionally make use of a combat model developed by Trevor Dupuy called the Tactical Numerical Deterministic Model (TNDM). That model is a development of his older model the Quantified Judgment Model (QJM). 
There is an impression, because the QJM is widely known, that the TNDM is heavily involved in our work. In fact, over 90% of our work has not involved the TNDM. Here is a list of major projects/publications that we have done since 1993.
Based upon TNDM:
Artillery Suppression Study – study never completed (1993-1995)
Air Model Historical Data feasibility study (1995)
Support contract for South African TNDM (1996?)
International TNDM Newsletter (1996-1998, 2009-2010)
TNDM sale to Finland (2002?)
FCS Study – 2 studies (2006)
TNDM sale to Singapore (2009)
Small-Unit Engagement Database (2011)
Addressed the TNDM:
Bosnia Casualty Estimate (1995) – used the TNDM to evaluate one possible scenario
Casualty Estimation Methodologies Study (2005) – the TNDM was two of the six methodologies tested
Data for Wargames training course (2016)
War by Numbers (2017) – addressed in two chapters out of 20
Did not use the TNDM: 
Kursk Data Base (1993-1996)
Landmine Study for JCS (1996)
Combat Mortality Study (1998)
Record Keeping Survey (1998-2000)
Capture Rate Studies – 3 studies (1998-2001)
Other Landmine Studies – 6 studies (2000-2001)
Lighter Weight Armor Study (2001)
Urban Warfare – 3 studies (2002-2004)
Base Realignment studies for PA – 3 studies (2003-2005)
Chinese Doctrine Study (2003)
Situational Awareness Study (2004)
Iraq Casualty Estimate (2004-2005)
The use of chemical warfare in WWI – feasibility study (2005?)
Battle of Britain Data Base (2005)
1969 Sino-Soviet Conflict (2006)
MISS – Modern Insurgency Spread Sheets (2006-2009)
Insurgency Studies – 11 studies/reports (2007-2009)
America’s Modern Wars (2015)
Kursk: The Battle of Prokhorovka (2015)
The Battle of Prokhorovka (2019)
Aces at Kursk (2021)
More War by Numbers (2022?)
Our bread and butter was all the studies that “did not use the TNDM.” Basically, the capture rate studies, the urban warfare studies and the insurgency studies kept us steadily funded year after year. We would not have been able to maintain TDI on the TNDM alone. We had one contract in excess of $100K in 1994-95 (the Artillery Suppression study), and our next TNDM-related contract that was over $100K was in 2005.

Summation of our Validation Posts

This extended series of posts about validation of combat models was originally started by Shawn Woodford’s post on future modeling efforts and the “Base of Sand” problem.

Wargaming Multi-Domain Battle: The Base Of Sand Problem

This post apparently irked some people at TRADOC and they wrote an article in the December issue of the Phalanx referencing his post and criticizing it. This resulted in the following seven responses from me:

Engaging the Phalanx


Validating Attrition

Physics-based Aspects of Combat

Historical Demonstrations?


Engaging the Phalanx (part 7 of 7)

This was probably overkill… but guys who write 1,662-page books sometimes tend to be a little wordy.

While it is very important to identify a problem, it is also helpful to show the way forward. Therefore, I decided to discuss what databases were available for validation. After all, I would like to see the modeling and simulation efforts move forward (and right now, they seem to be moving backward). This led to the following nine posts:

Validation Data Bases Available (Ardennes)

Validation Data Bases Available (Kursk)

The Use of the Two Campaign Data Bases

The Battle of Britain Data Base

Battles versus Campaigns (for Validation)

The Division Level Engagement Data Base (DLEDB)

Battalion and Company Level Data Bases

Other TDI Data Bases

Other Validation Data Bases

There were also a few other validation issues that had come to mind while I was writing these blog posts, so this led to the following series of three posts:

Face Validation

Validation by Use

Do Training Models Need Validation?

Finally, there were a few other related posts that were scattered through this rather extended diatribe. It includes the following six posts:

Paul Davis (RAND) on Bugaboos


TDI Friday Read: Engaging The Phalanx

Combat Adjudication

China and Russia Defeats the USA

Building a Wargamer

That kind of ends this discussion on validation. It kept me busy for a while. Not sure if you were entertained or informed by it. It is time for me to move on to another subject, not that I have figured out yet what that will be.

Face Validation

The phrase “face validation” shows up in our blog post earlier this week on Combat Adjudication. It is a phrase I have heard many times over the decades, sometimes from very established Operations Researchers (OR). So what does it mean?

Well, it is discussed in the Department of the Army Pamphlet 5-11: Verification, Validation and Accreditation of Army Models and Simulations: Pamphlet 5-11

Their first mention of it is on page 34: “SMEs [Subject Matter Experts] or other recognized individuals in the field of inquiry. The process by which experts compare M&S [Modeling and Simulation] structure and M&S output to their estimation of the real world is called face validation, peer review, or independent review.”

On page 35 they go on to state: “RDA [Research, Development, and Acquisition]….The validation method typically chosen for this category of M&S is face validation.”

And on page 36 under Technical Methods: “Face validation. This is the process of determining whether an M&S, on the surface, seems reasonable to personnel who are knowledgeable about the system or phenomena under study. This method applies the knowledge and understanding of experts in the field and is subject to their biases. It can produce a consensus of the community if the number and breadth of experience of the experts represent the key commands and agencies. Face validation is a point of departure to determine courses of action for more comprehensive validation efforts.” [I put the last part in bold]

Page 36: “Functional decomposition (sometimes known as piecewise validation)….When used in conjunction with face validation of the overall M&S results, functional decomposition is extremely useful in reconfirming previous validation of recently modified portions of the M&S.”

I have not done a survey of all army, air force, navy, marine, coast guard or Department of Defense (DOD) regulations. This one is enough.

So, “face validation” is asking one or more knowledgeable (or more senior) people if the model looks good. I guess it really depends on who the expert is and to what depth they look into it. I have never seen a “face validation” report (validation reports are also pretty rare).

Whose “faces” do they use? Are they outside independent people, or people inside the organization (or the model designer himself)? I am kind of an expert, yet I have never been asked. I do happen to be one of the more experienced model validation people out there, having managed or directly created six-plus validation databases and having conducted five validation-like exercises. When you consider that most people have not done one, should I be a “face” they contact? Or is this process often just to “sprinkle holy water” on the model and be done?

In the end, I gather that for practical purposes the process of face validation is that if a group of people think it is good, then it is good. In my opinion, “face validation” is often just an argument that allows people to explain away or simply dismiss the need for any rigorous analysis of the model. The pamphlet does note that “Face validation is a point of departure to determine courses of action for more comprehensive validation efforts.” How often have we seen the subsequent comprehensive validation effort? Very, very rarely. It appears that “face validation” is the end point.
Is this really part of the scientific method?

Battles versus Campaigns (for Validation)

So we created three campaign databases. One of the strangest arguments I have heard against validating or testing combat models against historical data is that history provides only one outcome. So you don’t know if the model is in error or if this was an unusual outcome to the historical event. Someone described it as the N=1 argument. There are lots of reasons why I am not too impressed with this argument, which I may enumerate in a later blog post. It certainly might apply to testing the model against just one battle (like the Battle of 73 Easting in 1991), but these are weeks-long campaign databases with hundreds of battles. One can test the model against these hundreds of points individually, in addition to testing it against the overall result.

In the case of the Kursk Data Base (KDB), we have actually gone through the database and created from it 192 division-level engagements. This covers every single combat action by every single division during the two-week offensive around Belgorod. Furthermore, I have listed each and every one of these as an “engagement sheet” in my book on Kursk. The 192 engagement sheets are half-page or page-long tabulations of the strengths and losses for each engagement for all units involved. Most sheets cover one day of battle. It took considerable work to assemble these. First, one had to figure out who was opposing whom (especially as unit boundaries never match) and then work from there. So, if someone wants to test a model, model combat, or do historical analysis, one could simply assemble a database from these 192 engagements. If one wanted more details on the engagements, there are detailed breakdowns of the equipment in the Kursk Data Base and detailed descriptions of the engagements in my Kursk book. My new Prokhorovka book (release date 1 June), which only covers the part of the southern offensive around Prokhorovka from the 9th of July, has 76 of those engagement sheets. Needless to say, these Kursk engagements also make up 192 of the 752 engagements in our DLEDB (Division Level Engagement Data Base). A picture of that database is shown at the top of this post.
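As an illustration of how little structure an engagement sheet needs, here is a minimal sketch of such a record in Python. The class and field names are hypothetical, invented for this example; they are not the actual DLEDB or KDB schema.

```python
from dataclasses import dataclass, field

@dataclass
class Engagement:
    """One division-level engagement sheet (hypothetical field names,
    not TDI's actual schema). Most sheets cover a single day of battle."""
    name: str                       # engagement name
    date: str                       # day covered by the sheet
    attacker_units: list = field(default_factory=list)
    defender_units: list = field(default_factory=list)
    attacker_strength: int = 0
    defender_strength: int = 0
    attacker_losses: int = 0
    defender_losses: int = 0

    def force_ratio(self) -> float:
        """Attacker strength per defender."""
        return self.attacker_strength / self.defender_strength

    def loss_exchange_ratio(self) -> float:
        """Defender losses per attacker loss."""
        return self.defender_losses / self.attacker_losses
```

A collection of 192 such records is already enough to compute force ratios, loss exchange ratios, and similar measures across the whole offensive.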

So, if you are conducting a validation against the campaign, take a moment and check the results for each division for each day. In the KDB there were 17 divisions on the German side, and 37 rifle divisions and 10 tank and mechanized corps (a division-sized unit) on the Soviet side. The database covers 15 days of fighting. So… there are around 900 points of daily division-level results to check against. I draw your attention to this graph:

There are a number of these charts in Chapter 19 of my book War by Numbers. Also see:

Validating Attrition
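The arithmetic behind the “around 900 points” figure is simple enough to spell out (the counts come from the text above; the code is just an illustration):

```python
# Division-days available as daily comparison points in a Kursk-style
# validation, using the unit counts given in the text above.
german_divisions = 17
soviet_rifle_divisions = 37
soviet_tank_mech_corps = 10   # division-sized units
days = 15

division_sized_units = (german_divisions + soviet_rifle_divisions
                        + soviet_tank_mech_corps)
division_days = division_sized_units * days
print(division_days)  # 64 units x 15 days = 960, i.e. "around 900" points
```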

The Ardennes database is even bigger. There was one validation done by CAA (Center for Army Analysis) of its CEM model (Concepts Evaluation Model) using the Ardennes Campaign Simulation Data Base (ACSDB). They did this as an overall comparison to the campaign. So they tracked the front line trace at the end of the battle, the total tank losses during the battle, ammunition consumption, and other things like that. They got a fairly good result. What they did not do was go into the weeds and compare the results of the engagements. CEM relies on inputs from ATCAL (Attrition Calculator), which are created from COSAGE model runs. So while they tested the overall top-level model, they really did not test ATCAL or COSAGE, the models that feed into it. ATCAL and COSAGE, I gather, are still in use. In the case of Ardennes you have 36 U.S. and UK divisions and 32 German divisions and brigades over 32 days, so over 2,000 division-days of combat. That is a lot of data points to test against.
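The distinction between testing overall campaign results and testing engagement by engagement matters because aggregate agreement can mask large engagement-level errors. A toy illustration (the numbers are invented, not from the ACSDB or the CEM validation):

```python
# Toy illustration: a model can match campaign totals while being badly
# wrong engagement by engagement. Numbers are invented for the example.
actual    = [100, 400, 150, 350]   # actual losses in four engagements
predicted = [400, 100, 350, 150]   # model's per-engagement predictions

# Aggregate (campaign-level) relative error: totals match exactly here.
aggregate_error = abs(sum(predicted) - sum(actual)) / sum(actual)

# Mean per-engagement relative error: large, despite matching totals.
per_engagement_error = sum(
    abs(p - a) / a for p, a in zip(predicted, actual)
) / len(actual)

print(aggregate_error)       # 0.0 -- the totals agree
print(per_engagement_error)  # about 1.41 -- each engagement is far off
```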

Now we have not systematically gone through the ACSDB and assembled a record for every single engagement there. There would probably be more than 400 such engagements. We have assembled 57 engagements from the Battle of the Bulge for our division-level database (DLEDB). More could be done.

Finally, during our Battle of Britain Data Base effort, we recommended developing an air combat engagement database of 120 air-to-air engagements from the Battle of Britain. We did examine some additional mission-specific data for the British side, derived from the “Form F” Combat Reports for the period 8-12 August 1940, to demonstrate the viability of developing an engagement database from the dataset. We wanted to do for air combat something similar to what we had done with division-level combat. An air-to-air engagement database would be very useful if you are developing any air campaign wargame. This unfortunately was never done by us, as the project (read: funding) ended.

As it is, we actually have three air campaign databases to work from: the Battle of Britain data base, the air component of the Kursk Data Base, and the air component of the Ardennes Campaign Simulation Data Base. There is a lot of material to work from. All it takes is a little time and effort.

I will discuss the division-level data base in more depth in my next post.

The Battle of Britain Data Base

The Battle of Britain data base came into existence at the request of OSD PA&E (Office of the Secretary of Defense, Program Analysis and Evaluation). They contacted us. They were working with LMI (Logistics Management Institute, one of a dozen FFRDCs) to develop an air combat model. They felt that the Battle of Britain would be perfect for helping to develop, test and validate their model. The effort was led by a retired Air Force colonel who had the misfortune of spending part of his career in North Vietnam.

The problem with developing any air campaign database is that, unlike the German army, the Luftwaffe actually followed its orders late in the war to destroy its records. I understand from conversations with Trevor Dupuy that the Luftwaffe records were stored in a train and had been moved to the German countryside (to get them away from the bombing and/or advancing armies). They then burned all the records there at the rail siding.

So, when HERO (Trevor Dupuy’s Historical Evaluation Research Organization) did their work on the Italian Campaign (which was funded by the Air Force), they had to find records on German air activity with the Luftwaffe liaison officers of the German armies involved. The same with Kursk, where one of the few air records we had was with the air liaison officer to the German Second Army. This was the army on the tip of the bulge that was simply holding in place during the battle. It was the only source that gave us a daily count of sorties, German losses, etc. Of the eight or so full wings from the VIII Air Corps that were involved in the battle, we had records for one group of He-111s (there were usually three groups to a wing). We did have good records from the Soviet archives. But it is hard to assemble a good picture of the German side of the battle with records from only 1/24th of the units involved. So the very limited surviving files of the Luftwaffe air liaison officers were all we had to work with for Italy and Kursk. We did not even have that for the Ardennes. Luckily, the German air force simplified things by flying almost no missions until the disastrous Operation Bodenplatte on 1 January 1945. Of course, we had great records from the U.S. and the UK, but… it is hard to develop a good database without records from both sides. Therefore, one is left with few well-documented air battles anywhere for use in developing, evaluating and validating an air campaign model.

The exception is the Battle of Britain, which has been so well researched, and so extensively written about, that it is possible to assemble an accurate and detailed daily account for both sides for every day of the battle. There are also a few surviving records that can be tapped, including the personal kill records of the pilots, the aircraft loss reports of the quartermaster, and the ULTRA reports of intercepted German radio messages. Therefore, we (mostly Richard Anderson) assembled the Battle of Britain data base from British unit records and, for the German side, from the surviving records and the extensive secondary sources. We had already done considerable preliminary research covering 15 August to 19 September 1940 as a result of our work on the DACM (Dupuy Air Combat Model).

The Dupuy Air Campaign Model (DACM)

The database covered the period from 8 August to 30 September 1940. It was programmed in Access by Jay Karamales. From April to July 2004 we did a feasibility study for LMI. We were awarded a contract from OSD PA&E on 1 September to start work on the database. We sent a two-person research team to the British National Archives in Kew Gardens, London. There we examined 249 document files and copied 4,443 pages. The completed database and supporting documentation were delivered to OSD PA&E in August 2005. It was certainly the easiest of our campaign databases to do.

We do not know if OSD PA&E or LMI ever used the data base, but we think not. The database was ordered while they were still working on the model. After we delivered the database to them, we do not know what happened. We suspect the model was never completed and the effort was halted. The database has never been publicly available. PA&E became defunct in 2009 and was replaced by CAPE (Cost Assessment and Program Evaluation). We may be the only people who still have (or can find) a copy of this database.

I will provide a more detailed description of this database in a later post.