Continuing my comments on the article in the December 2018 issue of the Phalanx by Alt, Morey and Larimer (this is part 4 of 7; see Part 1, Part 2, Part 3).
The next sentence in the article is interesting. After saying that validating models incorporating human behavior is difficult (and therefore should not be done?), they then say:
In combat simulations, those model components that lend themselves to empirical validation, such as the physics-based aspects of combat, are developed, validated, and verified using data from an accredited source.
This is good. But the problem is that it limits one to validating only models that do not include humans. If one is comparing a weapon system to a weapon system, as they discuss later, this is fine. On the other hand, if one is comparing units in combat to units in combat…then there are invariably humans involved. Even if you are comparing weapon systems against weapon systems in an operational environment, there are humans involved. Therefore, you have to address human factors. Once you have gone beyond simple weapon-versus-weapon comparisons, you need models that game situations involving humans. I gather from the previous sentence (see Part 3 of 7) and this sentence that they are using unvalidated models. Their extended discussion of SMEs (Subject Matter Experts) that follows just reinforces that impression.
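As an aside on what “empirical validation” means in practice: at its simplest, it is a comparison of model output against recorded outcomes. The sketch below is a minimal, purely hypothetical illustration in Python; the engagement names and casualty figures are invented, and a real validation effort would draw on an accredited historical data source and compare far more than a single outcome measure.

```python
# Hypothetical sketch: comparing a combat model's predicted outcomes
# against historical engagement data. All names and numbers here are
# invented for illustration; a real validation would use an accredited
# data source and many outcome measures, not just one.

historical = {  # engagement -> observed attacker casualties
    "Engagement A": 120,
    "Engagement B": 340,
    "Engagement C": 85,
}

predicted = {   # the same engagements as simulated by the model
    "Engagement A": 95,
    "Engagement B": 410,
    "Engagement C": 150,
}

# Mean absolute percentage error across engagements: one crude
# measure of how closely the model tracks the historical record.
errors = [
    abs(predicted[name] - observed) / observed
    for name, observed in historical.items()
]
mape = 100 * sum(errors) / len(errors)
print(f"Mean absolute percentage error: {mape:.1f}%")
```

Nothing in this kind of comparison is limited to physics: the same test can, and should, be run against engagements whose outcomes were shaped by human decisions.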
But TRADOC is the Training and Doctrine Command. They are clearly modeling something other than just the “physics-based aspects of combat.”