Date of Award

Spring 5-1-2025

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Applied Engineering and Technology Management

First Advisor

M. Affan Badar

Second Advisor

James McKirahan

Third Advisor

A. Mehran Shahhosseini

Abstract

This study uses fixed and variable video game types to measure pretest sensitization as a proxy for repeated and varied threat test scenarios in system performance testing of air and missile defense systems. Pretest sensitization occurs when repeated exposure to a test condition influences the participant's response. Research shows that air and missile defense development correlates with video games, resulting in similar interfaces and computer operating environments. Department of Defense acquisition test and evaluation results must reflect system performance without prior knowledge of the threat scenarios confounding the results. System performance results inform acquisition decisions such as further funding and development, program cancellation, and fielding. Five video game titles are sampled for each type, and completion times [first completion(s) versus replay(s)] are compared to detect a decrease in completion time with repeated exposure, which would indicate pretest sensitization. The www.HowLongToBeat.com data is updated continuously; the study results represent a snapshot from the time of download. There are 1,128 more first completion(s) than replay(s), and using all completionist data with notes yields a sample size of 1,598. The results confirm pretest sensitization with repeated exposure. Hence, test scenarios must be varied to determine system performance accurately. Artificial intelligence scenarios that adapt based on soldier responses and red air tactics would be optimal. In the absence of varied scenarios, the findings recommend including a disclaimer stating that pretest sensitization bias is present, impacting the accuracy of the results and adding risk to programmatic decisions. This study demonstrates the need to vary test scenarios to provide accurate system performance results that support key programmatic decisions.
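
The comparison of first-completion and replay times can be illustrated with a minimal sketch, not the dissertation's actual analysis: a one-sided Welch's t-test on a downloaded HowLongToBeat.com snapshot, testing whether replay times are shorter than first-completion times. The file name hltb_snapshot.csv and the column names first_completion_hours and replay_hours are hypothetical placeholders for the study's data.

    # Minimal sketch (assumed workflow, not the author's published analysis):
    # one-sided Welch's t-test comparing first-completion times against replay times.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("hltb_snapshot.csv")          # hypothetical export of the downloaded snapshot
    first = df["first_completion_hours"].dropna()  # times for first playthroughs
    replay = df["replay_hours"].dropna()           # times for repeated playthroughs

    # H1: replay times are shorter than first-completion times (pretest sensitization).
    t_stat, p_value = stats.ttest_ind(first, replay, equal_var=False, alternative="greater")
    print(f"Welch's t = {t_stat:.3f}, one-sided p = {p_value:.4f}")

A significant one-sided result under this sketch would correspond to the decrease in completion time with repeated exposure described in the abstract.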
