Journal of Ergonomics

Open Access

ISSN: 2165-7556

Research Article - (2013) Volume 3, Issue 1

Using Physical 3D Objects as Training Media for Military Vehicle Identification

Joseph R Keebler1*, Florian Jentsch2, Lee W. Sciarini3 and Thomas Fincannon2
1Department of Psychology, Wichita State University, USA
2Department of Psychology, University of Central Florida, USA
3Naval Air Warfare Center Training Systems Division, USA
*Corresponding Author: Joseph R Keebler, Department of Psychology, Wichita State University, USA

Abstract

This work investigates the effects of using scaled 3D objects to train for military vehicle recognition, allegiance assessment, and identification. Past research has demonstrated that training with stereoscopic imagery and objects can lead to stronger learning outcomes than non-stereoscopic training modalities. Therefore, this study investigated the effects of scaled physical object training compared to two current training methodologies, namely military issued cards and military vehicle renderings from a training simulation. An experiment was designed using 1:35 physical scale models, military issued cards containing line drawings, and computer-based renderings. Testing procedures consisted of displaying photographs of the military vehicles the participants were trained on. Results show that training with physical models led to significantly stronger performance than training with cards or images. Limitations and future work are discussed.

Keywords: 3D objects; Military vehicle; Graphic Training Aid (GTA)

The problem of misidentification

Perception and identification of military vehicles is an important function in both live combat and unmanned operations. According to the Unmanned Systems Integrated Roadmap Executive Summary, target identification and designation is the second highest priority for future unmanned systems. Identification has been defined as “a decision about an object’s unique identity requiring subjects to discriminate between similar objects that involves generalizations across some shape changes as well as physical translation and rotation” [1]. Within a military context, it is often referred to as combat identification (CID) and has been defined as “the means to positively identify friendly, hostile, and neutral platforms to reduce fratricide due to misidentification, and to maximize effective use of weapons systems” [2]. An individual’s inability to properly identify military vehicles can lead to costly errors. Sometimes referred to as “misidentification,” these errors occur when an object in the world is perceived as a different object [3]. Within the scope of this article, misidentification of military vehicles in the real world can lead to severe consequences, including human injury or loss of life [4,5].

Misidentification within the military domain is often termed fratricide, or “blue-on-blue” [5]. This refers to incidents in which allied combat forces fire upon one another. As an example, it has been suggested that in the first Gulf War, overall U.S. and U.K. casualties and injuries due to friendly fire were as high as 25 to 30 percent [6,7]. Although many factors are involved in making these types of mistakes (e.g., command decisions, communication, and the fog of war), most often the errors are attributable to the individual warfighter. Therefore, it is pertinent to design training [8] that leads to high levels of identification performance in order to reduce misidentification on the battlefield.

One reason that misidentification is so prominent in military operations is the similarity in the visual appearance of military vehicles [9,10]. Military vehicles tend to share structural similarities regardless of their country of origin (Figure 1), and these similarities lead to difficult decision making, especially when individuals are forced to make hasty decisions in the midst of battlefield environments [3,4,9,11]. The majority of military vehicles are so similar in appearance that without knowledge of specific cues, it is nearly impossible to distinguish one from another [4]. Especially from a frontal view, the lack of distinguishable cues can lead to quick and often incorrect decision making [5]. It has also been demonstrated that these vehicles have highly similar shapes, sizes, and spatial relationships among their components, making the process of telling them apart profoundly difficult [4]. Therefore, this paper examined a novel method for training identification expertise, specifically by comparing three-dimensional (3D) objects to standardized training methods.

Figure 1: Frontal views of a Russian T-72 and a U.S. M1A1 Abrams (U.S. DoD Graphic Training Aid 17-2-013).

Classification of objects and expertise

To design appropriate training for identifying similar vehicles, it is important to understand exactly how the human mind categorizes objects. Insightful research on object categorization has been published in the works of Biederman and Shiffrar [12]. Their work demonstrates how humans classify objects based on familiarity and expertise with the objects in question. One of the critical findings of this work has been that humans tend to group objects along three hierarchical levels: a superordinate category (e.g., furniture), a basic category (e.g., table), and a subordinate category (e.g., coffee table) [12]. Most humans can classify at the basic level almost instantaneously [12,13]. What is significant about this research on object categorization is that, with expertise, objects are effectively categorized no longer at the basic level but at the subordinate level. This indicates that training that strives for expertise should more readily allow individuals to arrive at correct identification of subordinate categories. In the domain of military vehicle identification, one may arrive at a superordinate level (military vehicle), a basic level (tank), and a subordinate level (M1A1 Abrams). If individuals can be trained in an “expert-like” fashion, then their ability to quickly and effectively classify the vehicles they encounter will be enhanced. Three-dimensional (3D) objects may provide information that simply cannot be conveyed through other media.

Using 3D training materials

3D objects may be one way to effectively train novices to categorize military vehicles at the subordinate level. It could be argued that 3D objects provide a wealth of visual information (e.g., depth and multiple views) compared to their two-dimensional (2D) counterparts [14,15]. Although research has not been conducted specifically on physical 3D objects, investigations in the domains of education, psychology, medicine, and military training have all found evidence that 3D imagery, when used as a training medium, can produce high performance outcomes [15,16-18]. For example, Kim [16] found that students who were trained using a stereoscopic 3D image of the earth performed significantly better on a test of plate tectonics knowledge. Other studies have demonstrated that the integration of 3D images is promising for enhancing students’ learning of anatomy [19-21]. Additionally, Nicholson et al. [22] found that training using 3D computer-generated models led to significant increases in students’ knowledge of 3D relationships within the ear. Moreover, Hu [21] found that 3D visualizations during surgical planning led to significant decreases in workload and reductions in preparation time. In addition, 3D stereoscopic images improve students’ abilities to visualize, giving even more promise to this type of media [19,22]. Previous work by the authors [3,14,23,24] examining the effects of objects for training military vehicle identification has been consistent in finding performance-enhancing effects of using physical, scaled objects (e.g., die-cast scale models). Moreover, others have found that using real-world objects as props leads to faster acquisition times when learning how to interact with simulation training media [25].

Hypotheses

Based on the need to experimentally establish the effects of using 3D materials (namely, 1:35 scale models) to train military vehicle identification, current methods of training were used as a basis for comparison with the 1:35 scale model training. Specifically, military issued flashcards (GTA 17-2-013) showing multiple canonical 2D views (line drawings) and perspective 2D images from a virtual simulation, the military Deployable Virtual Training Environment (DVTE), were used as comparison media.

Given the extant training literature supporting the use of 3D stereoscopic imagery and its effects in other highly visual domains (e.g., anatomy education), we hypothesized that the use of 3D objects to study a set of military vehicles would lead to powerful training outcomes. Specifically, our main hypothesis stated the following:

Hypothesis HR: Individuals trained to memorize a set of military vehicles using 1:35 scaled replicas were expected to significantly outperform individuals who were instead trained using (HRa) 2D canonical views (in the form of military issued flashcards, GTA 17-2-013) or (HRb) perspective 2D images (from the military Deployable Virtual Training Environment, DVTE). This effect was expected to be consistent across multiple measures of performance (recognition, allegiance categorization, and identification).

Method

Participants

Fifty-five undergraduate students, 36 males and 19 females, were recruited from a large southeastern university. No effects of gender were found in the sample. Participants’ ages ranged from 18 to 33 (M=19), and none of the participants had previous military experience. All participants had 20/20 or corrected-to-20/20 vision. Participants received course credit as compensation for their time.

Stimuli

As described above, three applied training methods were used (Figure 2): (a) for the 2D canonical views, military issued training cards containing black and white line drawings were used; (b) 2D polygon-based, perspective, virtual views were obtained from a military issued training simulation; and (c) the 3D physical models were commercial off-the-shelf (COTS) 1:35 die-cast scale models of military vehicles.

ergonomics-Example-training-stimuli

Figure 2: Example training stimuli. Left: military issued card; Center: DVTE model; Right: 1:35 scale model. All three show the U.S. M1A1 Main Battle Tank.

Canonical 2D views: The military issued cards (GTA 17-2-013) each contained three images of a vehicle, drawn with black lines on a white background. Each image corresponded to one canonical view: front, side, and a perspective view 45 degrees off centerline (between the front and side views).

Perspective virtual 2D views: The 2D, perspective, virtual views were obtained from a military virtual simulation, the Deployable Virtual Training Environment (DVTE) [26]. Because the vehicle models in the virtual simulation were pre-fabricated and we could not change the source code, the vehicles’ colors could not be altered for this study. The color of the vehicles differed somewhat from model to model but was mostly a uniform sand or olive color.

Physical 3D models: For the physical 3D models, we used 1:35 scale models of the same military vehicles shown in the other conditions. These were either die-cast metal models that were purchased pre-assembled or models that were built from 1:35 scale plastic model kits. To remove color as a possible confounding variable, the scale models were all painted with a matte white finish.

Instrumentation

Participants were given a Dell Inspiron laptop with a 16-inch screen at high resolution. The computer was fitted with MediaLab experimental software, which was used to display the images in the testing measure described below and to automatically enter participant responses into a Microsoft Excel workbook. Participants in the DVTE-based training condition also used this laptop to interact with a Microsoft PowerPoint presentation containing the military vehicle images.

Dependent variables/measures

The experiment tested the effects of the three training modalities on three dependent variables (DVs): individuals’ performance in (a) recognizing the vehicles (correctly specifying whether they had seen the vehicles before), (b) assessing the allegiance of the vehicles (knowing whether a vehicle was friendly or enemy), and (c) identifying each vehicle model by name (e.g., M1A1, T-80, Challenger). To measure performance accuracy, a total percentage correct score was derived from a questionnaire filled out by participants as they viewed images of 66 military vehicles presented randomly (Figure 3). Upon completing the ten-second viewing time for each photograph, participants answered whether they had seen the vehicle before (recognition), what the vehicle’s allegiance was (allegiance assessment), and what the vehicle’s name was (identification). Correct answers were summed, and percentages were derived from the total of sixty-six items. The measure consisted of six photographs of each of the seven vehicles studied by the participants and of each of four vehicles that served as distracters.

Figure 3: Example test item. View of the Russian BMP vehicle and test questions.
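As an illustration of this scoring procedure, the sketch below computes the three accuracy scores from a 66-item response log. This is a minimal sketch, assuming a simple tabular layout; the column names and answer-key structure are hypothetical and are not taken from the study materials.

```python
# Minimal scoring sketch (hypothetical data layout, not the study's actual materials).
# Each participant contributes one row per test item: 66 rows = 6 photos x (7 trained
# vehicles + 4 distracters). The 'key_*' columns hold the correct answers.
import pandas as pd

def score_participant(responses: pd.DataFrame) -> dict:
    """Return count and percent correct for recognition, allegiance, and identification."""
    n_items = len(responses)  # expected to be 66
    scores = {}
    for task, answer_col, key_col in [
        ("recognition", "seen_before", "key_seen_before"),
        ("allegiance", "allegiance", "key_allegiance"),
        ("identification", "vehicle_name", "key_vehicle_name"),
    ]:
        correct = int((responses[answer_col] == responses[key_col]).sum())
        scores[task] = {"correct": correct, "percent": 100.0 * correct / n_items}
    return scores
```

Summed counts of this kind (out of 66 items) are the scores analyzed in the Results section; dividing by 66 yields the percentages discussed later (e.g., a recognition count of about 40 corresponds to roughly 60% correct).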

Experimental Design

The experiment used a one-way between-subjects ANOVA design with a three-level independent variable. The between-subjects factor (IV) was training modality, with three levels: 2D canonical views (military issued cards), 2D perspective virtual views (DVTE images), and 3D physical objects (1:35 scale models).

Procedure

Participants were greeted and seated at a workstation in a standard laboratory. They were then given an informed consent form and a biographical data form asking about age, gender, military experience, and visual acuity (corrected and uncorrected). Participants then began training according to their condition.

Training: At the beginning of training, participants were told that they would be studying a set of military vehicles and would later have to know whether they had seen the vehicles before, know the allegiance of the vehicles, and identify the vehicles by name. Each participant was then randomly assigned to one of three conditions (Figure 2): (a) 2D canonical views (training cards), (b) 2D perspective, virtual views (DVTE images), or (c) 3D physical models (1:35 scale models). Participants in the card training condition viewed seven military issued armored vehicle recognition cards from Graphic Training Aid (GTA) 17-2-013. Participants in the DVTE condition viewed polygon representations of the same seven vehicles, and those in the model condition viewed scale models, also of the same seven vehicles.

The 3D models were presented on a table in a frontal view. Participants in both the card and model groups were allowed to touch and/or move the training media if needed, whereas those in the DVTE condition could move through the presentation freely during the allotted time.

In all three conditions, an 8” × 11” information sheet (Figure 4) accompanied each vehicle. These information sheets were adapted from military vehicle training materials developed by Traysys [26]. Each sheet contained the name and allegiance (i.e., friend or enemy) of the vehicle, as well as questions associated with the physical characteristics of the vehicle. A portion of the physical characteristics were global features; that is, they were not unique to the individual vehicles (e.g., wheel count). Conversely, some of these characteristics were considered to be “critical cues” [11], which were distinct visual features of one particular vehicle. Emulating the question and answer format devised by Bramley [27], participants were asked to study each vehicle and attempt to answer the questions on the front of the sheet before looking at the back of the sheet. However, they were able to review each sheet as often as necessary throughout the twenty minutes. After the twenty-minute training, the materials were removed and the testing portion of the experiment began.

Figure 4: Back and front of an example training card for the BMP personnel carrier developed from Bramley’s (1973) training research.

Testing: After training was complete, the Dell laptop was placed in front of the participants and the MediaLab software was started. The MediaLab software, once initiated, provided participants with directions and described the three testing tasks again. The participants then viewed images of 66 military vehicles presented in random order (Figure 3), six photographs of each of the seven vehicles studied by the participants, and six photographs each of four vehicles that served as distracters. Once the presentation began, participants had ten seconds to examine each vehicle photograph, followed by a screen prompting them to answer the questions on their response sheets. Participants were given as much time as needed to fill out the appropriate answers on their questionnaire before progressing to the next item. After all sixty-six photographs were presented and participants were finished filling out the performance measures, they received post-participation information on the experiment and were given course credit.

Results

A one-way ANOVA with planned comparisons (physical models versus cards and simulated vehicle images) was conducted for each of the three performance measures (i.e., recognition, allegiance assessment, and identification). Figure 5 depicts a line graph of the effects of training modality on performance (number of correct items) by task type.

Figure 5: Number of items correct × task type × training condition.
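For readers who wish to reproduce this style of analysis, the sketch below shows how the omnibus one-way ANOVA and the planned contrast (physical models versus the two 2D conditions combined) could be computed with SciPy. This is an illustrative sketch under assumed group labels and score arrays; it is not the authors' actual analysis script.

```python
# Illustrative analysis sketch (assumed data layout; not the authors' analysis code).
import numpy as np
from scipy import stats

def analyze(models: np.ndarray, cards: np.ndarray, images: np.ndarray) -> None:
    """Omnibus one-way ANOVA plus the planned contrast: models vs. (cards + images)."""
    # Omnibus test across the three training conditions.
    f_val, p_val = stats.f_oneway(models, cards, images)
    print(f"ANOVA: F = {f_val:.3f}, p = {p_val:.3f}")

    # Planned contrast with weights (+2, -1, -1), tested against the pooled
    # within-group error term with df = N - k.
    groups = [models, cards, images]
    weights = np.array([2.0, -1.0, -1.0])
    means = np.array([g.mean() for g in groups])
    ns = np.array([len(g) for g in groups])
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_error = int(ns.sum()) - len(groups)
    ms_error = ss_within / df_error
    psi = float((weights * means).sum())                   # contrast estimate
    se = float(np.sqrt(ms_error * (weights ** 2 / ns).sum()))
    t = psi / se
    p = 2 * stats.t.sf(abs(t), df_error)
    print(f"Planned contrast: t({df_error}) = {t:.3f}, p = {p:.3f}")
```

With 55 participants split across three conditions, the pooled error term has 55 − 3 = 52 degrees of freedom, which matches the F(2, 52) and t(52) statistics reported below.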

Recognition: Performance on the recognition task did not differ significantly between conditions, F(2, 52)=1.486, p=.24. Although the difference was not statistically significant, the mean for the physical model condition (M=43.65) was higher than the means for the other two conditions (cards: M=41.61; images: M=40.47).

Allegiance assessment: There was a significant effect of training modality on performance in assessing the allegiance of the vehicles (i.e., friend/enemy differentiation), F(2, 52)=3.715, p<.05, η²=.125. Planned comparisons demonstrated that model training produced higher scores (M=39.15, SD=7.37) than both the card training (M=33.22, SD=8.37) and the simulated vehicle training (M=32.88, SD=8.24), t(52)=2.725, p=.009.

Identification: As predicted, there was a statistically significant difference between the three conditions, F(2, 52)=3.228, p<.05, η²=.11, with scale model training leading to significantly higher means than the other two training modalities. The model condition (M=33.3, SD=7.95) had significantly higher identification scores than the simulated vehicle (M=25.3, SD=12.4) and card training (M=27.22, SD=9.8) conditions, t(52)=2.49, p=.016.

Discussion

The results suggest that there are training benefits to using physical 3D (1:35 scale) models, in agreement with previous research [3,14,23]. The models enhanced novice training on tasks of discriminating and identifying military vehicles above and beyond military issued cards and simulated vehicles. Participants who were trained on the models outperformed the rest of the sample on both the friend/enemy and identification tasks. This indicates that the 1:35 scale 3D training appears to be better than the other training methods. Yet, due to our limited sample size and design, further empirical research should aim to replicate these results.

There are multiple reasons that 3D physical models may be better training devices. As discussed in the introduction, the differences could be related to many possible visual factors associated with 3D media, and may also involve individual difference factors such as greater interest among participants who interacted with physical models rather than pictures, images, or presentation software. Given that the models provide more visual information, the argument could be made that training on models simply creates more powerful referent memories for later use at test. However, this question is outside the scope of this study and would need to be addressed in future research. Another possible reason, and a potential outlet for future research, is that using the physical models may have led to multi-modal learning. Touching and/or picking up the models could have provided kinesthetic and proprioceptive information that later helped participants remember the different vehicles more readily.

Physical models may be a sufficient, cost-effective, and practical way to familiarize trainees with real-world objects, but more research is needed to confirm this effect. In a short amount of time, trainees in the 1:35 scale model condition were able to distinguish vehicles at a subordinate level of classification, indicating performance that is closer to expertise than in the other conditions. Future research will have to focus on exactly how to integrate physical training devices into training protocols. Given some of the effects found in this study, and previous research by Biederman and Shiffrar [12], it is possible that models may be better suited to bring novices to expert levels quickly compared to current methods. Given the limited sample and design of this study, more research should be conducted to advance this notion of training with 1:35 scale models.

Another important outcome is the finding that relatively short training times (approximately 20 minutes) can lead to acceptable performance outcomes. Even though the 1:35 scale models led to significantly stronger results, the means across all three groups demonstrated that the trainees could recognize at least 60% of the vehicles, knew the allegiance of at least 50% of the vehicles, and could identify at least 30% of the vehicles they were tested on. Overall, this implies that training is an important factor in learning to identify military vehicles. Within the scope of this study, it appears that 1:35 scale models can lead to better performance outcomes, but this conclusion is limited by our sample size and study design. Further development of this type of training could potentially lead to fast acquisition times for trainees studying objects in multiple domains.

Future Work

This research is an incremental step in a new direction for training object identification tasks. From the results found here and from previous literature showing expertise development over very short time intervals (e.g., chicken sexing [12]), we believe that training protocols could be developed using scale models to quickly train identification skills. The development of training paradigms, the creation of reliable and valid measures, and the further integration of 3D attributes into the science of simulation and training must all be considered in future work in this area. It is not enough to be able to simply recognize an object. For proper identification to occur, trainees must quickly, reliably, and efficiently match their perceptions to their memory.

Future research will have to investigate which aspects of physical scale models produced these training outcomes. Investigations using modern stereoscopic 3D computing systems (e.g., NVIDIA GeForce) could determine whether stereoscopy alone has the same effect as actual objects. Future endeavors in this domain must also examine the impact on short- and long-term memory when individuals physically handle an object rather than only view it. Future research will also need to focus on separating the haptic effects of using physical scale models from the stereoscopic visual properties of such media.

Finally, future research will have to measure a sample that is closer to the population of interest (i.e., infantry soldiers). The present sample was limited in that undergraduates were measured as a proxy for soldiers and infantry personnel. Future research will be much more generalizable if a sample from the relevant population is acquired.

References

  1. Palmeri TJ, Gauthier I (2004) Visual object understanding. Nat Rev Neurosci 5: 291-303.
  2. Andrews DH, Herz RP, Wolf MB (2010) Human factors issues in combat identification: Human Factors in Defence. UK: Ashgate
  3. Keebler JR, Sciarini L, Jentsch F, Nicholson D, Fincannon T (2010) A cognitive basis for friend-foe misidentification of vehicles in combat.
  4. Andrews DH, Herz RP, Wolf MB (2009) Human Factors Issues in Combat Identification. Burlington, VT: Ashgate, 113-128.
  5. Briggs RW, Goldberg JH (1995) Battlefield recognition of armored vehicles. Journal of the Human Factors and Ergonomics Society 37: 596-610.
  6. Regan G (1995) Blue on blue: A history of friendly fire. New York, USA: Avon Books.
  7. McCarthy R (2003) Friendly fire kills two UK vehicle crew. The Guardian.
  8. Salas E, Cannon-Bowers JA (2001) The science of training: a decade of progress. Annu Rev Psychol 52: 471-499.
  9. O’Kane BL, Biederman I, Cooper EE, Nystrom B (1997) An account of object identification confusions. J Exp Psychol Appl 3: 21-41.
  10. Hah S, Reisweber D, Picart J, Zwick H (1997) Target recognition performance following whole-views, part-views, and both-views training: Engineering Psychology and Cognitive Ergonomics. Job Design and Product Design 2: 135-141.
  11. Biederman I, Shiffrar MM (1987) Sexing day-old chicks: A case study and expert systems analysis of a difficult perceptual-learning task. Journal of Experimental Psychology: Learning, Memory, and Cognition 13: 640-645.
  12. Tanaka JW, Taylor M (1991) Object categories and expertise: Is the basic level in the eye of the beholder? Cognitive Psychology 23: 457-482.
  13. Biederman I (1987) Recognition by components: A theory of human image understanding. Psychol Rev 94: 115-147.
  14. Keebler JR, Harper-Sciarini M, Curtis M, Schuster D, Jentsch F, et al. (2007) Effects of 2- dimensional and 3-dimensional media exposure training in a tank recognition task. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. 51: 1593-1597.
  15. Kim P (2006) Effects of 3D Virtual Reality of Plate Tectonics on Fifth Grade Students’ Achievement and Attitude Toward Science. Interactive Learning Environments 14: 25-34
  16. Chang KI, Bowyer KW, Flynn PJ (2003) Face recognition using 2D and 3D facial data. ACM Workshop on Multimodal User Authentication 25-32.
  17. Friedman A, Spetch ML, Ferrey A (2005) Recognition by humans and pigeons of novel views of 3-D objects and their photographs. J Exp Psychol Gen 134: 149-162.
  18. Pierno AC, Caria A, Castiello U (2004) Comparing effects of 2D and 3D visual cues during aurally aided target acquisition. Hum Factors 46: 728-737.
  19. Brenton H, Hernandez J, Bello F, Strutton P, Purkayastha S, et al. (2005) Using multimedia and Web3D to enhance anatomy teaching. Comput Educ 49: 32-53
  20. Hu Y (2005) The role of three-dimensional visualization in surgical planning on treating lung cancer. Conf Proc IEEE Eng Med Biol Soc 1: 646-649.
  21. Nicholson DT, Chalk C, Funnel WRJ, Daniel SJ (2008) The evidence for virtual reality. Medical Education 42: 224.
  22. Marks SC Jr (2000) The role of three-dimensional information in health care and medical education: the implications for anatomy and dissection. Clin Anat 13: 448-452.
  23. Keebler JR, Sciarini L, Fincannon T, Jentsch F, Nicholson D (2008) Effects of Training Modality on Target Identification in a Virtual Vehicle Recognition Task. Proceedings of the Human Factors and Ergonomics Society 52nd annual meeting. New York, USA.
  24. Hinckley K, Pausch R, Goble JC, Kassell NF (1994) Passive real world interface props for neurosurgical visualization. Proceedings of the ACM CHI ’94 Conference of Human Factors and Computing Systems, New York, USA.
  25. Bramley P (1973) Equipment recognition training- learning whole or individual features? Occupational Psychology 47: 131-139.
Citation: Keebler JR (2013) Using Physical 3D Objects as Training Media for Military Vehicle Identification. J Ergonomics 3:112.

Copyright: © 2013 Keebler JR. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.