There are significant human factors challenges unique to drivers who are remotely in charge of a vehicle. Teleoperators must build a mental model of the remote environment from monitor views and video feeds. In Study 1, we employed a qualitative verbal elicitation task to uncover, in its most basic form, what people ‘see’ in a remote scene when they are not constrained by rigid questioning. This enabled the construction of a taxonomy of Situation Awareness (SA) in remote driving contexts. Teleoperator SA may also be improved by the provision of additional information.
Study 2 investigated whether the presence of a rear-view mirror and the presence of audio delivered a more immersive experience, enhancing SA. We presented 16 videos counterbalanced across four conditions in a 2×2 factorial design (n = 94), asking questions at the end of each video designed to measure each level of SA (perception, comprehension, and prediction). We found a significant main effect of rear-view mirror (F(1, 93) = 10.70, p < .02, ηp² = 0.01), no main effect of audio (F < 1), and no interaction (F < 1), showing that performance was better when there was no rear view in remote driving scenes.
Results from both studies suggest that acquiring SA is a flexible and fluctuating process in which comprehension and prediction are combined globally rather than serially (Endsley, 2000, 2017; Jones & Endsley, 1996). We suggest that existing theories of SA need to be applied more sensitively to remote driving contexts such as the teleoperation of autonomous vehicles.