Supporting Multitracking Performance With Novel Visual, Auditory, and Tactile Displays

IEEE Transactions on Human-Machine Systems (2020)

Citations: 4 | Views: 8
Abstract
This article investigates performance in multiple concurrent tracking tasks with multisensory displays in a driving context. In many work domains, such as driving, aviation, process control, and medicine, humans perform “tracking” tasks that involve observing continuous variables and providing control inputs to achieve and maintain satisfactory levels in those variables. Performance in multiple concurrent tracking tasks (“multitracking”) was studied in driving-like scenarios that challenged participants to track established targets for lateral (lane position) and longitudinal (speed) variables. Novel speed displays were developed to engage nontraditional sensory modalities (e.g., ambient-visual, auditory, or tactile), with relative speed conveyed through simple or multidimensional signal encoding methods. Participants’ speed-tracking and lane-tracking performances were measured concurrently and compared across display configurations within subjects. Results showed lane-tracking performance to be unaffected by display configuration; however, speed-tracking and overall performance improved significantly with the novel displays compared to the baseline configuration. Redundantly encoded auditory displays best supported multitracking performance, but redundantly encoded tactile displays were less beneficial than simple encodings. These results provide insight into human information processing of semicontinuous multisensory displays and can inform display design in driving and other visually demanding work contexts.
Keywords
Driving performance, multisensory display, multimodal information processing, tracking