Issues in Human-Automation Interaction Modeling: Presumptive Aspects of Frameworks of Types and Levels of Automation

Cited by: 74
Author
Kaber, David B. [1 ]
Affiliation
[1] North Carolina State Univ, Ind & Syst Engn, Raleigh, NC 27695 USA
Funding
U.S. National Aeronautics and Space Administration (NASA)
Keywords
level of automation; function allocation; human-automation interaction; human information processing models; satisficing behavior;
DOI
10.1177/1555343417737203
Chinese Library Classification
T [Industrial Technology]
Subject Classification Code
08
Abstract
The current cognitive engineering literature includes a broad range of models of human-automation interaction (HAI) in complex systems. Some of these models characterize types and levels of automation (LOAs) and relate different LOAs to implications for human performance, workload, and situation awareness as bases for systems design. However, some have suggested that the LOAs approach has overlooked key issues that need to be considered during the design process. Others are simply unsatisfied with the current state of the art in modeling HAI. In this paper, I argue that abandoning an existing framework with some utility for design makes little sense unless the cognitive engineering community can provide the broader design community with other sound alternatives. On this basis, I summarize issues with existing definitions of LOAs, including (a) presumptions of human behavior with automation and (b) imprecision in defining behavioral constructs for assessment of automation. I propose steps for advances in LOA frameworks. I provide evidence of the need for precision in defining behavior in use of automation as well as a need for descriptive models of human performance with LOAs. I also provide a survey of other classes of HAI models, offering insights into ways to achieve descriptive formulations of taxonomies of LOAs to support conceptual and detailed systems design. The ultimate objective of this line of research is reliable models for predicting human and system performance to serve as a basis for design.
Pages: 7-24
Page count: 18