Human-AI interaction and ethics of AI: how well are we following the guidelines

Cited by: 0
Authors
Li, Fan [1 ]
Lu, Yuan [1 ]
Affiliations
[1] Eindhoven Univ Technol, Dept Ind Design, Eindhoven, Netherlands
Keywords
Human-AI (HAI) interaction; Ethics by design; User experience (UX); Guidelines for HAI interaction; Ethics Guidelines for Trustworthy AI (EGTAI)
DOI
10.1145/3565698.3565773
CLC number
TP3 [Computing technology, computer technology]
Discipline code
0812
Abstract
Despite the benefits of AI-enabled solutions across industrial sectors, their technology acceptance remains challenging. Acceptance of AI technologies depends on both Human-AI (HAI) interaction and the ethics of AI. HAI interaction significantly affects the acceptance of AI-enabled solutions, and many guidelines have been developed to support HAI interaction design, including Microsoft's Guidelines for Human-AI Interaction. In parallel, many ethics-by-design guidelines have been developed, such as the Ethics Guidelines for Trustworthy AI (EGTAI) issued by the European Commission. However, there has been little discussion of the possible relations between these two sets of guidelines for developing AI-enabled solutions. This study aims to analyze how current AI-enabled solutions comply with both sets of guidelines using a case study approach. To this end, we conducted a co-evaluation workshop investigating how two existing AI-enabled apps, Strava and CoronaMelder, comply with the two sets of guidelines. In the workshop, four participants with prior knowledge of designing with AI analyzed the two cases by identifying whether each guideline was met. The workshop results indicate that HAI interactions designed according to the HAI interaction guidelines do not necessarily align with the EGTAI guidelines, and vice versa.
Pages: 96-104
Number of pages: 9