Trust in AI and Its Role in the Acceptance of AI Technologies

Cited by: 160
Authors
Choung, Hyesun [1 ]
David, Prabu [2 ,3 ]
Ross, Arun [4 ]
Affiliations
[1] Michigan State Univ, Coll Commun Arts & Sci, E Lansing, MI 48824 USA
[2] Michigan State Univ, Dept Media & Informat, E Lansing, MI 48824 USA
[3] Michigan State Univ, Dept Commun, E Lansing, MI 48824 USA
[4] Michigan State Univ, Dept Comp Sci & Engn, E Lansing, MI 48824 USA
Keywords
anthropomorphism increases trust; model; automation; meta-analysis; assistants; extension; matter; TAM
DOI
10.1080/10447318.2022.2050543
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology]
Discipline code
0812
Abstract
As AI-enhanced technologies become common in a variety of domains, there is an increasing need to define and examine the trust that users have in such technologies. Given the progress in the development of AI, a correspondingly sophisticated understanding of trust in the technology is required. This paper addresses this need by explaining the role of trust in the intention to use AI technologies. Study 1 examined the role of trust in the use of AI voice assistants based on survey responses from college students. A path analysis confirmed that trust had a significant effect on the intention to use AI, which operated through perceived usefulness and participants' attitude toward voice assistants. In Study 2, using data from a representative sample of the U.S. population, different dimensions of trust were examined using exploratory factor analysis, which yielded two dimensions: human-like trust and functionality trust. The results of the path analyses from Study 1 were replicated in Study 2, confirming the indirect effect of trust and the effects of perceived usefulness, ease of use, and attitude on intention to use. Further, both dimensions of trust shared a similar pattern of effects within the model, with functionality-related trust exhibiting a greater total impact on usage intention than human-like trust. Overall, the role of trust in the acceptance of AI technologies was significant across both studies. This research contributes to the advancement and application of the TAM in AI-related applications and offers a multidimensional measure of trust that can be utilized in the future study of trustworthy AI.
Pages: 1727-1739 (13 pages)