Three Paradoxes to Reconcile to Promote Safe, Fair, and Trustworthy AI in Education

Cited by: 0

Authors
Slama, Rachel [1 ]
Toutziaridi, Amalia Christina [2 ]
Reich, Justin [2 ]
Affiliations
[1] RAND Corp, Boston, MA 02116 USA
[2] MIT, 77 Massachusetts Ave, Cambridge, MA 02139 USA
Funding
U.S. National Science Foundation;
Keywords
Education; Human-centered design; Responsible AI; Teacher Perspectives; Tutoring; BELIEFS;
DOI
10.1145/3657604.3664658
Chinese Library Classification
TP39 [Computer Applications];
Discipline Codes
081203; 0835;
Abstract
Incorporating recordings of teacher-student conversations into the training of LLMs has the potential to improve AI tools. Although AI developers are encouraged to put "humans in the loop" of their AI safety protocols, educators do not typically drive the data collection or design and development processes underpinning new technologies. To gather insight into privacy concerns, the adequacy of safety procedures, and potential benefits of recording and aggregating data at scale to inform more intelligent tutors, we interviewed a pilot sample of teachers and administrators using a scenario-based, semi-structured interview protocol. Our preliminary findings reveal three "paradoxes" for the field to resolve to promote safe, fair, and trustworthy AI. We conclude with recommendations for education stakeholders to reconcile these paradoxes and advance the science of learning.
Pages: 295-299
Page count: 5