Three Paradoxes to Reconcile to Promote Safe, Fair, and Trustworthy AI in Education

Cited: 0
Authors
Slama, Rachel [1 ]
Toutziaridi, Amalia Christina [2 ]
Reich, Justin [2 ]
Affiliations
[1] RAND Corp, Boston, MA 02116 USA
[2] MIT, 77 Massachusetts Ave, Cambridge, MA 02139 USA
Funding
U.S. National Science Foundation
Keywords
Education; Human-centered design; Responsible AI; Teacher Perspectives; Tutoring; Beliefs
DOI
10.1145/3657604.3664658
Chinese Library Classification (CLC)
TP39 [Applications of Computers]
Subject Classification Codes
081203; 0835
Abstract
Incorporating recordings of teacher-student conversations into the training of LLMs has the potential to improve AI tools. Although AI developers are encouraged to put "humans in the loop" of their AI safety protocols, educators do not typically drive the data collection or design and development processes underpinning new technologies. To gather insight into privacy concerns, the adequacy of safety procedures, and potential benefits of recording and aggregating data at scale to inform more intelligent tutors, we interviewed a pilot sample of teachers and administrators using a scenario-based, semi-structured interview protocol. Our preliminary findings reveal three "paradoxes" for the field to resolve to promote safe, fair, and trustworthy AI. We conclude with recommendations for education stakeholders to reconcile these paradoxes and advance the science of learning.
Pages: 295-299
Page count: 5