Explainable AI to understand study interest of engineering students

Cited by: 3
Authors
Ghosh, Sourajit [1]
Kamal, Md. Sarwar [2]
Chowdhury, Linkon [3]
Neogi, Biswarup [1]
Dey, Nilanjan [4]
Sherratt, Robert Simon [5]
Affiliations
[1] JIS Univ, Comp Sci & Engn, Kolkata, India
[2] Univ Technol Sydney, Fac Engn & Informat Technol, Sydney, Australia
[3] East Delta Univ Technol, Comp Sci & Engn, Chattogram, Bangladesh
[4] Techno Int New Town, Comp Sci & Engn, Kolkata, India
[5] Univ Reading, Dept Biomed Engn, Reading, England
Keywords
Explainable AI; Belief rule base; SP-LIME; PCA;
DOI
10.1007/s10639-023-11943-x
Chinese Library Classification
G40 [Education];
Discipline codes
040101; 120403;
Abstract
Students are the future of a nation. Personalizing students' interest in higher education courses is one of the biggest challenges in higher education. Various AI and machine learning (ML) approaches have been applied to study student behaviour, and existing algorithms are used to identify features in fields such as behavioural analysis, economic analysis, image processing, and personalized medicine. However, because most AI algorithms are black-box models, there are major concerns about the interpretability and understandability of the decisions they make. In this study, explainable AI (XAI) is used to open up this black-box nature: XAI identifies engineering students' study interests, and a belief rule base (BRB) together with SP-LIME explains which attributes are critical to their studies. Principal component analysis (PCA) is also used for feature selection to identify student cohorts, and clustering the cohorts helps to analyse the influential features behind the choice of engineering discipline. The results show that a number of identifiable factors influence students' study and, ultimately, the future of a nation.
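For orientation, the following minimal Python sketch illustrates the kind of pipeline the abstract describes: PCA-based feature reduction, k-means clustering of student cohorts, and a local LIME explanation of a black-box classifier. It is not the authors' implementation; the data, attribute names (math_score, peer_influence, etc.), and the choice of a random-forest model are assumptions, and the BRB and SP-LIME steps are only indicated in a comment. It assumes scikit-learn and the lime package are installed.

# Illustrative sketch only: hypothetical survey data, not the study's dataset or code.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["math_score", "programming_hours", "family_income",
                 "prior_physics", "peer_influence", "career_expectation"]
X = rng.normal(size=(300, len(feature_names)))      # hypothetical student attributes
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # hypothetical "study interest" label

# 1) PCA reduces the attribute space before cohort analysis.
X_pca = PCA(n_components=3).fit_transform(X)

# 2) k-means groups students into cohorts on the reduced features.
cohorts = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_pca)

# 3) A black-box classifier predicts the interest label from the original attributes.
model = RandomForestClassifier(random_state=0).fit(X, y)

# 4) LIME explains one student's prediction; SP-LIME would additionally select a
#    representative set of such local explanations, and a belief rule base (BRB)
#    would encode the resulting attribute rules (both omitted here).
explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["not interested", "interested"],
                                 discretize_continuous=True)
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print("Cohort of student 0:", cohorts[0])
print(exp.as_list())   # (attribute condition, weight) pairs for this prediction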
Pages: 4657-4672
Page count: 16