Preprocessing and Artificial Intelligence for Increasing Explainability in Mental Health

Cited by: 3
Authors
Angerri, X. [1 ,2 ]
Gibert, Karina [1 ]
Affiliations
[1] Univ Politecn Cataluna, BarcelonaTech, Intelligent Data Sci & Artificial Intelligence Res, Barcelona 08034, Spain
[2] Univ Politecn Cataluna, Groundfloor Campus Nord, Nexus 2 Bldg, C Jordi Giro, Barcelona 08034, Spain
Keywords
Data science; intelligent decision support; health; COVID19; mental health; traffic light panels; preprocessing; explainable AI;
DOI
10.1142/S0218213023400110
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper shows the added value of using existing domain-specific knowledge to generate new derived variables that complement a target dataset, and the benefits of including these new variables in subsequent data analysis. The main contribution is a methodology for generating these variables as part of preprocessing, under a double approach: creating 2nd-generation knowledge-driven variables, which capture the criteria experts use when reasoning in the field, or 3rd-generation data-driven indicators, created by clustering original variables. Data mining and artificial intelligence techniques such as clustering or traffic-light panels help to obtain successful results. Some results of the INSESS-COVID19 project are presented: basic descriptive analysis gives simple results that, although useful to support basic policy-making, especially in health, are enriched into a much broader global perspective once derived variables are included. When 2nd-generation variables are available and can be introduced into the method for creating 3rd-generation data, added value is obtained both from the basic analysis and from building new data-driven indicators.
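The double approach in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the synthetic survey scores, the expert weighting rule, the traffic-light thresholds, and the use of plain k-means are all illustrative assumptions standing in for the paper's actual knowledge-driven criteria and clustering method.

```python
# Minimal sketch of 2nd-generation (knowledge-driven) vs 3rd-generation
# (data-driven) derived variables, on synthetic survey data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# "Original variables": two hypothetical mental-health survey scores in [0, 10).
X = rng.uniform(0, 10, size=(200, 2))

# 2nd-generation knowledge-driven variable: an expert rule combining the
# original variables (the 0.6/0.4 weights are illustrative, not from the paper).
risk_score = 0.6 * X[:, 0] + 0.4 * X[:, 1]

# Traffic-light discretization of the expert score: 0=green, 1=amber, 2=red
# (equal-width thresholds chosen here for illustration only).
lights = np.digitize(risk_score, bins=[3.33, 6.66])

# 3rd-generation data-driven indicator: cluster the original variables;
# plain k-means stands in for whatever clustering technique the paper uses.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
indicator = kmeans.labels_

print(np.bincount(lights), np.bincount(indicator))
```

Both derived columns (`lights` and `indicator`) could then be appended to the original dataset before running further descriptive analysis, which is the preprocessing role the abstract describes.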
Pages: 36
Related Papers (50 total)
  • [11] Artificial Intelligence for Mental Health and Mental Illnesses: an Overview
    Sarah Graham
    Colin Depp
    Ellen E. Lee
    Camille Nebeker
    Xin Tu
    Ho-Cheol Kim
    Dilip V. Jeste
    Current Psychiatry Reports, 2019, 21
  • [12] Explainability, Public Reason, and Medical Artificial Intelligence
    Da Silva, Michael
    ETHICAL THEORY AND MORAL PRACTICE, 2023, 26 (05) : 743 - 762
  • [13] Artificial intelligence explainability: the technical and ethical dimensions
    McDermid, John A.
    Jia, Yan
    Porter, Zoe
    Habli, Ibrahim
    PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY A-MATHEMATICAL PHYSICAL AND ENGINEERING SCIENCES, 2021, 379 (2207):
  • [14] Explainability for artificial intelligence in healthcare: a multidisciplinary perspective
    Amann, Julia
    Blasimme, Alessandro
    Vayena, Effy
    Frey, Dietmar
    Madai, Vince I.
    BMC MEDICAL INFORMATICS AND DECISION MAKING, 2020, 20 (01)
  • [15] Artificial intelligence in pharmacovigilance: A regulatory perspective on explainability
    Pinheiro, Luis Correia
    Kurz, Xavier
    PHARMACOEPIDEMIOLOGY AND DRUG SAFETY, 2022, 31 (12) : 1308 - 1310
  • [17] Explainability as a User Requirement for Artificial Intelligence Systems
    Jovanovic, Mladan
    Schmitz, Mia
    COMPUTER, 2022, 55 (02) : 90 - 94
  • [19] War, emotions, mental health, and artificial intelligence
    Cosic, Kresimir
    Kopilas, Vanja
    Jovanovic, Tanja
    FRONTIERS IN PSYCHOLOGY, 2024, 15
  • [20] Editorial: Artificial intelligence and mental health care
    Simoes, Jorge P.
    ten Klooster, Peter
    Neff, Patrick K.
    Niemann, Uli
    Kraiss, Jannis
    FRONTIERS IN PUBLIC HEALTH, 2024, 12