Preprocessing and Artificial Intelligence for Increasing Explainability in Mental Health

Cited by: 3
Authors
Angerri, X. [1,2]
Gibert, Karina [1 ]
Affiliations
[1] Univ Politecn Cataluna, BarcelonaTech, Intelligent Data Sci & Artificial Intelligence Res, Barcelona 08034, Spain
[2] Univ Politecn Cataluna, Ground Floor, Campus Nord, Nexus 2 Bldg, C Jordi Giro, Barcelona 08034, Spain
Keywords
Data science; intelligent decision support; health; COVID19; mental health; traffic light panels; preprocessing; explainable AI;
DOI
10.1142/S0218213023400110
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405
Abstract
This paper shows the added value of using existing domain knowledge to generate new derived variables that complement a target dataset, and the benefits of including these new variables in further data analysis methods. The main contribution is a methodology for generating these variables as part of preprocessing, under a double approach: creating 2nd-generation knowledge-driven variables, which capture the criteria experts use to reason about the field, or 3rd-generation data-driven indicators, which are created by clustering the original variables. Data Mining and Artificial Intelligence techniques such as Clustering and Traffic Light Panels help to obtain successful results. Some results of the INSESS-COVID19 project are presented: basic descriptive analysis gives simple results which, although useful to support basic policy-making, especially in health, are enriched into a much more complete global perspective once the derived variables are included. When 2nd-generation variables are available and can be introduced in the method for creating 3rd-generation data, added value is obtained both in the basic analysis and in the construction of the new data-driven indicators.
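The abstract only outlines the two kinds of derived variables; a minimal sketch of what such a preprocessing step could look like is given below. It is an illustration under assumed conditions, not the INSESS-COVID19 pipeline: all variable names (sleep_hours, anxiety_score, expert_risk_flag, etc.), the expert rule, and the three-cluster choice are hypothetical.

```python
# Minimal sketch of the two derivation approaches (hypothetical schema):
# 1) a 2nd-generation knowledge-driven variable encoding an assumed expert rule,
# 2) a 3rd-generation data-driven indicator obtained by clustering original variables,
# 3) a simple traffic-light recoding of that indicator.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "sleep_hours": rng.normal(7, 1.5, 200).clip(3, 11),
    "anxiety_score": rng.integers(0, 21, 200),        # assumed 0-20 survey scale
    "social_contacts_week": rng.integers(0, 15, 200),
})

# 2nd generation: knowledge-driven variable from an (assumed) expert criterion.
df["expert_risk_flag"] = (
    (df["anxiety_score"] >= 10) & (df["sleep_hours"] < 6)
).astype(int)

# 3rd generation: data-driven indicator built by clustering the original variables.
X = StandardScaler().fit_transform(
    df[["sleep_hours", "anxiety_score", "social_contacts_week"]]
)
df["cluster_indicator"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Traffic-light-panel style recoding: order clusters by mean anxiety and colour them.
order = df.groupby("cluster_indicator")["anxiety_score"].mean().sort_values().index
colour = {c: light for c, light in zip(order, ["green", "yellow", "red"])}
df["traffic_light"] = df["cluster_indicator"].map(colour)

print(df[["expert_risk_flag", "cluster_indicator", "traffic_light"]].head())
```

The derived columns would then be appended to the target dataset and fed into the subsequent analysis alongside the original variables.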
Pages: 36