Regulatory solutions to alleviate the risks of generative AI models in qualitative research

Cited by: 2
Authors
Pillai, Vishnu Sivarudran [1 ]
Matus, Kira [2 ]
Affiliations
[1] GITAM, Kautilya Sch Publ Policy, Hyderabad 502329, Telangana, India
[2] Hong Kong Univ Sci & Technol, Div Publ Policy, Div Environm & Sustainabil, Clearwater Bay, Hong Kong, Peoples R China
Keywords
Generative AI; regulation; qualitative research; public policy; risk analysis
DOI
10.1080/17516234.2024.2399098
Chinese Library Classification
K9 [Geography]
Subject Classification Code
0705
Abstract
Generative AI models, with their enhanced capacity for conversation, will soon find widespread application in qualitative research, especially in the disciplines of social science and public policy. Although researchers guarantee the confidentiality of the data, the tools they use for data analysis are largely their own choice and remain unregulated, raising serious ethical concerns. Prior research has established the potentially hazardous effects of such transformative architectures on research integrity and ethics; however, the interventions required to alleviate the risks affecting the 3Rs (Reviewers, Researchers, and Research Respondents) have not yet been studied. We first analysed the potential risks associated with Large Language Models (such as GPTs) by examining scientific publications. We then conducted a 'risk workshop' with four qualitative researchers, followed by open-ended interviews with seven individuals from the 3R impact groups, to develop various risk scenarios. We compared these risks against the AI-related policies of the European Union, Singapore, the United States, the United Kingdom and China to identify regulatory gaps. The research output illustrates potential regulatory interventions on a continuum, with nodality-based soft laws at one end and more extensive regulatory interventions (hard laws) at the other, for various LLM applications in qualitative research.
Pages: 24
Related Papers
50 records in total
  • [41] Addressing the risks of generative AI for the judiciary: The accountability framework(s) under the EU AI Act
    Carnat, Irina
    COMPUTER LAW & SECURITY REVIEW, 2024, 55
  • [42] Learning to Make Rare and Complex Diagnoses With Generative AI Assistance: Qualitative Study of Popular Large Language Models
    Abdullahi, Tassallah
    Singh, Ritambhara
    Eickhoff, Carsten
    JMIR MEDICAL EDUCATION, 2024, 10
  • [43] Generative AI smartphones: From entertainment to potentially serious risks in radiology
    Duron, Loic
    Soyer, Philippe
    Lecler, Augustin
    DIAGNOSTIC AND INTERVENTIONAL IMAGING, 2025, 106 (02) : 76 - 78
  • [44] A commentary on sexting, sextortion, and generative AI: Risks, deception, and digital vulnerability
    Pater, Jessica
    Mcdaniel, Brandon T.
    Nova, Fayika Farhat
    Drouin, Michelle
    O'Connor, Kimberly
    Zytko, Douglas
    FAMILY RELATIONS, 2025
  • [45] Open generative AI models are a way forward for science
    Spirling, Arthur
    NATURE, 2023, 616 (7957) : 413 - 413
  • [46] Large Language Models and Generative AI, Oh My!
    Zyda, Michael
    COMPUTER, 2024, 57 (03) : 127 - 132
  • [47] Multi-Modal Generative AI with Foundation Models
    Liu, Ziwei
    PROCEEDINGS OF THE 1ST WORKSHOP ON LARGE GENERATIVE MODELS MEET MULTIMODAL APPLICATIONS, LGM3A 2023, 2023: 5 - 5
  • [48] Accelerating breast MRI acquisition with generative AI models
    Okolie, Augustine
    Dirrichs, Timm
    Huck, Luisa Charlotte
    Nebelung, Sven
    Arasteh, Soroosh Tayebi
    Nolte, Teresa
    Han, Tianyu
    Kuhl, Christiane Katharina
    Truhn, Daniel
    EUROPEAN RADIOLOGY, 2025, 35 (02) : 1092 - 1100
  • [49] Regulating ChatGPT and other Large Generative AI Models
    Hacker, Philipp
    Engel, Andreas
    Mauer, Marco
    PROCEEDINGS OF THE 6TH ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, FACCT 2023, 2023: 1112 - 1123
  • [50] Using Generative AI Models to Support Cybersecurity Analysts
    Balogh, Stefan
    Mlyncek, Marek
    Vranak, Oliver
    Zajac, Pavol
    ELECTRONICS, 2024, 13 (23)