The Ethical AI Lawyer: What is Required of Lawyers When They Use Automated Systems?

Cited: 4
Authors
Rogers, Justine [1 ]
Bell, Felicity [1 ]
Affiliations
[1] Univ New South Wales, Kensington, NSW, Australia
Source
LAW TECHNOLOGY AND HUMANS | 2019, Vol. 1, No. 1
Keywords
Lawyers; legal practice; professional ethics; Artificial Intelligence; AI; DEFINING ISSUES TEST; LEGAL PROFESSION; TECHNOLOGY; SERVICES; FUTURE; VALUES; FIRMS; WORK
DOI
10.5204/lthj.v1i0.1324
CLC Number
D9 [Law]; DF [Law]
Subject Classification Code
0301
Abstract
This article focuses on individual lawyers' responsible use of artificial intelligence (AI) in their practice. More specifically, it examines the ways in which a lawyer's ethical capabilities and motivations are tested by the rapid growth of automated systems, both to identify the ethical risks posed by AI tools in legal services and to uncover what is required of lawyers when they use this technology. To do so, we use psychologist James Rest's Four-Component Model of Morality (FCM), which represents the elements necessary for lawyers to engage in professional conduct when utilising AI. We examine the issues associated with automation that most seriously challenge each component in context, as well as the skills and resolve lawyers need to adhere to their ethical duties. Importantly, this approach is grounded in social psychology. That is, by looking at human 'thinking and doing' (i.e., lawyers' motivations and capacities when using AI), it offers a different, complementary perspective to the typical legislative approach, in which the law is analysed for regulatory gaps.
Pages: 80-99
Page count: 20