Could artificial intelligence write mental health nursing care plans?

Cited by: 25
Authors
Woodnutt, Samuel [1 ]
Allen, Chris [1 ]
Snowden, Jasmine [1 ]
Flynn, Matt [1 ]
Hall, Simon [1 ]
Libberton, Paula [1 ]
Purvis, Francesca [1 ]
Affiliations
[1] Univ Southampton, Sch Hlth Sci, Southampton, Hants, England
Keywords
art of nursing; nursing role; quality of care; self-harm; therapeutic relationships
DOI
10.1111/jpm.12965
Chinese Library Classification
R47 [Nursing]
Discipline Code
1011
Abstract
Background: Artificial intelligence (AI) is being increasingly used and discussed in care contexts. ChatGPT has gained significant attention in popular and scientific literature, although how ChatGPT can be used in care delivery is not yet known.
Aims: To use artificial intelligence (ChatGPT) to create a mental health nursing care plan and evaluate the quality of the output against the authors' clinical experience and existing guidance.
Materials & Methods: Basic text commands were input into ChatGPT about a fictitious person called 'Emily' who presents with self-injurious behaviour. The output from ChatGPT was then evaluated against the authors' clinical experience and current (national) care guidance.
Results: ChatGPT was able to provide a care plan that incorporated some principles of dialectical behaviour therapy, but the output had significant errors and limitations, and thus there is a reasonable likelihood of harm if used in this way.
Discussion: AI use is increasing in direct-care contexts through the use of chatbots or other means. However, AI can inhibit clinician-to-care-recipient engagement, 'recycle' existing stigma, and introduce error, which may diminish the ability of care to uphold personhood and therefore lead to significant avoidable harms.
Conclusion: Use of AI in this context should be avoided until policy and guidance can safeguard the wellbeing of care recipients and the sophistication of AI output has increased. Given ChatGPT's ability to provide superficially reasonable outputs, there is a risk that errors may go unnoticed, increasing the likelihood of patient harms. Further research evaluating AI output is needed to consider how AI may be used safely in care delivery.
Pages: 79-86
Page count: 8