Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum
- John W. Ayers, Qualcomm Institute, University of California San Diego, La Jolla
- Adam Poliak, Department of Computer Science, Bryn Mawr College, Bryn Mawr, Pennsylvania
- Mark Dredze, Department of Computer Science, Johns Hopkins University, Baltimore, Maryland
- Eric C. Leas, Qualcomm Institute, University of California San Diego, La Jolla
- Zechariah Zhu, Qualcomm Institute, University of California San Diego, La Jolla
- Jessica B. Kelley, Human Longevity, La Jolla, California
- Dennis J. Faix, Naval Health Research Center, Navy, San Diego, California
- Aaron M. Goodman, Division of Blood and Marrow Transplantation, Department of Medicine, University of California San Diego, La Jolla
- Christopher A. Longhurst, Department of Biomedical Informatics, University of California San Diego, La Jolla
- Michael Hogarth, Department of Biomedical Informatics, University of California San Diego, La Jolla
- Davey M. Smith, Division of Infectious Diseases and Global Public Health, Department of Medicine, University of California San Diego, La Jolla
Description
Importance: The rapid expansion of virtual health care has caused a surge in patient messages concomitant with more work and burnout among health care professionals. Artificial intelligence (AI) assistants could potentially aid in creating answers to patient questions by drafting responses that could be reviewed by clinicians.

Objective: To evaluate the ability of an AI chatbot assistant (ChatGPT), released in November 2022, to provide quality and empathetic responses to patient questions.

Design, Setting, and Participants: In this cross-sectional study, a public and nonidentifiable database of questions from a public social media forum (Reddit's r/AskDocs) was used to randomly draw 195 exchanges from October 2022 in which a verified physician responded to a public question. Chatbot responses were generated by entering the original question into a fresh session (without prior questions having been asked in the session) on December 22 and 23, 2022. The original question along with anonymized and randomly ordered physician and chatbot responses were evaluated in triplicate by a team of licensed health care professionals. Evaluators chose "which response was better" and judged both "the quality of information provided" (very poor, poor, acceptable, good, or very good) and "the empathy or bedside manner provided" (not empathetic, slightly empathetic, moderately empathetic, empathetic, or very empathetic). Mean outcomes were ordered on a 1 to 5 scale and compared between the chatbot and physicians.

Results: Of the 195 questions and responses, evaluators preferred chatbot responses to physician responses in 78.6% (95% CI, 75.0%-81.8%) of the 585 evaluations. Mean (IQR) physician responses were significantly shorter than chatbot responses (52 [17-62] words vs 211 [168-245] words; t = 25.4; P < .001). Chatbot responses were rated of significantly higher quality than physician responses (t = 13.3; P < .001). The proportion of responses rated as good or very good quality (≥4), for instance, was higher for the chatbot than for physicians (chatbot: 78.5%, 95% CI, 72.3%-84.1%; physicians: 22.1%, 95% CI, 16.4%-28.2%). This amounted to a 3.6 times higher prevalence of good or very good quality responses for the chatbot. Chatbot responses were also rated significantly more empathetic than physician responses (t = 18.9; P < .001). The proportion of responses rated empathetic or very empathetic (≥4) was higher for the chatbot than for physicians (chatbot: 45.1%, 95% CI, 38.5%-51.8%; physicians: 4.6%, 95% CI, 2.1%-7.7%). This amounted to a 9.8 times higher prevalence of empathetic or very empathetic responses for the chatbot.

Conclusions: In this cross-sectional study, a chatbot generated quality and empathetic responses to patient questions posed in an online forum. Further exploration of this technology is warranted in clinical settings, such as using chatbots to draft responses that physicians could then edit. Randomized trials could further assess whether using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes.
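For clarity, the evaluation count and the prevalence ratios reported in the Results follow directly from the figures above; the only assumption in this worked check is rounding to one decimal place:

\[
\text{evaluations} = 195 \ \text{exchanges} \times 3 \ \text{evaluators} = 585
\]
\[
\text{PR}_{\text{quality}} = \frac{78.5\%}{22.1\%} \approx 3.6, \qquad \text{PR}_{\text{empathy}} = \frac{45.1\%}{4.6\%} \approx 9.8
\]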
Journal
- JAMA Internal Medicine 183 (6), 589-, 2023-06-01
- American Medical Association (AMA)