Letter to Editor

ChatGPT 4 may inappropriately answer questions about managing critically ill patients

Volume: 16 Number: 4 January 1, 2026

Abstract

Intensive care patients present extremely complex pathologies, which makes their treatment management difficult. In this study, we aimed to evaluate ChatGPT 4, a large language model, with respect to its knowledge of critically ill patient management. Scenarios involving mechanical ventilation were created by an intensivist, an anesthesiologist, and a neurologist working in intensive care units, covering head trauma, pulmonary embolism, myocardial infarction, chronic obstructive pulmonary disease, chronic kidney disease, infective endocarditis, septic shock, and status epilepticus. Questions about patient management in these scenarios were posed to ChatGPT 4. Although ChatGPT 4 answered most of the questions correctly, its responses still require expert medical oversight.

Keywords

Supporting Institution

This research received no external funding.

Ethical Statement

No copyright clearance is required for the ChatGPT 4 outputs used in this study. According to OpenAI's Content Policy and Terms of Use, users own the outputs they create with its large language models, including text and images. We obtained ChatGPT 4's responses to the questions we designed; the full set of answers is available at: https://github.com/barisaakan/llm-patients

References

  1. Lu Y, Wu H, Qi S, Cheng K. Artificial Intelligence in Intensive Care Medicine: Toward a ChatGPT/GPT-4 Way? Ann Biomed Eng. 2023;51:1898-903.
  2. https://github.com/barisaakan/llm-patients
  3. Monjezi M, Jamaati H. The effects of pressure- versus volume-controlled ventilation on ventilator work of breathing. Biomed Eng Online. 2020;19:72.
  4. Chiumello D, Meli A, Pozzi T, Lucenteforte M, Simili P, Sterchele E, Coppola S. Different Inspiratory Flow Waveform during Volume-Controlled Ventilation in ARDS Patients. J Clin Med. 2021;10:4756.
  5. McDonald EG, Aggrey G, Aslan AT, Casias M, Cortes-Penfield N, Dong MQD et al. Guidelines for Diagnosis and Management of Infective Endocarditis in Adults: A WikiGuidelines Group Consensus Statement. JAMA Netw Open. 2023;6:e2326366.
  6. Zhu X, Wang Z, Ferrari MW, Ferrari-Kuehne K, Hsi et al. Management of anticoagulation in patients with infective endocarditis. Thromb Res. 2023;229:15-25.
  7. Ibrahim F, Murr NI. Lafora Disease. [Updated 2024]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2025.
  8. Parihar R, Ganesh S. Lafora progressive myoclonus epilepsy: Disease mechanism and therapeutic attempts. J Biosci. 2024;49:22.

Details

Primary Language

English

Subjects

Intensive Care

Journal Section

Letter to Editor

Publication Date

January 1, 2026

Submission Date

August 11, 2025

Acceptance Date

October 13, 2025

Published in Issue

Year 2025 Volume: 16 Number: 4

APA
Gokcinar, D., Akan, B., Kurşun, O., Gökçınar, A. H., & Akan, B. (2026). ChatGPT 4 may inappropriately answer questions about managing critically ill patients. Turkish Journal of Clinics and Laboratory, 16(4), 688-692. https://doi.org/10.18663/tjcl.1762561