TY - JOUR
T1 - Doctors identify hemorrhage better during chart review when assisted by artificial intelligence
AU - Laursen, Martin Sundahl
AU - Pedersen, Jannik Skyttegaard
AU - Hansen, Rasmus Søgaard
AU - Savarimuthu, Thiusius R.
AU - Lynggaard, Rasmus Bank
AU - Vinholt, Pernille
PY - 2023/8
Y1 - 2023/8
N2 - Objectives: This study evaluated whether medical doctors could identify more hemorrhage events during chart review in a clinical setting when assisted by an artificial intelligence (AI) model, and assessed medical doctors’ perception of using the AI model. Methods: To develop the AI model, sentences from 900 electronic health records were labeled as positive or negative for hemorrhage and categorized into one of twelve anatomical locations. The AI model was evaluated on a test cohort of 566 admissions. Using eye-tracking technology, we investigated medical doctors’ reading workflow during manual chart review. Moreover, we performed a clinical use study in which medical doctors read two admissions with and without AI assistance to evaluate their performance when using, and their perception of, the AI model. Results: The AI model had a sensitivity of 93.7% and a specificity of 98.1% on the test cohort. In the use studies, we found that medical doctors missed more than 33% of relevant sentences when doing chart review without AI assistance. Hemorrhage events described in paragraphs were overlooked more often than bullet-pointed hemorrhage mentions. With AI-assisted chart review, medical doctors identified 48 and 49 percentage points more hemorrhage events than without assistance in the two admissions, and they were generally positive towards using the AI model as a supporting tool. Conclusions: Medical doctors identified more hemorrhage events with AI-assisted chart review, and they were generally positive towards using the AI model.
AB - Objectives: This study evaluated whether medical doctors could identify more hemorrhage events during chart review in a clinical setting when assisted by an artificial intelligence (AI) model, and assessed medical doctors’ perception of using the AI model. Methods: To develop the AI model, sentences from 900 electronic health records were labeled as positive or negative for hemorrhage and categorized into one of twelve anatomical locations. The AI model was evaluated on a test cohort of 566 admissions. Using eye-tracking technology, we investigated medical doctors’ reading workflow during manual chart review. Moreover, we performed a clinical use study in which medical doctors read two admissions with and without AI assistance to evaluate their performance when using, and their perception of, the AI model. Results: The AI model had a sensitivity of 93.7% and a specificity of 98.1% on the test cohort. In the use studies, we found that medical doctors missed more than 33% of relevant sentences when doing chart review without AI assistance. Hemorrhage events described in paragraphs were overlooked more often than bullet-pointed hemorrhage mentions. With AI-assisted chart review, medical doctors identified 48 and 49 percentage points more hemorrhage events than without assistance in the two admissions, and they were generally positive towards using the AI model as a supporting tool. Conclusions: Medical doctors identified more hemorrhage events with AI-assisted chart review, and they were generally positive towards using the AI model.
KW - artificial intelligence
KW - decision support systems
KW - electronic health records
KW - hemorrhage
KW - natural language processing
KW - Physicians
KW - Artificial Intelligence
KW - Humans
KW - Hospitalization
KW - Hemorrhage/diagnosis
KW - Electronic Health Records
U2 - 10.1055/a-2121-8380
DO - 10.1055/a-2121-8380
M3 - Journal article
C2 - 37399838
SN - 1869-0327
VL - 14
SP - 743
EP - 751
JO - Applied Clinical Informatics
JF - Applied Clinical Informatics
IS - 4
ER -