AI Will Not Replace Human Doctors, Latest Study Suggests: Radiologists Achieve Higher Accuracy in Medical Image Interpretation
Artificial intelligence will not be replacing doctors anytime soon. That is the takeaway from a new study published in the BMJ, in which UK researchers report that an AI tool failed a standard qualifying examination for radiologists.
The failure suggests that AI is still far from being able to perform the complex work of doctors. Nevertheless, AI already has many applications in medicine, such as helping to interpret MRIs and X-rays.
Man Versus Machine
In the study, researchers compared the performance of an artificial intelligence (AI) tool with that of human radiologists in interpreting medical images. The study included 26 participants, all radiologists who had passed the FRCR, or Fellowship of the Royal College of Radiologists, examination, a required qualification for becoming a consultant radiologist in the United Kingdom.
The researchers modelled the mock exams on one of the three modules of the FRCR exam to evaluate the accuracy and speed of the radiologists and the AI tool at rapid image interpretation. The human participants were between 31 and 40 years old, had completed their training, and had passed the FRCR exam within the previous year.
The researchers conducted the study to evaluate the potential use of the AI tool in radiology and to determine how it compares with human radiologists in terms of accuracy and speed. The study involved ten mock exams, each containing 30 radiographs of the same level of difficulty as the actual FRCR exam.
Candidates had 35 minutes to correctly interpret at least 90% of the images in order to pass each mock exam. The AI tool had been trained to evaluate radiographs of the chest and bones for various problems, such as fractures, swelling, and collapsed lungs.
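To make the scoring concrete, here is a minimal sketch of the pass rule described above, assuming a simple per-exam tally; the function name and example numbers are hypothetical and not taken from the BMJ study.

```python
# Minimal sketch of the mock exam's pass rule as described above; the
# function name and data values are hypothetical, not from the BMJ study.

PASS_THRESHOLD = 0.90   # at least 90% of images correctly interpreted
EXAM_IMAGES = 30        # radiographs per mock exam
TIME_LIMIT_MIN = 35     # minutes allowed per exam

def passes_mock_exam(correct_reports: int, minutes_taken: float) -> bool:
    """Return True if a candidate clears the 90% accuracy bar in time."""
    accuracy = correct_reports / EXAM_IMAGES
    return accuracy >= PASS_THRESHOLD and minutes_taken <= TIME_LIMIT_MIN

# Example: 27 of 30 correct in 33 minutes is exactly the 90% bar.
print(passes_mock_exam(27, 33))  # True
print(passes_mock_exam(26, 33))  # False
```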
The Results
The study’s results show that the AI tool passed only two of the ten mock FRCR exams, achieving an average accuracy just below 80% once uninterpretable images were excluded.
By comparison, the average radiologist passed four of the ten mock exams, with an average accuracy of around 85%. These results suggest that while the AI tool could interpret many medical images accurately, human radiologists still achieved somewhat higher accuracy overall.
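To illustrate the exclusion mentioned above, the following sketch shows how removing uninterpretable images from the denominator changes a candidate's accuracy; the counts below are invented for illustration and are not figures from the study.

```python
# Hedged illustration of how excluding uninterpretable images changes the
# accuracy denominator; the numbers below are made up, not study data.

def accuracy(correct: int, total: int, excluded: int = 0) -> float:
    """Accuracy over the images actually scored (total minus exclusions)."""
    scored = total - excluded
    return correct / scored

# Hypothetical exam of 30 images where 4 were deemed uninterpretable:
print(round(accuracy(correct=21, total=30), 3))              # 0.7   (all 30 counted)
print(round(accuracy(correct=21, total=30, excluded=4), 3))  # 0.808 (26 scored)
```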
The researchers suggested that further review and training would be necessary for the AI, particularly for images it deemed uninterpretable, such as radiographs of the axial skeleton.
The researchers also noted that the use of AI in healthcare has the potential to improve diagnostic accuracy and efficiency. Still, it is essential to familiarize doctors and the public with AI's limitations and to make those limitations more evident.
The study was one of the more thorough comparisons between radiologists and AI to date, providing a wide range of scores and outcomes for analysis. However, the researchers noted that the mock exams were neither invigilated nor timed, so the radiologists may have felt less pressure to perform well than they would in a real examination.
Conclusion
Even with all the technology available today, AI still has a long way to go before it can match human doctors. But the potential of the technology is vast, and further progress seems only a matter of time.