Artificial intelligence (AI) can play an important supporting role in healthcare diagnostics. However, the way AI systems reach their conclusions is often opaque, which limits their usefulness in the clinic. In a new publication, researchers from Radboudumc show how they can make the workings of AI transparent and let AI diagnose more like a doctor does. In doing so, the researchers aim to increase the relevance of AI systems in clinical practice.
In practice, doctors spend a lot of time examining X-rays or biopsies to detect abnormalities. We previously wrote about AI systems that can analyse diagnostic medical images. These include CAD4COVID for assessing chest X-rays of patients with a COVID-19 infection, and a system for diagnosing 50 eye diseases developed by DeepMind, part of Google's parent company Alphabet, in collaboration with the UK's Moorfields Eye Hospital.
Deep learning
Such systems usually use deep learning. A deep learning algorithm consists of a network of artificial 'neurons', each of which learns to recognise one aspect of the images it is shown. The neurons are organised in layers, and each layer passes its result on to the next. The system is self-learning: as it sees more images, it recognises patterns, and thus abnormalities, better and better. By learning which diseases these features belong to, the system can make diagnoses.
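The layered flow described above can be sketched in a few lines of Python. This is a minimal illustration only: the weights are random placeholders rather than trained values, and the layer sizes and the two output classes ("healthy" vs "abnormal") are assumptions for the sake of the example, not details from the Radboudumc system.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple activation: neurons only pass on positive signals.
    return np.maximum(0.0, x)

def make_layer(n_in, n_out):
    # Placeholder weights; a real system tunes these by training on many images.
    return rng.normal(scale=0.1, size=(n_in, n_out))

# A tiny "image" of 16 pixel values flows through three layers,
# ending in two scores, e.g. for "healthy" vs "abnormal".
layers = [make_layer(16, 8), make_layer(8, 4), make_layer(4, 2)]

x = rng.normal(size=16)
for w in layers:
    x = relu(x @ w)  # each layer hands its result to the next layer

scores = np.exp(x) / np.exp(x).sum()  # softmax: scores sum to 1
print(scores.shape)
```

During training, the weights in each layer are adjusted so that the final scores match the known diagnoses of example images; that adjustment step is omitted here.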
Using such deep learning systems offers important advantages. In some cases, for example, the system is as good as, or even better than, experienced doctors at recognising abnormalities. In addition, AI algorithms can complete their training much faster than humans and can work 24 hours a day without breaks. They can therefore significantly support doctors and free up time for tasks that add more value.
'Deep learning systems are lazy'
At the same time, deep learning systems also have their limitations. For instance, AI systems are often not transparent about how they analyse images. In addition, the Radboudumc researchers call the systems 'lazy': an AI algorithm only looks at what it needs for a particular diagnosis and then stops. In practice, this means that an AI algorithm will not identify all abnormalities in a scan, even if its diagnosis is correct. A doctor, by contrast, looks at the bigger picture.
To reduce these limitations and make AI systems more attractive for clinical practice, Cristina González-Gonzalo developed a two-pronged improvement of diagnostic AI. González-Gonzalo is a PhD candidate in the A-eye Research group and the Diagnostic Image Analysis Group at Radboudumc. She based her work on eye scans showing retinal abnormalities, specifically diabetic retinopathy and age-related macular degeneration: abnormalities that both a doctor and an AI system can recognise relatively easily.
Forced further search
At the same time, these are abnormalities that often occur in groups. With conventional AI algorithms, this would mean that the AI would identify only one or a few spots and then stop the analysis. González-Gonzalo developed a process in which the AI system goes through the image over and over again, learning to ignore the spots it has already found and to focus its attention on new ones each time. The system is thus forced to keep searching. Thanks to González-Gonzalo's process, the AI system also shows which parts of the eye scan it has flagged as suspicious, making the diagnostic process more transparent.
In practice, the AI system does this by 'covering' the abnormalities it has detected with healthy tissue taken from around the abnormality. In the next round of analysis, this prevents the system from recognising those abnormalities again, so it only finds new ones. By combining the results of all rounds of analysis, a final diagnosis can be made.
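The cover-and-rescan loop can be sketched as follows. This is a toy illustration of the idea, not the paper's method: the "detector" here simply picks the brightest pixel above a threshold, standing in for the deep network, and the covering step replaces a small patch with the median of the surrounding values.

```python
import numpy as np

def toy_detect(image, threshold=0.5):
    """Stand-in detector: return the most suspicious pixel, or None."""
    idx = np.unravel_index(np.argmax(image), image.shape)
    return idx if image[idx] > threshold else None

def cover_with_surroundings(image, pos, radius=1):
    """Replace a small patch around pos with the median of its border,
    mimicking 'covering' an abnormality with nearby healthy tissue."""
    r, c = pos
    r0, r1 = max(r - radius, 0), min(r + radius + 1, image.shape[0])
    c0, c1 = max(c - radius, 0), min(c + radius + 1, image.shape[1])
    patch = image[r0:r1, c0:c1].copy()
    border = np.concatenate([patch[0], patch[-1], patch[:, 0], patch[:, -1]])
    image[r0:r1, c0:c1] = np.median(border)

# A toy "scan": mostly healthy tissue (0.1) with two bright abnormalities.
scan = np.full((8, 8), 0.1)
scan[2, 2] = 0.9
scan[5, 6] = 0.8

found = []
for _ in range(10):                        # repeated rounds of analysis
    hit = toy_detect(scan)
    if hit is None:                        # nothing suspicious left: stop
        break
    found.append(hit)
    cover_with_surroundings(scan, hit)     # cover it, forcing a new search

print(len(found))  # one abnormality found per round, until none remain
```

Combining the locations collected in `found` across rounds corresponds to combining the results of all rounds of analysis into the final diagnosis.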
More information can be found in the article 'Iterative augmentation of visual evidence for weakly-supervised lesion localisation in deep interpretability frameworks: application to colour fundus images', published by IEEE, by Cristina González-Gonzalo, Bart Liefers, Bram van Ginneken and Clara I. Sánchez.
Author: Wouter Hoeffnagel