Researchers at Helmholtz Zentrum München, together with the LMU University Eye Hospital Munich and the Technical University of Munich (TUM), created a novel deep learning method that makes automated screenings for eye diseases such as diabetic retinopathy more efficient. By reducing the amount of expensive annotated image data required to train the algorithm, the method is attractive for clinics. In the use case of diabetic retinopathy, the researchers developed a screening algorithm that needs 75 percent less annotated data while matching the diagnostic performance of human experts.
In recent years, clinics have taken their first steps toward artificial intelligence and deep learning to automate medical screenings. However, training a deep learning algorithm for accurate screening and diagnosis requires large sets of annotated data, and clinics often struggle with the cost of expert labeling. Researchers have therefore been looking for ways to reduce the need for costly annotated data while maintaining the algorithm's high performance.
Use case: diabetic retinopathy
Diabetic retinopathy is a diabetes-related eye disease that damages the retina and can ultimately lead to blindness. Measuring retinal thickness is an important procedure for diagnosing the disease in at-risk patients. To do so, most clinics take photographs of the fundus—the surface of the back of the eye. To automate the screening of these images, clinics have started applying deep learning algorithms. These algorithms require large sets of fundus images with expensive annotations in order to be trained to screen correctly.
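Approaches that cut annotation cost typically pretrain a model on a proxy target that comes free with the data, then fine-tune on a small expert-labeled set. The following is a purely illustrative numpy sketch of that two-stage idea; the linear models and synthetic data stand in for the real networks and images, and none of the names come from the study itself:

```python
import numpy as np

# Toy illustration (hypothetical sketch, not the authors' actual pipeline):
# stage 1 learns from a proxy target that needs no manual labels;
# stage 2 fine-tunes a small classifier head on few expert labels.
rng = np.random.default_rng(0)
n_unlabeled, n_labeled, d = 500, 40, 16

# Hidden structure linking "images" to both the proxy signal and the disease.
w_hidden = rng.normal(size=d)

# Stage 1: large unlabeled set with a freely available proxy target
# (standing in for a measurement that comes with the data, not from experts).
X_u = rng.normal(size=(n_unlabeled, d))
proxy = X_u @ w_hidden + 0.1 * rng.normal(size=n_unlabeled)
w_enc, *_ = np.linalg.lstsq(X_u, proxy, rcond=None)  # "pretrained encoder"

# Stage 2: small expert-labeled set; reuse the pretrained feature and fit
# only a tiny logistic head by gradient descent.
X_l = rng.normal(size=(n_labeled, d))
y = (X_l @ w_hidden > 0).astype(float)      # scarce expert labels
f = X_l @ w_enc                             # transferred feature
a, b = 0.0, 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(a * f + b)))  # logistic prediction
    a -= 0.5 * np.mean((p - y) * f)
    b -= 0.5 * np.mean(p - y)

accuracy = float(np.mean(((a * f + b) > 0) == y))
print(f"training accuracy with only {n_labeled} labels: {accuracy:.2f}")
```

Because the proxy target reflects the same underlying structure as the disease label, the pretrained feature transfers, and the classifier head needs far fewer annotated examples than training from scratch would.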
The LMU University Eye Hospital Munich holds a population-scale data set of more than 120,000 unannotated fundus images with co-registered OCT scans. OCT (optical coherence tomography) provides precise measurements of retinal thickness but is not available in every eye care center. The LMU provided their