Synthesizing images of the eye fundus is a challenging task that has previously been approached by formulating complex models of the anatomy of the eye, from which new images can be generated by sampling a suitable parameter space. In this work, we propose a method that learns to synthesize eye fundus images directly from data. To this end, we pair true eye fundus images with their respective vessel trees by means of a vessel segmentation technique. These pairs are then used to learn a mapping from a binary vessel tree to a new retinal image. For this purpose, we use a recent image-to-image translation technique based on the idea of adversarial learning. Experimental results show that the original and the generated images are visually different in terms of their global appearance, in spite of sharing the same vessel tree. Additionally, a quantitative quality analysis of the synthetic retinal images confirms that the generated images retain much of the quality of the true image set.
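As a rough illustration of the adversarial image-to-image translation objective (this is a minimal NumPy sketch, not the actual training code: the discriminator scores `d_real` and `d_fake`, the weighting `lam`, and the random images are all hypothetical placeholders), the generator is typically trained to fool a discriminator while an L1 term keeps the synthesized image close to the real one paired with the same vessel tree:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (not the paper's data): a "real" fundus image and a
# generated one, both conditioned on the same binary vessel tree.
real_image = rng.random((64, 64, 3))
fake_image = rng.random((64, 64, 3))

# Hypothetical discriminator outputs: probability that each image is real.
d_real = 0.9   # discriminator score on the true image
d_fake = 0.3   # discriminator score on the generated image
eps = 1e-12    # numerical safety for the logarithms

# Discriminator loss: binary cross-entropy, pushing real -> 1 and fake -> 0.
d_loss = -(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

# Generator loss: an adversarial term that rewards fooling the
# discriminator, plus an L1 reconstruction term weighted by an
# assumed hyperparameter `lam`.
lam = 100.0
g_adv = -np.log(d_fake + eps)
g_l1 = np.abs(real_image - fake_image).mean()
g_loss = g_adv + lam * g_l1
```

In practice both networks are convolutional and the two losses are minimized alternately by gradient descent; the sketch above only shows how the scalar objectives are assembled.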
The diagram below summarizes our approach:
After training, we can use the Generator to synthesize an eye fundus image given a vessel network. Here are some of our results:
The first row shows real test images, the second row displays the corresponding test vessel networks, and the last row shows the retinas generated from those vessel networks. The last column shows a failure case, in which the input vessel network is almost nonexistent. In such cases, the method cannot reliably generate consistent texture, although it can still synthesize a meaningful field of view.