Joint segmentation of the optic disc and cup in optic nerve head images confronts two critical problems. One is that the spatial layout of the vessels in optic nerve head images varies across subjects. The other is that the landmarks for the optic cup boundaries are spatially sparse and occur at small spatial scales. To address these two problems, we propose a spatial-aware joint segmentation method that explicitly considers the spatial locations of pixels and learns multi-scale, spatially dense features. We formulate the joint segmentation task from a probabilistic perspective and derive a spatial-aware maximum conditional probability framework together with the corresponding error function. Accordingly, we provide an end-to-end solution by designing a spatial-aware neural network. It consists of three modules: an atrous CNN module that extracts spatially dense features, a pyramid filtering module that produces spatial-aware multi-scale features, and a spatial-aware segmentation module that predicts pixel labels. We validate the state-of-the-art performance of our spatial-aware segmentation method on two public datasets, ORIGA and DRISHTI. Based on the segmentation masks, we quantify the cup-to-disc ratios and apply them to glaucoma screening. A high correlation between the cup-to-disc ratios and glaucoma risk is validated on the ORIGA dataset.
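The abstract mentions quantifying cup-to-disc values from the predicted segmentation masks. A common clinical definition is the vertical cup-to-disc ratio (the vertical diameter of the cup divided by that of the disc); the sketch below, which assumes this definition and binary NumPy masks, illustrates how such a value could be computed (the paper's exact measure may differ):

```python
import numpy as np

def vertical_cdr(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from two binary masks of the same image.

    Assumes the standard clinical definition: the ratio of the vertical
    extents (diameters) of the cup and disc regions.
    """
    def vertical_extent(mask: np.ndarray) -> int:
        # Indices of rows that contain at least one foreground pixel.
        rows = np.flatnonzero(mask.any(axis=1))
        return 0 if rows.size == 0 else int(rows[-1] - rows[0] + 1)

    disc_h = vertical_extent(disc_mask > 0)
    if disc_h == 0:
        raise ValueError("disc mask is empty")
    return vertical_extent(cup_mask > 0) / disc_h

# Toy example: the disc spans 10 rows, the cup spans 5 rows -> CDR = 0.5.
disc = np.zeros((20, 20), dtype=np.uint8)
disc[5:15, 5:15] = 1
cup = np.zeros((20, 20), dtype=np.uint8)
cup[7:12, 7:13] = 1
print(vertical_cdr(disc, cup))  # 0.5
```

Thresholding the resulting ratio (larger values indicating higher glaucoma risk) is one way such a measure feeds into screening.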
A1 Journal article – refereed
Qing Liu, Xiaopeng Hong, Shuo Li, Zailiang Chen, Guoying Zhao, Beiji Zou, A spatial-aware joint optic disc and cup segmentation method, Neurocomputing, Volume 359, 2019, Pages 285-297, ISSN 0925-2312, https://doi.org/10.1016/j.neucom.2019.05.039