Deep Distance Regression


Paper

The paper ‘Segmentation of Nuclei in Histopathology Images by Deep Regression of the Distance Map’ by Peter Naylor, Thomas Walter, Fabien Reyal and Marick Laé has been published in IEEE Transactions on Medical Imaging, 2018.

Data publicly available

The data used for the study can be found here. The code can be found here. The results involving the comparison of the different methods can be found here.

Summary of the paper

We focused on the problem of touching nuclei in histopathology images, i.e. instance segmentation applied to nuclei. We transform the standard pixel-wise classification problem into a regression problem: instead of predicting categorical variables, we infer, for each pixel, its distance to the nearest nucleus border, see figure 1. To recover the segmentation we apply the following post-processing scheme: two local maxima of the predicted distance map are merged if and only if there exists a path between them along which the predicted distance does not drop by more than a given threshold (a small sketch of this post-processing is given after the figures). This type of post-processing is ideally suited to the task. We believe that formulating the problem as a distance regression helps the model learn the concept of a cell better. Indeed, the main difference with the standard classification arises in areas where the model is uncertain and tries to maximise per-pixel probabilities. This situation does not arise with the distance regression, which therefore produces smooth, well-separated segmentation results, as one can see in figures 2 and 3.

Figure 1: Distance Regression Estimation

B is the space of binary images and Bd is the space of their distance transforms.

Figure 2: Comparison of different methods, some examples

Figure 3: Comparison of different methods, boxplots
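As a rough illustration of the pipeline described above, and not the code released with the paper, the sketch below builds the distance-map regression target from a binary nucleus mask and recovers nucleus instances from a predicted distance map with an h-maxima marker selection followed by a watershed; function names and the threshold value h are illustrative choices.

```python
# Minimal sketch, assuming scipy and scikit-image; not the authors' released code.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.measure import label
from skimage.morphology import h_maxima
from skimage.segmentation import watershed

def distance_target(binary_mask, cap=None):
    """Regression target: distance from each nucleus pixel to the nearest background pixel."""
    dist = distance_transform_edt(binary_mask)
    return dist if cap is None else np.minimum(dist, cap)

def instances_from_distance(pred_dist, h=2.0):
    """Merge maxima whose separating path drops by less than h, then split nuclei by watershed."""
    markers = label(h_maxima(pred_dist, h))            # maxima with dynamics >= h become seeds
    return watershed(-pred_dist, markers, mask=pred_dist > 0)
```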

U-Net: refers to the method in the paper: Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. “U-net: Convolutional networks for biomedical image segmentation.” International Conference on Medical image computing and computer-assisted intervention. Springer, Cham, 2015.

FCN: refers to the method in the paper: Long, Jonathan, Evan Shelhamer, and Trevor Darrell. “Fully convolutional networks for semantic segmentation.” Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.

Mask R-CNN: refers to the method in the paper: He, Kaiming, et al. “Mask R-CNN.” Computer Vision (ICCV), 2017 IEEE International Conference on. IEEE, 2017.

[32]: refers to the method in the paper: N. Kumar, R. Verma, S. Sharma, S. Bhargava et al. “A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology,” IEEE Transactions on Medical Imaging, 2017.


Large scale machine learning


You can find all the desired information on the main organizer's GitHub page: Chloe-Agathe Azencott. Practical session of 4 days given to M2 students at Mines ParisTech.


Introduction to machine learning


You can find all the desired information on the lecturer's GitHub page: Chloe-Agathe Azencott. I gave a 1h30 research talk together with Joseph Boyd. This course was given at CentraleSupélec. The material related to my part of the research talk can be found here.


ISBI 2017


Paper

The conference paper ‘Nuclei Segmentation in Histopathology Images Using Deep Neural Networks’ by Peter Naylor, Marick Laé, Fabien Reyal and Thomas Walter has been published in the proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), pp. 933-936.

Data publicly available

The data used for the study can be found here. The data comes as a zip file with many subdirectories. Each patient's histopathology data can be found under “Slide_id” and the ground truth under “GT_id”, where id is the patient's id. The histopathology images are standard RGB files, whereas the ground truth is stored in the ITK-SNAP format, nii.gz. One can use the nibabel package to open such images, as in the sketch below.
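A minimal loading sketch, assuming nibabel and scikit-image are installed: the folder layout follows the description above, but the file names inside each folder are hypothetical and should be adapted to the actual archive.

```python
# Minimal loading sketch; file names inside the folders are hypothetical.
import numpy as np
import nibabel as nib
from skimage.io import imread

patient_id = "01"
rgb = imread(f"Slide_{patient_id}/slide.png")        # standard RGB histopathology image (hypothetical file name)
gt = nib.load(f"GT_{patient_id}/gt.nii.gz")          # ITK-SNAP ground truth (hypothetical file name)
mask = np.asarray(gt.dataobj).squeeze()              # annotation stored as a NIfTI volume
print(rgb.shape, mask.shape)
```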

Summary of the paper

We note three contributions, among them the publicly available annotated dataset described above (we used ITK-SNAP for the annotation) and a comparison of several deep neural network architectures for nuclei segmentation (see Table 1 and Figure 2).

Figure 1: Random samples from the dataset

Table 1: Metric comparison of methods

Figure 2: Visual comparison of methods

FCN: refers to the method in the paper: Long, Jonathan, Evan Shelhamer, and Trevor Darrell. “Fully convolutional networks for semantic segmentation.” Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.

PangNet: refers to the method in the paper: Pang Baochuan, Zhang Yi, Chen Qianqing, Gao Zhifan, Peng Qinmu and You Xinge, “Cell nucleus segmentation in color histopathological imagery using convolutional networks,” in Pattern Recognition (CCPR), 2010 Chinese Conference on. IEEE, 2010, pp. 1–5.

DeconvNet: refers to the method in the paper: Hyeonwoo Noh, Seunghoon Hong, Bohyung Han, “Learning deconvolution network for semantic segmentation,” arXiv preprint arXiv:1505.04366, 2015.

ITK-SNAP: refers to the software developed in the paper: Paul A. Yushkevich, Joseph Piven, Heather Cody Hazlett, Rachel Gimpel Smith, Sean Ho, James C. Gee, and Guido Gerig, “User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability,” Neuroimage, vol. 31, no. 3, pp. 1116–1128, 2006.


Bio-info, SVM and Graph-kernels


You can find all the desired information on my colleague's web page: Benoit Playe. Course given at CentraleSupélec for M2 students.

