The analysis and annotation of tissue data from histopathological whole slide images is a complex task that requires domain knowledge when labeling is done by hand. Designing robust computer vision architectures to automate this task can be tedious because of the lack of labeled data for training and testing, further complicated by notable differences in the number of training examples per class. In this thesis, methods to overcome these constraints are developed, either by making use of architectures pre-trained with current state-of-the-art convolutional networks from the ILSVRC classification task, or by using unsupervised convolutional models for feature extraction. Previous approaches reached misclassification rates between 14 and 16 percent on the ILU-224 dataset using histogram-based features in LAB colorspace along with a random forest classifier. Since classical approaches using hand-crafted features can become involved when it comes to designing the right features, a robust approach that is easily portable to other domains of medical computer vision is desirable. The recent success of deep neural networks motivates the investigation of methods that automatically construct good feature representations directly from data and thereby benefit from already established, complex architectures originally developed for use in other domains. In this thesis, various methods for the design of deep architectures for tissue classification are presented. Using transfer learning and unsupervised feature learning, it is shown that powerful state-of-the-art models with millions of parameters can be finetuned to outperform previous approaches despite the lack of sufficient labeled training examples. Several models, such as the 16-layer VGGnet, the GoogLeNet model with some extensions, convolutional restricted Boltzmann machines, and convolutional denoising autoencoders, were trained on the ILUMINATE-9 dataset.
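The denoising autoencoder idea mentioned above can be illustrated with a minimal sketch. This is not the thesis's actual convolutional model: for brevity it uses a single fully connected layer with tied weights, masking noise, and synthetic low-rank "patches" as stand-in data; all names and hyperparameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for image patches: 200 flattened 8x8 "patches" with
# low-rank structure, so there is something for the autoencoder to learn.
Z = rng.random((200, 4))
F = rng.normal(size=(4, 64))
X = sigmoid(Z @ F)

n_hidden = 16
W = rng.normal(0.0, 0.1, (64, n_hidden))  # tied weights: encoder W, decoder W.T
b_h = np.zeros(n_hidden)
b_v = np.zeros(64)

def reconstruct(V):
    return sigmoid(sigmoid(V @ W + b_h) @ W.T + b_v)

mse_before = float(np.mean((reconstruct(X) - X) ** 2))

lr = 0.5
for epoch in range(100):
    # Masking noise: zero out ~30% of input pixels, reconstruct the clean input.
    X_noisy = X * (rng.random(X.shape) > 0.3)
    H = sigmoid(X_noisy @ W + b_h)        # encode the corrupted input
    X_rec = sigmoid(H @ W.T + b_v)        # decode with tied weights
    err = X_rec - X                       # error against the *clean* input
    d_rec = err * X_rec * (1.0 - X_rec)   # backprop through decoder sigmoid
    d_hid = (d_rec @ W) * H * (1.0 - H)   # backprop through encoder sigmoid
    # Tied-weight gradient: encoder and decoder contributions summed.
    grad_W = (X_noisy.T @ d_hid + d_rec.T @ H) / len(X)
    W -= lr * grad_W
    b_h -= lr * d_hid.mean(axis=0)
    b_v -= lr * d_rec.mean(axis=0)

mse_after = float(np.mean((reconstruct(X) - X) ** 2))
print(mse_before, mse_after)  # reconstruction error decreases with training
```

After training, the rows of `W` play the role of the learned low-level filters; in the convolutional setting these would be the kernels whose appearance is compared against the filters found in pre-trained CNNs.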
Along with evidence on how the training policy for the networks should be designed, a whole model zoo, trained on the ILUMINATE-9 dataset, is provided with this thesis. Results from the unsupervised learning techniques show that the learned low-level filters resemble those already incorporated in current CNN architectures. Pretraining with weights obtained through unsupervised learning is therefore not necessary, since weights adapted to the ImageNet dataset also provide good results for histopathological image analysis. On the ILU-224 whole slide image, classification performance is improved from 84.2% to 96.5% in terms of accuracy when using transfer learning with weights from the VGG 16-layer model. Concerning generalization performance, the approaches discussed are able to outperform the previously tested classifiers; however, no clear winner emerged during the experiments. An ensemble of the finally derived models is used to label a range of previously weakly annotated whole slide images that can now be further examined.
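The ensemble labeling step can be sketched as a simple majority vote over per-model predictions. This is a generic illustration, not the combination scheme actually used in the thesis; the class labels and the helper name `ensemble_vote` are hypothetical.

```python
from collections import Counter

def ensemble_vote(predictions_per_model):
    """Majority vote across models for each sample.

    predictions_per_model: list of equal-length lists, one list of
    predicted class labels per model.
    """
    n_samples = len(predictions_per_model[0])
    voted = []
    for i in range(n_samples):
        votes = Counter(model[i] for model in predictions_per_model)
        voted.append(votes.most_common(1)[0][0])  # most frequent label wins
    return voted

# Three hypothetical models labeling four tissue patches:
preds = [
    ["tumor",  "stroma", "tumor",  "necrosis"],
    ["tumor",  "tumor",  "stroma", "necrosis"],
    ["stroma", "stroma", "tumor",  "necrosis"],
]
print(ensemble_vote(preds))  # ['tumor', 'stroma', 'tumor', 'necrosis']
```

Voting over several finetuned models tends to smooth out the per-model errors, which is one reason an ensemble can label weakly annotated slides more reliably than any single classifier.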
If you are interested in reading the full text, please contact me via email.