Mirosław Kordos
Schedae Informaticae, Volume 25, 2016, pp. 153-164
https://doi.org/10.4467/20838476SI.16.012.6193

Mirosław Kordos
Schedae Informaticae, Volume 24, 2015, pp. 41-51
https://doi.org/10.4467/20838476SI.15.004.3026

Deep learning is a field of research that currently attracts much attention, mainly because deep architectures help in obtaining outstanding results on many tasks related to vision, speech and natural language processing. To make deep learning effective, an unsupervised pretraining phase is very often applied. In this article, we present an experimental study evaluating the usefulness of this approach: testing on several benchmarks and with different percentages of labeled data, we examine how Contrastive Divergence (CD), one of the most popular pretraining methods, influences network generalization.
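For readers unfamiliar with the method, the sketch below illustrates a single CD-1 update for a binary restricted Boltzmann machine, the building block of the pretraining the abstract refers to. It is a minimal illustration assuming NumPy; all names and hyperparameters (cd1_step, n_visible, n_hidden, lr, and so on) are chosen for exposition and are not taken from the article.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.1):
    """One CD-1 update on a batch of binary visible vectors v0.

    v0: data batch (batch x n_visible), W: weights (n_visible x n_hidden),
    b: visible biases, c: hidden biases. Updates W, b, c in place.
    """
    # Positive phase: hidden probabilities given the data, plus a sample.
    h0_prob = sigmoid(v0 @ W + c)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Negative phase: one Gibbs step down to the visible layer and back up.
    v1_prob = sigmoid(h0 @ W.T + b)
    h1_prob = sigmoid(v1_prob @ W + c)
    # Update from the difference between data and reconstruction statistics.
    batch = v0.shape[0]
    W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / batch
    b += lr * (v0 - v1_prob).mean(axis=0)
    c += lr * (h0_prob - h1_prob).mean(axis=0)
    return W, b, c

# Illustrative usage on random binary data; a real run would use the
# unlabeled portion of a benchmark dataset instead.
n_visible, n_hidden = 6, 4
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b = np.zeros(n_visible)
c = np.zeros(n_hidden)
v0 = (rng.random((8, n_visible)) < 0.5).astype(float)
W, b, c = cd1_step(v0, W, b, c)

After such pretraining, the learned weights would typically be used to initialize the corresponding layer of a supervised network, which is then fine-tuned on the labeled fraction of the data.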