
Effectiveness of Unsupervised Training in Deep Learning Neural Networks

Publication date: 11.04.2016

Schedae Informaticae, 2015, Volume 24, pp. 41 - 51

https://doi.org/10.4467/20838476SI.15.004.3026

Authors

Andrzej Rusiecki
Wroclaw University of Technology, Poland

Mirosław Kordos
University of Bielsko-Biala, Department of Computer Science

Abstract

Deep learning is a field of research that currently attracts much attention, mainly because deep architectures yield outstanding results on many vision, speech, and natural language processing tasks. To make deep learning effective, an unsupervised pretraining phase is very often applied. In this article, we present an experimental study evaluating the usefulness of this approach, testing on several benchmarks and with different percentages of labeled data how Contrastive Divergence (CD), one of the most popular pretraining methods, influences network generalization.
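To make the pretraining step concrete: Contrastive Divergence is typically used to train a restricted Boltzmann machine (RBM) one layer at a time before supervised fine-tuning. The sketch below is a minimal, illustrative CD-1 update for a Bernoulli RBM in NumPy; it is not the implementation used in the paper, and the names (`cd1_update`, `recon_error`) and hyperparameters are assumptions for illustration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b_vis, b_hid, v0, lr=0.1, rng=None):
    """One CD-1 step for a Bernoulli RBM on a batch v0 of shape (batch, n_vis)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Positive phase: hidden activations driven by the data.
    h0_prob = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Negative phase: one Gibbs step back to the visible layer and up again.
    v1_prob = sigmoid(h0 @ W.T + b_vis)
    h1_prob = sigmoid(v1_prob @ W + b_hid)
    n = v0.shape[0]
    # CD approximation of the log-likelihood gradient: <v h>_data - <v h>_recon.
    W = W + lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / n
    b_vis = b_vis + lr * (v0 - v1_prob).mean(axis=0)
    b_hid = b_hid + lr * (h0_prob - h1_prob).mean(axis=0)
    return W, b_vis, b_hid

def recon_error(W, b_vis, b_hid, v):
    """Mean squared reconstruction error after one deterministic up-down pass."""
    h = sigmoid(v @ W + b_hid)
    return np.mean((v - sigmoid(h @ W.T + b_vis)) ** 2)
```

In layer-wise pretraining, one such RBM is trained per layer, with each trained layer's hidden probabilities serving as the input to the next RBM, before the whole stack is fine-tuned with labeled data.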

References

[1] Bengio Y., Lamblin P., Popovici D., Larochelle H., et al., Greedy layer-wise training of deep networks. Advances in neural information processing systems, 2007, 19, pp. 153.
[2] Salakhutdinov R., Hinton G., Semantic hashing. In: Proceedings of the 2007 Workshop on Information Retrieval and applications of Graphical Models (SIGIR 2007), 2007.
[3] Erhan D., Bengio Y., Courville A., Manzagol P.A., Vincent P., Bengio S., Why does unsupervised pre-training help deep learning? The Journal of Machine Learning Research, 2010, 11, pp. 625–660.
[4] Vincent P., Larochelle H., Lajoie I., Bengio Y., Manzagol P.A., Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 2010, 11, pp. 3371–3408.
[5] Bengio Y., Learning deep architectures for AI. Foundations and Trends® in Machine Learning, 2009, 2(1), pp. 1–127.
[6] Carreira-Perpinan M.A., Hinton G., On contrastive divergence learning. In: AISTATS. vol. 10., Citeseer, 2005, pp. 33–40.
[7] Hinton G.E., Salakhutdinov R.R., A better way to pretrain deep Boltzmann machines. In: Advances in Neural Information Processing Systems, 2012, pp. 2447–2455.
[8] Tieleman T., Hinton G., Using fast weights to improve persistent contrastive divergence. In: Proceedings of the 26th Annual International Conference on Machine Learning, ACM, 2009, pp. 1033–1040.
[9] Erhan D., Manzagol P.A., Bengio Y., Bengio S., Vincent P., The difficulty of training deep architectures and the effect of unsupervised pre-training. In: International Conference on artificial intelligence and statistics, 2009, pp. 153–160.
[10] Riedmiller M., Braun H., A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In: IEEE International Conference on Neural Networks, IEEE, 1993, pp. 586–591.
[11] Sutskever I., Tieleman T., On the convergence properties of contrastive divergence. In: International Conference on Artificial Intelligence and Statistics, 2010, pp. 789–795.
[12] Geras K.J., Sutton C., Scheduled denoising autoencoders. arXiv preprint arXiv:1406.3269, 2014.
[13] LeCun Y., Bottou L., Bengio Y., Haffner P., Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998, 86(11), pp. 2278–2324.
[14] Blake C., Merz C.J., UCI repository of machine learning databases, 1998.
[15] Software and datasets used in the paper. http://www.kordos.com/datasets, Accessed: 2014-12-30.

Information

Information: Schedae Informaticae, 2015, Volume 24, pp. 41 - 51

Article type: Original article

Titles:

Polish:

Effectiveness of Unsupervised Training in Deep Learning Neural Networks

English:

Effectiveness of Unsupervised Training in Deep Learning Neural Networks

Authors:

Andrzej Rusiecki
Wroclaw University of Technology, Poland

Mirosław Kordos
University of Bielsko-Biala, Department of Computer Science

Published at: 11.04.2016

Article status: Open

Licence: None

Percentage share of authors:

Andrzej Rusiecki (Author) - 50%
Mirosław Kordos (Author) - 50%

Article corrections:

-

Publication languages:

English