
Looking for the Right Time to Shift Strategy in the Exploration-exploitation Dilemma

Publication date: 11.04.2016

Schedae Informaticae, 2015, Volume 24, pp. 73–82

https://doi.org/10.4467/20838476SI.15.007.3029

Authors

Filipo S. Perotto
IRIT, University of Toulouse 1 Capitole

Abstract

Balancing exploratory and exploitative behavior is an essential dilemma faced by adaptive agents. The challenge of finding a good trade-off between exploration (learning new things) and exploitation (acting optimally based on what is already known) has been widely studied for decision-making problems where the agent must learn a policy of actions. In this paper we propose the engaged climber method for solving the exploration-exploitation dilemma. The solution consists in explicitly creating two different policies (one for exploring, one for exploiting) and in determining the right moments to shift from one to the other, using notions such as engagement and curiosity.
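
The abstract sketches the mechanism without giving the paper's formal definitions, so the following Python fragment is only an illustrative reading of it: the TwoPolicyAgent class, the curiosity and engagement signals, and the switching thresholds are all assumptions introduced here for concreteness, not the authors' actual formulation. What it shows is the structural idea the abstract describes: two explicit policies, and a discrete mode switch triggered at specific moments rather than per-step randomization.

class TwoPolicyAgent:
    """Minimal sketch, assuming simple stand-in signals for the
    paper's engagement and curiosity notions."""

    def __init__(self, n_states, n_actions,
                 curiosity_threshold=0.2, engagement_threshold=0.5):
        self.n_actions = n_actions
        # Value estimates: the exploit policy acts greedily on these.
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        # Visit counts: the explore policy targets under-tried actions.
        self.visits = [[0] * n_actions for _ in range(n_states)]
        self.mode = "explore"                            # current strategy
        self.curiosity_threshold = curiosity_threshold   # hypothetical switch point
        self.engagement_threshold = engagement_threshold # hypothetical switch point

    def curiosity(self, state):
        # Stand-in curiosity signal: fraction of actions never tried in
        # this state; high while the state remains poorly sampled.
        return sum(1 for c in self.visits[state] if c == 0) / self.n_actions

    def engagement(self, state):
        # Stand-in engagement signal: margin by which the greedy action
        # dominates the runner-up; high when the agent can commit to
        # exploitation with some confidence.
        best, runner_up, *_ = sorted(self.q[state], reverse=True) + [0.0]
        return best - runner_up

    def act(self, state):
        # Shift strategy only at explicit moments, as the abstract
        # suggests, instead of mixing exploration into every step
        # the way epsilon-greedy does.
        if self.mode == "exploit" and self.curiosity(state) > self.curiosity_threshold:
            self.mode = "explore"
        elif self.mode == "explore" and self.engagement(state) > self.engagement_threshold:
            self.mode = "exploit"

        if self.mode == "explore":
            # Explore policy: pick the least-tried action in this state.
            action = min(range(self.n_actions), key=lambda a: self.visits[state][a])
        else:
            # Exploit policy: act greedily on current value estimates.
            action = max(range(self.n_actions), key=lambda a: self.q[state][a])
        self.visits[state][action] += 1
        return action

The design point the sketch isolates is that the agent is always running exactly one of the two policies, so the trade-off is resolved by choosing when to switch, not by blending the two behaviors at every step.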

Information

Article type: Original article

Article status: Open

Licence: None

Percentage share of authors:

Filipo S. Perotto (Author) - 100%

Article corrections:

-

Publication languages:

English