TY - JOUR
TI - LOSSGRAD: Automatic Learning Rate in Gradient Descent
AU - Wójcik, Bartosz
AU - Maziarka, Łukasz
AU - Tabor, Jacek
AB - In this paper, we propose a simple, fast, and easy-to-implement algorithm, LOSSGRAD (locally optimal step-size in gradient descent), which automatically modifies the step-size in gradient descent during neural network training. Given a function f, a point x, and the gradient ∇_x f of f, we aim to find the step-size h which is (locally) optimal, i.e. satisfies h = argmin_{t ≥ 0} f(x − t∇_x f). Making use of a quadratic approximation, we show that the algorithm satisfies the above condition. We experimentally show that our method is insensitive to the choice of the initial learning rate while achieving results comparable to other methods.
VL - 2018
IS - Volume 27
PY - 2018
SN - 1732-3916
C1 - 2083-8476
SP - 47
EP - 57
DO - 10.4467/20838476SI.18.004.10409
UR - https://ejournals.eu/en/journal/schedae-informaticae/article/lossgrad-automatic-learning-rate-in-gradient-descent
KW - gradient descent
KW - optimization methods
KW - adaptive step size
KW - dynamic learning rate
KW - neural networks
ER -
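
The abstract describes picking the step size h that locally minimizes f along the negative gradient, using a quadratic approximation. The following is a minimal, illustrative Python sketch of that general idea via one-dimensional quadratic interpolation along the ray x − t∇f; the names (quadratic_step_size, h_trial) are hypothetical, and this is not the exact LOSSGRAD update rule given in the paper.

```python
import numpy as np

def quadratic_step_size(f, x, grad, h_trial=1e-2):
    """Estimate a locally optimal step size along the negative gradient.

    Fits a quadratic q(t) ~ f(x - t*grad) through the value and slope at
    t = 0 and the value at t = h_trial, then returns q's minimizer.
    Illustrative quadratic-interpolation step, not the paper's exact rule.
    """
    phi0 = f(x)                          # phi(0) = f(x)
    slope0 = -float(np.dot(grad, grad))  # phi'(0) = -||grad||^2
    phi_h = f(x - h_trial * grad)        # phi(h_trial)

    # q(t) = a t^2 + b t + c with c = phi0, b = slope0
    a = (phi_h - phi0 - slope0 * h_trial) / (h_trial ** 2)
    if a <= 0:                           # no useful curvature: keep trial step
        return h_trial
    t_star = -slope0 / (2.0 * a)         # minimizer of the fitted quadratic
    return max(t_star, 0.0)              # enforce t >= 0

# Hypothetical usage on a toy quadratic objective.
if __name__ == "__main__":
    A = np.diag([1.0, 10.0])
    f = lambda x: 0.5 * float(x @ A @ x)
    x = np.array([1.0, 1.0])
    for _ in range(20):
        g = A @ x                        # gradient of the toy objective
        h = quadratic_step_size(f, x, g)
        x = x - h * g
    print(x, f(x))
```

For an exactly quadratic objective, as in the toy example above, the fitted quadratic matches f along the ray, so the returned step is the exact one-dimensional minimizer; for general losses it only approximates the locally optimal h described in the abstract.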