
Stochastic gradient Langevin dynamics (SGLD) is an optimization and sampling technique that combines characteristics of stochastic gradient descent (SGD), a Robbins–Monro optimization algorithm, with those of Langevin dynamics, a mathematical extension of molecular dynamics models. Like SGD, SGLD is an iterative optimization algorithm; it adds Gaussian noise to the stochastic gradient estimator used in SGD when optimizing a differentiable objective function.[1] Unlike traditional SGD, SGLD can be used for Bayesian learning, since the method produces samples from a posterior distribution of parameters given the available data. First described by Welling and Teh in 2011, the method has applications in many contexts that require optimization, and is most notably applied to machine learning problems.

Formal definition

Given some parameter vector \( \theta \), its prior distribution \( {\displaystyle p(\theta )} \), and a set of data points \( {\displaystyle X=\{x_{i}\}_{i=1}^{N}} \), stochastic gradient Langevin dynamics samples from the posterior distribution \( {\displaystyle p(\theta \mid X)\propto p(\theta )\prod _{i=1}^{N}p(x_{i}\mid \theta )} \) by updating the chain:

\( {\displaystyle \Delta \theta _{t}={\frac {\varepsilon _{t}}{2}}\left(\nabla \log p(\theta _{t})+{\frac {N}{n}}\sum _{i=1}^{n}\nabla \log p(x_{t_{i}}\mid \theta _{t})\right)+\eta _{t}} \)

where \( n<N \) is a positive integer (the minibatch size), \( {\displaystyle \eta _{t}\sim {\mathcal {N}}(0,\varepsilon _{t})} \) is Gaussian noise, \( {\displaystyle p(x\mid \theta )} \) is the likelihood of the data given the parameter vector \( \theta \), and the step sizes \( {\displaystyle \varepsilon _{t}} \) satisfy the following conditions:

\( {\displaystyle \sum _{t=1}^{\infty }\varepsilon _{t}=\infty \quad \sum _{t=1}^{\infty }\varepsilon _{t}^{2}<\infty } \)

For early iterations of the algorithm, each parameter update mimics stochastic gradient descent; however, as the algorithm approaches a local minimum or maximum, the gradient shrinks to zero and the chain produces samples surrounding the maximum a posteriori mode, allowing for posterior inference. This process generates approximate samples from the posterior by balancing the variance of the injected Gaussian noise against that of the stochastic gradient estimates.[citation needed]
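
The update rule can be illustrated with a short sketch. The following Python snippet is a minimal illustrative implementation, not taken from Welling and Teh; the conjugate Gaussian-mean model, the prior scale, and the polynomially decaying step-size schedule are all assumptions chosen for brevity (the schedule \( {\displaystyle \varepsilon _{t}\propto t^{-0.55}} \) satisfies both conditions above).

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: N observations from a unit-variance Gaussian (assumed model).
N, true_mu = 1000, 2.0
X = rng.normal(true_mu, 1.0, size=N)

# Model: prior theta ~ N(0, 10^2); likelihood x_i ~ N(theta, 1).
def grad_log_prior(theta):
    return -theta / 10.0**2          # d/dtheta log N(theta | 0, 10^2)

def grad_log_lik(x, theta):
    return x - theta                 # d/dtheta log N(x | theta, 1)

n, T = 10, 20000                     # minibatch size and iteration count
theta, samples = 0.0, []
for t in range(1, T + 1):
    eps = 0.5 * t ** (-0.55)         # sum eps_t diverges, sum eps_t^2 converges
    batch = rng.choice(X, size=n, replace=False)
    grad = grad_log_prior(theta) + (N / n) * np.sum(grad_log_lik(batch, theta))
    theta += (eps / 2.0) * grad + rng.normal(0.0, np.sqrt(eps))  # noise ~ N(0, eps_t)
    samples.append(theta)

# Discarding burn-in leaves approximate draws from p(theta | X).
posterior = np.array(samples[T // 2:])
print(posterior.mean(), posterior.std())

Because this toy model is conjugate, the empirical mean and spread of the draws can be checked against the closed-form Gaussian posterior.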
Application

SGLD is applicable in any optimization context for which it is desirable to quickly obtain posterior samples instead of a single maximum a posteriori mode. In doing so, the method retains the computational efficiency of stochastic gradient descent, compared to traditional gradient descent, while providing additional information about the landscape around the critical point of the objective function. In practice, SGLD can be applied to the training of Bayesian neural networks in deep learning, a task in which the method provides a distribution over model parameters. By introducing information about the variance of these parameters, SGLD characterizes the generalizability of these models at certain points in training.[2] Additionally, obtaining samples from a posterior distribution permits uncertainty quantification by means of credible intervals, a feature which is not possible with traditional stochastic gradient descent, as shown in the sketch below.[citation needed]
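
As a hedged illustration of this point, the snippet below computes a 95% credible interval from posterior draws such as those produced by the sketch above; the stand-in samples and the 95% level are arbitrary assumptions for the demo.

import numpy as np

# Stand-in for the `posterior` array of post-burn-in SGLD draws above.
rng = np.random.default_rng(1)
posterior = rng.normal(2.0, 0.03, size=10_000)

# A 95% credible interval; plain SGD yields only a point estimate,
# with no analogous measure of uncertainty.
lo, hi = np.percentile(posterior, [2.5, 97.5])
print(f"posterior mean {posterior.mean():.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")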
Variants and associated algorithms

If gradient computations are exact, SGLD reduces to the Langevin Monte Carlo algorithm,[3] first introduced in the lattice field theory literature. This algorithm can also be viewed as a reduction of Hamiltonian Monte Carlo, consisting of a single leapfrog-step proposal rather than a series of steps.[4] Since SGLD can be formulated as a modification of both stochastic gradient descent and MCMC methods, it lies at the intersection of optimization and sampling algorithms: it retains SGD's ability to quickly converge to regions of low cost while providing samples that facilitate posterior inference.[citation needed]

Under relaxed constraints on the step sizes \( {\displaystyle \varepsilon _{t}} \), such that they do not approach zero asymptotically, SGLD fails to produce samples for which the Metropolis–Hastings rejection rate is zero, and an MH rejection step therefore becomes necessary.[1] The resulting algorithm, dubbed the Metropolis-adjusted Langevin algorithm,[5] requires a step in which the proposal \( {\displaystyle \theta ^{t+1}} \) is rejected whenever:

\( {\displaystyle {\frac {p(\theta ^{t}\mid \theta ^{t+1})\,p^{*}(\theta ^{t+1})}{p(\theta ^{t+1}\mid \theta ^{t})\,p^{*}(\theta ^{t})}}<u,\quad u\sim {\mathcal {U}}[0,1]} \)

where \( {\displaystyle p(\theta ^{t+1}\mid \theta ^{t})} \) is a normal distribution centered one gradient descent step from \( {\displaystyle \theta ^{t}} \), \( {\displaystyle p(\theta ^{t}\mid \theta ^{t+1})} \) is the corresponding reverse proposal, and \( {\displaystyle p^{*}(\theta )} \) is our target distribution.
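
A minimal sketch of this adjusted algorithm follows, assuming exact gradients and a standard two-dimensional Gaussian as the target \( {\displaystyle p^{*}} \); both assumptions are for illustration only.

import numpy as np

rng = np.random.default_rng(0)

def log_target(theta):               # log p*(theta), standard 2-D Gaussian
    return -0.5 * np.dot(theta, theta)

def grad_log_target(theta):
    return -theta

def log_q(to, frm, eps):
    # Log density (up to a constant) of the Langevin proposal
    # N(frm + (eps / 2) * grad log p*(frm), eps * I).
    mean = frm + 0.5 * eps * grad_log_target(frm)
    return -np.dot(to - mean, to - mean) / (2.0 * eps)

eps, T = 0.5, 5000
theta, accepted = np.zeros(2), 0
for _ in range(T):
    prop = theta + 0.5 * eps * grad_log_target(theta) \
                 + np.sqrt(eps) * rng.standard_normal(2)
    # Metropolis-Hastings correction: the proposal is rejected whenever the
    # acceptance ratio falls below u ~ U[0, 1] (compared in log space).
    log_ratio = (log_target(prop) + log_q(theta, prop, eps)
                 - log_target(theta) - log_q(prop, theta, eps))
    if np.log(rng.uniform()) < log_ratio:
        theta, accepted = prop, accepted + 1

print("acceptance rate:", accepted / T)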
Mixing rates and algorithmic convergence

Recent contributions have proven upper bounds on mixing times for both the traditional Langevin algorithm and the Metropolis-adjusted Langevin algorithm.[5] Presented by Ma et al. (2018), these bounds characterize the rate at which the algorithms converge to the true posterior distribution; the mixing time is defined formally as:

\( {\displaystyle \tau (\varepsilon ;p^{0})=\min \left\{k\mid \left\|p^{k}-p^{*}\right\|_{\mathrm {TV} }\leq \varepsilon \right\}} \)

where \( {\displaystyle \varepsilon \in (0,1)} \) is an arbitrary error tolerance, \( p^0 \) is some initial distribution, \( p^{*} \) is the posterior distribution, and \( {\displaystyle \|\cdot \|_{\mathrm {TV} }} \) is the total variation norm. Under some regularity conditions on an \( L \)-Lipschitz smooth objective function \( U(x) \) which is \( m \)-strongly convex outside of a region of radius \( R \), with condition number \( {\displaystyle \kappa ={\frac {L}{m}}} \), we have the mixing rate bounds:

\( {\displaystyle \tau _{ULA}(\varepsilon ,p^{0})\leq {\mathcal {O}}\left(e^{32LR^{2}}\kappa ^{2}{\frac {d}{\varepsilon ^{2}}}\ln \left({\frac {d}{\varepsilon ^{2}}}\right)\right)} \)

\( {\displaystyle \tau _{MALA}(\varepsilon ,p^{0})\leq {\mathcal {O}}\left(e^{16LR^{2}}\kappa ^{3/2}d^{1/2}\left(d\ln \kappa +\ln \left({\frac {1}{\varepsilon }}\right)\right)^{3/2}\right)} \)

where \( {\displaystyle \tau _{ULA}} \) and \( {\displaystyle \tau _{MALA}} \) refer to the mixing rates of the unadjusted Langevin algorithm and the Metropolis-adjusted Langevin algorithm, respectively. These bounds are important because they show that the computational complexity is polynomial in the dimension \( d \), provided that \( {\displaystyle LR^{2}} \) is \( {\displaystyle {\mathcal {O}}(\log d)} \).
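
To see why, note that if \( {\displaystyle LR^{2}\leq c\log d} \) for some constant \( c \), then the exponential prefactor satisfies (a short verification step, not stated in the source):

\( {\displaystyle e^{32LR^{2}}\leq e^{32c\log d}=d^{32c},} \)

so both bounds above are dominated by polynomials in \( d \), up to logarithmic factors.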
References

Welling, Max; Teh, Yee Whye (2011). "Bayesian Learning via Stochastic Gradient Langevin Dynamics" (PDF). Proceedings of the 28th International Conference on Machine Learning (ICML'11): 681–688.
Chaudhari, Pratik; Choromanska, Anna; Soatto, Stefano; LeCun, Yann; Baldassi, Carlo; Borgs, Christian; Chayes, Jennifer; Sagun, Levent; Zecchina, Riccardo (2017). "Entropy-SGD: Biasing Gradient Descent into Wide Valleys". ICLR 2017. arXiv:1611.01838. Bibcode:2016arXiv161101838C.
Kennedy, A. D. (1990). "The theory of hybrid stochastic algorithms". Probabilistic Methods in Quantum Field Theory and Quantum Gravity. Plenum Press. pp. 209–223. ISBN 0-306-43602-7.
Neal, R. (2011). "MCMC Using Hamiltonian Dynamics". Handbook of Markov Chain Monte Carlo. CRC Press. ISBN 978-1-4200-7941-8.
Ma, Y. A.; Chen, Y.; Jin, C.; Flammarion, N.; Jordan, M. I. (2018). "Sampling Can Be Faster Than Optimization". arXiv:1811.08413 [stat.ML].
