Successive parabolic interpolation

Successive parabolic interpolation is a technique for finding the extremum (minimum or maximum) of a continuous unimodal function. It works by successively fitting parabolas (polynomials of degree two) to the function, using three distinct points for a function of one variable or, in general, 1 + n(n+3)/2 points for a function of n variables, and at each iteration replacing the "oldest" point with the extremum of the fitted parabola.
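
In the one-variable case, each iteration fits the parabola through the three current points and takes the abscissa of its vertex as the new candidate. The following minimal Python sketch illustrates this update; the function names, the test function f(x) = x - ln x (minimum at x = 1), the starting points, and the stopping tolerance are illustrative assumptions rather than part of the method's definition.

    import math

    def parabola_vertex(a, fa, b, fb, c, fc):
        # Abscissa of the vertex of the parabola through (a, fa), (b, fb), (c, fc).
        num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
        den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        return b - 0.5 * num / den      # undefined when den == 0 (collinear points)

    def successive_parabolic_interpolation(f, x0, x1, x2, tol=1e-8, max_iter=100):
        # The three points are kept oldest-first; each step discards the oldest one.
        points = [(x, f(x)) for x in (x0, x1, x2)]
        for _ in range(max_iter):
            (a, fa), (b, fb), (c, fc) = points
            x_new = parabola_vertex(a, fa, b, fb, c, fc)
            if abs(x_new - points[-1][0]) < tol:
                return x_new
            points = points[1:] + [(x_new, f(x_new))]
        return points[-1][0]

    # Illustrative use: f(x) = x - ln(x) is unimodal on (0, inf) with its minimum at x = 1.
    print(successive_parabolic_interpolation(lambda x: x - math.log(x), 0.5, 1.5, 2.5))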

Advantages

Only function values are used, and when this method converges to an extremum, it does so with an order of convergence of approximately 1.325. This superlinear rate is superior to that of methods that converge only linearly, such as golden-section search. Moreover, because it does not require computing or approximating derivatives, successive parabolic interpolation is a popular alternative to methods that do, such as gradient descent and Newton's method.
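
One way to observe the superlinear rate is to track the error e_k = |x_k - x*| on a problem whose minimizer x* is known and form the ratios log(e_{k+1}) / log(e_k), which tend toward the order of convergence. The self-contained Python check below is an illustrative sketch that repeats the basic parabolic step; the test function, starting points, and number of steps are assumptions chosen for the demonstration, and the printed ratios fluctuate before settling near 1.3.

    import math

    def spi_iterates(f, xs, steps):
        # Run plain successive parabolic interpolation and return the candidates produced.
        points = [(x, f(x)) for x in xs]
        candidates = []
        for _ in range(steps):
            (a, fa), (b, fb), (c, fc) = points
            num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
            den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
            x_new = b - 0.5 * num / den
            candidates.append(x_new)
            points = points[1:] + [(x_new, f(x_new))]
        return candidates

    # f(x) = x - ln(x) has its minimizer at exactly x* = 1.
    errors = [abs(x - 1.0) for x in spi_iterates(lambda x: x - math.log(x), (0.8, 1.2, 1.3), 8)]
    for e_prev, e_next in zip(errors, errors[1:]):
        if min(e_prev, e_next) > 0.0:
            # Empirical order estimate; the values fluctuate around roughly 1.3.
            print(math.log(e_next) / math.log(e_prev))
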
Disadvantages

On the other hand, convergence (even to a local extremum) is not guaranteed when using this method in isolation. For example, if the three points are collinear, the resulting parabola is degenerate and thus does not provide a new candidate point. Furthermore, if function derivatives are available, Newton's method is applicable and exhibits quadratic convergence.
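
The degenerate case is easy to exhibit: for three collinear samples the denominator in the vertex formula is exactly zero, so the fitted "parabola" is a straight line with no finite extremum and the iteration has no new candidate to offer. A small illustrative check (the sample points are arbitrary):

    # Three collinear samples (here taken from the line y = x): the quadratic fit degenerates.
    a, b, c = 0.0, 1.0, 2.0
    fa, fb, fc = 0.0, 1.0, 2.0
    den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
    print(den)   # 0.0 -- there is no vertex, hence no new candidate point
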
Improvements

Alternating the parabolic iterations with a more robust method for choosing candidates (golden-section search is a popular choice) can greatly increase the probability of convergence without hampering the convergence rate.
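
The sketch below illustrates this safeguarding idea: keep a bracket around the minimum, try the parabolic candidate first, and fall back to a golden-section probe whenever the candidate is degenerate or lands outside the bracket. The bracket-update rule, stopping test, and test function are illustrative assumptions, not a faithful reproduction of any particular library routine; Brent's method combines the same two ingredients with additional safeguards on the step length.

    import math

    INVPHI = (math.sqrt(5.0) - 1.0) / 2.0          # about 0.618

    def safeguarded_minimize(f, a, b, c, tol=1e-8, max_iter=200):
        # Requires a bracket a < b < c with f(b) < f(a) and f(b) < f(c).
        fa, fb, fc = f(a), f(b), f(c)
        for _ in range(max_iter):
            # Parabolic candidate through the three bracket points.
            num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
            den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
            x = b - 0.5 * num / den if den != 0.0 else None
            if x is not None and abs(x - b) < tol:
                return x                            # crude stopping rule
            if x is None or not (a < x < c):
                # Golden-section fallback: probe the larger half of the bracket.
                if b - a > c - b:
                    x = b - (1.0 - INVPHI) * (b - a)
                else:
                    x = b + (1.0 - INVPHI) * (c - b)
            fx = f(x)
            # Keep three points that still bracket the minimum.
            if fx < fb:
                if x < b:
                    a, b, c, fb, fc = a, x, b, fx, fb
                else:
                    a, b, c, fa, fb = b, x, c, fb, fx
            else:
                if x < b:
                    a, fa = x, fx
                else:
                    c, fc = x, fx
        return b

    # Illustrative use: same test function as above, minimum at x = 1.
    print(safeguarded_minimize(lambda x: x - math.log(x), 0.1, 1.5, 4.0))
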
See also

Inverse quadratic interpolation is a related method that uses parabolas to find roots rather than extrema.
Simpson's rule uses parabolas to approximate definite integrals.

[Figure: graph of a strictly concave quadratic function with a unique maximum.]