
Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Many classes of convex optimization problems admit polynomial-time algorithms,[1] whereas mathematical optimization is in general NP-hard.[2][3][4]

Convex optimization has applications in a wide range of disciplines, such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design,[5] data analysis and modeling, finance, statistics (optimal experimental design),[6] and structural optimization, where the approximation concept has proven to be efficient.[7][8] With recent advancements in computing and optimization algorithms, convex programming is nearly as straightforward as linear programming.[9]

Definition

A convex optimization problem is an optimization problem in which the objective function is a convex function and the feasible set is a convex set. A function f mapping some subset of \( \mathbb {R} ^{n} \) into \( {\displaystyle \mathbb {R} \cup \{\pm \infty \}} \) is convex if its domain is convex and for all \( \theta \in [0,1] \) and all x,y in its domain, the following condition holds: \( {\displaystyle f(\theta x+(1-\theta )y)\leq \theta f(x)+(1-\theta )f(y)} \). A set S is convex if for all members \( x,y\in S \) and all \( \theta \in [0,1] \), we have that \( {\displaystyle \theta x+(1-\theta )y\in S} \).
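As a concrete illustration (a numerical sketch added here, not drawn from the article's sources), the following Python snippet spot-checks the defining inequality for the sample convex function \( f(x)=\|x\|^{2} \); every randomly sampled pair of points and weight \( \theta \) must satisfy it:

    import numpy as np

    # Spot-check the convexity inequality
    #   f(theta*x + (1-theta)*y) <= theta*f(x) + (1-theta)*f(y)
    # for f(x) = ||x||^2 at randomly sampled points and weights.
    rng = np.random.default_rng(0)

    def f(x):
        return np.dot(x, x)  # squared Euclidean norm, a standard convex function

    for _ in range(1000):
        x, y = rng.normal(size=3), rng.normal(size=3)
        theta = rng.uniform()
        lhs = f(theta * x + (1 - theta) * y)
        rhs = theta * f(x) + (1 - theta) * f(y)
        assert lhs <= rhs + 1e-12, "convexity inequality violated"
    print("inequality held at all sampled points")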

Concretely, a convex optimization problem is the problem of finding some \( {\displaystyle \mathbf {x^{\ast }} \in C} \) attaining

\( {\displaystyle \inf\{f(\mathbf {x} ):\mathbf {x} \in C\}}, \)

where the objective function \( {\displaystyle f:{\mathcal {D}}\subseteq \mathbb {R} ^{n}\to \mathbb {R} } \) is convex, as is the feasible set C.[10][11] If such a point exists, it is referred to as an optimal point or solution; the set of all optimal points is called the optimal set. If f is unbounded below over C or the infimum is not attained, then the optimization problem is said to be unbounded. Otherwise, if C is the empty set, then the problem is said to be infeasible.[12]

Standard form

A convex optimization problem is in standard form if it is written as

\( {\displaystyle {\begin{aligned}&{\underset {\mathbf {x} }{\operatorname {minimize} }}&&f(\mathbf {x} )\\&\operatorname {subject\ to} &&g_{i}(\mathbf {x} )\leq 0,\quad i=1,\dots ,m\\&&&h_{i}(\mathbf {x} )=0,\quad i=1,\dots ,p,\end{aligned}}} \)

where \( \mathbf {x} \in \mathbb {R} ^{n} \) is the optimization variable, the function \( {\displaystyle f:{\mathcal {D}}\subseteq \mathbb {R} ^{n}\to \mathbb {R} } \) is convex, \( {\displaystyle g_{i}:\mathbb {R} ^{n}\to \mathbb {R} } \), \( {\displaystyle i=1,\ldots ,m} \), are convex, and \( {\displaystyle h_{i}:\mathbb {R} ^{n}\to \mathbb {R} } \), \( {\displaystyle i=1,\ldots ,p} \), are affine.[12] This notation describes the problem of finding \( \mathbf {x} \in \mathbb {R} ^{n} \) that minimizes \( f(\mathbf {x} ) \) among all \( \mathbf {x} \) satisfying \( {\displaystyle g_{i}(\mathbf {x} )\leq 0} \), \( {\displaystyle i=1,\ldots ,m} \) and \( {\displaystyle h_{i}(\mathbf {x} )=0} \), \( {\displaystyle i=1,\ldots ,p} \). The function f is the objective function of the problem, and the functions \( g_{i} \) and \( h_{i} \) are the constraint functions.

The feasible set C of the optimization problem consists of all points \( \mathbf{x} \in \mathcal{D} \) satisfying the constraints. This set is convex because \( {\mathcal {D}} \) is convex, the sublevel sets of convex functions are convex, affine sets are convex, and the intersection of convex sets is convex.[13]
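To illustrate the standard form in practice, the sketch below uses the cvxpy modeling library; the data and problem instance are invented here for illustration, not taken from the article. It minimizes a convex objective subject to convex inequality and affine equality constraints:

    import cvxpy as cp
    import numpy as np

    # Hypothetical problem in standard form:
    #   minimize    ||A x - b||_2^2          (convex objective f)
    #   subject to  -x_i <= 0, i = 1..3      (convex inequalities g_i)
    #               1^T x - 1 = 0            (affine equality h)
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((5, 3)), rng.standard_normal(5)

    x = cp.Variable(3)
    objective = cp.Minimize(cp.sum_squares(A @ x - b))
    constraints = [x >= 0, cp.sum(x) == 1]
    problem = cp.Problem(objective, constraints)
    problem.solve()
    print(problem.status, x.value)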

A solution to a convex optimization problem is any point \( {\displaystyle \mathbf {x} \in C} \) attaining \( {\displaystyle \inf\{f(\mathbf {x} ):\mathbf {x} \in C\}} \). In general, a convex optimization problem may have zero, one, or many solutions.

Many optimization problems can be equivalently formulated in this standard form. For example, the problem of maximizing a concave function f can be reformulated equivalently as the problem of minimizing the convex function -f. For this reason, the problem of maximizing a concave function over a convex set is also commonly called a convex optimization problem.
Properties

The following are useful properties of convex optimization problems:[14][12]

every local minimum is a global minimum;
the optimal set is convex;
if the objective function is strictly convex, then the problem has at most one optimal point.
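The first property can be seen numerically in a small sketch (the example function is chosen here, not taken from the article): a local solver started from very different initial points reaches the same minimizer of a strictly convex function, which by the third property is unique:

    import numpy as np
    from scipy.optimize import minimize

    # f is strictly convex, so it has exactly one optimal point, and any
    # local minimum a solver finds must be the global one.
    f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

    starts = [np.array(s) for s in ([-10.0, 10.0], [0.0, 0.0], [50.0, -50.0])]
    for x0 in starts:
        print(np.round(minimize(f, x0).x, 6))  # each run prints ~[ 1. -2.]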

These results are used by the theory of convex minimization along with geometric notions from functional analysis (in Hilbert spaces) such as the Hilbert projection theorem, the separating hyperplane theorem, and Farkas' lemma.
Uncertainty analysis

Ben-Haim and Elishakoff[15] (1990) and Elishakoff et al.[16] (1994) applied convex analysis to model uncertainty.
Examples

The following problem classes are all convex optimization problems, or can be reduced to convex optimization problems via simple transformations:[12][17]
[Figure: A hierarchy of convex optimization problems. LP: linear program; QP: quadratic program; SOCP: second-order cone program; SDP: semidefinite program; CP: cone program.]

Least squares
Linear programming
Convex quadratic minimization with linear constraints
Quadratic minimization with convex quadratic constraints
Conic optimization
Geometric programming
Second order cone programming
Semidefinite programming
Entropy maximization with appropriate constraints
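As a small worked instance of one of these classes (the data are invented here for illustration; this is a sketch, not the article's example), a linear program can be handled directly by an off-the-shelf solver such as SciPy's linprog:

    import numpy as np
    from scipy.optimize import linprog

    # Maximize x1 + 2*x2 (equivalently, minimize its negation) subject to
    # -x1 + x2 <= 1, x1 + x2 <= 4, x >= 0.  The optimum is x = (1.5, 2.5).
    c = np.array([-1.0, -2.0])
    A_ub = np.array([[-1.0, 1.0], [1.0, 1.0]])
    b_ub = np.array([1.0, 4.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, res.fun)  # -> [1.5 2.5] -6.5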

Lagrange multipliers

Consider a convex minimization problem given in standard form by a cost function f(x) and inequality constraints \( g_i(x)\leq 0 \) for \( {\displaystyle 1\leq i\leq m} \). Then the domain \( {\mathcal {X}}\) is:

\( {\displaystyle {\mathcal {X}}=\left\{x\in X\vert g_{1}(x),\ldots ,g_{m}(x)\leq 0\right\}.} \)

The Lagrangian function for the problem is

\( {\displaystyle L(x,\lambda _{0},\lambda _{1},\ldots ,\lambda _{m})=\lambda _{0}f(x)+\lambda _{1}g_{1}(x)+\cdots +\lambda _{m}g_{m}(x).} \)

For each point x in \( {\mathcal {X}} \) that minimizes f over \( {\mathcal {X}} \), there exist real numbers \( {\displaystyle \lambda _{0},\lambda _{1},\ldots ,\lambda _{m}} \), called Lagrange multipliers, that satisfy these conditions simultaneously:

(1) x minimizes \( {\displaystyle L(y,\lambda _{0},\lambda _{1},\ldots ,\lambda _{m})} \) over all \( {\displaystyle y\in X} \);
(2) \( {\displaystyle \lambda _{0},\lambda _{1},\ldots ,\lambda _{m}\geq 0} \), with at least one \( {\displaystyle \lambda _{k}>0} \);
(3) \( {\displaystyle \lambda _{1}g_{1}(x)=\cdots =\lambda _{m}g_{m}(x)=0} \) (complementary slackness).

If there exists a "strictly feasible point", that is, a point \( z \) satisfying

\( {\displaystyle g_{1}(z),\ldots ,g_{m}(z)<0,} \)

then the statement above can be strengthened to require that \( {\displaystyle \lambda _{0}=1} \).

Conversely, if some x in \( {\mathcal {X}} \) satisfies (1)–(3) for scalars \( {\displaystyle \lambda _{0},\ldots ,\lambda _{m}} \) with \( {\displaystyle \lambda _{0}=1} \), then x is certain to minimize f over \( {\mathcal {X}} \).
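These conditions can be verified symbolically on a small example (the problem instance is chosen here for illustration and is not from the article): minimize \( f(x)=x^{2} \) subject to \( g(x)=1-x\leq 0 \). A strictly feasible point exists (z = 2 gives g(z) = -1 < 0), so we may take \( \lambda _{0}=1 \):

    import sympy as sp

    # Minimize f(x) = x^2 subject to g(x) = 1 - x <= 0; the minimizer is x* = 1.
    x, lam = sp.symbols("x lambda_1", real=True)
    f, g = x**2, 1 - x
    L = f + lam * g  # Lagrangian with lambda_0 = 1 (strict feasibility holds)

    stationarity = sp.Eq(sp.diff(L, x), 0)  # condition (1): x minimizes L
    slackness = sp.Eq(lam * g, 0)           # condition (3): complementary slackness
    print(sp.solve([stationarity, slackness], [x, lam], dict=True))
    # Two candidates appear: {x: 0, lambda_1: 0} is infeasible (g(0) = 1 > 0),
    # while {x: 1, lambda_1: 2} is feasible with lambda_1 >= 0, so x = 1
    # minimizes f, with Lagrange multiplier 2.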
Algorithms

Convex optimization problems can be solved by the following contemporary methods:[18]

Bundle methods (Wolfe, Lemaréchal, Kiwiel)
Subgradient projection methods (Polyak)
Interior-point methods,[1] which make use of self-concordant barrier functions[19] and self-regular barrier functions[20]
Cutting-plane methods
Ellipsoid method
Subgradient method
Dual subgradients and the drift-plus-penalty method

Subgradient methods can be implemented simply and so are widely used.[21] Dual subgradient methods are subgradient methods applied to a dual problem. The drift-plus-penalty method is similar to the dual subgradient method, but takes a time average of the primal variables.
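A minimal implementation sketch of the subgradient method (the objective and step-size rule are chosen here for illustration) minimizes the nondifferentiable convex function \( f(x)=\|x-x_{0}\|_{1} \) with diminishing step sizes:

    import numpy as np

    x_target = np.array([3.0, -1.0, 2.0])

    def f(x):
        return np.sum(np.abs(x - x_target))  # l1 objective: convex, nonsmooth

    def subgradient(x):
        return np.sign(x - x_target)  # a valid subgradient of f at x

    # Iterate x_{k+1} = x_k - alpha_k * s_k with alpha_k = 1/sqrt(k).
    # Subgradient steps need not decrease f, so track the best iterate.
    x = np.zeros(3)
    best_x, best_f = x.copy(), f(x)
    for k in range(1, 5001):
        x = x - subgradient(x) / np.sqrt(k)
        if f(x) < best_f:
            best_x, best_f = x.copy(), f(x)
    print(best_x, best_f)  # best_x approaches x_target, best_f approaches 0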
Extensions

Extensions of convex optimization include the optimization of biconvex, pseudo-convex, and quasiconvex functions. Extensions of the theory of convex analysis and iterative methods for approximately solving non-convex minimization problems occur in the field of generalized convexity, also known as abstract convex analysis.
See also

Duality
Karush–Kuhn–Tucker conditions
Optimization problem
Proximal gradient method

Notes

Nesterov & Nemirovskii 1994
Murty, Katta; Kabadi, Santosh (1987). "Some NP-complete problems in quadratic and nonlinear programming". Mathematical Programming. 39 (2): 117–129. doi:10.1007/BF02592948. hdl:2027.42/6740. S2CID 30500771.
Sahni, S. (1974). "Computationally related problems". SIAM Journal on Computing. 3: 262–279.
Pardalos, Panos M.; Vavasis, Stephen A. (1991). "Quadratic programming with one negative eigenvalue is NP-hard". Journal of Global Optimization. 1 (1): 15–22.
Boyd & Vandenberghe 2004, p. 17
Christensen & Klarbring, chpt. 4.
Boyd & Vandenberghe 2004
Schmit, L.A.; Fleury, C. (1980). "Structural synthesis by combining approximation concepts and dual methods". J. Amer. Inst. Aeronaut. Astronaut. 18: 1252–1260.
Boyd & Vandenberghe 2004, p. 8
Hiriart-Urruty, Jean-Baptiste; Lemaréchal, Claude (1996). Convex analysis and minimization algorithms: Fundamentals. p. 291. ISBN 9783540568506.
Ben-Tal, Aharon; Nemirovskiĭ, Arkadiĭ Semenovich (2001). Lectures on modern convex optimization: analysis, algorithms, and engineering applications. pp. 335–336. ISBN 9780898714913.
Boyd & Vandenberghe 2004, chpt. 4
Boyd & Vandenberghe 2004, chpt. 2
Rockafellar, R. Tyrrell (1993). "Lagrange multipliers and optimality" (PDF). SIAM Review. 35 (2): 183–238. CiteSeerX 10.1.1.161.7209. doi:10.1137/1035044.
Ben Haim Y. and Elishakoff I., Convex Models of Uncertainty in Applied Mechanics, Elsevier Science Publishers, Amsterdam, 1990
I. Elishakoff, I. Lin Y.K. and Zhu L.P., Probabilistic and Convex Modeling of Acoustically Excited Structures, Elsevier Science Publishers, Amsterdam, 1994
Agrawal, Akshay; Verschueren, Robin; Diamond, Steven; Boyd, Stephen (2018). "A rewriting system for convex optimization problems" (PDF). Control and Decision. 5 (1): 42–60. arXiv:1709.04494. doi:10.1080/23307706.2017.1397554. S2CID 67856259.
For methods for convex minimization, see the volumes by Hiriart-Urruty and Lemaréchal (bundle) and the textbooks by Ruszczyński, Bertsekas, and Boyd and Vandenberghe (interior point).
Nesterov, Yurii; Nemirovskii, Arkadii (1995). Interior-Point Polynomial Algorithms in Convex Programming. Society for Industrial and Applied Mathematics. ISBN 978-0898715156.
Peng, Jiming; Roos, Cornelis; Terlaky, Tamás (2002). "Self-regular functions and new search directions for linear and semidefinite optimization". Mathematical Programming. 93 (1): 129–171. doi:10.1007/s101070200296. ISSN 0025-5610. S2CID 28882966.
Bertsekas

References

Bertsekas, Dimitri P.; Nedic, Angelia; Ozdaglar, Asuman (2003). Convex Analysis and Optimization. Belmont, MA: Athena Scientific. ISBN 978-1-886529-45-8.
Bertsekas, Dimitri P. (2009). Convex Optimization Theory. Belmont, MA: Athena Scientific. ISBN 978-1-886529-31-1.
Bertsekas, Dimitri P. (2015). Convex Optimization Algorithms. Belmont, MA: Athena Scientific. ISBN 978-1-886529-28-1.
Boyd, Stephen P.; Vandenberghe, Lieven (2004). Convex Optimization (PDF). Cambridge University Press. ISBN 978-0-521-83378-3. Retrieved October 15, 2011.
Borwein, Jonathan; Lewis, Adrian (2000). Convex Analysis and Nonlinear Optimization. Springer.
Christensen, Peter W.; Klarbring, Anders (2008). An Introduction to Structural Optimization. 153. Springer Science & Business Media. ISBN 9781402086663.
Hiriart-Urruty, Jean-Baptiste; Lemaréchal, Claude (2004). Fundamentals of Convex Analysis. Berlin: Springer.
Hiriart-Urruty, Jean-Baptiste; Lemaréchal, Claude (1993). Convex Analysis and Minimization Algorithms, Volume I: Fundamentals. Grundlehren der Mathematischen Wissenschaften. 305. Berlin: Springer-Verlag. pp. xviii+417. ISBN 978-3-540-56850-6. MR 1261420.
Hiriart-Urruty, Jean-Baptiste; Lemaréchal, Claude (1993). Convex Analysis and Minimization Algorithms, Volume II: Advanced Theory and Bundle Methods. Grundlehren der Mathematischen Wissenschaften. 306. Berlin: Springer-Verlag. pp. xviii+346. ISBN 978-3-540-56852-0. MR 1295240.
Kiwiel, Krzysztof C. (1985). Methods of Descent for Nondifferentiable Optimization. Lecture Notes in Mathematics. New York: Springer-Verlag. ISBN 978-3-540-15642-0.
Lemaréchal, Claude (2001). "Lagrangian relaxation". In Jünger, Michael; Naddef, Denis (eds.). Computational Combinatorial Optimization: Papers from the Spring School held in Schloß Dagstuhl, May 15–19, 2000. Lecture Notes in Computer Science. 2241. Berlin: Springer-Verlag. pp. 112–156. doi:10.1007/3-540-45586-8_4. ISBN 978-3-540-42877-0. MR 1900016. S2CID 9048698.
Nesterov, Yurii; Nemirovskii, Arkadii (1994). Interior Point Polynomial Methods in Convex Programming. SIAM.
Nesterov, Yurii (2004). Introductory Lectures on Convex Optimization. Kluwer Academic Publishers.
Rockafellar, R. T. (1970). Convex Analysis. Princeton: Princeton University Press.
Ruszczyński, Andrzej (2006). Nonlinear Optimization. Princeton University Press.
Schmit, L.A.; Fleury, C. (1980). "Structural synthesis by combining approximation concepts and dual methods". J. Amer. Inst. Aeronaut. Astronaut. 18: 1252–1260.
