### Yao's principle

In computational complexity theory, Yao's principle (also called Yao's minimax principle or Yao's lemma) states that the expected cost of a randomized algorithm on the worst-case input is no better than the expected cost for a worst-case probability distribution on the inputs of the deterministic algorithm that performs best against that distribution. Thus, to establish a lower bound on the performance of randomized algorithms, it suffices to find an appropriate distribution of difficult inputs, and to prove that no deterministic algorithm can perform well against that distribution. This principle is named after Andrew Yao, who first proposed it.

Yao's principle may be interpreted in game-theoretic terms, via a two-player zero-sum game in which one player, Alice, selects a deterministic algorithm, the other player, Bob, selects an input, and the payoff is the cost of the selected algorithm on the selected input. Any randomized algorithm R may be interpreted as a randomized choice among deterministic algorithms, and thus as a strategy for Alice. By von Neumann's minimax theorem, Bob has a randomized strategy that performs at least as well against R as it does against the best pure strategy Alice might choose; that is, Bob's strategy defines a distribution on the inputs such that the expected cost of R on that distribution (and therefore also the worst-case expected cost of R) is no better than the expected cost of any single deterministic algorithm against the same distribution.
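In this game-theoretic reading, the minimax theorem can be written out explicitly. Writing $$p$$ for a mixed strategy of Alice (a distribution over deterministic algorithms) and $$q$$ for a mixed strategy of Bob (a distribution over inputs) — notation chosen here to match the statement below — von Neumann's theorem asserts

$${\underset {p}{\min }}\ {\underset {q}{\max }}\ \mathbf {E} _{a\sim p,\,x\sim q}[c(a,x)]={\underset {q}{\max }}\ {\underset {p}{\min }}\ \mathbf {E} _{a\sim p,\,x\sim q}[c(a,x)].$$

Yao's principle itself is the "easy" direction of this equality (weak duality, $$\min \max \geq \max \min$$), which holds even without invoking the full minimax theorem.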

#### Statement

The formulation below states the principle for Las Vegas randomized algorithms, i.e., distributions over deterministic algorithms that are correct on every input but have varying costs. It is straightforward to adapt the principle to Monte Carlo algorithms, i.e., distributions over deterministic algorithms that have bounded costs but can be incorrect on some inputs.

Consider a problem over the inputs $${\mathcal {X}}$$, and let $${\mathcal {A}}$$ be the set of all possible deterministic algorithms that correctly solve the problem. For any algorithm $$a\in {\mathcal {A}}$$ and input $$x\in {\mathcal {X}}$$, let $$c(a,x)\geq 0$$ be the cost of algorithm $$a$$ run on input $$x$$.

Let p be a probability distribution over the algorithms $${\mathcal {A}}$$, and let A denote a random algorithm chosen according to p. Let q be a probability distribution over the inputs $${\mathcal {X}}$$, and let X denote a random input chosen according to q. Then,

$${\underset {x\in {\mathcal {X}}}{\max }}\ \mathbf {E} [c(A,x)]\geq {\underset {a\in {\mathcal {A}}}{\min }}\ \mathbf {E} [c(a,X)].$$

That is, the worst-case expected cost of the randomized algorithm is at least the expected cost of the best deterministic algorithm against the input distribution $$q$$.
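As a concrete illustration, the following sketch evaluates both sides of the inequality for a small game with two deterministic algorithms and two inputs; the cost matrix and the uniform distributions $$p$$ and $$q$$ are made up for this example:

```python
# Toy illustration of Yao's principle (all numbers invented for this sketch).
# cost[a][x] is the cost of deterministic algorithm a on input x.
cost = [[1.0, 3.0],   # costs of algorithm a0 on inputs x0, x1
        [4.0, 2.0]]   # costs of algorithm a1 on inputs x0, x1

p = [0.5, 0.5]  # randomized algorithm A: pick a0 or a1 uniformly
q = [0.5, 0.5]  # input distribution: pick x0 or x1 uniformly

# Left-hand side: worst-case expected cost of the randomized algorithm,
# max over inputs x of E[c(A, x)].
lhs = max(sum(p[a] * cost[a][x] for a in range(2)) for x in range(2))

# Right-hand side: cost of the best deterministic algorithm against q,
# min over algorithms a of E[c(a, X)].
rhs = min(sum(q[x] * cost[a][x] for x in range(2)) for a in range(2))

print(lhs, rhs)      # prints: 2.5 2.0
assert lhs >= rhs    # Yao's principle: lhs is never smaller than rhs
```

Here any deterministic choice by Alice costs 3 or 4 on its worse input, so the uniform mixture (worst-case expected cost 2.5) genuinely helps her, yet it still cannot beat the 2.0 that the best deterministic algorithm achieves against the uniform input distribution.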
#### Proof

Let $$C={\underset {x\in {\mathcal {X}}}{\max }}\ \mathbf {E} [c(A,x)]$$ and $$D={\underset {a\in {\mathcal {A}}}{\min }}\ \mathbf {E} [c(a,X)]$$. We have

$$C=\sum _{x}q_{x}C\geq \sum _{x}q_{x}\mathbf {E} [c(A,x)]=\sum _{x}q_{x}\sum _{a}p_{a}c(a,x)=\sum _{a}p_{a}\sum _{x}q_{x}c(a,x)=\sum _{a}p_{a}\mathbf {E} [c(a,X)]\geq \sum _{a}p_{a}D=D.$$
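The key step in this chain is the exchange of the order of summation in the middle: averaging over inputs first and algorithms second gives the same number as the reverse order. The following check makes that step concrete, reusing the same made-up 2×2 cost matrix and uniform distributions as in the example above:

```python
# Numeric check of the exchange-of-summation step in the proof.
# cost[a][x], p, and q are the same invented example values as before.
cost = [[1.0, 3.0], [4.0, 2.0]]
p = [0.5, 0.5]  # distribution over algorithms
q = [0.5, 0.5]  # distribution over inputs

# sum_x q_x * E[c(A, x)]: average over inputs of the randomized algorithm's cost
by_inputs = sum(q[x] * sum(p[a] * cost[a][x] for a in range(2))
                for x in range(2))

# sum_a p_a * E[c(a, X)]: the same double sum, taken in the other order
by_algorithms = sum(p[a] * sum(q[x] * cost[a][x] for x in range(2))
                    for a in range(2))

assert abs(by_inputs - by_algorithms) < 1e-12  # both equal E[c(A, X)]
```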

As mentioned above, this theorem can also be seen as a special case of von Neumann's minimax theorem.
#### References

Borodin, Allan; El-Yaniv, Ran (2005), "8.3 Yao's principle: A technique for obtaining lower bounds", Online Computation and Competitive Analysis, Cambridge University Press, pp. 115–120, ISBN 9780521619462.

Yao, Andrew (1977), "Probabilistic computations: Toward a unified measure of complexity", Proceedings of the 18th IEEE Symposium on Foundations of Computer Science (FOCS), pp. 222–227, doi:10.1109/SFCS.1977.24.

Fortnow, Lance (October 16, 2006), "Favorite theorems: Yao principle", Computational Complexity (blog).