Background

A linear game is a generalization of a two-person zero-sum matrix game [Karlin]. Classically, such a game involves a matrix \(A \in \mathbb{R}^{n \times n}\) and a set (the unit simplex) of “strategies” \(\Delta = \operatorname{conv}\left( \left\lbrace e_{1}, e_{2}, \ldots, e_{n} \right\rbrace \right)\) from which two players are free to choose. If the players choose \(x,y \in \Delta\) respectively, then the game is played by evaluating \(y^{T}Ax\) as the “payoff” to the first player. His payoff comes from the second player, so that their total sums to zero.

Each player will try to maximize his payoff in this scenario, or—what is equivalent—try to minimize the payoff of his opponent. In fact, the existence of optimal strategies is guaranteed [Karlin] for both players. The value of the matrix game \(A\) is the payoff resulting from optimal play,

\[v\left(A\right) = \underset{x \in \Delta}{\max}\ \underset{y\in \Delta}{\min} \left( y^{T}Ax \right) = \underset{y \in \Delta}{\min}\ \underset{x \in \Delta}{\max} \left( y^{T}Ax \right).\]

The payoff to the first player in this case is \(v\left(A\right)\). Corresponding to \(v\left(A\right)\) is an optimal strategy pair \(\left(\bar{x}, \bar{y}\right) \in \Delta \times \Delta\) such that

\[\bar{y}^{T}Ax \le v\left(A\right) = \bar{y}^{T}A\bar{x} \le y^{T}A\bar{x} \text{ for all } \left(x,y\right) \in \Delta \times \Delta.\]
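
As a small worked example (ours, not drawn from the sources cited here), take the “matching pennies” matrix and the uniform strategies

\[A = \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}, \qquad \bar{x} = \bar{y} = \begin{bmatrix} 1/2 \\ 1/2 \end{bmatrix}.\]

Then \(\bar{y}^{T}Ax = 0 = y^{T}A\bar{x}\) for every \(x,y \in \Delta\), so the inequalities above hold with equality throughout and \(v\left(A\right) = 0\): neither player can do better than break even against an opponent who mixes his two strategies equally.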

The relationship between \(A\), \(\bar{x}\), \(\bar{y}\), and \(v\left(A\right)\) has been studied extensively [Kaplansky] [Raghavan]. Motivated by these results, Gowda and Ravindran [GowdaRav] asked whether the matrix \(A\) can be replaced by a linear transformation \(L\), and whether the unit simplex \(\Delta\) can be replaced by a more general set, namely a base of a symmetric cone. In fact, one can go all the way to asymmetric (but still proper) cones. But, since Dunshire can only handle the symmetric case, we will pretend from now on that our cones need to be symmetric. This simplifies some definitions.

What is a linear game?

In the classical setting, the interpretation of the strategies as probabilities results in a strategy set \(\Delta \subseteq \mathbb{R}^{n}_{+}\) that is compact, convex, and does not contain the origin. Moreover, any nonzero \(x \in \mathbb{R}^{n}_{+}\) is a unique positive multiple \(x = \lambda b\) of some \(b \in \Delta\). Several existence and uniqueness results are predicated on those properties.

Definition. Suppose that \(K\) is a cone and \(B \subseteq K\) does not contain the origin. If every nonzero \(x \in K\) can be uniquely represented as \(x = \lambda b\) where \(\lambda > 0\) and \(b \in B\), then \(B\) is a base of \(K\).

The set \(\Delta\) is a compact convex base for the proper cone \(\mathbb{R}^{n}_{+}\). Every \(x \in \Delta\) also has entries that sum to unity, which can be abbreviated by the condition \(\left\langle x,\mathbf{1} \right\rangle = 1\) where \(\mathbf{1} = \left(1,1,\ldots,1\right)^{T}\) happens to lie in the interior of \(\mathbb{R}^{n}_{+}\). In fact, the two conditions \(x \in \mathbb{R}^{n}_{+}\) and \(\left\langle x,\mathbf{1} \right\rangle = 1\) define \(\Delta\). This is no coincidence; whenever \(K\) is a symmetric cone and \(e_{2} \in \operatorname{int}\left(K\right)\), the set \(\left\lbrace x \in K \ \middle|\ \left\langle x,e_{2} \right\rangle = 1 \right\rbrace\) is a compact convex base of \(K\). This motivates a generalization where \(\mathbb{R}^{n}_{+}\) is replaced by a symmetric cone.
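
The nonnegative orthant is not the only familiar example of this construction. Taking \(K\) to be the “ice cream” (Lorentz) cone \(\left\lbrace \left(x_{0},\bar{x}\right) \in \mathbb{R} \times \mathbb{R}^{n-1} \ \middle|\ x_{0} \ge \left\lVert \bar{x} \right\rVert \right\rbrace\), which is also symmetric, and \(e_{2} = \left(1,0,\ldots,0\right)^{T} \in \operatorname{int}\left(K\right)\) gives

\[\left\lbrace x \in K \ \middle|\ \left\langle x,e_{2} \right\rangle = 1 \right\rbrace = \left\lbrace \left(1,\bar{x}\right) \ \middle|\ \left\lVert \bar{x} \right\rVert \le 1 \right\rbrace,\]

a closed unit ball sitting at height one, which is again a compact convex base.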

Definition. Let \(V\) be a finite-dimensional real Hilbert space. A linear game in \(V\) is a tuple \(\left(L,K,e_{1},e_{2}\right)\) where \(L : V \rightarrow V\) is linear, the set \(K\) is a symmetric cone in \(V\), and the points \(e_{1}\) and \(e_{2}\) belong to \(\operatorname{int}\left(K\right)\).

Definition. The strategy sets for our linear game are

\[\begin{split}\begin{aligned} \Delta_{1}\left(L,K,e_{1},e_{2}\right) &= \left\lbrace x \in K\ \middle|\ \left\langle x,e_{2} \right\rangle = 1 \right\rbrace\\ \Delta_{2}\left(L,K,e_{1},e_{2}\right) &= \left\lbrace y \in K\ \middle|\ \left\langle y,e_{1} \right\rangle = 1 \right\rbrace. \end{aligned}\end{split}\]

Since \(e_{1},e_{2} \in \operatorname{int}\left(K\right)\), these are bases for \(K\). We will usually omit the arguments and write \(\Delta_{i}\) to mean \(\Delta_{i}\left(L,K,e_{1},e_{2}\right)\).

To play the game \(\left(L,K,e_{1},e_{2}\right)\), the first player chooses an \(x \in \Delta_{1}\), and the second player independently chooses a \(y \in \Delta_{2}\). This completes the turn, and the payoffs are determined by applying the payoff operator \(\left(x,y\right) \mapsto \left\langle L\left(x\right), y \right\rangle\). The payoff to the first player is \(\left\langle L\left(x\right), y \right\rangle\), and since we want the game to be zero-sum, the payoff to the second player is \(-\left\langle L\left(x\right), y \right\rangle\).
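
As a concrete sketch (ours, with function names invented for illustration and specialized to \(V = \mathbb{R}^{n}\), \(K = \mathbb{R}^{n}_{+}\)), the strategy-set conditions and the payoff operator can be checked numerically with NumPy:

    import numpy as np

    def in_strategy_set(x, e, tol=1e-8):
        """Check x in K = R^n_+ and <x, e> = 1, up to a tolerance."""
        return bool(np.all(x >= -tol) and abs(np.dot(x, e) - 1.0) <= tol)

    def payoffs(L, x, y):
        """Return the payoffs (<L(x), y>, -<L(x), y>) to players one and two."""
        value = float(np.dot(L @ x, y))
        return value, -value

    # Illustrative data: K = R^2_+ with e1 = e2 = (1, 1).
    L = np.array([[1.0, -1.0], [-1.0, 1.0]])
    e1 = e2 = np.array([1.0, 1.0])
    x = np.array([0.5, 0.5])       # player one's strategy, an element of Delta_1
    y = np.array([0.25, 0.75])     # player two's strategy, an element of Delta_2
    assert in_strategy_set(x, e2) and in_strategy_set(y, e1)
    print(payoffs(L, x, y))        # (0.0, -0.0): this particular turn is a draw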

The payoff operator is continuous in both arguments because it is bilinear and the ambient space is finite-dimensional. We constructed the strategy sets \(\Delta_{1}\) and \(\Delta_{2}\) to be compact and convex; as a result, Karlin’s [Karlin] general min-max Theorem 1.5.1 guarantees the existence of optimal strategies for both players.

Definition. A pair \(\left(\bar{x},\bar{y}\right) \in \Delta_{1} \times \Delta_{2}\) is an optimal pair for the game \(\left(L,K,e_{1},e_{2}\right)\) if it satisfies the saddle-point inequality,

\[\left\langle L\left(x\right),\bar{y} \right\rangle \le \left\langle L\left( \bar{x}\right), \bar{y} \right\rangle \le \left\langle L\left(\bar{x}\right),y \right\rangle \text{ for all } \left(x,y\right) \in \Delta_{1} \times \Delta_{2}.\]

At an optimal pair, neither player can unilaterally increase his payoff by changing his strategy. Every game \(\left(L,K,e_{1},e_{2}\right)\) has at least one optimal pair \(\left(\bar{x},\bar{y}\right)\), and the payoff \(\left\langle L \left( \bar{x} \right) , \bar{y} \right\rangle\) is the same for all optimal pairs (again by the min-max theorem). That common payoff is the value of the game, \(v\left(L,K,e_{1},e_{2}\right) = \left\langle L\left(\bar{x}\right), \bar{y} \right\rangle\).

Thanks to Karlin [Karlin], we have an equivalent characterization of a game’s value that does not require us to have a particular optimal pair in mind,

\[v\left( L,K,e_{1},e_{2} \right) = \underset{x \in \Delta_{1}}{\max}\ \underset{y\in \Delta_{2}}{\min}\ \left\langle L\left(x\right),y \right\rangle = \underset{y\in \Delta_{2}}{\min}\ \underset{x \in \Delta_{1}}{\max}\ \left\langle L\left(x\right),y \right\rangle.\]

Linear games reduce to two-person zero-sum matrix games in the right setting.

Example. If \(K = \mathbb{R}^{n}_{+}\) in \(V = \mathbb{R}^{n}\) and \(e_{1} = e_{2} = \left(1,1,\ldots,1\right)^{T} \in \operatorname{int}\left(K\right)\), then \(\Delta_{1} = \Delta_{2} = \Delta\). For any \(L \in \mathbb{R}^{n \times n}\), the linear game \(\left( L,K,e_{1},e_{2} \right)\) is a two-person zero-sum matrix game. Its payoff is \(\left(x,y\right) \mapsto y^{T}Lx\), and its value is

\[v\left( L,K,e_{1},e_{2} \right) = \underset{x \in \Delta}{\max}\ \underset{y\in \Delta}{\min}\ \left( y^{T}Lx \right) = \underset{y\in \Delta}{\min}\ \underset{x \in \Delta}{\max}\ \left( y^{T}Lx \right).\]

Cone program formulation

As early as 1947, von Neumann knew [Dantzig] that a two-person zero-sum matrix game could be expressed as a linear program. It turns out that the two concepts are equivalent, but turning a matrix game into a linear program is the simpler transformation. Cone programs are generalizations of linear programs where the cone is allowed to differ from the nonnegative orthant. We will see that any linear game can be formulated as a cone program and, if we’re lucky, solved.
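
As a minimal sketch of that transformation (ours, using SciPy purely for illustration, with made-up data), player one’s problem for a matrix game \(A\), namely to maximize \(\nu\) subject to \(x \in \Delta\) and \(Ax \ge \nu\mathbf{1}\) componentwise, becomes a linear program:

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[2.0, -1.0],
                  [-1.0, 1.0]])    # an illustrative payoff matrix
    n = A.shape[0]

    # Variables are (nu, x); linprog minimizes, so minimize -nu to maximize nu.
    c = np.hstack([[-1.0], np.zeros(n)])
    A_ub = np.hstack([np.ones((n, 1)), -A])                 # nu*1 - A@x <= 0, i.e. A@x >= nu*1
    b_ub = np.zeros(n)
    A_eq = np.hstack([[0.0], np.ones(n)]).reshape(1, -1)    # <x, 1> = 1
    b_eq = np.array([1.0])
    bounds = [(None, None)] + [(0, None)] * n               # nu is free, x >= 0

    result = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    nu = -result.fun               # the value of the game, approximately 0.2 here
    x = result.x[1:]               # an optimal strategy, approximately (0.4, 0.6)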

Our main tool in formulating our cone program is the following theorem. It closely mimics a classical result in which the cone is the nonnegative orthant and the formulation gives rise to a linear program. The symbols \(\preccurlyeq\) and \(\succcurlyeq\) indicate inequality with respect to the cone ordering; that is, \(x \succcurlyeq y \iff x - y \in K\).

Theorem. In the game \(\left(L,K,e_{1},e_{2}\right)\), suppose that \(\left(p,q\right) \in \Delta_{1} \times \Delta_{2}\) and \(\nu \in \mathbb{R}\). Then \(L^{*}\left( q \right) \preccurlyeq \nu e_{2}\) and \(L \left( p \right) \succcurlyeq \nu e_{1}\) if and only if \(\nu\) is the value of the game \(\left(L,K,e_{1},e_{2}\right)\) and \(\left(p,q\right)\) is an optimal pair for it.
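
To see where these conditions come from, recall that a symmetric cone is self-dual, so \(\left\langle u,v \right\rangle \ge 0\) whenever \(u,v \in K\). If \(L\left(p\right) \succcurlyeq \nu e_{1}\), then for every \(y \in \Delta_{2}\),

\[\left\langle L\left(p\right),y \right\rangle = \left\langle L\left(p\right) - \nu e_{1},y \right\rangle + \nu\left\langle e_{1},y \right\rangle \ge \nu,\]

so by playing \(p\), player one guarantees himself at least \(\nu\). Dually, \(L^{*}\left(q\right) \preccurlyeq \nu e_{2}\) gives \(\left\langle L\left(x\right),q \right\rangle = \left\langle x,L^{*}\left(q\right) \right\rangle \le \nu\) for every \(x \in \Delta_{1}\), so by playing \(q\), player two concedes at most \(\nu\). The theorem says that these two bounds meet exactly at the value of the game.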

The proof of this theorem is not difficult, and the version for \(e_{1} \ne e_{2}\) can easily be deduced from the one given by Gowda and Ravindran [GowdaRav]. Let us now restate the objectives of the two players in terms of this theorem. Player one would like to,

\[\begin{split}\begin{aligned} &\text{ maximize } &\nu &\\ &\text{ subject to }& p &\in K&\\ & & \nu &\in \mathbb{R}&\\ & & \left\langle p,e_{2} \right\rangle &= 1&\\ & & L\left(p\right) &\succcurlyeq \nu e_{1}.& \end{aligned}\end{split}\]

Player two, on the other hand, would like to,

\[\begin{split}\begin{aligned} &\text{ minimize } &\omega &\\ &\text{ subject to }& q &\in K&\\ & & \omega &\in \mathbb{R}&\\ & & \left\langle q,e_{1} \right\rangle &= 1&\\ & & L^{*}\left(q\right) &\preccurlyeq \omega e_{2}.& \end{aligned}\end{split}\]

The CVXOPT library can solve symmetric cone programs in the following primal/dual format:

\[\begin{split}\text{primal} = \left\{ \begin{aligned} \text{ minimize } & c^{T}x \\ \text{ subject to } & Gx + s = h \\ & Ax = b \\ & s \in C, \end{aligned} \right.\end{split}\]
\[\begin{split}\text{dual} = \left\{ \begin{aligned} \text{ maximize } & -h^{T}z - b^{T}y \\ \text{ subject to } & G^{T}z + A^{T}y + c = 0 \\ & z \in C. \end{aligned} \right.\end{split}\]

We will now pull a rabbit out of the hat, and choose the matrices/vectors in these primal/dual programs so as to reconstruct the goals of the two players. Let,

\[\begin{split}\begin{aligned} C &= K \times K\\ x &= \begin{bmatrix} \nu \\ p \end{bmatrix} \\ y &= \begin{bmatrix} \omega \end{bmatrix} \\ b &= \begin{bmatrix} 1 \end{bmatrix} \\ h &= 0 \\ c &= \begin{bmatrix} -1 \\ 0 \end{bmatrix} \\ z &= \begin{bmatrix} z_{1} \\ q \end{bmatrix} \\ A &= \begin{bmatrix} 0 & e_{2}^{T} \end{bmatrix} \\ G &= \begin{bmatrix} 0 & -I\\ e_{1} & -L \end{bmatrix}. \end{aligned}\end{split}\]
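
The following sketch (ours, not Dunshire’s actual implementation) carries out this substitution for the special case \(K = \mathbb{R}^{n}_{+}\), where \(C = K \times K\) is just \(\mathbb{R}^{2n}_{+}\), and hands the result to CVXOPT; the game data are made up for illustration:

    import numpy as np
    from cvxopt import matrix, solvers

    # Illustrative game data over K = R^2_+.
    L = np.array([[2.0, -1.0],
                  [-1.0, 1.0]])
    e1 = np.array([1.0, 1.0])
    e2 = np.array([1.0, 1.0])
    n = L.shape[0]

    # The primal variable is x = (nu, p) in R x R^n.
    c = matrix([-1.0] + [0.0] * n)                      # minimize -nu, i.e. maximize nu
    b = matrix([1.0])
    h = matrix([0.0] * (2 * n))
    A = matrix(np.hstack([[0.0], e2]).reshape(1, -1))   # the row [0, e2^T]: <p, e2> = 1
    G = matrix(np.vstack([
        np.hstack([np.zeros((n, 1)), -np.eye(n)]),      # [0, -I]:  forces p into K
        np.hstack([e1.reshape(n, 1), -L]),              # [e1, -L]: forces L(p) - nu*e1 into K
    ]))

    solvers.options['show_progress'] = False
    dims = {'l': 2 * n, 'q': [], 's': []}               # C = R^{2n}_+ for this choice of K
    sol = solvers.conelp(c, G, h, dims, A, b)

    nu = -sol['primal objective']    # the value of the game, approximately 0.2 here
    p = list(sol['x'])[1:]           # player one's optimal strategy, approximately (0.4, 0.6)
    q = list(sol['z'])[n:]           # player two's optimal strategy (z = (z_1, q)), also (0.4, 0.6)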

Substituting these values into the primal/dual CVXOPT cone programs recovers the objectives of player one (primal) and player two (dual) exactly. Therefore, we can use this formulation to solve a linear game, and that is precisely what Dunshire does.
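
For completeness, here is what solving the same illustrative game might look like through Dunshire itself. This is only a sketch based on the usage shown in Dunshire’s own documentation; the names NonnegativeOrthant, SymmetricLinearGame, and solution() are taken from there, and the package’s documentation remains the authoritative reference:

    from dunshire import *

    # The same illustrative data as above, over K = R^2_+.
    K = NonnegativeOrthant(2)
    L = [[2, -1], [-1, 1]]
    e1 = [1, 1]
    e2 = [1, 1]

    game = SymmetricLinearGame(L, K, e1, e2)
    print(game.solution())   # reports the game value and both players' optimal strategies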