My friend FMB has started a blog called "The Daily Trout," linked on the right, with some very nice hands in the first week. If he keeps up the Daily I'll be extremely impressed. His recent series of posts beginning with "Names withheld to protect the innocent" features a hand that, first off, is a cool bridge story (it was featured in a book on Helgemo), and that also brings up some really nice game theory. I had so much to say that I'm continuing the discussion here. Check out the hand at his site first, then continue here if interested. This is intended to be accessible to anyone with a passing knowledge of equilibrium in game theory, on the level occasionally found in The Bridge World; I hope it doesn't get too dense.
It was with a bit of guilt that I wrote my first comment, which I knew gave only the solution for average-trick maximization, when board-a-match (BAM) play might call for something different. I knew the BAM analysis would be more complex and wasn't sure anyone would care, but I am very glad FMB did care, because now that he has gotten me to think about it more, this is a great hand from a theory point of view! As he points out, this is really a 4-player game: let's call the players S1, E1 (team 1) and S2, E2 (team 2). Each South can plan to go up (U) or duck (D) when East keeps a club, and each East can bare (B) or not bare (N) the K when he has it. There are 3 distinct ways to look at equilibrium in this game:
I) Correlated team equilibrium
Each team picks an overall strategy, which can be any mixture of the 4 pairs of South and East strategies: UB, UN, DB and DN. It's an equilibrium if the other team has no better-than-even response, i.e. no strategy that does better than break even on the board (wins it more often than it loses it).
This becomes in effect a two-“player” symmetric zero-sum game, thinking of each team as a player. I had to turn to Matlab to invert the 4x4 payoff matrix, and eventually found that the unique equilibrium was .5 UN, .25 DB, .25 DN. (For anyone who would rather not fire up Matlab, a rough Python sketch of this kind of computation appears after the observations below.) Notice that:
a) The percentages of B by the Easts (.25) and of U by the Souths (.5) are the same as in the average-trick maximization analysis that looked at just one table.
b) S1 and E1 (and likewise S2 and E2) must correlate their strategies so as never to play UB. This may or may not be possible, but leave that aside for the moment.
c) No one ever wins the board by two tricks, because that would require both Easts to bare while one South goes up and the other ducks; the team whose South goes up would then be playing UB, which is impossible if no team ever plays UB. This is why the average-trick equilibrium translates into a BAM equilibrium here: when the trick difference is always -1, 0 or 1, it gets converted linearly to the BAM scale.
d) Comment c provides some intuition for why UB is avoided; a team that plays UB would be in “danger” of winning the board by two tricks, which wastes some of its average-trick total. Even more intuitively, if your teammate South is playing Up, there is some chance that you, as East, have already won the board by not baring, so baring could be pointlessly risking a valuable trick for a valueless one.
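Here is the promised sketch of that computation, in Python with scipy, for anyone who wants to replicate the Matlab step; it solves the symmetric zero-sum team-vs-team game by linear programming rather than by inverting the matrix. The PAYOFF entries and the helper name solve_zero_sum below are my own placeholders, not numbers from the hand: fill in team 1's expected BAM score (win = +1, tie = 0, loss = -1) for each pair of pure team strategies, in the order UB, UN, DB, DN. With the matrix from the actual hand it should recover the .5 UN, .25 DB, .25 DN mixture.

```python
# A minimal sketch: solve a symmetric zero-sum matrix game by linear programming.
# The PAYOFF entries are placeholders, NOT the values from the actual hand.
import numpy as np
from scipy.optimize import linprog

STRATS = ["UB", "UN", "DB", "DN"]

# Placeholder skew-symmetric matrix (M = -M.T, as a symmetric game requires).
PAYOFF = np.array([
    [ 0.0, -0.1,  0.2,  0.1],
    [ 0.1,  0.0, -0.2,  0.1],
    [-0.2,  0.2,  0.0, -0.1],
    [-0.1, -0.1,  0.1,  0.0],
])

def solve_zero_sum(M):
    """Return an optimal mixed strategy for the row player and the game value."""
    n = M.shape[0]
    # Variables: x_1..x_n (probabilities) and v (guaranteed value); maximize v.
    c = np.zeros(n + 1)
    c[-1] = -1.0                                 # linprog minimizes, so minimize -v
    # One constraint per opposing pure strategy j:  sum_i x_i * M[i, j] >= v
    A_ub = np.hstack([-M.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])   # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]

mix, value = solve_zero_sum(PAYOFF)
for s, prob in zip(STRATS, mix):
    print(f"{s}: {prob:.3f}")
print(f"game value: {value:.3f}")                # 0 for a symmetric game
```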
II) Independent team equilibrium. Each team announces its randomization, which must consist of *independent* randomizations by S and E. It's an equilibrium if the other team has no better-than-even response.
Fascinating to me is that there is no type II equilibrium! If you believe me that the equilibrium in I is unique for its type, this is actually pretty immediate. A team has a better-than-even response to any randomization that isn't the correlated one in I, and no independent randomization can be that one: the equilibrium in I plays U half the time and B a quarter of the time yet never plays UB, whereas independence would make UB come up an eighth of the time. So every independent randomization has a better-than-even response, which implies that one of the four pure strategies must be a winning response, and pure strategies of course satisfy independence.
How is this possible? Doesn’t it violate Nash’s Existence Theorem? No. Why? When Nash’s Theorem is applied to games of more than two players, you have to be careful about the definition of equilibrium. A 4-tuple of strategies is a Nash equilibrium if no *single* player can improve his outcome by deviating. Nash doesn’t recognize teams, so his theorem doesn’t apply to the equilibrium concept defined above (II), which says equilibrium can be broken by two players deviating together. Nevertheless concept II is fairly natural, and there is something disturbing about the fact that there is no such equilibrium. Think about it: If S1 and E1 huddle and decide on a randomization for each, then *whatever* they decided, team 2 has a winning response if they just know the frequencies. This is totally opposed to the usual intuition for mixed-strategy equilibrium.
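To make the non-existence concrete, here is the kind of numerical check one could run (again only a sketch, with the same placeholder matrix standing in for the real one, and product_mix being my own helper): sweep over every independent team randomization, described by p = probability that South plays U and q = probability that East bares, and compute how much the opponents' best pure response wins by. The claim above amounts to saying that with the true matrix this edge is strictly positive everywhere: whatever frequencies a team announces, the other team can do better than break even.

```python
# A rough grid check of the "no independent team equilibrium" claim.
# PAYOFF is a placeholder, NOT the real matrix from the hand; rows/columns are
# team strategies in the order UB, UN, DB, DN, and entries are team 1's
# expected BAM score (+1 win, 0 tie, -1 loss).
import numpy as np

PAYOFF = np.array([[0, -.1, .2, .1],
                   [.1, 0, -.2, .1],
                   [-.2, .2, 0, -.1],
                   [-.1, -.1, .1, 0]])     # placeholder, skew-symmetric

def product_mix(p, q):
    """Joint distribution over (UB, UN, DB, DN) when South plays U with
    probability p and East bares with probability q, independently."""
    return np.array([p * q, p * (1 - q), (1 - p) * q, (1 - p) * (1 - q)])

worst_edge = np.inf
for p in np.linspace(0, 1, 101):
    for q in np.linspace(0, 1, 101):
        x = product_mix(p, q)
        # Against mixture x, the other team's pure strategy j scores -(x @ PAYOFF)[j].
        edge = -(x @ PAYOFF).min()         # best pure response's expected score
        worst_edge = min(worst_edge, edge)

# A strictly positive result means no independent randomization on the grid is
# defensible, which is what non-existence of a type II equilibrium requires
# (for the real matrix; this placeholder need not have that property).
print(f"smallest best-response edge over the grid: {worst_edge:.4f}")
```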
III) 4-player independent Nash equilibrium. Each player announces a randomization – again, teams cannot correlate. It's an equilibrium if no player can improve his lot by changing his plan, with both his opponents *and* his teammate held fixed.
Nash tells us that such a thing definitely exists. FMB must have been referring to this type when he said East should bare 1-sqrt(.5) of the time. I haven’t been able to verify his calculations yet; I get some equations that are a bit hairy when I try to solve for the type III equilibrium.
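One way to sidestep the hairy equations, at least numerically, is to use the fact that each player's expected score is linear in his own mixing probability, so a profile is a type III equilibrium exactly when no single player gains from either of his pure deviations. By symmetry it is natural to hunt for a symmetric equilibrium where both Souths go up with probability p and both Easts bare with probability q. The sketch below just scans a grid for such points; as before, the PAYOFF numbers and the helper names are my own placeholders, so it will not reproduce FMB's 1-sqrt(.5) unless the real matrix is plugged in.

```python
# A rough numerical hunt for a symmetric type III (4-player independent Nash)
# equilibrium.  PAYOFF is the placeholder team-vs-team matrix from the sketches
# above (order UB, UN, DB, DN), not the real one from the hand.
import numpy as np

PAYOFF = np.array([[0, -.1, .2, .1],
                   [.1, 0, -.2, .1],
                   [-.2, .2, 0, -.1],
                   [-.1, -.1, .1, 0]])     # placeholder, skew-symmetric

def joint(p, q):
    """A team's joint distribution over (UB, UN, DB, DN) from independent p, q."""
    return np.array([p * q, p * (1 - q), (1 - p) * q, (1 - p) * (1 - q)])

def team1_score(p1, q1, p2, q2):
    """Team 1's expected BAM score when team i independently plays (p_i, q_i)."""
    return joint(p1, q1) @ PAYOFF @ joint(p2, q2)

def is_symmetric_nash(p, q, tol=1e-4):
    """Check the Nash condition at the symmetric profile (p, q, p, q).
    Payoffs are linear in each player's own probability, so it suffices to
    test the two pure deviations for South 1 and for East 1; by the symmetry
    of the game the same conclusion then holds for team 2's players."""
    base = team1_score(p, q, p, q)                 # 0 at a symmetric profile
    south_dev = max(team1_score(1, q, p, q), team1_score(0, q, p, q))
    east_dev = max(team1_score(p, 1, p, q), team1_score(p, 0, p, q))
    return south_dev <= base + tol and east_dev <= base + tol

grid = np.linspace(0, 1, 201)
candidates = [(round(p, 3), round(q, 3))
              for p in grid for q in grid if is_symmetric_nash(p, q)]
print(f"{len(candidates)} approximate symmetric equilibria, e.g. {candidates[:5]}")
```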
All of this raises the questions: Can teams correlate? Is it legal for them to correlate? I'll comment on that sometime later, but for now I'll note that being able to correlate seems to provide some edge, although it is difficult to say how much: the two-team game where one team can correlate and the other must be independent has no equilibrium! Frankly, I find the conditions in II more natural bridgewise than those in I or III, but II is the one that leads to non-existence!