My friend FMB has started a blog called "The Daily Trout", with some very nice hands in the first week; it's linked on the right. If he keeps up the Daily I'll be extremely impressed. His recent series of posts, beginning with "Names withheld to protect the innocent," features a hand that is, first off, a cool bridge story (featured in a book on Helgemo), and that also raises some really nice game theory. I had so much to say that I'm continuing the discussion here. Check out the hand at his site first, then continue here if interested. This is intended to be accessible to anyone with a passing knowledge of equilibrium in game theory, on the level occasionally found in The Bridge World; I hope it doesn't get too dense.
It was with a bit of guilt that I wrote my first comment, which I knew only gave the solution assuming average-trick maximization, when board-a-match (BAM) play might call for something different. I knew the BAM analysis would be more complex and wasn't sure anyone would care, but I am very glad FMB did care, because now that he has gotten me to think about it more, this is a great hand from a theory point of view! As he points out, this is really a 4-player game: let's call the players S1, E1 (team 1) and S2, E2 (team 2), where each South declares at one table and his teammate East defends at the other. Each South can plan to go up (U) or duck (D) when East keeps a club, and each East can bare (B) or not bare (N) the K when he has it. There are 3 distinct ways to look at equilibrium in this game:
I) Correlated team equilibrium
Each team picks an overall strategy which can be any mixture of the 4 pairs of South and East strategies, UB, UN, DB and DN. It’s an equilibrium if the other team has no better-than-even response.
This becomes in effect a two-“player” symmetric zero-sum game, thinking of each team as a player. I had to turn to Matlab to invert the 4x4 payoff matrix, and eventually found that the unique equilibrium was .5 UN, .25 DB, .25 DN. Notice that:
a) The percentage of B by the Easts (.25) and U by the Souths (.5) is the same as in the average-trick maximization analysis that looked at just one table.
b) S1 and E1, also S2 and E2, must correlate their strategies to never play UB. This may or may not be possible, but leave that aside for the moment.
c) No one ever wins the board by two tricks, because this would require both teams to play B with one D and one U, not possible if no team ever plays UB. This is why the average-trick equilibrium translates into a BAM equilibrium here – when the trick difference is always -1, 0 or 1 it gets converted linearly to the BAM scale.
d) Comment c provides some intuition for why UB is avoided; a team that plays UB would be in “danger” of winning the board by two tricks, which wastes some of their average trick total. Even more intuitively, if your teammate South is playing Up, there is some chance as East that you have already won the board by not baring, so baring could be pointlessly risking a valuable trick for a valueless one.
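Observations (a)-(d) can be checked directly. Here is a sketch (not from the original analysis; all names are mine) that rebuilds the 4x4 BAM payoff matrix and verifies the claimed equilibrium. The trick outcomes encoded in `payoff` are my reading of the three-card ending: rising gains a second trick only against a bare king, while ducking into a bare king loses both remaining tricks because East's club cashes.

```python
from fractions import Fraction as Fr

# A team's pure strategy pairs a South action with an East action:
# South goes Up (U) or Ducks (D); East Bares the king (B) or does Not (N).
STRATS = [(s, e) for s in "UD" for e in "BN"]  # UB, UN, DB, DN

def bam(t1, t2):
    """Board-a-match score for team 1, comparing declarer tricks at the two tables."""
    return Fr(1) if t1 > t2 else Fr(1, 2) if t1 == t2 else Fr(0)

def payoff(team1, team2, r=Fr(2, 3)):
    """Expected BAM score for team 1; S1 declares at table 1, E1 defends at table 2.

    Assumed trick outcomes (my reading of the ending): king offside (West) ->
    ducking wins 2 tricks, rising 1, whatever East does; king onside (East) ->
    rising wins 2 vs a bare K but 1 vs Kx, while ducking wins 1 vs Kx and
    0 vs a bare K (East's club cashes).
    """
    (s1, e1), (s2, e2) = team1, team2
    west = bam(2 if s1 == "D" else 1, 2 if s2 == "D" else 1)
    onside = {("U", "N"): 1, ("U", "B"): 2, ("D", "N"): 1, ("D", "B"): 0}
    east = bam(onside[(s1, e2)], onside[(s2, e1)])
    return (1 - r) * west + r * east

# Claimed correlated equilibrium: .5 UN, .25 DB, .25 DN -- never UB.
mix = {("U", "B"): Fr(0), ("U", "N"): Fr(1, 2),
       ("D", "B"): Fr(1, 4), ("D", "N"): Fr(1, 4)}

scores = {j: sum(p * payoff(i, j) for i, p in mix.items()) for j in STRATS}
assert all(v >= Fr(1, 2) for v in scores.values())  # no better-than-even response
```

Against this mixture, UN, DB and DN all break exactly even, and UB actually loses (the mixing team scores 7/12), matching observation (d) that UB wastes value.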
II) Independent team equilibrium. Each team announces their randomization which must consist of *independent* randomizations by S and E. It's an equilibrium if the other team has no better-than-even response.
Fascinating to me is that there is no type II equilibrium! If you believe me that the equilibrium in I is unique for its type, this is actually pretty immediate. A team has a better-than-even response to any randomization that isn’t the correlated one in I, which implies that one of the four pure strategies must be a winning response, and pure strategies of course satisfy independence.
How is this possible? Doesn’t it violate Nash’s Existence Theorem? No. Why? When Nash’s Theorem is applied to games of more than two players, you have to be careful about the definition of equilibrium. A 4-tuple of strategies is a Nash equilibrium if no *single* player can improve his outcome by deviating. Nash doesn’t recognize teams, so his theorem doesn’t apply to the equilibrium concept defined above (II), which says equilibrium can be broken by two players deviating together. Nevertheless concept II is fairly natural, and there is something disturbing about the fact that there is no such equilibrium. Think about it: If S1 and E1 huddle and decide on a randomization for each, then *whatever* they decided, team 2 has a winning response if they just know the frequencies. This is totally opposed to the usual intuition for mixed-strategy equilibrium.
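The non-existence claim can be probed numerically. The sketch below (my code, under my reading of the trick outcomes in the ending) samples random independent team randomizations and confirms that every one of them leaves the opponents a winning pure response; the bound 41/84 is the best guarantee an independent mixture turns out to have, per a calculation later in this thread.

```python
import random

STRATS = [(s, e) for s in "UD" for e in "BN"]
ONSIDE = {("U", "N"): 1, ("U", "B"): 2, ("D", "N"): 1, ("D", "B"): 0}

def bam(t1, t2):
    return 1.0 if t1 > t2 else 0.5 if t1 == t2 else 0.0

def payoff(team1, team2, r=2 / 3):
    # Trick outcomes are my reading of the three-card ending in the post.
    (s1, e1), (s2, e2) = team1, team2
    west = bam(2 if s1 == "D" else 1, 2 if s2 == "D" else 1)
    east = bam(ONSIDE[(s1, e2)], ONSIDE[(s2, e1)])
    return (1 - r) * west + r * east

def indep(u, b):
    """Independent team randomization: South up with prob u, East bares with prob b."""
    return {("U", "B"): u * b, ("U", "N"): u * (1 - b),
            ("D", "B"): (1 - u) * b, ("D", "N"): (1 - u) * (1 - b)}

random.seed(0)
for _ in range(2000):
    mix = indep(random.random(), random.random())
    # The announcing team's score against the opponents' best pure response:
    guarantee = min(sum(p * payoff(i, j) for i, p in mix.items()) for j in STRATS)
    assert guarantee <= 41 / 84 + 1e-9   # opponents always get at least 43/84
```

The unique correlated equilibrium cannot arise this way: it needs P(UB) = 0 while both U and B are played with positive probability, which is impossible for independent randomizations.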
III) 4-player independent Nash equilibrium. Each player announces a randomization – again, teams cannot correlate. It's an equilibrium if no player can improve his lot by changing his plan, with both his opponents *and* his teammate held fixed.
Nash tells us that such a thing definitely exists. FMB must have been referring to this type when he said East should bare 1-sqrt(.5) of the time. I haven’t been able to verify his calculations yet; I get some equations that are a bit hairy when I try to solve for the type III equilibrium.
All of this raises two questions: Can teams correlate in practice? Is it legal for them to correlate? I'll comment on that sometime later, but for now will note that being able to correlate seems to provide some edge, although it is difficult to quantify: the two-team game where one team can correlate and the other must play independently has no equilibrium! Frankly, I find the conditions in II more natural bridgewise than those in I or III, but II is exactly the concept that leads to non-existence!
17 comments:
On III, I can explain my calculation:
If we have an equilibrium, then a player always going up should expect to break even against a player always playing low. The player always playing low will:
a) win the board if the SK is on the left, 1/3 of the time.
b) tie the board if the SK is on the right (2/3), RHO does not bare (p), and RHO at the other table also does not bare (p, assumed independent).
So score = 1/3 + 1/2 * 2/3 * p*p = 1/2, where p = frequency that the defense doesn't bare = sqrt(1/2).
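Franco's indifference condition can be checked in a couple of lines (a sketch; variable names are mine):

```python
import math

# If everyone else plays the symmetric strategy, "always up" must break even
# against "always low": low wins when the K is offside (1/3) and halves when
# the K is onside (2/3) and neither East bares (probability p * p).
p = math.sqrt(1 / 2)                        # claimed no-bare frequency
score_low = 1 / 3 + 0.5 * (2 / 3) * p * p   # BAM score of the always-low declarer
assert abs(score_low - 0.5) < 1e-12         # exactly break-even
bare = 1 - p                                # bare frequency, about .2929
```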
On I, the idea of correlation among teammates is very interesting. They certainly ought to be able to generate some correlated random numbers from the cards in dummy, for example. I look forward to your followup on this idea.
On II, I think I understand, but demonstrating a specific response to my proposed strategy (bare 1-sqrt(0.5); go up 0.5) would be cool. Maybe I'll work it out while trying to sleep.
Jon,
Do I know who fmb is?
I don't think you can implement the correlated team strategy in I that produces the equilibrium you gave. To avoid UB, one teammate would need to know the realized outcome of the other's randomization before acting.
What about correlated strategies where one side of the team is not randomized? I would think that that's more common than one would think. For example, if you were playing BAM with Barry Crane and partner as teammates, you'd know that they follow his "rules", and can plan certain strategies around that.
Not to take this topic off the math, but is it fair/legal for you to know that your partners will always play for queens over jacks in the minors and queens under jacks in the majors while your opponents don't have this information?
Not really trying to be anonymous, made my profile public and put more info on my blog.
--Franco Baseggio, aka FMB
Pretender,
You're right to question the legality of agreements like the Q over J one you mention...I believe they are definitely against the spirit of the laws, but not the letter -- at least, not specifically mentioned.
Also true: what I mean by correlation in (I) is a rather difficult sort that requires knowing your teammate's realized (ex post) randomization, but this is actually not impossible. As Franco suggested, you and your teammates could agree on a coding scheme where dummy's low spot cards encode a real number in [0,1] which is used as the basis for both randomizations. You could make this code unbreakable (even in theory) by randomizing the code itself in your pre-game team meeting. This again brings up the legal question of whether you can make agreements with your teammates that the opponents do not know about.
Still have to work out some things about the last kind of equilibrium...
Hello again,
Franco is completely right that his strategies (go up .5, bare 1-sqrt(.5)) are an equilibrium by my definition III. He found a big computational shortcut I had missed. A couple more points:
As I said, this isn't an equilibrium by definition I or II. In fact, both UN and DB achieve better than even against it, .5143 and .5345 respectively. If the teammates could get together, they would just have to agree on one of these deterministic combinations (wouldn't have to do correlated randomizations) in order to win against the independent randomization. But each of the 4 players individually is powerless to do better than even.
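These numbers can be reproduced with a short sketch (my code; the trick table is my reading of the ending). It also verifies the type III property: the team responses UN and DB win, yet every unilateral deviation by a single player scores exactly even.

```python
import math

STRATS = [(s, e) for s in "UD" for e in "BN"]
ONSIDE = {("U", "N"): 1, ("U", "B"): 2, ("D", "N"): 1, ("D", "B"): 0}

def bam(t1, t2):
    return 1.0 if t1 > t2 else 0.5 if t1 == t2 else 0.0

def payoff(team1, team2, r=2 / 3):
    (s1, e1), (s2, e2) = team1, team2
    west = bam(2 if s1 == "D" else 1, 2 if s2 == "D" else 1)
    east = bam(ONSIDE[(s1, e2)], ONSIDE[(s2, e1)])
    return (1 - r) * west + r * east

def indep(u, b):
    return {("U", "B"): u * b, ("U", "N"): u * (1 - b),
            ("D", "B"): (1 - u) * b, ("D", "N"): (1 - u) * (1 - b)}

def expected(d1, d2):
    return sum(p * q * payoff(i, j) for i, p in d1.items() for j, q in d2.items())

b_eq = 1 - math.sqrt(0.5)
franco = indep(0.5, b_eq)           # the type III strategies, both players

# Coordinated team responses win:
un = expected(indep(1, 0), franco)  # pure UN, about .5143
db = expected(indep(0, 1), franco)  # pure DB, about .5345

# ...but no single player gains by deviating alone.  The payoff is linear in
# each probability, so checking the two endpoints of each suffices:
for u1 in (0.0, 1.0):
    assert abs(expected(indep(u1, b_eq), franco) - 0.5) < 1e-9
for b1 in (0.0, 1.0):
    assert abs(expected(indep(0.5, b1), franco) - 0.5) < 1e-9
```

The deviation check is exactly why this is an equilibrium in sense III even though a coordinated team response beats it in sense II.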
Last comment: Franco's argument shows that his equilibrium is the only *symmetric* type III equilibrium. I don't know at the moment whether there are asymmetric equilibria (I know there are none for type I). Also, we could say that .0345 is the "edge" which this independent randomization concedes to an opposing team which knows what it is and can coordinate its response. I know any independent randomization concedes such a non-zero edge; I don't know the minimum possible. I'm very curious (I hope one or two readers still are), so I'm still working on this.
I can answer the last problem I posed (used numerical methods.) The independent randomization (Up 4/7, Bare 1/4) holds the other team to an edge of 1/84, i.e. they can average .512, which they can achieve by any strategy other than UB. This is the minimum possible edge for the opponents if you must announce an independent randomization. To put it another way, if this board were played repeatedly with Team 1 forced to announce an independent randomization, after which Team 2 can do as they please, Team 2 should have a "51.2% game." With correlation allowed, there would be no edge. These strategies are not an equilibrium in any sense I mentioned before, incidentally.
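A sketch of the verification (my code, same assumed trick table as in the earlier snippets; the grid search is coarse, so it corroborates rather than proves that (4/7, 1/4) is the minimax announcement):

```python
STRATS = [(s, e) for s in "UD" for e in "BN"]
ONSIDE = {("U", "N"): 1, ("U", "B"): 2, ("D", "N"): 1, ("D", "B"): 0}

def bam(t1, t2):
    return 1.0 if t1 > t2 else 0.5 if t1 == t2 else 0.0

def payoff(team1, team2, r=2 / 3):
    (s1, e1), (s2, e2) = team1, team2
    west = bam(2 if s1 == "D" else 1, 2 if s2 == "D" else 1)
    east = bam(ONSIDE[(s1, e2)], ONSIDE[(s2, e1)])
    return (1 - r) * west + r * east

def indep(u, b):
    return {("U", "B"): u * b, ("U", "N"): u * (1 - b),
            ("D", "B"): (1 - u) * b, ("D", "N"): (1 - u) * (1 - b)}

def guarantee(u, b):
    """Announcing team's score against the opponents' best pure response."""
    mix = indep(u, b)
    return min(sum(p * payoff(i, j) for i, p in mix.items()) for j in STRATS)

# The claimed minimax announcement holds the opponents to 43/84, about .512:
assert abs(guarantee(4 / 7, 1 / 4) - 41 / 84) < 1e-9

# Coarse corroboration that no other independent mix guarantees more:
best = max(guarantee(u / 100, b / 100) for u in range(101) for b in range(101))
assert best <= 41 / 84 + 1e-9
```

Against (4/7, 1/4), any response except UB nets the opponents exactly 43/84; responding UB nets them only 3/7.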
So, in a matchpoint game (of expert game theorists) there is an equilibrium strategy, since you really won't be coordinating.
It seems like there's probably a rule of thumb for coordinating built around the idea of not winning the board twice: if your teammate is going to go up and you have the K, baring it risks half a board with no upside. (Note that UB is the only combination that loses to non-coordinators.)
As for whether this coordination would be legal (similarly playing Q/J in majors only), I believe it is within the spirit: disclosure is about preventing secret communication about the cards that you're dealt.
Though I guess that means NW, NE, SW, and SE need to have distinct random seeds that they don't share with their partner, so that (say) East's falsecards don't communicate anything to West. I suppose, also, that the existence of coordination might need to be disclosed, since this might affect the odds of falsecarding, which does tell one defender something about the other's hand.
Some "research" questions:
1. What's the situation with the biggest vig for coordination? I feel like it's probably something like this position with different prior odds for the location of the K.
2. Is there a situation where the uncoordinated equilibrium strategy can be beaten by a coordinated one, but not a pure one? My intuition says no, but I don't have a proof right now.
For what it's worth, I've replicated the calculations, except that I didn't do Jonathan's minimax calculation for Scenario II, and I get the same results.
Asymmetric equilibria would be very cool, and I don't see why they couldn't occur in theory. Once you move out of LP territory (Scenario I) and into quadratic or even higher forms (Scenario III), you can probably produce multiple zeros in [0, 1]^N. Whether or not they're all stable I don't know. I have a feeling that some generalized form of Nash should apply.
I think it's perfectly fine for teammates to correlate. I think of it as an extension of coordination within a single suit at the same table. Here we would let East and West coordinate, as they are assumed to have perfect knowledge of the layout, and so they know not only that partner is randomizing but can tell which choice he has made. (This assumes a somewhat rare case where both East and West would randomize in the same suit...)
The team would have to disclose their correlated strategy.
I think it's clear that an uncorrelated strategy is best beaten by a pure strategy (if it can be beaten at all). The optimization for a correlated strategy is linear, and there is no reason to mix -- there will always be a vertex that is at least as good as any edge in the simplex, and we're not concerned with finding a strategy that is resilient to the opponent's further response.
Soren
Hi,
Certainly true that if a mixture loses to anything, it loses to a pure strategy.
An asymmetric equilibrium of the 4-player game *would* be cool, and I can't see any easy proof against it, but my numerical search strongly suggests the symmetric eq. is the only one. I ran a Newton algorithm with a thousand random starting points, and they all converged to the symmetric equilibrium.
I guess, despite what I said earlier, I agree with Franco and Soren that coordination among teammates is in the spirit of the game in the purest sense (and perhaps it's come across: I'm a bit of a purist.) I do think any agreement would have to be disclosed, though, which makes the whole thing too obscure for practical purposes. (Yes, we passed that point a while ago.)
Just to make things a bit more complex, suppose this is the ending:
....AQ
....2
....-
....-
54.....K6
3......-
-......-
-......2
....732
....-
....-
....-
South doesn't know the location of the SK, and East doesn't know the location of the H3. So East doesn't know what South's prior is for where the SK is, and therefore doesn't know how often to falsecard (presuming that contract status is not important).
I'm guessing the optimum is not particularly harder to compute, nor particularly more interesting. But maybe this twist adds something.
Also, note that coordination plans relying on the dummy only work when there isn't some random variation in which side declares.
I have worked some more on Scenario III, which corresponds to duplicate bridge.
I’ll use r to mean the probability that East has the spade king. In our example it is 2/3.
It turns out that East should bare the king with probability 1 +/- sqrt(2 - 1/r). One root is always spurious (greater than 1) and the other is valid as long as r >= 1/2. So asymmetric equilibria don't exist. Of course, if West is more likely to have the spade king, declarer will always duck.
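Soren's root can be sanity-checked against the break-even condition for general r (a sketch; the condition generalizes Franco's r = 2/3 calculation, and the code is mine):

```python
import math

# With r = P(East holds the K), the always-low declarer scores (1 - r)
# outright plus half of r * p * p, where p is the no-bare frequency.
# Break-even at 1/2 forces p = sqrt(2 - 1/r), a bare frequency of
# 1 - sqrt(2 - 1/r); the other root, 1 + sqrt(2 - 1/r), exceeds 1.
for r in (0.5, 0.55, 2 / 3, 0.8, 0.95):
    p = math.sqrt(2 - 1 / r)
    assert abs((1 - r) + r * p * p / 2 - 0.5) < 1e-12   # break-even
    assert 0.0 <= 1 - p <= 1.0     # valid bare probability for r >= 1/2
    assert 1 + p >= 1.0            # the "+" root is never a probability below 1
```

At r = 2/3 this recovers the bare frequency 1 - sqrt(.5) from earlier in the thread.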
There might exist other set-ups where the solutions are not symmetrical around 1 but around some other number between 0 and 1 instead. That would be the recipe for asymmetric equilibria.
It occurs to me that there are implications from the earlier play of the hand. If West has the KJ of spades, the best he can achieve in the three-card ending is to score the king some of the time. In this case, East should have played a spade when in with the club ace in order to score the king all the time. Therefore, if West has the king, then East should have the jack, and the real likelihood that East has the king is 5/7.
Furthermore, in those universes where East holds Jxxx, he must pitch the jack 40% of the time in order to make it equally likely that he has the king given pitches of x-x-J and x-x-x. I think Jonathan was correct that Helgemo would blank the king much more than a quarter of the time in practice. Similarly, I think he would have pitched the jack from Jxxx much more often than 40% of the time – it's the sexy play. Therefore the fact that he didn't suggests that West had it, and therefore West doesn't have as much room for the king, probabilistically speaking.
A few comments ago, I answered the question of what independent randomization by team 1 holds the opposing team to the smallest vig, if they respond optimally. Franco wondered for what value of r this vig is largest (r = prior probability of the king being on the right). I can answer this now: the optimal r = (9+2*sqrt(3))/23, about .542, and the edge for team 2 is .0307, i.e. they score 53.07% on average.
Where does this come from? Basically, for a given r between .5 and 1, there are two locally optimal strategies for team 1, determined by: (i) team 2 gets an equal edge from UN, DB or DN; (ii) team 2 gets an equal edge from UB, UN or DB. When r > .542, (i) is the global optimum, as in the r = 2/3 case we worked on all along, but when r < .542, (ii) is the global optimum. The maximum vig occurs at the cutoff, where team 1 has two very different strategies that hold the responders to the 53.07% game. Roughly these are (i) (.63 Up, .42 Bare) and (ii) (.13, .84). Note that always playing DB here would concede the opponents a game of r = 54.2%; team 1's best independent randomization only cuts that by about 1.1%.
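The cutoff can be corroborated with a crude grid search (my code, under my assumed trick table; a grid this coarse only approximates the stated values r about .542 and vig about .0307, and slightly overstates the vig):

```python
STRATS = [(s, e) for s in "UD" for e in "BN"]
ONSIDE = {("U", "N"): 1, ("U", "B"): 2, ("D", "N"): 1, ("D", "B"): 0}

def bam(t1, t2):
    return 1.0 if t1 > t2 else 0.5 if t1 == t2 else 0.0

def payoff(team1, team2, r):
    (s1, e1), (s2, e2) = team1, team2
    west = bam(2 if s1 == "D" else 1, 2 if s2 == "D" else 1)
    east = bam(ONSIDE[(s1, e2)], ONSIDE[(s2, e1)])
    return (1 - r) * west + r * east

def vig(r, steps=100):
    """Smallest edge team 1 must concede when announcing an independent (u, b),
    approximated by a grid search over announcements."""
    best = 0.0
    for iu in range(steps + 1):
        for ib in range(steps + 1):
            u, b = iu / steps, ib / steps
            mix = {("U", "B"): u * b, ("U", "N"): u * (1 - b),
                   ("D", "B"): (1 - u) * b, ("D", "N"): (1 - u) * (1 - b)}
            worst = min(sum(p * payoff(i, j, r) for i, p in mix.items())
                        for j in STRATS)
            best = max(best, worst)
    return 0.5 - best   # opponents' guaranteed edge after a best response

curve = [(0.50 + k * 0.005, vig(0.50 + k * 0.005)) for k in range(21)]
r_star, v_star = max(curve, key=lambda t: t[1])   # peak of the vig curve
```

The vig rises from zero at r = 1/2 (where pure DB concedes nothing), peaks near the cutoff, and falls back toward 1/84 at r = 2/3.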
Also: I can give the correlated equilibrium strategies for a general r. Basically you take the strategies that are average-tricks optimal in a single-table game (.5 Up, (1-r)/2r Bare) and coordinate to never play UB which might "win the board twice." That gives (for r> .5) (0 UB, .5 UN, (1-r)/2r DB, (2r-1)/2r DN) -- this equilibrium is always unique. When r<.5, as Franco pointed out, the pure strategy DB is uniquely optimal (at r=.5, multiple equilibria including the pure one.)
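The general-r recipe can be verified exactly with rational arithmetic (a sketch; the trick table is, as in my other snippets, my reading of the ending):

```python
from fractions import Fraction as Fr

STRATS = [(s, e) for s in "UD" for e in "BN"]
ONSIDE = {("U", "N"): 1, ("U", "B"): 2, ("D", "N"): 1, ("D", "B"): 0}

def bam(t1, t2):
    return Fr(1) if t1 > t2 else Fr(1, 2) if t1 == t2 else Fr(0)

def payoff(team1, team2, r):
    (s1, e1), (s2, e2) = team1, team2
    west = bam(2 if s1 == "D" else 1, 2 if s2 == "D" else 1)
    east = bam(ONSIDE[(s1, e2)], ONSIDE[(s2, e1)])
    return (1 - r) * west + r * east

def correlated_eq(r):
    """Claimed correlated BAM equilibrium for r > 1/2: UB is never played."""
    return {("U", "B"): Fr(0), ("U", "N"): Fr(1, 2),
            ("D", "B"): (1 - r) / (2 * r), ("D", "N"): (2 * r - 1) / (2 * r)}

for r in (Fr(11, 20), Fr(2, 3), Fr(4, 5), Fr(19, 20)):
    mix = correlated_eq(r)
    assert sum(mix.values()) == 1
    for j in STRATS:   # no pure response does better than even
        assert sum(p * payoff(i, j, r) for i, p in mix.items()) >= Fr(1, 2)
```

At r = 2/3 this reduces to the (.5 UN, .25 DB, .25 DN) mixture from the original post.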
Franco and Soren both made the good point that the equilibrium where players all act independently (which I called type III) makes sense for a pair game. But consider this scenario: on Friday the Game Theorist Pairs begins; everyone has studied this position and is all set to play according to (.5, 1-sqrt(2-1/r)) if it occurs. No one has any individual incentive to change this plan. But... about half the Game Theorists also belong to the Masonic Lodge, which has a meeting Thursday night. The Masonic game theorists all agree to play D or B, should this position arise against non-Masons. Then the Masons will score better than average, the non-Masons worse. The idea is that an equilibrium that isn't "coalition-proof" isn't very stable. ("Coalition-proof" means just what you would think: that no *set* of players can change their actions and make themselves all better off. As here, coalition-proof equilibria may not exist.) Unlike in a straightforward two-player case, equilibrium behavior doesn't guarantee you a "fair" average payoff against any possible behavior by anyone else.
The coalition seems illegal to me.
So it does sound like the general rule of "don't win the board twice" works for teams. They just need a way of figuring out who will try to win it.
Yes, you could make a blanket agreement on your BAM team that in any situation with close choices between risky and safe actions, "NS try to win the board on even boards, EW on odd boards." There are bidding situations where this could be relevant also. It doesn't seem quite right to have this as a secret agreement, though.
The Masonic coalition does seem very shady, but it would be tricky to write a law against it. Try: "No group of players shall meet and discuss how they will play certain hands, with the intention that they will score higher (and hence others lower)." Well, that outlaws people teaching each other to play better bridge! The law would have to reference the fact that the agreed strategy is effective *only* because it is secret... that is what seems to make it wrong.
GL at the Washington NABC. I won't be there (went to Houston, going to San Diego), but I'll be checking the DBs and pulling for you.
I think it would be sufficient to prohibit agreeing to vary your play between opponents in group and out.