Scoring method employed in trials

Started by gwynndavis, Jan 23, 2023, 03:55 PM



gwynndavis

I broach a topic under this head with trepidation, but this is not about selectors, and nor is it a contribution to the debate (endlessly fascinating no doubt) on the merits of selectorial discretion versus a reliance upon trial results. But if we are to have trials (which I personally favour, although that is not the point), then they need to be scored fairly.

I noted the result of this week's Lady Milne trial with only modest interest. Then I looked at the scores, and in particular the scores for Session 4, which saw a considerable turnaround in fortunes. And what I saw was concerning - or at least, it concerned me.

The conventional means of scoring a Pairs trial is by X-imps - each pair's result on each board being imped against the datum (average) of the scores for that board. Where each board is played at just three tables, as happened at the weekend, then the datum on each board will comprise the average of the scores at just those tables. The computer program does this in the blink of an eye. Doubtless one would like more tables upon which to construct the datum, and (in an ideal world) more boards, but one weekend's play and three tables per board was all they had. Like all bridge scoring, X-imping has random elements, but those random elements are generated by the players participating in the event.
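For anyone who wants the mechanics spelt out, here is a minimal sketch in Python of the datum scoring just described. It assumes the standard IMP scale and takes the datum as the plain average of the N/S results at every table; real scoring programs vary in whether they round the datum or trim extreme results, but the principle is the same.

```python
# Minimal sketch of datum-based (cross-IMP / Butler-style) scoring for one
# board. Assumes the standard IMP scale and a datum equal to the plain
# average of the N/S scores at all tables.

IMP_SCALE = [  # (upper bound of the score difference, IMPs awarded)
    (10, 0), (40, 1), (80, 2), (120, 3), (160, 4), (210, 5), (260, 6),
    (310, 7), (360, 8), (420, 9), (490, 10), (590, 11), (740, 12),
    (890, 13), (1090, 14), (1290, 15), (1490, 16), (1740, 17),
    (1990, 18), (2240, 19), (2490, 20), (2990, 21), (3490, 22), (3990, 23),
]

def imps(diff: float) -> int:
    """Convert a score difference into IMPs, keeping its sign."""
    magnitude = abs(diff)
    for upper, value in IMP_SCALE:
        if magnitude <= upper:
            return value if diff >= 0 else -value
    return 24 if diff >= 0 else -24

def ximp_board(ns_scores: list[int]) -> list[int]:
    """IMP each N/S result on a board against the average of all of them."""
    datum = sum(ns_scores) / len(ns_scores)
    return [imps(score - datum) for score in ns_scores]
```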

This trial appears to have been scored using a different method. If you look at the score for each board it is accompanied in Session 4 by something called 'The Expert View'. This 'Expert View' gives a projected score for each board. I can only assume that this score is arrived at on the basis of a double dummy program - Deep Finesse or something similar. I don't know if the Lady Milne trialists knew in advance that this was how the event was to be scored. It meant that the players were marked NOT against the efforts of their peers, playing in the same event, but against a double dummy program with sight of all four hands. The distortions that this produces can be quite extreme, leading to results that bear no relation to normal bridge scoring, or indeed to normal bridge understanding.

To give one example: Session 4, Board 4. The full hand - the results on the board and the imps scored - is available on the WBU results page. Basically Bd 4 is a tricky 3NT for North/South. Eight tricks are readily available; a skilled declarer might hope to make nine, or the defence might be generous, but that ninth trick is not obvious and even a strong player might go down. 'The Expert View' says that N/S make 3NT. The actual results for N/S at the three tables in the trial were:
3N-1 (-50)
3N-1 (-50)
1N+2 (+150)

Not much in it in terms of scoring, you might think, but N/S at the first two tables scored minus 12 imps, and N/S at the third table scored minus 10. The three East/Wests scored, correspondingly, plus 12, plus 12, and plus 10.
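To see how large a difference the choice of datum makes, here is the same board scored both ways, continuing the sketch above. I am assuming the board was vulnerable, so that the 'Expert View' of 3NT making corresponds to +600 for N/S; that assumption reproduces the -12/-12/-10 actually reported.

```python
# Continuing the sketch above: Session 4, Board 4, scored two ways.
# Assumption: the board was vulnerable, so the 'Expert View' of 3NT making
# by N/S corresponds to +600 (which reproduces the reported -12/-12/-10).

board_4 = [-50, -50, 150]                 # the three actual N/S results

print(ximp_board(board_4))                # against the table average (~ +17):
                                          # [-2, -2, 4]

expert_datum = 600                        # double-dummy 3NT= by N/S, vulnerable
print([imps(s - expert_datum) for s in board_4])
                                          # against the double-dummy datum:
                                          # [-12, -12, -10]
```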

'The Expert View' only makes an appearance for Session 4, but earlier sessions are subject to the same distortions. Take Session 1, Board 17. This is, I would say, a fairly routine 6H for North/South in a good standard event. These were the actual scores on the board:
4H +2
6H =
4H +2

The two pairs who missed the slam scored -11 imps, as one might expect, and their opponents +11. And what of the one pair who bid and made the slam? They scored -2, and their direct opponents +2. What?! Minus 2 imps for bidding and making a slam missed at the other two tables in the event.
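Again continuing the sketch, and assuming the board was not vulnerable (so 4H+2 = +480 and 6H= = +980 for N/S), ordinary three-table cross-IMP scoring would have produced something like this instead:

```python
# Again continuing the sketch: Session 1, Board 17, under ordinary
# three-table cross-IMP scoring. Assumption: the board was not vulnerable,
# so 4H+2 = +480 and 6H= = +980 for N/S.

board_17 = [480, 980, 480]
print(ximp_board(board_17))   # datum ~ +647: [-5, 8, -5]
                              # the slam bidders gain and the missers lose,
                              # unlike the -11 / -2 / -11 actually recorded
```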

I would go out on a limb and say that whoever thinks this is a fair method of scoring has no bridge understanding. There was a move, I think three years ago, to score a Camrose trial against 'Par'. It was never made entirely clear whether 'Par' was to be based on double dummy analysis or was to be arrived at by human agency, whether before the hands were played or, heaven help us, in light of the actual scores recorded. I thought the idea was nuts, and said so at the time. I believe it was quietly dropped.

I am not on the inside track, so I have no idea why 'The Expert View' came to be employed for the LM trial. But, if we are to have trials, let us have trials that, within the inevitable constraints, are scored fairly. We had normal X-imp scoring for the Camrose trial this year, so why introduce a double dummy scoring comparison for the Lady Milne?


Simon Richards

The Selectors are due to meet on Friday morning and this will be one of a number of topics for discussion at that meeting. The references to "The Expert View" reported for Session 4 had already been noted. This has been raised with the Director and we are awaiting feedback on how the scoring used in the Trial has been reported on Bridgewebs.

As entries to the Lady Milne Trial were limited (and one pair had to withdraw at the last minute for understandable reasons), there was concern about the randomness that can occur with cross-IMP scoring across such a small field. The Selectors had therefore previously agreed to use hands taken from a historic, high-quality (but relatively obscure) Ladies event so that more meaningful datums could be produced. It should also be noted that the data from the trial results has been reviewed and analysed by the Selectors and "diced and sliced" using other means. There were also Selectors present at the trial, myself included, to observe what went on.
 

Simon Richards

The Bridgewebs site has been updated and references to 'The Expert View' have now been removed, although there are still some minor glitches.

gwynndavis

Thank you Simon. This is a helpful explanation. X-imps, especially with just three tables per board, do of course present problems of their own. I suspect that importing the datum from another event at which the hands had been played may generate even more anomalies, although I confess I haven't studied the subject in any depth. As you say, it is open to you to conduct other kinds of analysis.


Tony Haworth

May I draw your attention to an often-ignored anomaly when using x-imps for a teams event.
But first: x-imps, when employed for an all-play-all pairs event, are fine.
However, when employed to determine a pair's performance in a teams event, and in particular with a low number of teams, you must remember that this is not an all-play-all situation. In this year's Camrose trials there were eight teams (I think), i.e. 16 pairs. However, since you entered as a team, you actually played only 14 of the other 15 pairs. This can affect a pair's total x-imp score, particularly if a strong pair is teamed with a weaker pair - they never get the chance to play against their weaker team-mates, whereas all other players do.
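A toy sketch of that asymmetry, assuming eight teams of two pairs and a hypothetical schedule in which every pair meets every pair except its own team-mates (all names made up):

```python
# Toy illustration of the asymmetry above. Assumes 8 teams of 2 pairs
# (16 pairs) and a schedule in which each pair meets every pair except
# its own team-mates. Team and pair names are illustrative only.

teams = {f"T{t}": [f"T{t}a", f"T{t}b"] for t in range(1, 9)}
all_pairs = [p for members in teams.values() for p in members]   # 16 pairs

for members in teams.values():
    opponents = [p for p in all_pairs if p not in members]
    # Each pair faces only 14 of the 15 other pairs: the one pair it never
    # meets is its own team-mate pair, so a weak team-mate is a source of
    # imps for everyone else in the field but never for you.
    assert len(opponents) == 14
```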
Personally I don't really see the point of a teams entry when the event is then run as an almost all-play-all (but missing your own team-mates).
Tony

Simon Richards

The comments are noted, but I am not sure how relevant this was to the recent Lady Milne Trial. The Lady Milne trial was principally scored against a datum provided from a historic, but relatively obscure, Ladies event. However, the results using other scoring mechanisms (including the three-table cross-IMP scores) were available to the Selectors and these were used when considering the selection of the team.

I am sure we would all welcome greater player involvement in the trials process in the future, but I fear that, going forward, we will continue to struggle to attract the greater level of interest that would be preferred.