The World Handicap System (WHS) seems confused about what the Course Rating should measure. In the WHS's Rules of Handicapping (p. 103) the Course Rating is defined as the expected score of a scratch player. By "expected score" it is assumed the WHS means "average score." The WHS also defines a scratch player as one with a 0.0 Handicap Index (p. 15). To have a 0.0 Handicap Index, the average of a player's 8 lowest differentials out of his latest 20 must equal 0.0. In equation form:
Avg. of 8 low differentials = 0.0 = [(Score1 – CR) + (Score2 – CR) + … + (Score8 – CR)]/8 × 113/SR

where CR = Course Rating and SR = Slope Rating.
For this equation to hold, the average of the player's 8 lowest scores must equal the Course Rating. His expected (average) score over all 20 rounds, therefore, would be higher than the Course Rating. So, what is the Course Rating? Is it a scratch player's expected score or the average of his 8 best scores?
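To make the distinction concrete, here is a minimal sketch in Python of the arithmetic above. It assumes every round is played on the same course and ignores the WHS's playing-conditions adjustment; the function and variable names are illustrative, not taken from the WHS.

```python
# A minimal sketch of the Handicap Index arithmetic, assuming all 20 rounds
# are on one course and ignoring the playing-conditions adjustment.
# All names here are illustrative.

def score_differential(score, course_rating, slope_rating):
    """Score differential: (113 / Slope Rating) * (Score - Course Rating)."""
    return (113 / slope_rating) * (score - course_rating)

def handicap_index(scores, course_rating, slope_rating):
    """Average of the 8 lowest differentials from the latest 20 scores."""
    diffs = sorted(score_differential(s, course_rating, slope_rating)
                   for s in scores[-20:])
    return sum(diffs[:8]) / 8

# A player whose 8 best scores average exactly the Course Rating (72.0)
# comes out at a 0.0 Handicap Index, even though his average score over
# all 20 rounds -- his "expected score" -- is 73.8.
scores = [72, 71, 73, 72, 72, 71, 73, 72] + [75] * 12
print(handicap_index(scores, course_rating=72.0, slope_rating=113))  # 0.0
print(sum(scores) / len(scores))  # 73.8
```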
Complicating matters, the USGA Course Rating Model uses neither of the WHS's definitions of the Course Rating. That Model estimates the Course Rating as the average of the better half of a scratch player's latest 20 scores. Has the Course Rating Model been updated to reflect the new definition (or definitions) of the Course Rating? There is no evidence of any change in the Model.
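For comparison, the Model's estimate is easy to express in the same terms. A sketch of one reading of that definition, again with illustrative names and the same sample rounds:

```python
def course_rating_estimate(scratch_scores):
    """One reading of the Model: average of the better half (best 10)
    of a scratch player's latest 20 scores."""
    latest = sorted(scratch_scores[-20:])
    return sum(latest[:10]) / 10

scores = [72, 71, 73, 72, 72, 71, 73, 72] + [75] * 12  # same sample rounds
print(course_rating_estimate(scores))  # 72.6
```

On these sample rounds the three definitions give three different numbers: 72.0 (average of the 8 best scores), 73.8 (expected score), and 72.6 (better half of the latest 20).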
Does any of this affect the accuracy of the Course Rating? Probably not. A Course Rating is an abstract quantity that cannot be measured like height, weight, or temperature. The Model provides estimates of the Course Rating that should be consistent, but not necessarily accurate. That is, if two courses are similar in distance and obstructions, their Course Ratings should be roughly the same. The confusion about Course Ratings in the WHS will not affect handicaps, but it does suggest the WHS needs to hire a better copy editor for its next edition.