An update to the way the National Rankings are calculated

News
Dave Hebden writes …
When people ask me how the rankings are calculated, they normally soon regret having asked. The system has evolved over the years and does contain some complexity, but overall I think the end result reflects a player's success in tournaments pretty well. Anomalies will always exist in the tables, but each of these tends to get corrected as further results are included.
When I started the rankings in 1981 the fives calendar looked very different from today's. At that time we had just five open singles events plus the University competition. The first singles event of the season was the West of England, held in November, and the last was the Scottish, usually held in May. So we had a summer break of five months or so from competitive fives. Today's fives player is luckier (or unluckier, depending on how you look at it) in having plenty of fives options for the summer months, with the South West, South East, London and Yorkshire events having been added to the singles calendar.
Understandably, in 1981, the process I implemented was very much based on a season-by-season perspective. The current season's results are given 100% weighting, with results from the previous season and from season-2 gradually "aged" to give them less impact. This concept of a "season" no longer makes so much sense: the fives calendar is now more a continuous sequence of events, with the new season's start (the South West) being a rather arbitrary point in time.
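For anyone who likes to see it in code, a minimal sketch of that season-based weighting is below (in Python). The exact "aged" factors aren't given above, so the 0.6 and 0.3 values, and the function name, are illustrative assumptions only:

    # A minimal sketch of the season-based weighting described above.
    # Only the 100% current-season figure is stated in the article;
    # the 0.6 and 0.3 "aged" factors are assumptions for illustration.
    SEASON_WEIGHTS = {
        0: 1.0,  # current season: full weighting
        1: 0.6,  # previous season: aged (assumed factor)
        2: 0.3,  # season-2: aged further (assumed factor)
    }

    def ranking_points(results):
        """results: a list of (seasons_ago, points) pairs for one player."""
        return sum(SEASON_WEIGHTS[age] * pts
                   for age, pts in results
                   if age in SEASON_WEIGHTS)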
In particular, some issues are evident with the current approach. For example:
a) The early season events (South West, etc.) carry rather more weight (100% weighting) than perhaps they should compared with those from the previous season, some of which were played only very recently.
b) The period of time spanned by the events that count in the rankings varies from 2 to 3 years as we progress through the current season. There is no real justification for this.
To address these issues, a new approach is being implemented. The system will now always include 3 years of results. At any point in time the "current" year (at 100% weighting) will be the 12 months leading up to that point, with Year-1 and Year-2 spanning the two yearly periods before that. Year-1 and Year-2 will roll forward in line with the "current" year, with the results from each carrying a reduced weighting. So, as we complete a new tournament, the previous year's running of that event will drop down to a lower weighting category, before eventually disappearing off the rankings radar after 3 years.
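In the same spirit, here is a sketch of the new rolling approach. Again, only the 100% current-year figure is stated above; the reduced Year-1 and Year-2 weightings (0.6 and 0.3) and the function names are assumptions for illustration:

    from datetime import date

    # Assumed weightings for the rolling yearly windows.
    YEAR_WEIGHTS = [1.0, 0.6, 0.3]  # current year, Year-1, Year-2

    def event_weight(event_date, today):
        """Weight for a result, based on how many whole years ago it was played."""
        years_ago = (today - event_date).days // 365
        if 0 <= years_ago < len(YEAR_WEIGHTS):
            return YEAR_WEIGHTS[years_ago]
        return 0.0  # more than 3 years old: off the rankings radar

    def ranking_points(results, today):
        """results: a list of (event_date, points) pairs for one player."""
        return sum(event_weight(d, today) * pts for d, pts in results)

    # Example: a 10-point result from 11 months ago still counts in full.
    print(ranking_points([(date(2018, 10, 1), 10)], date(2019, 9, 1)))  # 10.0

The point to notice is that nothing is tied to a season boundary any more: a result from 11 months ago counts in full, the previous year's running of the same event drops to the Year-1 weighting, and anything over 3 years old falls out of the calculation entirely.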
I plan to issue a new rankings update quite soon (reflecting the recent SW and SE events), and the new approach will initially be adopted for the Doubles only. I will bring the Singles into line at a later point (there is more work involved in that). I've compared the new system against the current approach, and for the top 30 players there turns out to be very little difference in the doubles rankings order. Lower down the list there is progressively more impact as the additional (low-weighting) results from Year-2 come into play. One result of the new process is that the number of players in the rankings will be greater, because a longer period of time is normally included; some might say that is no bad thing.
Note that for the forthcoming update some players might be surprised to see that they have just entered the new doubles rankings despite having last had success way back in the 2016-17 season! Anyway, please bear with me as we make the transition. Things will settle down in due course.
My thanks to Chris Burrows and in particular Will Ellison, who have both provided very useful input to help with these changes.
And if you have managed to continue reading this far, my congratulations!
Dave Hebden