We are living in the midst of the ultrarunning boom.
Frank Shorter's victory in the
Olympic Marathon in Munich in 1972 is often cited as the launching point of the US running boom. The explosion in popularity of "jogging" and road-racing in general (and marathons in particular) has its roots in that legendary race, as well as in the publication of Jim Fixx's seminal The Complete Book of Running in 1977. Forty years later, ultrarunning is having its moment. The success of Dean Karnazes' Ultramarathon Man and Chris McDougall's Born to Run has helped to fuel an unprecedented growth in ultramarathon participation. An estimated 70,000 folks will complete an ultra this year, an
increase of over 400% in the past fifteen years, according to Ultrarunning Magazine, the bible of the sport.
While it's still not exactly mainstream--that's roughly 13% of the 540,000 marathoners expected this year, and marathoning isn't exactly a mainstream sport itself--the interest in our little niche has certainly swelled to unprecedented levels. Sponsorship money is flowing--well, maybe not flowing per se, but at least trickling in.
Ultrarunnerpodcast and
Talk Ultra boast thousands of downloads a month.
Ultrasportslive TV and
irunfar.com provide live coverage of major races. Stupid blogs like this one seem to be multiplying like rabbits.
So I've decided to do my part in serving the Ultrarunning Boom. If we're going to be a mainstream sport, let's act like it, dammit! Let's get in on the sports conversation. Forget fighting about whether
LeBron could take MJ one-on-one. We want people arguing in bars (or, more likely,
craft-beer tasting rooms) about who the best ultrarunners are, right? Well, fear not. Your prayers have been answered with the Ultrarunning National Rankings.
In truth, I've been kicking this idea around for a few years. I train with a lot of tri-geeks, and they all have their
national rankings, both overall and for their age groups, courtesy of
USAT. I thought that was pretty cool, and figured a similar idea--a national ranking for every ultra finisher in the country--would be pretty awesome. I didn't particularly like the
USAT formula, though, which involves comparing performances to estimated times by certain "pacesetters" in the race. I didn't love the
Ultrasignup rankings either, which work similarly, scoring finishers as a percentage of the race winner's time--that approach penalizes folks for racing in competitive fields, and rewards those who only run races they know they can win. (Though admittedly the database power of Ultrasignup, allowing them to rank absolutely everybody, is very impressive.) I wanted a system that acknowledged that not all races were created equal. And so I found my model: golf.
The
World Golf Rankings are
very complicated, and may not be perfect, but they provided the basis for what I wanted in my system. In the WGR, players accumulate points via their performances in tournaments on the various world tours. The tours are all ranked--the US tour ranks the highest, followed by the European tour, with various weightings for the Australian tour, the Japanese tour, and the assorted mini-tours around the world--and the events within those tours are all ranked, too, based on how many of the top players in the world are playing. Each tournament is assigned a value of importance--the majors are the highest, of course, with the World Golf Championships a step below. The combination of which tour is involved, what the event is, and how many top players are there determines how many ranking points are available, and how many players at that tournament will receive points. The cumulative points are then divided by the number of events the player finished, and the result gives the player the number used for their ranking.
This was what I wanted. Just like in golf, some events on the ultra calendar--
Western States,
Leadville,
UTMB--are more important than others. Placing highly in those events should carry more weight. After all, which impresses you more: my win in a local 50K, or, say, Dominic Grossman's 19th-place finish at States last year? (If you have to think about it, the rest of this post probably isn't for you.) I could simplify things somewhat--ultrarunning doesn't have tours, for one thing--but the basic ideas were there. Win small events, get a bit of credit. Win bigger events, get more credit. Beat the best runners in the country, get even more.
Initially, I had envisioned something akin to what USAT and Ultrasignup provided: a ranking for every finisher of an ultramarathon in the US. Quickly, though, the impracticality of this idea became apparent. First of all, there was just too much data to enter manually. I don't have an automated database like Ultrasignup does, and I can't manually enter, say, 1100 finishers at JFK into a spreadsheet. Plus, even if I figured out a way to automate it, more problems reared their heads. For one, duplicate names--how would I deal with, say, having five different Matt Smiths in the results? (I've encountered this problem on Ultrasignup, which sometimes confuses me with another Jason Friedman about my age who happens to live less than 100 miles away.) Limiting the rankings to top finishers doesn't eliminate this problem, of course, but it makes it much more manageable. Also, golf doesn't award points for every finisher--you have to meet certain minimum criteria at a tournament for ranking points to become available. That might mean making the cut, or the top 20, or whatever. But not everyone gets points. So not every finisher was going to get a ranking. I had to make peace with that.
Trying to be as comprehensive as possible, I'm including every domestic race I can, as well as major international ones which might attract the top US talent. Using the
Ultrarunning Magazine race calendar, I started by assigning each race a score on a five-point scale. Most races were ranked as Level 1: there are over 1000 ultras in the country, and you haven't heard of the overwhelming majority of them. Level 2 races are slightly more prestigious--they might have some local or regional cachet, or they might have some name recognition by virtue of being associated with a race where a different distance is more important--say, the Ice Age 50K, which gains some prestige by riding the coattails of the
Ice Age 50 Mile, but is decidedly the less important of the two events. Level 3 races have a strong regional importance and maybe some mild prominence on the national stage, but aren't quite attracting the top fields--think
Leona Divide or
Umstead. Level 4 races are national-class events--
Miwok,
Chuckanut,
Speedgoat--that are separated from the top only by degrees. (By default, I assigned all national championship races a Level 4. They should be a big deal, even though not every US championship is created equal.) Level 5 is reserved for the true majors: States, Leadville,
Sonoma,
North Face, and a handful of others where, if you win, you might as well consider retiring right there on the spot, since it probably will never get any better. IAU World Championships and World Cup races are automatically granted a Level 5.
Using the same ratios for points and places that are used by the WGR, I then established how many points are available for each level of race, and how deep the scoring goes. A Level 1 race is worth five points to the winner, three for second, and one for third. Level 2 races score out to the top 5; Level 3 the top 10; Level 4 the top 15; and Level 5 the top 25. Generally, second place is worth about 60% of the points of first place (again, similar to the WGR), placing a premium on wins, which I like.
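If you want to see the shape of that scoring table laid out, here's a rough sketch in code. Only the Level 1 values (5/3/1), the scoring depths (3/5/10/15/25), and the roughly-60%-for-second rule come from what I just described; the winner values for Levels 2 through 4 and the way points taper off past second place are placeholders, not the exact numbers in my spreadsheet.

```python
# Rough sketch of the scoring table -- placeholder values noted below.
SCORING_DEPTH = {1: 3, 2: 5, 3: 10, 4: 15, 5: 25}   # how deep each level scores
WINNER_POINTS = {1: 5, 2: 8, 3: 12, 4: 18, 5: 25}   # Levels 2-4 are guesses

def base_points(level, place):
    """Baseline points (before any field-strength multiplier) for a finish."""
    depth = SCORING_DEPTH[level]
    if place > depth:
        return 0.0
    if place == 1:
        return float(WINNER_POINTS[level])
    second = WINNER_POINTS[level] * 0.6      # second place: ~60% of the win
    if place == 2:
        return round(second, 2)
    # Taper the remaining spots down to roughly 1 point at the last scoring
    # place -- purely illustrative, but it reproduces the 5/3/1 for Level 1.
    step = (second - 1.0) / max(depth - 2, 1)
    return round(second - step * (place - 2), 2)

print(base_points(1, 1), base_points(1, 2), base_points(1, 3))   # 5.0 3.0 1.0
```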
Next, we have to calculate a multiplier for strength of field--I want to reward people for racing against the best. For the WGR, the top 200 in the world are used to calculate field strength; I settled on the top 50. Each spot in the top 50 is assigned a point value, and those values translate into a factor by which every scoring finish in that race is multiplied. Of course, this is the first time we've done this, so there is no top 50 to work off of. Instead, I used the results of the Ultrarunning Magazine
Ultrarunner of the Year voting from 2014 to set a baseline top 10 for the men and women. When any of those top 10 run in an event this year, that race is worth correspondingly more points. Not only does this place a high importance on seeking out top competition, it also acts as a reward for those who have previously achieved a high ranking. (Once I have enough data for a reliable top 50, this multiplier effect will become a bit more pronounced.)
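If it helps, here's roughly what that looks like in code. To be clear, the point values assigned to each ranked spot, the way they get combined, and the multiplier cutoffs below are all made-up placeholders meant to show the mechanics, not my actual numbers.

```python
# Sketch of the field-strength multiplier -- all specific values are made up.
# For now the "ranked" runners are just the 2014 UROY top 10 for each gender.
RANK_VALUE = {spot: 11 - spot for spot in range(1, 11)}   # spot 1 -> 10 ... spot 10 -> 1

def field_strength(ranked_spots_in_field):
    """Add up the rating values of every ranked runner who shows up."""
    return sum(RANK_VALUE.get(spot, 0) for spot in ranked_spots_in_field)

def field_multiplier(strength):
    """Translate field strength into a points multiplier (illustrative cutoffs)."""
    if strength >= 30:
        return 2.0
    if strength >= 15:
        return 1.5
    if strength >= 5:
        return 1.25
    return 1.0

# Example: a race that draws the #1 and #4 ranked runners.
print(field_multiplier(field_strength([1, 4])))   # strength 17 -> 1.5
```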
I decided to base the rankings on the sum of the points earned, not a per-event average, for several reasons that I will get into later. At this point, my formula was set, and I started the (rather laborious) task of compiling results and manually entering the data into a spreadsheet. It's a painstaking process. I use the Ultrarunning website, as well as Ultrasignup, and I wind up doing a lot of Google searches to find results that aren't posted there. So far, out of the 575 or so races that I have listed through the weekend of May 31, I've found results for about 90-95% of them (several were cancelled or have been removed from consideration due to fat-ass status or other situations that will make getting results unlikely). In all, over 1400 men and 1300 women have earned at least one ranking point so far this year.
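In practice this lives in a hand-built spreadsheet, not a script, but the tallying itself is dead simple: every scoring finish becomes points times the field-strength multiplier, and those points just get added up per runner. A sketch, using the placeholder functions above and invented names and results:

```python
from collections import defaultdict

# Made-up results: (runner, race level, finishing place, field-strength multiplier)
results = [
    ("Jane Doe", 5, 3, 1.5),
    ("Jane Doe", 3, 1, 1.0),
    ("John Roe", 1, 1, 1.0),
]

totals = defaultdict(float)
for runner, level, place, mult in results:
    totals[runner] += base_points(level, place) * mult

# Rankings are simply cumulative points, highest first.
for rank, (runner, pts) in enumerate(sorted(totals.items(), key=lambda kv: -kv[1]), start=1):
    print(rank, runner, round(pts, 2))
```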
Before I unveil the actual rankings, I know you probably have some questions/concerns/criticisms. I'm going to try to anticipate some of them now, and address them as best as I can by trying to explain my reasoning.
Ultra rankings won't work. You're comparing all different kinds of events: road, trails, track, timed events. Different runners have different strengths.
True, but that's a feature, not a bug. That's exactly why I think this is a cool idea. You can accumulate points in any race, in any discipline. In theory, the most well-rounded athlete should have the best chance of achieving a top ranking. Think about golf: some courses are long, some short, some with tighter lies or deeper rough. Doesn't matter. You have to beat whoever shows up on that day.
You're not accounting for times, or margin of victory. What about course records?
I want the focus to be on the head-to-head competition, not the times. Course records are nice, but ultimately meaningless. You don't get more credit in golf for winning by 10 shots than winning in a playoff. A win is a win. Also, comparing times across courses--just like comparing golf scores across courses--doesn't work.
Your race ratings are terrible.
Yeah, well, that's like, your opinion, man. This is obviously the most subjective part of the system. I live on the East Coast, so there's probably some local bias involved. And I certainly can't keep track of 1500 races and know which are necessarily the most important. What I'd like to do in the future is have a committee of folks spread out around the country, so I can have people responsible for rating races in their home region. Let me know if you want to volunteer. When we monetize this thing we'll all be rich.
How come all the timed events/track races are rated so low?
That's not 100% true, but I'll admit, the track events generally are receiving lower ratings than their road and trail counterparts. My explanation is that, in the current climate, these races are not as highly regarded, and the fields are (usually) much weaker. This isn't to say that some of the stuff Zach Bitter and Joe Fejes are doing isn't incredibly impressive or historically important. But the truth is that fewer people are paying attention to, and talking about, these accomplishments than are dissecting results from Leadville or Lake Sonoma, and that fewer of the top runners are showing up at these events.
What about FKTs/solo record attempts?
There's no way to account for these in the system, which I am OK with. As I've stated before, this is about head-to-head competition deciding who the best runners are. I don't want to be in the position of deciding whether Zach Bitter's 12-hour record is more impressive than Rob Krar's R2R2R FKT or Mike Wardian's treadmill 50K.
In reality, I think all of these things we're talking about--FKTs, course records, national bests--are fodder for voting, not rankings. Which is great. I find the UROY vote fascinating, and I'm not saying we should replace it. This ranking is meant to be an objective supplement to that subjective process. Golf has the WGR, which is completely objective, and the Player of the Year vote, which is completely subjective. You can value whatever criteria you want when you're voting.
You should use an average, not a cumulative score. Without an average, there's no penalty for running poorly or DNFs.
I struggled with this decision for a while. This is where I deviated from the WGR, which uses an average. In the end, I decided an average for our sport simply didn't work. First of all, I wasn't comfortable with setting a minimum number of races to qualify--that seemed way too arbitrary to me. Secondly, from a logistical standpoint, it was almost impossible. I'd have to comb through the results of every single race and record zeros for anyone who didn't earn points, just so I could average that race in later if they happened to score points down the road--just not doable. Plus, how would I count slower, non-scoring performances vs. DNFs? You'd like to reward finishing, I guess, so if non-scoring finishes are worth zero points, should DNFs result in losing points? And then, how would I track DNFs at all? Most race results don't list them. If I can't track them, it might encourage people to DNF rather than record a slower finish. (The Ultrasignup rankings
have run into this issue.) Finally, some people use low-key races as training runs, or social events. I don't want to discourage this practice by penalizing people for participating in a race without actually racing it. Ultimately, cumulative points was the only way to go.
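To put a number on why, with invented points: take a runner with three scoring races, and the same runner after jumping into a couple of low-key, zero-point fun runs. Under an average they get punished just for showing up; under cumulative points the fun runs cost nothing.

```python
scoring_races = [20, 15, 10]            # three scoring finishes (made-up values)
plus_fun_runs = [20, 15, 10, 0, 0]      # same runner, plus two zero-point races

print(sum(scoring_races), sum(scoring_races) / len(scoring_races))   # 45 15.0
print(sum(plus_fun_runs), sum(plus_fun_runs) / len(plus_fun_runs))   # 45 9.0
```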
You do a terrible job with international races and international runners.
I'll admit this is one of the biggest problems I'm having right now. These are US rankings--God knows I can't do world rankings, though it'd be fun--but there are plenty of top US athletes racing overseas in big races, and I need to account for their results. And since I'm not keeping track of international runners (except in certain cases when I know they'll run a bunch of US races), the field strength of these races isn't as robust as it probably should be, so the multiplier isn't as significant as you'd want it to be. These races aren't as important on the domestic level as they are overseas, so maybe it's not as big a deal as I think it is, but I'll readily admit this is a problem that I haven't solved, and I'm open to suggestions.
Level 4 and 5 races are undervalued relative to Levels 1, 2, and 3.
And this is the other problem. Again, I based this on the WGR, making Level 5 races akin to the majors, Level 4 in line with large events like the WGC or the Players, Level 3 some of the medium-sized PGA Tour events, Level 2 a small PGA Tour event, and Level 1 a mini-tour event. At baseline, winning a Level 5 event is worth five times as much as a Level 1 event...and likely more, since the chance of a field-strength multiplier kicking in is much higher. These ratios are approximately in line with the ones the WGR follows. But the more I look at it, the less sure of these numbers I am. Is winning Western States really just five times as important as winning a podunk 50K? Should it be worth ten times as much? Or twenty? I'm a little too far into this now to change for this year, but next year I might need to tweak the relative values. I could fix this by adding more levels, but I think parsing the different races between levels would get maddening. (I mean, States is a 10, but is Leadville a 9? Is Vermont a 6, 7, 8, what? Too complicated.) More likely, the larger races will simply be worth more points. (Incidentally, I do like the number of scoring slots for each level--3, 5, 10, 15, and 25 respectively. Those seem about right, and I'm pleased I nailed them on the first try.)
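To make that ratio concrete, using the sketches from earlier and a made-up 1.5 multiplier for a stacked Level 5 field:

```python
level1_win = base_points(1, 1)                  # 5.0
level5_win_baseline = base_points(5, 1)         # 25.0 -> 5x a Level 1 win
level5_win_stacked = level5_win_baseline * 1.5  # 37.5 -> 7.5x a Level 1 win
print(level5_win_baseline / level1_win, level5_win_stacked / level1_win)   # 5.0 7.5
```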
ANYWAY...here is the first set of rankings, through May 31. We're getting into the summer racing season now, when big events start coming more frequently, so I'm going to try to update these maybe every couple of months. If you like, you can view the
entire list here. Sheet 1 is a list of all the races, with rankings and field strength multipliers. Sheet 2 is my reference for how points are distributed and how multipliers are calculated. Sheet 3 is men and Sheet 4 is women. I have it listed alphabetically by first name, so you can find your name there. Sorry, I don't have it set up to view the entire list in numerical order. I'll figure out how to do that at some point I'm sure. Probably. Maybe.
Through May 31

| Rank | Men | State | Points | Women | State | Points |
|---|---|---|---|---|---|---|
| 1 | Alex Varner | CA | 52.6 | Magdalena Boulet | CA | 77 |
| 2 | Ryan Bak | OR | 48 | Stephanie Howe | OR | 65.5 |
| 3 | Brian Rusiecki | MA | 46.2 | Nicole Studer | TX | 55.6 |
| 4 | Paul Terranova | TX | 45.5 | Cassie Scallon | CO | 46.6 |
| 5 | David Laney | OR | 43.9 | Aliza Lapierre | VT | 41 |
| 6 | Dylan Bowman | CA | 42 | Jacqueline Palmer | DE | 37 |
| 7 | Chikara Omine | CA | 40 | Kathleen Cusick | VA | 34 |
| 8 | Mario Mendoza | OR | 38 | Traci Falbo | IN | 33.8 |
| 9 | Rob Krar | AZ | 35.7 | Caroline Boller | CA | 32.6 |
| 10 | Jorge Maravilla | CA | 34.6 | Katalin Nagy | FL | 32.5 |
| 11 | Jorge Pacheco | CA | 32.8 | Sarah Bard | MA | 31.225 |
| 12 | Benjamin Stern | CA | 31.2 | Ashley Erba | CO | 30.9 |
| 13 | Jared Hazen | CO | 30 | Megan Roche | CA | 30 |
| 14 | Jean Pommier | CA | 30 | Melanie Fryar | TX | 29 |
| 15 | Justin Houck | WA | 28 | Bree Lambert | CA | 27 |
| 16 | Andrew Miller | OR | 26 | Amy Sproston | OR | 24 |
| 17 | Ian Sharman | CA | 25 | Angela Shartel | CA | 24 |
| 18 | Scott Breeden | IN | 25 | Meghan Arbogast | CA | 23.225 |
| 19 | Alex Nichols | CO | 24.8 | Rachel Ragona | CA | 23 |
| 20 | Christopher Dannucci | CA | 24.8 | Laura Richard | CA | 22.5 |
| 21 | Christian Gering | CO | 24.25 | Nuria Picas | ESP | 21 |
| 22 | Karl Schnaitter | CA | 24 | Tracie Akerhielm | TX | 21 |
| 23 | Tim Tollefson | CA | 22.5 | Lindsey Tollefson | CA | 20.975 |
| 24 | Seth Swanson | MT | 22.2 | Kaci Lickteig | NE | 20.275 |
| 25 | Ben Nephew | MA | 22 | Amanda Basham | OR | 20 |
| 26 | Daniel Hamilton | TN | 21 | Camille Herron | OK | 20 |
| 27 | James Blandford | PA | 21 | Joelle Vaught | ID | 20 |
| 28 | Nate Polaske | AZ | 21 | Megan Stegemiller | VA | 20 |
| 29 | Jason Leman | OR | 20.6 | Michelle Yates | CO | 20 |
| 30 | Fernando de Samaniego | CA | 20.3 | Silke Koester | CO | 19.4 |
| 31 | Jim Walmsley | MT | 20.3 | Emily Richards | NV | 19 |
| 32 | David Kilgore | FL | 20 | Kerrie Bruxvoort | CO | 19 |
| 33 | Ford Smith | TX | 20 | Julie Fingar | CA | 18 |
| 34 | Mike Bialick | MN | 20 | Ashley Lister | PA | 17.8 |
| 35 | Patrick Smyth | UT | 20 | Amy Rusiecki | MA | 17 |
| 36 | Zach Ornelas | MI | 20 | Jennifer Edwards | WA | 17 |
| 37 | Ryan Smith |  | 19.5 | Catrin Jones | Can | 16.8 |
| 38 | Zach Bitter | WI | 19.5 | Robin Watkins | DC | 16.5 |
| 39 | Lon Freeman | CA | 19 | Gia Madole | OK | 16 |
| 40 | Owen Bradley | AL | 19 | Suzanna Bon | CA | 16 |
| 41 | Adrian Stanciu | CO | 18.5 | Emily Harrison | AZ | 15.6 |
| 42 | Jason Lantz | PA | 17.35 | Kathy D’Onofrio | CA | 15.6 |
| 43 | Bob Shebest | CA | 17 | Kerrie Wlad | CO | 15.2 |
| 44 | Jared Burdick | NY | 16.8 | Leslie Howlett | UT | 15.2 |
| 45 | Brandt Ketterer | CO | 16 | Alicia Woodside | Can | 15 |
| 46 | Catlow Shipek | AZ | 16 | Amie Blackham | UT | 15 |
| 47 | Charlie Ware | AZ | 16 | Beth Meadows | TN | 15 |
| 48 | Karl Meltzer | UT | 16 | Claire Mellein | CA | 15 |
| 49 | Ray Sanchez | CA | 16 | Jennifer Benna | NV | 15 |
| 50 | Mario Martinez | CA | 15.9 | Jessica Lemer | WI | 15 |
|  |  |  |  | Joanna Masloski | CO | 15 |
|  |  |  |  | Megan McGrath | NJ | 15 |
All right! Let the ad hominem attacks on my character begin!