Bumped. GO BRUINS. -N
I started this post on Friday afternoon but didn't have time to finish it. In the intervening 50 hours, the point has become more personal for Bruins fans than it was previously. I think this offseason is the right time for the NCAA selection committee to finally eliminate RPI in favor of a formula that considers margin of victory. By ignoring victory margin, the committee is handicapping itself in its quest to identify the 34 best at-large teams.
This year in particular, consider how the incessant "bubble" discussion would change for Pac-10 teams if a measure like Sagarin's Predictor (which uses margin of victory), Sagarin's Rating (which blends the Predictor with a model that ignores margin of victory), or Ken Pomeroy's Pythagorean formula were used instead of RPI. (Rankings are through last Thursday's games; I compiled this on Friday, and it's close enough to how these ended up for you to get the point.)
Team     RPI   Sag Predictor   Sag Rating   KenPom
UW        11    14  (-3)        17  (-6)     14  (-3)
UCLA      27     7 (+20)        14 (+13)      6 (+21)
ASU       30    12 (+18)        21  (+9)     13 (+17)
Cal       40    28 (+12)        27 (+12)     29 (+11)
USC       49    30 (+19)        39 (+10)     33 (+16)
Ariz      58    40 (+18)        44 (+14)     38 (+20)
WSU       90    36 (+54)        52 (+38)     31 (+59)
Stan     104    49 (+55)        64 (+40)     50 (+54)
OSU      156   120 (+36)       114 (+42)    121 (+35)
Oregon   177   142 (+35)       140 (+37)    152 (+25)
AVG     74.2    47.8            53.2         48.7
MED     53.5    33.0            41.5         32.0
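For reference, Pomeroy's ratings are built on a Pythagorean expectation: team strength estimated from scoring rather than from raw wins and losses. Here's a minimal sketch of the idea, using plain points per game and an assumed exponent of 11.5 (a value in the range commonly cited for college basketball) in place of his tempo-adjusted efficiencies:

```python
# Pythagorean expectation: project winning percentage from scoring data
# instead of game outcomes. Pomeroy's actual ratings use tempo-adjusted
# offensive/defensive efficiencies; the raw points and the exponent of 11.5
# below are illustrative assumptions, not his exact formula.

def pythagorean(points_for: float, points_against: float, exp: float = 11.5) -> float:
    return points_for**exp / (points_for**exp + points_against**exp)

# A team outscoring opponents 75-70 on average projects well above .500,
# even though margin of victory is invisible to RPI:
print(round(pythagorean(75, 70), 3))
```

The key property is the one RPI lacks: a 5-point average scoring edge moves the rating, whether or not it happened to turn into extra wins.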
As you can see, every Pac-10 team except Washington rates significantly higher in the other rating systems than it does in RPI. According to RPI, USC's rating was low enough that the Trojans were not even on the bubble when we played them. Had KenPom been used, USC may already have been a lock to make the tournament. Of course, they weren't, and we lost to them, a loss that cost us another 6 spots in the final RPI rankings.
Why is RPI so different from these other respected rating systems? Simply put, RPI's ignorance of the score of games creates incongruities that should lead its users to question its effectiveness for its purpose. Here is a sample scenario: if BYU were to beat UCLA by 1 point in Provo, it would have exactly the same effect on UCLA's RPI as a 51-point BYU win over Mississippi Valley St. in Provo would have on MVSU's. In fact, move the UCLA-BYU game to Pauley, and it would have a more negative effect on UCLA's RPI than on MVSU's, because RPI gives only 60% weight to road losses and home wins. I'll say it another way: MVSU loses to BYU by 51 in Provo, we lose to BYU by 1 at Pauley, and RPI treats MVSU's result as better than UCLA's.
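To make that weighting concrete, here's a toy sketch (not the committee's code, obviously) of the weighted winning-percentage piece of RPI. The 0.6/1.4 site weights are the NCAA's published values; the game data are made up to mirror the BYU scenario above:

```python
# Sketch of RPI's weighted winning-percentage component (the 25% "WP" term of
# the full formula: 25% WP + 50% OWP + 25% OOWP). Since 2004-05 the NCAA
# weights home wins and road losses at 0.6, road wins and home losses at 1.4,
# and neutral-site games at 1.0. Margin of victory never enters the formula,
# which is the whole point. Game data below are hypothetical.

WIN_WEIGHT = {"home": 0.6, "road": 1.4, "neutral": 1.0}
LOSS_WEIGHT = {"home": 1.4, "road": 0.6, "neutral": 1.0}

def weighted_wp(games):
    """games: list of (result, site, margin) tuples; margin is ignored."""
    wins = sum(WIN_WEIGHT[site] for result, site, _ in games if result == "W")
    losses = sum(LOSS_WEIGHT[site] for result, site, _ in games if result == "L")
    return wins / (wins + losses)

# Give both teams identical prior records, then add the losses at Provo:
ucla_road = [("W", "home", 10), ("W", "home", 12), ("L", "road", 1)]   # 1-pt loss at BYU
mvsu_road = [("W", "home", 10), ("W", "home", 12), ("L", "road", 51)]  # 51-pt loss at BYU
print(weighted_wp(ucla_road) == weighted_wp(mvsu_road))  # True: margins never matter

# Move UCLA's loss to Pauley and it counts 1.4 instead of 0.6:
ucla_home = [("W", "home", 10), ("W", "home", 12), ("L", "home", 1)]
print(weighted_wp(ucla_home) < weighted_wp(mvsu_road))   # True: the home loss hurts more
```

A 1-point home loss literally damages this component more than a 51-point road loss does.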
The committee itself shows plenty of evidence that it recognizes RPI is not an effective measure of a team's strength. For instance, Arizona got a tourney bid despite a #63 RPI, while San Diego State was left out of the tournament with a #34 RPI. UCLA, one spot higher in RPI than the Aztecs, picked up a 6 seed, but San Diego State is NIT-bound.
Of course, had the committee's system considered margin of victory, UCLA's wins would look more impressive. After all, despite finishing second in the Pac-10, we led the conference in point differential by a significant margin, and we pummeled all of the crummy competition we scheduled in the preseason. Use these other measures, and UCLA is a protected seed.
Alas, we have the RPI this year, and Coach Howland was unable to game the system as well as some other coaches did:
- Siena: RPI #18 despite 7 losses and only 12 games against the top 100 (compared to UCLA's 18 games vs. the top 100)
- Utah: RPI #10 despite a worse record than UCLA (one fewer win), no road wins over an RPI top-100 team, a loss to #195 RPI Idaho State, a home loss to Cal, and fewer games against (and wins over) top-100 competition. And by the way, their only wins of note were against Gonzaga when the Zags were reeling, SDSU, and BYU.
And that's the system that sends us to Philly to play a tough first-round game and then, if we're fortunate enough to advance, Villanova in its secondary home.
I thought of this while noting for myself that before the USC game, we were something like 5-5 against RPI top-50 teams. USC was 49th in RPI. Had we beaten USC, they would have fallen out of the top 50 (despite being the same team they were all along), our results against them would no longer have counted as top-50 games, and our record against the top 50 would have fallen to 3-5, a black mark in the selection process. It struck me as odd that no matter what happened against USC, our record against the top 50 was doomed to get worse.
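For the curious, that cutoff quirk can be sketched in a few lines. The games and ranks below are invented, not our actual ledger; the point is only the mechanism:

```python
# Hypothetical illustration of the top-50 cutoff quirk: your "record vs. the
# RPI top 50" counts only games against teams *currently* ranked 50 or better,
# so a result that pushes a #49 opponent below the cutoff retroactively erases
# your earlier games against them. All numbers here are made up.

def record_vs_top50(results, ranks):
    """results: {opponent: list of "W"/"L"}; ranks: current RPI rank per opponent."""
    counted = [r for opp, rs in results.items() if ranks[opp] <= 50 for r in rs]
    return counted.count("W"), counted.count("L")

results = {"USC": ["W"], "other_top50": ["W"] * 4 + ["L"] * 5}

print(record_vs_top50(results, {"USC": 49, "other_top50": 25}))  # (5, 5)
print(record_vs_top50(results, {"USC": 52, "other_top50": 25}))  # (4, 5): the USC win vanishes
```

Same games played, same outcomes, but the headline record the committee quotes gets worse because the opponent's rank moved.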
Of course, we did lose, and our record against the top 50 got worse anyway. And a few minutes ago I reflected on the fact that we dropped 6 RPI positions after a neutral-court loss to a team that was in the top 50. In the rating systems that consider scoring margin, we didn't drop nearly as far. The irony thickens.