Sunday, June 3, 2012

Playoffs: Selection Part II

Now that I’ve described my issues with the BCS formula, it’s time to outline my selection process for the four-team playoff.  To recap, college football is undergoing changes and the four-team playoff looks to be a done deal.  My format consists of utilizing the bowl system to host the semifinal games and having host cities bid on the national championship game, much like how the NFL selects the Super Bowl site.  With this format described, how will those four teams be selected in the Kostick Playoff Model?

I think many people’s issues with the BCS formula come down to the flaws in each of its components, which I could go on about ad nauseam.  The Kostick Playoff Model will use a formula to rank the Top 25 teams each week.  Say what you want about the problems with using a formula to rank teams, but the truth of the matter is that formulas are already used to evaluate teams in several of the playoff systems the NCAA organizes.  The FCS utilizes a selection committee that picks its at-large teams with the Gridiron Power Index (GPI), while the NCAA men’s basketball selection committee analyzes metrics such as the RPI and KenPom in determining at-large berths to the tourney.  Granted, the basketball committee uses those only as tools in the selection process, but the FCS goes as far as mandating the use of a computer poll.  Anyway… let’s call this formula the Playoff Power Index (PPI).  The PPI looks to incorporate both impartiality and subjectivity in determining the top teams in college football.  It will consist of five computer rankings and what I call the committee poll.  While the exact weight of each component would still need to be worked out, the committee poll would heavily outweigh the computer rankings.  The top four teams in the final poll, regardless of whether they won a conference championship, move on to the playoffs.  Let’s look at each of these components individually.
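To make the weighting concrete, here is a rough Python sketch of how a weekly PPI could be computed.  The two-thirds/one-third split between the committee poll and the computers, and the 40-point value for unranked teams, are placeholders I made up for illustration; the actual weights are exactly the detail that still needs to be worked out.

# Rough sketch of a weekly PPI calculation. The weights and the unranked
# penalty are illustrative placeholders, not settled values.

COMMITTEE_WEIGHT = 2 / 3   # placeholder: committee poll heavily outweighs computers
COMPUTER_WEIGHT = 1 / 3    # placeholder: split across the five computer rankings
UNRANKED = 40              # placeholder points for teams outside a given top 25

def rank_points(ranking, team):
    """ranking maps team name -> rank (1 is best); unranked teams get a default."""
    return ranking.get(team, UNRANKED)

def ppi_order(committee_poll, computer_polls, teams):
    """Return teams sorted best-to-worst by weighted average rank (lower is better)."""
    scores = {}
    for team in teams:
        committee = rank_points(committee_poll, team)
        computers = sum(rank_points(p, team) for p in computer_polls) / len(computer_polls)
        scores[team] = COMMITTEE_WEIGHT * committee + COMPUTER_WEIGHT * computers
    return sorted(teams, key=lambda t: scores[t])

# The top four of ppi_order(...) in the final week would be the playoff field.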

Computer Rankings
The goal of the computer rankings is to introduce a factor of impartiality.  Computer rankings are everything human polls aren’t – impartial, logical, and emotionless.  The BCS’s constant revisions decimated the computer rankings and made them nothing short of a mockery.  Take the removal of margin of victory, for example.  It was argued that including the margin of victory rewarded blowouts, and as we're all taught, embarrassing the opponent is unsportsmanlike.  This introduced emotion, the exact thing computers lack, and in turn disregarded impartiality.  Strength of schedule was removed because the BCS argued the pollsters already account for strength of schedule.  No, seriously, that was their reasoning.  I wish I were joking here, but unfortunately, this is the exact flawed logic that decides the most important thing in all of sports – the championship.  Luckily, the computer rankings used in the PPI would be required to use metrics such as margin of victory and strength of schedule, so that teams with similar schedules can be compared, as well as teams with differing schedules.
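To show what a required-metrics computer ranking could look like, here is a toy rating in the spirit of a simple rating system: a team's capped average margin of victory plus the average rating of its opponents as a crude strength-of-schedule term.  To be clear, this isn't any actual BCS computer's formula, and the 28-point cap is just a number I picked to illustrate limiting the reward for blowouts.

# Toy computer rating: capped average margin of victory plus average opponent
# rating as a crude strength-of-schedule adjustment. Illustrative only.

MOV_CAP = 28  # illustrative cap so running up the score stops paying off

def rate_teams(games, iterations=20):
    """games: list of (team_a, team_b, points_a, points_b) tuples."""
    teams = {g[0] for g in games} | {g[1] for g in games}
    ratings = {t: 0.0 for t in teams}
    for _ in range(iterations):
        updated = {}
        for team in teams:
            margins, opp_ratings = [], []
            for a, b, pa, pb in games:
                if team not in (a, b):
                    continue
                margin = (pa - pb) if team == a else (pb - pa)
                margins.append(max(-MOV_CAP, min(MOV_CAP, margin)))
                opp_ratings.append(ratings[b if team == a else a])
            if margins:
                updated[team] = (sum(margins) / len(margins)
                                 + sum(opp_ratings) / len(opp_ratings))
            else:
                updated[team] = 0.0
        ratings = updated
    return ratings  # higher rating = better team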

Transparency was another key component lacking from the use of computer rankings in the BCS.  Having only five of the six computer formulas publicly available, without any oversight, created a potentially corruptible system.  It added another dimension of mystery to a system already shrouded in controversy and mystique.  And worst of all, it’s blatantly fixable.  The PPI will use five computer rankings with the requirement of complete transparency.  I’m sure it won’t be an issue finding five people eager to prove they are the most competent at ranking the best teams in America’s second-largest sport.  Just take a look at this list...narrowing it down to only five is the more likely issue.  Next, the computer rankings will have an oversight committee dedicated to ensuring the accuracy of the rankings.  The committee could be one person, say Jerry Palm at CBS, or hell, it could be a team.  But its sole responsibility will be checking the accuracy of the scores inputted, the teams played, the results, etc., and ensuring each computer’s rankings are consistent with its published formula.  Easy enough, right?
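With every formula public and the oversight committee verifying the inputs, the audit itself is mechanical: re-run each computer's published formula on the verified game data and flag anything that doesn't match the released rankings.  A minimal sketch, with function names of my own invention:

# Sketch of the oversight check: recompute a transparent formula on verified
# game data and report any spots where the published top 25 doesn't match.

def verify_ranking(published_top_25, formula, verified_games):
    """formula is the computer's public function: verified_games -> ordered team list."""
    recomputed = formula(verified_games)
    mismatches = []
    for spot, (published, expected) in enumerate(zip(published_top_25, recomputed), start=1):
        if published != expected:
            mismatches.append((spot, published, expected))
    return mismatches  # an empty list means the ranking checks out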

Committee Poll
Dennis Dodd released his playoff plan and format several weeks ago and outlined the idea of a new human poll.  I had a similar idea of creating an entirely new human poll, but my committee poll goes a few steps further.  The committee poll will be composed of 25-50 media members equally distributed across the nation to eliminate any regional bias.  The number of media members associated with each major network must be equal, to prevent the networks from pressuring their employees into voting conference affiliates higher in order to gain monetary advantages.  A moderator will be chosen to sit in on the discussions without participating, to ensure only the selected participants are present, that active discussion takes place, and that the voting is documented.  The committee poll requires its participants to meet weekly, after the conclusion of all of the week’s games, to discuss and vote on a top 25.  The committee can meet as many times as needed, either in person, by teleconference, by video conference, etc., and use any metrics or computer rankings during its discussions.  The poll must be released by noon on the day of the next week’s first game.
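The one mechanical detail left open is how the members' votes become a single top 25.  Assuming AP-style scoring purely for the sake of illustration (25 points for a first-place vote down to 1 point for 25th, summed across every ballot), the tally could look something like this:

# Turn committee ballots into one top 25 using AP-style points: 25 for a
# first-place vote down to 1 for 25th. The scoring scheme is an assumption.

from collections import defaultdict

def tally_ballots(ballots):
    """ballots: one ordered list of up to 25 team names per committee member."""
    points = defaultdict(int)
    for ballot in ballots:
        for position, team in enumerate(ballot[:25]):
            points[team] += 25 - position
    return sorted(points, key=points.get, reverse=True)[:25]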

The final BCS standings are as much about how a team performs throughout the season as they are about where it starts.  Being a top-five or top-ten team in the preseason poll provides a huge advantage before a single game is played.  The first committee poll top 25 would be released after week 1 and each week thereafter.  This should help reduce the impact of preconceived notions by forcing evaluation of teams based on their performances.

So there it is, the PPI.

There are hundreds of intricate details that would still need to be worked out, including the factors the computer rankings must use, the selection of the computers, the makeup of the oversight committee, the selection of the committee poll, and the committee poll's rules and procedures, but this provides the basis for incorporating the two aspects most commonly used in ranking sports teams: numbers and metrics, and subjective impressions.

The requirement for a conference champion is one I went back and forth on a lot.  When the playoff discussion first kicked off in February, I sent out an email with my proposal, which took the top-ranked conference champions within the top 8.  The more I thought about it, and the more options the conference commissioners kept throwing out, the more I asked myself, “Why, if you have an effective ranking system, would you take a conference champion ranked sixth or eighth in a four-team playoff?”  Then it hit me – “effective ranking system.”  That’s the issue to begin with!  The BCS is not an effective ranking system.  It makes no sense, when using an accurate ranking system, to select someone you’ve ranked outside of the top four.  The whole idea in the first place is to match up the best four teams to decide a national champion, so why not just fix the ranking system and take the four best teams?  We all know all conferences aren’t created equal, so why pretend they are?  Historically, 80% of the teams ranked in the top four of the final BCS standings have been conference champions, so odds are you’ll have three or four champs each playoff.  But there’s always the chance the second-best team in the Big 12 or SEC is better than everyone else too.  That’s what happens when you have Texas and Oklahoma, Florida and LSU, and Michigan and Ohio State in the same conference – get over it.

So that concludes the Kostick Playoff Model.  Hopefully the powers that be in college football take a look and adopt a few of the ideas.  Hell, maybe they'll even hire me to be the commish!

2 comments:

  1. I like this, and agree 100% on this topic:

    The final BCS standings are as much about how a team performs throughout the season as they are about where it starts. Being a top-five or top-ten team in the preseason poll provides a huge advantage before a single game is played. The first committee poll top 25 would be released after week 1 and each week thereafter. This should help reduce the impact of preconceived notions by forcing evaluation of teams based on their performances.

    Blows my mind how they rank a team before it ever steps on the field. Preseason rankings affect weekly/final rankings way too much.

  2. It's great when it works in our favor though! Hell WVU and VT are both expected to be top 15 preseason! I'll bitch about the negatives, but say "hell yea" when it helps us out at the end of the year!
