Here we are, entering the first full week of February, and the calculators are burning out their batteries. Why? We are merely days away from the return of the "January 20."
Regular readers will recall our analysis in early January of the 20 teams (at that time!) most likely to win this year's national championship. We introduced a number of new statistics, such as Adjusted Winning Percentage and Adjusted Scoring Margin, and promised to revisit those same clubs in early February as well as early March.
Well, the number crunching is underway and will appear toward the end of this week. In the meantime, we have been overwhelmed by questions of all sorts (as well as by a more than thorough discussion of last week's lightning rod, Points Per Possession).
So, while you digest this topic and more, we are thrilled to point out a MAJOR addition to the statistics page found in every college basketball "team page" on ESPN.com. Points Per Shot (PPS), which we have discussed here ad nauseam since early December, is now prominently listed for every player in the far right-hand column.
The entire stats package, in fact, is easier to read and incredibly useful. Kudos to Greg Collins, Ron Buck and company for doing everything they can to make ESPN.com's college basketball coverage so far ahead of the field that our rearview mirror is completely blank.
I say that as a fan, which is what I was before this column and what I will be long after it's gone.
For now, on to the emails ...
PPP ... and other observations
In the last Box Score Banter, someone wrote in to argue that Points Per Possession is a meaningless stat because, within a single game, the two teams have the same number of possessions, so PPP tells you nothing the final score doesn't. However, PPP would be useful in comparing the scoring/offenses of different teams in different games.
For example, in the Wisconsin-Butler game, each team had only 49 possessions. Butler had PPP of 1.18 while scoring only 58 points. On the other hand, in the Florida-Tennessee game, Florida scored 81 points, but needed 74 possessions, which comes out to a PPP of only 1.09.
In fact, one could use PPP as a more accurate measure of scoring than points per game, because a high per-game average doesn't necessarily mean high efficiency, and a low average doesn't mean low efficiency.
Bryan Tsao,
Taipei, Taiwan
I couldn't agree more, Bryan. You also get this year's award for "farthest" email message. Thanks!
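And for readers who want to run Bryan's numbers themselves, here's a minimal sketch of the arithmetic (the function name and comments are mine, not any official formula):

```python
def points_per_possession(points, possessions):
    """Points Per Possession: total points divided by total possessions."""
    return points / possessions

# The two games Bryan cites above:
print(points_per_possession(58, 49))  # Butler vs. Wisconsin -> ~1.18
print(points_per_possession(81, 74))  # Florida vs. Tennessee -> ~1.09
```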
Noah Hunter's letter re: Points Per Possession was right on target, if the target is merely a recap of a single game. Obviously, PPP is not meant to be a single game statistic; it's worthless for that purpose. As Mr. Hunter points out, the score accurately reflects the statistic.
What it accomplishes, over the course of a team's schedule, is a barometer of how a team performs against variant defenses, and even against variant offenses, as the number of possessions will increase or decrease depending upon how the team rebounds on defense or otherwise takes the ball away from the opponent. As such, it is a simple formula that expresses a conglomerate of FG percentage, 3FG percentage, FT percentage, turnovers, steals, fouls and opponents' fouls, while being variable enough to express the difference (in the number of possessions) between playing a slow-down team and a run-and-gun outfit.
Haven't figured out how to do it yet, but I am seriously considering assembling a team of volunteers to track PPS (as well as individual player +/-) during the NCAA Tournament this year. You'd probably have to be in attendance to really do it right, especially with regard to substitutions, which TV doesn't always catch in time.
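Speaking of tracking, anyone who would rather estimate possessions from a standard box score than chart them live can use a commonly cited approximation, sketched below. This is a conventional estimate, not necessarily how any of this week's correspondents count possessions, and the 0.475 free-throw weight is an assumption:

```python
def estimate_possessions(fga, oreb, tov, fta, ft_weight=0.475):
    """Estimate possessions from box-score totals: a possession ends
    with a field-goal attempt (less offensive rebounds, which extend
    it), a turnover, or a trip to the free-throw line."""
    return fga - oreb + tov + ft_weight * fta

# A hypothetical box-score line:
print(estimate_possessions(fga=55, oreb=10, tov=12, fta=20))  # -> 66.5
```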
I'm sorry I missed the first installment of the "points per possession" discussion. I find it is a very useful stat, but I'm kind of biased. I've spent a lot of hours tracking it for the last three years, usually under the names Offensive Efficiency and Defensive Efficiency (abbreviated to OEff and DEff). While one can argue its value in the context of a single game, it provides a great deal of useful information about teams over the course of a season.
There is a classic example in Division I this year: Wisconsin.
Wisconsin leads the nation in Scoring Defense. Many people recognize that that is in large part due to the pace with which the Badgers play offense, but, by looking at DEff, we can be more definitive. Wisconsin's defense is actually good but not great, ranking 58th in DEff at .92 points per possession (compared to the Division I average of .98). In fairness to Wisconsin, I should also point out that when one takes quality of opposition into account (something else that I do), they move back up to ninth.
Even within the context of a single game, OEff and DEff allow one insight into how the team got where it did. As is the case with most stats, they usually confirm one's subjective impressions, but sometimes provide cause for reconsideration.
An example of counterintuitive results is Stanford. The general consensus is that Stanford is playing a more up-tempo game this year. By at least one measure, however, that is false. Last year, Stanford averaged 70.9 possessions per game; this year, they average 70.6. Given the limitations in my data, that difference is not terribly significant. On balance, it might be more accurate to say that Stanford is playing more aggressively rather than faster.
There is more information at my web site, http://home.pacbell.net/sdurrett/basketball.html. I look forward to future discussion of the issue.
Steve, we are not worthy of your exhaustive work! And I hereby recommend your web site to all readers.
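For anyone wanting to try Steve's approach at home, here is a rough sketch of the season-long aggregation. The game data is invented, and the exact bookkeeping is my guess rather than Steve's:

```python
# One (points_for, points_against, possessions) tuple per game;
# the numbers here are made up for illustration.
games = [(70, 55, 60), (62, 58, 52), (75, 64, 68)]

total_poss = sum(poss for _, _, poss in games)
oeff = sum(pf for pf, _, _ in games) / total_poss
deff = sum(pa for _, pa, _ in games) / total_poss

# Compare against the Division I average of roughly .98 noted above.
print(f"OEff: {oeff:.2f}, DEff: {deff:.2f}")  # OEff: 1.15, DEff: 0.98
```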
Love the columns.
Regarding points per possession, Noah Hunter's comment is certainly valid when comparing two teams in a single game, but PPP is still a valid metric when comparing teams' production over the course of the season. Think about it: If Princeton averaged the same points per game as TCU while using roughly half as many possessions to do it, that would indicate that Princeton is way more efficient on the offensive end.
As a college basketball fan and Stanford grad, I'd be interested to see how the Card stacks up in this metric. They have a fair offensive output, but I would guess that the number of possessions in an average Stanford game is relatively low, given their patient O and no-frills D.
You are right, Kurt. Check out Steve Durrett's website above for even more answers.
I am a proponent of the Points Per Possession statistic, and here's why: A good defensive team that is offensively inept may have the same record as a slow-paced, but offensively efficient team. Both teams may have similar average points and points allowed (say each team wins by an average of 55-50).
We have no other way (at least not that I can think of) to determine which team has a greater offensive efficiency. If I was watching a team that generally scored very few points, I would want to know (especially at the end of the game) whether it was because they could stop the other team from scoring on a great majority of their possessions or that they could manage to score at will as long as they had 35 seconds to work with. This seems like it would be useful to know for end-game strategies (whether or not to foul, etc.).
I hadn't considered how said data could impact end-game strategies. My guess is that the knowledge might be beyond the comprehension of some coaching staffs, but we'll float the idea and see the reaction.
Joe, be assured that some of us LOVE the points per possession stat. I'm sure you'll show more restraint than I would in answering the goober who thinks the final score gives the same information.
My answer would be: have you ever actually watched a game?
If Fresno State and Wisconsin both score 70 points tonight, do you think they each played the same offensive game? Hint: Fresno likely had 50 percent more possessions to get those 70 points, and Tark chewed a hole in his towel because they shot so poorly. Wisconsin shot lights out all night.
Do you think Loyola Marymount was really the best offensive team in NCAA history? PPP lets you compare teams and players with different styles or even from different eras. Simply looking at points is not nearly enough.
No name-calling needed, Jeff. All points of view are welcome here (as well as POINTS PER POSSESSION). I think the wide variety of responses to the PPP issue raised last week has been both healthy and revealing.
It's fair to say that, while not especially useful on a single game basis, PPP is something that should be added to the everyday lexicon for team evaluation. Not bad work for our first year at this!
I'll shorten your lesson for you on one that seems to have tripped you up a couple of times. There are two systems of public universities in California: the University of California system and the California State University system. UC schools are referred to as "UC ___," with the blank being the city (Berkeley, Davis, Los Angeles, Riverside, Irvine, San Diego). You're already used to doing this, as when you say "UCLA got beat all to heck by California (92-63)." This also shows the exception to the rule: the original (and best) UC is UC Berkeley, which is therefore known commonly as "the University of California," "California" or, affectionately, "Cal."
California State schools are named in a similar manner: "California State ___," fill in the city. Most of the time this is abbreviated to just "Cal State ____." There are a couple of exceptions, e.g. "Fresno State," but calling it "California State Fresno" is still technically correct.
Now on to the juicy stuff. You've been one of the few people giving Cal any respect -- for that we thank you -- but why has it been so hard for us to get any love from the polls? South Florida, Georgia, Mississippi State and now UCLA (though that's after the fact) all have more votes or are even ranked, whereas Cal -- a team that beat them all -- has yet to receive a single vote.
Those votes aren't too likely now, what with Cal losing to USC at home. As for the lesson in California higher education, you'd be amazed at how many messages I have gotten on this subject.
For the record, I completed high school in California and applied to both UC and Cal-State schools (attending neither). I understand the nomenclature and, for instance, will no longer refer to UC Irvine as Cal-Irvine. Lord knows I wouldn't want to be overrun by Anteaters (or Banana Slugs, for that matter, which is another column for another day!).
Having watched a number of overtime games this season (and past seasons), I've begun to wonder if there is a correlation between a team forcing overtime and then that team winning in overtime. Conventional wisdom seems to indicate that it is the home team that has the advantage in overtime, but I've begun to wonder if this is really the case.
It's my hypothesis that the team which forces overtime is the one that is significantly more likely to win the game. Anyhow, I would appreciate it if you would look into this problem.
Eric Oglesbee,
Bethel College
Mishawaka, Ind.
I'm interested in any such "problem" impacting our great game. Between now and next time, I'll run some data on "home and away" stats with regard to overtime results. It is much harder, obviously, to track your hypothesis about teams winning or losing as a result of "forcing" the overtime. This is really a question of momentum, which has no column in any box score that I know of.
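For any volunteers who want a head start on the "home and away" half of Eric's question, the tally is easy once you have a season's worth of results. A minimal sketch, with an invented data format and made-up games:

```python
# One record per overtime game: (home_team, away_team, winner);
# these matchups are hypothetical.
ot_games = [
    ("Wisconsin", "Butler", "Wisconsin"),
    ("Stanford", "UCLA", "UCLA"),
    ("Kentucky", "Florida", "Kentucky"),
]

home_wins = sum(1 for home, _, winner in ot_games if winner == home)
print(f"Home team won {home_wins} of {len(ot_games)} OT games "
      f"({home_wins / len(ot_games):.0%})")
```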
Procedurally Speaking
A number of fans have been asking about the methodology behind the weekly ESPN.com bracket projections. As many of those questions fall somewhere between "Box Scores and More" and "Bracket Banter," we've decided to handle a few of the more pertinent ones here:
Please, if you're as smart as you seem to be, please, please, please (please) explain to me exactly how Strength of Schedule is factored. Granted, I am not a mathematician, nor even widely considered to be all that bright, but for the life of me I can't figure it out. An example of my bafflement:
Glancing over the most recent RPI-facsimile list on the site, I notice that Wisconsin (14-4) is rated 4th, presumably because of their strength of schedule, which ranks 2nd. A team that beat them, Michigan State (16-2), stands at 11th in the RPI, with a strength of schedule rank of 46th.
Let me please reiterate that I am not a professional mathematician and mention also that I failed to pass Algebra I in high school, but I can't see how such a discrepancy of schedule strength is possible between these two clubs. Wisconsin plays the second-most challenging schedule of any school in the country, but Michigan State has 45 schools playing tougher schedules than them? This seems odd if not impossible.
Both are Big Ten teams, so, with slight variations, their conference schedules should be very similar. But Michigan State has a good number of quality non-conference Top 25 games to supplement their conference schedule, including (wins, no less, over) North Carolina, Florida, Seton Hall and Kentucky. What teams of distinction reside on Wisconsin's slate to set their "non-con" so far above MSU's? Aside from Maryland, who has Wisconsin played, period? Could MSU drawing Illinois only once this year really hurt their overall SOS that much?
There must be some secret to the formula that eludes me. How is that damn Strength of Schedule rating figured? Search as I might, no one has been able to explain this rather significant element of the rather significant RPI to me. Please, please, please deliver me from my ignorance. Joe, you are my last hope.
Reg Redmond,
Atlanta, Ga.
Self-deprecation always works with me, Reg, so here goes:
1. SOS ratings are based on OPPONENTS TO DATE, not a team's entire schedule. As such, with Wisconsin and Michigan State now playing virtually the same schedule from this point forward, their Strength of Schedule numbers figure to creep toward one another right through the Big Ten tournament.
2. Arithmetically, SOS is merely the combined winning percentages of your opponents. Like it or not, Wisconsin's arithmetic (for the moment) is higher.
3. Take a closer look at the two schools' non-conference schedules this year. You'll see that MSU's "lows" are a good bit lower than Wisconsin's. Herein lie the primary differences between them.
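To make point No. 2 concrete, here is a sketch of the combined-winning-percentage arithmetic. The opponent records below are invented, and this ignores any refinements the NCAA may layer on top of the basic idea:

```python
def strength_of_schedule(opponent_records):
    """SOS as described above: the combined winning percentage of a
    team's opponents to date, one (wins, losses) entry per game."""
    wins = sum(w for w, _ in opponent_records)
    losses = sum(l for _, l in opponent_records)
    return wins / (wins + losses)

# Hypothetical opponents played to date:
print(strength_of_schedule([(14, 4), (12, 6), (9, 9), (16, 2)]))  # ~0.708
```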
I have been reading your columns on ESPN.com and have come to the conclusion that you constantly change your views on how teams get into the tourney. For example, you told one writer: "Remember, the committee is evaluating TEAMS, not conferences. While many of your observations will shake out in the end, teams are to be considered without regard to conference affiliation."
OK, this is valid. But then you turned right around and said this to another writer: "Conference record does mean quite a bit. It is, after all, the most even-handed way of comparing league partners."
You completely did a 180 and changed your way of thinking. So what do you think now? Do you change your mind so that people do not make you look like a fake? I mean, come on. Pick a viewpoint and stick to it.
If anything, I have been accused of not being flexible enough with regard to applying Selection Committee criteria. You misunderstood what I meant when using the term "conference record." I was referring to the league record of INDIVIDUAL TEAMS, not the conference as compared to other conferences. This position is in complete agreement with my initial argument.
Sorry to disappoint you.
First of all, thanks for the great job you do on your brackets at ESPN.com. It's obvious to me that your brackets better reflect the way the committee seeds conference teams around the regions.
I had a chance to catch you on Ted Sarandis' show on WEEI in Boston this week, and I really enjoyed listening to you. Ted's about the only person on that station that's worth listening to in my opinion, and your segment with him was really fascinating for us college hoops junkies.
I did have a specific question on an issue you brought up then, and also mentioned on your Q&A at the ESPN site. In response to a question about the Big Ten, you write: "Teams are to be considered without regard to conference affiliation. If the 11th SEC team is better than the second ACC team, the latter is supposed to stay home."
I understand that the NCAA's written policies state this, and until last year I believed this was true. But by removing a Vanderbilt team that was in practically every serious observer's bracket to make way for Arkansas, don't you think the committee cast that principle into doubt?
Full disclosure: I'm a Vanderbilt graduate, so this issue is very personal to me. I appreciate your words of support for last year's team in your columns. Despite what you have to say, we both know that YOU had all 64 teams right last year and it was the committee who got 63 of 64.
I will probably never be able to look at the selection process as dispassionately as I'd like. Nonetheless, I feel that the committee compromised the credibility of its successors last year in the way that a Supreme Court might by throwing out precedent and legislating from the bench. I have no problem at all with a quota of mid-majors in the tournament, but that's obviously different from a bunch of mid-major commissioners and ADs acting in their self-interest in the crunch of Selection Sunday.
Frankly this year, despite Craig Thompson's departure, I have no clue what to expect from the committee. But until I see otherwise, and until a member is added from the SEC, I will assume that the de facto limit on teams from our conference is six.
The best I can tell you is that more than one committee member has confirmed that Vanderbilt was among the very last few "out" a year ago (along with Virginia, Villanova and Notre Dame). Did the committee take the easy way out by swapping Arkansas for Vandy after the 'Hogs won the SEC tourney that afternoon? We'll never know.
What I am confident of is that the absence of an SEC representative on the committee will never keep your conference from receiving seven bids (if warranted). Call me crazy, but I really believe that there is so much scrutiny of the process these days -- and I am proud to be a part of the posse -- that it is very difficult for individual members to play politics.
I am so happy to see that Bracketology is back! I live in Nebraska, and so there isn't much going on sports-wise after football season wraps up. I won't root for the Huskers (even though Danny Nee got the ax), but I love college hoops. So, I want to thank you for getting the brackets up and running again.
I do have one quick question, if you have time to explain it to me. I am having trouble understanding the "play-in" game. How is it decided who will play in it? In your latest bracket, you have the winners of the SWAC and Ivy League in the play-in. Is that because they are the worst two teams in the field, or because their conferences are the weakest in the RPI?
Say, for example, the No. 12 seed makes a run and wins the Big 12 tournament. Would they, then, play in that March 13th game?
Good questions all, Jay. And I will address the play-in format in a weekly bracketology column shortly. However, so many have asked that I want to explain the process in as many places as possible.
It used to be, when play-in games were required, that the teams chosen for them were based on Conference RPI from the prior season. In other words, the lowest-rated conferences from Year X knew their champions would be play-in participants in Year X+1.
This year, for the only play-in game required, the guidelines have changed. The committee is to assign the two lowest seeds on its S-curve (No. 64 and No. 65) to the play-in game, regardless of conference affiliation. So, yes, it is hypothetically possible that the play-in could feature, say, Northwestern and Rhode Island.
For purposes of projecting, I apply the same criteria. This past week, the projected SWAC winner (Alabama State) and Ivy winner (Yale) happened to fall into slots 64 and 65.
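In code terms, the new rule is about as simple as it gets. A toy sketch (the list is abbreviated, and slots 64 and 65 are this week's projections from above):

```python
def play_in_matchup(s_curve):
    """The two lowest seeds on the 65-team S-curve (Nos. 64 and 65)
    meet in the play-in game, regardless of conference."""
    return s_curve[-2], s_curve[-1]

# Slots 1-63 elided; 64 and 65 per this week's projection.
s_curve = ["(teams 1-63)"] + ["Alabama State", "Yale"]
print(play_in_matchup(s_curve))  # ('Alabama State', 'Yale')
```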
On the web site it says that Kansas is the host for the first round in Kansas City. Does this mean that Kansas cannot play there? I was just wondering, because I thought that the host of that site was the Big 12 conference.
Jenny, you are right. Whatever web site you are looking at is wrong. The Big 12 is indeed the official NCAA host at Kemper Arena. The Jayhawks may be sent there if the Selection Committee chooses.
I love the column as well as the give and take Q&A after it is posted. Last week you mentioned that the NCAA selection committee definitely focuses on momentum going into the Big Dance, whether it is a strong showing in the conference tournament, etc. Have you given any thought to constructing/using a "hot streak" ranking that would use similar inputs, but be "adjusted" for momentum over, say, the last 7-8 games?
For whatever reason, the committee looks at "last 10 games" when evaluating each team (for both selection and seeding purposes). This data is part of the so-called "Nitty Gritty Report" that each member receives and is also part of a new web site I am involved with, www.bracketology.net.
Obviously this could turn into a long and tedious task, but, if you could, please send me the criteria with which the committee makes its selections for the tournament. I hope this is not too overwhelming. Perhaps there is a web site I can visit to save you the time. What I am looking for are the criteria which the committee looks at and what factors hold more water than others.
I never start a new "bracketology" season without stopping at www.ncaa.org. Just about every year, there is a new wrinkle or two.
There is no accurate replication of the RPI in the public domain. The NCAA makes secret adjustments, and nobody knows exactly what they are.
I disagree, having seen NCAA data and compared it to the "Adjusted RPI" produced by Collegiate Basketball News. Last year, they were identical.
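For the record, the basic formula itself is no secret; only the adjustments are. A sketch using the standard published weights (the sample inputs are made up):

```python
def rpi(wp, owp, oowp):
    """Basic RPI: 25% a team's winning percentage, 50% its opponents'
    winning percentage, 25% its opponents' opponents' winning
    percentage -- before any of the adjustments discussed above."""
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

print(rpi(wp=0.78, owp=0.61, oowp=0.55))  # -> 0.6375
```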
Joe Lunardi is a regular in-season contributor for ESPN.com and ESPN Radio. He also edits the Blue Ribbon College Basketball tournament preview edition. Write to Joe at jlunardi@home.com.