Jan 7 2010

Adjusting Individual Defensive Efficiency Ratings

A couple of months ago I presented individual defensive efficiency ratings for the 2008-09 regular season that I extracted from play-by-play data. In this post I will present a method for adjusting these ratings in an attempt to get a clearer picture of a player’s defensive abilities.

Adjusting the Defensive Ratings

To adjust these defensive ratings I fit a multilevel model that separates the individual offensive, individual defensive, and team defensive impacts on individual efficiency ratings. I fit this model for each of the 2006-07 through 2009-10 regular seasons, and I also fit a single model using all of the data from those seasons.
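
To give a flavor of how the adjustment works, here is a simplified sketch that approximates the multilevel shrinkage with ridge regression on player and team dummy variables. The file name, column names, and penalty below are illustrative assumptions, not the exact model I fit:

```python
# Approximate a multilevel (partial-pooling) fit with ridge regression on
# offensive-player, defensive-player, and team dummies. Hypothetical data:
# one row per possession with the players involved and the points scored.
import pandas as pd
from sklearn.linear_model import Ridge

possessions = pd.read_csv("possessions.csv")  # hypothetical input file

X = pd.get_dummies(
    possessions[["off_player", "def_player", "def_team"]],
    prefix=["off", "def", "team"],
)
y = 100 * possessions["points"]  # points per 100 possessions scale

# The penalty shrinks every coefficient toward 0 (the league-average
# player), mimicking what the multilevel model's priors do.
fit = Ridge(alpha=50.0).fit(X, y)

coefs = pd.Series(fit.coef_, index=X.columns)
def_ratings = coefs[coefs.index.str.startswith("def_")].sort_values()
print(def_ratings.head(10))  # most negative = most points prevented
```

The results of these fits can be found in the following spreadsheet: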

Adjusted Individual Defensive Efficiency Ratings

In this spreadsheet you will find tabs for each of these model fits. The ratings are expressed as the player’s difference from the average defender. Standard errors are listed along with color-coded confidence levels that indicate how much confidence we have in each estimate: green means we’re confident the player is not average, red means we have little confidence the player is not average, and yellow is the middle ground between the two.
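
As a rough illustration, a color code like this can be derived from each estimate and its standard error; the one- and two-standard-error cutoffs below are just for illustration, not exact spreadsheet thresholds:

```python
# Bucket an estimate into a confidence color using its standard error.
# The cutoffs (1 and 2 standard errors) are illustrative assumptions.
def confidence_color(estimate: float, std_error: float) -> str:
    z = abs(estimate) / std_error  # distance from the average defender, in SEs
    if z >= 2.0:
        return "green"   # confident the player is not average
    if z >= 1.0:
        return "yellow"  # middle ground
    return "red"         # little confidence the player is not average

print(confidence_color(-14.7, 4.0))  # -> "green"
```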

Interpreting the Ratings

To interpret these ratings, you have to think in terms of conditioning on the defensive player having used the possession. For example, Dwight Howard’s 2009-10 rating suggests that when he uses a defensive possession, the individual offensive efficiency rating of the player who used the offensive possession is 14.7 points (per 100 possessions) lower than it would be against an average defender.

It is important to note that because this model shrinks estimates toward the mean, bad defenders who get little playing time will end up looking average.
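
To see why, consider a toy version of the shrinkage: the adjusted rating is a precision-weighted blend of the raw rating and the league mean, so a small sample gets pulled hard toward average. The variance numbers here are made up for illustration:

```python
# Toy shrinkage example. A player's adjusted rating blends his raw rating
# with the league mean (0); the weight depends on possessions defended.
def shrunk_rating(raw, n_poss, between_var=16.0, within_var=12000.0):
    w = between_var / (between_var + within_var / n_poss)  # weight on raw
    return w * raw  # league mean is 0, so the blend reduces to w * raw

print(shrunk_rating(-10.0, 50))    # ~ -0.6: little data, pulled to average
print(shrunk_rating(-10.0, 2000))  # ~ -7.3: lots of data, signal survives
```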

These ratings also adjust for the team the player plays for, since, as Dean Oliver shows in Basketball on Paper, some good defensive players play on poor defensive teams. The general idea is to account for “Dumars-like” players while also recognizing that no single individual has complete control over how well a team defends.

I haven’t done anything rigorous to fully study the impact of this team adjustment, but it seems to make sense after eyeballing its effect on players like Pau Gasol and Chris Bosh. Eyeballing something doesn’t give us a ton of confidence, though, so this adjustment is worth a deeper look in the future.

Players Still Underrated After Adjustment

These adjusted ratings do little to account for the fact that we don’t have a great way of crediting defenders when opponents make or miss shots. Guys like Shane Battier who defend the opponent’s best offensive player aren’t going to stand out in these ratings.

What Makes Sense? What Doesn’t?

I’m still trying to learn what makes a good defender, so I’d like to hear your thoughts on which ratings make sense and which don’t. Which players have reputations for being good defenders that this model isn’t estimating well?

If you enjoyed this post, subscribe to the RSS feed to get notified of new posts.

13 Comments on this post

  1. DSMok1 said:

    What exactly did the multilevel model do? You didn’t really explain how these ratings differ from the unadjusted ones….

    Also, have you looked into not regressing to the mean? In my regressed statistical +/- models, I regress to a prediction based on the team efficiency and the minutes played (this prediction has a much lower stderr than simply using the league bell curve).

    January 7th, 2010 at 3:59 pm
  2. Ryan said:

    The multilevel aspect shrinks the estimates and reduces the standard errors. The model overall adjusts these estimates based on opponent and team ability.

    As for the shrinkage technique, do you mean shrink to a different mean, such as one based on minutes played?

    January 7th, 2010 at 4:07 pm
  3. DSMok1 said:

    “As for the shrinkage technique, do you mean shrink to a different mean, such as one based on minutes played?”

    Yes, I ran a regression of SPM onto team efficiency margin, minutes played, position, age, etc., threw out the non-significant correlations, and used that prediction (and standard error) as the baseline toward which to regress SPM. I finished the up-to-date college Statistical Plus/Minus yesterday. I ended up with these terms in my “prediction”:

    Intercept: -6.66
    MPG: 0.25
    TmMargin: 0.19 (in pts/100 poss; kenpom’s OE minus DE)
    Yr: 0.19 (Fr = 1, So = 2, …)
    Pos: 0.13 (G = 1.5, C = 5, …)

    (All of these were significant; MPG and TmMargin extremely so; Yr had a p-value of .0015 and Pos a p-value of .01.)

    That prediction regression yielded a standard error of 3.66, but that was skewed by using some highly fluctuating SPMs at low MPG to get that end of the distribution. I normalized the StdErr and got ~3 as the standard error of that estimate.

    Then, to actually calculate an estimated true-talent SPM, I took the Raw Adjusted SPMs (i.e., adjusted for the team margin) and their standard errors (usually >3 at this point in the season) and combined them with the prediction according to the Bayesian framework I outlined here: http://sonicscentral.com/apbrmetrics/viewtopic.php?p=28221#28221
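
    In code form, that combination is just precision weighting of two normal estimates (the numbers here are illustrative):

    ```python
    # Precision-weighted (Bayesian) combination of the prediction (prior)
    # and the raw adjusted SPM (observation). Illustrative numbers only.
    def combine(prior_mean, prior_se, raw_mean, raw_se):
        wp, wr = prior_se ** -2, raw_se ** -2  # precisions
        mean = (wp * prior_mean + wr * raw_mean) / (wp + wr)
        se = (wp + wr) ** -0.5
        return mean, se

    # Prediction says +2.0 (SE 3.0); raw adjusted SPM says +8.0 (SE 3.5).
    print(combine(2.0, 3.0, 8.0, 3.5))  # -> (~4.5, ~2.3)
    ```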

    January 7th, 2010 at 5:18 pm
  4. DSMok1 said:

    It would work better if I had more years of data; the Bayesian prior is a little too powerful at the moment. There just isn’t a big enough sample…. Right now, 3 Dookies lead the list for True Talent. (Scheyer leads the SPM lists by any measure, actually…) Fortunately, I do have last year’s data… I’m going to work up an “aging curve” for SPM for college soon and integrate last year’s data. (Again, look at what I outlined in that APBR post.)

    January 7th, 2010 at 5:24 pm
  5. Deepak said:

    Any thoughts on why Luis Scola rates so highly for the 08-09 season? I’m not sure, for instance, why he’d be getting a better rating than Yao Ming.

    January 7th, 2010 at 6:06 pm
  6. Ryan said:

    One idea I’ve had is to classify players as starters vs bench players, since I’d expect the mean ability of a bench player to be lower than that of a starter. I’ve had some issues with this in the past, mostly because game situation has a further effect on this. The best approach would be to regress to the different groups while controlling for game situation, but I didn’t want to get into that level of analysis initially.

    Deepak, the ratings for Scola versus Ming aren’t that far apart, and they certainly aren’t statistically distinguishable from each other. That said, Scola gets steals and defensive rebounds and draws fouls on defense at slightly higher rates. I wouldn’t read too much into it, though.

    January 7th, 2010 at 7:04 pm
  7. DSMok1 said:

    “One idea I’ve had is to classify players as starters vs bench players, since I’d expect the mean ability of a bench player to be lower than that of a starter. I’ve had some issues with this in the past, mostly because game situation has a further effect on this. The best approach would be to regress to the different groups while controlling for game situation, but I didn’t want to get into that level of analysis initially.”

    From what I’ve seen, MPG is the best independent predictor of ability. It adjusts for injuries (games missed). It is dependent on a coach’s perception of the player, which is an orthogonal (did I use that right?) dataset from the box score. The only issue is that it undervalues Tim Duncan and other veterans in the NBA whose coaches want to take care of them. However, MPG is EXCELLENT in college, where there are no such worries.

    January 8th, 2010 at 10:11 am
  8. Ryan said:

    I don’t like the idea of having a moving average that allows infinitely many possible groups, but I do like the idea of using MPG to differentiate between what I was calling “starters” versus “bench players”. One simple case is Varejao, although I’m pretty sure he’s the exception rather than the rule.

    January 8th, 2010 at 11:33 am
  9. DSMok1 said:

    Hey Ryan–could you add the calculated home court advantage to your NBA Power Rankings? Did you calculate a separate HCA for each team, or a league-wide one?

    January 13th, 2010 at 5:21 pm
  10. Ryan said:

    The ranking model estimates the relationship between home court advantage and the probability of scoring 0, 1, 2, or >=3 points on a possession, so it’s not exactly straightforward to interpret. If you’re looking for a single number (such as the proverbial 3 points), then I think using the team’s home versus away ratings would be very close to what my model estimates.
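
    To make that concrete, a toy version of the possession-outcome piece might look like this (the data file and column names are hypothetical):

    ```python
    # Toy multinomial model of possession outcomes {0, 1, 2, 3+} points
    # with a home-court indicator. File and column names are hypothetical.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    df = pd.read_csv("possessions.csv")
    X = df[["is_home"]].values       # 1 if the offense is at home, else 0
    y = df["points_bucket"].values   # 0, 1, 2, or 3 (3 stands for >= 3)

    fit = LogisticRegression().fit(X, y)  # multinomial over the four buckets

    # Collapse the model into a single expected-points number for HCA.
    probs = fit.predict_proba([[1], [0]])     # rows: home, away
    exp_pts = probs @ np.array([0, 1, 2, 3])  # treats 3+ as 3, a simplification
    print(100 * (exp_pts[0] - exp_pts[1]))    # HCA in points per 100 possessions
    ```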

    January 13th, 2010 at 8:44 pm
  11. Kevin said:

    Ryan,

    In terms of what makes a good defender, I have some thoughts. Ball denial is a skill that seems as difficult to quantify as the ability to make a man miss his shot without actually blocking it. One idea I had is to look at the changes in USG that occur when a particular defender is matched up on a given player. If the USG goes down significantly, that’s a pretty good indication that the defender is denying the offensive player the ball.

    There are some issues with this, because like any counterpart measure, simply looking at who is on the court at a given time won’t tell us who is actually guarding whom due to cross-matching (my own favorite Bulls have this problem a lot with Derrick Rose and Kirk Hinrich, as they often switch duties, with Rose taking the SG and Hinrich the PG). Still, I think looking at the effect on USG rates would be fruitful.
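
    A rough sketch of the comparison I have in mind, ignoring the cross-matching problem (all file and column names are made up):

    ```python
    # Compare an offensive player's usage with a given defender on vs. off
    # the floor, using stint-level data. Column names are made up.
    import pandas as pd

    stints = pd.read_csv("stints.csv")
    # Columns: off_player, defender_on (True if the defender of interest
    # is on the floor), possessions, plays_used (plays the player ended).

    g = stints.groupby(["off_player", "defender_on"]).sum(numeric_only=True)
    usage = g["plays_used"] / g["possessions"]  # usage rate in each split

    # Usage with the defender on minus usage with him off; a big drop
    # suggests the defender is denying his man the ball.
    on_off = usage.unstack("defender_on")
    print((on_off[True] - on_off[False]).sort_values().head(10))
    ```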

    March 31st, 2010 at 3:35 pm
  12. Ryan said:

    Kevin, I think you bring up an excellent idea that I’ve never looked at before. There’s still a lot lacking in how we measure defense, and this is certainly something worth exploring. Thanks for the idea!

    April 8th, 2010 at 3:51 pm
  13. Jason said:

    Any plans to do Adjusted Individual Defensive Efficiency Ratings for the 2010-11 season?

    July 19th, 2011 at 1:24 am