a better playoff output projection

If you’ve been following me for a while, then you’re probably aware of a stat I created called POP, which aims to predict playoff series outcomes and more accurately identify the best teams in the league.

There’s been some recent work, as well as suggestions from smarter people, that has made me consider making changes to the stat.

In its current iteration, POP is (Fenwick Close + ((0.6 x Goals Close) / PDO Close)) x (PP% + PK%). The first part of the equation is very similar to the weighted shot differential Tom Tango is working on right now, so it was interesting to see someone else tackling the same idea.
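To make that concrete, here’s a minimal sketch of the current formula in code. It assumes the inputs are team-level close-score rates expressed as decimals; the function and variable names are just mine for illustration.

```python
def old_pop(fenwick_close, goals_close, pdo_close, pp_pct, pk_pct):
    """Current POP: (Fenwick Close + (0.6 x Goals Close) / PDO Close) x (PP% + PK%).

    Assumes all percentage inputs are expressed as decimals (e.g. 0.52 for 52%).
    """
    five_on_five = fenwick_close + (0.6 * goals_close) / pdo_close
    special_teams = pp_pct + pk_pct
    return five_on_five * special_teams

# Hypothetical team: 52% Fenwick Close, 53% Goals Close, 1.010 PDO Close,
# 19% power play, 84% penalty kill.
print(old_pop(0.52, 0.53, 1.010, 0.19, 0.84))
```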

Tango’s original stat uses all shot attempts (mine excludes blocked shots) and weighs any non-goal shot at 0.2. The comments on that post, though, suggest he’s moved to weighing non-goal shots at 0.1. By multiplying goals by 0.6, I’m essentially weighing every non-goal shot at 0.1 already. In that regard we’re on the same page.

The big difference is that his stat didn’t reflect the score of the game, while mine at least attempted to do so by only using data from when the score was close, in an attempt to eliminate score effects.

But there’s a problem with using just “close” data, as recent work by Micah McCurdy has shown that adjusting for the score state, as opposed to discarding data, is much better at predicting future outcomes. My only hang-up was that I wanted to keep goals and Fenwicks in the same language (both at close), and while Score-Adjusted Fenwick was easy to find, score-adjusted goals had never really been looked at. Thanks to Micah’s work, it was easy enough to do myself.

So the first adjustment I made to the formula is ditching close data in favour of Score-Adjusted Fenwick and Score-Adjusted Goals, creating Score-Adjusted Weighted Shots (SAWS). Matt Cane recently looked into this as well using Tango’s method, so the difference here is that I exclude blocked shots and weigh non-goal shots at 0.1 instead of 0.2.
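Roughly speaking, the calculation looks something like the sketch below: every unblocked attempt gets scaled by a multiplier for the score state it happened in, goals count at 1 and non-goal attempts at 0.1, and the result is expressed as a For%. The multipliers here are placeholders for illustration, not Micah’s actual coefficients.

```python
# Placeholder score-state multipliers (from the shooting team's perspective).
# Illustrative only -- not the actual score-adjustment coefficients.
SCORE_WEIGHTS = {
    "down_2_plus": 0.90,
    "down_1": 0.95,
    "tied": 1.00,
    "up_1": 1.05,
    "up_2_plus": 1.10,
}

def saws_for_pct(events_for, events_against):
    """Score-Adjusted Weighted Shots as a For%.

    Each event is (score_state, is_goal) for an unblocked shot attempt.
    Goals count at 1.0, non-goal unblocked attempts at 0.1, and each event
    is scaled by the placeholder multiplier for its score state.
    """
    def weighted_total(events):
        return sum((1.0 if is_goal else 0.1) * SCORE_WEIGHTS[state]
                   for state, is_goal in events)

    weighted_for = weighted_total(events_for)
    weighted_against = weighted_total(events_against)
    return weighted_for / (weighted_for + weighted_against)
```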

Since my initial goal was predicting playoff outcomes, I checked how well SAWS did versus the old POP as well as SAF.

Since 2009-10, the better SAF team won 50 of 75 series, the better (old) POP team won 56 of 75 (factoring in the last 21 games before the playoffs bumps it up to 58 of 75), and the better SAWS team matched that total, winning 56 of 75 as well.

I would think that’s pretty significant, and a decent indication that SAWS is a better measure of true talent (with regards to the playoffs at least).
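As a rough back-of-the-envelope check (a sketch against pure chance, not a formal comparison between the measures), here’s how unlikely it would be for coin-flip picks to get 56 or more of 75 series right:

```python
from math import comb

def prob_at_least(k, n, p=0.5):
    """Probability of getting k or more of n picks right at a fixed hit rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance that coin-flip picks get at least 56 of 75 series right.
print(prob_at_least(56, 75))  # roughly 1e-05
```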

The second adjustment I made was to how I treated special teams. By multiplying them in, POP gave too much credit to very good special teams and not enough to weaker ones. What I did instead is weigh SAWS by the average share of time played at 5-on-5 and weigh STE (PP% + PK%) by the average share of time played up or down a man. I also divided STE by two so that the two components start on the same scale (an average around 50%).

So the new formula for POP is (0.78 x SAWS) + (0.22 x STE/2).
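In code, with SAWS as a For% hovering around 50% and STE as PP% + PK% (so STE/2 also averages around 50%), a sketch of the new formula:

```python
def new_pop(saws, pp_pct, pk_pct):
    """New POP: SAWS weighted by the ~78% of time played at 5-on-5,
    special teams (PP% + PK%, halved to sit on the same ~50% scale)
    weighted by the ~22% of time played up or down a man.
    """
    ste = pp_pct + pk_pct
    return 0.78 * saws + 0.22 * (ste / 2)

# Hypothetical team: 52% SAWS, 19% power play, 84% penalty kill.
print(new_pop(0.52, 0.19, 0.84))
```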

The better (new) POP team has won 58 of 75 series since 2009-10. The crazier thing is that it actually went 7-8 in 2011-12 (there’s something strange about that year, I think).

With all that in mind, there are still some things that need to be looked at.

It’s clear it works in the playoffs, but how it performs in the regular season is uncertain for now. The correlation to end-of-season points increased with the adjustment, from .707 to .739, but because of the difficulty in computing it (and my own inability to do so), I’m not sure how well it correlates to future points. A game-range option on war-on-ice or puckalytics, or the addition of SAG to puckon.net, would help immensely. My gut says it would do very well, but without testing I can’t be certain.

Early-season results and projections look much better with the adjustment, but we won’t know for sure how accurate they are until the end of the year. They also appear to be much more stable. Here are the last four weeks (note that it’s a weekly update, not daily).

[Screenshots: weekly POP rankings from the last four weeks]

There’s also the use of PP% and PK% for special teams. For now, it’ll do fine, but there’s likely a much better way to look at special teams that can be implemented.

Then there’s the fact that POP’s playoff forecasts improved with more recent data; it’s probably worth checking whether the same applies to the new version of the model. Again, game ranges would be helpful.

Lastly, an application to the player level would be of great use. I dabbled in this last year with the old POP and the results were promising when looked at relative to the team (it’s always good when Crosby is at the top). Here’s last season’s top 25 in raw weighted shots at 5-on-5. It’s not a bad list by any means.

[Screenshot: last season’s top 25 players by raw weighted shots at 5-on-5]

Calculating the 5-on-5 portion is easy enough (albeit even more riddled with variance), but how a player performs on special teams is a lot trickier to determine.
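For the 5-on-5 piece, one possible reading of “raw weighted shots” at the player level is simply individual goals at 1.0 plus individual non-goal unblocked attempts at 0.1, as in the sketch below; that’s an assumption on my part about the exact counting, and it leaves the special-teams side untouched.

```python
def player_raw_weighted_shots(goals, unblocked_attempts):
    """Individual raw weighted shots at 5-on-5: goals at 1.0,
    non-goal unblocked attempts at 0.1 (assumed counting, see above)."""
    return goals + 0.1 * (unblocked_attempts - goals)

# Hypothetical player: 30 goals on 250 individual unblocked attempts at 5-on-5.
print(player_raw_weighted_shots(30, 250))  # 30 + 0.1 * 220 = 52.0
```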

In any case, there’s still a lot of work to be done.
