Data Deep Dive: Awards Season Has Been Weird

Through six weeks, the Awards season has been just plain weird. At least, it’s felt that way. “Spotlight” comes back from the dead to win Best Performer (BP). A poorly reviewed “Daddy’s Home” outperforms pro estimates by 115% on its opening weekend. A well-reviewed “The Big Short” craters, coming in 26% below its expected weekend when it expands to more theaters.

Not content with it just feeling weird, I decided to dig into the data to see what has really been going on.

The following table is an update to one I published in Week 8 of the Fall season. It shows how the predicted BPs did if you averaged the ProBoxOffice.com and ShowBuzzDaily forecasts each week of the Summer and Fall seasons (full table):

[Summer/Fall season results table]

The last three columns are really the focus here: they show the release week of the actual BP, the predicted finish of the actual BP, and where my model ranked the actual Perfect Combination (PC). I’ve color-coded the cells blue for 1s, green for 2-5, orange for 6-20, and red for weeks where my model didn’t have the actual PC in its top 20.

What you can see here is that, over this span, the professional forecasters did a nice job, correctly predicting the BP 39% of the time (11 out of 28 weeks). The BP most often came from a film in its first week of release (43% of the time, 12 out of 28 weeks) and from among the top 4 predicted BP candidates each week. My model is built on these observations and would have picked the PC 32% of the time. Hitting .300 in professional baseball is generally considered good, and in a Fall season where the top players picked the PC only 4 or 5 times all season, that rate was competitive.
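For anyone curious how those hit rates fall out of the table, here’s a minimal Python sketch of the counting involved. The weekly records below are illustrative placeholders, not the actual spreadsheet values:

```python
# Minimal sketch of how the hit rates above could be tallied.
# Each record is one week; the values here are placeholders, not the real data.
weeks = [
    # avg. pro-forecast rank of the actual BP, release week of the actual BP,
    # and where my model ranked the actual Perfect Combination (PC)
    {"bp_pred_rank": 1, "bp_release_week": 1, "pc_model_rank": 3},
    {"bp_pred_rank": 4, "bp_release_week": 2, "pc_model_rank": 1},
    # ... one record per week of the Summer and Fall seasons (28 total)
]

n = len(weeks)

# How often the pros' #1 predicted BP was the actual BP (11/28, about 39%).
pro_hits = sum(1 for w in weeks if w["bp_pred_rank"] == 1)

# How often the BP came from a film in its first week of release (12/28, about 43%).
first_week_bps = sum(1 for w in weeks if w["bp_release_week"] == 1)

# How often my model's top pick matched the actual PC (about 32%).
model_pc_hits = sum(1 for w in weeks if w["pc_model_rank"] == 1)

print(f"Pros called the BP:    {pro_hits / n:.0%}")
print(f"BP in release week 1:  {first_week_bps / n:.0%}")
print(f"Model hit the PC:      {model_pc_hits / n:.0%}")
```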

However, the Awards season has been very different so far (full table):

[Awards season results table]

Here, the BPs have come from all over the place, both in terms of release week and of predicted finish based on the pro forecasts. In fact, the professional forecasters have been right about the BP only once, with “Krampus” in Week 1, which works out to 17% of the time. Not surprisingly, given these facts, my model has yet to pick a PC correctly.

All is not lost, though. If you had used the lineup suggested by my model each week, as I do with my model-only account, you’d still be in 428th place and in the 98th percentile of all players. The model hasn’t been perfect yet, but it has still been relatively competitive.

As you may have noticed from my recent columns, I’ve been using the model as a baseline while looking for films, among the new releases in particular, that the professional forecasters are most likely to be wrong about. I haven’t seen a reliable pattern there yet, other than that you should never underestimate a horror movie.

Time will tell whether this unpredictability continues. I’m not quite ready to change my model in response to these observations, but I’ll write occasional follow-ups so that we can all become better, more consistent players. If you find yourself behind right now, don’t get discouraged. Twice already in our six-week Awards season, the difference between the PC and the conventional wisdom my model spits out has been on the order of $70M in score, and with seven weeks left there are plenty more chances to catch one of those big deltas.
