Data Deep Dive: Best Performer Forecasted vs Actual

Last week, Phil’s Phun Phlicks left a pretty brilliant comment on my Thursday prediction article that serves as the inspiration for this piece:

This is a game of outliers. I think we can all safely assume that Maze Runner last week fell into the 26% of that hypothetical sample size. So here’s my question, since the data set already exists – how many bonus winners fall into that 26%? And how many, uh, “dream-crushers” like Everest fall into that 26%? I’d be curious how many of those 26% are above the 10% and below the 20%.

Ask and you shall receive.

Why should you care?

Phil is right that not all movies matter the same amount each week; those that are candidates for Best Performer (BP) typically matter a lot more than the others (this past week notwithstanding).  While most of my analysis has centered on finding bounds for expected results, Phil has an interesting theory that the opposite should be in play.  Finding indicators of extraordinary results is important too.

Where’d I get my data?

Historical Best Performer winners from the Research Vault, ProBoxOffice.com forecasts, and weekend actuals.

What were my findings?

Here’s the table that gets generated from all that data:

[Table: BP analysis through Week 8 — forecasted BP vs. actual BP by week]

There are lots of interesting aspects to this.  First, notice how often during the Summer season the forecasted BP ended up being the actual BP.  The longest Summer drought lasted three weeks, whereas the longest Fall drought lasted five weeks.  This points to the idea that Summer was easier to forecast.
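
For the curious, counting those droughts is a simple run-length problem.  Here's a minimal sketch in Python with made-up hit/miss flags rather than the real season data; it only assumes a list of booleans marking whether the forecasted BP won each week:

```python
# Minimal sketch: the longest "drought" is the longest run of consecutive
# weeks where the forecasted BP was NOT the actual BP.
# The flags below are placeholders, not the actual Summer data.

def longest_drought(forecast_hit_each_week):
    """Return the longest streak of consecutive misses."""
    longest = current = 0
    for hit in forecast_hit_each_week:
        if hit:
            current = 0
        else:
            current += 1
            longest = max(longest, current)
    return longest

# Example with placeholder flags (True = forecasted BP won that week)
summer = [True, False, False, False, True, True, False, True]
print(longest_drought(summer))  # -> 3
```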

The biggest jumps from forecasted ranking to actual BP were “Mr. Holmes” in Summer Week 11 and “Hotel Transylvania 2” in Fall Week 4.  “Mr. Holmes” was the only actual BP that wasn’t forecasted at all, but it had just come off a tiny but successful pair of opening weekends before getting a much larger distribution that week.  Not only was that foreseeable, it was a pretty popular topic in the forums that week.

“Hotel Transylvania 2” jumped from 7th to 1st during The Great “Everest” Whiff of 2015 (copyright Phil’s Phun Phlicks).  That means that out of the 22 weeks in this sample, only twice did a movie jump from beyond 4th in the forecasted BP rankings to take the actual BP, and both were pretty special circumstances.

Which brings me to Phil’s original question.  It boils down to this: if my previous research is correct and 74% of ProBoxOffice.com movies fall between +10% and -20% of their forecasts, how often does the Best Performer get decided by that other 26%?  Let’s break that down.
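
To make that question concrete: a film is “within expectations” here if its actual gross lands between -20% and +10% of its ProBoxOffice.com forecast.  A minimal sketch of that check, with invented dollar figures purely for illustration:

```python
# Sketch of the +10%/-20% band check described above.
# The forecast/actual grosses are invented for illustration only.

def within_band(forecast, actual, low=-0.20, high=0.10):
    """True if the actual gross falls between -20% and +10% of the forecast."""
    error = (actual - forecast) / forecast
    return low <= error <= high

print(within_band(forecast=50_000_000, actual=47_000_000))  # -6%  -> True
print(within_band(forecast=50_000_000, actual=38_000_000))  # -24% -> False
```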

There are 22 entries in the table above, and the 9 of them (40.9%) shown in white are weeks where the forecasted BP ended up being the actual BP.  I’m cheating a little bit there in that I’m lumping together those that won while staying within the +10%/-20% range and those that won while exceeding it.  The argument is that if you chose the forecasted BP and things turned out even better for you, you wouldn’t exactly complain.

The three entries in green (13.6%) are weeks where the forecasted BP didn’t win but performed within the +10%/-20% range, as did the actual BP winner.  In other words, these are situations where movies performed within forecasted expectations and a different movie won BP than anticipated.  I’d argue that those are weeks with particularly good FB$ prices.

In yellow are six weeks (27.3%) where the forecasted BP didn’t win and performed within the +10%/-20% range, but the actual BP winner had a better week than the expected +10% top end.  Put another way, the forecasted BP film did what it was supposed to do, but another film did so much better than expected that it overtook that candidate and snagged BP.

Finally, in red are the four weeks (18.2%) where the forecasted BP tanked below -20% of its forecast.  Again, I’m cheating a little here and not separating the actual BP winners into those that met expectations and those that exceeded them, but you can see that each situation happened twice in our 22-week sample.
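
Pulling the four buckets together, here is one way the color coding could be reproduced in code.  This is just my reading of the rules as described above, with a hypothetical week rather than the real table:

```python
# Sketch of the white/green/yellow/red buckets described above, as I read them.
# The week data here is hypothetical; this is not the real season table.

def pct_error(forecast, actual):
    """Signed error of the actual gross relative to its forecast."""
    return (actual - forecast) / forecast

def classify_week(forecasted_won, forecasted_err, winner_err):
    """Bucket a week by the forecasted BP's error and the actual winner's error."""
    if forecasted_won:
        return "white"   # forecasted BP took BP (at or above expectations)
    if forecasted_err < -0.20:
        return "red"     # forecasted BP tanked below -20% of its forecast
    if winner_err > 0.10:
        return "yellow"  # forecasted BP did its job, another film over-performed
    return "green"       # everyone stayed in the +10%/-20% band, different winner

# Hypothetical week: the forecasted BP lands 2% under its forecast but loses BP
# to a film that beat its own forecast by 25%.
f_err = pct_error(50_000_000, 49_000_000)
w_err = pct_error(30_000_000, 37_500_000)
print(classify_week(False, f_err, w_err))  # -> "yellow"
```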

Given all of that, there are a couple of takeaways to consider:

  • Only twice did an actual BP rank lower than 4th in the forecasted BP candidate list.
  • Only once was an actual BP past its 4th week of release.
  • Just under half the time (10 out of 22 weeks), Phil’s assertion is correct: the actual BP was determined by a forecasted BP failing to meet expectations and/or another film exceeding expectations enough to take BP for the week.
