Data Deep Dive: How Accurate Are Weekend Forecasts?

Instead of writing a weekly recap article on Monday or Tuesday that ends up being a poor imitation of what BoxOfficeGuy already writes on the main FML site, I decided to go a different direction and dig deeper into data analysis.  This week I explored a question that I’ve asked myself many times:

How accurate are Weekend Forecasts?

Why should you care?

Without Weekend Forecasts, you are left to guess the value of each FML screen at your disposal, and any combinations you come up with are a crapshoot.  This data, provided for free by the way, is vital baseline information that helps eliminate bad FML choices.  If you could put some bounds on its accuracy and understand it better, it would help tune that elimination of bad choices even further.

I should state before discussing my methods that they generally do a great job at a difficult task.  Much like real estate, predicting the box office take of a movie in any given week involves comparables, marketing budgets, the star power of the cast, critics’ reviews, the number of screens on which it appears, and a host of other items.  It’s kind of a miracle that they are right as often as they are, and as we’ll see in a bit, they are right a high percentage of the time.

Where’d I get my data?

For this experiment, I collected movie revenue data from Box Office Mojo and compared it to Weekend Forecasts starting with Week 1 of the FML Summer 2015 season and ending with Week 13, inclusive.  There were 132 forecasts in all over this period, and I measured the accuracy of each by subtracting the forecast from the actual and dividing by the forecast.  So, a positive percentage means that the forecast was lower than the actual, and a negative percentage means that the forecast was higher than the actual.

I then put the results into distribution bins, each 5% wide.
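The error metric and the 5%-wide binning can be sketched in a few lines of Python.  This is a minimal illustration, not the code I actually used; the function names and the sample forecast/actual pairs are made up for the example.

```python
import math

def percent_error(forecast, actual):
    # Positive: the movie beat its forecast; negative: it fell short.
    return (actual - forecast) / forecast

def bin_edge(error, width=0.05):
    # Floor the error to the lower edge of its 5%-wide bin,
    # e.g. an error of -0.09 falls in the bin starting at -0.10.
    return math.floor(error / width) * width

# Hypothetical (forecast, actual) pairs in dollars, for illustration only.
pairs = [(10_000_000, 9_100_000), (5_000_000, 5_400_000)]
errors = [percent_error(f, a) for f, a in pairs]   # [-0.09, 0.08]
bins = [bin_edge(e) for e in errors]               # [-0.10, 0.05]
```

Flooring to the bin edge keeps negative and positive errors in symmetric 5% buckets, which is all the histogram below needs.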

What were my findings?

[Figure: distribution of forecast errors, in 5%-wide bins]

Over the 132 forecasts in this data set, Weekend Forecasts landed between -20% and +10% of the actual 74% of the time, an important range to keep in mind when you use my nerdy bin packing comparator this week.  Nobody is going to complain a whole lot about movies that do better than what was forecasted, so let’s look a little closer at the ones that missed badly.
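Checking how many forecasts fall inside that window is a one-liner.  Again, a hedged sketch: the helper name and the sample error list are assumptions for illustration, not the original analysis code.

```python
def share_in_window(errors, low=-0.20, high=0.10):
    # Fraction of forecast errors that land inside [low, high].
    hits = sum(1 for e in errors if low <= e <= high)
    return hits / len(errors)

# Hypothetical error fractions: two of these four fall in the window.
sample = [-0.25, -0.05, 0.02, 0.15]
print(share_in_window(sample))  # 0.5
```

Run over the full 132-forecast data set, this is where the 74% figure comes from.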

Of the 17 forecasts that missed by more than -20%, 12 of them were in their opening weekend.  That’s not surprising, given that the set of variables to take into consideration is wider during that first week.  Beyond that, forecasts appear largely based on things like the number of screens and the revenue from the previous week.

However, just because a movie is in its opening week doesn’t mean that it can’t be forecasted properly.  In this sample, 30 movies were in their opening weekend, and almost two-thirds of them came in better than -20%.  And when you look even more closely at the 12 that weren’t forecast well, there’s no obvious pattern, which gives me something to analyze another time.

In conclusion, the people behind Weekend Forecasts do a great job, and they’ve made my FML playing experience a better one.  Based on this data set, when using your sliders (updated each Wednesday evening), keep that 74% within -20% and +10% in mind for Best Performer candidates.
