Data Deep Dive: FML Prices vs Forecasted and Actual Revenue

A few weeks ago, I looked at how FB$ pricing decayed for individual movies throughout the Summer season, but I didn’t really get into whether that decay meant anything.  This week, I took a closer look at pricing data relative to published forecasts and actual weekend returns to get a sense of how the Fantasy Movie League staff tries (if at all) to influence the player base.

Why should you care?

Most people think of their FML picks each week in terms of value.  In fact, the scoring system itself rewards value in the form of the Best Performer bonus (and, by extension, the Perfect Combination bonus).  You only have 8 slots to fill each week and $1000 Fantasy Bux with which to fill them.  Not all 15 available movies are necessarily wise plays, so figuring out which ones offer the best value is crucial.

What does “value” even mean in this context?  For the Best Performer bonus, it means the actual box office revenue divided by the FB$ price; most people divide that again by 1,000 to get a number that is a little easier to work with.
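As a quick illustration, here is that calculation in Python.  The function name and the numbers in the example are mine and purely hypothetical:

```python
def best_performer_score(actual_revenue, fb_price):
    """Weekend box office dollars per Fantasy Buck, scaled down by 1,000."""
    return actual_revenue / fb_price / 1000

# Hypothetical example: a movie priced at FB$250 that grosses $30M
# over the weekend scores 30,000,000 / 250 / 1,000 = 120.
print(best_performer_score(30_000_000, 250))  # 120.0
```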

Where’d I get my data?

This article looks at the historical value of movies since the beginning of FML, and let me state up front that I have no insight into how the FML staff actually determines pricing each week.  Prices always appear on Mondays, and FML has a strong partnership with ProBoxOffice.com, so it stands to reason that prices are influenced by an early version of the ProBoxOffice.com Weekend Forecast, which is published every Wednesday.

One factor I’m going to ignore in this analysis, without feeling good about it, is the possibility that the FML staff toys with pricing to get us to select a film it knows might fail based on data that isn’t yet public on a Monday (or vice versa).  As a comparison, consider Las Vegas point spreads on NFL games.  Point spreads are not meant to predict the outcome of games; rather, they distribute bets on both sides to reduce the risk the house takes on.  Another way to think about it: point spreads encourage variation.

Having lived in the Los Angeles area most of my life, I can tell you that point spreads here on the Rams and the Raiders (when we had them) were typically higher than those of other teams.  Why?  Because pre-Internet you could drive from LA to Vegas and place a bet, but you couldn’t do the same from Chicago or any other major US city.  So, to spread the money more evenly, the sports books routinely gave LA opponents more points.  You see the same thing today with the more popular teams, like the Cowboys, in the era of Internet gambling.

I don’t know whether that is going on with FML, but the possibility exists as a way to increase variation in the combinations players select.  For example, as “Fantastic Four” free falls at the box office, maybe the FML braintrust prices it lower than it should to entice players into selecting it and increase variance.  That would throw off the averages presented below, as do films that overperform on certain weeks.

All that said, FB$ pricing comes from the same data set as the previous article, which I’ve been updating manually since then.  Actual box office data comes from BoxOfficeMojo, and forecasts come from ProBoxOffice.com.  Movies were included in the data set only if they had a ProBoxOffice.com forecast for the week they appeared among the FML choices, so if you see gaps in the data, that’s why.  In all, 181 movie/weeks are included in the set, which is available for download at the end of this article.

What were my findings?

First, let’s look at FB$ vs forecasts:

[Figure: ForecastVsFB — FB$ price vs. ProBoxOffice.com weekend forecast]

On the X-axis is FB$ pricing as it appears on Monday of each week.  On the Y-axis is the ProBoxOffice.com weekend forecast, which becomes available every Wednesday.  Each dot is one week of a particular movie, and certain outliers are called out with text showing that week’s Best Performer score.  The inset graph zooms in on movies with less than $20M of forecasted revenue.  All dots are drawn with 50% transparency, so overlapping points show up as darker areas.
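For the curious, a chart along these lines takes only a few lines of matplotlib.  This is just a sketch; the filename and column names are my assumptions, not the data set’s actual schema:

```python
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("fml_movie_weeks.csv")  # hypothetical filename

fig, ax = plt.subplots()
ax.scatter(df["fb_price"], df["forecast"], alpha=0.5)  # 50% transparency
ax.set_xlabel("FB$ price (Monday)")
ax.set_ylabel("ProBoxOffice.com weekend forecast ($)")

# Inset zooming in on films forecast under $20M, like the interior graph
small = df[df["forecast"] < 20_000_000]
inset = ax.inset_axes([0.55, 0.08, 0.4, 0.4])
inset.scatter(small["fb_price"], small["forecast"], alpha=0.5)

plt.show()
```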

Overall, the Best Performer score average across all films is 112.68, but you see a split between smaller films (107.44) and larger ones (135.33), although two outliers (the “Jurassic World” and “Minions” opening weeks) skew that latter number a bit.  But look at how the numbers change when compared to actuals, available on the following Monday:

[Figure: ActualVsFB — FB$ price vs. actual weekend box office]

All three averages drop when using actuals instead of forecasts.  In other words, relative to the forecasts, prices are on average a bit too high for the revenue that actually materializes each weekend.  But look in particular at the larger movies, the top four of which all jumped up when switching to actuals.  This reinforces the notion that high-grossing movies can have unpredictable upside once they reach a certain threshold.
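To make the comparison concrete, here is a sketch of how those averages could be recomputed from the data set, again with assumed column names and using the $20M inset cutoff to separate smaller films from larger ones:

```python
import pandas as pd

df = pd.read_csv("fml_movie_weeks.csv")  # hypothetical filename

small_mask = df["forecast"] < 20_000_000  # the inset's cutoff for "smaller" films
for basis in ("forecast", "actual"):
    score = df[basis] / df["fb_price"] / 1000
    print(f"vs {basis}: overall {score.mean():.2f}, "
          f"small {score[small_mask].mean():.2f}, "
          f"large {score[~small_mask].mean():.2f}")
```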

For those who run their own forecasts, some of this data might be helpful for bounding your estimates, so feel free to download my full data set, and if you have any thoughts, I’d love to hear them.

This week, for the first time, I used the 108.83 number to initially populate the Estimate Comparator on Monday instead of waiting for the ProBoxOffice.com estimates on Wednesday, in the hope that players who create their own forecasts can get some use out of it earlier in the week.  That number may prove quite volatile, but I’ll keep a close watch on it and report back another time.
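For anyone who wants to do the same kind of seeding, it is just the value formula run in reverse.  Only the 108.83 multiplier comes from the data above; everything else here is illustrative:

```python
# Seed a rough Monday revenue estimate from the FB$ price by inverting
# the value formula: revenue ≈ price × average score × 1,000.
AVG_SCORE = 108.83  # the average from this data set, as of this writing

def monday_estimate(fb_price):
    return fb_price * AVG_SCORE * 1000

# Hypothetical example: a film priced at FB$400
print(f"${monday_estimate(400):,.0f}")  # $43,532,000
```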
