I’m just going to come right out and admit that I’ve been pretty lousy the last couple of weeks. In Week 4, I might as well have been the wooden bat manufacturer for The Great “Everest” Whiff of 2015. In Week 5, I had “The Intern” x7 + “The Visit” loaded into my lineup and changed it in the last 30 minutes. And this past week I was completely spooked by the BoxOfficeMojo projection for “Everest” and went all in on “Sicario” (not that it mattered, given that I didn’t mention eventual Best Performer “Maze Runner” at all in last week’s article). So, yeah, if you’ve been looking for a bad advice column, I’m happy to have been at your service.
The “Maze Runner” miss last week really bothered me, though, so I went back and looked at my analysis (including things that didn’t make it into the article) and found that it was foreseeable. I just needed to present more options. Thus the madness you’ll be exposed to this week.
Primer: Simulating Variance
Hate math? Skip this section.
I touched on this in an earlier article, but what I’ve been doing in preparation for writing each week’s article is simulating different variance combinations with a Java program that tries 6,561 of them. Why 6,561? As I discussed earlier this week, an analysis of Summer movies found that 74% of ProBoxOffice.com forecasts land between +10% and -20% of actuals, and that has held close enough to true for Fall as well.
So, imagine using the Estimate Comparator and adjusting “Black Mass” by -20% to see if it changes the Best Performer. Then both adjust “Black Mass” by -20% and “Pan” by +10%. Then just “Pan” by +10% and so on. That would give you a sense for boundaries of “normal” for a particular week.
Three possible states (the estimate, +10%, and -20%) across 15 possible movies brings you to three to the 15th power, or 14,348,907 possible variance combinations. If you tried all of those and scored each of the 205,840 legal screen combinations this week, that’s almost 3 trillion calculations, to which my MacBook Air says no, thank you.
What’s a reasonable shortcut, then? If you only use eight movies instead of all fifteen, that’s three to the 8th power: 6,561. My little Java program runs the same 150,000 legal screen combinations made available in the Estimate Comparator through all 6,561 variance simulations and keeps track of how many times each combination wins. All in about 20 minutes on my laptop.
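For the curious, here is a rough sketch of what that program does. The estimates below are made up, and the scoring step is simplified to “highest simulated gross” rather than FML’s full screen-combination scoring, but the 3-states-per-movie enumeration is the same idea:

```java
public class VarianceSim {
    // Three variance states per estimate: as-is, +10%, and -20% -- the band
    // that covered ~74% of ProBoxOffice.com forecasts in the Summer analysis.
    static final double[] STATES = {1.00, 1.10, 0.80};

    public static void main(String[] args) {
        // Hypothetical weekend estimates (in $M) for 8 candidate movies.
        double[] estimates = {20.0, 18.0, 15.0, 12.0, 9.0, 7.0, 5.0, 4.0};
        int n = estimates.length;
        int total = (int) Math.pow(3, n); // 3^8 = 6,561 variance combinations

        // For each of the 6,561 variance combinations, decide which movie
        // "wins" (here: highest simulated gross, standing in for scoring
        // every legal screen combination) and tally the wins.
        int[] wins = new int[n];
        for (int combo = 0; combo < total; combo++) {
            int code = combo, best = 0;
            double bestGross = -1;
            for (int m = 0; m < n; m++) {
                double gross = estimates[m] * STATES[code % 3];
                code /= 3;
                if (gross > bestGross) { bestGross = gross; best = m; }
            }
            wins[best]++;
        }

        // Loose win percentage per movie, should nothing extraordinary happen.
        for (int m = 0; m < n; m++)
            System.out.printf("movie %d: %5.1f%%%n", m, 100.0 * wins[m] / total);
    }
}
```

Notice how even the front-runner (the $20M movie) loses some simulations: at -20% it can be overtaken by the $18M movie, which is exactly the kind of edge condition the full program surfaces.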
In other words, it gives you a loose win percentage of different combinations should nothing extraordinary happen. But extraordinary things sometimes happen, so that’s just a starting point.
Week 7 Perfect Combo Probabilities
With that math lesson as background: take the ProBoxOffice.com, Deadline (using $17M for the vague “Crimson Peak” wording in that article), and Showbuzzdaily (nod to Sad Robot) forecasts, and that covers 10 films with professional forecasts. The average box office dollars per FML Bux of price for those films is $53,953.99. Apply that average to the remaining, unforecasted films, and the result is what is loaded into the Estimate Comparator right now. Use that to select the top 8 Best Performer candidates, run them through the variance simulation, and you get these top 10 screen combinations:
Warning: Do not read this table as the literal likelihood of these outcomes, but as a way to test some edge conditions. Should “Goosebumps” perform as expected or better, using it twice in combination with “Sicario” generates the most simulated wins (#’s 2, 6, 9, and 10). Similarly, in permutations where “Goosebumps” fails to meet expectations, a lineup with “The Intern” x7 (#’s 1 and 4) or “The Martian” x2 (#’s 5 and 8) is next up to consider.
Put another way, if nothing extraordinary happens, let your feelings about “Goosebumps” guide your choices followed closely by what you think about “The Intern” and “The Martian”.
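Back on the pricing step above: the dollars-per-Bux fill-in for unforecasted films can be sketched as below. The forecast figures and Bux prices here are invented for illustration (only the average-per-Bux mechanic comes from the article), and I’m assuming the average is taken across per-film ratios:

```java
public class BuxEstimate {
    public static void main(String[] args) {
        // Hypothetical films that DO have professional forecasts:
        // weekend gross forecast ($) and FML Bux price.
        double[] gross = {30_500_000, 12_000_000, 8_200_000};
        int[]    price = {562, 230, 150};

        // Average box-office dollars per FML Bux of price across the
        // forecasted films (an average of per-film ratios).
        double sum = 0;
        for (int i = 0; i < gross.length; i++)
            sum += gross[i] / price[i];
        double dollarsPerBux = sum / gross.length;

        // Back into a placeholder estimate for an unforecasted film
        // from its Bux price alone.
        int unforecastedPrice = 45; // hypothetical
        double estimate = dollarsPerBux * unforecastedPrice;
        System.out.printf("$%,.2f per Bux -> $%,.0f placeholder%n",
                dollarsPerBux, estimate);
    }
}
```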
Week 7 Crowd Wisdom
To contrast with what my math says, what is everybody using the Estimate Comparator early in the week thinking? As suggested in the forums, I added logging to it this week so that I could capture the result every time someone used one of the three methods to edit any particular movie estimate. This approach is weighted toward the estimates plugged into the tool earlier in the week, but it can partly reveal what the home-brew forecasters are thinking. Here are the top combinations registered by Estimate Comparator edits as of 5 pm Pacific on Wednesday, 3,913 data points in all:
You can almost completely discount that first one because that was the best scoring lineup given the original estimates used to populate the comparator on Monday evening. People are clearly testing extraordinary outcomes for “Pan”, “Hotel Transylvania 2”, and “Woodlawn” as well.
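The tally behind that table is conceptually simple. A minimal sketch, with a made-up log in place of the real 3,913 data points, might look like:

```java
import java.util.*;

public class EditTally {
    public static void main(String[] args) {
        // Hypothetical log: the best-scoring lineup recorded after each
        // Estimate Comparator edit (the real log had 3,913 data points).
        String[] logged = {
            "Goosebumps x2 + Sicario", "The Intern x7",
            "Goosebumps x2 + Sicario", "The Martian x2",
            "Goosebumps x2 + Sicario", "The Intern x7",
        };

        // Count how often each lineup came out on top across the edits.
        Map<String, Integer> tally = new HashMap<>();
        for (String lineup : logged)
            tally.merge(lineup, 1, Integer::sum);

        // Print a descending "crowd wisdom" leaderboard.
        tally.entrySet().stream()
             .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
             .forEach(e -> System.out.println(e.getValue() + "  " + e.getKey()));
    }
}
```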
What does it all mean?
Above all, what both of these tables show is how widely viable combinations vary with better pricing. The variance simulation shows that, at best, its most frequent winner still only wins about 1 in 10 times. Similarly, the Estimate Comparator logging data shows just how broadly the crowd is thinking about the week.
The choices are yours to make, but one thing is clear: this game is hard.