As an evolution of my nerdy bin packing estimate comparator, I wanted to automate the variances I had been applying manually with the sliders for particular films, so I wrote a program to do exactly that.
If you vary each of the 15 films at our disposal this week across one of three states (the estimate, plus some percentage, and minus some percentage), that gives you 3 to the 15th power possibilities: 14,348,907, or about 14.3 million. Ideally I wanted to run every possible screen combination through each of those 14.3 million film scenarios, but the compute power of my MacBook Air proved not to be up to that task in a reasonable amount of time.
But if you look at the first 24 weeks of FML data, when you average the ShowBuzzDaily and ProBoxOffice.com forecasts to rank Best Performer candidates, the Best Performer came out of the top 6 candidates 23 times out of 24. Varying 6 films instead of 15 gives only 729 combinations, a number well within what my laptop can handle. I then tried a range of variance percentages and found that +/- 15% did the best job of identifying the Perfect Combination each week, finding it about 32% of the time.
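For concreteness, here is the size gap between varying all 15 films and varying just the top 6. Nothing here is film-specific; it's just the combinatorics, with +/- 15% shown as the three per-film states:

```python
from itertools import product

STATES = 3  # the estimate itself, plus some %, minus some %

def n_scenarios(n_films):
    """Each film independently takes one of 3 states."""
    return STATES ** n_films

assert n_scenarios(15) == 14_348_907  # ~14.3M: too many to pair with every screen combo
assert n_scenarios(6) == 729          # easily brute-forced

# Equivalently, enumerate the 729 variance vectors explicitly:
variance_vectors = list(product((-0.15, 0.0, 0.15), repeat=6))
assert len(variance_vectors) == 729
```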
So each week, my model averages the two professional forecasts, takes the top 6 Best Performer candidates, extrapolates the unforecasted films based on their FB$, and runs a variance simulation that alters each of those 6 by +/- 15%. The percentages I cite in my articles show how often different combinations won among those 729 simulations.
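The weekly loop can be sketched end to end. This is a simplified stand-in, not my actual program: the film names, FB$ prices, and forecasts below are invented, it assumes the standard FML setup of 8 screens and a FB$1000 budget, and it ignores empty screens. It shows only the top 6 being varied, with the brute-force lineup search run once per scenario:

```python
from collections import Counter
from itertools import combinations_with_replacement, product

# Hypothetical week: film -> (FB$ cost, forecast gross in $M).
# All names and numbers are made up for illustration.
films = {
    "Blockbuster": (637, 120.0),
    "Sequel":      (290,  55.0),
    "Comedy":      (120,  21.0),
    "Horror":      ( 60,  12.0),
    "Indie":       ( 25,   4.5),
    "Holdover":    ( 10,   1.8),
}

BUDGET = 1000  # FB$ available per lineup
SCREENS = 8    # screens to fill (empty-screen penalty ignored here)

def best_lineup(estimates):
    """Brute-force the highest-grossing affordable 8-screen lineup."""
    best, best_total = None, -1.0
    for lineup in combinations_with_replacement(films, SCREENS):
        if sum(films[f][0] for f in lineup) > BUDGET:
            continue  # over budget, skip
        total = sum(estimates[f] for f in lineup)
        if total > best_total:
            best, best_total = lineup, total
    return best

# Vary each film by -15%, 0%, or +15%: 3^6 = 729 scenarios,
# and tally which lineup wins each one.
base = {f: films[f][1] for f in films}
wins = Counter()
for deltas in product((-0.15, 0.0, 0.15), repeat=len(films)):
    estimates = {f: base[f] * (1 + d) for f, d in zip(films, deltas)}
    wins[best_lineup(estimates)] += 1

# Win percentages over the 729 simulations, like the ones in my articles.
for lineup, n in wins.most_common(3):
    print(f"{n / 729:5.1%}  {dict(Counter(lineup))}")
```

The win counts always sum to 729, and each lineup's share of those wins is the percentage reported.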