After a couple of months of playing Fantasy Movie League (FML), my strategy for selecting the best combination of screens each week evolved. Previously, I relied on my nerdy bin packing Monte Carlo simulator; the bin packing part of that worked fine (and lives on), but the Monte Carlo part, not so much. Why is that, and what did I do about it?
Random variation vs specific variation
The way my Monte Carlo simulator worked was to randomize the performance of each movie by +/- 10% and then compute an FML score for every combination derived from the bin packing algorithm. The problem with that approach is that it treats all variations equally when, it turns out, they aren't equal in FML. In particular, variations among the candidates for Best Performer matter far more than variations among the movies that aren't candidates. The $2M bonus awarded to each screen on which the Best Performer appears makes or breaks a selection. In fact, the perfect lineup every week comes from maximizing Best Performer.
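A minimal sketch of that original approach, with hypothetical movies, prices, and grosses of my own invention (and assuming, for illustration, that Best Performer is the movie with the highest gross per FB$ of price):

```python
import random

# Hypothetical weekly slate: price in FB$, projected weekend gross in $.
movies = {
    "Blockbuster": {"price": 500, "gross": 90_000_000},
    "Mid-Tier":    {"price": 250, "gross": 40_000_000},
    "Sleeper":     {"price": 100, "gross": 20_000_000},
}

BONUS = 2_000_000  # Best Performer bonus per screen


def score_lineup(lineup, grosses):
    """Score a lineup (one movie name per screen) for a given set of
    simulated grosses.  Best Performer = highest gross per FB$ here."""
    best = max(grosses, key=lambda m: grosses[m] / movies[m]["price"])
    return sum(grosses[m] for m in lineup) + BONUS * lineup.count(best)


def simulate(lineup, trials=10_000):
    """Monte Carlo: jitter each projected gross by +/-10%, average the score."""
    scores = []
    for _ in range(trials):
        grosses = {m: d["gross"] * random.uniform(0.9, 1.1)
                   for m, d in movies.items()}
        scores.append(score_lineup(lineup, grosses))
    return sum(scores) / len(scores)


print(simulate(["Blockbuster", "Sleeper", "Sleeper", "Sleeper"]))
```

The weakness shows up right in `random.uniform`: every movie gets the same even-handed jitter, even though only the jitter among Best Performer candidates really moves the outcome.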
So what is more relevant is to look at specific variations among the three to six candidates for Best Performer each week. I found myself running my nerdy bin packing Monte Carlo simulator over and over, trying to wedge it into analyzing specific variations, when what I really needed was a new tool. Hence, my nerdy bin packing estimate comparator was born.
How my nerdy bin packing estimate comparator works
Each week, FML presents players with a list of 15 films and associates an FML Bux (FB$) price with each. Players must fill up to eight screens with a combination of those films, whose total cost cannot exceed FB$1000.
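The screen-filling part is what the bin packing piece of my tool handles. A hedged sketch of the brute-force version, using a made-up three-film slate instead of the real 15 so the output stays readable (an unused screen is allowed, so I model it as a zero-cost, zero-gross option):

```python
from itertools import combinations_with_replacement

# Hypothetical slate: (name, price in FB$, projected gross in $).
films = [
    ("Big Opener", 550, 95_000_000),
    ("Holdover",   300, 50_000_000),
    ("Indie",       60, 11_000_000),
]

SCREENS = 8
BUDGET = 1000  # FB$

# Treat an empty screen as a free "film" that earns nothing.
options = films + [("Empty", 0, 0)]

best = None
for lineup in combinations_with_replacement(options, SCREENS):
    cost = sum(price for _, price, _ in lineup)
    if cost > BUDGET:
        continue  # over the FB$1000 cap
    total = sum(g for _, _, g in lineup)
    if best is None or total > best[0]:
        best = (total, lineup)

gross, lineup = best
print(gross, [name for name, _, _ in lineup])
```

With repetition allowed across eight screens, the real 16-option search space is large but still enumerable, which is why exhaustive bin packing remains practical here.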
How do I determine what “top” means? That varies from time to time and I usually post details on the weekly Chatter update thread, but the short answer is it’s based on professional projections from some combination of ProBoxOffice and ShowBuzzDaily.
You can alter the estimates with the sliders on the right-hand side; each slider is centered on the initial estimate explained above and gives you seven steps of change on either side of it. As you move them, the Best Performer analysis tables and the top combinations based on your current estimates update automatically. This way, if you don't believe the ProBoxOffice.com estimates (which are normally quite good, btw), you can see how the variations you place on those estimates influence the overall picture. When I use the tool, I start by varying those estimates between -20% and +10%, based on research I did on the accuracy of ProBoxOffice.com estimates.
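That -20% to +10% sweep looks roughly like this (the movie names and projections are hypothetical, and I'm again assuming Best Performer means highest gross per FB$ of price): scale one movie's estimate and watch where the implied Best Performer flips.

```python
# Hypothetical projections: movie -> (price in FB$, projected gross in $).
projections = {"Favorite": (400, 70_000_000), "Rival": (200, 33_000_000)}


def best_performer(grosses):
    # Working definition for this sketch: highest gross per FB$ of price.
    return max(grosses, key=lambda m: grosses[m] / projections[m][0])


# Sweep the Favorite's estimate from -20% to +10% in 5% increments.
for pct in range(-20, 11, 5):
    grosses = {m: g for m, (_, g) in projections.items()}
    grosses["Favorite"] *= 1 + pct / 100
    print(f"{pct:+d}% -> {best_performer(grosses)}")
```

In this made-up example the Best Performer flips somewhere between -10% and -5%, which is exactly the kind of tipping point the comparator is meant to surface.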
I update the data roughly every Wednesday evening, after the ProBoxOffice.com projections for the next weekend become available, although as a husband, dad, pet owner, and cloud evangelist, life sometimes intervenes.
Frequently Asked Questions
Q: Why doesn’t <some combination> show up as a choice?
A: This actually shouldn't happen anymore. Previous versions of the tool had memory limitations that have since been resolved. If you still see this issue, let me know by posting a comment below or replying to the weekly update thread in The Chatter.
Q: Why do the slider steps have weird or inexact values?
A: When it came to the steps on the sliders, I had a choice to make: should each step be the same fixed amount regardless of how large or small the original estimate is, say $100K, or should the size of each step be proportional to the size of the original estimate? Had I chosen the former, there would be far more steps for movies with larger estimates than for ones with smaller estimates, so I chose the latter: the number of steps is the same for each movie, but the step values are proportional. This is why the step values are sometimes unusual; division doesn't always yield nice round numbers.
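A sketch of the proportional-step idea (the 5% fraction here is my illustrative assumption, not necessarily what the tool actually uses):

```python
def slider_values(estimate, steps=7, fraction=0.05):
    """Return the slider positions for one movie: the center is the
    original estimate, with `steps` proportional steps on each side.
    Every movie gets the same count of steps, but the dollar size of
    a step scales with the estimate."""
    step = estimate * fraction
    return [estimate + i * step for i in range(-steps, steps + 1)]


# A $13.7M estimate yields steps of $685,000 -- not a round number.
print(slider_values(13_700_000)[:3])
```

Dividing an arbitrary estimate into equal fractions is exactly where the "weird or inexact" step values come from.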