## MatchMaker Scheduling Algorithm
By Tom and Cathy Saxton
## Update
In December 2021, MatchMaker adopted a new station balancing algorithm created by Caleb Sykes that achieves perfect balancing and is faster than the original method. In September 2020, MatchMaker improved support for small tournaments. An update to MatchMaker in 2016 balanced station assignments to improve the team experience at the 2016 FRC Stronghold competitions (station balancing details).

## Abstract
The algorithm used by FIRST to generate the qualification match schedule at FIRST Robotics Competition (FRC) events is critical to the success of the regional and championship competitions. This paper discusses the desired properties of the match schedule, and an algorithm that finds near-optimal solutions in a practical time frame.

## Match Format
The match format currently used at FRC competitions is three against three, so the match schedule consists of a list of sequentially numbered matches, each with three "red alliance" teams and three "blue alliance" teams. The outcomes of these qualifying matches are used to determine team rankings, which in turn determine which teams are invited to play in the elimination rounds that decide the winner in the robot performance category.
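The schedule structure described above maps naturally onto a small data type. This is our own illustrative sketch, not MatchMaker's actual representation:

```cpp
#include <array>
#include <vector>

// One qualification match: three "red alliance" teams against three
// "blue alliance" teams, identified here by team index.
struct Match {
    std::array<int, 3> red;
    std::array<int, 3> blue;
};

// The qualification schedule is just the sequentially numbered list of
// matches; match i+1 is played after match i.
using Schedule = std::vector<Match>;
```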
## Algorithm Specifications
There are several criteria desired in the match schedule.
## The Problem
Every match schedule is going to be unfair; a fair, round-robin style tournament format is completely impractical for this type of competition. It's unavoidable that some teams will have a schedule that gives them more weak partners and more strong opponents than other teams in the schedule. This is not only a scheduling-algorithm issue, but a simple matter of not knowing in advance which teams will turn out to be more effective than others. The goal is to implement a scheduling algorithm which produces schedules that maximize the qualities listed above and can be generated on an inexpensive personal computer in a brief amount of time, a few minutes at most.

## The Solution
The algorithm we have implemented to generate match schedules meeting these criteria uses simulated annealing, which is very efficient at finding a good solution to a class of problems where finding the optimal solution is impractical. In the case of a typical regional with 54 teams and eight rounds, there are (54!)^8 possible match schedules (about 8.1 * 10^570), most of which fail most of the desired criteria. Simulated annealing examines a select subset of these possibilities and, if implemented correctly, finds a very good solution in less than a minute. The various qualities of the schedule are implemented by different aspects of the algorithm in a way that largely eliminates interaction and trade-offs between the criteria. Rigid, required criteria are handled first, then the more flexible criteria are maximized without breaking the earlier ones.
The algorithm begins by seeding the match schedule with a simple schedule: the teams are placed into the schedule in random order for each round. Thereafter, teams are only rearranged within rounds. This guarantees that each team appears exactly once in every round, and so plays the required number of matches.
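The seeding step might look like the following sketch. It assumes, for simplicity, a team count divisible by six (real events use surrogate appearances to handle other counts); reading each round in groups of six (three red, three blue) yields the matches:

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Seed the schedule: each round is an independent random permutation of
// all teams, so every team appears exactly once per round.
std::vector<std::vector<int>> seed_schedule(int num_teams, int num_rounds,
                                            std::mt19937& rng) {
    std::vector<std::vector<int>> rounds(num_rounds);
    for (auto& round : rounds) {
        round.resize(num_teams);
        std::iota(round.begin(), round.end(), 0);   // teams 0..num_teams-1
        std::shuffle(round.begin(), round.end(), rng);
    }
    return rounds;
}
```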
When running the algorithm, the quality parameter determines how many candidate schedules are examined. Each candidate is derived from the current schedule by a small permutation, swapping teams within a round, which preserves the guarantees established by the seeding.
The most interesting part of the puzzle is the scoring. Each schedule generated, which is guaranteed to satisfy the first two criteria, is assigned a score based on the amount of partner and opponent duplication. For each team, we count the number of times that team (in a non-surrogate match) sees each other team as a partner, as an opponent, or in either role. Penalty points are added for each duplication, doubled for each additional time a given team is seen in any category. The weighting for duplication in partners is slightly heavier than for opponents, since there are only two partners, but three opponents, per round.

If the newly generated schedule has a better score than the "current" schedule from which it was derived, it becomes the current schedule. If this were the only way in which the current schedule changed, it would be possible to get into a "local valley": stuck with a poor solution which is structured so that it's just a little better than any "nearby" schedule reachable by the permutation procedure. To solve this problem, simulated annealing allows a small chance of replacing the current schedule with a worse schedule, with the probability decreasing exponentially with how much worse the new schedule is. This lets the algorithm jump out of a local valley where no forward progress is being made over many iterations.

In addition to the "current" schedule, there is also a "best" schedule, updated any time we find a schedule better than the previous best. This ensures that we use the best schedule encountered even if the algorithm happens to randomly climb uphill from the best solution while trying to find a deeper valley.
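As a concrete illustration of the two rules just described, the doubling duplication penalty and the exponential acceptance test might look like this sketch (the weight `w` and decay constant `k` are hypothetical tuning values, not MatchMaker's actual constants):

```cpp
#include <cmath>
#include <random>

// Penalty for one team seeing the same other team `meetings` times in a
// category: the first duplication costs w, and each additional duplication
// doubles the cost (w + 2w + 4w + ...). Partners would use a slightly
// larger w than opponents.
double duplication_penalty(int meetings, double w) {
    double total = 0.0, step = w;
    for (int i = 2; i <= meetings; ++i) {
        total += step;
        step *= 2.0;
    }
    return total;
}

// Annealing acceptance: always keep a better (lower) score; keep a worse
// one with probability falling off exponentially in how much worse it is.
bool accept(double current, double candidate, std::mt19937& rng,
            double k = 0.01) {
    if (candidate <= current) return true;
    std::uniform_real_distribution<double> unit(0.0, 1.0);
    return unit(rng) < std::exp(-k * (candidate - current));
}
```

Tuning `k` is the balancing act described below: too weak and the search never escapes a local valley, too strong and it leaves a valley before reaching its bottom.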
The final step is to analyze the best schedule found and balance the alliance station assignments.

## Implementation
Getting all of this to work was an interesting and challenging programming problem. We had to experiment with the scoring algorithm to get the pairing uniformity working correctly: before we changed the weighting of the partner and opponent penalties, opponent duplication was being removed at the expense of partner duplication. The constant in the exponential function that determines when a worse solution gets selected had to be adjusted to be strong enough to get the algorithm out of a local valley, but not so strong that it jumps out of a valley before finding the bottom.

The remaining 90% of the effort was spent on the last 10% of the problem: making it fast enough to be usable. The first implementation, which was both efficient and straightforward, was too slow: using the "best" quality setting required about 20 minutes to complete. By carefully optimizing both the permutation generator and the scoring function, we were able to speed it up by a factor of 10, dropping the time required for the best quality to a couple of minutes. The optimizations made the code run more quickly, but at the expense of code complexity. To ensure the optimizations didn't change the results, we ran literally millions of test cases, comparing the output of the optimized version against the original, more straightforward implementation to confirm they produced identical results.

## Notes
The quality parameter sets the number of candidate schedules examined:
- Fair: 100,000
- Good: 750,000
- Best: 5,000,000
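In code, these quality levels might be represented as a simple lookup. The map name is our own; the counts are the figures above:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Quality setting -> number of candidate schedules examined.
const std::map<std::string, std::int64_t> kQualityCandidates = {
    {"fair",    100000},
    {"good",    750000},
    {"best",   5000000},
};
```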
Since the quality parameter sets a number of candidates, rather than a time to run, it will produce comparable-quality schedules on different computers, just taking longer on slower machines.

We have implemented this algorithm in a flexible, customizable way using platform-neutral C++ code (which could easily be simplified to straight C), and have made it available to FIRST for use by the FIRST community free of charge. For evaluation purposes only, our program is available for download. For licensing, contact us at [info].

© 2007-2022, Idle Loop Software Design, LLC
