Incorporating strategies from skilled human planners improves automatic planners’ performance.
Every other year, the International Conference on Automated Planning and Scheduling hosts a competition in which computer systems designed by conference participants try to find the best solution to a planning problem, such as scheduling flights or coordinating tasks for teams of autonomous satellites.
On all but the most straightforward problems, however, even the best planning algorithms still aren’t as effective as human beings with a particular aptitude for problem-solving — such as MIT students.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory are trying to improve automated planners by giving them the benefit of human intuition. By encoding the strategies of high-performing human planners in a machine-readable form, they were able to improve the performance of competition-winning planning algorithms by 10 to 16 percent on a challenging set of problems.
The researchers are presenting their results this week at the Association for the Advancement of Artificial Intelligence’s annual conference.
“In the lab, in other investigations, we’ve seen that for things like planning and scheduling and optimization, there’s usually a small set of people who are truly outstanding at it,” says Julie Shah, an assistant professor of aeronautics and astronautics at MIT. “Can we take the insights and the high-level strategies from the few people who are truly excellent at it and allow a machine to make use of that to be better at problem-solving than the vast majority of the population?”
The first author on the conference paper is Joseph Kim, a graduate student in aeronautics and astronautics. He’s joined by Shah and Christopher Banks, an undergraduate at Norfolk State University who was a research intern in Shah’s lab in the summer of 2016.
The human factor
Algorithms entered in the automated-planning competition — called the International Planning Competition, or IPC — are given related problems with different degrees of difficulty. The easiest problems require satisfaction of a few rigid constraints: For instance, given a certain number of airports, a certain number of planes, and a certain number of people at each airport with particular destinations, is it possible to plan planes’ flight routes such that all passengers reach their destinations but no plane ever flies empty?
A more complex class of problems — numerical problems — adds some flexible numerical parameters: Can you find a set of flight plans that meets the constraints of the original problem but also minimizes planes’ flight time and fuel consumption?
Finally, the most complex problems — temporal problems — add temporal constraints to the numerical problems: Can you minimize flight time and fuel consumption while also ensuring that planes arrive and depart at specific times?
For each problem, an algorithm has a half-hour to generate a plan. The quality of the plans is measured according to some “cost function,” such as an equation that combines total flight time and total fuel consumption.
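As a rough illustration (the weights below are invented for the sake of example, not taken from the competition), such a cost function might look like:

cost(plan) = w_time × (total flight time) + w_fuel × (total fuel consumed)

where a lower cost means a better plan, and the weights w_time and w_fuel set the relative importance of the two quantities.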
Shah, Kim, and Banks recruited 36 MIT undergraduate and graduate students and posed each of them the planning problems from two different competitions, one that focused on plane routing and one that focused on satellite positioning. Like the automatic planners, the students had a half-hour to solve each problem.
“By choosing MIT students, we’re basically choosing the world experts in problem solving,” Shah says. “Likely, they’re going to be better at it than most of the population.”
Encoding strategies
Certainly, they were better than the automatic planners. After the students had submitted their solutions, Kim interviewed them about the general strategies they had used to solve the problems. Their answers included things like “Planes should visit each city at most once,” and “For each satellite, find routes in three turns or less.”
The researchers discovered that the large majority of the students’ strategies could be described using a formal language called linear temporal logic, which in turn could be used to add constraints to the problem specifications. Because different strategies could cancel each other out, the researchers tested each student’s strategies separately, using the planning algorithms that had won their respective competitions. The resulting improvements varied from student to student, but only slightly. On the numerical problems, the average improvement was 13 percent for the flight-planning problems and 16 percent for the satellite-positioning problems; on the temporal problems, the improvements were 12 percent and 10 percent, respectively.
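To give a sense of what such an encoding looks like (this is an illustrative formalization, not necessarily the exact formula used in the study), the strategy “planes should visit each city at most once” can be written in linear temporal logic as a rule that must hold over an entire plan:

G( visit(p, c) → X G ¬visit(p, c) )

read as: at every step (G, “globally”), if plane p visits city c, then from the next step onward (X, “next”) it never visits c again. Constraints of this kind are added to the problem specification, and the planner must produce a plan that satisfies them.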
“The plan that the planner came up with looked more like the human-generated plan when it used these high-level strategies from the person,” Shah says. “There is maybe this bridge to taking a user’s high-level strategy and making that useful for the machine, and by making it useful for the machine, maybe it makes it more interpretable to the person.”
In ongoing work, Kim and Shah are using natural-language-processing techniques to make the system fully automatic, so that it will convert users’ free-form descriptions of their high-level strategies into linear temporal logic without human intervention.