Research
Our research is focused on the development of algorithms and tools for intelligent problem solving. We are interested in all kinds of combinatorial search and optimization problems, with a particular focus on the area of automated planning, the problem of finding a course of action that changes the current state of the world to one that satisfies the planning agent's goals and objectives.
Planning problems come in many shapes and guises, and planning techniques can be applied to problems as diverse as solving combinatorial puzzles, scheduling ground traffic at an airport, controlling intelligent manufacturing plants, verifying the security of communication protocols or the correctness of software and hardware designs, and computing the evolutionary distance of genomes.
Finding solutions to such problems is computationally hard because of fast-growing state spaces. Therefore, a major challenge in search and optimization is to find intelligent ways to handle large state spaces.
Publications
Our publications are available online.
Reading Group
We organize a weekly reading group on planning and search.
Tutorials
- SoCS 2021 master class on "Abstraction and Cost Partitioning in Automated Planning" (Silvan Sievers, Gabriele Röger)
- ICAPS 2020 tutorial "Evaluating Planners with Downward Lab" (Jendrik Seipp)
- ICAPS 2020 tutorial "Certified Unsolvability in Classical Planning" (Salomé Eriksson, Gabriele Röger, Malte Helmert)
- AAAI 2019 tutorial "An Overview of the International Planning Competition" (Amanda Coles, Andrew Coles, Florian Pommerening, Álvaro Torralba)
- ICAPS 2015 tutorial "Latest Trends in Abstraction Heuristics for Classical Planning" (Malte Helmert, Jendrik Seipp, Silvan Sievers)
- ICAPS 2015 tutorial "LP-based Heuristics for Cost-optimal Classical Planning" (Florian Pommerening, Gabriele Röger, Malte Helmert)
- AAAI 2015 tutorial "A Beginner's Introduction to Heuristic Search Planning" (Malte Helmert, Gabriele Röger)
- ICAPS 2013 tutorial "Engineering a Heuristic Search Planner" (Malte Helmert, Gabriele Röger)
Current research projects
AI Planning: Fast Downward
Fast Downward is a domain-independent planning system developed within our research group. Fast Downward can deal with arbitrary planning problems specified in the standardized input language PDDL (Planning Domain Definition Language), supporting all propositional language constructs (STRIPS, ADL, axioms). The system is implemented in Python and C++.
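Fast Downward itself is a large, highly optimized system, but the kind of problem it solves can be illustrated with a minimal breadth-first planner over STRIPS-style propositional states. This is a toy sketch only; all facts and action names are invented for the example and have nothing to do with Fast Downward's actual implementation.

```python
from collections import deque

def search(init, goal, actions):
    """Breadth-first search over STRIPS-style states.

    States are frozensets of facts; each action is a tuple
    (name, preconditions, add_effects, delete_effects)."""
    start = frozenset(init)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:          # all goal facts hold
            return plan
        for name, pre, add, dele in actions:
            if pre <= state:       # action is applicable
                succ = (state - dele) | add
                if succ not in seen:
                    seen.add(succ)
                    frontier.append((succ, plan + [name]))
    return None  # no plan exists

# Toy task: move a package from A to B with a truck.
actions = [
    ("drive-A-B", frozenset({"truck-at-A"}), frozenset({"truck-at-B"}), frozenset({"truck-at-A"})),
    ("load-A", frozenset({"truck-at-A", "pkg-at-A"}), frozenset({"pkg-in-truck"}), frozenset({"pkg-at-A"})),
    ("unload-B", frozenset({"truck-at-B", "pkg-in-truck"}), frozenset({"pkg-at-B"}), frozenset({"pkg-in-truck"})),
]
plan = search({"truck-at-A", "pkg-at-A"}, frozenset({"pkg-at-B"}), actions)
# plan == ["load-A", "drive-A-B", "unload-B"]
```

Blind breadth-first search like this drowns in the fast-growing state spaces mentioned above; heuristic guidance of the kind Fast Downward computes is what makes search feasible on realistic tasks.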
Fast Downward won the "classical" track (propositional problems, no optimization) of the Fourth International Planning Competition (IPC 2004) at the 14th International Conference on Automated Planning and Scheduling (ICAPS 2004). The planner LAMA, implemented on top of Fast Downward by Silvia Richter and Matthias Westphal, won the classical track of the Sixth International Planning Competition (IPC 2008) at the 18th International Conference on Automated Planning and Scheduling (ICAPS 2008). At the Seventh International Planning Competition (IPC 2011) at the 21st International Conference on Automated Planning and Scheduling (ICAPS 2011), different variants of Fast Downward won all five awards (winners and runners-up) in the sequential satisficing and sequential optimization tracks of the competition, as well as a runner-up award in the planning and learning competition.
Further information: Fast Downward
Contact: Malte Helmert and Gabriele Röger
Foundations of Trustworthy AI - Integrating Reasoning, Learning and Optimization: TAILOR (funded by the European Union)
Artificial Intelligence (AI) and all the key digital technologies subsumed under the term AI today are an essential part of the answers to many of the daunting challenges we are facing. AI will impact the everyday lives of citizens as well as all business sectors. To maximize the opportunities and minimize the risks, Europe focuses on human-centered Trustworthy AI and is taking important steps towards becoming the worldwide centre for Trustworthy AI. Trustworthiness, however, still requires significant basic research, and it is clear that the only way to achieve it is through the integration of learning, optimization and reasoning, as no single approach is sufficient on its own.
The purpose of TAILOR is to build a strong academic-public-industrial research network with the capacity to provide the scientific basis for Trustworthy AI, leveraging and combining learning, optimization and reasoning to realize AI systems that incorporate the safeguards making them reliable, safe, transparent and respectful of human agency and expectations: mechanisms not only to maximize benefits, but also to minimize harm. The network is based on a number of innovative state-of-the-art mechanisms. A multi-stakeholder strategic research and innovation roadmap coordinates and guides the research in the five basic research programs. Each program forms a virtual research environment in which many of the best AI researchers in Europe address the major scientific challenges identified in the roadmap. A collection of mechanisms supports innovation, commercialization and knowledge transfer to industry. To support network collaboration, TAILOR provides mechanisms such as AI-powered collaboration tools, a PhD program, and training programs. Finally, a connectivity fund supports active dissemination across Europe, for example by allowing the network to grow and by supporting the scientific stepping-up of additional research groups.
Contact: Malte Helmert and Thomas Keller
Relevant publications:
- Augusto B. Corrêa and Giuseppe De Giacomo.
Lifted Planning: Recent Advances in Planning Using First-Order Representations.
In Proceedings of the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024), pp. 8010-8019. 2024.
(PDF)
Lifted planning is usually defined as planning directly over a first-order representation. From the mid-1990s until the late 2010s, lifted planning was sidelined, as most of the state-of-the-art planners first ground the task and then solve it using a propositional representation. Moreover, it was unclear whether lifted planners could scale. But as planning problems become harder, they also become infeasible to ground. Recently, lifted planners came back into play, aiming at problems where grounding is a bottleneck. In this work, we survey recent advances in lifted planning. The main techniques rely either on state-space search or logic satisfiability. For lifted search-based planners, we show the direct connections to other areas of computer science, such as constraint satisfaction problems and databases. For lifted planners based on satisfiability, the advances in modeling are crucial to their scalability. We briefly describe the main planners available in the literature and their techniques.
- Clemens Büchner, Remo Christen, Salomé Eriksson and Thomas Keller.
Hitting Set Heuristics for Overlapping Landmarks in Satisficing Planning.
In Proceedings of the 17th International Symposium on Combinatorial Search (SoCS 2024), pp. 198-202. 2024.
A non-archival, but longer version of this paper has been published at the HSDIP workshop of ICAPS 2024. See the separate entry for that paper. We recommend citing the SoCS paper in scholarly publications.
(PDF) (slides; PDF) (poster; PDF)
Landmarks are a core component of LAMA, a state-of-the-art satisficing planning system based on heuristic search. It uses landmarks to estimate the goal distance by summing up the costs of their cheapest achievers. This procedure ignores synergies between different landmarks: The cost of an action is counted multiple times if it is the cheapest achiever of several landmarks. Common admissible landmark heuristics tackle this problem by underapproximating the cost of a minimum hitting set of the landmark achievers. We suggest to overapproximate it by computing suboptimal hitting sets instead if admissibility is not a requirement. As our heuristics consider synergies between landmarks, we further propose to relax certain restrictions LAMA imposes on the number of landmarks and synergies between them. Our experimental evaluation shows a reasonable increase in the number of landmarks that leads to better guidance when used with our new heuristics.
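The key idea above, overapproximating the minimum-cost hitting set over landmark achievers, can be sketched with the classic greedy approximation: repeatedly pick the achiever that covers the most remaining landmarks per unit of cost. This is an illustrative sketch under made-up data, not the heuristics developed in the paper.

```python
def greedy_hitting_set(landmarks, cost):
    """Greedy (suboptimal) hitting set over landmark achiever sets.

    landmarks: list of sets of achiever actions; cost: dict action -> cost.
    Returns the total cost of the chosen achievers: an inadmissible
    estimate that, unlike summing each landmark's cheapest achiever,
    counts a shared achiever only once."""
    remaining = [set(lm) for lm in landmarks]
    total = 0
    while remaining:
        candidates = set().union(*remaining)
        # action hitting the most remaining landmarks per unit cost
        best = max(candidates,
                   key=lambda a: sum(a in lm for lm in remaining) / cost[a])
        total += cost[best]
        remaining = [lm for lm in remaining if best not in lm]
    return total

cost = {"a": 2, "b": 3, "c": 3}
landmarks = [{"a", "b"}, {"a", "c"}, {"b", "c"}]
# summing cheapest achievers counts action "a" twice: 2 + 2 + 3 = 7;
# the greedy hitting set picks "a" once, then one of "b"/"c": 2 + 3 = 5
h = greedy_hitting_set(landmarks, cost)  # 5
```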
- Augusto B. Corrêa, Giuseppe De Giacomo, Malte Helmert and Sasha Rubin.
Planning with Object Creation.
In Proceedings of the 34th International Conference on Automated Planning and Scheduling (ICAPS 2024), pp. 104-113. 2024.
An earlier work that introduces object creation to classical planning formalisms is the paper "Introducing Dynamic Object Creation to PDDL Planning" by Edelkamp, Lluch-Lafuente and Moraru in 2019. That work was submitted but not accepted to the ICAPS 2019 IPC workshop, and so we cannot provide a citable reference, but the work can be found via web search for the title or by searching on openreview.net. Our development of the topic is independent of the 2019 paper.
(PDF) (slides; PDF)
Classical planning problems are defined using some specification language, such as PDDL. The domain expert defines action schemas, objects, the initial state, and the goal. One key aspect of PDDL is that the set of objects cannot be modified during plan execution. While this is fine in many domains, sometimes it makes modeling more complicated. This may impact the performance of planners, and it requires the domain expert to bound the number of required objects beforehand, which can be a challenge. We introduce an extension to the classical planning formalism, where action effects can create and remove objects. This problem is semi-decidable, but it becomes decidable if we can bound the number of objects in any given state, even though the state space is still infinite. On the practical side, we extend the Powerlifted planning system to support this PDDL extension. Our results show that this extension improves the performance of Powerlifted while supporting more natural PDDL models.
- Simon Dold and Malte Helmert.
Higher-Dimensional Potential Heuristics: Lower Bound Criterion and Connection to Correlation Complexity.
In Proceedings of the 34th International Conference on Automated Planning and Scheduling (ICAPS 2024), pp. 151-161. 2024.
(PDF) (slides; PDF) (poster; PDF)
Correlation complexity is a measure of a planning task indicating how hard it is. The introducing work provides sufficient criteria to detect a correlation complexity of 2 on a planning task. It also introduced an example of a planning task with correlation complexity 3. In our work, we introduce a criterion to detect an arbitrary correlation complexity and extend the mentioned example to show with the new criterion that planning tasks with arbitrary correlation complexity exist.
- Claudia Grundke, Gabriele Röger and Malte Helmert.
Formal Representations of Classical Planning Domains.
In Proceedings of the 34th International Conference on Automated Planning and Scheduling (ICAPS 2024), pp. 239-248. 2024.
(PDF) (slides; PDF) (poster; PDF)
Planning domains are an important notion, e.g. when it comes to restricting the input for generalized planning or learning approaches. However, domains as specified in PDDL cannot fully capture the intuitive understanding of a planning domain. We close this semantic gap and propose using PDDL axioms to characterize the (typically infinite) set of legal tasks of a domain. A minor extension makes it possible to express all properties that can be determined in polynomial time. We demonstrate the suitability of the approach on established domains from the International Planning Competition.
- Florian Pommerening, Clemens Büchner and Thomas Keller.
Transition Landmarks from Abstraction Cuts.
In Proceedings of the 34th International Conference on Automated Planning and Scheduling (ICAPS 2024), pp. 445-454. 2024.
(PDF) (slides; PDF) (poster; PDF) (code, scripts and data)
We introduce transition-counting constraints as a principled tool to formalize constraints that must hold in every solution of a transition system. We then show how to obtain transition landmark constraints from abstraction cuts. Transition landmarks dominate operator landmarks in theory but require solving a linear program that is prohibitively large in practice. We compare different constraints that project away transition-counting variables and then further relax the constraint. For one important special case, we provide a lossless projection. We finally discuss efficient data structures to derive cuts from abstractions and store them in a way that avoids repeated computation in every state. We compare the resulting heuristics both theoretically and on benchmarks from the international planning competition.
- Clemens Büchner, Patrick Ferber, Jendrik Seipp and Malte Helmert.
Abstraction Heuristics for Factored Tasks.
In Proceedings of the 34th International Conference on Automated Planning and Scheduling (ICAPS 2024), pp. 40-49. 2024.
(PDF) (slides; PDF) (poster; PDF) (code, scripts and data)
One of the strongest approaches for optimal classical planning is A* search with heuristics based on abstractions of the planning task. Abstraction heuristics are well studied in planning formalisms without conditional effects such as SAS+. However, conditional effects are crucial to model many planning tasks compactly. In this paper, we focus on factored tasks which allow a specific form of conditional effect, where effects on variable x can only depend on the value of x. We generalize projections, domain abstractions, Cartesian abstractions and the counterexample-guided abstraction refinement method to this formalism. While merge-and-shrink already covers factored tasks in theory, we provide an implementation that does so in practice. In our experiments, we compare these abstraction-based heuristics to other heuristics supporting conditional effects, as well as symbolic search. On our new benchmark set of factored tasks, pattern database heuristics solve the most problems, followed by symbolic approaches on par with domain abstractions. The more general Cartesian and merge-and-shrink abstractions fall behind.
- Simon Dold.
Planning Domain Modelling Competition.
In Proceedings of the ICAPS-2024 Workshop on the International Planning Competition (WIPC 2024). 2024.
(PDF) (slides; PDF)
The international planning competition (IPC) is a recurring event that compares the performance of planners and awards the best ones. The evaluation is based on a set of benchmark problems that describe interesting planning problems in PDDL (the Planning Domain Definition Language). There is a general interest in high-quality benchmarks, not only for the IPC but for automated planning research in general. We explore the possibility of a planning domain modeling competition that produces them. In this work, we discuss the desirable properties of benchmarks and how they could be evaluated.
- Clemens Büchner, Remo Christen, Salomé Eriksson and Thomas Keller.
Hitting Set Heuristics for Overlapping Landmarks in Satisficing Planning.
In Proceedings of the ICAPS 2024 Workshop on Heuristics and Search for Domain-independent Planning (HSDIP 2024). 2024.
An archival, but shorter version of this paper has been published at SoCS 2024. See the separate entry for that paper. We recommend citing the SoCS paper in scholarly publications.
(PDF) (slides; PDF) (code, scripts and data)
Landmarks are a core component of LAMA, a state-of-the-art satisficing planning system based on heuristic search. It uses landmarks to estimate the goal distance by summing up the costs of their cheapest achievers. This procedure ignores synergies between different landmarks: The cost of an action is counted multiple times if it is the cheapest achiever of several landmarks. Common admissible landmark heuristics tackle this problem by underapproximating the cost of a minimum hitting set of the landmark achievers. We suggest to overapproximate it by computing suboptimal hitting sets instead if admissibility is not a requirement. As our heuristics consider synergies between landmarks, we further propose to relax certain restrictions LAMA imposes on the number of landmarks and synergies between them. Our experimental evaluation shows a reasonable increase in the number of landmarks that leads to better guidance when used with our new heuristics.
- Augusto B. Corrêa and Jendrik Seipp.
Consolidating LAMA with Best-First Width Search.
In Proceedings of the ICAPS 2024 Workshop on Heuristics and Search for Domain-independent Planning (HSDIP 2024). 2024.
(PDF) (slides; PDF)
One key decision for heuristic search algorithms is how to balance exploration and exploitation. In classical planning, novelty search has come out as the most successful approach in this respect. The idea is to favor states that contain previously unseen facts when searching for a plan. This is done by maintaining a record of the tuples of facts observed in previous states. Then the novelty of a state is the size of the smallest previously unseen tuple. The most successful version of novelty search is best-first width search (BFWS), which combines novelty measures with heuristic estimates. An orthogonal approach to balance exploration-exploitation is to use several open-lists. These open-lists are ordered using different heuristic estimates, which diversify the information used in the search. The search algorithm then alternates between these open-lists, trying to exploit these different estimates. This is the approach used by LAMA, a classical planner that, a decade after its release, is still considered state-of-the-art in agile planning. In this paper, we study how to combine LAMA and BFWS. We show that simply adding the strongest open-list used in BFWS to LAMA harms performance. However, we show that combining only parts of each planner leads to a new state-of-the-art agile planner.
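The novelty measure described above, the size of the smallest previously unseen tuple of facts, can be sketched in a few lines. This is a simplified width-2 version for illustration, not BFWS or the planner from the paper.

```python
from itertools import combinations

def novelty(state, seen_tuples, max_width=2):
    """Novelty of a state: size of the smallest fact tuple not observed
    in any previously generated state (max_width + 1 if no tuple up to
    max_width is new). Updates seen_tuples in place."""
    value = max_width + 1
    for w in range(1, max_width + 1):
        for t in combinations(sorted(state), w):
            if t not in seen_tuples:
                seen_tuples.add(t)
                value = min(value, w)
    return value

seen = set()
n1 = novelty({"p", "q"}, seen)  # 1: every fact is new
n2 = novelty({"p", "q"}, seen)  # 3: nothing up to width 2 is new
n3 = novelty({"p", "r"}, seen)  # 1: fact "r" is unseen
```

In search, states with low novelty values are preferred, which favors exploration of genuinely new parts of the state space.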
- Simon Dold and Malte Helmert.
Novelty vs. Potential Heuristics: A Comparison of Hardness Measures for Satisficing Planning.
In Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI 2024), pp. 20692-20699. 2024.
(PDF) (slides; PDF) (poster; PDF)
Classical planning considers a given task and searches for a plan to solve it. Some tasks are harder to solve than others. We can measure the “hardness” of a task with the novelty width and the correlation complexity. In this work, we compare these measures. Additionally, we introduce the river measure, a new measure that is based on potential heuristics and therefore similar to the correlation complexity but also comparable to the novelty width. We show that the river measure is upper bounded by the correlation complexity and by the novelty width +1. Furthermore, we show that we can convert a planning task with a polynomial blowup of the task size to ensure that a heuristic of dimension 2 exists that gives rise to backtrack-free search.
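A potential heuristic of the kind discussed above assigns numeric weights to features of a state and sums the weights of the features that hold; a dimension-2 heuristic allows features over pairs of facts. The following sketch uses arbitrary, hand-picked weights purely for illustration; real potential heuristics derive weights from the task, e.g. via linear programming.

```python
from itertools import combinations

def potential_heuristic(state, weights, dimension=2):
    """Sum the weights of all fact tuples of size <= dimension that are
    true in the state. Unlisted features implicitly have weight 0."""
    h = 0.0
    for d in range(1, dimension + 1):
        for feature in combinations(sorted(state), d):
            h += weights.get(feature, 0.0)
    return h

# illustrative weights: one singleton feature, one pair feature
weights = {("at-A",): 2.0, ("door-open", "holding"): -1.0}
h = potential_heuristic({"at-A", "holding", "door-open"}, weights)  # 2.0 - 1.0 = 1.0
```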
- Songtuan Lin, Conny Olz, Malte Helmert and Pascal Bercher.
On the Computational Complexity of Plan Verification, (Bounded) Plan-Optimality Verification, and Bounded Plan Existence.
In Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI 2024), pp. 20203-20211. 2024.
(PDF)
In this paper we study the computational complexity of several reasoning tasks centered around the bounded plan existence problem. We do this for standard classical planning and hierarchical task network (HTN) planning and each for a grounded and a lifted representation. Whereas bounded plan existence complexity is known for classical planning, it has not yet been studied for HTN planning. For plan verification, results were available for both formalisms except for the lifted HTN planning. We will present lower and upper bounds of the complexity of plan verification in lifted HTN planning and provide some new insights into its grounded counterpart, in which we show that verification is not just NP-complete in the general case, but already for a severely restricted special case. Finally, we show the complexity concerning verifying the optimality of a given plan and discuss its connection to the bounded plan existence problem.
- Thorsten Klößner, Álvaro Torralba, Marcel Steinmetz and Silvan Sievers.
A Theory of Merge-and-Shrink for Stochastic Shortest Path Problems.
In Proceedings of the 33rd International Conference on Automated Planning and Scheduling (ICAPS 2023), pp. 203-211. 2023.
(PDF) (technical report; PDF) (slides; PDF)
The merge-and-shrink framework is a powerful tool to construct state space abstractions based on factored representations. One of its core applications in classical planning is the construction of admissible abstraction heuristics. In this paper, we develop a compositional theory of merge-and-shrink in the context of probabilistic planning, focusing on stochastic shortest path problems (SSPs). As the basis for this development, we contribute a novel factored state space model for SSPs. We show how general transformations, including abstractions, can be formulated on this model to derive admissible and/or perfect heuristics. To formalize the merge-and-shrink framework for SSPs, we transfer the fundamental merge-and-shrink transformations from the classical setting: shrinking, merging, and label reduction. We analyze the formal properties of these transformations in detail and show how the conditions under which shrinking and label reduction lead to perfect heuristics can be extended to the SSP setting.
- Daniel Gnad, Silvan Sievers and Álvaro Torralba.
Efficient Evaluation of Large Abstractions for Decoupled Search: Merge-and-Shrink and Symbolic Pattern Databases.
In Proceedings of the 33rd International Conference on Automated Planning and Scheduling (ICAPS 2023), pp. 138-147. 2023.
(PDF) (Supplementary Material; PDF) (slides; PDF) (poster; PDF) (code, scripts and data)
Abstraction heuristics are a state-of-the-art technique to solve classical planning problems optimally. A common approach is to precompute many small abstractions and combine them admissibly using cost partitioning. Recent work has shown that this approach does not work out well when using such heuristics for decoupled state space search, where search nodes represent potentially large sets of states. This is due to the fact that admissibly combining the estimates of several heuristics without sacrificing accuracy is NP-hard for decoupled states. In this work we propose to use a single large abstraction instead. We focus on merge-and-shrink and symbolic pattern database heuristics, which are designed to produce such abstractions. For these heuristics, we prove that the evaluation of decoupled states is NP-hard in general, but we also identify conditions under which it is polynomial. We introduce algorithms for both the general and the polynomial case. Our experimental evaluation shows that single large abstraction heuristics lead to strong performance when the heuristic evaluation is polynomial.
- Daniel Gnad, Malte Helmert, Peter Jonsson and Alexander Shleyfman.
Planning over Integers: Compilations and Undecidability.
In Proceedings of the 33rd International Conference on Automated Planning and Scheduling (ICAPS 2023), pp. 148-152. 2023.
(PDF)
Restricted Tasks (RT) are a special case of numeric planning characterized by numeric conditions that involve one numeric variable per formula and numeric effects that allow only the addition of constants. Despite this, RTs form an expressive class whose planning problem is undecidable. The restricted nature of RTs often makes problem modeling awkward and unnecessarily complicated. We show that this can be alleviated by compiling mathematical operations that are not natively supported into RTs using macro-like action sequences. With that, we can encode many features found in general numeric planning such as constant multiplication, addition of linear formulas, and integer division and residue. We demonstrate how our compilations can be used to capture challenging mathematical problems such as the (in)famous Collatz conjecture. Our approach additionally gives a simple undecidability proof for RTs, and the proof shows that the number of variables needed to construct an undecidable class of RTs is surprisingly low: two numeric and one propositional variable.
- Raphael Kreft, Clemens Büchner, Silvan Sievers and Malte Helmert.
Computing Domain Abstractions for Optimal Classical Planning with Counterexample-Guided Abstraction Refinement.
In Proceedings of the 33rd International Conference on Automated Planning and Scheduling (ICAPS 2023), pp. 221-226. 2023.
(PDF) (slides; PDF) (poster; PDF) (code)
Abstraction heuristics are the state of the art in optimal classical planning as heuristic search. A popular method for computing abstractions is the counterexample-guided abstraction refinement (CEGAR) principle, which has successfully been used for projections, which are the abstractions underlying pattern databases, and Cartesian abstractions. While projections are simple and fast to compute, Cartesian abstractions subsume projections and hence allow more fine-grained abstractions, however at the expense of efficiency. Domain abstractions are a third class of abstractions between projections and Cartesian abstractions in terms of generality. Yet, to the best of our knowledge, they have only briefly been considered in the planning literature and have not been used for computing heuristics yet. We aim to close this gap and compute domain abstractions by using the CEGAR principle. Our empirical results show that domain abstractions compare favorably against projections and Cartesian abstractions.
- Esther Mugdan, Remo Christen and Salomé Eriksson.
Optimality Certificates for Classical Planning.
In Proceedings of the 33rd International Conference on Automated Planning and Scheduling (ICAPS 2023), pp. 286-294. 2023.
(PDF) (slides; PDF) (code)
Algorithms are usually shown to be correct on paper, but bugs in their implementations can still lead to incorrect results. In the case of classical planning, it is fortunately straightforward to check whether a computed plan is correct. For optimal planning, however, plans are additionally required to have minimal cost, which is significantly more difficult to verify. While some domain-specific approaches exist, we lack a general tool to verify optimality for arbitrary problems. We bridge this gap and introduce two approaches based on the principle of certifying algorithms, which provide a computer-verifiable certificate of correctness alongside their answer. We show that both approaches are sound and complete, analyze whether they can be generated and verified efficiently, and show how to apply them to concrete planning algorithms. The experimental evaluation shows that verifying optimality comes with a cost, but is still practically feasible. Furthermore, it confirms that the tested planner configurations provide optimal plans on the given instances, as all certificates were verified successfully.
- Clemens Büchner, Thomas Keller, Salomé Eriksson and Malte Helmert.
Landmark Progression in Heuristic Search.
In Proceedings of the 33rd International Conference on Automated Planning and Scheduling (ICAPS 2023), pp. 70-79. 2023.
(PDF) (slides; PDF) (poster; PDF) (code)
The computation of high-quality landmarks and orderings for heuristic state-space search is often prohibitively expensive to be performed in every generated state. Computing information only for the initial state and progressing it from every state to its successors is a successful alternative, exploited for example in classical planning by the LAMA planner. We propose a general framework for using landmarks in any kind of best-first search. Its core component, the progression function, uses orderings and search history to determine which landmarks must still be achieved. We show that the progression function that is used in LAMA infers invalid information in the presence of reasonable orderings. We define a sound progression function that allows to exploit reasonable orderings in cost-optimal planning and show empirically that our new progression function is beneficial both in satisficing and optimal planning.
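The role of a progression function can be illustrated with a deliberately simplified sketch: landmarks are single facts, and a greedy-necessary ordering (a, b) is read as "a must hold immediately before b is first achieved". This is an invented toy approximation, not the sound progression function defined in the paper or LAMA's exact rules.

```python
def progress(unachieved, state, gn_orderings):
    """Progress the set of unachieved landmarks to a successor state.

    A landmark is dropped once it holds in the state; a greedy-necessary
    predecessor is required again while its successor landmark is still
    unachieved but the predecessor no longer holds."""
    remaining = {lm for lm in unachieved if lm not in state}
    for a, b in gn_orderings:
        if b in remaining and a not in state:
            remaining.add(a)  # predecessor must be re-achieved first
    return remaining

orderings = {("have-key", "door-open")}
# picking up the key achieves that landmark
s1 = progress({"have-key", "door-open"}, {"at-door", "have-key"}, orderings)
# dropping the key makes it required again, since the door is still closed
s2 = progress(s1, {"at-door"}, orderings)
```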
- Augusto B. Corrêa, Markus Hecher, Malte Helmert, Davide Mario Longo, Florian Pommerening and Stefan Woltran.
Grounding Planning Tasks Using Tree Decompositions and Iterated Solving.
In Proceedings of the 33rd International Conference on Automated Planning and Scheduling (ICAPS 2023), pp. 100-108. 2023.
(PDF) (code) (slides; PDF) (poster; PDF)
Classical planning tasks are commonly described in a first-order language. However, most classical planners translate tasks by grounding them and then rewriting them into a propositional language. In recent years, the grounding step has become a larger bottleneck. In this work, we study how to improve it. We build on top of the most common grounder for planning tasks which uses Datalog to find all reachable atoms and actions. Inspired by recent progress in lifted planning, database theory, and algorithmics, we develop a new method to ground these Datalog programs. Our algorithm can ground more instances than the baseline, and most tasks it cannot ground are out of reach from any ground planner.
- Augusto B. Corrêa, Clemens Büchner and Remo Christen.
Zero-Knowledge Proofs for Classical Planning Problems.
In Proceedings of the 37th AAAI Conference on Artificial Intelligence (AAAI 2023), pp. 11955-11962. 2023.
(PDF) (slides; PDF) (technical report; PDF)
In classical planning, the aim is to find a sequence of deterministic actions leading from the initial to a goal state. In this work, we consider the scenario where a party who knows the solution to a planning task, called the prover, wants to convince a second party, the verifier, that it has the solution without revealing any information about the solution itself. This is relevant in domains where privacy is important, for example when plans contain sensitive information or when the solution should not be revealed upfront. We achieve this by introducing a zero-knowledge protocol for plan existence. By restricting ourselves to tasks with polynomially-bounded plan length, we are able to construct a protocol that can be run efficiently by both prover and verifier. The resulting protocol does not rely on any reduction, has a constant number of rounds, and runs in time polynomial in the size of the task.
- Mohammad Abdulaziz, Florian Pommerening and Augusto B. Corrêa.
Mechanically Proving Guarantees of Generalized Heuristics: First Results and Ongoing Work.
In Proceedings of the Sixth Workshop on Generalization in Planning (GenPlan 2022). 2022.
Note: This work was also presented at HSDIP 2022. Due to different page limits, the HSDIP version has some additional details.
(slides; PDF)
The goal of generalized planning is to find a solution that works for all tasks of a specific planning domain. Ideally, this solution is also efficient (i.e., polynomial) in all tasks. One possible approach is to learn such a solution from training examples and then prove that it generalizes to any given task. However, such proofs are usually pen-and-paper proofs written by a human. In our paper, we aim to automate these proofs so that we can use a theorem prover to show that a solution generalizes to any task. Furthermore, we want to prove that this generalization works while still preserving efficiency. Our focus is on generalized potential heuristics encoding tiered measures of progress, which can be proven to lead to a plan in a polynomial number of steps in all tasks of a domain. We show our ongoing work in this direction using the interactive theorem prover Isabelle/HOL. We illustrate the key aspects of our implementation using the Miconic domain and then discuss possible obstacles and challenges to fully automating this pipeline.
-
Patrick Ferber, Liat Cohen, Jendrik Seipp and Thomas Keller.
Learning and Exploiting Progress States in Greedy Best-First Search.
In Proceedings of the 31st International Joint Conference on Artificial Intelligence (IJCAI 2022), pp. 4740-4746. 2022.
(PDF) (slides; PDF) (poster; PDF) (code and data)
Previous work introduced the concept of progress states. Once greedy best-first search has expanded a progress state, it only expands states with lower heuristic values. Current methods identify progress states only for a single task and only after a solution for that task is found. We introduce a novel approach which learns, for a classical planning domain, a description logic formula characterizing all its progress states. This formula can then be used during future searches. We present a first use case that exploits the learned formulas: during search, we break ties in favor of progress states. This often significantly reduces the search effort.
-
Daniel Heller, Patrick Ferber, Julian Bitterwolf, Matthias Hein and Jörg Hoffmann.
Neural Network Heuristic Functions: Taking Confidence into Account.
In Proceedings of the 15th International Symposium on Combinatorial Search (SoCS 2022), pp. 223-228. 2022.
(PDF) (slides; PDF) (code)
Neural networks (NN) are increasingly investigated in AI Planning, and are used successfully to learn heuristic functions. NNs commonly not only predict a value, but also output a confidence in this prediction. From the perspective of heuristic search with NN heuristics, it is a natural idea to take this into account, e.g. falling back to a standard heuristic where confidence is low. We contribute an empirical study of this idea. We design search methods which prune nodes, or switch between search queues, based on the confidence of NNs. We furthermore explore the possibility of out-of-distribution (OOD) training, which tries to reduce the overconfidence of NNs on inputs different to the training distribution. In experiments on IPC benchmarks, we find that our search methods improve coverage over standard methods, and that OOD training has the desired effect in terms of prediction accuracy and confidence, though its impact on search seems marginal.
-
Silvan Sievers, Daniel Gnad and Álvaro Torralba.
Additive Pattern Databases for Decoupled Search.
In Proceedings of the 15th International Symposium on Combinatorial Search (SoCS 2022), pp. 180-189. 2022.
Note: SoCS 2022 Best Paper Award.
(PDF) (Additional Material; PDF) (slides; PDF) (code, scripts and data)
Abstraction heuristics are the state of the art in optimal classical planning as heuristic search. Despite their success for explicit-state search, though, abstraction heuristics are not available for decoupled state-space search, an orthogonal reduction technique that can lead to exponential savings by decomposing planning tasks. In this paper, we show how to compute pattern database (PDB) heuristics for decoupled states. The main challenge lies in how to additively employ multiple patterns, which is crucial for strong search guidance of the heuristics. We show that in the general case, for arbitrary collections of PDBs, computing the heuristic for a decoupled state is exponential in the number of leaf components of decoupled search. We derive several variants of decoupled PDB heuristics that allow us to additively combine PDBs while avoiding this blow-up, and evaluate them empirically.
-
Lucas Galery Käser, Clemens Büchner, Augusto B. Corrêa, Florian Pommerening and Gabriele Röger.
Machetli: Simplifying Input Files for Debugging.
In System Demonstrations at the 32nd International Conference on Automated Planning and Scheduling (ICAPS 2022). 2022.
(PDF) (poster; PDF) (video; MP4) (interactive demo)
Debugging can be a painful task, especially when bugs only occur for large input files. We present Machetli, a tool to help with debugging in such situations. It takes a large input file and cuts away parts of it, while still provoking the bug. The resulting file is much smaller than the original, making the bug easier to find and fix. In our experience, Machetli was able to reduce planning tasks with thousands of actions to trivial tasks that could even be solved by hand. Machetli is an open-source project and it can be extended to other use cases such as debugging SAT solvers or LaTeX compilation bugs.
-
Mohammad Abdulaziz, Florian Pommerening and Augusto B. Corrêa.
Mechanically Proving Guarantees of Generalized Heuristics: First Results and Ongoing Work.
In Proceedings of the ICAPS 2022 Workshop on Heuristics and Search for Domain-independent Planning (HSDIP 2022). 2022.
Note: This work was also presented at GenPlan 2022. Due to different page limits, the HSDIP version has some additional details.
(PDF)
The goal of generalized planning is to find a solution that works for all tasks of a specific planning domain. Ideally, this solution is also efficient (i.e., polynomial) in all tasks. One possible approach is to learn such a solution from training examples and then prove that it generalizes for any given task. However, such proofs are usually pen-and-paper proofs written by a human. In our paper, we aim at automating these proofs so we can use a theorem prover to show that a solution generalizes for any task. Furthermore, we want to prove that this generalization works while still preserving efficiency. Our focus is on generalized potential heuristics encoding tiered measures of progress, which can be proven to lead to a goal in a polynomial number of steps in all tasks of a domain. We show our ongoing work in this direction using the interactive theorem prover Isabelle/HOL. We illustrate the key aspects of our implementation using the Miconic domain and then discuss possible obstacles and challenges to fully automating this pipeline.
-
Clemens Büchner, Patrick Ferber, Jendrik Seipp and Malte Helmert.
A Comparison of Abstraction Heuristics for Rubik's Cube.
In Proceedings of the ICAPS 2022 Workshop on Heuristics and Search for Domain-independent Planning (HSDIP 2022). 2022.
(PDF) (slides; PDF) (recording; MP4) (code)
Since its invention in 1974, the Rubik’s Cube puzzle has fascinated people of all ages. Its rules are simple: the player gets a scrambled cube and rotates the six faces until each face contains only stickers of one color. Nevertheless, finding a short sequence of rotations to solve the cube is hard. We present the first model of Rubik’s Cube for general problem solvers. To obtain a concise model, we require conditional effects. Furthermore, we extend counterexample-guided Cartesian abstraction refinement (CEGAR) to support factored effect tasks, a class of planning tasks with a specific kind of conditional effects which includes Rubik’s Cube. Finally, we evaluate how newer types of abstraction heuristics compare against pattern database (PDB) heuristics, the state of the art for solving Rubik’s Cube. We find that PDBs still outperform the more general Cartesian and merge-and-shrink abstractions. However, in contrast to PDBs, Cartesian abstractions yield perfect heuristics up to a certain problem difficulty. These findings raise interesting questions for future research.
-
Thorsten Klößner, Florian Pommerening, Thomas Keller and Gabriele Röger.
Cost Partitioning Heuristics for Stochastic Shortest Path Problems.
In Proceedings of the 32nd International Conference on Automated Planning and Scheduling (ICAPS 2022), pp. 193-202. 2022.
(PDF) (technical report; PDF)
In classical planning, cost partitioning is a powerful method which allows combining multiple admissible heuristics while retaining an admissible bound. In this paper, we extend the theory of cost partitioning to probabilistic planning by generalizing from deterministic transition systems to stochastic shortest path problems (SSPs). We show that fundamental results related to cost partitioning still hold in our extended theory. We also investigate how to optimally partition costs for a large class of abstraction heuristics for SSPs. Lastly, we analyze occupation measure heuristics for SSPs as well as the theory of approximate linear programming for reward-oriented Markov decision processes. All of these fit our framework and can be seen as cost-partitioned heuristics.
-
Patrick Ferber, Florian Geißer, Felipe Trevizan, Malte Helmert and Jörg Hoffmann.
Neural Network Heuristic Functions for Classical Planning: Bootstrapping and Comparison to Other Methods.
In Proceedings of the 32nd International Conference on Automated Planning and Scheduling (ICAPS 2022), pp. 583-587. 2022.
(PDF) (slides; PDF) (poster; PDF) (code and data)
How can we train neural network (NN) heuristic functions for classical planning, using only states as the NN input? Prior work addressed this question by (a) per-instance imitation learning and/or (b) per-domain learning. The former limits the approach to instances small enough for training data generation, the latter to domains where the necessary knowledge generalizes across instances. Here we explore three methods for (a) that make training data generation scalable through bootstrapping and approximate value iteration. In particular, we introduce a new bootstrapping variant that estimates search effort instead of goal distance, which as we show converges to the perfect heuristic under idealized circumstances. We empirically compare these methods to (a) and (b), aligning three different NN heuristic function learning architectures for cross-comparison in an experiment of unprecedented breadth in this context. Key lessons are that our methods and imitation learning are highly complementary; that per-instance learning often yields stronger heuristics than per-domain learning; and that the LAMA planner is still dominant, but our methods outperform it in one benchmark domain.
-
Marcel Steinmetz, Daniel Fišer, Hasan Ferit Enişer, Patrick Ferber, Timo Gros, Philippe Heim, Daniel Höller, Xandra Schuler, Valentin Wüstholz, Maria Christakis and Joerg Hoffmann.
Debugging a Policy: Automatic Action-Policy Testing in AI Planning.
In Proceedings of the 32nd International Conference on Automated Planning and Scheduling (ICAPS 2022), pp. 353-361. 2022.
(PDF)
Testing is a promising way to gain trust in neural action policies π. Previous work on policy testing in sequential decision making targeted environment behavior leading to failure conditions. But if the failure is unavoidable given that behavior, then π is not actually to blame. For a situation to qualify as a “bug” in π, there must be an alternative policy π′ that does better. We introduce a generic policy testing framework based on that intuition. This raises the bug confirmation problem: deciding whether or not a state is a bug. We analyze the use of optimistic and pessimistic bounds for the design of test oracles approximating that problem. We contribute an implementation of our framework in classical planning, experimenting with several test oracles and with random-walk methods generating test states biased to poor policy performance and/or state novelty. We evaluate these techniques on policies π learned with ASNets. We find that they are able to effectively identify bugs in these π, and that our random-walk biases improve over uninformed baselines.
-
Remo Christen, Salomé Eriksson, Florian Pommerening and Malte Helmert.
Detecting Unsolvability Based on Separating Functions.
In Proceedings of the 32nd International Conference on Automated Planning and Scheduling (ICAPS 2022), pp. 44-52. 2022.
(PDF) (slides; PDF) (poster; PDF) (code, scripts, and data)
While the unsolvability IPC sparked a multitude of planners proficient in detecting unsolvable planning tasks, there are gaps where concise unsolvability arguments are known but no existing planner can capture them without prohibitive computational effort. One such example is the sliding tiles puzzle, where solvability can be decided in polynomial time with a parity argument. We introduce separating functions, which can prove that one state is unreachable from another, and show under what conditions a potential function over any nonzero ring is a separating function. We prove that we can compactly encode these conditions for potential functions over features that are pairs, and show in which cases we can efficiently synthesize functions satisfying these conditions. We experimentally evaluate a domain-independent algorithm that successfully synthesizes such separating functions from PDDL representations of the sliding tiles puzzle, the Lights Out puzzle, and Peg Solitaire.
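The parity argument for the sliding tiles puzzle mentioned above can be made concrete. For the 3x3 (8-puzzle) variant, a state is reachable from the goal exactly when the permutation of the tiles has an even number of inversions; this classic invariant is the kind of fact that separating functions capture, but the hand-coded sketch below is only the special case, not the paper's domain-independent synthesis:

```python
# Parity test for solvability of the 3x3 sliding tiles puzzle (8-puzzle).
# A state is reachable from the goal iff the permutation of the eight
# tiles (blank ignored) has an even number of inversions.

def num_inversions(tiles):
    """Count pairs (i, j) with i < j but tiles[i] > tiles[j]."""
    seq = [t for t in tiles if t != 0]  # drop the blank (encoded as 0)
    return sum(1 for i in range(len(seq))
                 for j in range(i + 1, len(seq))
                 if seq[i] > seq[j])

def solvable(tiles):
    """For a 3x3 board, solvable iff the inversion count is even."""
    return num_inversions(tiles) % 2 == 0

# The goal state is trivially solvable; swapping two adjacent tiles
# flips the parity and yields an unsolvable state.
assert solvable([1, 2, 3, 4, 5, 6, 7, 8, 0])
assert not solvable([2, 1, 3, 4, 5, 6, 7, 8, 0])
```

No sequence of legal moves changes this parity, so the function separates the two halves of the state space.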
-
Malte Helmert, Silvan Sievers, Alexander Rovner and Augusto B. Corrêa.
On the Complexity of Heuristic Synthesis for Satisficing Classical Planning: Potential Heuristics and Beyond.
In Proceedings of the 32nd International Conference on Automated Planning and Scheduling (ICAPS 2022), pp. 124-133. 2022.
Erratum: In the published version of the paper, the proof of Theorem 7 mistakenly says "DDA" instead of "SDDA" in some places. This is fixed in the version provided here.
(PDF) (slides; PDF) (poster; PDF)
Potential functions are a general class of heuristics for classical planning. For satisficing planning, previous work suggested the use of descending and dead-end avoiding (DDA) potential heuristics, which solve planning tasks by backtrack-free search. In this work we study the complexity of devising DDA potential heuristics for classical planning tasks. We show that verifying or synthesizing DDA potential heuristics is PSPACE-complete, but suitable modifications of the DDA properties reduce the complexity of these problems to the first and second level of the polynomial hierarchy. We also discuss the implications of our results for other forms of heuristic synthesis in classical planning.
-
Augusto B. Corrêa and Jendrik Seipp.
Best-First Width Search for Lifted Classical Planning.
In Proceedings of the 32nd International Conference on Automated Planning and Scheduling (ICAPS 2022). 2022.
(PDF) (poster; PDF) (slides; PDF) (code, scripts, and data)
Lifted planners are useful to solve tasks that are too hard to ground. Still, computing informative lifted heuristics is difficult: directly adapting ground heuristics to the lifted setting is often too expensive, and extracting heuristics from the lifted representation can be uninformative. A natural alternative for lifted planners is to use width-based search. These algorithms are among the strongest for ground planning, even the variants that do not access the action model. In this work, we adapt best-first width search to the lifted setting and show that this yields state-of-the-art performance for hard-to-ground planning tasks.
-
Silvan Sievers, Daniel Gnad and Álvaro Torralba.
Additive Pattern Databases for Decoupled Search.
In Proceedings of the ICAPS 2022 Workshop on Heuristics and Search for Domain-independent Planning (HSDIP 2022). 2022.
Note: This paper is superseded by the SoCS 2022 paper by the same name.
(PDF) (Additional Material; PDF) (slides; PDF) (talk; MP4) (code, scripts and data)
Abstraction heuristics are the state of the art in optimal classical planning as heuristic search. Despite their success for explicit-state search, though, abstraction heuristics are not available for decoupled state-space search, an orthogonal reduction technique that can lead to exponential savings by decomposing planning tasks. In this paper, we show how to compute pattern database (PDB) heuristics for decoupled states. The main challenge lies in how to additively employ multiple patterns, which is crucial for strong search guidance of the heuristics. We show that in the general case, for arbitrary collections of PDBs, computing the heuristic for a decoupled state is exponential in the number of leaf components of decoupled search. We derive several variants of decoupled PDB heuristics that allow us to additively combine PDBs while avoiding this blow-up, and evaluate them empirically.
-
Augusto B. Corrêa, Florian Pommerening, Malte Helmert and Guillem Francès.
The FF Heuristic for Lifted Classical Planning.
In Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI 2022), pp. 9716-9723. 2022.
(PDF) (slides (short); PDF) (slides (long); PDF) (poster; PDF) (code and appendix)
Heuristics for lifted planning are not yet as informed as the best heuristics for ground planning. Recent work introduced the idea of using Datalog programs to compute the additive heuristic over lifted tasks. Based on this work, we show how to compute the more informed FF heuristic in a lifted manner. We extend the Datalog program with executable annotations that can also be used to define other delete-relaxation heuristics. In our experiments, we show that a planner using the lifted FF implementation produces state-of-the-art results for lifted planners. It also reduces the gap to state-of-the-art ground planners in domains where grounding is feasible.
-
Patrick Ferber and Jendrik Seipp.
Explainable Planner Selection for Classical Planning.
In Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI 2022), pp. 9741-9749. 2022.
(PDF) (slides (long); PDF) (slides (short); PDF) (poster; PDF) (code)
Since no classical planner consistently outperforms all others, it is important to select a planner that works well for a given classical planning task. The two strongest approaches for planner selection use image and graph convolutional neural networks. They have the drawback that the learned models are complicated and uninterpretable. To obtain explainable models, we identify a small set of simple task features and show that elementary and interpretable machine learning techniques can use these features to solve roughly as many tasks as the complex approaches based on neural networks.
-
Silvan Sievers and Martin Wehrle.
On Weak Stubborn Sets in Classical Planning.
In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI 2021), pp. 4167-4174. 2021.
(PDF) (technical report; PDF) (slides (short); PDF) (slides (long); PDF) (poster; PDF) (2m talk) (15m talk) (code, scripts and data)
Stubborn sets are a pruning technique for state-space search which is well established in optimal classical planning. In this paper, we show that the weak stubborn sets introduced in recent work in planning are actually not weak stubborn sets in Valmari's original sense. Based on this finding, we introduce weak stubborn sets in the original sense for planning by providing a generalized definition analogous to generalized strong stubborn sets in previous work. We discuss the relationship between strong stubborn sets, weak stubborn sets, and the sets previously called weak stubborn sets, thus providing a further step toward an overall picture of the stubborn set approach in planning.
-
Silvan Sievers and Malte Helmert.
Merge-and-Shrink: A Compositional Theory of Transformations of Factored Transition Systems.
Journal of Artificial Intelligence Research 71, pp. 781-883. 2021.
(PDF) (slides; PDF) (poster; PDF) (2m talk) (15m talk)
The merge-and-shrink framework has been introduced as a general approach for defining abstractions of large state spaces arising in domain-independent planning and related areas. The distinguishing characteristic of the merge-and-shrink approach is that it operates directly on the factored representation of state spaces, repeatedly modifying this representation through transformations such as shrinking (abstracting a factor of the representation), merging (combining two factors), label reduction (abstracting the way in which different factors interact), and pruning (removing states or transitions of a factor).
We provide a novel view of the merge-and-shrink framework as a "toolbox" or "algebra" of transformations on factored transition systems, with the construction of abstractions as only one possible application. For each transformation, we study desirable properties such as conservativeness (overapproximating the original transition system), inducedness (absence of spurious states and transitions), and refinability (reconstruction of paths in the original transition system from the transformed one). We provide the first complete characterizations of the conditions under which these desirable properties can be achieved. We also provide the first full formal account of factored mappings, the mechanism used within the merge-and-shrink framework to establish the relationship between states in the original and transformed factored transition system.
Unlike earlier attempts to develop a theory for merge-and-shrink, our approach is fully compositional: the properties of a sequence of transformations can be entirely understood by the properties of the individual transformations involved. This aspect is key to the use of merge-and-shrink as a general toolbox for transforming factored transition systems. New transformations can easily be added to our theory, with compositionality taking care of the seamless integration with the existing components. Similarly, new properties of transformations can be integrated into the theory by showing their compositionality and studying under which conditions they are satisfied by the building blocks of merge-and-shrink.
-
Dominik Drexler, Jendrik Seipp and David Speck.
Subset-Saturated Transition Cost Partitioning.
In Proceedings of the 31st International Conference on Automated Planning and Scheduling (ICAPS 2021), pp. 131-139. 2021.
(PDF) (slides; PDF) (poster; PDF) (code)
Cost partitioning admissibly combines the information from multiple heuristics for optimal state-space search. One of the strongest cost partitioning algorithms is saturated cost partitioning. It considers the heuristics in sequence and assigns to each heuristic the minimal fraction of the remaining costs that it needs for preserving all heuristic estimates. Saturated cost partitioning has recently been generalized in two directions: first, by allowing different costs for the transitions induced by the same operator, and second, by preserving the heuristic estimates for only a subset of states. In this work, we unify these two generalizations and show that the resulting subset-saturated transition cost partitioning algorithm usually yields stronger heuristics than the two generalizations by themselves.
-
Jendrik Seipp.
Online Saturated Cost Partitioning for Classical Planning.
In Proceedings of the 31st International Conference on Automated Planning and Scheduling (ICAPS 2021), pp. 317-321. 2021.
(PDF) (slides; PDF) (poster; PDF) (code)
Cost partitioning is a general method for admissibly summing up heuristic estimates for optimal state-space search. Most cost partitioning algorithms can optimize the resulting cost-partitioned heuristic for a specific state. Since computing a new cost-partitioned heuristic for each evaluated state is usually too expensive in practice, the strongest planners based on cost partitioning over abstraction heuristics precompute a set of cost-partitioned heuristics before the search and maximize over their estimates during the search. This makes state evaluations very fast, but since there is no better termination criterion than a time limit, it requires a long precomputation phase, even for the simplest planning tasks. A prototypical example of this is the Scorpion planner, which computes saturated cost partitionings over abstraction heuristics offline before the search. Using Scorpion as a case study, we show that by incrementally extending the set of cost-partitioned heuristics online during the search, we drastically speed up the planning process and often even solve more tasks.
-
Álvaro Torralba, Jendrik Seipp and Silvan Sievers.
Automatic Instance Generation for Classical Planning.
In Proceedings of the 31st International Conference on Automated Planning and Scheduling (ICAPS 2021), pp. 376-384. 2021.
(PDF) (slides; PDF) (talk) (poster; PDF) (code)
The benchmarks from previous International Planning Competitions (IPCs) are the de facto standard for evaluating planning algorithms. The IPC set is both a collection of planning domains and a selection of instances from these domains. Most of the domains come with a parameterized generator that generates new instances for a given set of parameter values. Due to the steady progress of planning research, some of the instances that were generated for past IPCs are inadequate for evaluating current planners. To alleviate this problem, we introduce Autoscale, an automatic tool that selects instances for a given domain. Autoscale takes into account constraints from the domain designer as well as the performance of current planners to generate an instance set of appropriate difficulty, while avoiding too much bias with respect to the considered planners. We show that the resulting benchmark set is superior to the IPC set and has the potential to improve empirical evaluation of planning research.
-
Clemens Büchner, Thomas Keller and Malte Helmert.
Exploiting Cyclic Dependencies in Landmark Heuristics.
In Proceedings of the 31st International Conference on Automated Planning and Scheduling (ICAPS 2021), pp. 65-73. 2021.
(PDF) (slides; PDF) (poster; PDF) (code)
Landmarks of a planning task denote properties that must be satisfied by all plans. Existing landmark heuristics exploit that each landmark must be achieved at least once. However, if the orderings between the landmarks induce cyclic dependencies, one of the landmarks in each cycle must be achieved an additional time. We propose two novel heuristics for cost-optimal planning that consider cyclic dependencies between landmarks in addition to the cost for achieving all landmarks once. We show that our heuristics dominate the minimum hitting set solution over any set of landmarks as well as h⁺ if all delete-relaxation landmarks are considered. An experimental evaluation on benchmarks from the International Planning Competition shows that exploiting cyclic dependencies can lead to improved heuristics.
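The minimum hitting set connection mentioned in the abstract can be illustrated with a brute-force sketch: view each landmark as a set of operators of which every plan must use at least one, so the cost of a cheapest hitting set is an admissible estimate. The operator names below are invented for illustration:

```python
from itertools import chain, combinations

def min_hitting_set_cost(landmarks, cost):
    """Cost of a cheapest operator set hitting every landmark (brute
    force). Admissible: any plan must use at least one operator from
    each landmark, so no plan can be cheaper than this bound."""
    ops = sorted(set(chain.from_iterable(landmarks)))
    best = float('inf')
    for r in range(len(ops) + 1):
        for subset in combinations(ops, r):
            if all(any(o in subset for o in lm) for lm in landmarks):
                best = min(best, sum(cost[o] for o in subset))
    return best

# Two landmarks sharing operator 'b': hitting both with {'b'} costs 4,
# while {'a', 'c'} costs 6, so the heuristic estimate is 4.
landmarks = [{"a", "b"}, {"b", "c"}]
cost = {"a": 3, "b": 4, "c": 3}
estimate = min_hitting_set_cost(landmarks, cost)
```

The heuristics in the paper go beyond this bound by additionally charging for cycles of landmark orderings; this sketch covers only the hitting set baseline they dominate.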
-
Florian Pommerening, Thomas Keller, Valentina Halasi, Jendrik Seipp, Silvan Sievers and Malte Helmert.
Dantzig-Wolfe Decomposition for Cost Partitioning.
In Proceedings of the 31st International Conference on Automated Planning and Scheduling (ICAPS 2021), pp. 271-280. 2021.
(PDF) (technical report; PDF) (slides; PDF) (recording) (poster; PDF) (code)
Optimal cost partitioning can produce high quality heuristic estimates even from small abstractions. It can be computed with a linear program (LP) but the size of this LP often makes this impractical. Recent work used Lagrangian decomposition to speed up the computation. Here we use a different decomposition technique called Dantzig-Wolfe decomposition to tackle the problem. This gives new insights into optimal cost partitioning and has several advantages over Lagrangian decomposition: our method detects when a cost partition is optimal; it can deal with general cost functions; and it does not consider abstractions in the linear program that do not contribute to the heuristic value. We also show the advantage of the method empirically and investigate several improvements that are useful for all cost partitioning methods.
-
Rik de Graaff, Augusto B. Corrêa and Florian Pommerening.
Concept Languages as Expert Input for Generalized Planning: Preliminary Results.
In Proceedings of the ICAPS 2021 Workshop on Knowledge Engineering for Planning and Scheduling (KEPS 2021). 2021.
(PDF)
Planning is an attractive approach to solving problems because of its generality and its ease of use. A domain expert often has knowledge that could improve the performance of such a solution but this knowledge may be vague or not directly actionable. For example, an expert could be aware that a certain property is important but not have a good way of using this knowledge to speed up the search for a solution. In this paper we evaluate concept languages as a possible input language for injecting expert domain knowledge into a planning system. We also explore mixed integer programming as a way to use this knowledge to improve search efficiency and to help the user find and refine useful domain knowledge.
-
Clemens Büchner and Thomas Keller.
LM-BFS: A Framework for Landmarks in Planning.
In Proceedings of the ICAPS 2021 Workshop on Heuristics and Search for Domain-Independent Planning (HSDIP 2021). 2021.
Note: Superseded by the ICAPS 2023 paper "Landmark Progression in Heuristic Search".
(PDF) (slides; PDF) (recording; MP4)
It is usually prohibitively expensive to recompute landmarks in every state of a heuristic search algorithm that bases its heuristic estimate on landmarks. Successful planning systems like LAMA compute landmarks just for the initial state instead and only keep track, for each state encountered during search, of which landmarks of the initial state are accepted or required again. The LM-A* algorithm describes a similar idea for cost-optimal heuristic search. However, it lacks support for reasonable orderings, which have recently been shown to be informative when cyclic dependencies between landmarks are exploited in the heuristic computation. We propose the novel LM-BFS framework for best-first heuristic search with landmark-based heuristics, which allows describing a variety of algorithms in terms of five components: landmark generation, landmark graph progression, landmark graph merging, landmark graph extension, and exploration strategy. We use LM-BFS to show that considering reasonable orderings in the landmark graph progression component leads to flawed landmark graphs in LAMA, and propose a novel landmark graph extension that captures the same information flawlessly. We observe empirically that considering reasonable orderings in the extension rather than the progression component has a positive effect both for LAMA and for cost-optimal planning.
-
Jendrik Seipp, Thomas Keller and Malte Helmert.
Saturated Post-hoc Optimization for Classical Planning.
In Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI 2021), pp. 11947-11953. 2021.
(PDF) (slides; PDF) (recording; MP4) (poster; PDF) (code)
Saturated cost partitioning and post-hoc optimization are two powerful cost partitioning algorithms for optimal classical planning. The main idea of saturated cost partitioning is to give each considered heuristic only the fraction of remaining operator costs that it needs to prove its estimates. We show how to apply this idea to post-hoc optimization and obtain a heuristic that dominates the original both in theory and on the IPC benchmarks.
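The core idea of saturated cost partitioning described above can be sketched for explicit abstract transition systems. This is a simplified illustration, not the paper's algorithm: saturated costs are implicitly clamped at zero and infinite estimates are ignored, whereas the actual methods also handle general (including negative) cost functions:

```python
import heapq

def goal_distances(states, transitions, goals, cost):
    """Backward Dijkstra: shortest distance from each state to a goal."""
    dist = {s: float('inf') for s in states}
    for g in goals:
        dist[g] = 0
    back = {}  # t -> list of (s, op) with a transition s --op--> t
    for s, op, t in transitions:
        back.setdefault(t, []).append((s, op))
    pq = [(0, g) for g in goals]
    while pq:
        d, t = heapq.heappop(pq)
        if d > dist[t]:
            continue  # stale queue entry
        for s, op in back.get(t, []):
            nd = d + cost[op]
            if nd < dist[s]:
                dist[s] = nd
                heapq.heappush(pq, (nd, s))
    return dist

def saturate(transitions, dist):
    """Minimum cost per operator preserving all goal distances:
    scf(o) = max over transitions s --o--> t of dist(s) - dist(t),
    here clamped at 0 and ignoring infinite estimates."""
    scf = {}
    for s, op, t in transitions:
        if dist[s] == float('inf') or dist[t] == float('inf'):
            continue
        scf[op] = max(scf.get(op, 0), dist[s] - dist[t])
    return scf

def saturated_cost_partitioning(abstractions, cost):
    """Process abstractions in sequence, giving each only the costs it
    needs and passing the remaining costs on to the next one."""
    remaining = dict(cost)
    heuristics = []
    for states, transitions, goals in abstractions:
        dist = goal_distances(states, transitions, goals, remaining)
        heuristics.append(dist)
        for op, c in saturate(transitions, dist).items():
            remaining[op] -= c
    return heuristics

# Two copies of the same one-operator abstraction: the first heuristic
# consumes the full cost of 'a', leaving the second with estimate 0,
# so the sum of the estimates stays admissible.
abstractions = [({0, 1}, [(0, "a", 1)], {1}),
                ({0, 1}, [(0, "a", 1)], {1})]
hs = saturated_cost_partitioning(abstractions, {"a": 2})
```

Because each heuristic only keeps the costs it needs, the per-state sum of all returned heuristics is an admissible estimate under the original cost function.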
Lifted and Generalized Representations for Classical Planning: LGR-Plan (funded by the Swiss National Science Foundation)
Classical planning is the problem of finding a sequence of actions transforming a given initial state into a state satisfying a given goal. Today’s state of the art in classical planning almost universally operates on factored representations inspired by propositional logic: for example, a state space with 2^100 states can be represented with 100 propositional state variables.
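As a minimal illustration of such a factored representation (the variable and action names below are invented for this sketch, not taken from any specific planner):

```python
# A factored (propositional) representation: n Boolean state variables
# describe a state space of 2^n states without ever enumerating it.

variables = ["truck_at_depot", "package_in_truck", "package_at_goal"]

# A single state is an assignment to the variables ...
state = {"truck_at_depot": True, "package_in_truck": False,
         "package_at_goal": False}

# ... while the induced state space has 2^3 = 8 states here
# (and 2^100 states for 100 variables).
num_states = 2 ** len(variables)

# A STRIPS-style action: if the preconditions hold, apply the effects.
def apply_action(state, precond, effects):
    if all(state[v] == val for v, val in precond.items()):
        successor = dict(state)
        successor.update(effects)
        return successor
    return None  # action not applicable in this state

load = ({"truck_at_depot": True, "package_in_truck": False},  # precondition
        {"package_in_truck": True})                           # effect
successor = apply_action(state, *load)
```

Actions are likewise described compactly over the variables, so the exponentially large transition system is never built explicitly.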
Recently, the limitations of such representations have become increasingly apparent, and there is rapidly growing interest in algorithms that directly operate on lifted representations based on first-order predicate logic or similar formalisms. Moreover, several recent works suggested lifted representations that distill information across multiple tasks in the same planning domain (infinite family of planning tasks) into a single generalized representation that applies to all tasks in the domain. A typical approach is to learn or synthesize generalized representations from small example tasks in the domain and then use them to solve more challenging tasks.
We believe that the time is right for lifted and generalized representations to challenge propositional representations as the leading representation paradigm in classical planning. We suggest two research directions in pursuit of this goal:
Lifted algorithms for classical planning: we conceive and implement planning algorithms completely based on lifted representations that outperform state-of-the-art algorithms (based on non-lifted representations). We consider both the setting of satisficing planning (finding plans with no optimality guarantees) and optimal planning (finding plans of minimum cost). This involves devising efficient lifted algorithms for all building blocks of a state-of-the-art planning system and combining them into a well-engineered whole. One major goal for this work package is a “Lifted LAMA” planner outperforming Richter and Westphal’s highly influential LAMA system on the standard benchmark suite in classical planning.
Generalized representations for classical planning: several innovative generalized representations for classical planning have been suggested in the past few years, including action schema networks, generalized potential heuristics, STRIPS hypergraph networks, and sketches. They all come with the hope of significantly improving scalability of classical planning, but so far very little is known about their strengths and weaknesses, both in absolute terms and in relation to each other. We develop an encompassing theory that provides the formal underpinnings to put these disjointed results into a coherent whole. We formally clarify the notion of “planning domain” to be able to express the problems that these approaches attempt to solve in a rigorous way. We analyze the decidability and complexity properties of these problems, and we provide expressivity and compilation results demonstrating the strengths, weaknesses and relationships of these approaches. This includes relating representations used in planning to ones developed in other areas of AI such as (hyper-) graph neural networks and neural logic machines.
Contact: Malte Helmert
Relevant publications:
-
Augusto B. Corrêa and Giuseppe De Giacomo.
Lifted Planning: Recent Advances in Planning Using First-Order Representations.
In Proceedings of the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024), pp. 8010-8019. 2024.
(PDF)
Lifted planning is usually defined as planning directly over a first-order representation. From the mid-1990s until the late 2010s, lifted planning was sidelined, as most of the state-of-the-art planners first ground the task and then solve it using a propositional representation. Moreover, it was unclear whether lifted planners could scale. But as planning problems become harder, they also become infeasible to ground. Recently, lifted planners came back into play, aiming at problems where grounding is a bottleneck. In this work, we survey recent advances in lifted planning. The main techniques rely either on state-space search or logic satisfiability. For lifted search-based planners, we show the direct connections to other areas of computer science, such as constraint satisfaction problems and databases. For lifted planners based on satisfiability, the advances in modeling are crucial to their scalability. We briefly describe the main planners available in the literature and their techniques.
-
Clemens Büchner, Remo Christen, Salomé Eriksson and Thomas Keller.
Hitting Set Heuristics for Overlapping Landmarks in Satisficing Planning.
In Proceedings of the 17th International Symposium on Combinatorial Search (SoCS 2024), pp. 198-202. 2024.
A non-archival, but longer version of this paper has been published at the HSDIP workshop of ICAPS 2024. See the separate entry for that paper. We recommend citing the SoCS paper in scholarly publications.
(PDF) (slides; PDF) (poster; PDF)
Landmarks are a core component of LAMA, a state-of-the-art satisficing planning system based on heuristic search. It uses landmarks to estimate the goal distance by summing up the costs of their cheapest achievers. This procedure ignores synergies between different landmarks: the cost of an action is counted multiple times if it is the cheapest achiever of several landmarks. Common admissible landmark heuristics tackle this problem by underapproximating the cost of a minimum hitting set of the landmark achievers. We suggest overapproximating it by computing suboptimal hitting sets instead if admissibility is not a requirement. As our heuristics consider synergies between landmarks, we further propose to relax certain restrictions LAMA imposes on the number of landmarks and synergies between them. Our experimental evaluation shows a reasonable increase in the number of landmarks that leads to better guidance when used with our new heuristics.
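The suboptimal-hitting-set idea can be conveyed with the classic greedy weighted approximation: repeatedly pick the action with the best cost per newly hit landmark. This is a generic sketch of the technique, not the authors' code; all names are illustrative.

```python
def greedy_hitting_set(achiever_sets, cost):
    """Overapproximate the minimum-cost hitting set of the landmarks'
    achiever sets. Inadmissible but cheap, which is acceptable in
    satisficing search.

    achiever_sets: one set of action names per landmark.
    cost: action name -> non-negative cost.
    """
    unhit = [s for s in achiever_sets if s]
    chosen = set()
    while unhit:
        # Best ratio of cost to number of newly hit landmarks;
        # sorted() makes tie-breaking deterministic.
        best = min(
            sorted({a for s in unhit for a in s}),
            key=lambda a: cost[a] / sum(a in s for s in unhit),
        )
        chosen.add(best)
        unhit = [s for s in unhit if best not in s]
    return chosen, sum(cost[a] for a in chosen)
```

Because an action shared by several landmarks is counted only once, the estimate captures exactly the synergies that summing the cheapest achievers ignores.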
-
Augusto B. Corrêa, Giuseppe De Giacomo, Malte Helmert and Sasha Rubin.
Planning with Object Creation.
In Proceedings of the 34th International Conference on Automated Planning and Scheduling (ICAPS 2024), pp. 104-113. 2024.
An earlier work that introduces object creation to classical planning formalisms is the paper "Introducing Dynamic Object Creation to PDDL Planning" by Edelkamp, Lluch-Lafuente and Moraru in 2019. That work was submitted but not accepted to the ICAPS 2019 IPC workshop, and so we cannot provide a citable reference, but the work can be found via web search for the title or by searching on openreview.net. Our development of the topic is independent of the 2019 paper.
(PDF) (slides; PDF)
Classical planning problems are defined using some specification language, such as PDDL. The domain expert defines action schemas, objects, the initial state, and the goal. One key aspect of PDDL is that the set of objects cannot be modified during plan execution. While this is fine in many domains, sometimes it makes modeling more complicated. This may impact the performance of planners, and it requires the domain expert to bound the number of required objects beforehand, which can be a challenge. We introduce an extension to the classical planning formalism, where action effects can create and remove objects. This problem is semi-decidable, but it becomes decidable if we can bound the number of objects in any given state, even though the state space is still infinite. On the practical side, we extend the Powerlifted planning system to support this PDDL extension. Our results show that this extension improves the performance of Powerlifted while supporting more natural PDDL models.
-
Claudia Grundke, Gabriele Röger and Malte Helmert.
Formal Representations of Classical Planning Domains.
In Proceedings of the 34th International Conference on Automated Planning and Scheduling (ICAPS 2024), pp. 239-248. 2024.
(PDF) (slides; PDF) (poster; PDF)
Planning domains are an important notion, e.g. when it comes to restricting the input for generalized planning or learning approaches. However, domains as specified in PDDL cannot fully capture the intuitive understanding of a planning domain. We close this semantic gap and propose using PDDL axioms to characterize the (typically infinite) set of legal tasks of a domain. A minor extension makes it possible to express all properties that can be determined in polynomial time. We demonstrate the suitability of the approach on established domains from the International Planning Competition.
-
Simon Dold.
Planning Domain Modelling Competition.
In Proceedings of the ICAPS-2024 Workshop on the International Planning Competition (WIPC 2024). 2024.
(PDF) (slides; PDF)
The international planning competition (IPC) is a recurring event that compares the performance of planners and awards the best ones. The evaluation is based on a set of benchmark problems that describe interesting planning problems in PDDL (the Planning Domain Definition Language). There is a general interest in high-quality benchmarks, not only for the IPC but for research in automated planning in general. We explore the possibility of a planning domain modeling competition that produces them. In this work, we discuss the desirable properties of benchmarks and how they could be evaluated.
-
Clemens Büchner, Remo Christen, Salomé Eriksson and Thomas Keller.
Hitting Set Heuristics for Overlapping Landmarks in Satisficing Planning.
In Proceedings of the ICAPS 2024 Workshop on Heuristics and Search for Domain-independent Planning (HSDIP 2024). 2024.
An archival, but shorter version of this paper has been published at SoCS 2024. See the separate entry for that paper. We recommend citing the SoCS paper in scholarly publications.
(PDF) (slides; PDF) (code, scripts and data)
Landmarks are a core component of LAMA, a state-of-the-art satisficing planning system based on heuristic search. It uses landmarks to estimate the goal distance by summing up the costs of their cheapest achievers. This procedure ignores synergies between different landmarks: the cost of an action is counted multiple times if it is the cheapest achiever of several landmarks. Common admissible landmark heuristics tackle this problem by underapproximating the cost of a minimum hitting set of the landmark achievers. We suggest overapproximating it by computing suboptimal hitting sets instead if admissibility is not a requirement. As our heuristics consider synergies between landmarks, we further propose to relax certain restrictions LAMA imposes on the number of landmarks and synergies between them. Our experimental evaluation shows a reasonable increase in the number of landmarks that leads to better guidance when used with our new heuristics.
-
Augusto B. Corrêa and Jendrik Seipp.
Consolidating LAMA with Best-First Width Search.
In Proceedings of the ICAPS 2024 Workshop on Heuristics and Search for Domain-independent Planning (HSDIP 2024). 2024.
(PDF) (slides; PDF)
One key decision for heuristic search algorithms is how to balance exploration and exploitation. In classical planning, novelty search has come out as the most successful approach in this respect. The idea is to favor states that contain previously unseen facts when searching for a plan. This is done by maintaining a record of the tuples of facts observed in previous states. Then the novelty of a state is the size of the smallest previously unseen tuple. The most successful version of novelty search is best-first width search (BFWS), which combines novelty measures with heuristic estimates. An orthogonal approach to balance exploration-exploitation is to use several open-lists. These open-lists are ordered using different heuristic estimates, which diversify the information used in the search. The search algorithm then alternates between these open-lists, trying to exploit these different estimates. This is the approach used by LAMA, a classical planner that, a decade after its release, is still considered state-of-the-art in agile planning. In this paper, we study how to combine LAMA and BFWS. We show that simply adding the strongest open-list used in BFWS to LAMA harms performance. However, we show that combining only parts of each planner leads to a new state-of-the-art agile planner.
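The novelty measure at the heart of BFWS is easy to sketch: maintain a record of the fact tuples seen in previously generated states, and report the size of the smallest tuple in the current state that is new. A minimal Python illustration of a width-2 novelty table (our own naming, not the BFWS implementation):

```python
from itertools import combinations

class NoveltyTable:
    """Tracks fact tuples seen in previously generated states.

    novelty(state) is the size of the smallest tuple of facts in the
    state never seen before, capped at max_arity + 1 ("not novel").
    Lower values mark more novel, hence more promising, states.
    """

    def __init__(self, max_arity=2):
        self.max_arity = max_arity
        self.seen = set()

    def novelty(self, state):
        facts = sorted(state)                 # canonical tuple order
        result = self.max_arity + 1           # assume not novel
        for size in range(1, self.max_arity + 1):
            for tup in combinations(facts, size):
                if tup not in self.seen:
                    self.seen.add(tup)
                    result = min(result, size)
        return result
```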
-
Augusto B. Corrêa, Markus Hecher, Malte Helmert, Davide Mario Longo, Florian Pommerening and Stefan Woltran.
Grounding Planning Tasks Using Tree Decompositions and Iterated Solving.
In Proceedings of the 33rd International Conference on Automated Planning and Scheduling (ICAPS 2023), pp. 100-108. 2023.
(PDF) (code) (slides; PDF) (poster; PDF)
Classical planning tasks are commonly described in a first-order language. However, most classical planners translate tasks by grounding them and then rewriting them into a propositional language. In recent years, the grounding step has become a larger bottleneck. In this work, we study how to improve it. We build on top of the most common grounder for planning tasks which uses Datalog to find all reachable atoms and actions. Inspired by recent progress in lifted planning, database theory, and algorithmics, we develop a new method to ground these Datalog programs. Our algorithm can ground more instances than the baseline, and most tasks it cannot ground are out of reach from any ground planner.
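The Datalog core of such a grounder can be illustrated with a naive bottom-up fixpoint: starting from the facts of the initial state, apply the rules until no new ground atoms are derivable. The paper replaces this naive evaluation with tree decompositions and iterated solving; the sketch below and all its names are ours.

```python
def match_body(body, known, subst):
    """Enumerate substitutions that make every body atom a known fact."""
    if not body:
        yield subst
        return
    first, rest = body[0], body[1:]
    for fact in known:
        if len(fact) != len(first) or fact[0] != first[0]:
            continue
        new, ok = dict(subst), True
        for term, value in zip(first[1:], fact[1:]):
            if term.startswith("?"):            # variable: bind or check
                if new.setdefault(term, value) != value:
                    ok = False
                    break
            elif term != value:                 # constant: must match
                ok = False
                break
        if ok:
            yield from match_body(rest, known, new)

def ground_datalog(facts, rules):
    """Compute the least fixpoint of a Datalog program bottom-up.

    facts: set of ground atoms, e.g. ("road", "a", "b").
    rules: list of (head, body) pairs; terms starting with "?" are
    variables. Returns all derivable ground atoms.
    """
    known = set(facts)
    while True:
        new_atoms = {
            tuple(subst.get(t, t) for t in head)
            for head, body in rules
            for subst in match_body(body, known, {})
        } - known
        if not new_atoms:
            return known
        known |= new_atoms
```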
Unifying the Theory and Algorithms of Factored State-Space Search: UTA (funded by a Swiss National Science Foundation Advanced Grant)
Automated planning, also known as factored state-space search, is a fundamental problem in AI with numerous applications in computer science and elsewhere. There are three dominant algorithmic paradigms: heuristic search, symbolic search, and SAT planning. While important theoretical results exist to explain and improve the performance of some of these algorithms, almost nothing is known about the connections between them.
We build a unified theory of factored state-space search that identifies a common reasoning core encompassing all three algorithmic paradigms. This core allows us to directly relate these algorithms to each other in theory and to build hybridized algorithms combining their individual strengths in practice. Identifying such a core also allows us to better understand the trade-off between representational expressiveness and reasoning efficiency, and to design better algorithms to address this trade-off.
Contact: Malte Helmert and Gabriele Röger
Previous research projects
AI Planning: Prost
Prost is a domain-independent probabilistic planning system developed in collaboration between researchers within and outside our research group. Prost can be used for online planning for fully observable, discrete MDPs that are described in the input language RDDL (Relational Dynamic Influence Diagram Language). The system is implemented in C++.
Prost won the fully observable MDP track of the Seventh International Planning Competition (IPC 2011) at the 21st International Conference on Automated Planning and Scheduling (ICAPS 2011) as well as of the Eighth International Planning Competition (IPC 2014) at the 24th International Conference on Automated Planning and Scheduling (ICAPS 2014). The planner Prost-DD, implemented on top of Prost by Florian Geißer and David Speck, won the fully observable MDP track of the Ninth International Planning Competition (IPC 2018) at the 28th International Conference on Automated Planning and Scheduling (ICAPS 2018).
Further information: Prost
Contact: Thomas Keller
AIPlan4EU – Bringing AI Planning to the European AI On-Demand Platform (funded by the European Union)
Automated Planning and Scheduling is a central research area in AI that has been studied since the inception of the field and where European research has been making strong contributions over decades. Planning is a decision-making technology that consists in reasoning over a predictive model of the system being controlled and deciding how and when to act in order to achieve a desired objective. It is a relevant technology for many application areas that need quick, automated and optimal decisions, like agile manufacturing, agrifood or logistics. Although there is a wealth of techniques that are mature in terms of science and systems, several obstacles hinder their adoption, thus preventing them from making the impact on European industry that they should. For example, it is hard for practitioners to find the right techniques for a given planning problem, there are no shared standards to use them, and there is no easy access to expertise on how to encode domain knowledge into a planner.
The AIPlan4EU project will bring AI planning as a first-class citizen in the European AI On-Demand (AI4EU) Platform by developing a uniform, user-centered framework to access the existing planning technology and by devising concrete guidelines for innovators and practitioners on how to use this technology. To do so, we will consider use-cases from diverse application areas that will drive the design and the development of the framework, and include several available planning systems as engines that can be selected to solve practical problems. We will develop a general and planner-agnostic API that will both be served by the AI4EU platform and be available as a resource to be integrated into the users' systems. The framework will be validated on use-cases both from within the consortium and recruited by means of cascade funding; moreover, standard interfaces between the framework and common industrial technologies will be developed and made available.
Further information: AIPlan4EU Project
Contact: Gabriele Röger
Relevant publications:
-
Michael Katz, Gabriele Röger and Malte Helmert.
On Producing Shortest Cost-Optimal Plans.
In Proceedings of the 15th International Symposium on Combinatorial Search (SoCS 2022), pp. 100-108. 2022.
(PDF) (slides; PDF)
Cost-optimal planning is at the heart of planning research, with many existing planners that produce provably optimal solutions. While some applications pose additional restrictions, such as producing shortest (in the number of actions) among the cost-optimal plans, standard cost-optimal planning does not provide such a guarantee. We discuss two possible approaches to produce provably the shortest among the cost-optimal plans, one corresponding to an instantiation of cost-algebraic A*, the other based on a cost transformation. We formally prove that the new cost-transformation method indeed produces the shortest among the cost-optimal plans and empirically compare the performance of the approaches in different configurations.
-
Andrea Micheli, Alexandre Arnold, Arthur Bit-Monnot, Luigi Bonassi, Luca Framba, Alfonso Emilio Gerevini, Selvakumar Hastham Sathiya Satchi, Malte Helmert, Felix Ingrand, Luca Iocchi, Uwe Kockemann, Oscar Lima, Fabio Patrizi, Federico Pecora, Guillaume Poveda, Gabriele Röger, Alessandro Saetti, Alessandro Saffiotti, Enrico Scala, Ivan Serina, Sebastian Stock, Florent Teichteil-Koenigsbuch, Alessandro Trapasso, Paolo Traverso and Alessandro Valentini.
Unified Planning: A Python Library Making Planning Technology Accessible.
In System Demonstrations at the 32nd International Conference on Automated Planning and Scheduling (ICAPS 2022). 2022.
(PDF) (poster; PDF) (video)
The aim of the AIPlan4EU project is to make state-of-the-art planning technology easily accessible for real-world applications and to integrate planning as a service on the European AI-on-demand platform. At the very heart of the project is the development of a unified planning framework that makes it possible to programmatically specify planning tasks and to interact with different planning engines through a common interface. The demo presents the AIPlan4EU vision for the unified planning framework and the current state of the development.
Abstraction Heuristics for Planning and Combinatorial Search: AHPACS (funded by the Swiss National Science Foundation)
Abstraction heuristics are used in AI planning to estimate the cost of reaching a goal by solving an abstract version of the problem. Such abstractions (in particular the so-called "merge&shrink abstractions") are currently the most accurate heuristics but still have a large gap between their theoretically proven potential and their practical performance.
The project AHPACS (Abstraction Heuristics for Planning and Combinatorial Search) aims to close this gap. On the theoretical side, the project compares abstraction heuristics with other heuristic families according to their asymptotic accuracy. Asymptotic accuracy is defined similarly to asymptotic complexity, using compilability between heuristics of two different families to prove statements about their relative accuracy. On the practical side, the project investigates using global information provided by other heuristics and learning from mistakes with a model-checking technique called counterexample-guided abstraction refinement (CEGAR). It also looks at how abstractions can be compressed or constrained to the most useful information in order to make better use of available resources. The last phase of AHPACS investigates applications of abstraction heuristics outside of automated planning, such as model checking and bioinformatics.
Contact: Malte Helmert and Florian Pommerening
Relevant publications:
-
Sergiy Bogomolov, Alexandre Donzé, Goran Frehse, Radu Grosu, Taylor T. Johnson, Hamed Ladan, Andreas Podelski and Martin Wehrle.
Guided Search for Hybrid Systems Based on Coarse-Grained Space Abstractions.
International Journal on Software Tools for Technology Transfer 18 (4), pp. 449-467. 2016.
(PDF)
Hybrid systems represent an important and powerful formalism for modeling real-world applications such as embedded systems. A verification tool like SpaceEx is based on the exploration of a symbolic search space (the region space). As a verification tool, it is typically optimized towards proving the absence of errors. In some settings, e.g., when the verification tool is employed in a feedback-directed design cycle, one would like to have the option to call a version that is optimized towards finding an error trajectory in the region space. A recent approach in this direction is based on guided search. Guided search relies on a cost function that indicates which states are promising to be explored, and preferably explores more promising states first. In this paper, we propose an abstraction-based cost function based on coarse-grained space abstractions for guiding the reachability analysis. For this purpose, a suitable abstraction technique that exploits the flexible granularity of modern reachability analysis algorithms is introduced. The new cost function is an effective extension of pattern database approaches that have been successfully applied in other areas. The approach has been implemented in the SpaceEx model checker. The evaluation shows its practical potential.
-
Martin Wehrle and Sebastian Kupferschmid.
Downward Pattern Refinement for Timed Automata.
International Journal on Software Tools for Technology Transfer 18 (1), pp. 41-56. 2016.
(PDF)
Directed model checking is a well-established approach for detecting error states in concurrent systems. A popular variant to find shortest error traces is to apply the A* search algorithm with distance heuristics that never overestimate the real error distance. An important class of such distance heuristics is the class of pattern database heuristics. Pattern database heuristics are built on abstractions of the system under consideration. In this paper, we propose downward pattern refinement, a systematic approach for the construction of pattern database heuristics for concurrent systems of timed automata. First, we propose a general framework for pattern databases in the context of timed automata and show that desirable theoretical properties hold for the resulting pattern database. Afterward, we formally define a concept to measure the accuracy of abstractions. Based on this concept, we propose an algorithm for computing succinct abstractions that are still accurate to produce informed pattern databases. We evaluate our approach on large and complex industrial problems. The experiments show the practical potential of the resulting pattern database heuristic.
-
Florian Pommerening, Gabriele Röger, Malte Helmert and Blai Bonet.
Heuristics for Cost-Optimal Classical Planning based on Linear Programming.
In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI 2015), pp. 4303-4309. 2015.
Note: This paper was invited for submission to the Best Papers From Sister Conferences Track, based on a paper that appeared in the International Conference on Automated Planning and Scheduling (ICAPS) 2014. When referring to this work, please cite the ICAPS 2014 paper instead of this version.
(PDF)
Many heuristics for cost-optimal planning are based on linear programming. We cover several interesting heuristics of this type by a common framework that fixes the objective function of the linear program. Within the framework, constraints from different heuristics can be combined in one heuristic estimate which dominates the maximum of the component heuristics. Different heuristics of the framework can be compared on the basis of their constraints. We present theoretical results on the relation between existing heuristics and experimental results that demonstrate the potential of the proposed framework.
-
Sascha Scherrer, Florian Pommerening and Martin Wehrle.
Improved Pattern Selection for PDB Heuristics in Classical Planning (Extended Abstract).
In Proceedings of the 8th Annual Symposium on Combinatorial Search (SoCS 2015), pp. 216-217. 2015.
(PDF) (poster; PDF)
We propose the application of variable neighborhood search to address the problem of local optima during the hill-climbing procedure of iPDB.
-
Malte Helmert, Gabriele Röger and Silvan Sievers.
On the Expressive Power of Non-Linear Merge-and-Shrink Representations.
In Proceedings of the 25th International Conference on Automated Planning and Scheduling (ICAPS 2015), pp. 106-114. 2015.
(PDF)
We prove that general merge-and-shrink representations are strictly more powerful than linear ones by showing that there exist problem families that can be represented compactly with general merge-and-shrink representations but not with linear ones. We also give a precise bound that quantifies the necessary blowup incurred by conversions from general merge-and-shrink representations to linear representations or BDDs/ADDs. Our theoretical results suggest an untapped potential for non-linear merging strategies and for the use of non-linear merge-and-shrink-like representations within symbolic search.
-
Jendrik Seipp, Florian Pommerening and Malte Helmert.
New Optimization Functions for Potential Heuristics.
In Proceedings of the 25th International Conference on Automated Planning and Scheduling (ICAPS 2015), pp. 193-201. 2015.
(PDF) (slides; PDF) (recording)
Potential heuristics, recently introduced by Pommerening et al., characterize admissible and consistent heuristics for classical planning as a set of declarative constraints. Every feasible solution for these constraints defines an admissible heuristic, and we can obtain heuristics that optimize certain criteria such as informativeness by specifying suitable objective functions.
The original paper only considered one such objective function: maximizing the heuristic value of the initial state. In this paper, we explore objectives that attempt to maximize heuristic estimates for all states (reachable and unreachable), maximize heuristic estimates for a sample of reachable states, maximize the number of detected dead ends, or minimize search effort. We also search for multiple heuristics with complementary strengths that can be combined to obtain even better heuristics.
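Once the weights are fixed, a potential heuristic is simply a weighted sum over the facts of a state, and consistency is a linear condition on the weights — which is what makes the LP formulation possible. A hypothetical Python sketch of both (the actual weights come from solving the LP; names are ours):

```python
def potential_heuristic(weights, state):
    """h(s) = sum of the potentials of the facts that hold in s."""
    return sum(weights.get(f, 0.0) for f in state)

def is_consistent(weights, operators):
    """Declarative consistency condition: no operator may decrease the
    heuristic value by more than its cost. Each operator is a triple
    (cost, deleted facts, added facts)."""
    return all(
        potential_heuristic(weights, dels)
        - potential_heuristic(weights, adds) <= cost
        for cost, dels, adds in operators
    )
```

Any weight assignment satisfying such constraints (plus goal-awareness) yields an admissible heuristic, so the objective function is free to optimize for informativeness.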
-
Florian Pommerening, Malte Helmert, Gabriele Röger and Jendrik Seipp.
From Non-Negative to General Operator Cost Partitioning.
In Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI 2015), pp. 3335-3341. 2015.
(PDF) (slides; PDF) (technical report; PDF) (erratum; PDF)
Operator cost partitioning is a well-known technique to make admissible heuristics additive by distributing the operator costs among individual heuristics. Planning tasks are usually defined with non-negative operator costs and therefore it appears natural to demand the same for the distributed costs. We argue that this requirement is not necessary and demonstrate the benefit of using general cost partitioning. We show that LP heuristics for operator-counting constraints are cost-partitioned heuristics and that the state equation heuristic computes a cost partitioning over atomic projections. We also introduce a new family of potential heuristics and show their relationship to general cost partitioning.
-
Gabriele Röger and Florian Pommerening.
Linear Programming for Heuristics in Optimal Planning.
In Proceedings of the AAAI-2015 Workshop on Planning, Search, and Optimization (PlanSOpt), pp. 69-76. 2015.
(PDF)
Many recent planning heuristics are based on LP optimization. However, planning experts mostly use LP solvers as a black box and it is often not obvious to them which LP techniques would be most suitable for their specific applications.
To foster the communication between the planning and the optimization community, this paper gives an easily accessible overview over these recent LP-based heuristics, namely the optimal cost partitioning heuristic for abstractions, the post-hoc optimization heuristic, a landmark heuristic, the state-equation heuristic, and a delete relaxation heuristic. All these heuristics fit the framework of so-called operator-counting constraints, which we also present.
-
Nathan R. Sturtevant, Ariel Felner and Malte Helmert.
Exploiting the Rubik's Cube 12-Edge PDB by Combining Partial Pattern Databases and Bloom Filters.
In Proceedings of the Seventh Annual Symposium on Combinatorial Search (SoCS 2014), pp. 175-183. 2014.
(PDF)
Pattern Databases (PDBs) are a common form of abstraction-based heuristic which are often compressed so that a large PDB can fit in memory. Partial Pattern Databases (PPDBs) achieve this by storing only layers of the PDB which are close to the goal. This paper studies the problem of how to best compress and use the 457 GB 12-edge Rubik's cube PDB, suggesting a number of ways that Bloom filters can be used to effectively compress PPDBs. We then develop a theoretical model of the common min compression approach and our Bloom filters, showing that the original method of compressed PPDBs can never be better than min compression. We conclude with experimental results showing that Bloom filter compression of PPDBs provides superior performance to min compression in Rubik's cube.
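The compression scheme rests on a textbook Bloom filter: a set of states is stored as a bit array, membership queries never yield false negatives, and the rare false positives can only lower a PDB-based estimate, so admissibility is preserved. A generic Python sketch under our own naming (not the paper's implementation):

```python
import hashlib

class BloomFilter:
    """Plain bit-array Bloom filter: no false negatives, rare false
    positives, and a fraction of the memory of an explicit set."""

    def __init__(self, num_bits, num_hashes):
        self.bits = bytearray((num_bits + 7) // 8)
        self.num_bits = num_bits
        self.num_hashes = num_hashes

    def _indices(self, item):
        # Derive k independent bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for idx in self._indices(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def __contains__(self, item):
        return all(self.bits[idx // 8] >> (idx % 8) & 1
                   for idx in self._indices(item))
```

With one filter per PDB layer (distance-to-goal value), the heuristic returns the first layer whose filter reports membership; a false positive only makes the estimate smaller, never inadmissible.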
-
Silvan Sievers, Martin Wehrle and Malte Helmert.
Generalized Label Reduction for Merge-and-Shrink Heuristics.
In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI 2014), pp. 2358-2366. 2014.
Note: AAAI 2014 Outstanding Paper Award Honorable Mention.
(PDF) (slides; PDF) (code, scripts and data)
Label reduction is a technique for simplifying families of labeled transition systems by dropping distinctions between certain transition labels. While label reduction is critical to the efficient computation of merge-and-shrink heuristics, current theory only permits reducing labels in a limited number of cases. We generalize this theory so that labels can be reduced in every intermediate abstraction of a merge-and-shrink tree. This is particularly important for efficiently computing merge-and-shrink abstractions based on non-linear merge strategies. As a case study, we implement a non-linear merge strategy based on the original work on merge-and-shrink heuristics in model checking by Dräger et al.
-
Florian Pommerening, Gabriele Röger, Malte Helmert and Blai Bonet.
LP-based Heuristics for Cost-optimal Planning.
In Proceedings of the 24th International Conference on Automated Planning and Scheduling (ICAPS 2014), pp. 226-234. 2014.
(PDF) (slides; PDF) (poster; PDF)
Many heuristics for cost-optimal planning are based on linear programming. We cover several interesting heuristics of this type by a common framework that fixes the objective function of the linear program. Within the framework, constraints from different heuristics can be combined in one heuristic estimate which dominates the maximum of the component heuristics. Different heuristics of the framework can be compared on the basis of their constraints. With this new method of analysis, we show dominance of the recent LP-based state-equation heuristic over optimal cost partitioning on single-variable abstractions. We also show that the previously suggested extension of the state-equation heuristic to exploit safe variables cannot lead to an improved heuristic estimate. We experimentally evaluate the potential of the proposed framework on an extensive suite of benchmark tasks.
-
Jendrik Seipp and Malte Helmert.
Diverse and Additive Cartesian Abstraction Heuristics.
In Proceedings of the 24th International Conference on Automated Planning and Scheduling (ICAPS 2014), pp. 289-297. 2014.
(PDF) (slides; PDF) (recording)
We have recently shown how counterexample-guided abstraction refinement can be used to derive informative Cartesian abstraction heuristics for optimal classical planning. In this work we introduce two methods for producing diverse sets of heuristics within this framework, one based on goal facts, the other based on landmarks. In order to sum the heuristic estimates admissibly we present a novel way of finding cost partitionings for explicitly represented abstraction heuristics. We show that the resulting heuristics outperform other state-of-the-art abstraction heuristics on many benchmark domains.
-
Florian Pommerening, Gabriele Röger and Malte Helmert.
Getting the Most Out of Pattern Databases for Classical Planning.
In Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI 2013), pp. 2357-2364. 2013.
(PDF) (slides; PDF) (poster; PDF)
The iPDB procedure by Haslum et al. is the state-of-the-art method for computing additive abstraction heuristics for domain-independent planning. It performs a hill-climbing search in the space of pattern collections, combining information from multiple patterns in the so-called canonical heuristic.
We show how stronger heuristic estimates can be obtained through linear programming. An experimental evaluation demonstrates the strength of the new technique on the IPC benchmark suite.
-
Sergiy Bogomolov, Alexandre Donzé, Goran Frehse, Radu Grosu, Taylor T. Johnson, Hamed Ladan, Andreas Podelski and Martin Wehrle.
Abstraction-Based Guided Search for Hybrid Systems.
In Proceedings of the 20th International SPIN Symposium on Model Checking of Software (SPIN 2013), pp. 117-134. 2013.
(PDF)
Hybrid systems represent an important and powerful formalism for modeling real-world applications such as embedded systems. A verification tool like SpaceEx is based on the exploration of a symbolic search space (the region space). As a verification tool, it is typically optimized towards proving the absence of errors. In some settings, e.g., when the verification tool is employed in a feedback-directed design cycle, one would like to have the option to call a version that is optimized towards finding an error path in the region space. A recent approach in this direction is based on guided search. Guided search relies on a cost function that indicates which states are promising to be explored, and preferably explores more promising states first. In this paper, an abstraction-based cost function based on pattern databases for guiding the reachability analysis is proposed. For this purpose, a suitable abstraction technique that exploits the flexible granularity of modern reachability analysis algorithms is introduced. The new cost function is an effective extension of pattern database approaches that have been successfully applied in other areas. The approach has been implemented in the SpaceEx model checker. The evaluation shows its practical potential.
-
Jendrik Seipp and Malte Helmert.
Counterexample-guided Cartesian Abstraction Refinement.
In Proceedings of the 23rd International Conference on Automated Planning and Scheduling (ICAPS 2013), pp. 347-351. 2013.
(PDF) (slides; PDF) (recording)
Counterexample-guided abstraction refinement (CEGAR) is a methodological framework for incrementally computing abstractions of transition systems.
We propose a CEGAR algorithm for computing abstraction heuristics for optimal classical planning. Starting from a coarse abstraction of the planning task, we iteratively compute an optimal abstract solution, check if and why it fails for the concrete planning task and refine the abstraction so that the same failure cannot occur in future iterations.
A key ingredient of our approach is a novel class of abstractions for classical planning tasks that admits efficient and very fine-grained refinement. Our implementation performs tens of thousands of refinement steps in a few minutes and produces heuristics that are often significantly more accurate than pattern database heuristics of the same size.
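The refinement loop described in this abstract can be illustrated on a tiny explicit graph. The sketch below uses a plain partition of concrete states rather than the paper's Cartesian abstractions, and the example graph and all names are invented for illustration:

```python
from collections import deque

# Didactic CEGAR loop on an explicit graph (states 0-4, init 0, goal 4).
# The abstraction is a partition of the states; this is NOT the paper's
# Cartesian-abstraction algorithm.
SUCC = {0: [1, 2], 1: [3], 2: [4], 3: [], 4: []}
INIT, GOALS = 0, {4}

def abstract_plan(partition):
    # Breadth-first search in the quotient graph; returns a block sequence.
    block_of = {s: b for b in partition for s in b}
    start = block_of[INIT]
    parent = {start: None}
    queue = deque([start])
    while queue:
        b = queue.popleft()
        if b & GOALS:
            path = []
            while b is not None:
                path.append(b)
                b = parent[b]
            return path[::-1]
        for s in b:
            for t in SUCC[s]:
                if block_of[t] not in parent:
                    parent[block_of[t]] = b
                    queue.append(block_of[t])
    return None  # abstract task unsolvable (cannot happen in this example)

def find_flaw(plan):
    # Follow the abstract plan with the concrete states reachable along it;
    # return (block, subset to split off) when the plan cannot be followed.
    reachable = {INIT}
    for i in range(len(plan) - 1):
        step = {t for s in reachable for t in SUCC[s] if t in plan[i + 1]}
        if not step:
            return plan[i], reachable          # cannot enter the next block
        reachable = step
    if reachable & GOALS:
        return None                            # concrete plan found
    return plan[-1], plan[-1] & GOALS          # goal flaw

def cegar():
    partition = [frozenset(SUCC)]              # coarsest abstraction: one block
    while True:
        flaw = find_flaw(abstract_plan(partition))
        if flaw is None:
            return partition
        block, keep = flaw
        partition.remove(block)
        partition += [frozenset(keep), frozenset(block - keep)]

print(sorted(len(b) for b in cegar()))  # [1, 1, 3]
```

Each iteration either confirms that the abstract plan works on the concrete task or splits exactly one abstract state so that the same failure cannot recur, mirroring the loop described in the abstract.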
-
Silvan Sievers, Manuela Ortlieb and Malte Helmert.
Efficient Implementation of Pattern Database Heuristics for Classical Planning.
In Proceedings of the Fifth Annual Symposium on Combinatorial Search (SoCS 2012), pp. 105-111. 2012.
(PDF) (poster; PDF) (code and scripts)
Despite their general success in the heuristic search community, pattern database (PDB) heuristics have, until very recently, not been used by the most successful classical planning systems.
We describe a new efficient implementation of pattern database heuristics within the Fast Downward planner. A planning system using this implementation is competitive with the state of the art in optimal planning, significantly improving over results from the previous best PDB heuristic implementation in planning.
Automated Reformulation and Pruning in Factored State Spaces: ARAP (funded by the Swiss National Science Foundation)
This project continues the project SPOSSS by widening its scope to a more general class of pruning and problem transformation methods. In particular, ARAP considers the following research objectives for optimal planning:
1. Advance the theory and practice of symmetry elimination. While related work focuses on pruning symmetric states within a search algorithm, ARAP applies the theory of symmetry elimination to other aspects of state-space search, such as the automatic computation of abstractions.
2. Develop general and efficient methods for dominance pruning in transition systems. While SPOSSS focuses on equivalence pruning, ARAP investigates more general methods that can also detect and exploit situations where transition sequences are strictly less useful than others.
3. Develop problem reformulation methods that automatically enrich the description of a transition system to make it more amenable to the distance estimation techniques used in modern planning algorithms, and to abstract away parts of the system that can be solved without search.
Contact: Malte Helmert and Martin Wehrle
Relevant publications:
-
Martin Wehrle, Silvan Sievers and Malte Helmert.
Graph-Based Factorization of Classical Planning Problems.
In Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI 2016), pp. 3286-3292. 2016.
(PDF) (slides; PDF) (poster; PDF) (code, scripts and data)
In domain-independent planning, dependencies of operators and variables often prevent the effective application of planning techniques that rely on "loosely coupled" problems (like factored planning or partial order reduction). In this paper, we propose a generic approach for
factorizing a classical planning problem into an equivalent problem with fewer operator and variable dependencies. Our approach is based on variable factorization, which can be reduced to the well-studied problem of graph factorization. While the state spaces of the original and the factorized problems are isomorphic, the factorization offers the potential to exponentially reduce the complexity of planning techniques like factored planning and partial order reduction.
-
Daniel Gnad, Martin Wehrle and Jörg Hoffmann.
Decoupled Strong Stubborn Sets.
In Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI 2016), pp. 3110-3116. 2016.
Note: A non-archival version of this paper has also been presented at the HSDIP workshop at ICAPS 2016.
(PDF)
Recent work has introduced fork-decoupled search, addressing classical planning problems where a single center component provides preconditions for several leaf components. Given a fixed center path πC, the leaf moves compliant with πC can then be scheduled independently for each leaf. Fork-decoupled search thus searches over center paths only, maintaining the compliant paths for each leaf separately. This can yield dramatic benefits. It is empirically complementary to partial order reduction via strong stubborn sets, in that each method yields its strongest reductions in different benchmarks. Here we show that the two methods can be combined, in the form of strong stubborn sets for fork-decoupled search. This can yield exponential advantages relative to both methods. Empirically, the combination reliably inherits the best of its components, and often outperforms both.
-
Dominik Winterer, Martin Wehrle and Michael Katz.
Structural Symmetries for Fully Observable Nondeterministic Planning.
In Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI 2016), pp. 3293-3299. 2016.
(PDF)
Symmetry reduction has significantly contributed to the success of classical planning as heuristic search. However, it is an open question if symmetry reduction techniques can be lifted to fully observable nondeterministic (FOND) planning. We generalize the concepts of structural symmetries and symmetry reduction to FOND planning and specifically to the LAO* algorithm. Our base implementation of LAO* in the Fast Downward planner is competitive with the LAO*-based FOND planner myND. Our experiments further show that symmetry reduction can yield strong performance gains compared to our base implementation of LAO*.
-
Yusra Alkhazraji and Martin Wehrle.
Sleep Sets Meet Duplicate Elimination.
In Proceedings of the 9th Annual Symposium on Combinatorial Search (SoCS 2016), pp. 2-9. 2016.
(PDF)
The sleep sets technique is a path-dependent pruning method for state space search. In the past, the combination of sleep sets with graph search algorithms that perform duplicate elimination has often proven to be error-prone. In this paper, we provide the theoretical basis for the integration of sleep sets with common search algorithms in AI that perform duplicate elimination. Specifically, we investigate approaches to safely integrate sleep sets with optimal (best-first) search algorithms. Based on this theory, we provide an initial step towards integrating sleep sets within A* and additional state pruning techniques like strong stubborn sets. Our experiments show slight, yet consistent improvements in the number of generated search nodes across a large number of standard domains from the international planning competitions.
-
Silvan Sievers, Martin Wehrle, Malte Helmert and Michael Katz.
An Empirical Case Study on Symmetry Handling in Cost-Optimal Planning as Heuristic Search.
In Proceedings of the 38th Annual German Conference on Artificial Intelligence (KI 2015), pp. 166-180. 2015.
(PDF) (slides; PDF) (poster; PDF) (code, scripts and data)
Symmetries provide the basis for well-established approaches to tackle the state explosion problem in state space search and in AI planning. However, although by now there are various symmetry-based techniques available, these techniques have not yet been empirically evaluated and compared to each other in a common setting. In particular, it is unclear which of them should preferably be applied, and whether there are techniques with stronger performance than others. In this paper, we shed light on this issue by providing an empirical case study. We combine and evaluate several symmetry-based techniques for cost-optimal planning as heuristic search. For our evaluation, we use state-of-the-art abstraction heuristics on a large set of benchmarks from the international planning competitions.
-
Martin Wehrle, Malte Helmert, Alexander Shleyfman and Michael Katz.
Integrating Partial Order Reduction and Symmetry Elimination for Cost-Optimal Classical Planning.
In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI 2015), pp. 1712-1718. 2015.
(PDF) (technical report; PDF)
Pruning techniques based on partial order reduction and symmetry elimination have recently found increasing attention for optimal planning. Although these techniques appear to be rather different, they base their pruning decisions on similar ideas from a high-level perspective. In this paper, we propose safe integrations of partial order reduction and symmetry elimination for cost-optimal classical planning. We show that previously proposed symmetry-based search algorithms can safely be applied with strong stubborn sets. In addition, we derive the notion of symmetrical strong stubborn sets as a more tightly integrated concept. Our experiments show the potential of our approaches.
-
Sergiy Bogomolov, Daniele Magazzeni, Stefano Minopoli and Martin Wehrle.
PDDL+ Planning with Hybrid Automata: Foundations of Translating Must Behavior.
In Proceedings of the 25th International Conference on Automated Planning and Scheduling (ICAPS 2015), pp. 42-46. 2015.
(PDF)
Planning in hybrid domains poses a special challenge due to the involved mixed discrete-continuous dynamics. A recent solving approach for such domains is based on applying model checking techniques on a translation of PDDL+ planning problems to hybrid automata. However, the proposed translation is limited because must behavior is only over-approximated, and hence, processes and events are not reflected exactly. In this paper, we present the theoretical foundation of an exact PDDL+ translation. We propose a schema to convert a hybrid automaton with must transitions into an equivalent hybrid automaton featuring only may transitions.
-
Florian Pommerening and Malte Helmert.
A Normal Form for Classical Planning Tasks.
In Proceedings of the 25th International Conference on Automated Planning and Scheduling (ICAPS 2015), pp. 188-192. 2015.
(PDF) (slides; PDF)
We describe transition normal form (TNF) for classical planning tasks, where there is a unique goal state and variables occur in an operator precondition iff they appear in the effect. Tasks can be efficiently converted to TNF, all common planning heuristics are invariant under the transformation, and tasks in normal form are easier to study theoretically.
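The two TNF conditions stated above can be checked mechanically. Below is a toy sketch, assuming a task encoded as variables, a list of (precondition, effect) dicts, and a goal dict; the logistics-flavored names are invented for this illustration:

```python
# Toy checker for the two TNF conditions: a unique goal state, and each
# operator mentioning a variable in its precondition iff in its effect.
# Task encoding and names are illustrative, not from the paper.

def is_tnf(variables, operators, goal):
    # Unique goal state: the goal must assign a value to every variable.
    if set(goal) != set(variables):
        return False
    # A variable occurs in an operator's precondition iff in its effect.
    return all(set(pre) == set(eff) for pre, eff in operators)

variables = ["truck", "package"]
ops = [
    ({"truck": "A"}, {"truck": "B"}),  # drive A -> B
    ({"truck": "B", "package": "B"}, {"truck": "B", "package": "T"}),  # load
]
print(is_tnf(variables, ops, {"truck": "B", "package": "T"}))  # True
print(is_tnf(variables, ops, {"truck": "B"}))  # False: goal state not unique
```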
-
Dominik Winterer, Robert Mattmüller and Martin Wehrle.
Stubborn Sets for Fully Observable Nondeterministic Planning.
In Proceedings of the ICAPS-2015 Workshop on Model Checking and Automated Planning (MOCHAP), pp. 6-12. 2015.
(PDF)
The stubborn set method is a state-space reduction technique, originally introduced in model checking and then transferred to classical planning. It was shown that stubborn sets significantly improve the performance of optimal deterministic planners by considering only a subset of applicable operators in a state. Fully observable nondeterministic planning (FOND) extends the formalism of classical planning by nondeterministic operators. We show that stubborn sets are also beneficial for FOND problems. We introduce nondeterministic stubborn sets, stubborn sets which preserve strong cyclic plans. We follow two approaches: Fast Incremental Planning with stubborn sets from classical planning and LAO* search with nondeterministic stubborn sets. Our experiments show that both approaches increase coverage and decrease node generations when compared to their respective baselines.
-
Silvan Sievers, Martin Wehrle, Malte Helmert, Alexander Shleyfman and Michael Katz.
Factored Symmetries for Merge-and-Shrink Abstractions.
In Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI 2015), pp. 3378-3385. 2015.
(PDF) (poster; PDF) (code, scripts and data)
Merge-and-shrink heuristics crucially rely on effective reduction techniques, such as bisimulation-based shrinking, to avoid the combinatorial explosion of abstractions. We propose the concept of factored symmetries for merge-and-shrink abstractions based on the established concept of symmetry reduction for state-space search. We investigate under which conditions factored symmetry reduction yields perfect heuristics and discuss the relationship to bisimulation. We also devise practical merging strategies based on this concept and experimentally validate their utility.
Beyond Distance Estimates: A New Theory of Heuristics for State-Space Search: BDE (funded by the European Research Council)
Many problems in computer science can be cast as state-space search, where the objective is to find a path from an initial state to a goal state in a directed graph called a state space. State-space search is challenging due to the state explosion problem, also known as the curse of dimensionality: interesting state spaces are often astronomically large, defying brute-force exploration.
State-space search has been a core research problem in Artificial Intelligence since its early days and is as alive as ever. Every year, a substantial fraction of research published at the ICAPS and SoCS conferences is concerned with state-space search, and the topic is very active at general AI conferences such as IJCAI and AAAI.
Algorithms in the A* family, dating back to 1968, are still the go-to approach for state-space search. A* is a graph search algorithm whose only “intelligence” stems from a so-called heuristic function, which estimates the distance from a state to the nearest goal state. The efficiency of A* depends on the accuracy of this estimate, and decades of research have pushed the envelope in devising increasingly accurate estimates. In this project, we question the "A* + distance estimator" paradigm and explore three new directions that go beyond the classical approach:
- We propose a new paradigm of declarative heuristics, where heuristic information is not represented as distance estimates, but as properties of solutions amenable to introspection and general reasoning.
- We suggest moving the burden of creativity away from the human expert by casting heuristic design as a meta-optimization problem that can be solved automatically.
- We propose abandoning the idea of exploring sequential paths in state spaces, instead transforming state-space search into combinatorial optimization problems with no explicit sequencing aspect. We argue that the curse of sequentiality is as bad as the curse of dimensionality and must be addressed head-on.
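The "A* + distance estimator" paradigm that this project questions can be sketched in a few lines. The grid task, unit action costs, and Manhattan-distance heuristic below are illustrative assumptions, not part of the project's code:

```python
import heapq

# Minimal A* on a 4-connected grid with unit move costs. The heuristic is
# the Manhattan distance to the goal, which never overestimates here, so
# the returned cost is optimal.

def astar(start, goal, walls, width, height):
    def h(pos):  # admissible distance estimate
        return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

    open_list = [(h(start), 0, start)]  # entries: (f = g + h, g, state)
    best_g = {start: 0}
    while open_list:
        f, g, (x, y) = heapq.heappop(open_list)
        if (x, y) == goal:
            return g                    # cost of a cheapest path
        if g > best_g.get((x, y), float("inf")):
            continue                    # stale queue entry
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in walls:
                ng = g + 1
                if ng < best_g.get((nx, ny), float("inf")):
                    best_g[(nx, ny)] = ng
                    heapq.heappush(open_list, (ng + h((nx, ny)), ng, (nx, ny)))
    return None                         # goal unreachable

print(astar((0, 0), (3, 3), {(1, 1), (2, 1), (3, 1)}, 4, 4))  # 6
```

The efficiency of this loop hinges entirely on how accurate `h` is, which is why the classical line of research focuses on sharpening distance estimates, and why the three directions above depart from that paradigm.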
Contact: Malte Helmert and Florian Pommerening
Relevant publications:
-
Simon Dold and Malte Helmert.
Higher-Dimensional Potential Heuristics: Lower Bound Criterion and Connection to Correlation Complexity.
In Proceedings of the 34th International Conference on Automated Planning and Scheduling (ICAPS 2024), pp. 151-161. 2024.
(PDF) (slides; PDF) (poster; PDF)
Correlation complexity is a measure of how hard a planning task is. The work introducing it provides sufficient criteria for detecting a correlation complexity of 2 and gives an example of a planning task with correlation complexity 3. In our work, we introduce a criterion for detecting arbitrary correlation complexity and extend the mentioned example to show, with the new criterion, that planning tasks of arbitrary correlation complexity exist.
-
Florian Pommerening, Clemens Büchner and Thomas Keller.
Transition Landmarks from Abstraction Cuts.
In Proceedings of the 34th International Conference on Automated Planning and Scheduling (ICAPS 2024), pp. 445-454. 2024.
(PDF) (slides; PDF) (poster; PDF) (code, scripts and data)
We introduce transition-counting constraints as a principled tool to formalize constraints that must hold in every solution of a transition system. We then show how to obtain transition landmark constraints from abstraction cuts. Transition landmarks dominate operator landmarks in theory but require solving a linear program that is prohibitively large in practice. We compare different constraints that project away transition-counting variables and then further relax the constraint. For one important special case, we provide a lossless projection. We finally discuss efficient data structures to derive cuts from abstractions and store them in a way that avoids repeated computation in every state. We compare the resulting heuristics both theoretically and on benchmarks from the international planning competition.
-
Silvan Sievers, Thomas Keller and Gabriele Röger.
Merging or Computing Saturated Cost Partitionings? A Merge Strategy for the Merge-and-Shrink Framework.
In Proceedings of the 34th International Conference on Automated Planning and Scheduling (ICAPS 2024), pp. 541-545. 2024.
(PDF) (code, scripts and data) (slides; PDF)
The merge-and-shrink framework is a powerful tool for computing abstraction heuristics for optimal classical planning. Merging is one of its name-giving transformations. It entails computing the product of two factors of a factored transition system. To decide which two factors to merge, the framework uses a merge strategy. While there exist many merge strategies, it is generally unclear what constitutes a strong merge strategy, and a previous analysis shows that there is still lots of room for improvement with existing merge strategies. In this paper, we devise a new scoring function for score-based merge strategies based on answering the question whether merging two factors has any benefits over computing saturated cost partitioning heuristics over the factors instead. Our experimental evaluation shows that our new merge strategy achieves state-of-the-art performance on IPC benchmarks.
-
Simon Dold and Malte Helmert.
Novelty vs. Potential Heuristics: A Comparison of Hardness Measures for Satisficing Planning.
In Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI 2024), pp. 20692-20699. 2024.
(PDF) (slides; PDF) (poster; PDF)
Classical planning considers a given task and searches for a plan to solve it. Some tasks are harder to solve than others. We can measure the “hardness” of a task with the novelty width and the correlation complexity. In this work, we compare these measures. Additionally, we introduce the river measure, a new measure that is based on potential heuristics and therefore similar to the correlation complexity, but also comparable to the novelty width. We show that the river measure is upper-bounded by the correlation complexity and by the novelty width plus one. Furthermore, we show that any planning task can be converted, with a polynomial blowup of the task size, into one for which a heuristic of dimension 2 exists that gives rise to backtrack-free search.
-
Thorsten Klößner, Álvaro Torralba, Marcel Steinmetz and Silvan Sievers.
A Theory of Merge-and-Shrink for Stochastic Shortest Path Problems.
In Proceedings of the 33rd International Conference on Automated Planning and Scheduling (ICAPS 2023), pp. 203-211. 2023.
(PDF) (technical report; PDF) (slides; PDF)
The merge-and-shrink framework is a powerful tool to construct state space abstractions based on factored representations. One of its core applications in classical planning is the construction of admissible abstraction heuristics. In this paper, we develop a compositional theory of merge-and-shrink in the context of probabilistic planning, focusing on stochastic shortest path problems (SSPs). As the basis for this development, we contribute a novel factored state space model for SSPs. We show how general transformations, including abstractions, can be formulated on this model to derive admissible and/or perfect heuristics. To formalize the merge-and-shrink framework for SSPs, we transfer the fundamental merge-and-shrink transformations from the classical setting: shrinking, merging, and label reduction. We analyze the formal properties of these transformations in detail and show how the conditions under which shrinking and label reduction lead to perfect heuristics can be extended to the SSP setting.
-
Daniel Gnad, Silvan Sievers and Álvaro Torralba.
Efficient Evaluation of Large Abstractions for Decoupled Search: Merge-and-Shrink and Symbolic Pattern Databases.
In Proceedings of the 33rd International Conference on Automated Planning and Scheduling (ICAPS 2023), pp. 138-147. 2023.
(PDF) (Supplementary Material; PDF) (slides; PDF) (poster; PDF) (code, scripts and data)
Abstraction heuristics are a state-of-the-art technique to solve classical planning problems optimally. A common approach is to precompute many small abstractions and combine them admissibly using cost partitioning. Recent work has shown that this approach does not work out well when using such heuristics for decoupled state space search, where search nodes represent potentially large sets of states. This is due to the fact that admissibly combining the estimates of several heuristics without sacrificing accuracy is NP-hard for decoupled states. In this work we propose to use a single large abstraction instead. We focus on merge-and-shrink and symbolic pattern database heuristics, which are designed to produce such abstractions. For these heuristics, we prove that the evaluation of decoupled states is NP-hard in general, but we also identify conditions under which it is polynomial. We introduce algorithms for both the general and the polynomial case. Our experimental evaluation shows that single large abstraction heuristics lead to strong performance when the heuristic evaluation is polynomial.
-
Raphael Kreft, Clemens Büchner, Silvan Sievers and Malte Helmert.
Computing Domain Abstractions for Optimal Classical Planning with Counterexample-Guided Abstraction Refinement.
In Proceedings of the 33rd International Conference on Automated Planning and Scheduling (ICAPS 2023), pp. 221-226. 2023.
(PDF) (slides; PDF) (poster; PDF) (code)
Abstraction heuristics are the state of the art in optimal classical planning as heuristic search. A popular method for computing abstractions is the counterexample-guided abstraction refinement (CEGAR) principle, which has successfully been used for projections, the abstractions underlying pattern databases, and for Cartesian abstractions. While projections are simple and fast to compute, Cartesian abstractions subsume projections and hence allow more fine-grained abstractions, though at the expense of efficiency. Domain abstractions are a third class of abstractions between projections and Cartesian abstractions in terms of generality. Yet, to the best of our knowledge, they have only briefly been considered in the planning literature and have not yet been used for computing heuristics. We aim to close this gap and compute domain abstractions using the CEGAR principle. Our empirical results show that domain abstractions compare favorably against projections and Cartesian abstractions.
-
Clemens Büchner, Thomas Keller, Salomé Eriksson and Malte Helmert.
Landmark Progression in Heuristic Search.
In Proceedings of the 33rd International Conference on Automated Planning and Scheduling (ICAPS 2023), pp. 70-79. 2023.
(PDF) (slides; PDF) (poster; PDF) (code)
The computation of high-quality landmarks and orderings for heuristic state-space search is often too expensive to perform in every generated state. Computing information only for the initial state and progressing it from every state to its successors is a successful alternative, exploited for example in classical planning by the LAMA planner. We propose a general framework for using landmarks in any kind of best-first search. Its core component, the progression function, uses orderings and search history to determine which landmarks must still be achieved. We show that the progression function used in LAMA infers invalid information in the presence of reasonable orderings. We define a sound progression function that allows exploiting reasonable orderings in cost-optimal planning, and show empirically that our new progression function is beneficial both in satisficing and optimal planning.
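The core idea of progressing landmark information from a state to its successors can be sketched as follows. This is NOT LAMA's or the paper's exact progression function (it ignores orderings and landmarks that become required again, the very subtleties the paper addresses); the facts and states are invented:

```python
# Naive progression of landmark information along a search path: a landmark
# here is simply a fact that every plan must make true at some point.

def progress(unachieved, state):
    # A landmark still has to be achieved if it was unachieved in the
    # parent and is not made true by the current state.
    return {lm for lm in unachieved if lm not in state}

landmarks = {"at_B", "have_key", "door_open"}       # facts every plan must reach
path = [{"at_A"}, {"at_B"}, {"at_B", "have_key"}]   # states along a search path

un = landmarks
for state in path:
    un = progress(un, state)
print(sorted(un))  # ['door_open'] still has to be achieved
```

Progressing a compact set like this is far cheaper than recomputing landmarks in every state, which is the trade-off the abstract describes; the paper's contribution is making such progression sound in the presence of reasonable orderings.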
-
Patrick Ferber, Liat Cohen, Jendrik Seipp and Thomas Keller.
Learning and Exploiting Progress States in Greedy Best-First Search.
In Proceedings of the 31st International Joint Conference on Artificial Intelligence (IJCAI 2022), pp. 4740-4746. 2022.
(PDF) (slides; PDF) (poster; PDF) (code and data)
Previous work introduced the concept of progress states. Once greedy best-first search has expanded a progress state, it only expands states with lower heuristic values. Current methods identify progress states for a single task, and only after a solution for the task has been found. We introduce a novel approach that learns, for a classical planning domain, a description logic formula characterizing all of its progress states. This formula can be used during future searches. We present a first use case which exploits the learned formulas: during search, we break ties in favor of progress states. This often significantly reduces the search effort.
-
Daniel Heller, Patrick Ferber, Julian Bitterwolf, Matthias Hein and Jörg Hoffmann.
Neural Network Heuristic Functions: Taking Confidence into Account.
In Proceedings of the 15th International Symposium on Combinatorial Search (SoCS 2022), pp. 223-228. 2022.
(PDF) (slides; PDF) (code)
Neural networks (NN) are increasingly investigated in AI Planning, and are used successfully to learn heuristic functions. NNs commonly not only predict a value, but also output a confidence in this prediction. From the perspective of heuristic search with NN heuristics, it is a natural idea to take this into account, e.g. falling back to a standard heuristic where confidence is low. We contribute an empirical study of this idea. We design search methods which prune nodes, or switch between search queues, based on the confidence of NNs. We furthermore explore the possibility of out-of-distribution (OOD) training, which tries to reduce the overconfidence of NNs on inputs different to the training distribution. In experiments on IPC benchmarks, we find that our search methods improve coverage over standard methods, and that OOD training has the desired effect in terms of prediction accuracy and confidence, though its impact on search seems marginal.
-
Silvan Sievers, Daniel Gnad and Álvaro Torralba.
Additive Pattern Databases for Decoupled Search.
In Proceedings of the 15th International Symposium on Combinatorial Search (SoCS 2022), pp. 180-189. 2022.
Note: SoCS 2022 Best Paper Award.
(PDF) (Additional Material; PDF) (slides; PDF) (code, scripts and data)
Abstraction heuristics are the state of the art in optimal classical planning as heuristic search. Despite their success for explicit-state search, though, abstraction heuristics are not available for decoupled state-space search, an orthogonal reduction technique that can lead to exponential savings by decomposing planning tasks. In this paper, we show how to compute pattern database (PDB) heuristics for decoupled states. The main challenge lies in how to additively employ multiple patterns, which is crucial for strong search guidance of the heuristics. We show that in the general case, for arbitrary collections of PDBs, computing the heuristic for a decoupled state is exponential in the number of leaf components of decoupled search. We derive several variants of decoupled PDB heuristics that allow additive combination of PDBs while avoiding this blow-up, and evaluate them empirically.
-
Clemens Büchner, Patrick Ferber, Jendrik Seipp and Malte Helmert.
A Comparison of Abstraction Heuristics for Rubik's Cube.
In Proceedings of the ICAPS 2022 Workshop on Heuristics and Search for Domain-independent Planning (HSDIP 2022). 2022.
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (recording; MP4) (code)
Since its invention in 1974, the Rubik's Cube puzzle has fascinated people of all ages. Its rules are simple: the player gets a scrambled cube and rotates the six faces until each face contains only stickers of one color. Nevertheless, finding a short sequence of rotations to solve the cube is hard. We present the first model of Rubik's Cube for general problem solvers. To obtain a concise model, we require conditional effects. Furthermore, we extend counterexample-guided Cartesian abstraction refinement (CEGAR) to support factored effect tasks, a class of planning tasks with a specific kind of conditional effects which includes Rubik's Cube. Finally, we evaluate how newer types of abstraction heuristics compare against pattern database (PDB) heuristics, the state of the art for solving Rubik's Cube. We find that PDBs still outperform the more general Cartesian and merge-and-shrink abstractions. However, in contrast to PDBs, Cartesian abstractions yield perfect heuristics up to a certain problem difficulty. These findings raise interesting questions for future research.
-
Thorsten Klößner, Florian Pommerening, Thomas Keller and Gabriele Röger.
Cost Partitioning Heuristics for Stochastic Shortest Path Problems.
In Proceedings of the 32nd International Conference on Automated Planning and Scheduling (ICAPS 2022), pp. 193-202. 2022.
(Show abstract) (Hide abstract) (PDF) (technical report; PDF)
In classical planning, cost partitioning is a powerful method that allows combining multiple admissible heuristics while retaining an admissible bound. In this paper, we extend the theory of cost partitioning to probabilistic planning by generalizing from deterministic transition systems to stochastic shortest path problems (SSPs). We show that fundamental results related to cost partitioning still hold in our extended theory. We also investigate how to optimally partition costs for a large class of abstraction heuristics for SSPs. Lastly, we analyze occupation measure heuristics for SSPs as well as the theory of approximate linear programming for reward-oriented Markov decision processes. All of these fit our framework and can be seen as cost-partitioned heuristics.
-
Remo Christen, Salomé Eriksson, Florian Pommerening and Malte Helmert.
Detecting Unsolvability Based on Separating Functions.
In Proceedings of the 32nd International Conference on Automated Planning and Scheduling (ICAPS 2022), pp. 44-52. 2022.
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (poster; PDF) (code, scripts, and data)
While the unsolvability IPC sparked a multitude of planners proficient in detecting unsolvable planning tasks, there are gaps where concise unsolvability arguments are known but no existing planner can capture them without prohibitive computational effort. One such example is the sliding tiles puzzle, where solvability can be decided in polynomial time with a parity argument. We introduce separating functions, which can prove that one state is unreachable from another, and show under what conditions a potential function over any nonzero ring is a separating function. We prove that we can compactly encode these conditions for potential functions over features that are pairs, and show in which cases we can efficiently synthesize functions satisfying these conditions. We experimentally evaluate a domain-independent algorithm that successfully synthesizes such separating functions from PDDL representations of the sliding tiles puzzle, the Lights Out puzzle, and Peg Solitaire.
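The parity argument for the sliding tiles puzzle mentioned in the abstract can be made concrete. The following sketch decides solvability of the 3x3 puzzle (blank encoded as 0, goal 1..8 with the blank last) by counting inversions; this is the classical hand-crafted invariant that the paper's separating functions capture domain-independently, not the paper's synthesis algorithm:

```python
def eight_puzzle_solvable(tiles):
    """A 3x3 sliding-tile state is solvable with respect to the goal
    1..8 (blank last) iff the tile permutation has an even number of
    inversions; every legal blank move preserves this parity."""
    seq = [t for t in tiles if t != 0]  # ignore the blank
    inversions = sum(
        1
        for i in range(len(seq))
        for j in range(i + 1, len(seq))
        if seq[i] > seq[j]
    )
    return inversions % 2 == 0

print(eight_puzzle_solvable([1, 2, 3, 4, 5, 6, 7, 8, 0]))  # True
print(eight_puzzle_solvable([2, 1, 3, 4, 5, 6, 7, 8, 0]))  # False
```

The inversion count modulo 2 is exactly a separating function in the sense above: it takes different values on the two reachability classes of the state space.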
-
Malte Helmert, Silvan Sievers, Alexander Rovner and Augusto B. Corrêa.
On the Complexity of Heuristic Synthesis for Satisficing Classical Planning: Potential Heuristics and Beyond.
In Proceedings of the 32nd International Conference on Automated Planning and Scheduling (ICAPS 2022), pp. 124-133. 2022.
Erratum: In the published version of the paper, the proof of Theorem 7 mistakenly says "DDA" instead of "SDDA" in some places. This is fixed in the version provided here.
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (poster; PDF)
Potential functions are a general class of heuristics for classical planning. For satisficing planning, previous work suggested the use of descending and dead-end avoiding (DDA) potential heuristics, which solve planning tasks by backtrack-free search. In this work we study the complexity of devising DDA potential heuristics for classical planning tasks. We show that verifying or synthesizing DDA potential heuristics is PSPACE-complete, but suitable modifications of the DDA properties reduce the complexity of these problems to the first and second level of the polynomial hierarchy. We also discuss the implications of our results for other forms of heuristic synthesis in classical planning.
-
Silvan Sievers, Daniel Gnad and Álvaro Torralba.
Additive Pattern Databases for Decoupled Search.
In Proceedings of the ICAPS 2022 Workshop on Heuristics and Search for Domain-independent Planning (HSDIP 2022). 2022.
Note: This paper is superseded by the SoCS 2022 paper by the same name.
(Show abstract) (Hide abstract) (PDF) (Additional Material; PDF) (slides; PDF) (talk; MP4) (code, scripts and data)
Abstraction heuristics are the state of the art in optimal classical planning as heuristic search. Despite their success for explicit-state search, though, abstraction heuristics are not available for decoupled state-space search, an orthogonal reduction technique that can lead to exponential savings by decomposing planning tasks. In this paper, we show how to compute pattern database (PDB) heuristics for decoupled states. The main challenge lies in how to additively employ multiple patterns, which is crucial for strong search guidance of the heuristics. We show that in the general case, for arbitrary collections of PDBs, computing the heuristic for a decoupled state is exponential in the number of leaf components of decoupled search. We derive several variants of decoupled PDB heuristics that allow additively combining PDBs while avoiding this blow-up, and evaluate them empirically.
-
Silvan Sievers and Martin Wehrle.
On Weak Stubborn Sets in Classical Planning.
In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI 2021), pp. 4167-4174. 2021.
(Show abstract) (Hide abstract) (PDF) (technical report; PDF) (slides (short); PDF) (slides (long); PDF) (poster; PDF) (2m talk) (15m talk) (code, scripts and data)
Stubborn sets are a pruning technique for state-space search which is well established in optimal classical planning. In this paper, we show that the weak stubborn sets introduced in recent work in planning are actually not weak stubborn sets in Valmari's original sense. Based on this finding, we introduce weak stubborn sets in the original sense for planning by providing a generalized definition analogous to generalized strong stubborn sets in previous work. We discuss the relationship of strong stubborn sets, weak stubborn sets, and the sets previously called weak stubborn sets, thus providing a further step towards an overall picture of the stubborn set approach in planning.
-
Silvan Sievers and Malte Helmert.
Merge-and-Shrink: A Compositional Theory of Transformations of Factored Transition Systems.
Journal of Artificial Intelligence Research 71, pp. 781-883. 2021.
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (poster; PDF) (2m talk) (15m talk)
The merge-and-shrink framework has been introduced as a general approach for defining abstractions of large state spaces arising in domain-independent planning and related areas. The distinguishing characteristic of the merge-and-shrink approach is that it operates directly on the factored representation of state spaces, repeatedly modifying this representation through transformations such as shrinking (abstracting a factor of the representation), merging (combining two factors), label reduction (abstracting the way in which different factors interact), and pruning (removing states or transitions of a factor).
We provide a novel view of the merge-and-shrink framework as a "toolbox" or "algebra" of transformations on factored transition systems, with the construction of abstractions as only one possible application. For each transformation, we study desirable properties such as conservativeness (overapproximating the original transition system), inducedness (absence of spurious states and transitions), and refinability (reconstruction of paths in the original transition system from the transformed one). We provide the first complete characterizations of the conditions under which these desirable properties can be achieved. We also provide the first full formal account of factored mappings, the mechanism used within the merge-and-shrink framework to establish the relationship between states in the original and transformed factored transition system.
Unlike earlier attempts to develop a theory for merge-and-shrink, our approach is fully compositional: the properties of a sequence of transformations can be entirely understood by the properties of the individual transformations involved. This aspect is key to the use of merge-and-shrink as a general toolbox for transforming factored transition systems. New transformations can easily be added to our theory, with compositionality taking care of the seamless integration with the existing components. Similarly, new properties of transformations can be integrated into the theory by showing their compositionality and studying under which conditions they are satisfied by the building blocks of merge-and-shrink.
-
Dominik Drexler, Jendrik Seipp and David Speck.
Subset-Saturated Transition Cost Partitioning.
In Proceedings of the 31st International Conference on Automated Planning and Scheduling (ICAPS 2021), pp. 131-139. 2021.
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (poster; PDF) (code)
Cost partitioning admissibly combines the information from multiple heuristics for optimal state-space search. One of the strongest cost partitioning algorithms is saturated cost partitioning. It considers the heuristics in sequence and assigns to each heuristic the minimal fraction of the remaining costs that it needs for preserving all heuristic estimates. Saturated cost partitioning has recently been generalized in two directions: first, by allowing different costs for the transitions induced by the same operator, and second, by preserving the heuristic estimates for only a subset of states. In this work, we unify these two generalizations and show that the resulting subset-saturated transition cost partitioning algorithm usually yields stronger heuristics than the two generalizations by themselves.
-
Jendrik Seipp.
Online Saturated Cost Partitioning for Classical Planning.
In Proceedings of the 31st International Conference on Automated Planning and Scheduling (ICAPS 2021), pp. 317-321. 2021.
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (poster; PDF) (code)
Cost partitioning is a general method for admissibly summing up heuristic estimates for optimal state-space search. Most cost partitioning algorithms can optimize the resulting cost-partitioned heuristic for a specific state. Since computing a new cost-partitioned heuristic for each evaluated state is usually too expensive in practice, the strongest planners based on cost partitioning over abstraction heuristics precompute a set of cost-partitioned heuristics before the search and maximize over their estimates during the search. This makes state evaluations very fast, but since there is no better termination criterion than a time limit, it requires a long precomputation phase, even for the simplest planning tasks. A prototypical example of this is the Scorpion planner, which computes saturated cost partitionings over abstraction heuristics offline before the search. Using Scorpion as a case study, we show that by incrementally extending the set of cost-partitioned heuristics online during the search, we drastically speed up the planning process and often even solve more tasks.
-
Clemens Büchner, Thomas Keller and Malte Helmert.
Exploiting Cyclic Dependencies in Landmark Heuristics.
In Proceedings of the 31st International Conference on Automated Planning and Scheduling (ICAPS 2021), pp. 65-73. 2021.
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (poster; PDF) (code)
Landmarks of a planning task denote properties that must be satisfied by all plans. Existing landmark heuristics exploit that each landmark must be achieved at least once. However, if the orderings between the landmarks induce cyclic dependencies, one of the landmarks in each cycle must be achieved an additional time. We propose two novel heuristics for cost-optimal planning that consider cyclic dependencies between landmarks in addition to the cost for achieving all landmarks once. We show that our heuristics dominate the minimum hitting set solution over any set of landmarks, as well as h+ if all delete-relaxation landmarks are considered. An experimental evaluation on benchmarks from the International Planning Competition shows that exploiting cyclic dependencies can lead to improved heuristics.
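The minimum hitting set baseline mentioned in the abstract admits a compact, if exponential, illustration. In this sketch (an illustration only, not the paper's heuristics) each landmark is represented by the set of operators that can achieve it, and the heuristic value is the cost of the cheapest operator set that hits every landmark:

```python
from itertools import combinations

def hitting_set_heuristic(landmarks, op_costs):
    """Brute-force minimum-cost hitting set: every landmark (given as
    a set of achiever operators) must be hit by at least one chosen
    operator. Exponential in the number of operators."""
    ops = sorted({op for lm in landmarks for op in lm})
    best = float("inf")
    for r in range(len(ops) + 1):
        for subset in combinations(ops, r):
            chosen = set(subset)
            if all(chosen & lm for lm in landmarks):
                best = min(best, sum(op_costs[op] for op in chosen))
    return best

landmarks = [{"a", "b"}, {"b", "c"}]
op_costs = {"a": 1, "b": 3, "c": 1}
print(hitting_set_heuristic(landmarks, op_costs))  # 2, via {a, c}
```

The cycle-aware heuristics of the paper dominate this value because a hitting set only requires each landmark to be achieved once, while a cyclic dependency forces some landmark on the cycle to be achieved again.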
-
Florian Pommerening, Thomas Keller, Valentina Halasi, Jendrik Seipp, Silvan Sievers and Malte Helmert.
Dantzig-Wolfe Decomposition for Cost Partitioning.
In Proceedings of the 31st International Conference on Automated Planning and Scheduling (ICAPS 2021), pp. 271-280. 2021.
(Show abstract) (Hide abstract) (PDF) (technical report; PDF) (slides; PDF) (recording) (poster; PDF) (code)
Optimal cost partitioning can produce high quality heuristic estimates even from small abstractions. It can be computed with a linear program (LP), but the size of this LP often makes it impractical. Recent work used Lagrangian decomposition to speed up the computation. Here we use a different decomposition technique called Dantzig-Wolfe decomposition to tackle the problem. This gives new insights into optimal cost partitioning and has several advantages over Lagrangian decomposition: our method detects when a cost partition is optimal; it can deal with general cost functions; and it does not consider abstractions in the linear program that do not contribute to the heuristic value. We also show the advantage of the method empirically and investigate several improvements that are useful for all cost partitioning methods.
-
Clemens Büchner and Thomas Keller.
LM-BFS: A Framework for Landmarks in Planning.
In Proceedings of the ICAPS 2021 Workshop on Heuristics and Search for Domain-Independent Planning (HSDIP 2021). 2021.
Superseded by the ICAPS 2023 paper "Landmark Progression in Heuristic Search".
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (recording; MP4)
It is usually prohibitively expensive to recompute landmarks in every state of a heuristic search algorithm that bases its heuristic estimate on landmarks. Successful planning systems like LAMA compute landmarks just for the initial state instead and only keep track, for each state encountered during search, of which landmarks of the initial state are accepted or required again. The LM-A* algorithm describes a similar idea for cost-optimal heuristic search. However, it lacks support for reasonable orderings, which have recently been shown to be informative when cyclic dependencies between landmarks are exploited in the heuristic computation. We propose the novel LM-BFS framework for best-first heuristic search with landmark-based heuristics, which allows describing a variety of algorithms in terms of five components: landmark generation, landmark graph progression, landmark graph merging, landmark graph extension, and exploration strategy. We use LM-BFS to show that considering reasonable orderings in the landmark graph progression component leads to flawed landmark graphs in LAMA, and propose a novel landmark graph extension that captures the same information flawlessly. We observe empirically that considering reasonable orderings in the extension rather than the progression component has a positive effect both for LAMA and for cost-optimal planning.
-
Jendrik Seipp, Thomas Keller and Malte Helmert.
Saturated Post-hoc Optimization for Classical Planning.
In Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI 2021), pp. 11947-11953. 2021.
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (recording; MP4) (poster; PDF) (code)
Saturated cost partitioning and post-hoc optimization are two powerful cost partitioning algorithms for optimal classical planning. The main idea of saturated cost partitioning is to give each considered heuristic only the fraction of remaining operator costs that it needs to prove its estimates. We show how to apply this idea to post-hoc optimization and obtain a heuristic that dominates the original both in theory and on the IPC benchmarks.
-
Florian Pommerening, Gabriele Röger, Malte Helmert, Hadrien Cambazard, Louis-Martin Rousseau and Domenico Salvagnin.
Lagrangian Decomposition for Classical Planning (Extended Abstract).
In Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI 2020), pp. 4770-4774. 2020.
Note: This paper was invited for submission to the Best Papers From Sister Conferences Track, based on a paper that appeared in the International Conference on Automated Planning and Scheduling (ICAPS) 2019. When referring to this work, please cite the ICAPS 2019 paper instead of this version.
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (short recording; MP4) (long recording; MP4) (poster; PDF)
Optimal cost partitioning of classical planning heuristics has been shown to lead to excellent heuristic values but is often prohibitively expensive to compute. We analyze the application of Lagrangian decomposition, a classical tool in mathematical programming, to cost partitioning of operator-counting heuristics. This allows us to view the computation as an iterative process that can be seeded with any cost partitioning and that improves over time. In the case of non-negative cost partitioning of abstraction heuristics the computation reduces to independent shortest path problems and does not require an LP solver.
-
Silvan Sievers, Florian Pommerening, Thomas Keller and Malte Helmert.
Cost-Partitioned Merge-and-Shrink Heuristics for Optimal Classical Planning.
In Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI 2020), pp. 4152-4160. 2020.
(Show abstract) (Hide abstract) (PDF) (technical report; PDF) (slides (short); PDF) (slides (long); PDF) (poster; PDF) (5m talk; MP4) (15m talk; MP4) (code, scripts and data)
Cost partitioning is a method for admissibly combining admissible heuristics. In this work, we extend this concept to merge-and-shrink (M&S) abstractions that may use labels that do not directly correspond to operators. We investigate how optimal and saturated cost partitioning (SCP) interact with M&S transformations and develop a method to compute SCPs during the computation of M&S. Experiments show that SCP significantly improves M&S on standard planning benchmarks.
-
Jendrik Seipp, Samuel von Allmen and Malte Helmert.
Incremental Search for Counterexample-Guided Cartesian Abstraction Refinement.
In Proceedings of the 30th International Conference on Automated Planning and Scheduling (ICAPS 2020), pp. 244-248. 2020.
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (recording) (poster; PDF) (code)
Counterexample-guided Cartesian abstraction refinement has been shown to yield informative heuristics for optimal classical planning. The algorithm iteratively finds an abstract solution and uses it to decide how to refine the abstraction. Since the abstraction grows in each step, finding solutions is the main bottleneck of the refinement loop. We cast the refinements as an incremental search problem and show that this drastically reduces the time for computing abstractions.
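The refinement loop described in the abstract follows the generic CEGAR pattern. The skeleton below is a minimal sketch with hypothetical placeholder callables; the paper's contribution is making the "find abstract solution" step incremental rather than solving each grown abstraction from scratch:

```python
def cegar(abstraction, find_abstract_plan, find_flaw, refine):
    """Generic counterexample-guided abstraction refinement loop:
    repeatedly solve the abstraction, check the abstract plan against
    the concrete task, and refine the abstraction on the first flaw."""
    while True:
        plan = find_abstract_plan(abstraction)
        if plan is None:
            return None, abstraction   # task proven unsolvable
        flaw = find_flaw(abstraction, plan)
        if flaw is None:
            return plan, abstraction   # plan works in the concrete task
        abstraction = refine(abstraction, flaw)

# Toy instantiation: the "abstraction" is just a refinement counter,
# and the abstract plan only survives flaw checking after three
# refinements.
plan, _ = cegar(
    0,
    lambda a: ["abstract-plan"],
    lambda a, p: None if a >= 3 else "flaw",
    lambda a, flaw: a + 1,
)
print(plan)  # ['abstract-plan']
```

Since each refinement only splits one abstract state, most of the previous abstract solution remains valid, which is what the incremental search in the paper exploits.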
-
Jendrik Seipp.
Online Saturated Cost Partitioning for Classical Planning.
In Proceedings of the ICAPS 2020 Workshop on Heuristics and Search for Domain-Independent Planning (HSDIP 2020), pp. 16-22. 2020.
Superseded by the ICAPS 2021 paper by the same name.
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (recording) (code)
Saturated cost partitioning is a general method for admissibly adding heuristic estimates for optimal state-space search. The algorithm strongly depends on the order in which it considers the heuristics. The strongest previous approach precomputes a set of diverse orders and the corresponding saturated cost partitionings before the search. This makes evaluating the overall heuristic very fast, but requires a long precomputation phase. By diversifying the set of orders online during the search we drastically speed up the planning process and even solve slightly more tasks.
-
Álvaro Torralba, Jendrik Seipp and Silvan Sievers.
Automatic Configuration of Benchmark Sets for Classical Planning.
In Proceedings of the ICAPS 2020 Workshop on Heuristics and Search for Domain-Independent Planning (HSDIP 2020), pp. 58-66. 2020.
Superseded by the ICAPS 2021 paper "Automatic Instance Generation for Classical Planning".
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (recording)
The benchmarks from previous International Planning Competitions are commonly used to evaluate new planning algorithms. Since this set has grown organically over the years, it has several flaws: it contains duplicate tasks, unsolvable tasks, trivially solvable domains, and domains with modelling errors. Also, diverse domain sizes complicate aggregating results. Most importantly, however, the range of task difficulty is very small in many domains. We propose an automated method for creating benchmarks that solves these issues. To find a good scaling in difficulty, we automatically configure the parameters of benchmark domains. We show that the resulting benchmark set improves empirical comparisons by making it easier to differentiate between planners.
-
Florian Geißer, David Speck and Thomas Keller.
Trial-based Heuristic Tree Search for MDPs with Factored Action Spaces.
In Proceedings of the 13th Annual Symposium on Combinatorial Search (SoCS 2020), pp. 38-47. 2020.
(Show abstract) (Hide abstract) (PDF)
MDPs with factored action spaces, i.e. where actions are described as assignments to a set of action variables, allow reasoning over action variables instead of action states, yet most algorithms only consider a grounded action representation. This includes algorithms that are instantiations of the trial-based heuristic tree search (THTS) framework, such as AO* or UCT.
To be able to reason over factored action spaces, we propose a generalisation of THTS where nodes that branch over all applicable actions are replaced with subtrees consisting of nodes that represent the decision for a single action variable. We show that many THTS algorithms retain their theoretical properties under the generalised framework, and show how to approximate any state-action heuristic as a heuristic for partial action assignments. This makes it possible to guide a UCT variant that can create exponentially fewer nodes than the same algorithm operating on ground actions. An empirical evaluation on the benchmark set of the probabilistic track of the latest International Planning Competition validates the benefits of the approach.
-
Gabriele Röger, Malte Helmert, Jendrik Seipp and Silvan Sievers.
An Atom-Centric Perspective on Stubborn Sets.
In Proceedings of the 13th Annual Symposium on Combinatorial Search (SoCS 2020), pp. 57-65. 2020.
Note: SoCS 2020 Best Paper Award.
(Show abstract) (Hide abstract) (PDF) (slides; HTML) (recording; MP4) (code, scripts and data)
Stubborn sets are an optimality-preserving pruning technique for factored state-space search, for example in classical planning. Their applicability is limited by their computational overhead. We describe a new algorithm for computing stubborn sets that is based on the state variables of the state space, while previous algorithms are based on its actions. Typical factored state spaces tend to have far fewer state variables than actions, and therefore our new algorithm is much more efficient than the previous state of the art, making stubborn sets a viable technique in many cases where they previously were not.
-
Jendrik Seipp, Thomas Keller and Malte Helmert.
Saturated Cost Partitioning for Optimal Classical Planning.
Journal of Artificial Intelligence Research 67, pp. 129-167. 2020.
(Show abstract) (Hide abstract) (PDF) (code)
Cost partitioning is a method for admissibly combining a set of admissible heuristic estimators by distributing operator costs among the heuristics. Computing an optimal cost partitioning, i.e., the operator cost distribution that maximizes the heuristic value, is often prohibitively expensive. Saturated cost partitioning is an alternative that is much faster to compute and has been shown to yield high-quality heuristics. However, its greedy nature makes it highly susceptible to the order in which the heuristics are considered. We propose a greedy algorithm to generate orders and show how to use hill-climbing search to optimize a given order. Combining both techniques leads to significantly better heuristic estimates than using the best random order that is generated in the same time. Since there is often no single order that gives good guidance on the whole state space, we use the maximum of multiple orders as a heuristic that is significantly better informed than any single-order heuristic, especially when we actively search for a set of diverse orders.
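The greedy core of saturated cost partitioning can be sketched abstractly. In this toy version (an illustration, not a planner implementation) each heuristic, given the remaining operator cost function, reports its estimate together with the saturated costs it needs to preserve that estimate; real implementations derive the saturated costs from the heuristic's abstract transition system:

```python
def saturated_cost_partitioning(heuristics, costs, state):
    """Process heuristics in order; each consumes only the saturated
    fraction of the remaining operator costs it needs, and the rest
    is passed on. The sum of estimates is admissible if each
    heuristic is admissible under the costs it was given."""
    remaining = dict(costs)
    total = 0
    for h in heuristics:
        estimate, saturated = h(remaining, state)
        total += estimate
        for op, needed in saturated.items():
            remaining[op] -= needed
    return total

# Toy heuristics over operators "a" (cost 2) and "b" (cost 3).
def h1(costs, state):
    return 1, {"a": 1}                     # needs one unit of "a"

def h2(costs, state):
    return costs["a"], {"a": costs["a"]}   # uses whatever "a" cost is left

print(saturated_cost_partitioning([h1, h2], {"a": 2, "b": 3}, None))  # 2
```

The order sensitivity discussed in the abstract is visible even here: which heuristic runs first determines how much of operator "a"'s cost each one gets.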
-
Álvaro Torralba and Silvan Sievers.
Merge-and-Shrink Task Reformulation for Classical Planning.
In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI 2019), pp. 5644-5652. 2019.
(Show abstract) (Hide abstract) (PDF) (technical report; PDF) (poster; PDF) (code and scripts) (data)
The performance of domain-independent planning systems heavily depends on how the planning task has been modeled. This makes task reformulation an important tool to get rid of unnecessary complexity and increase the robustness of planners with respect to the model chosen by the user. In this paper, we represent tasks as factored transition systems (FTS), and use the merge-and-shrink (M&S) framework for task reformulation for optimal and satisficing planning. We prove that the flexibility of the underlying representation makes the M&S reformulation methods more powerful than the counterparts based on the more popular finite-domain representation. We adapt delete-relaxation and M&S heuristics to work on the FTS representation and evaluate the impact of our reformulation.
-
Malte Helmert, Tor Lattimore, Levi H. S. Lelis, Laurent Orseau and Nathan R. Sturtevant.
Iterative Budgeted Exponential Search.
In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI 2019), pp. 1249-1257. 2019.
(Show abstract) (Hide abstract) (PDF)
We tackle two long-standing problems related to re-expansions in heuristic search algorithms. For graph search, A* can require Ω(2^n) expansions, where n is the number of states within the final f bound. Existing algorithms that address this problem like B and B' improve this bound to Ω(n^2). For tree search, IDA* can also require Ω(n^2) expansions. We describe a new algorithmic framework that iteratively controls an expansion budget and solution cost limit, giving rise to new graph and tree search algorithms for which the number of expansions is O(n log C*), where C* is the optimal solution cost. Our experiments show that the new algorithms are robust in scenarios where existing algorithms fail. In the case of tree search, our new algorithms have no overhead over IDA* in scenarios to which IDA* is well suited and can therefore be recommended as a general replacement for IDA*.
-
Jendrik Seipp.
Pattern Selection for Optimal Classical Planning with Saturated Cost Partitioning.
In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI 2019), pp. 5621-5627. 2019.
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (code)
Pattern databases are the foundation of some of the strongest admissible heuristics for optimal classical planning. Experiments showed that the most informative way of combining information from multiple pattern databases is to use saturated cost partitioning. Previous work selected patterns and computed saturated cost partitionings over the resulting pattern database heuristics in two separate steps. We introduce a new method that uses saturated cost partitioning to select patterns and show that it outperforms all existing pattern selection algorithms.
-
Hao Cui, Thomas Keller and Roni Khardon.
Stochastic Planning with Lifted Symbolic Trajectory Optimization.
In Proceedings of the 29th International Conference on Automated Planning and Scheduling (ICAPS 2019), pp. 119-127. 2019.
Erratum: We fixed a bug in the IPC 2018 script that compiles enum-valued into binary fluents and obtained slightly different results in Table 1. The discussion of the results and the conclusions are not affected by this.
(Show abstract) (Hide abstract) (PDF)
This paper investigates online stochastic planning for problems with large factored state and action spaces. One promising approach in recent work estimates the quality of applicable actions in the current state through aggregate simulation from the states they reach. This leads to significant speedup, compared to search over concrete states and actions, and suffices to guide decision making in cases where the performance of a random policy is informative of the quality of a state. The paper makes two significant improvements to this approach. The first, taking inspiration from lifted belief propagation, exploits the structure of the problem to derive a more compact computation graph for aggregate simulation. The second improvement replaces the random policy embedded in the computation graph with symbolic variables that are optimized simultaneously with the search for high quality actions. This expands the scope of the approach to problems that require deep search and where information is lost quickly with random steps. An empirical evaluation shows that these ideas significantly improve performance, leading to state of the art performance on hard planning problems.
-
Florian Pommerening, Gabriele Röger, Malte Helmert, Hadrien Cambazard, Louis-Martin Rousseau and Domenico Salvagnin.
Lagrangian Decomposition for Optimal Cost Partitioning.
In Proceedings of the 29th International Conference on Automated Planning and Scheduling (ICAPS 2019), pp. 338-347. 2019.
(Show abstract) (Hide abstract) (PDF) (slides; PDF)
Optimal cost partitioning of classical planning heuristics has been shown to lead to excellent heuristic values but is often prohibitively expensive to compute. Lagrangian decomposition and Lagrangian relaxation are classical tools in mathematical programming that apply to optimization problems with a special block structure. We analyze the application of Lagrangian decomposition to cost partitioning in the context of operator-counting heuristics and interpret Lagrangian multipliers as cost functions for the combined heuristics. This allows us to view the computation of an optimal cost partitioning as an iterative process that can be seeded with any cost partitioning and improves over time. We derive an any-time algorithm to compute an optimal non-negative cost partitioning of abstraction heuristics without involving an LP solver. In each iteration, the computation reduces to independent shortest path problems in all abstractions. Finally, we discuss the extension to general cost functions.
-
Augusto B. Corrêa and Florian Pommerening.
An Empirical Study of Perfect Potential Heuristics.
In Proceedings of the 29th International Conference on Automated Planning and Scheduling (ICAPS 2019), pp. 114-118. 2019.
(PDF) (slides; PDF) (code)
Potential heuristics are weighted functions over state features of a planning task. A recent study defines the complexity of a task as the minimum required feature complexity for a potential heuristic that makes a search backtrack-free. This gives an indication of how complex potential heuristics need to be to achieve good results in satisficing planning. However, these results do not directly transfer to optimal planning.
In this paper, we empirically study how complex potential heuristics must be to represent the perfect heuristic and how close to perfect heuristics can get with a limited number of features. We aim to identify the practical trade-offs between size, complexity and time for the quality of potential heuristics. Our results show that, even for simple planning tasks, finding perfect potential heuristics might be harder than expected.
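As a rough illustration of the object of study: evaluating a potential heuristic is just a weighted sum over state features, which is why the interesting questions are about which features and weights are needed. The features and weights below are invented for illustration, not taken from the paper.

```python
# Illustrative sketch of a potential heuristic. Here features are
# atomic facts; the paper also considers more complex features.
def potential_heuristic(state, weights):
    """h(s) = sum of the weights of all features that hold in s."""
    return sum(w for feature, w in weights.items() if feature in state)

# Made-up weights for a toy task:
weights = {"at-A": 2.0, "holding-key": -1.0, "door-open": 0.5}
h = potential_heuristic({"at-A", "holding-key"}, weights)  # 2.0 - 1.0 = 1.0
```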
-
Michael Katz, Emil Keyder, Florian Pommerening and Dominik Winterer.
Oversubscription Planning as Classical Planning with Multiple Cost Functions.
In Proceedings of the 29th International Conference on Automated Planning and Scheduling (ICAPS 2019), pp. 237-245. 2019.
(PDF)
The aim of classical planning is to minimize the summed cost of operators among those plans that achieve a fixed set of goals. Oversubscription planning (OSP), on the other hand, seeks to maximize the utility of the set of facts achieved by a plan, while keeping the cost of the plan at or below some specified bound. Here, we investigate the use of reformulations that yield planning problems with two separate cost functions, but no utilities, for solving OSP tasks. Such reformulations have also been proposed in the context of net-benefit planning, where the planner tries to maximize the difference between the utility achieved and the cost of the plan. One of our reformulations is adapted directly from that setting, while the other is novel. In both cases, they allow for easy adaptation of existing classical planning heuristics to the OSP problem within a simple branch and bound search. We validate our approach using state of the art admissible heuristics in this framework, and report our results.
-
Jendrik Seipp and Malte Helmert.
Subset-Saturated Cost Partitioning for Optimal Classical Planning.
In Proceedings of the 29th International Conference on Automated Planning and Scheduling (ICAPS 2019), pp. 391-400. 2019.
(PDF) (technical report; PDF) (slides; PDF) (code)
Cost partitioning is a method for admissibly adding multiple heuristics for state-space search. Saturated cost partitioning considers the given heuristics in sequence, assigning to each heuristic the minimum fraction of remaining costs that it needs to preserve its estimates for all states. We generalize saturated cost partitioning by allowing to preserve the heuristic values of only a subset of states and show that this often leads to stronger heuristics.
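To make the procedure above concrete, here is a minimal sketch of (non-subset) saturated cost partitioning for abstraction heuristics given as explicit transition systems. All task data is invented for illustration; the non-negative saturation rule (each operator keeps the largest h(s) − h(t) over its abstract transitions) follows the description above.

```python
import heapq

def goal_distances(states, transitions, goals, costs):
    """Backward Dijkstra: cheapest cost from each abstract state to a goal."""
    dist = {s: float("inf") for s in states}
    backward = {s: [] for s in states}
    for s, op, t in transitions:
        backward[t].append((s, op))
    queue = [(0, g) for g in goals]
    for g in goals:
        dist[g] = 0
    while queue:
        d, t = heapq.heappop(queue)
        if d > dist[t]:
            continue
        for s, op in backward[t]:
            new_dist = d + costs[op]
            if new_dist < dist[s]:
                dist[s] = new_dist
                heapq.heappush(queue, (new_dist, s))
    return dist

def saturated_cost_partitioning(abstractions, costs):
    """Consider abstractions in sequence; each keeps only the operator
    costs it needs to preserve its estimates and leaves the rest."""
    remaining = dict(costs)
    tables = []
    for states, transitions, goals in abstractions:
        h = goal_distances(states, transitions, goals, remaining)
        tables.append(h)
        saturated = {op: 0 for op in remaining}  # non-negative variant
        for s, op, t in transitions:
            if h[s] < float("inf") and h[t] < float("inf"):
                saturated[op] = max(saturated[op], h[s] - h[t])
        for op in remaining:
            remaining[op] -= saturated[op]
    return tables  # summing the tables gives an admissible heuristic

# Two toy abstractions over operators "a" and "b", both of cost 1:
abs1 = ([0, 1], [(0, "a", 1)], [1])
abs2 = ([0, 1, 2], [(0, "a", 1), (1, "b", 2)], [2])
tables = saturated_cost_partitioning([abs1, abs2], {"a": 1, "b": 1})
# abs1 keeps the full cost of "a"; abs2 still reaches h = 1 via "b",
# so state 0 gets the combined estimate 1 + 1 = 2.
```

The subset-saturated variant studied in the paper weakens the saturation step so that estimates are preserved only for a chosen subset of states, leaving more cost for later heuristics.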
-
Silvan Sievers, Gabriele Röger, Martin Wehrle and Michael Katz.
Theoretical Foundations for Structural Symmetries of Lifted PDDL Tasks.
In Proceedings of the 29th International Conference on Automated Planning and Scheduling (ICAPS 2019), pp. 446-454. 2019.
(PDF) (slides; PDF) (code and scripts) (data)
We transfer the notion of structural symmetries to lifted planning task representations, based on abstract structures which we define to model planning tasks. We show that symmetries are preserved by common grounding methods and we shed some light on the relation to previous symmetry concepts used in planning. Using a suitable graph representation of lifted tasks, our experimental analysis of common planning benchmarks reveals that symmetries occur in the lifted representation of many domains. Our work establishes the theoretical ground for exploiting symmetries beyond their previous scope, such as for faster grounding and mutex generation, as well as for state space transformations and reductions.
-
Florian Geißer, David Speck and Thomas Keller.
An Analysis of the Probabilistic Track of the IPC 2018.
In Proceedings of the ICAPS-2019 Workshop on the International Planning Competition (WIPC 2019), pp. 27-35. 2019.
(PDF)
The International Planning Competition 2018 consisted of several tracks on classical, temporal and probabilistic planning. In this paper, we focus on the discrete MDP track of the probabilistic portion of the competition.
We discuss the changes to the input language RDDL, which give rise to new challenges for planning systems, and analyze each of the eight competition domains separately and highlight unique properties. We demonstrate flaws of the used evaluation criterion, the IPC score, and discuss the need for optimal upper bounds. An evaluation of the three top-performers, including their post-competition versions, and a brief analysis of their performance highlights the strengths and weaknesses of the individual approaches.
-
Jendrik Seipp.
Planner Metrics Should Satisfy Independence of Irrelevant Alternatives (position paper).
In Proceedings of the ICAPS-2019 Workshop on the International Planning Competition (WIPC 2019), pp. 40-41. 2019.
(PDF) (slides; PDF)
We argue that planner evaluation metrics should satisfy the independence of irrelevant alternatives criterion, i.e., the decision whether planner A is ranked higher or lower than planner B should be independent of planner C. We show that three metrics used in classical planning competitions do not necessarily satisfy this criterion and highlight alternative metrics that do so.
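For intuition, here is a toy illustration (all numbers invented) of how a quality score of the IPC kind — per task, best known cost divided by the planner's own cost — can violate independence of irrelevant alternatives: adding planner C flips the relative ranking of A and B even though their own results are unchanged.

```python
def ipc_quality_scores(costs_per_task):
    """costs_per_task: one dict {planner: plan cost} per task.
    A planner scores best_cost / own_cost on each task it solves."""
    totals = {}
    for costs in costs_per_task:
        best = min(costs.values())
        for planner, cost in costs.items():
            totals[planner] = totals.get(planner, 0.0) + best / cost
    return totals

# Planners A and B alone are tied (1.5 points each):
without_c = ipc_quality_scores([{"A": 1, "B": 2}, {"A": 20, "B": 10}])
# Planner C lowers the best known cost of task 2, and suddenly
# A (1.1) is ranked strictly above B (0.7):
with_c = ipc_quality_scores([{"A": 1, "B": 2, "C": 4},
                             {"A": 20, "B": 10, "C": 2}])
```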
-
Jendrik Seipp.
Pattern Selection for Optimal Classical Planning with Saturated Cost Partitioning.
In Proceedings of the ICAPS-2019 Workshop on Heuristics and Search for Domain-independent Planning (HSDIP 2019), pp. 72-80. 2019.
Superseded by the IJCAI 2019 paper by the same name.
(PDF) (slides; PDF)
Pattern databases are the foundation of some of the strongest admissible heuristics for optimal classical planning. Experiments showed that the most informative way of combining information from multiple pattern databases is to use saturated cost partitioning. Previous work selected patterns and computed saturated cost partitionings over the resulting pattern database heuristics in two separate steps. We introduce a new method that uses saturated cost partitioning to select patterns and show that it outperforms all existing pattern selection algorithms.
-
Álvaro Torralba and Silvan Sievers.
Merge-and-Shrink Task Reformulation for Classical Planning.
In Proceedings of the ICAPS-2019 Workshop on Heuristics and Search for Domain-independent Planning (HSDIP 2019), pp. 18-27. 2019.
Superseded by the IJCAI 2019 paper by the same name.
(PDF) (slides; PDF)
The performance of domain-independent planning systems heavily depends on how the planning task has been modeled. This makes task reformulation an important tool to get rid of unnecessary complexity and increase the robustness of planners with respect to the model chosen by the user. In this paper, we represent tasks as factored transition systems (FTS), and use the merge-and-shrink (M&S) framework for task reformulation for optimal and satisficing planning. We prove that the flexibility of the underlying representation makes the M&S reformulation methods more powerful than the counterparts based on the more popular finite-domain representation. We adapt delete-relaxation and M&S heuristics to work on the FTS representation and evaluate the impact of our reformulation.
-
Jendrik Seipp.
Kronk (planner abstract).
In Sparkle Planning Challenge 2019. 2019.
(PDF)
Certified Correctness and Guaranteed Performance for Domain-Independent Planning: CCGP-Plan (funded by the Swiss National Science Foundation)
The research area of domain-independent planning is concerned with the following problem: given formal representations of the world an autonomous agent is situated in, of its capabilities, and of its goals, find a plan (a sequence of actions executable by the agent) that achieves the agent's goals, or prove that no such plan exists. We focus on the classical setting, one of the traditional core areas of artificial intelligence (AI). After decades of intensive research, planning algorithms have recently reached a level of maturity that makes them useful for real-world decision-making scenarios. With this increased maturity of planning algorithms, raw performance is no longer automatically the main concern for AI researchers or AI engineers interested in using planning algorithms in the systems they build. Besides efficiency, increasing demands are placed on the reliability of planning algorithms. We identify two ways in which current planning algorithms are brittle, and we suggest ways to improve their reliability. Specifically, we consider the following two challenges:
-
Certified Correctness: As planning is a model-based approach, there are clear notions of correctness: a sound and complete planning algorithm must return a solution if one exists and report unsolvability when no solution exists. An optimal planning algorithm must additionally guarantee that all solutions it returns are of optimal quality. This means that planning algorithms can be incorrect in three ways. They can return solutions that do not actually solve the given problem, falsely claim that a problem is unsolvable, or falsely claim that a solution is optimal (or bounded suboptimal). While the first kind of error can be detected with plan validators, the other two are much more challenging. In critical applications, possible algorithmic errors, implementation bugs, or even malicious programming (for example due to manipulated programs or ulterior motives of the algorithm implementers) are serious concerns. We investigate planning algorithms that always justify their answers in a way that can be verified independently, automatically, and efficiently.
-
Guaranteed Performance: In many scenarios where a planning algorithm could be used, guaranteeing its correctness is not sufficient. It is also necessary to ensure that in all relevant situations, the algorithm reliably produces a plan within the available computational resources. This is especially true in (common) settings where many similar problems must be solved repeatedly. Current algorithms do not provide performance guarantees, and small differences in the input can make the difference between solutions found within milliseconds, days, or not at all. Due to the PSPACE-completeness of classical planning, there will always be cases where quick solutions are not feasible. However, in many scenarios efficient planning is possible. We propose algorithms that identify such scenarios for individual planning tasks, for planning task structures, and for planning domains.
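The plan-validation step mentioned under Certified Correctness is the easy case of the three and can be sketched in a few lines for a STRIPS-like model (illustrative only; real validators such as VAL handle full PDDL):

```python
def validate_plan(initial, goal, operators, plan):
    """Replay `plan` from `initial` and check the goal at the end.
    operators: name -> (preconditions, add effects, delete effects),
    each a set of facts."""
    state = set(initial)
    for name in plan:
        preconditions, add_effects, delete_effects = operators[name]
        if not preconditions <= state:
            return False  # action not applicable in current state
        state = (state - delete_effects) | add_effects
    return goal <= state

# Toy task: pick up a key, then open the door.
operators = {
    "pick": ({"at-key"}, {"have-key"}, {"at-key"}),
    "open": ({"have-key"}, {"door-open"}, set()),
}
valid = validate_plan({"at-key"}, {"door-open"}, operators, ["pick", "open"])
invalid = validate_plan({"at-key"}, {"door-open"}, operators, ["open"])
```

Claims of unsolvability or optimality have no such replay check, which is exactly why the project investigates independently verifiable certificates for them.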
Contact: Malte Helmert and Salomé Eriksson
Relevant publications:
-
Esther Mugdan, Remo Christen and Salomé Eriksson.
Optimality Certificates for Classical Planning.
In Proceedings of the 33rd International Conference on Automated Planning and Scheduling (ICAPS 2023), pp. 286-294. 2023.
(PDF) (slides; PDF) (code)
Algorithms are usually shown to be correct on paper, but bugs in their implementations can still lead to incorrect results. In the case of classical planning, it is fortunately straightforward to check whether a computed plan is correct. For optimal planning, however, plans are additionally required to have minimal cost, which is significantly more difficult to verify. While some domain-specific approaches exist, we lack a general tool to verify optimality for arbitrary problems. We bridge this gap and introduce two approaches based on the principle of certifying algorithms, which provide a computer-verifiable certificate of correctness alongside their answer. We show that both approaches are sound and complete, analyze whether they can be generated and verified efficiently, and show how to apply them to concrete planning algorithms. The experimental evaluation shows that verifying optimality comes with a cost, but is still practically feasible. Furthermore, it confirms that the tested planner configurations provide optimal plans on the given instances, as all certificates were verified successfully.
-
Augusto B. Corrêa, Clemens Büchner and Remo Christen.
Zero-Knowledge Proofs for Classical Planning Problems.
In Proceedings of the 37th AAAI Conference on Artificial Intelligence (AAAI 2023), pp. 11955-11962. 2023.
(PDF) (slides; PDF) (technical report; PDF)
In classical planning, the aim is to find a sequence of deterministic actions leading from the initial to a goal state. In this work, we consider the scenario where a party who knows the solution to a planning task, called the prover, wants to convince a second party, the verifier, that it has the solution without revealing any information about the solution itself. This is relevant in domains where privacy is important, for example when plans contain sensitive information or when the solution should not be revealed upfront. We achieve this by introducing a zero-knowledge protocol for plan existence. By restricting ourselves to tasks with polynomially-bounded plan length, we are able to construct a protocol that can be run efficiently by both prover and verifier. The resulting protocol does not rely on any reduction, has a constant number of rounds, and runs in time polynomial in the size of the task.
-
Mohammad Abdulaziz, Florian Pommerening and Augusto B. Corrêa.
Mechanically Proving Guarantees of Generalized Heuristics: First Results and Ongoing Work.
In Proceedings of the Sixth Workshop on Generalization in Planning (GenPlan 2022). 2022.
Note: This work was also presented at HSDIP 2022. Due to different page limits, the HSDIP version has some additional details.
(slides; PDF)
The goal of generalized planning is to find a solution that works for all tasks of a specific planning domain. Ideally, this solution is also efficient (i.e., polynomial) in all tasks. One possible approach is to learn such a solution from training examples and then prove that this generalizes for any given task. However, such proofs are usually pen-and-paper proofs written by a human. In our paper, we aim at automating these proofs so we can use a theorem prover to show that a solution generalizes for any task. Furthermore, we want to prove that this generalization works while still preserving efficiency. Our focus is on generalized potential heuristics encoding tiered measures of progress, which can be proven to lead to a goal in a polynomial number of steps in all tasks of a domain. We show our ongoing work in this direction using the interactive theorem prover Isabelle/HOL. We illustrate the key aspects of our implementation using the Miconic domain and then discuss possible obstacles and challenges to fully automating this pipeline.
-
Mohammad Abdulaziz, Florian Pommerening and Augusto B. Corrêa.
Mechanically Proving Guarantees of Generalized Heuristics: First Results and Ongoing Work.
In Proceedings of the ICAPS 2022 Workshop on Heuristics and Search for Domain-independent Planning (HSDIP 2022). 2022.
Note: This work was also presented at GenPlan 2022. Due to different page limits, the HSDIP version has some additional details.
(PDF)
The goal of generalized planning is to find a solution that works for all tasks of a specific planning domain. Ideally, this solution is also efficient (i.e., polynomial) in all tasks. One possible approach is to learn such a solution from training examples and then prove that this generalizes for any given task. However, such proofs are usually pen-and-paper proofs written by a human. In our paper, we aim at automating these proofs so we can use a theorem prover to show that a solution generalizes for any task. Furthermore, we want to prove that this generalization works while still preserving efficiency. Our focus is on generalized potential heuristics encoding tiered measures of progress, which can be proven to lead to a goal in a polynomial number of steps in all tasks of a domain. We show our ongoing work in this direction using the interactive theorem prover Isabelle/HOL. We illustrate the key aspects of our implementation using the Miconic domain and then discuss possible obstacles and challenges to fully automating this pipeline.
-
Patrick Ferber, Florian Geißer, Felipe Trevizan, Malte Helmert and Jörg Hoffmann.
Neural Network Heuristic Functions for Classical Planning: Bootstrapping and Comparison to Other Methods.
In Proceedings of the 32nd International Conference on Automated Planning and Scheduling (ICAPS 2022), pp. 583-587. 2022.
(PDF) (slides; PDF) (poster; PDF) (code and data)
How can we train neural network (NN) heuristic functions for classical planning, using only states as the NN input? Prior work addressed this question by (a) per-instance imitation learning and/or (b) per-domain learning. The former limits the approach to instances small enough for training data generation, the latter to domains where the necessary knowledge generalizes across instances. Here we explore three methods for (a) that make training data generation scalable through bootstrapping and approximate value iteration. In particular, we introduce a new bootstrapping variant that estimates search effort instead of goal distance, which as we show converges to the perfect heuristic under idealized circumstances. We empirically compare these methods to (a) and (b), aligning three different NN heuristic function learning architectures for cross-comparison in an experiment of unprecedented breadth in this context. Key lessons are that our methods and imitation learning are highly complementary; that per-instance learning often yields stronger heuristics than per-domain learning; and the LAMA planner is still dominant but our methods outperform it in one benchmark domain.
-
Marcel Steinmetz, Daniel Fišer, Hasan Ferit Enişer, Patrick Ferber, Timo Gros, Philippe Heim, Daniel Höller, Xandra Schuler, Valentin Wüstholz, Maria Christakis and Jörg Hoffmann.
Debugging a Policy: Automatic Action-Policy Testing in AI Planning.
In Proceedings of the 32nd International Conference on Automated Planning and Scheduling (ICAPS 2022), pp. 353-361. 2022.
(PDF)
Testing is a promising way to gain trust in neural action policies π. Previous work on policy testing in sequential decision making targeted environment behavior leading to failure conditions. But if the failure is unavoidable given that behavior, then π is not actually to blame. For a situation to qualify as a “bug” in π, there must be an alternative policy π′ that does better. We introduce a generic policy testing framework based on that intuition. This raises the bug confirmation problem, deciding whether or not a state is a bug. We analyze the use of optimistic and pessimistic bounds for the design of test oracles approximating that problem. We contribute an implementation of our framework in classical planning, experimenting with several test oracles and with random-walk methods generating test states biased to poor policy performance and/or state novelty. We evaluate these techniques on policies π learned with ASNets. We find that they are able to effectively identify bugs in these π, and that our random-walk biases improve over uninformed baselines.
-
Augusto B. Corrêa and Jendrik Seipp.
Best-First Width Search for Lifted Classical Planning.
In Proceedings of the 32nd International Conference on Automated Planning and Scheduling (ICAPS 2022). 2022.
(PDF) (poster; PDF) (slides; PDF) (code, scripts, and data)
Lifted planners are useful to solve tasks that are too hard to ground. Still, computing informative lifted heuristics is difficult: directly adapting ground heuristics to the lifted setting is often too expensive, and extracting heuristics from the lifted representation can be uninformative. A natural alternative for lifted planners is to use width-based search. These algorithms are among the strongest for ground planning, even the variants that do not access the action model. In this work, we adapt best-first width search to the lifted setting and show that this yields state-of-the-art performance for hard-to-ground planning tasks.
-
Augusto B. Corrêa, Florian Pommerening, Malte Helmert and Guillem Francès.
The FF Heuristic for Lifted Classical Planning.
In Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI 2022), pp. 9716-9723. 2022.
(PDF) (slides (short); PDF) (slides (long); PDF) (poster; PDF) (code and appendix)
Heuristics for lifted planning are not yet as informed as the best heuristics for ground planning. Recent work introduced the idea of using Datalog programs to compute the additive heuristic over lifted tasks. Based on this work, we show how to compute the more informed FF heuristic in a lifted manner. We extend the Datalog program with executable annotations that can also be used to define other delete-relaxation heuristics. In our experiments, we show that a planner using the lifted FF implementation produces state-of-the-art results for lifted planners. It also reduces the gap to state-of-the-art ground planners in domains where grounding is feasible.
-
Patrick Ferber and Jendrik Seipp.
Explainable Planner Selection for Classical Planning.
In Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI 2022), pp. 9741-9749. 2022.
(PDF) (slides (long); PDF) (slides (short); PDF) (poster; PDF) (code)
Since no classical planner consistently outperforms all others, it is important to select a planner that works well for a given classical planning task. The two strongest approaches for planner selection use image and graph convolutional neural networks. They have the drawback that the learned models are complicated and uninterpretable. To obtain explainable models, we identify a small set of simple task features and show that elementary and interpretable machine learning techniques can use these features to solve roughly as many tasks as the complex approaches based on neural networks.
-
Augusto B. Corrêa, Guillem Francès, Florian Pommerening and Malte Helmert.
Delete-Relaxation Heuristics for Lifted Classical Planning.
In Proceedings of the 31st International Conference on Automated Planning and Scheduling (ICAPS 2021), pp. 94-102. 2021.
(PDF) (slides; PDF) (recording) (poster; PDF) (code)
Recent research in classical planning has shown the importance of search techniques that operate directly on the lifted representation of the problem, particularly in domains where the ground representation is prohibitively large. In this paper, we show how to compute the additive and maximum heuristics from the lifted representation of a problem. We do this by adapting well-known reachability analysis techniques based on a Datalog formulation of the delete relaxation of the problem. Our adaptation allows us to obtain not only the desired heuristic value, but also other useful heuristic information such as helpful actions. Our empirical evaluation shows that our lifted version of the additive heuristic is competitive with its ground counterpart on most of the standard international competition benchmarks, and significantly outperforms other state-of-the-art lifted heuristic methods in the literature.
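For reference, the ground semantics of the additive heuristic that the paper computes in a lifted way can be sketched as a simple fixpoint over relaxed fact costs. The task data below is invented; the paper's contribution is obtaining these values without grounding, via a Datalog formulation.

```python
def h_add(initial, goal, actions):
    """Additive heuristic on a ground delete-free task.
    actions: list of (cost, preconditions, add effects), sets of facts.
    A fact costs the cheapest way of achieving it; an action costs its
    own cost plus the SUM of its precondition costs."""
    INF = float("inf")
    cost = {fact: 0 for fact in initial}
    changed = True
    while changed:
        changed = False
        for action_cost, preconditions, adds in actions:
            total = action_cost + sum(cost.get(p, INF) for p in preconditions)
            if total < INF:
                for fact in adds:
                    if total < cost.get(fact, INF):
                        cost[fact] = total
                        changed = True
    return sum(cost.get(fact, INF) for fact in goal)

actions = [(1, {"a"}, {"b"}), (2, {"b"}, {"c"})]
# fact b costs 1 (first action); fact c costs 2 + 1 = 3:
estimate = h_add({"a"}, {"c"}, actions)
```

Replacing the sum over goal facts (and over preconditions) by a maximum yields the maximum heuristic also covered by the paper.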
-
Patrick Ferber, Florian Geißer, Felipe Trevizan, Jörg Hoffmann and Malte Helmert.
Neural Network Heuristic Functions for Classical Planning: Reinforcement Learning and Comparison to Other Methods.
In Proceedings of the ICAPS 2021 Workshop on Bridging the Gap Between AI Planning and Reinforcement Learning (PRL 2021). 2021.
(PDF) (slides; PDF) (poster; PDF) (talk; MP4) (code and data)
How can we train neural network (NN) heuristic functions for classical planning, using only states as the NN input? Prior work addressed this question by (a) supervised learning and/or (b) per-domain learning generalizing over problem instances. The former limits the approach to instances small enough for training data generation, the latter to domains and instance distributions where the necessary knowledge generalizes across instances. Clearly, reinforcement learning (RL) on large instances can potentially avoid both difficulties. We explore this here in terms of three methods drawing on previous ideas relating to bootstrapping and approximate value iteration, including a new bootstrapping variant that estimates search effort instead of goal distance. We empirically compare these methods to (a) and (b), aligning three different NN heuristic function learning architectures for cross-comparison in an experiment of unprecedented breadth in this context. Key lessons from this experiment are that our methods and supervised learning are highly complementary; that per-instance learning often yields stronger heuristics than per-domain learning; and that LAMA is still dominant but is outperformed by our methods in one benchmark domain.
-
Rik de Graaff, Augusto B. Corrêa and Florian Pommerening.
Concept Languages as Expert Input for Generalized Planning: Preliminary Results.
In Proceedings of the ICAPS 2021 Workshop on Knowledge Engineering for Planning and Scheduling (KEPS 2021). 2021.
(PDF)
Planning is an attractive approach to solving problems because of its generality and its ease of use. A domain expert often has knowledge that could improve the performance of such a solution but this knowledge may be vague or not directly actionable. For example, an expert could be aware that a certain property is important but not have a good way of using this knowledge to speed up the search for a solution. In this paper we evaluate concept languages as a possible input language for injecting expert domain knowledge into a planning system. We also explore mixed integer programming as a way to use this knowledge to improve search efficiency and to help the user find and refine useful domain knowledge.
-
Salomé Eriksson and Malte Helmert.
Certified Unsolvability for SAT Planning with Property Directed Reachability.
In Proceedings of the 30th International Conference on Automated Planning and Scheduling (ICAPS 2020), pp. 90-100. 2020.
(PDF) (slides; PDF) (recording; MP4) (poster; PDF)
While classical planning systems can usually detect if a task is unsolvable, only recent research introduced a way to verify such a claim. These methods have already been applied to a variety of explicit and symbolic search algorithms, but so far no planning technique based on SAT has been covered with them. We fill this gap by showing how property directed reachability can produce proofs while only minimally altering the framework by allowing to utilize certificates for unsolvable SAT queries within the proof. We additionally show that a variant of the algorithm that does not use SAT calls can produce proofs that fit into the existing framework without requiring any changes.
-
Augusto B. Corrêa, Florian Pommerening, Malte Helmert and Guillem Francès.
Lifted Successor Generation using Query Optimization Techniques.
In Proceedings of the 30th International Conference on Automated Planning and Scheduling (ICAPS 2020), pp. 80-89. 2020.
(PDF) (slides; PDF) (recording) (poster; PDF)
The standard PDDL language for classical planning uses several first-order features, such as schematic actions. Yet, most classical planners ground this first-order representation into a propositional one as a preprocessing step. While this simplifies the design of other parts of the planner, in several benchmarks the grounding process causes an exponential blowup that puts otherwise solvable tasks out of reach of the planners. In this work, we take a step towards planning with lifted representations. We tackle the successor generation task, a key operation in forward-search planning, directly on the lifted representation using well-known techniques from database theory. We show how computing the variable substitutions that make an action schema applicable in a given state is essentially a query evaluation problem. Interestingly, a large number of the action schemas in the standard benchmarks are acyclic conjunctive queries, for which query evaluation is tractable. Our empirical results show that our approach is competitive with the standard (grounded) successor generation techniques in a few domains and outperforms them on benchmarks where grounding is challenging or infeasible.
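The core observation can be made concrete with a naive nested-loop join (names and data invented; the paper of course uses proper query optimization rather than this brute-force evaluation): finding the substitutions that make an action schema applicable is evaluating a conjunctive query over the state, viewed as a database.

```python
def applicable_substitutions(precondition, state):
    """precondition: list of (predicate, argument tuple); arguments
    starting with '?' are variables. state: predicate -> set of tuples.
    Returns every substitution under which all atoms hold."""
    substitutions = [{}]
    for predicate, arguments in precondition:
        extended = []
        for sub in substitutions:
            for row in state.get(predicate, ()):  # join with this relation
                candidate = dict(sub)
                consistent = True
                for arg, value in zip(arguments, row):
                    if arg.startswith("?"):
                        if candidate.setdefault(arg, value) != value:
                            consistent = False
                            break
                    elif arg != value:
                        consistent = False
                        break
                if consistent:
                    extended.append(candidate)
        substitutions = extended
    return substitutions

# drive(?t, ?from, ?to) with precondition at(?t, ?from), road(?from, ?to):
state = {"at": {("truck1", "A")}, "road": {("A", "B"), ("A", "C")}}
precondition = [("at", ("?t", "?from")), ("road", ("?from", "?to"))]
groundings = applicable_substitutions(precondition, state)  # ?to = B or C
```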
-
Patrick Ferber, Malte Helmert and Jörg Hoffmann.
Reinforcement Learning for Planning Heuristics.
In Proceedings of the ICAPS 2020 Workshop on Bridging the Gap Between AI Planning and Reinforcement Learning (PRL 2020), pp. 119-126. 2020.
(PDF) (poster; PDF) (code and data)
Informed heuristics are essential for the success of heuristic search algorithms. But, it is difficult to develop a new heuristic which is informed on various tasks. Instead, we propose a framework that trains a neural network as heuristic for the tasks it is supposed to solve. We present two reinforcement learning approaches to learn heuristics for fixed state spaces and fixed goals. Our first approach uses approximate value iteration, our second approach uses searches to generate training data. We show that in some domains our approaches outperform previous work, and we point out potentials for future improvements.
-
Patrick Ferber and Jendrik Seipp.
Explainable Planner Selection.
In Proceedings of the ICAPS 2020 Workshop on Explainable AI Planning (XAIP 2020). 2020.
(PDF) (slides; PDF) (recording; MP4) (poster; PDF) (code and data)
Since no classical planner consistently outperforms all others, it is important to select a planner that works well for a given classical planning task. The two strongest approaches for planner selection use image and graph convolutional neural networks. They have the drawback that the learned models are not interpretable. To obtain explainable models, we identify a small set of simple task features and show that elementary and interpretable machine learning techniques can use these features to solve as many tasks as the approaches based on neural networks.
-
Patrick Ferber.
Simplified Planner Selection.
In Proceedings of the ICAPS 2020 Workshop on Heuristics and Search for Domain-Independent Planning (HSDIP 2020), pp. 102-110. 2020.
(PDF) (slides; PDF) (recording; MP4) (code and data)
There exists no planning algorithm that outperforms all others. Therefore, it is important to know which algorithm works well on a task. A recently published approach uses either image or graph convolutional neural networks to solve this problem and achieves top performance. Especially the transformation from the task to an image ignores a lot of information. Thus, we would like to know what the network is learning and if this is reasonable. As this is currently not possible, we take one step back. We identify a small set of simple graph features and show that elementary and interpretable machine learning techniques can use those features to outperform the neural network based approach. Furthermore, we evaluate the importance of those features and verify that the performance of our approach is robust to changes in the training and test data.
-
Patrick Ferber, Malte Helmert and Jörg Hoffmann.
Neural Network Heuristics for Classical Planning: A Study of Hyperparameter Space.
In Frontiers in Artificial Intelligence and Applications 325 (ECAI 2020), pp. 2346-2353. 2020.
(PDF) (slides; PDF) (recording; MP4) (code) (models)
Neural networks (NN) have been shown to be powerful state-value predictors in several complex games. Can similar successes be achieved in classical planning? Towards a systematic exploration of that question, we contribute a study of hyperparameter space in the most canonical setup: input = state, feed-forward NN, supervised learning, generalization only over initial state. We investigate a broad range of hyperparameters pertaining to NN design and training. We evaluate these techniques through their use as heuristic functions in Fast Downward. The results on IPC benchmarks show that highly competitive heuristics can be learned, yielding substantially smaller search spaces than standard techniques on some domains. But the heuristic functions are costly to evaluate, and the range of domains where useful heuristics are learned is limited. Our study provides the basis for further research improving on current weaknesses.
-
Guillem Francès, Augusto B. Corrêa, Cedric Geissmann and Florian Pommerening.
Generalized Potential Heuristics for Classical Planning.
In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI 2019), pp. 5554-5561. 2019.
(PDF) (technical report; PDF) (slides IJCAI 2019; PDF) (slides GenPlan 2020; PDF) (poster; PDF)
Generalized planning aims at computing solutions that work for all instances of the same domain. In this paper, we show that several interesting planning domains possess compact generalized heuristics that can guide a greedy search in guaranteed polynomial time to the goal, and which work for any instance of the domain. These heuristics are weighted sums of state features that capture the number of objects satisfying a certain first-order logic property in any given state. These features have a meaningful interpretation and generalize naturally to the whole domain. Additionally, we present an approach based on mixed integer linear programming to compute such heuristics automatically from the observation of small training instances. We develop two variations of the approach that progressively refine the heuristic as new states are encountered. We illustrate the approach empirically on a number of standard domains, where we show that the generated heuristics will correctly generalize to all possible instances.
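The heuristics described above are weighted sums of counting features. A minimal sketch of that evaluation scheme (a toy stand-in, not the paper's first-order features or the MIP-based learning procedure) looks like this:

```python
# Hypothetical sketch of a "weighted sum of counting features" heuristic:
# each feature counts objects satisfying some property in a state, and
# the heuristic is a fixed linear combination of those counts.

def make_potential_heuristic(features, weights):
    """features: list of functions state -> int; weights: list of floats."""
    def h(state):
        return sum(w * f(state) for f, w in zip(features, weights))
    return h

# Toy "delivery" state: which location each package currently occupies.
def undelivered(state):
    return sum(1 for loc in state.values() if loc != "goal")

h = make_potential_heuristic([undelivered], [2.0])
state = {"p1": "depot", "p2": "goal", "p3": "depot"}
# Two undelivered packages with weight 2.0 -> h(state) == 4.0
```

Because the features count objects rather than refer to specific ones, the same weighted sum applies to any instance of the domain, which is what allows the heuristic to generalize.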
-
Alexander Rovner, Silvan Sievers and Malte Helmert.
Counterexample-Guided Abstraction Refinement for Pattern Selection in Optimal Classical Planning.
In Proceedings of the 29th International Conference on Automated Planning and Scheduling (ICAPS 2019), pp. 362-367. 2019.
(PDF) (technical report; PDF) (slides; PDF) (code and scripts) (data)
We describe a new algorithm for generating pattern collections for pattern database heuristics in optimal classical planning. The algorithm uses the counterexample-guided abstraction refinement (CEGAR) principle to guide the pattern selection process. Our experimental evaluation shows that a single run of the CEGAR algorithm can compute informative pattern collections in a fairly short time. Using multiple CEGAR algorithm runs, we can compute much larger pattern collections, still in shorter time than existing approaches, which leads to a planner that outperforms the state-of-the-art pattern selection methods by a significant margin.
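The CEGAR principle underlying this pattern-selection algorithm follows a generic loop: solve the abstract problem, check the abstract solution against the concrete problem, and refine the abstraction wherever it fails. A schematic sketch of that loop (with hypothetical toy callbacks, not the paper's pattern-database machinery):

```python
# Schematic counterexample-guided abstraction refinement (CEGAR) loop.
# The three callbacks are placeholders for domain-specific components.

def cegar(solve_abstract, check_on_concrete, refine, abstraction,
          max_rounds=50):
    for _ in range(max_rounds):
        plan = solve_abstract(abstraction)       # solve the abstract task
        flaw = check_on_concrete(plan)           # does the plan work concretely?
        if flaw is None:
            return abstraction, plan             # no flaw: abstraction is good
        abstraction = refine(abstraction, flaw)  # repair the detected flaw
    return abstraction, None

# Toy instantiation: the "abstraction" is a granularity level, and plans
# shorter than 3 steps are flagged as flawed until refinement fixes that.
solve = lambda a: ["noop"] * a
check = lambda plan: None if len(plan) >= 3 else "too coarse"
refine = lambda a, flaw: a + 1
final, plan = cegar(solve, check, refine, 1)
# final == 3 and len(plan) == 3
```

In the paper, each refinement step grows a pattern so that the abstraction distinguishes the states responsible for the flaw.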
Control Knowledge for Domain-independent AI Planning: KontWiss (funded by the German Research Foundation)
The aim of this project is to enhance the efficiency of heuristic forward search in AI planning by means of automatically generated control knowledge.
One objective is to define a suitable formalism which subsumes diverse families of control knowledge in a common framework. The emphasis lies on a uniform representation that allows utilizing several types of control knowledge with similar or identical methods. Another objective is the comparison of the representational power of existing approaches for generating domain knowledge, aiming at a better understanding that can be used to improve and generalize the existing techniques. Finally, we plan to enhance heuristic search algorithms with control knowledge by new methods that are orthogonal to heuristic estimates.
Contact: Gabriele Röger
Relevant publications:
-
Gabriele Röger, Florian Pommerening and Malte Helmert.
Optimal Planning in the Presence of Conditional Effects: Extending LM-Cut with Context-Splitting.
In Proceedings of the 21st European Conference on Artificial Intelligence (ECAI 2014), pp. 765-770. 2014.
(PDF)
The LM-Cut heuristic is currently the most successful heuristic in optimal STRIPS planning but it cannot be applied in the presence of conditional effects. Keyder, Hoffmann and Haslum recently showed that the obvious extensions to such effects ruin the nice theoretical properties of LM-Cut. We propose a new method based on context splitting that preserves these properties.
-
Florian Pommerening and Malte Helmert.
Incremental LM-Cut.
In Proceedings of the 23rd International Conference on Automated Planning and Scheduling (ICAPS 2013), pp. 162-170. 2013.
(PDF) (slides; PDF)
In heuristic search and especially in optimal classical planning the computation of accurate heuristic values can take up the majority of runtime. In many cases, the heuristic computations for a search node and its successors are very similar, leading to significant duplication of effort. For example most landmarks of a node that are computed by the LM-cut algorithm are also landmarks for the node's successors.
We propose to reuse these landmarks and incrementally compute new ones to speed up the LM-cut calculation. The speed advantage obtained by incremental computation is offset by higher memory usage. We investigate different search algorithms that reduce memory usage without sacrificing the faster computation, leading to a substantial increase in coverage for benchmark domains from the International Planning Competitions.
-
Yusra Alkhazraji, Martin Wehrle, Robert Mattmüller and Malte Helmert.
A Stubborn Set Algorithm for Optimal Planning.
In Proceedings of the 20th European Conference on Artificial Intelligence (ECAI 2012), pp. 891-892. 2012.
(PDF)
We adapt a partial order reduction technique based on stubborn sets, originally proposed for detecting dead ends in Petri Nets, to the setting of optimal planning. We demonstrate that stubborn sets can provide significant state space reductions on standard planning benchmarks, outperforming the expansion core method.
-
Florian Pommerening and Malte Helmert.
Optimal Planning for Delete-free Tasks with Incremental LM-cut.
In Proceedings of the 22nd International Conference on Automated Planning and Scheduling (ICAPS 2012), pp. 363-367. 2012.
(PDF) (slides; PDF)
Optimal plans of delete-free planning tasks are interesting both in domains that have no delete effects and as the relaxation heuristic h+ in general planning. Many heuristics for optimal and satisficing planning approximate the h+ heuristic, which is well-informed and admissible but intractable to compute. In this work, branch-and-bound and IDA* search are used in a search space tailored to delete-free planning together with an incrementally computed version of the LM-cut heuristic. The resulting algorithm for optimal delete-free planning exceeds the performance of A* with the LM-cut heuristic in the state-of-the-art planner Fast Downward.
-
Malte Helmert and Gabriele Röger.
Relative-Order Abstractions for the Pancake Problem.
In Proceedings of the 19th European Conference on Artificial Intelligence (ECAI 2010), pp. 745-750. 2010.
(PDF)
The pancake problem is a famous search problem where the objective is to sort a sequence of objects (pancakes) through a minimal number of prefix reversals (flips). The best approaches for the problem are based on heuristic search with abstraction (pattern database) heuristics. We present a new class of abstractions for the pancake problem called relative-order abstractions. Relative-order abstractions have three advantages over the object-location abstractions considered in previous work. First, they are size-independent, i.e., do not need to be tailored to a particular instance size of the pancake problem. Second, they are more compact in that they can represent a larger number of pancakes within abstractions of bounded size. Finally, they can exploit symmetries in the problem specification to allow multiple heuristic lookups, significantly improving search performance over a single lookup. Our experiments show that compared to object-location abstractions, our new techniques lead to an improvement of one order of magnitude in runtime and up to three orders of magnitude in the number of generated states.
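For readers unfamiliar with the pancake problem, a tiny sketch of its move operator together with the well-known "gap" heuristic (standard background for this benchmark, not the relative-order abstractions contributed by the paper):

```python
# The pancake problem: sort a stack by prefix reversals (flips).

def flip(seq, k):
    """Reverse the first k pancakes of the stack (one flip)."""
    return tuple(reversed(seq[:k])) + seq[k:]

def gap_heuristic(seq):
    """Admissible estimate: count adjacent pairs that are not
    consecutive numbers, including a virtual 'plate' below the stack."""
    s = seq + (len(seq),)  # append the plate (largest element + 1)
    return sum(1 for a, b in zip(s, s[1:]) if abs(a - b) != 1)

sorted_stack = (0, 1, 2, 3)
flipped = flip(sorted_stack, 2)   # -> (1, 0, 2, 3)
# gap_heuristic(sorted_stack) == 0; gap_heuristic(flipped) == 1
```

Every flip can remove at most one such gap, which is why the count is a lower bound on the number of flips needed.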
-
Gabriele Röger and Malte Helmert.
The More, the Merrier: Combining Heuristic Estimators for Satisficing Planning.
In Proceedings of the 20th International Conference on Automated Planning and Scheduling (ICAPS 2010), pp. 246-249. 2010.
(PDF) (technical report; PDF)
We empirically examine several ways of exploiting the information of multiple heuristics in a satisficing best-first search algorithm, comparing their performance in terms of coverage, plan quality, speed, and search guidance. Our results indicate that using multiple heuristics for satisficing search is indeed useful. Among the combination methods we consider, the best results are obtained by the alternation method of the "Fast Diagonally Downward" planner.
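The alternation method mentioned above keeps one open list per heuristic and expands from the lists in round-robin order. A self-contained sketch of that combination scheme (a greedy best-first toy on integers, not the planner's implementation):

```python
import heapq
import itertools

def alternation_search(start, successors, is_goal, heuristics):
    """Greedy best-first search alternating between one open list per
    heuristic, expanding from the lists in round-robin order."""
    counter = itertools.count()        # tie-breaker for equal h-values
    opens = [[] for _ in heuristics]
    for i, h in enumerate(heuristics):
        heapq.heappush(opens[i], (h(start), next(counter), start, [start]))
    closed = set()
    for open_list in itertools.cycle(opens):
        if all(not o for o in opens):
            return None                # all lists exhausted: no solution
        if not open_list:
            continue
        _, _, state, path = heapq.heappop(open_list)
        if is_goal(state):
            return path
        if state in closed:
            continue
        closed.add(state)
        for succ in successors(state):
            if succ not in closed:
                # Every successor is offered to every open list.
                for i, h in enumerate(heuristics):
                    heapq.heappush(opens[i],
                                   (h(succ), next(counter), succ,
                                    path + [succ]))

# Toy search: reach 5 from 0 via +1/+2 steps, combining an informed
# heuristic with a blind one.
path = alternation_search(0, lambda n: [n + 1, n + 2],
                          lambda n: n == 5,
                          [lambda n: abs(5 - n), lambda n: 0])
```

Each heuristic gets an equal share of expansions, so one misleading estimator cannot starve the others.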
Directed Model Checking
Model checking describes the task of checking whether a given (mathematical) model of a system satisfies a given property. Directed model checking is a variant of model checking which is optimized towards efficiently detecting error states in faulty systems. Directed model checking can be considered as the application of heuristic search to model checking. In this approach, the exploration of the state space is (heuristically) guided towards error states. As a consequence, error states can often be found by only exploring a small part of the entire state space.
Mcta is a directed model checking tool for real-time systems modeled as networks of timed automata. It is implemented in Python and C++. Mcta is released under the terms of the GPL. Further information (including the sources and precompiled binaries) can be obtained from the Mcta website.
Further information: Mcta website
Contact: Martin Wehrle
European Robotic Goal-oriented Autonomous Controller: ERGO (funded by the European Union)
In recent years, there has been an increasing interest in the autonomy of space missions such as earth observation, space station operations, planetary robotic exploration, and deep space probes. The capabilities of such systems have grown drastically, but the dependency on human supervision slows down many space missions significantly.
The ERGO project is part of the European strategic research cluster on "Space Robotic Technologies". Its main goal is to realise a software framework for the development of highly autonomous space robotics missions. Given a high-level goal, the robot will plan and schedule actions such that the goal is achieved under consideration of temporal, spatial and resource constraints. Uncertainty in the plan execution is met with a monitoring and replanning approach.
Limited on-board resources and the need for real-time behaviour will force the planner to compromise between investment of time and energy in planning versus spending resources on execution. An optimal plan is therefore a plan that optimally balances the demand for planning-time resources against the anticipated demand for execution-time resources. Planning techniques where a resource-intensive precomputation (which can be performed on-ground) allows for informative yet compact guidance of the on-board planner are therefore the key challenge of our research.
Contact: Thomas Keller
Relevant publications:
-
Amanda Coles, Andrew Coles, Moisés Martínez, Emre Savas, Thomas Keller, Florian Pommerening and Malte Helmert.
On-board Planning for Robotic Space Missions using Temporal PDDL.
In Proceedings of the 11th International Workshop on Planning and Scheduling for Space (IWPSS 2019), pp. 34-42. 2019.
(PDF)
The desire for persistent autonomy in ambitious robotic space missions has seen renewed interest in the use of on-board planning. This poses particular challenges in terms of the limited computational power of radiation-hardened hardware, and also the need to integrate with functions external to the planner itself. In this paper, we discuss the development of a forward-chaining temporal planner for use in this setting. We present results illustrating its lightweight resource footprint compared to contemporary planners, and its application to two proposed robotic space scenarios.
-
Jorge Ocón, Francisco Colmenero, Iulia Dragomir, Enrique Heredia, Mercedes Alonso, Joaquin Estremera, Robert Marc, Piotr Weclewski, Thomas Keller, Mark Woods and Spyros Karachalios.
Testing Autonomous Robots: A Discussion on Performances Obtained During the ERGO Field Tests.
In Proceedings of the 15th Symposium on Advanced Space Technologies in Robotics and Automation (ASTRA 2019). 2019.
(PDF)
The European Robotic Goal-Oriented Autonomous Controller (ERGO) (http://www.h2020-ergo.eu) is one of the PERASPERA SRC first call projects. The focus of this project is to develop a framework for long-range autonomy that allows commanding a spacecraft via high-level goals. The developed ERGO framework achieves this aim by providing a new paradigm based on components and tools. Two main components, that include Artificial Intelligence technology, are at the core of ERGO: an on-board mission planner, able to dynamically generate plans on-board from high-level goals, and a scientific agent, that detects targets of interest from images. Additionally, other autonomous capabilities that can be used and tailored for different robotics platforms, such as rover navigation and robotic arm motion planning, are provided by ERGO. In this paper we discuss the results and performances achieved in the field tests of two applications developed with the ERGO framework: an orbital and a planetary mission.
-
Jorge Ocón, Francisco Colmenero, Joaquin Estremera, Karl Buckley, Mercedes Alonso, Enrique Heredia, Javier Garcia, Andrew Coles, Amanda Coles, Moises Martinez Munoz, Emre Savas, Florian Pommerening, Thomas Keller, Spyros Karachalios, Mark Woods, Iulia Dragomir, Saddek Bensalem, Pierre Dissaux and Arnaud Schach.
The ERGO Framework and its Use in Planetary/Orbital Scenarios.
In Proceedings of the 69th International Astronautical Congress (IAC 2018). 2018.
(PDF)
ERGO (European Robotic Goal-Oriented Autonomous Controller) is one of the six space robotic projects in the frame of the first call of the PERASPERA SRC. ERGO is aimed at future space missions, in which space robots will require a higher level of autonomy (e.g. Exomars or Mars2020). As a framework, ERGO provides a set of components that can be reused and tailored for robotic space missions (Orbital, Deep Space or Planetary Explorations) in which the on-board system has to work autonomously, performing complex operations in hazardous environments without human intervention. The concept of autonomy can be applied to a whole set of operations to be performed on-board with no human supervision, such as Martian exploration rovers, deep space probes, or in-orbit assembly robots. In the last decades, the advantages of increasing the level of autonomy in spacecraft have been demonstrated in planetary rovers. At the same time, orbital space missions have already successfully applied autonomy concepts on board, in particular for autonomous event detection and on-board activities planning.
ERGO provides a framework for on-board autonomy systems based on a specific paradigm aimed to facilitate an easy integration and/or expansion covering future mission needs; by using this paradigm, both reactive and deliberative capabilities can be orchestrated on-board. In ERGO, deliberative capabilities are provided via AI techniques: automated planning and machine-learning based vision systems. ERGO also provides a set of tools for developing safety-critical space mission applications and FDIR systems. Moreover, specific components for motion planning, path planning, hazard avoidance and trajectory control are also part of the framework. Finally, ERGO is integrated with the TASTE middleware. All ERGO components are now being tested in an orbital and a planetary scenario.
This paper discusses the ERGO components, its main characteristics, and how they have been applied to an orbital and a planetary scenario. It provides an overview of the evolution of the ERGO system; its main components and the future extensions planned for it.
-
Jorge Ocón, Karl Buckley, Francisco Colmenero, Saddek Bensalem, Iulia Dragomir, Spyros Karachalios, Mark Woods, Florian Pommerening and Thomas Keller.
Using the ERGO Framework for Space Robotics in a Planetary and an Orbital Scenario.
In Proceedings of the 14th International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS 2018). 2018.
(PDF)
The European Robotic Goal-Oriented Autonomous Controller ERGO is one of the six space robotic projects in the frame of the PERASPERA SRC. Its main objective is to provide an autonomous framework for future space robots that will be able to perform their activities without the need for constant human supervision and control. Future space missions, in particular those aimed at Deep Space or planetary exploration, such as Exomars or Mars2020, demand a greater level of autonomy. The concept of autonomy applies here to a whole set of operations to be performed on-board without human supervision; for instance, a Martian rover has to avoid getting stuck in the sand while traversing, autonomously recharge its batteries periodically, and communicate with Earth occasionally each sol.
Additionally, it will need to be able to detect serendipitous events (e.g. a rock that has a specific property). A deep space probe has to take the right measurements to approach an asteroid, and due to the latency of the communication with Ground, these measurements need to be taken autonomously on board. Orbital space missions have already successfully applied autonomy concepts on board, in particular for autonomous event detection and on-board activities planning.
In ERGO we provide a framework for autonomy aimed to cover a wide set of capabilities, ranging from reactive capabilities (i.e. capabilities that demand a quick response) to deliberative capabilities (that consider different courses of actions, and evaluate among the different possibilities the best alternative).
This paper will discuss the process of the design of robotic systems using the paradigm provided by this framework applied to two different scenarios: a Sample Fetching Rover (SFR), and also an On-Orbit Servicing mission, where a damaged spacecraft can have one or several of its modules replaced autonomously by a servicer spacecraft. We will describe the methodology, the main problems found, the design decisions taken to overcome these problems, as well as an overview of the final design of both systems.
-
Jorge Ocón, Juan Manuel Delfa, Alberto Medina, Daisy Lachat, Robert Marc, Mark Woods, Ian Wallace, Andrew Coles, Amanda Coles, Derek Long, Thomas Keller, Malte Helmert and Saddek Bensalem.
ERGO: A Framework for the Development of Autonomous Robots.
In Proceedings of the 14th ESA Symposium on Advanced Space Technologies for Robotics and Automation (ASTRA 2017). 2017.
(PDF)
The European Robotic Goal-Oriented Autonomous Controller ERGO (http://www.h2020-ergo.eu/) is one of the six space robotic projects in the frame of the PERASPERA SRC (http://www.h2020-peraspera.eu/). Its goal is to provide an Autonomy Framework capable of operating at different levels of autonomy, from tele-operations to full on-board autonomy. Even though it has been originally conceived for space robotics, its domain independent design facilitates its application to any terrestrial robotic system. This paper presents the approach followed, current status and future steps.
Reasoning about Plans and Heuristics for Planning and Combinatorial Search: RAPAHPACS (funded by the Swiss National Science Foundation)
The most successful AI planning algorithms of the last decade fall into the general class of heuristic search methods, usually based on A* or a related search algorithm. Hence, much of the classical planning research has been concerned with developing better and better heuristics and furthering our theoretical understanding of existing heuristics. This has led to increasingly more sophisticated approaches. A large amount of human ingenuity has gone into devising today's advanced approaches and developing the underlying theory, especially in the case of optimal planning, where heuristics must satisfy a lower-bound property (be admissible) in order to be safe to use.
The central idea of this project is to shift this burden of sophistication and ingenuity away from the human heuristic designer and onto the machines that perform the heuristic search. Equipped with appropriate reasoning algorithms and representations, computers can derive knowledge about planning tasks more quickly, more reliably and in more complex settings than we humans are capable of. A planning researcher or practitioner should be able to focus on which information a planning algorithm ought to exploit, without having to worry about how to exploit it in the best way. In other words, we are suggesting a declarative approach to heuristic search planning that enables planning algorithms to reason about plans and heuristics in powerful and general ways.
A key research challenge in this endeavour is identifying an appropriate level of representation at which this declarative information is specified. It should be general enough to capture the concepts that today's state-of-the-art planning algorithms rely on in order to obtain accurate heuristic estimates, yet limited enough to afford efficient reasoning algorithms. In this project, we explore this design space in depth, both theoretically and with an emphasis on efficient practical algorithms.
Contact: Malte Helmert and Gabriele Röger
Relevant publications:
-
Jendrik Seipp, Thomas Keller and Malte Helmert.
A Comparison of Cost Partitioning Algorithms for Optimal Classical Planning.
In Proceedings of the 27th International Conference on Automated Planning and Scheduling (ICAPS 2017), pp. 259-268. 2017.
(PDF) (slides; PDF) (recording) (poster; PDF)
Cost partitioning is a general and principled approach for constructing additive admissible heuristics for state-space search. Cost partitioning approaches for optimal classical planning include optimal cost partitioning, uniform cost partitioning, zero-one cost partitioning, saturated cost partitioning, post-hoc optimization and the canonical heuristic for pattern databases. We compare these algorithms theoretically, showing that saturated cost partitioning dominates greedy zero-one cost partitioning. As a side effect of our analysis, we obtain a new cost partitioning algorithm dominating uniform cost partitioning. We also evaluate these algorithms experimentally on pattern databases, Cartesian abstractions and landmark heuristics, showing that saturated cost partitioning is usually the method of choice on the IPC benchmark suite.
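The shared idea behind all these algorithms is that each component heuristic is charged only part of every action's cost, so the heuristic values can be summed admissibly. A minimal sketch of the saturated variant (with hypothetical toy heuristics standing in for abstraction heuristics):

```python
# Sketch of saturated cost partitioning: heuristics are processed in
# order, each consumes only the costs it reports as needed (its
# "saturated" costs), and the next heuristic sees the remaining costs.

def saturated_cost_partition(heuristics, costs):
    """heuristics: functions mapping a cost dict to
    (heuristic_value, saturated_costs)."""
    remaining = dict(costs)
    total = 0.0
    for h in heuristics:
        value, used = h(remaining)
        total += value
        # Subtract the consumed costs before evaluating the next heuristic.
        remaining = {a: c - used.get(a, 0) for a, c in remaining.items()}
    return total

# Hypothetical toy heuristics: h1 needs only action 'a'; h2 would use
# both 'a' and 'b' but must make do with whatever cost is left.
h1 = lambda c: (c["a"], {"a": c["a"]})
h2 = lambda c: (c["a"] + c["b"], {"a": c["a"], "b": c["b"]})
# Costs a=3, b=2: h1 takes 3 from 'a'; h2 then sees a=0, b=2.
total = saturated_cost_partition([h1, h2], {"a": 3, "b": 2})
# total == 5.0
```

Since no action's cost is charged more than once across the partition, the sum remains a lower bound whenever each component is admissible under its own cost function.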
-
Silvan Sievers, Martin Wehrle, Malte Helmert and Michael Katz.
Strengthening Canonical Pattern Databases with Structural Symmetries.
In Proceedings of the ICAPS-2017 Workshop on Heuristics and Search for Domain-independent Planning (HSDIP 2017), pp. 75-83. 2017.
Superseded by the SoCS 2017 paper by the same name.
(PDF) (slides; PDF)
Symmetry-based state space pruning techniques have proved to greatly improve heuristic search based classical planners. Similarly, abstraction heuristics in general and pattern databases in particular are key ingredients of such planners. However, only little work has dealt with how the abstraction heuristics behave under symmetries. In this work, we investigate the symmetry properties of the popular canonical pattern databases heuristic. Exploiting structural symmetries, we strengthen the canonical pattern databases by adding symmetric pattern databases, making the resulting heuristic invariant under structural symmetry, thus making it especially attractive for symmetry-based pruning search methods. Further, we prove that this heuristic is at least as informative as using symmetric lookups over the original heuristic. An experimental evaluation confirms these theoretical results.
-
Malte Helmert, Nathan R. Sturtevant and Ariel Felner.
On Variable Dependencies and Compressed Pattern Databases.
In Proceedings of the 10th Annual Symposium on Combinatorial Search (SoCS 2017), pp. 129-133. 2017.
(PDF) (slides; PDF)
Pattern databases are among the strongest known heuristics for many classical search benchmarks such as sliding-tile puzzles, the 4-peg Towers of Hanoi puzzles, Rubik's Cube, and TopSpin. Min-compression is a generally applicable technique for augmenting pattern database heuristics that has led to marked experimental improvements in some settings, while being ineffective in others. We provide a theoretical explanation for these experimental phenomena by studying the interaction between the ranking function used to order abstract states in a pattern database, the compression scheme used to abstract states, and the dependencies between state variables in the problem representation.
-
Silvan Sievers, Martin Wehrle, Malte Helmert and Michael Katz.
Strengthening Canonical Pattern Databases with Structural Symmetries.
In Proceedings of the 10th Annual Symposium on Combinatorial Search (SoCS 2017), pp. 91-99. 2017.
(PDF) (slides; PDF) (code, scripts and data)
Symmetry-based state space pruning techniques have proved to greatly improve heuristic search based classical planners. Similarly, abstraction heuristics in general and pattern databases in particular are key ingredients of such planners. However, only little work has dealt with how the abstraction heuristics behave under symmetries. In this work, we investigate the symmetry properties of the popular canonical pattern databases heuristic. Exploiting structural symmetries, we strengthen the canonical pattern databases by adding symmetric pattern databases, making the resulting heuristic invariant under structural symmetry, thus making it especially attractive for symmetry-based pruning search methods. Further, we prove that this heuristic is at least as informative as using symmetric lookups over the original heuristic. An experimental evaluation confirms these theoretical results.
-
Jendrik Seipp.
Better Orders for Saturated Cost Partitioning in Optimal Classical Planning.
In Proceedings of the 10th Annual Symposium on Combinatorial Search (SoCS 2017), pp. 149-153. 2017.
(PDF) (slides; PDF)
Cost partitioning is a general method for adding multiple heuristic values admissibly. In the setting of optimal classical planning, saturated cost partitioning has recently been shown to be the cost partitioning algorithm of choice for pattern database heuristics found by hill climbing, systematic pattern database heuristics and Cartesian abstraction heuristics. To evaluate the synergy of the three heuristic types, we compute the saturated cost partitioning over the combined sets of heuristics and observe that the resulting heuristic is outperformed by the heuristic that simply maximizes over the three saturated cost partitioning heuristics computed separately for each heuristic type. Our new algorithm for choosing the orders in which saturated cost partitioning considers the heuristics allows us to compute heuristics outperforming not only the maximizing heuristic but even state-of-the-art planners.
-
Gerald Paul, Gabriele Röger, Thomas Keller and Malte Helmert.
Optimal Solutions to Large Logistics Planning Domain Problems.
In Proceedings of the 10th Annual Symposium on Combinatorial Search (SoCS 2017), pp. 73-81. 2017.
(PDF) (technical report; PDF)
We propose techniques for efficiently determining optimal solutions to large logistics planning domain problems. We map a problem instance to a directed graph and show that no more than one vehicle per weakly connected component of the graph is needed for an optimal solution. We propose techniques for efficiently finding the vehicles which must be employed for an optimal solution. We also develop a strong admissible heuristic based on the analysis of a directed graph, the cycles of which represent situations in the problem state in which a vehicle must visit a location more than once. To the best of our knowledge, ours is the first method that determines optimal solutions for large logistics instances (including the largest instances in the IPC 1998 and IPC 2000 problem sets).
-
Nathan R. Sturtevant, Ariel Felner and Malte Helmert.
Value Compression of Pattern Databases.
In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI 2017), pp. 912-918. 2017.
(PDF)
One common pattern database compression technique is to merge adjacent database entries and store the minimum of merged entries to maintain heuristic admissibility. In this paper we propose a compression technique that preserves every entry, but reduces the number of bits used to store each entry, therefore limiting the values that can be represented. Even when this technique throws away low values in the heuristic, it can still have better performance than the traditional approach. We develop a theoretical basis for selecting which values to keep and show improved performance in both unidirectional and bidirectional search.
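The value-compression idea can be sketched in a few lines: restrict the set of representable heuristic values and round every entry down to a representable one, so the compressed table never overestimates. This is an illustrative toy (with a hypothetical representable set), not the paper's encoding:

```python
# Sketch of value compression: store each pattern-database entry with
# fewer bits by rounding it down to the nearest representable value,
# which preserves admissibility (values only ever decrease).

def compress(pdb, representable):
    """Round each entry down to the largest representable value that
    does not exceed it (0 if none exists)."""
    reps = sorted(representable)
    def round_down(v):
        best = 0
        for r in reps:
            if r <= v:
                best = r
        return best
    return {state: round_down(v) for state, v in pdb.items()}

pdb = {"s1": 0, "s2": 3, "s3": 7, "s4": 12}
# Two bits per entry allow four representable values, e.g. {0, 4, 8, 12}:
compressed = compress(pdb, [0, 4, 8, 12])
# -> {"s1": 0, "s2": 0, "s3": 4, "s4": 12}
```

The choice of which values to keep is exactly the trade-off the paper analyzes: discarding low values costs little guidance near the goal while freeing bits for the large values that matter.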
-
Jendrik Seipp, Florian Pommerening, Gabriele Röger and Malte Helmert.
Correlation Complexity of Classical Planning Domains.
In Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI 2016), pp. 3242-3250. 2016.
(PDF) (slides; PDF) (poster; PDF)
We analyze how complex a heuristic function must be to directly guide a state-space search algorithm towards the goal. As a case study, we examine functions that evaluate states with a weighted sum of state features. We measure the complexity of a domain by the complexity of the required features. We analyze conditions under which the search algorithm runs in polynomial time and show complexity results for several classical planning domains.
-
Gerald Paul and Malte Helmert.
Optimal Solitaire Game Solutions using A* Search and Deadlock Analysis.
In Proceedings of the 9th Annual Symposium on Combinatorial Search (SoCS 2016), pp. 135-136. 2016.
A non-archival, but longer version of this paper has been published at the HSDIP workshop of ICAPS 2016. See the separate entry for that paper. We recommend citing the SoCS paper in scholarly publications.
(PDF)
We propose an efficient method for determining optimal solutions to such skill-based solitaire card games as Freecell. We use A* search with an admissible heuristic function based on analyzing a directed graph whose cycles represent deadlock situations in the game state. To the best of our knowledge, ours is the first algorithm that efficiently determines optimal solutions for Freecell games. We believe that the underlying ideas should be applicable not only to games but also to other classical planning problems which manifest deadlocks.
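The deadlock analysis rests on a simple graph property: if the "must move before" relations between cards form a cycle, some card has to be moved more than once. A standard DFS cycle check (illustrative background with a hypothetical toy graph, not the paper's heuristic computation) captures the core test:

```python
# Detect a cycle in a directed graph of blocking relations, where an
# edge X -> Y means "X must be moved before Y can be moved". A cycle
# indicates a deadlock: some card must be moved more than once.

def has_cycle(graph):
    """graph: dict mapping each node to a list of its successors."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}
    def dfs(n):
        color[n] = GRAY                 # node is on the current DFS path
        for m in graph[n]:
            if color[m] == GRAY:        # back edge: cycle found
                return True
            if color[m] == WHITE and dfs(m):
                return True
        color[n] = BLACK                # node fully explored
        return False
    return any(color[n] == WHITE and dfs(n) for n in graph)

acyclic = {"A": ["B"], "B": ["C"], "C": []}
deadlocked = {"A": ["B"], "B": ["C"], "C": ["A"]}
# has_cycle(acyclic) is False; has_cycle(deadlocked) is True
```

Counting such independent cycles yields extra moves that any solution must pay for, which is what makes the resulting heuristic admissible.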
-
Silvan Sievers, Martin Wehrle and Malte Helmert.
An Analysis of Merge Strategies for Merge-and-Shrink Heuristics.
In Proceedings of the 26th International Conference on Automated Planning and Scheduling (ICAPS 2016), pp. 294-298. 2016.
(Show abstract) (Hide abstract) (PDF) (technical report; PDF) (slides; PDF) (poster; PDF) (code, scripts and data)
The merge-and-shrink framework provides a general basis for the computation of abstraction heuristics for factored transition systems. Recent experimental and theoretical research demonstrated the utility of non-linear merge strategies, which have not been studied in depth. We experimentally analyze the quality of state-of-the-art merge strategies by comparing them to random strategies and with respect to tie-breaking, showing that there is considerable room for improvement. We finally describe a new merge strategy that experimentally outperforms the current state of the art.
-
Gerald Paul and Malte Helmert.
Optimal Solitaire Game Solutions using A* Search and Deadlock Analysis.
In Proceedings of the ICAPS-2016 Workshop on Heuristics and Search for Domain-independent Planning (HSDIP 2016), pp. 52-60. 2016.
An archival, but shorter version of this paper has been published at SoCS 2016. See the separate entry for that paper. We recommend citing the SoCS paper in scholarly publications.
(Show abstract) (Hide abstract) (PDF)
We propose and implement an efficient method for determining optimal solutions to such skill-based solitaire card games as Freecell and King Albert solitaire. We use A* search together with an admissible heuristic function that is based on analyzing a directed graph whose cycles represent deadlock situations in the game state. To the best of our knowledge, ours is the first algorithm that efficiently determines optimal solutions for Freecell games. We believe that the underlying ideas should be applicable not only to games but also to other classical planning problems which manifest deadlocks.
-
Jendrik Seipp, Florian Pommerening, Gabriele Röger and Malte Helmert.
Correlation Complexity of Classical Planning Domains.
In Proceedings of the ICAPS-2016 Workshop on Heuristics and Search for Domain-independent Planning (HSDIP 2016), pp. 12-20. 2016.
Superseded by the IJCAI 2016 paper by the same name.
(Show abstract) (Hide abstract) (PDF) (slides; PDF)
We analyze how complex a heuristic function must be to directly guide a state-space search algorithm towards the goal. As a case study, we examine functions that evaluate states with a weighted sum of state features. We measure the complexity of a domain by the complexity of the required features. We analyze conditions under which the search algorithm runs in polynomial time and show complexity results for several classical planning domains.
Safe Pruning in Optimal State Space Search: SPOSSS (funded by the Swiss National Science Foundation)
This project investigates pruning techniques in optimal state space planning. In particular, we consider techniques based on partial order reduction and symmetry reduction.
Partial order reduction was originally proposed in the area of computer-aided verification. The basic idea is to tackle the state explosion problem by avoiding the blow-up induced by independent actions. Recent results have shown that such techniques can be powerful in automated planning as well. In SPOSSS, we investigate partial order reduction techniques for computer-aided verification and for planning, both theoretically and practically.
Like partial order reduction, symmetry reduction was originally proposed in the area of computer-aided verification. Whereas partial order reduction prunes equivalent (redundant) action sequences, symmetry reduction computes a quotient structure of the state space in which symmetrical states are considered equivalent. In SPOSSS, we investigate symmetry reduction techniques and their applications to planning.
Contact: Malte Helmert and Martin Wehrle
Relevant publications: (Show) (Hide)
-
Alexander Shleyfman, Michael Katz, Malte Helmert, Silvan Sievers and Martin Wehrle.
Heuristics and Symmetries in Classical Planning.
In Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI 2015), pp. 3371-3377. 2015.
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (technical report; PDF)
Heuristic search is a state-of-the-art approach to classical planning. Several heuristic families were developed over the years to automatically estimate goal distance information from problem descriptions. Orthogonally to the development of better heuristics, recent years have seen an increasing interest in symmetry-based state space pruning techniques that aim at reducing the search effort. However, little work has dealt with how the heuristics behave under symmetries. We investigate the symmetry properties of existing heuristics and reveal that many of them are invariant under symmetries.
-
Robert C. Holte, Yusra Alkhazraji and Martin Wehrle.
A Generalization of Sleep Sets Based on Operator Sequence Redundancy.
In Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI 2015), pp. 3291-3297. 2015.
(Show abstract) (Hide abstract) (PDF)
Pruning techniques have recently been shown to speed up search algorithms by reducing the branching factor of large search spaces. One such technique is sleep sets, which were originally introduced as a pruning technique for model checking, and which have recently been investigated on a theoretical level for planning. In this paper, we propose a generalization of sleep sets and prove its correctness. While the original sleep sets were based on the commutativity of operators, generalized sleep sets are based on a more general notion of operator sequence redundancy. As a result, our approach dominates the original sleep sets variant in terms of pruning power. On a practical level, our experimental evaluation shows the potential of sleep sets and their generalizations on a large and common set of planning benchmarks.
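The classic sleep-set idea that this paper generalizes can be sketched in a few lines. This is our own illustrative rendition under simplifying assumptions: `successors(s)` returns (operator, child) pairs and `commutes` is a given predicate; the paper replaces the commutativity test with the more general operator sequence redundancy:

```python
# Illustrative sketch of classic sleep-set pruning in depth-first
# search.  An operator in the sleep set is skipped because an
# equivalent interleaving has already been explored elsewhere.
def sleep_set_dfs(state, sleep, successors, commutes, visited=None):
    if visited is None:
        visited = set()
    visited.add(state)
    done = []  # operators already expanded at this node
    for op, child in successors(state):
        if op in sleep:
            continue  # pruned
        # the child inherits every sleeping/earlier operator that
        # commutes with the chosen operator
        child_sleep = {o for o in set(sleep) | set(done) if commutes(o, op)}
        if child not in visited:
            sleep_set_dfs(child, child_sleep, successors, commutes, visited)
        done.append(op)
    return visited
```

On a diamond of two commuting operators, only one of the two interleavings is fully expanded, yet all states remain reachable.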
-
Silvan Sievers, Martin Wehrle and Malte Helmert.
Bounded Intention Planning Revisited.
In Proceedings of the 21st European Conference on Artificial Intelligence (ECAI 2014), pp. 1097-1098. 2014.
Erratum: On the first page, right column, the paper describes a zero-cost "freeze operator intention" operator Freeze(v, x). In addition to the stated precondition and effect, there should also be a prevail condition: prv[v]=x.
(Show abstract) (Hide abstract) (PDF) (technical report; PDF) (poster; PDF)
Bounded intention planning provides a pruning technique for optimal planning that was proposed several years ago. In addition, partial order reduction techniques based on stubborn sets have recently been investigated for this purpose. In this paper, we revisit bounded intention planning in the light of stubborn sets.
-
Stephan Arlt, Andreas Podelski and Martin Wehrle.
Reducing GUI Test Suites via Program Slicing.
In Proceedings of the International Symposium on Software Testing and Analysis (ISSTA 2014), pp. 270-281. 2014.
(Show abstract) (Hide abstract) (PDF)
A crucial problem in GUI testing is the identification of accurate event sequences that encode corresponding user interactions with the GUI. Ultimately, event sequences should be both feasible (i.e., executable on the GUI) and relevant (i.e., cover as much of the code as possible). So far, most work on GUI testing focused on approaches to generate feasible event sequences. In addition, based on event dependency analyses, a recently proposed static analysis approach systematically aims at selecting both relevant and feasible event sequences. However, statically analysing event dependencies can cause the generation of a huge number of event sequences, leading to unmanageable GUI test suites that are not executable within reasonable time.
In this paper we propose a refined static analysis approach based on program slicing. On the theoretical side, our approach identifies and eliminates redundant event sequences in GUI test suites. Redundant event sequences have the property that they are guaranteed not to affect the test effectiveness. On the practical side, we have implemented a slicing-based test suite reduction algorithm that approximately identifies redundant event sequences. Our experiments on six open source GUI applications show that our reduction algorithm significantly reduces the size of GUI test suites. As a result, the overall execution time could be significantly reduced without losing test effectiveness.
-
Yusra Alkhazraji, Michael Katz, Robert Mattmüller, Florian Pommerening, Alexander Shleyfman and Martin Wehrle.
Metis: Arming Fast Downward with Pruning and Incremental Computation.
In Eighth International Planning Competition (IPC 2014), Deterministic Part, pp. 88-92. 2014.
(PDF)
-
Martin Wehrle and Malte Helmert.
Efficient Stubborn Sets: Generalized Algorithms and Selection Strategies.
In Proceedings of the 24th International Conference on Automated Planning and Scheduling (ICAPS 2014), pp. 323-331. 2014.
(Show abstract) (Hide abstract) (PDF)
Strong stubborn sets have recently been analyzed and successfully applied as a pruning technique for planning as heuristic search. Strong stubborn sets are defined declaratively as constraints over operator sets. We show how these constraints can be relaxed to offer more freedom in choosing stubborn sets while maintaining the correctness and optimality of the approach. In general, many operator sets satisfy the definition of stubborn sets. We study different strategies for selecting among these possibilities and show that existing approaches can be considerably improved by rather simple strategies, eliminating most of the overhead of the previous state of the art.
-
Manuel Heusner, Martin Wehrle, Florian Pommerening and Malte Helmert.
Under-Approximation Refinement for Classical Planning.
In Proceedings of the 24th International Conference on Automated Planning and Scheduling (ICAPS 2014), pp. 365-369. 2014.
(Show abstract) (Hide abstract) (PDF)
A general and important problem of search-based planning techniques is the state explosion problem, which is usually tackled with approaches to reduce the branching factor of the planning task. Such approaches often implicitly exploit the observation that the number of available operators is higher than the number of operators that are actually needed to find a plan. In this paper, we propose a simple, but general under-approximation refinement framework for satisficing planning that explicitly exploits this observation. Our approach iteratively searches for plans with operator subsets, which are refined if necessary by adding operators that appear to be needed. Our evaluation shows that even a straightforward instantiation of this framework yields a competitive planner that often finds plans with small operator sets.
-
Martin Wehrle, Malte Helmert, Yusra Alkhazraji and Robert Mattmüller.
The Relative Pruning Power of Strong Stubborn Sets and Expansion Core.
In Proceedings of the 23rd International Conference on Automated Planning and Scheduling (ICAPS 2013), pp. 251-259. 2013.
(Show abstract) (Hide abstract) (PDF) (technical report; PDF)
In recent years, pruning techniques based on partial order reduction have found increasing attention in the planning community. One recent result is that the expansion core method is a special case of the strong stubborn sets method proposed in model checking. However, it is still an open question if there exist efficiently computable strong stubborn sets with strictly higher pruning power than expansion core. In this paper, we prove that the pruning power of strong stubborn sets is strictly higher than the pruning power of expansion core even for a straightforward instantiation of strong stubborn sets. This instantiation is as efficiently computable as expansion core. Hence, our theoretical results suggest that strong stubborn sets should be preferred to expansion core. Our empirical evaluation on all optimal benchmarks from the international planning competitions up to 2011 supports the theoretical results.
State Space Exploration: Principles, Algorithms and Applications: SSX (funded by the European Research Council)
State-space search, i.e., finding paths in huge, implicitly given graphs, is a fundamental problem in artificial intelligence and other areas of computer science. State-space search algorithms like A*, IDA* and greedy best-first search are major success stories in artificial intelligence. However, despite their success, these algorithms have deficiencies that have not been sufficiently addressed in the past:
1. They explore a monolithic model of the world rather than applying a factored perspective.
2. They do not learn from mistakes and hence tend to commit the same mistake repeatedly.
3. For satisficing (i.e., suboptimal) search, the design of the major algorithms like greedy best-first search has been based on rather ad-hoc intuitions.
In this project, we target these three deficiencies. We develop a theory of factored state-space search, a theory of learning from information gathered during search, as well as a decision-theoretic foundation for satisficing search algorithms. Based on these insights, the project aims at designing new state-space search algorithms that improve on the current state of the art.
Contact: Malte Helmert
Relevant publications: (Show) (Hide)
-
Silvan Sievers, Michael Katz, Shirin Sohrabi, Horst Samulowitz and Patrick Ferber.
Deep Learning for Cost-Optimal Planning: Task-Dependent Planner Selection.
In Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI 2019), pp. 7715-7723. 2019.
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (poster; PDF) (code, scripts and data)
As classical planning is known to be computationally hard, no single planner is expected to work well across many planning domains. One solution to this problem is to use online portfolio planners that select a planner for a given task. These portfolios perform a classification task, a well-known and well-researched task in the field of machine learning. The classification is usually performed using a representation of planning tasks with a collection of hand-crafted statistical features. Recent techniques in machine learning that are based on automatic extraction of features have not been employed yet due to the lack of suitable representations of planning tasks. In this work, we alleviate this barrier. We suggest representing planning tasks by images, allowing us to exploit arguably one of the most commonly used and best developed techniques in deep learning. We explore some of the questions that inevitably arise when applying such a technique, and present various ways of building practically useful online portfolio-based planners. Evidence for the usefulness of our proposed technique is a planner that won the cost-optimal track of the International Planning Competition 2018.
-
Blai Bonet, Guillem Francès and Héctor Geffner.
Learning Features and Abstract Actions for Computing Generalized Plans.
In Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI 2019), pp. 2703-2710. 2019.
(Show abstract) (Hide abstract) (PDF)
Generalized planning is concerned with the computation of plans that solve not one but multiple instances of a planning domain. Recently, it has been shown that generalized plans can be expressed as mappings of feature values into actions, and that they can often be computed with fully observable non-deterministic (FOND) planners. The actions in such plans, however, are not the actions in the instances themselves, which are not necessarily common to other instances, but abstract actions that are defined on a set of common features. The formulation assumes that the features and the abstract actions are given. In this work, we address this limitation by showing how to learn them automatically. The resulting account of generalized planning combines learning and planning in a novel way: a learner, based on a Max SAT formulation, yields the features and abstract actions from sampled state transitions, and a FOND planner uses this information, suitably transformed, to produce the general plans. Correctness guarantees are given and experimental results on several domains are reported.
-
Manuel Heusner, Thomas Keller and Malte Helmert.
Best-Case and Worst-Case Behavior of Greedy Best-First Search.
In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI 2018), pp. 1463-1470. 2018.
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (poster; PDF)
We study the impact of tie-breaking on the behavior of greedy best-first search with a fixed state space and fixed heuristic. We prove that it is NP-complete to determine the number of states that need to be expanded by greedy best-first search in the best case or in the worst case. However, the best- and worst-case behavior can be computed in polynomial time for undirected state spaces. We perform computational experiments on benchmark tasks from the International Planning Competitions that compare the best and worst cases of greedy best-first search to FIFO, LIFO and random tie-breaking. The experiments demonstrate the importance of tie-breaking in greedy best-first search.
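The tie-breaking policies compared in this paper (FIFO, LIFO, random) can be made concrete with a small sketch of greedy best-first search. This is our own illustration, not the experimental code; `h` and `successors` are assumptions for the example:

```python
import heapq
from itertools import count

# Illustrative sketch: greedy best-first search with a pluggable
# tie-breaking policy among states of equal h-value.  FIFO prefers
# the oldest entry in the open list, LIFO the newest.
def gbfs(start, goal, successors, h, tie="fifo"):
    tick = count()
    sign = 1 if tie == "fifo" else -1  # "lifo": newest entry wins ties
    queue = [(h(start), sign * next(tick), start, [start])]
    closed = set()
    while queue:
        _, _, state, path = heapq.heappop(queue)
        if state == goal:
            return path
        if state in closed:
            continue
        closed.add(state)
        for nxt in successors(state):
            if nxt not in closed:
                heapq.heappush(
                    queue, (h(nxt), sign * next(tick), nxt, path + [nxt]))
    return None
```

The counter only decides among entries with equal h-value, which is exactly the degree of freedom whose best and worst cases the paper analyzes.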
-
Salomé Eriksson, Gabriele Röger and Malte Helmert.
Inductive Certificates of Unsolvability for Domain-Independent Planning.
In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI 2018), pp. 5244-5248. 2018.
Note: This paper was invited for submission to the Best Papers From Sister Conferences Track, based on a paper that appeared in the International Conference on Automated Planning and Scheduling (ICAPS) 2017. When referring to this work, please cite the ICAPS 2017 paper instead of this version.
(Show abstract) (Hide abstract) (PDF)
If a planning system outputs a solution for a given problem, it is simple to verify that the solution is valid. However, if a planner claims that a task is unsolvable, we currently have no choice but to trust the planner blindly. We propose a sound and complete class of certificates of unsolvability, which can be verified efficiently by an independent program. To highlight their practical use, we show how these certificates can be generated for a wide range of state-of-the-art planning techniques with only polynomial overhead for the planner.
-
Manuel Heusner, Thomas Keller and Malte Helmert.
Search Progress and Potentially Expanded States in Greedy Best-First Search.
In Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI 2018), pp. 5269-5273. 2018.
Note: This paper was invited for submission to the Best Papers From Sister Conferences Track, based on a paper that appeared in the Symposium on Combinatorial Search (SoCS) 2017. When referring to this work, please cite the SoCS 2017 paper instead of this version.
(Show abstract) (Hide abstract) (PDF) (slides; PDF)
A classical result in optimal search shows that A* with an admissible and consistent heuristic expands every state whose f-value is below the optimal solution cost and no state whose f-value is above the optimal solution cost. For satisficing search algorithms, a similarly clear understanding is currently lacking. We examine the search behavior of greedy best-first search (GBFS) in order to make progress towards such an understanding.
-
Silvan Sievers.
Merge-and-Shrink Heuristics for Classical Planning: Efficient Implementation and Partial Abstractions.
In Proceedings of the 11th Annual Symposium on Combinatorial Search (SoCS 2018), pp. 90-98. 2018.
Erratum: This version of the paper fixes small mistakes in Tables 2 and 4: in the former, MIASM now reads MIASMdfp for clarity, and in the latter, the last two vertical blocks labeled RL are now correctly labeled sbMIASM and SCCdfp. Additionally, in Table 4 the row "Search time" was removed since it was not intended to be included in the table. Also, the caption contained a typo.
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (code and scripts) (data)
Merge-and-shrink heuristics are a successful class of abstraction heuristics used for optimal classical planning. With the recent addition of generalized label reduction, merge-and-shrink can be understood as an algorithm framework that repeatedly applies transformations to a factored representation of a given planning task to compute an abstraction. In this paper, we describe an efficient implementation of the framework and its transformations, comparing it to its previous implementation in Fast Downward. We further discuss partial merge-and-shrink abstractions that do not consider all aspects of the concrete state space. To compute such partial abstractions, we stop the merge-and-shrink computation early by imposing simple limits on the resource consumption of the algorithm. Our evaluation shows that the efficient implementation indeed improves over the previous one, and that partial merge-and-shrink abstractions further push the efficiency of merge-and-shrink planners.
-
Jendrik Seipp and Malte Helmert.
Counterexample-Guided Cartesian Abstraction Refinement for Classical Planning.
Journal of Artificial Intelligence Research 62, pp. 535-577. 2018.
(Show abstract) (Hide abstract) (PDF)
Counterexample-guided abstraction refinement (CEGAR) is a method for incrementally computing abstractions of transition systems. We propose a CEGAR algorithm for computing abstraction heuristics for optimal classical planning. Starting from a coarse abstraction of the planning task, we iteratively compute an optimal abstract solution, check if and why it fails for the concrete planning task and refine the abstraction so that the same failure cannot occur in future iterations. A key ingredient of our approach is a novel class of abstractions for classical planning tasks that admits efficient and very fine-grained refinement. Since a single abstraction usually cannot capture enough details of the planning task, we also introduce two methods for producing diverse sets of heuristics within this framework, one based on goal atoms, the other based on landmarks. In order to sum their heuristic estimates admissibly we introduce a new cost partitioning algorithm called saturated cost partitioning. We show that the resulting heuristics outperform other state-of-the-art abstraction heuristics in many benchmark domains.
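The saturated cost partitioning loop introduced in this paper can be sketched over explicit abstract transition systems: compute goal distances under the remaining costs, keep only the costs needed to preserve them, and pass the rest to the next abstraction. The sketch below is our own toy rendition under simplifying assumptions (finite explicit abstractions, non-negative saturated costs, unreachable states skipped), not the paper's implementation:

```python
import heapq

# Illustrative sketch of saturated cost partitioning.  Each
# abstraction is (transitions, goal_states) with transitions given
# as [(source, operator, target), ...].
def goal_distances(transitions, goals, cost):
    # backward Dijkstra from the goal states
    back = {}
    for s, op, t in transitions:
        back.setdefault(t, []).append((s, op))
    dist = {g: 0 for g in goals}
    queue = [(0, g) for g in goals]
    heapq.heapify(queue)
    while queue:
        d, t = heapq.heappop(queue)
        if d > dist.get(t, float("inf")):
            continue
        for s, op in back.get(t, ()):
            nd = d + cost[op]
            if nd < dist.get(s, float("inf")):
                dist[s] = nd
                heapq.heappush(queue, (nd, s))
    return dist

def saturated_cost_partitioning(abstractions, cost):
    remaining = dict(cost)
    heuristics = []
    for transitions, goals in abstractions:
        h = goal_distances(transitions, goals, remaining)
        heuristics.append(h)
        # saturated cost: the minimum cost function preserving h
        saturated = {op: 0 for op in remaining}
        for s, op, t in transitions:
            if s in h and t in h:
                saturated[op] = max(saturated[op], h[s] - h[t])
        for op in remaining:
            remaining[op] -= saturated[op]
    return heuristics
```

Because each heuristic only consumes the costs it actually needs, the per-abstraction estimates can be summed admissibly.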
-
Salomé Eriksson, Gabriele Röger and Malte Helmert.
A Proof System for Unsolvable Planning Tasks.
In Proceedings of the 28th International Conference on Automated Planning and Scheduling (ICAPS 2018), pp. 65-73. 2018.
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (poster; PDF)
While traditionally classical planning concentrated on finding plans for solvable tasks, detecting unsolvable instances has recently attracted increasing interest. To preclude wrong results, it is desirable that the planning system provides a certificate of unsolvability that can be independently verified. We propose a rule-based proof system for unsolvability where a proof establishes a knowledge base of verifiable basic statements and applies a set of derivation rules to infer the unsolvability of the task from these statements. We argue that this approach is more flexible than a recent proposal of inductive certificates of unsolvability and show how our proof system can be used for a wide range of planning techniques.
-
Gabriele Röger, Silvan Sievers and Michael Katz.
Symmetry-based Task Reduction for Relaxed Reachability Analysis.
In Proceedings of the 28th International Conference on Automated Planning and Scheduling (ICAPS 2018), pp. 208-217. 2018.
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (code, scripts and data)
Relaxed reachability analysis is relevant to efficient grounding, invariant synthesis as well as the computation of relaxation-based heuristics. Planning domains are typically specified in a lifted representation, where the size of the tasks grows exponentially with the number of objects in the world. This growth also affects the analysis of relaxed reachability. We present a task reduction based on symmetries of the lifted representation that allows us to perform the same analysis on smaller tasks.
-
Gabriele Röger.
Towards Certified Unsolvability in Classical Planning.
In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI 2017), pp. 5141-5145. 2017.
Early career track.
(Show abstract) (Hide abstract) (PDF)
While it is easy to verify that an action sequence is a solution for a classical planning task, there is no such verification capability if a task is reported unsolvable. We are therefore interested in certificates that allow an independent verification of the absence of solutions. We identify promising concepts for certificates that can be generated by a wide range of planning approaches. We present a first proposal of unsolvability certificates and sketch ideas for how the underlying concepts can be used as part of a more flexible unsolvability proof system.
-
Salomé Eriksson, Gabriele Röger and Malte Helmert.
Unsolvability Certificates for Classical Planning.
In Proceedings of the 27th International Conference on Automated Planning and Scheduling (ICAPS 2017), pp. 88-97. 2017.
(Show abstract) (Hide abstract) (PDF) (slides; PDF)
The plans that planning systems generate for solvable planning tasks are routinely verified by independent validation tools. For unsolvable planning tasks, no such validation capabilities currently exist. We describe a family of certificates of unsolvability for classical planning tasks that can be efficiently verified and are sufficiently general for a wide range of planning approaches including heuristic search with delete relaxation, critical-path, pattern database and linear merge-and-shrink heuristics, symbolic search with binary decision diagrams, and the Trapper algorithm for detecting dead ends. We also augmented a classical planning system with the ability to emit certificates of unsolvability and implemented a planner-independent certificate validation tool. Experiments show that the overhead for producing such certificates is tolerable and that their validation is practically feasible.
-
Florian Pommerening, Malte Helmert and Blai Bonet.
Abstraction Heuristics, Cost Partitioning and Network Flows.
In Proceedings of the 27th International Conference on Automated Planning and Scheduling (ICAPS 2017), pp. 228-232. 2017.
(Show abstract) (Hide abstract) (PDF)
Cost partitioning is a well-known technique to make admissible heuristics for classical planning additive. The optimal cost partitioning of explicit-state abstraction heuristics can be computed in polynomial time with a linear program, but the size of the model is often prohibitive. We study this model from a dual perspective and develop several simplification rules to reduce its size. We use these rules to answer open questions about extensions of the state equation heuristic and their relation to cost partitioning.
-
Manuel Heusner, Thomas Keller and Malte Helmert.
Understanding the Search Behaviour of Greedy Best-First Search.
In Proceedings of the 10th Annual Symposium on Combinatorial Search (SoCS 2017), pp. 47-55. 2017.
(Show abstract) (Hide abstract) (PDF) (slides; PDF)
A classical result in optimal search shows that A* with an admissible and consistent heuristic expands every state whose f-value is below the optimal solution cost and no state whose f-value is above the optimal solution cost. For satisficing search algorithms, a similarly clear understanding is currently lacking. We examine the search behaviour of greedy best-first search (GBFS) in order to make progress towards such an understanding. We introduce the concept of high-water mark benches, which separate the search space into areas that are searched by a GBFS algorithm in sequence. High-water mark benches allow us to exactly determine the set of states that are not expanded under any GBFS tie-breaking strategy. For the remaining states, we show that some are expanded by all GBFS searches, while others are expanded only if certain conditions are met.
-
Florian Pommerening, Malte Helmert and Blai Bonet.
Higher-Dimensional Potential Heuristics for Optimal Classical Planning.
In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI 2017), pp. 3636-3643. 2017.
(Show abstract) (Hide abstract) (PDF) (slides; PDF)
Potential heuristics for state-space search are defined as weighted sums over simple state features.
Atomic features consider the value of a single state variable in a factored state representation, while binary features consider joint assignments to two state variables. Previous work showed that the set of all admissible and consistent potential heuristics using atomic features can be characterized by a compact set of linear constraints. We generalize this result to binary features and prove a hardness result for features of higher dimension. Furthermore, we prove a tractability result based on the treewidth of a new graphical structure we call the context-dependency graph. Finally, we study the relationship of potential heuristics to transition cost partitioning. Experimental results show that binary potential heuristics are significantly more informative than the previously considered atomic ones.
-
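The evaluation of a potential heuristic described in the entry above, a weighted sum over state features, fits in a few lines. This sketch is ours, with made-up feature encoding and weights; a feature is a conjunction of variable/value pairs, so atomic features have one pair and binary features two:

```python
# Illustrative sketch: evaluate a potential heuristic as a weighted
# sum of features that hold in the state.
# state: dict var -> value; weights: dict feature -> weight, where a
# feature is a tuple of (var, value) pairs (length 1 = atomic,
# length 2 = binary).
def potential_h(state, weights):
    return sum(w for feature, w in weights.items()
               if all(state.get(var) == val for var, val in feature))
```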
Jendrik Seipp, Thomas Keller and Malte Helmert.
Narrowing the Gap Between Saturated and Optimal Cost Partitioning for Classical Planning.
In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI 2017), pp. 3651-3657. 2017.
(Show abstract) (Hide abstract) (PDF) (slides; PDF)
In classical planning, cost partitioning is a method for admissibly combining a set of heuristic estimators by distributing operator costs among the heuristics. An optimal cost partitioning is often prohibitively expensive to compute. Saturated cost partitioning is an alternative that is much faster to compute and has been shown to offer high-quality heuristic guidance on Cartesian abstractions. However, its greedy nature makes it highly susceptible to the order in which the heuristics are considered. We show that searching in the space of orders leads to significantly better heuristic estimates than with previously considered orders. Moreover, using multiple orders leads to a heuristic that is significantly better informed than any single-order heuristic. In experiments with Cartesian abstractions, the resulting heuristic approximates the optimal cost partitioning very closely.
-
Thomas Keller, Florian Pommerening, Jendrik Seipp, Florian Geißer and Robert Mattmüller.
State-dependent Cost Partitionings for Cartesian Abstractions in Classical Planning.
In Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI 2016), pp. 3161-3169. 2016.
(Show abstract) (Hide abstract) (PDF) (technical report; PDF)
Abstraction heuristics are a popular method to guide optimal search algorithms in classical planning. Cost partitionings make it possible to sum heuristic estimates admissibly by distributing action costs among the heuristics. We introduce state-dependent cost partitionings which take context information of actions into account, and show that an optimal state-dependent cost partitioning dominates its state-independent counterpart. We demonstrate the potential of our idea with a state-dependent variant of the recently proposed saturated cost partitioning, and show that it has the potential to improve not only over its state-independent counterpart, but even over the optimal state-independent cost partitioning. Our empirical results give evidence that ignoring the context of actions in the computation of a cost partitioning leads to a significant loss of information.
-
Salomé Simon and Gabriele Röger.
Finding and Exploiting LTL Trajectory Constraints in Heuristic Search.
In Proceedings of the 8th Annual Symposium on Combinatorial Search (SoCS 2015), pp. 113-121. 2015.
A non-archival version of this paper was also presented at the ICAPS-2015 Workshop on Heuristics and Search for Domain-independent Planning (HSDIP).
(Show abstract) (Hide abstract) (PDF) (slides; PDF) (slides HSDIP; PDF)
We suggest the use of linear temporal logic (LTL) for expressing declarative information about optimal solutions of search problems. We describe a general framework that associates LTLf formulas with search nodes in a heuristic search algorithm. Compared to previous approaches that integrate specific kinds of path information like landmarks into heuristic search, the approach is general, easy to prove correct and easy to integrate with other kinds of path information.
Workforce Scheduling Problems: Combinatorial Optimization Meets Machine Learning (funded by Innosuisse – Swiss Innovation Agency)
In the healthcare sector, creating and managing employee shift schedules is a complex and time-consuming mathematical, regulatory, and interpersonal modeling exercise. Attempts to make automated scheduling systems usable at scale using optimisation technology have failed in the market due to identifiable technological limitations. Traditional auto-scheduling systems are slow to run, and their results are brittle and difficult to adapt to the frequent unplanned changes in the real world.
Our unique value proposition is a two-mode auto-scheduling system: a “batch mode” that generates initial full schedules an order of magnitude faster, and a “real-time mode” that instantly recommends near-optimal modifications to an existing schedule, e.g. after an unexpected employee absence. Such a system requires the use of modern machine learning, both on its own and in conjunction with established combinatorial optimisation techniques.
Contact: Malte Helmert and Florian Pommerening