Bridging the Gap Between AI Planning and Reinforcement Learning (PRL @ ICAPS) – Workshop at ICAPS 2022 (June 13)
This site presents the most up-to-date information about the PRL @ ICAPS workshop. Please visit the ICAPS 2022 website for information about the general conference.
NEWS: This year the PRL workshop will have two editions: the first at ICAPS 2022 and the second at IJCAI 2022. PRL@ICAPS authors will be given the opportunity to specify whether they want their submission to be considered for PRL@IJCAI as well. The final decision will be made separately for each workshop location. Please see the website for additional information on the two editions.
While the AI Planning and Reinforcement Learning communities focus on similar sequential decision-making problems, they remain largely unaware of each other's specific problems, techniques, methodologies, and evaluation practices.
This workshop aims to encourage discussion and collaboration between researchers in the fields of AI planning and reinforcement learning. We aim to bridge the gap between the two communities, facilitate the discussion of differences and similarities in existing techniques, and encourage collaboration across the fields. We solicit interest from AI researchers who work at the intersection of planning and reinforcement learning, in particular those who focus on intelligent decision making. As such, the joint workshop program is an excellent opportunity to gather a large and diverse group of interested researchers.
The workshop solicits work at the intersection of the fields of reinforcement learning and planning. One example is so-called goal-directed reinforcement learning, where a goal must be achieved, and no partial credit is given for getting closer to the goal. In this case, a usual metric is success rate. We also solicit work solely in one area that can influence advances in the other so long as the connections are clearly articulated in the submission.
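To make the goal-directed setting concrete, here is a minimal, hypothetical sketch (not from any workshop submission) of sparse, all-or-nothing goal achievement and the success-rate metric mentioned above:

```python
# Hypothetical sketch of goal-directed RL evaluation: an episode succeeds
# only if the goal state is reached; no partial credit is given for
# getting closer. Performance is then the fraction of successful episodes.
import random

def run_episode(policy, start, goal, max_steps=50):
    """Return True iff the policy reaches the goal within max_steps."""
    state = start
    for _ in range(max_steps):
        state = policy(state)
        if state == goal:
            return True  # sparse outcome: success only at the goal
    return False  # ending near the goal still counts as failure

def success_rate(policy, start, goal, episodes=1000):
    """Usual metric in this setting: fraction of goal-reaching episodes."""
    wins = sum(run_episode(policy, start, goal) for _ in range(episodes))
    return wins / episodes

# Toy 1-D chain of integer states with a random-walk policy.
random.seed(0)
random_walk = lambda s: s + random.choice([-1, 1])
print(f"success rate: {success_rate(random_walk, start=0, goal=5):.2f}")
```

Note that the success-rate metric is computed over episodes, so it says nothing about path length or reward shaping; that is precisely the gap that work combining planning heuristics with RL often targets.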
Submissions are invited for topics on, but not limited to:
- Theoretical aspects of planning and reinforcement learning
- Goal-oriented sequential decision methods combining planning, RL, or other ML methods
- Goal-directed reinforcement learning (model-based, Bayesian, deep, etc.)
- Safe Reinforcement Learning and Planning
- Certification/analysis of learned policies/models
- Planning using approximated/uncertain (learned) models
- Monte Carlo Planning
- Learning search heuristics for planner guidance
- Model representation and learning for planning
- Applications of both reinforcement learning and planning
- Various levels of generalization (across goals, objects/domain, domains)
- Reinforcement Learning and planning competition(s)
- Veronique Ventos - Nook: a new generation AI dedicated to the game of Bridge
- Abstract: On March 25, 2022, at the end of a two-day bridge tournament against eight world champions, the bridge AI NooK was declared victorious. This is a world première, the game of bridge still being a great challenge to Artificial Intelligence.
NooK is a new-generation AI in several respects. First, it is hybrid: it is made up of symbolic rule-based modules and neural-network ones. Rather than learning by playing a huge number of games, it begins by recovering and modeling human expertise in a background knowledge base described using relational logic. Moreover, NooK is able to provide explanations for each of its decisions.
NooK is developed by NukkAI, a French start-up that we will present at the start of the talk. We will then cover the basics of the game of bridge and its distinguishing characteristics from other mind games. The remaining two parts will be devoted to the challenge and to the description of the NooK modules.
- Sergey Levine - Planning with Reinforcement Learning
- Abstract: In this talk, I will discuss how reinforcement learning algorithms can provide useful tools for solving problems that are conventionally perceived as planning problems, such as performing long-horizon robotic control tasks. I will discuss how we can use reinforcement learning to acquire abstractions that are well-suited for planning, and then further discuss how offline reinforcement learning methods can actually stitch together parts of previously observed trajectories to solve problems that we typically think of as sequential planning problems.
- Subbarao Kambhampati - Planning to Advise and Explain Reinforcement Learning
- Abstract: I will discuss how symbolic planning methods can be leveraged to advise reinforcement learning systems, as well as help them explain their decisions to humans in the loop.
The event will be fully virtual, consisting of:
- Invited talks. 40m + 10-20m for Q&A.
- Short presentations for accepted papers. 6m + 1m for Q&A.
- Two Grouped Discussions (Virtual Poster) sessions. 90m.
- One Plenary Discussion session. 20m.
Workshop schedule: Monday, June 13, 2022.
- Goal Recognition as Reinforcement Learning (L. R. Amado, R. Mirsky and F. Meneguzzi)
- Learning First-Order Symbolic Planning Representations That Are Grounded (A. Occhipinti, B. Bonet and H. Geffner)
- Action Space Reduction for Planning Domains (H. Kokel, J. Lee, M. Katz, S. Sohrabi and K. Srinivas)
- Learning Generalized Policies Without Supervision Using GNNs (S. Ståhlberg, B. Bonet and H. Geffner)
- Leveraging Approximate Symbolic Models for Reinforcement Learning via Skill Diversity (L. Guan, S. Sreedharan and S. Kambhampati)
- State Representation Learning for Goal-Conditioned Reinforcement Learning (L. Steccanella and A. Jonsson)
- PG3: Policy-Guided Planning for Generalized Policy Generation (R. Yang, T. Silver, A. Curtis, T. Lozano-Perez and L. Kaelbling)
- Relational Abstractions for Generalized Reinforcement Learning on Symbolic Problems (R. Karia and S. Srivastava)
- Learning Domain-Independent Policies for Open List Selection (A. Biedenkapp, D. Speck, S. Sievers, F. Hutter, M. Lindauer and J. Seipp)
- World Value Functions: Knowledge Representation for Learning and Planning (G.-N. Tasse, B. Rosman and S. James)
- GoalNet: Inferring Conjunctive Goal Predicates from Human Plan Demonstrations for Robot Instruction Following (S. Sharma, J. Gupta, S. Tuli, R. Paul and Mausam)
- Hierarchies of Reward Machines (D. Furelos-Blanco, M. Law, A. Jonsson, K. Broda and A. Russo)
- Model-Based Adaptation to Novelty in Open-World AI (R. Stern, W. Piotrowski, M. Klenk, J. de Kleer, A. Perez, J. Le and S. Mohan)
- POGEMA: Partially Observable Grid Environment for Multiple Agents (A. Skrynnik, A. Andreychuk, K. Yakovlev and A. I. Panov)
- A Proposal to Generate Planning Problems with Graph Neural Networks (C. Núñez-Molina, P. Mesejo and J. Fernández-Olivares)
- Submission deadline:
Friday, April 1st, 2022 (UTC-12 timezone)
- Notification date:
Friday, April 29th, 2022
- Camera-ready deadline: Friday, June 10, 2022
- Workshop date: Monday, June 13, 2022 (virtual)
We solicit workshop paper submissions relevant to the above call of the following types:
- Long papers – up to 8 pages + unlimited references / appendices
- Short papers – up to 4 pages + unlimited references / appendices
- Extended abstracts – up to 2 pages + unlimited references / appendices
Please format submissions in AAAI style (see instructions in the AAAI-22 Author Kit, http://www.aaai.org/Publications/Templates/AuthorKit22.zip). Authors resubmitting papers rejected from other conferences should do their utmost to address the comments given by the reviewers. Papers accepted to the main ICAPS conference may be considered in extended abstract form.
Some accepted long papers will be selected for contributed talks. All accepted long papers, short papers, and extended abstracts will be given a slot in the poster presentation sessions. Extended abstracts are intended as brief summaries of already published papers, preliminary work, position papers, or challenges that might help bridge the gap.
As the main purpose of this workshop is to solicit discussion, the authors are invited to use the appendix of their submissions for that purpose.
Please send your inquiries by email to the organizers at firstname.lastname@example.org.
For up-to-date information, please visit the PRL website, https://prl-theworkshop.github.io.
- Michael Katz, IBM T.J. Watson Research Center, NY, USA
- Hector Palacios, ServiceNow Research, Montreal, Canada
- Vicenç Gómez, Universitat Pompeu Fabra, Barcelona, Spain