PRL @ ICAPS 2024
ICAPS’24
Banff, Alberta, Canada
Date: TBA
prl.theworkshop@gmail.com
Aim and Scope
While the AI Planning and Reinforcement Learning communities study similar sequential decision-making problems, they remain largely unaware of each other's specific problems, techniques, methodologies, and evaluation practices.
This workshop aims to encourage discussion and collaboration between researchers in the fields of AI planning and reinforcement learning. We aim to bridge the gap between the two communities, facilitate discussion of the differences and similarities between existing techniques, and encourage collaboration across the fields. We welcome AI researchers who work at the intersection of planning and reinforcement learning, in particular those focused on intelligent decision-making. This is the seventh edition of the PRL workshop series, which started at ICAPS 2020.
Topics of Interest
We invite submissions at the intersection of AI Planning and Reinforcement Learning. Topics of interest include, but are not limited to, the following:
- Reinforcement learning (model-based, Bayesian, deep, hierarchical, etc.)
- Safe RL
- Monte Carlo planning
- Model representation and learning for planning
- Planning using approximated/uncertain (learned) models
- Learning search heuristics for planner guidance
- Theoretical aspects of planning and reinforcement learning
- Action policy analysis or certification
- Reinforcement learning and planning competition(s)
- Multi-agent planning and learning
- Applications of both reinforcement learning and planning
Important Dates
- Paper submission deadline: April 7th, AoE (final extension; previously March 22nd, then April 5th)
- Paper acceptance notification: April 28th, AoE (decisions are out)
ICAPS will be in-person this year. Authors of accepted workshop papers are expected to physically attend the conference and present in person.
List of Accepted Papers
- Contextual Pre-planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning Guy Azran, Mohamad Hosein Danesh, Stefano V Albrecht, Sarah Keren
- Beyond Training: Optimizing Reinforcement Learning Based Job Shop Scheduling Through Adaptive Action Sampling Constantin Waubert de Puiseau, Christian Dörpelkus, Jannik Peters, Hasan Tercan, Tobias Meisen
- Online Planning in MDPs with Stochastic Durative Actions Tal Berman, Ronen Brafman, Erez Karpas
- ModelDiff: Leveraging Models for Policy Transfer with Value Lower Bounds Xiaotian Liu, Jihwan Jeong, Ayal Taitler, Michael Gimelfarb, Scott Sanner
- Solving Minecraft Tasks via Model Learning Yarin Benyamin, Argaman Mordoch, Shahaf S. Shperberg, Roni Stern
- A New View on Planning in Online Reinforcement Learning Kevin Roice, Parham Mohammad Panahi, Scott M. Jordan, Adam White, Martha White
- Conviction-Based Planning for Sparse Reward Reinforcement Learning Problems Simon Ouellette, Eric Beaudry, Mohamed Bouguessa
- Q* Search: Heuristic Search with Deep Q-Networks Forest Agostinelli, Shahaf S. Shperberg, Alexander Shmakov, Stephen Marcus McAleer, Roy Fox, Pierre Baldi
- Finding Reaction Mechanism Pathways with Deep Reinforcement Learning and Heuristic Search Rojina Panta, Mohammadamin Tavakoli, Christian Geils, Pierre Baldi, Forest Agostinelli
- Planning with Language Models Through The Lens of Efficiency Michael Katz, Harsha Kokel, Kavitha Srinivas, Shirin Sohrabi
- Guiding Hierarchical Reinforcement Learning in Partially Observable Environments with AI Planning Brandon Rozek, Junkyu Lee, Harsha Kokel, Michael Katz, Shirin Sohrabi
- Monte Carlo Tree Search for Integrated Planning, Learning, and Execution in Nondeterministic Python Rich Levinson
- Exploring Simultaneity: Learning Earliest-time Semantics for Automated Planning Ángel Aso-Mollar, Óscar Sapena, Eva Onaindia
- Numeric Reward Machines Kristina Levina, Nikolaos Pappas, Athanasios Karapantelakis, Aneta Vulgarakis Feljan, Jendrik Seipp
- POSGGym: A Library for Decision-Theoretic Planning and Learning in Partially Observable, Multi-Agent Environments Jonathon Schwartz, Rhys Newbury, Dana Kulic, Hanna Kurniawati
- The Case for Developing a Foundation Model for Planning-like Tasks from Scratch Biplav Srivastava, Vishal Pallagani
- Equivalence-Based Abstractions for Learning General Policies Dominik Drexler, Simon Ståhlberg, Blai Bonet, Hector Geffner
- Automating the Generation of Prompts for LLM-based Action Choice in PDDL Planning Katharina Stein, Daniel Fišer, Jörg Hoffmann, Alexander Koller
- Comparing State-of-the-art Graph Neural Networks and Transformers for General Policy Learning Nicola J. Müller, Pablo Sanchez Martin, Jörg Hoffmann, Verena Wolf, Timo P. Gros
- Towards Neurosymbolic RL via Inductive Learning of Answer Set Programs Celeste Veronese, Daniele Meli, Alessandro Farinelli
- SLOPE: Search with Learned Optimal Pruning-based Expansion Davor Bokan, Zlatan Ajanović, Bakir Lacevic
Submission Details
We solicit workshop paper submissions relevant to the call above, of the following types:
- Long papers – up to 8 pages + unlimited references/appendices
- Short papers – up to 4 pages + unlimited references/appendices
- Extended abstracts – up to 2 pages + unlimited references/appendices
Please format submissions in AAAI style (see instructions in the Author Kit). If you are submitting a paper that was rejected from another conference, please do your utmost to address the comments given by the reviewers. Please do not submit papers that have already been accepted to the main ICAPS conference.
Some accepted long papers will be invited for contributed talks. All accepted papers (long and short) and extended abstracts will be given a slot in the poster session. Extended abstracts are intended as brief summaries of already published papers, preliminary work, position papers, or challenges that might help bridge the gap between the two fields.
As the main purpose of this workshop is to solicit discussion, the authors are invited to use the appendix of their submissions for that purpose.
Paper submissions should be made through OpenReview.
We do not require papers to be submitted anonymously; this decision is left to the authors' discretion. If a paper is simultaneously under review at a venue that requires anonymity, you may submit it without author details, given the possibility of a shared reviewer pool. Please be aware, however, that upon acceptance the paper will be posted publicly on the PRL website with full author information.
Organizing Committee
- Timo P. Gros, German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany
- Steven James, University of the Witwatersrand, Johannesburg, South Africa
- Harsha Kokel, IBM Research, San Jose, USA
- Simon Ståhlberg, Linköping University, Linköping, Sweden
- Marcel Steinmetz, LAAS-CNRS and University of Toulouse, Toulouse, France
- Ayal Taitler, University of Toronto, Toronto, Canada
Please send your inquiries to prl.theworkshop@gmail.com