Bridging the Gap Between AI Planning and Reinforcement Learning (PRL @ IJCAI 2023)
IJCAI’23 Workshop
Macao S.A.R.
August 20, 2023 (Full Day)
Room: Almaty 6003
Schedule
Start Time (Macao local time) | Title |
---|---|
9:00 | Opening Remarks |
Session 1 | |
9:10 | Learning Neuro-Symbolic World Models with Logical Neural Networks |
9:30 | Learning to Plan with Tree Search via Deep RL |
9:50 | Learning Parameterized Policies for Planning Annotated RL |
10:10 | A Learnable Similarity Metric for Transfer Learning with Dynamics Mismatch |
10:30 | —Coffee break— |
11:00 | Invited Talk (Siddharth Srivastava): Learning Abstractions for Generalizable Planning, Learning, and Reinforcement Learning |
Session 2 | |
11:50 | Learning to Create Abstraction Hierarchies for Motion Planning under Uncertainty |
12:10 | Learn to Follow: Lifelong Multi-agent Pathfinding with Decentralized Replanning |
12:30 | —Lunch— |
14:00 | Invited Talk (Akhil Bagaria): Skill Discovery for Exploration and Planning |
Session 3 | |
14:50 | Learning State Reachability as a Graph in Translation Invariant Goal-based Reinforcement Learning Tasks |
15:10 | Object-Centric Learning of Neural Policies for Zero-shot Transfer over Domains with Varying Quantities of Interest |
15:30 | —Coffee break— |
Session 4 | |
16:00 | Using Reverse Reinforcement Learning for Assembly Tasks |
16:20 | Concluding Remarks (and poster setup) |
16:30 | Poster Session |
17:30 | – END – |
Invited Talks
Siddharth Srivastava
Learning Abstractions for Generalizable Planning, Learning, and Reinforcement Learning
Can we build autonomous agents that learn generalizable knowledge and use it to reliably accomplish previously unseen tasks? In this talk, I will present our recent advances in neuro-symbolic learning for a range of sequential decision-making problems that feature long horizons and sparse rewards. Using results from our recent work, I will discuss how learning and using abstractions not only reduces the need for human input, but also helps ensure correctness and extends generalizability of learned knowledge to problems that were not seen during training. Throughout the talk, I will illustrate research advances with results in a variety of sequential decision-making settings including long-horizon planning under uncertainty, reinforcement learning, and robot planning.
Siddharth Srivastava is an Associate Professor of Computer Science in the School of Computing and Augmented Intelligence at Arizona State University. He received his PhD in Computer Science at the University of Massachusetts, Amherst, and did his postdoctoral research at UC Berkeley. His research focuses on safe and reliable taskable AI systems, AI assessment, and AI safety. He is a recipient of the NSF CAREER award, a Best Paper award at the International Conference on Automated Planning and Scheduling (ICAPS), an Outstanding Dissertation award at UMass Amherst, and a Best Final Year Thesis Award at IIT Kanpur. He served as conference Co-Chair for ICAPS 2019 and currently serves as an Associate Editor for the Journal of AI Research.
Akhil Bagaria
Skill Discovery for Exploration and Planning
To create generally capable, intelligent machines, we must equip our agents with the ability to autonomously acquire their own abstractions. My research addresses this challenge using tools from reinforcement learning and planning. In this talk, I will discuss ways in which agents may discover temporally-extended abstract actions (or options). Next, I will discuss how discovered options induce a form of state abstraction, which can be used for planning. Finally, I will discuss how the discovery of abstract states and actions can be interleaved in a never-ending cycle to create agents that continually increase their competence in the world.
Akhil Bagaria is a PhD candidate at Brown University working on RL with George Konidaris. Prior to that, he worked on the multitouch team at Apple, where he developed gesture recognition algorithms that ship on MacBooks and iPads. He completed his undergraduate degree at Harvey Mudd College in Southern California, where his research included tracking sharks with autonomous underwater robots.
List of Accepted Papers
- [oral+poster] A Learnable Similarity Metric for Transfer Learning with Dynamics Mismatch
- [poster only] SymNet 3.0: Exploiting Long-Range Influences in Learning Generalized Neural Policies for Relational MDPs
- [poster only] Generalized Planning in PDDL Domains with Pretrained Large Language Models
- [oral+poster] Learn to Follow: Lifelong Multi-agent Pathfinding with Decentralized Replanning
- [oral+poster] Learning Neuro-Symbolic World Models with Logical Neural Networks
- [oral+poster] Learning Parameterized Policies for Planning Annotated RL
- [oral+poster] Learning State Reachability as a Graph in Translation Invariant Goal-based Reinforcement Learning Tasks
- [oral+poster] Learning to Create Abstraction Hierarchies for Motion Planning under Uncertainty
- [oral+poster] Learning to Plan with Tree Search via Deep RL
- [oral+poster] Object-Centric Learning of Neural Policies for Zero-shot Transfer over Domains with Varying Quantities of Interest
- [poster only] Optimistic Exploration in Reinforcement Learning Using Symbolic Model Estimates
- [poster only] Task Scoping: Generating Task-Specific Simplifications of Open-Scope Planning Problems
- [poster only] Towards More Likely Models for AI Planning
- [oral+poster] Using Reverse Reinforcement Learning for Assembly Tasks
- [poster only] Theoretically Guaranteed Policy Improvement Distilled from Model-Based Planning
Aim and Scope of the Workshop
While the AI Planning and Reinforcement Learning communities focus on similar sequential decision-making problems, they remain largely unaware of each other's specific problems, techniques, methodologies, and evaluations.
This workshop aims to encourage discussion and collaboration between researchers in the fields of AI planning and reinforcement learning. We aim to bridge the gap between the two communities, facilitate the discussion of differences and similarities in existing techniques, and encourage collaboration across the fields. We solicit interest from AI researchers who work at the intersection of planning and reinforcement learning, in particular those who focus on intelligent decision-making. This is the sixth edition of the PRL workshop series, which started at ICAPS 2020.
Topics of Interest
We invite submissions at the intersection of AI Planning and Reinforcement Learning. The topics of interest include, but are not limited to, the following:
- Reinforcement learning (model-based, Bayesian, deep, hierarchical, etc.)
- Safe RL
- Monte Carlo planning
- Model representation and learning for planning
- Planning using approximated/uncertain (learned) models
- Learning search heuristics for planner guidance
- Theoretical aspects of planning and reinforcement learning
- Action policy analysis or certification
- Reinforcement learning and planning competition(s)
- Multi-agent planning and learning
- Applications of both reinforcement learning and planning
Important Dates
- Paper submission deadline: May 18th, AOE (extended from May 4th and May 11th)
- Paper acceptance notification: June 9th, AOE (extended from June 5th)
Submission Details
We solicit workshop paper submissions relevant to the above call, of the following types:
- Long papers – up to 8 pages + unlimited references / appendices
- Short papers – up to 4 pages + unlimited references / appendices
- Extended abstracts – up to 2 pages + unlimited references / appendices
Please format submissions in AAAI style (see instructions in the Author Kit). If you are submitting a paper that was rejected from another conference, please do your utmost to address the reviewers' comments. Please do not submit papers that have already been accepted to the main IJCAI conference.
Note for NeurIPS resubmissions: authors resubmitting their NeurIPS submissions to PRL should ensure they are anonymized. There is no need to reformat them to AAAI style; we will allow papers in NeurIPS format to be up to nine pages.
Some accepted long papers will be invited for contributed talks. All accepted papers (long as well as short) and extended abstracts will be given a slot in the poster presentation session. Extended abstracts are intended as brief summaries of already published papers, preliminary work, position papers, or challenges that might help bridge the gap.
As the main purpose of this workshop is to foster discussion, authors are invited to use the appendix of their submissions for that purpose.
Paper submissions should be made through OpenReview.
Organizing Committee
- Cameron Allen, Brown University, RI, USA
- Timo P. Gros, Saarland University, Germany
- Michael Katz, IBM T.J. Watson Research Center, NY, USA
- Harsha Kokel, University of Texas at Dallas, TX, USA
- Hector Palacios, ServiceNow Research, Montreal, Canada
- Sarath Sreedharan, Colorado State University, CO, USA
Please send your inquiries to prl.theworkshop@gmail.com