PRL Workshop Series

Bridging the Gap Between AI Planning and Reinforcement Learning

Bridging the Gap Between AI Planning and Reinforcement Learning (PRL @ IJCAI) – Workshop at IJCAI 2022 (July 24)


This site presents the most up-to-date information about the PRL @ IJCAI workshop. Please visit the IJCAI 2022 website for information about the main conference.

While the AI planning and reinforcement learning communities focus on similar sequential decision-making problems, they remain somewhat unaware of each other's specific problems, techniques, methodologies, and evaluation practices.

This workshop aims to encourage discussion and collaboration between researchers in the fields of AI planning and reinforcement learning. We aim to bridge the gap between the two communities, facilitate the discussion of differences and similarities in existing techniques, and encourage collaboration across the fields. We solicit interest from AI researchers who work at the intersection of planning and reinforcement learning, in particular those who focus on intelligent decision making. As such, the joint workshop program is an excellent opportunity to gather a large and diverse group of interested researchers.

Workshop topics

The workshop solicits work at the intersection of the fields of reinforcement learning and planning. One example is so-called goal-directed reinforcement learning, where a goal must be achieved and no partial credit is given for getting closer to it; the usual evaluation metric is therefore success rate. We also solicit work solely in one area that can influence advances in the other, so long as the connections are clearly articulated in the submission.
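To make the goal-directed setting concrete, here is a minimal, hypothetical sketch (not taken from any workshop material): an episodic task on a toy 1-D chain where the agent receives reward only upon reaching the goal, and evaluation reports success rate over episodes. The environment, policy, and constants are illustrative assumptions.

import random

GOAL = 5          # hypothetical goal state on a 1-D chain
HORIZON = 20      # episode length cap

def run_episode(policy):
    """Return 1 if the goal is reached within the horizon, else 0 (no partial credit)."""
    state = 0
    for _ in range(HORIZON):
        state += policy(state)   # action is a step of +1 or -1
        if state == GOAL:
            return 1             # sparse terminal reward: goal achieved
    return 0                     # goal not reached: no credit for getting close

def random_policy(state):
    # placeholder policy; a learned goal-directed policy would go here
    return random.choice([-1, 1])

episodes = 1000
success_rate = sum(run_episode(random_policy) for _ in range(episodes)) / episodes
print(f"Success rate over {episodes} episodes: {success_rate:.2f}")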

Submissions are invited on topics including, but not limited to:

IJCAI will be held in person this year. Authors of accepted workshop papers are expected to attend the conference and present their work in person.

Program

The event consists of:

Schedule

Time (Vienna) Title
9:00 Opening Remarks
9:05 Keynote: Giuseppe De Giacomo – Deciding and Learning How to Act in Non-Markovian Settings
10:05 Session 1
  PG3: Policy-Guided Planning for Generalized Policy Generation. Ryan Yang, Tom Silver, Aidan Curtis, Tomas Lozano-Perez and Leslie Kaelbling.
  Heuristic Search Planning with Deep Neural Networks using Imitation, Attention and Curriculum Learning. Leah Chrestien, Tomáš Pevný, Stefan Edelkamp and Antonín Komenda.
10:35 Coffee break
11:00 Session 2
  State Representation Learning for Goal-Conditioned Reinforcement Learning. Lorenzo Steccanella and Anders Jonsson.
  Scaling up ML-based Black-box Planning with Partial STRIPS Models. Matias Greco, Álvaro Torralba, Jorge A. Baier and Hector Palacios.
  Graph-Based Representation of Automata Cascades with an Application to Regular Decision Processes. Alessandro Ronca and Giuseppe De Giacomo.
  Relational Abstractions for Generalized Reinforcement Learning on Symbolic Problems. Rushang Karia and Siddharth Srivastava.
12:00 Discussion
12:30 Lunch
14:00 Keynote: Sriraam Natarajan – Neurosymbolic Learning via Integration of (Relational) Planning and (Deep) RL
15:00 Coffee break
15:30 Session 3
  Exploiting Multiple Levels of Abstractions in Episodic RL via Reward Shaping. Roberto Cipollone, Giuseppe De Giacomo, Marco Favorito, Luca Iocchi and Fabio Patrizi.
  Compositional Reinforcement Learning from Logical Specifications. Kishor Jothimurugan, Suguman Bansal, Osbert Bastani and Rajeev Alur.
16:00 Session 4
  Deep Policy Learning for Perfect Rectangle Packing. Boris Doux, Satya Tamby, Benjamin Negrevergne and Tristan Cazenave.
  Generalizing Behavior Trees and Motion-Generator (BTMG) Policy Representation for Robotic Tasks Over Scenario Parameters. Faseeh Ahmad, Matthias Mayr, Elin Anna Topp, Jacek Malec and Volker Krueger.
  Speeding-up Continual Learning through Information Gains in Novel Experiences. Pierrick Lorang, Shivam Goel, Patrik Zips, Jivko Sinapov and Matthias Scheutz.
  An attention model for the formation of collectives in real-world domains. Adrià Fenoy Barceló, Filippo Bistaffa and Alessandro Farinelli.
17:00 Closing remarks, discussion
17:30 End

Invited Speakers

Giuseppe De Giacomo

Sriraam Natarajan

Accepted submissions

Submission Procedure

We solicit workshop paper submissions, relevant to the above call, of the following types:

Please format submissions in AAAI style (see the instructions in the AAAI Author Kit 2022, http://www.aaai.org/Publications/Templates/AuthorKit22.zip). Authors considering submitting papers that were rejected from other conferences should do their utmost to address the comments given by the reviewers.

New: The NeurIPS format is also accepted, with the same page and reference limits as stated in the call for papers of the main track.

Some accepted long papers will be selected for contributed talks. All accepted long papers, short papers, and extended abstracts will be given a slot in the poster presentation session. Extended abstracts are intended as brief summaries of already published papers, preliminary work, position papers, or challenges that might help bridge the gap.

As the main purpose of this workshop is to foster discussion, authors are invited to use the appendix of their submissions for that purpose.

Paper submissions should be made through EasyChair.

Please send your inquiries by email to the organizers at prl.theworkshop@gmail.com.

For up-to-date information, please visit the PRL website, https://prl-theworkshop.github.io.

Important Dates

Previous Editions

Organizers