Reinforcement Learning Papers Accepted to ICLR 2020

by Seungjae Ryan Lee

EDIT (2020-04-03): The paper Action Semantics Network: Considering the Effects of Actions in Multiagent Systems was missing from the list! It has now been added. Thank you, Weixun, for reporting this.

Here is a list of all papers accepted to ICLR 2020 that use reinforcement learning. For this post, I used the data collected by shaohua0116/ICLR2020-OpenReviewData.

For all accepted ICLR 2020 papers, check this post instead.

Rank Average Rating Title Ratings Variance Decision
1 8.00 Dynamics-aware Unsupervised Skill Discovery 8 8 8 0.00 Accept (Talk)
1 8.00 Contrastive Learning Of Structured World Models 8 8 8 0.00 Accept (Talk)
1 8.00 Implementation Matters In Deep Rl: A Case Study On Ppo And Trpo 8 8 8 0.00 Accept (Talk)
1 8.00 Gendice: Generalized Offline Estimation Of Stationary Values 8 8 8 0.00 Accept (Talk)
1 8.00 Causal Discovery With Reinforcement Learning 8 8 8 0.00 Accept (Talk)
2 7.33 Is A Good Representation Sufficient For Sample Efficient Reinforcement Learning? 8 8 6 0.89 Accept (Spotlight)
2 7.33 Harnessing Structures For Value-based Planning And Reinforcement Learning 6 8 8 0.89 Accept (Talk)
2 7.33 Explain Your Move: Understanding Agent Actions Using Focused Feature Saliency 6 8 8 0.89 Accept (Poster)
2 7.33 Meta-q-learning 8 8 6 0.89 Accept (Talk)
2 7.33 Discriminative Particle Filter Reinforcement Learning For Complex Partial Observations 8 6 8 0.89 Accept (Poster)
2 7.33 Disagreement-regularized Imitation Learning 6 8 8 0.89 Accept (Spotlight)
2 7.33 Doubly Robust Bias Reduction In Infinite Horizon Off-policy Estimation 6 8 8 0.89 Accept (Spotlight)
2 7.33 Seed Rl: Scalable And Efficient Deep-rl With Accelerated Central Inference 8 6 8 0.89 Accept (Talk)
2 7.33 The Ingredients Of Real World Robotic Reinforcement Learning 6 8 8 0.89 Accept (Spotlight)
2 7.33 Watch The Unobserved: A Simple Approach To Parallelizing Monte Carlo Tree Search 8 6 8 0.89 Accept (Talk)
2 7.33 Meta-learning Acquisition Functions For Transfer Learning In Bayesian Optimization 8 6 8 0.89 Accept (Spotlight)
2 7.33 A Closer Look At Deep Policy Gradients 8 6 8 0.89 Accept (Talk)
2 7.33 Fast Task Inference With Variational Intrinsic Successor Features 8 6 8 0.89 Accept (Talk)
2 7.33 Learning To Plan In High Dimensions Via Neural Exploration-exploitation Trees 8 8 6 0.89 Accept (Spotlight)
3 7.00 Dream To Control: Learning Behaviors By Latent Imagination 8 6 6 8 1.00 Accept (Spotlight)
4 6.67 Making Efficient Use Of Demonstrations To Solve Hard Exploration Problems 6 8 6 0.89 Accept (Poster)
4 6.67 Intrinsic Motivation For Encouraging Synergistic Behavior 6 8 6 0.89 Accept (Poster)
4 6.67 Sqil: Imitation Learning Via Reinforcement Learning With Sparse Rewards 8 6 6 0.89 Accept (Poster)
4 6.67 Reinforcement Learning With Competitive Ensembles Of Information-constrained Primitives 8 6 6 0.89 Accept (Poster)
4 6.67 Multi-agent Interactions Modeling With Correlated Policies 6 6 8 0.89 Accept (Poster)
4 6.67 Influence-based Multi-agent Exploration 6 6 8 0.89 Accept (Spotlight)
4 6.67 Learning The Arrow Of Time For Problems In Reinforcement Learning 6 6 8 0.89 Accept (Poster)
4 6.67 Amrl: Aggregated Memory For Reinforcement Learning 6 6 8 0.89 Accept (Poster)
4 6.67 Model Based Reinforcement Learning For Atari 6 8 6 0.89 Accept (Spotlight)
4 6.67 Variational Recurrent Models For Solving Partially Observable Control Tasks 6 6 8 0.89 Accept (Poster)
4 6.67 Sample Efficient Policy Gradient Methods With Recursive Variance Reduction 6 8 6 0.89 Accept (Poster)
4 6.67 Exploring Model-based Planning With Policy Networks 6 8 6 0.89 Accept (Poster)
4 6.67 Reinforcement Learning Based Graph-to-sequence Model For Natural Question Generation 6 6 8 0.89 Accept (Poster)
4 6.67 Ride: Rewarding Impact-driven Exploration For Procedurally-generated Environments 6 6 8 0.89 Accept (Poster)
4 6.67 Learning Expensive Coordination: An Event-based Deep Rl Approach 6 8 6 0.89 Accept (Poster)
4 6.67 Evolutionary Population Curriculum For Scaling Multi-agent Reinforcement Learning 6 8 6 0.89 Accept (Poster)
4 6.67 Making Sense Of Reinforcement Learning And Probabilistic Inference 6 6 8 0.89 Accept (Spotlight)
4 6.67 Reinforced Genetic Algorithm Learning For Optimizing Computation Graphs 8 6 6 0.89 Accept (Poster)
4 6.67 Never Give Up: Learning Directed Exploration Strategies 6 6 8 0.89 Accept (Poster)
4 6.67 Robust Reinforcement Learning For Continuous Control With Model Misspecification 6 6 8 0.89 Accept (Poster)
4 6.67 Synthesizing Programmatic Policies That Inductively Generalize 6 8 6 0.89 Accept (Poster)
4 6.67 Adaptive Correlated Monte Carlo For Contextual Categorical Sequence Generation 6 6 8 0.89 Accept (Poster)
4 6.67 Improving Generalization In Meta Reinforcement Learning Using Neural Objectives 6 6 8 0.89 Accept (Spotlight)
5 6.33 Single Episode Transfer For Differing Environmental Dynamics In Reinforcement Learning 3 8 8 5.56 Accept (Poster)
5 6.33 Decentralized Distributed Ppo: Mastering Pointgoal Navigation 3 8 8 5.56 Accept (Poster)
6 6.25 Geometric Insights Into The Convergence Of Nonlinear Td Learning 8 3 6 8 4.19 Accept (Poster)
6 6.25 Dynamics-aware Embeddings 3 8 6 8 4.19 Accept (Poster)
7 6.20 Reanalysis Of Variance Reduced Temporal Difference Learning 8 8 6 3 6 3.36 Accept (Poster)
8 6.00 Q-learning With Ucb Exploration Is Sample Efficient For Infinite-horizon Mdp 6 6 6 6 0.00 Accept (Poster)
8 6.00 Automated Curriculum Generation Through Setter-solver Interactions 6 6 6 0.00 Accept (Poster)
8 6.00 Optimistic Exploration Even With A Pessimistic Initialisation 6 6 6 0.00 Accept (Poster)
8 6.00 Multi-agent Reinforcement Learning For Networked System Control 6 6 6 0.00 Accept (Poster)
8 6.00 A Learning-based Iterative Method For Solving Vehicle Routing Problems 6 6 6 0.00 Accept (Poster)
8 6.00 Sharing Knowledge In Multi-task Deep Reinforcement Learning 6 6 6 0.00 Accept (Poster)
8 6.00 Rtfm: Generalising To New Environment Dynamics Via Reading 6 6 6 0.00 Accept (Poster)
8 6.00 Meta Reinforcement Learning With Autonomous Inference Of Subtask Dependencies 6 6 6 0.00 Accept (Poster)
8 6.00 Projection Based Constrained Policy Optimization 6 6 6 0.00 Accept (Poster)
8 6.00 Graph Constrained Reinforcement Learning For Natural Language Action Spaces 6 6 6 0.00 Accept (Poster)
8 6.00 V-mpo: On-policy Maximum A Posteriori Policy Optimization For Discrete And Continuous Control 6 6 6 0.00 Accept (Poster)
8 6.00 Thinking While Moving: Deep Reinforcement Learning With Concurrent Control 6 6 6 0.00 Accept (Poster)
8 6.00 Keep Doing What Worked: Behavior Modelling Priors For Offline Reinforcement Learning 6 6 6 0.00 Accept (Poster)
8 6.00 Imitation Learning Via Off-policy Distribution Matching 6 6 6 0.00 Accept (Poster)
8 6.00 Adversarial Autoaugment 6 6 6 0.00 Accept (Poster)
8 6.00 Option Discovery Using Deep Skill Chaining 6 6 6 0.00 Accept (Poster)
8 6.00 State-only Imitation With Transition Dynamics Mismatch 6 6 6 0.00 Accept (Poster)
8 6.00 The Gambler’s Problem And Beyond 6 6 6 0.00 Accept (Poster)
8 6.00 Structured Object-aware Physics Prediction For Video Modeling And Planning 6 6 6 0.00 Accept (Poster)
8 6.00 Dynamical Distance Learning For Semi-supervised And Unsupervised Skill Discovery 6 6 6 0.00 Accept (Poster)
8 6.00 Exploration In Reinforcement Learning With Deep Covering Options 6 6 6 0.00 Accept (Poster)
8 6.00 Cm3: Cooperative Multi-goal Multi-stage Multi-agent Reinforcement Learning 6 6 6 0.00 Accept (Poster)
8 6.00 Learning To Coordinate Manipulation Skills Via Skill Behavior Diversification 6 6 6 0.00 Accept (Poster)
8 6.00 Composing Task-agnostic Policies With Deep Reinforcement Learning 6 6 6 0.00 Accept (Poster)
8 6.00 Frequency-based Search-control In Dyna 6 6 6 0.00 Accept (Poster)
8 6.00 Black-box Off-policy Estimation For Infinite-horizon Reinforcement Learning 6 6 6 0.00 Accept (Poster)
8 6.00 Action Semantics Network: Considering The Effects Of Actions In Multiagent Systems 6 6 6 0.00 Accept (Poster)
8 6.00 Caql: Continuous Action Q-learning 6 6 0.00 Accept (Poster)
8 6.00 Reinforced Active Learning For Image Segmentation 6 6 0.00 Accept (Poster)
8 6.00 The Variational Bandwidth Bottleneck: Stochastic Evaluation On An Information Budget 6 6 0.00 Accept (Poster)
8 6.00 Hierarchical Foresight: Self-supervised Learning Of Long-horizon Tasks Via Visual Subgoal Generation 6 6 0.00 Accept (Poster)
9 5.75 Maximum Likelihood Constraint Inference For Inverse Reinforcement Learning 8 6 3 6 3.19 Accept (Spotlight)
9 5.75 Autoq: Automated Kernel-wise Neural Network Quantization 6 6 8 3 3.19 Accept (Poster)
9 5.75 Varibad: A Very Good Method For Bayes-adaptive Deep Rl Via Meta-learning 8 6 8 1 8.19 Accept (Poster)
10 5.67 Watch, Try, Learn: Meta-learning From Demonstrations And Rewards 8 3 6 4.22 Accept (Poster)
10 5.67 Population-guided Parallel Policy Search For Reinforcement Learning 6 8 3 4.22 Accept (Poster)
10 5.67 A Simple Randomization Technique For Generalization In Deep Reinforcement Learning 8 3 6 4.22 Accept (Poster)
10 5.67 On The Weaknesses Of Reinforcement Learning For Neural Machine Translation 8 6 3 4.22 Accept (Poster)
10 5.67 State Alignment-based Imitation Learning 6 8 3 4.22 Accept (Poster)
10 5.67 Finding And Visualizing Weaknesses Of Deep Reinforcement Learning Agents 8 6 3 4.22 Accept (Poster)
10 5.67 Model-augmented Actor-critic: Backpropagating Through Paths 3 6 8 4.22 Accept (Poster)
10 5.67 Behaviour Suite For Reinforcement Learning 8 3 6 4.22 Accept (Spotlight)
10 5.67 Learning Heuristics For Quantified Boolean Formulas Through Reinforcement Learning 6 8 3 4.22 Accept (Poster)
10 5.67 Maxmin Q-learning: Controlling The Estimation Bias Of Q-learning 8 6 3 4.22 Accept (Poster)
10 5.67 Hypermodels For Exploration 8 3 6 4.22 Accept (Poster)
11 5.50 Sub-policy Adaptation For Hierarchical Reinforcement Learning 3 8 6.25 Accept (Poster)
11 5.50 Svqn: Sequential Variational Soft Q-learning Networks 3 8 6.25 Accept (Poster)
12 5.25 Impact: Importance Weighted Asynchronous Architectures With Clipped Target Networks 6 3 6 6 1.69 Accept (Poster)
13 5.00 Ranking Policy Gradient 6 3 6 2.00 Accept (Poster)
13 5.00 Model-based Reinforcement Learning For Biological Sequence Design 6 3 6 2.00 Accept (Poster)
13 5.00 Learning Nearly Decomposable Value Functions Via Communication Minimization 6 6 3 2.00 Accept (Poster)
13 5.00 Implementing Inductive Bias For Different Navigation Tasks Through Diverse Rnn Attractors 3 6 6 2.00 Accept (Poster)
13 5.00 Toward Evaluating Robustness Of Deep Reinforcement Learning With Continuous Control 6 3 6 2.00 Accept (Poster)
13 5.00 Learning Efficient Parameter Server Synchronization Policies For Distributed Sgd 6 3 6 2.00 Accept (Poster)
13 5.00 Episodic Reinforcement Learning With Associative Memory 6 3 6 2.00 Accept (Poster)
14 4.67 Logic And The 2-simplicial Transformer 8 3 3 5.56 Accept (Poster)
15 4.00 Exploratory Not Explanatory: Counterfactual Analysis Of Saliency Maps For Deep Rl 1 3 8 8.67 Accept (Poster)
15 4.00 Playing The Lottery With Rewards And Multiple Languages: Lottery Tickets In Rl And Nlp 3 3 6 2.00 Accept (Poster)

Dynamics-aware Unsupervised Skill Discovery

Archit Sharma · Shixiang Gu · Sergey Levine · Vikash Kumar · Karol Hausman

Rating: [8,8,8]

OpenReview

Abstract

Conventionally, model-based reinforcement learning (MBRL) aims to learn a global model for the dynamics of the environment. A good model can potentially enable planning algorithms to generate a large variety of behaviors and solve diverse tasks. However, learning an accurate model for complex dynamical systems is difficult, and even then, the model might not generalize well outside the distribution of states on which it was trained. In this work, we combine model-based learning with model-free learning of primitives that make model-based planning easy. To that end, we aim to answer the question: how can we discover skills whose outcomes are easy to predict? We propose an unsupervised learning algorithm, Dynamics-Aware Discovery of Skills (DADS), which simultaneously discovers predictable behaviors and learns their dynamics. Our method can leverage continuous skill spaces, theoretically, allowing us to learn infinitely many behaviors even for high-dimensional state-spaces. We demonstrate that zero-shot planning in the learned latent space significantly outperforms standard MBRL and model-free goal-conditioned RL, can handle sparse-reward tasks, and substantially improves over prior hierarchical RL methods for unsupervised skill discovery.
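The abstract does not spell out the objective, but the "skills whose outcomes are easy to predict" idea can be sketched as an intrinsic reward that compares the learned skill dynamics q(s'|s, z) against a marginal over other skills sampled from the prior. The snippet below is a rough, hypothetical rendering in that spirit; log_q and prior_skills are stand-ins, not the paper's implementation.

```python
import numpy as np

def dads_style_intrinsic_reward(log_q, s, z, s_next, prior_skills):
    """Rough sketch of a predictability-based skill reward.

    log_q(s_next, s, z): log-density of a learned skill-dynamics model q(s'|s, z).
    prior_skills: skills sampled from the skill prior, used to approximate the
    marginal density of s_next. The reward is high when the transition is
    predictable under z but not under typical other skills, so skills become
    both predictable and distinct.
    """
    predictability = log_q(s_next, s, z)
    marginal = np.log(np.mean([np.exp(log_q(s_next, s, z_i)) for z_i in prior_skills]))
    return predictability - marginal
```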

Automated Curriculum Generation Through Setter-solver Interactions

Andrew Lampinen · Sebastien Racaniere · Adam Santoro · David Reichert · Vlad Firoiu · Timothy Lillicrap

Rating: [6,6,6]

OpenReview

Abstract

Reinforcement learning algorithms use correlations between policies and rewards to improve agent performance. But in dynamic or sparsely rewarding environments these correlations are often too small, or rewarding events are too infrequent to make learning feasible. Human education instead relies on curricula – the breakdown of tasks into simpler, static challenges with dense rewards – to build up to complex behaviors. While curricula are also useful for artificial agents, hand-crafting them is time consuming. This has led researchers to explore automatic curriculum generation. Here we explore automatic curriculum generation in rich, dynamic environments. Using a setter-solver paradigm we show the importance of considering goal validity, goal feasibility, and goal coverage to construct useful curricula. We demonstrate the success of our approach in rich but sparsely rewarding 2D and 3D environments, where an agent is tasked to achieve a single goal selected from a set of possible goals that varies between episodes, and identify challenges for future work. Finally, we demonstrate the value of a novel technique that guides agents towards a desired goal distribution. Altogether, these results represent a substantial step towards applying automatic task curricula to learn complex, otherwise unlearnable goals, and to our knowledge are the first to demonstrate automated curriculum generation for goal-conditioned agents in environments where the possible goals vary between episodes.

Ranking Policy Gradient

Kaixiang Lin · Jiayu Zhou

Rating: [6,3,6]

OpenReview

Abstract

Sample inefficiency is a long-lasting problem in reinforcement learning (RL). The state-of-the-art uses the action value function to derive a policy, which usually involves an extensive search over the state-action space and unstable optimization. Towards sample-efficient RL, we propose ranking policy gradient (RPG), a policy gradient method that learns the optimal rank of a set of discrete actions. To accelerate the learning of policy gradient methods, we establish the equivalence between maximizing the lower bound of return and imitating a near-optimal policy without accessing any oracles. These results lead to a general off-policy learning framework, which preserves the optimality, reduces variance, and improves the sample-efficiency. We conduct extensive experiments showing that when consolidating with the off-policy learning framework, RPG substantially reduces the sample complexity compared to the state-of-the-art.

Is A Good Representation Sufficient For Sample Efficient Reinforcement Learning?

Simon S. Du · Sham M. Kakade · Ruosong Wang · Lin F. Yang

Rating: [8,8,6]

OpenReview

Abstract

Modern deep learning methods provide effective means to learn good representations. However, is a good representation itself sufficient for efficient reinforcement learning? This question is largely unexplored, and the extant body of literature mainly focuses on conditions which permit efficient reinforcement learning with little understanding of what are necessary conditions for efficient reinforcement learning. This work provides strong negative results for reinforcement learning methods with function approximation for which a good representation (feature extractor) is known to the agent, focusing on natural representational conditions relevant to value-based learning and policy-based learning. For value-based learning, we show that even if the agent has a highly accurate linear representation, the agent still needs to sample an exponential number of trajectories in order to find a near-optimal policy. For policy-based learning, we show that even if the agent's linear representation is capable of perfectly predicting the optimal action at any state, the agent still needs to sample an exponential number of trajectories in order to find a near-optimal policy. These lower bounds highlight the fact that having a good (value-based or policy-based) representation in and of itself is insufficient for efficient reinforcement learning and that additional assumptions are needed. In particular, these results provide new insights into why the analysis of existing provably efficient reinforcement learning methods makes assumptions which are partly model-based in nature. Furthermore, our lower bounds also imply exponential separations on the sample complexity between 1) value-based learning with perfect representation and value-based learning with a good-but-not-perfect representation, 2) value-based learning and policy-based learning, 3) policy-based learning and supervised learning and 4) reinforcement learning and imitation learning.

Optimistic Exploration Even With A Pessimistic Initialisation

Tabish Rashid · Bei Peng · Wendelin Boehmer · Shimon Whiteson

Rating: [6,6,6]

OpenReview

Abstract

Optimistic initialisation is an effective strategy for efficient exploration in reinforcement learning (RL). In the tabular case, all provably efficient model-free algorithms rely on it. However, model-free deep RL algorithms do not use optimistic initialisation despite taking inspiration from these provably efficient tabular algorithms. In particular, in scenarios with only positive rewards, Q-values are initialised at their lowest possible values due to commonly used network initialisation schemes, a pessimistic initialisation. Merely initialising the network to output optimistic Q-values is not enough, since we cannot ensure that they remain optimistic for novel state-action pairs, which is crucial for exploration. We propose a simple count-based augmentation to pessimistically initialised Q-values that separates the source of optimism from the neural network. We show that this scheme is provably efficient in the tabular setting and extend it to the deep RL setting. Our algorithm, Optimistic Pessimistically Initialised Q-Learning (OPIQ), augments the Q-value estimates of a DQN-based agent with count-derived bonuses to ensure optimism during both action selection and bootstrapping. We show that OPIQ outperforms non-optimistic DQN variants that utilise a pseudocount-based intrinsic motivation in hard exploration tasks, and that it predicts optimistic estimates for novel state-action pairs.
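The count-based augmentation described above can be sketched as a simple additive bonus on top of the network's Q-values. The constants c and m below are hypothetical hyperparameters, and counts stands in for whatever (pseudo)count the agent maintains; this is a sketch of the idea, not the paper's code.

```python
import numpy as np

def optimistic_q_values(q_values, counts, c=1.0, m=2.0):
    """Sketch of count-derived optimism: pessimistically initialised Q-values
    plus a bonus that decays as a state-action pair is visited more often.
    As the abstract describes, the same augmented values would be used both
    for action selection and for the bootstrap target."""
    return q_values + c / np.power(counts + 1.0, m)
```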

Multi-agent Reinforcement Learning For Networked System Control

Tianshu Chu · Sandeep Chinchali · Sachin Katti

Rating: [6,6,6]

OpenReview

Abstract

This paper considers multi-agent reinforcement learning (MARL) in networked system control. Specifically, each agent learns a decentralized control policy based on local observations and messages from connected neighbors. We formulate such a networked MARL (NMARL) problem as a spatiotemporal Markov decision process and introduce a spatial discount factor to stabilize the training of each local agent. Further, we propose a new differentiable communication protocol, called NeurComm, to reduce information loss and non-stationarity in NMARL. Experiments in realistic NMARL scenarios of adaptive traffic signal control and cooperative adaptive cruise control show that an appropriate spatial discount factor effectively enhances the learning curves of non-communicative MARL algorithms, while NeurComm outperforms existing communication protocols in both learning efficiency and control performance.
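As a rough illustration of a spatial discount factor (my own minimal rendering, not the paper's formulation or code), each agent's learning signal can mix neighbouring agents' rewards with weights that decay with network distance:

```python
def spatially_discounted_reward(own_reward, neighbor_rewards, distances, alpha=0.9):
    """Sketch: mix rewards of networked agents, down-weighted by graph distance.
    alpha = 0 recovers fully independent learners; alpha = 1 weights all agents
    equally (a fully cooperative signal). Argument names are hypothetical."""
    return own_reward + sum(alpha ** d * r for r, d in zip(neighbor_rewards, distances))
```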

A Learning-based Iterative Method For Solving Vehicle Routing Problems

Hao Lu · Xingwen Zhang · Shuang Yang

Rating: [6,6,6]

OpenReview

Abstract

This paper is concerned with solving combinatorial optimization problems, in particular, the capacitated vehicle routing problem (CVRP). Classical Operations Research (OR) algorithms such as LKH3 (Helsgaun, 2017) are extremely inefficient (e.g., 13 hours on CVRP of only size 100) and difficult to scale to larger-size problems. Machine learning based approaches have recently been shown to be promising, partly because of their efficiency (once trained, they can perform solving within minutes or even seconds). However, there is still a considerable gap between the quality of a machine learned solution and what OR methods can offer (e.g., on CVRP-100, the best result of learned solutions is between 16.10-16.80, significantly worse than LKH3's 15.65). In this paper, we present the first learning based approach for CVRP that is efficient in solving speed and at the same time outperforms OR methods. Starting with a random initial solution, our algorithm learns to iteratively refine the solution with an improvement operator, selected by a reinforcement learning based controller. The improvement operator is selected from a pool of powerful operators that are customized for routing problems. By combining the strengths of the two worlds, our approach achieves new state-of-the-art results on CVRP, e.g., an average cost of 15.57 on CVRP-100.
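The "learn to iteratively refine" loop can be sketched as follows. The controller, the operator pool, and the simple accept-everything rule are assumptions for illustration, not the paper's actual search procedure.

```python
def learned_improvement_search(initial_solution, operators, controller, cost, n_steps=1000):
    """Sketch of an RL-guided improvement loop for routing problems.

    controller(solution) -> index of the improvement operator to apply next.
    operators[i](solution) -> a perturbed candidate solution.
    cost(solution) -> total route cost to minimize.
    """
    best = current = initial_solution
    for _ in range(n_steps):
        op = operators[controller(current)]   # the learned policy picks an operator
        current = op(current)                 # apply it to get the next solution
        if cost(current) < cost(best):
            best = current                    # track the best tour seen so far
    return best
```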

Making Efficient Use Of Demonstrations To Solve Hard Exploration Problems

Caglar Gulcehre · Tom Le Paine · Bobak Shahriari · Misha Denil · Matt Hoffman · Hubert Soyer · Richard Tanburn · Steven Kapturowski · Neil Rabinowitz · Duncan Williams · Gabriel Barth-Maron · Ziyu Wang · Nando de Freitas · Worlds Team

Rating: [6,8,6]

OpenReview

Abstract

This paper introduces R2D3, an agent that makes efficient use of demonstrations to solve hard exploration problems in partially observable environments with highly variable initial conditions. We also introduce a suite of eight tasks that combine these three properties, and show that R2D3 can solve several of the tasks where other state of the art methods (both with and without demonstrations) fail to see even a single successful trajectory after tens of billions of steps of exploration.

Intrinsic Motivation For Encouraging Synergistic Behavior

Rohan Chitnis · Shubham Tulsiani · Saurabh Gupta · Abhinav Gupta

Rating: [6,8,6]

OpenReview

Abstract

We study the role of intrinsic motivation as an exploration bias for reinforcement learning in sparse-reward synergistic tasks, which are tasks where multiple agents must work together to achieve a goal they could not individually. Our key idea is that a good guiding principle for intrinsic motivation in synergistic tasks is to take actions which affect the world in ways that would not be achieved if the agents were acting on their own. Thus, we propose to incentivize agents to take (joint) actions whose effects cannot be predicted via a composition of the predicted effect for each individual agent. We study two instantiations of this idea, one based on the true states encountered, and another based on a dynamics model trained concurrently with the policy. While the former is simpler, the latter has the benefit of being analytically differentiable with respect to the action taken. We validate our approach in robotic bimanual manipulation tasks with sparse rewards; we find that our approach yields more efficient learning than both 1) training with only the sparse reward and 2) using the typical surprise-based formulation of intrinsic motivation, which does not bias toward synergistic behavior. Videos are available on the project webpage: https://sites.google.com/view/iclr2020-synergistic.
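The compositional-prediction bonus can be sketched like this; joint_model and single_model (and their signatures) are hypothetical stand-ins for the paper's forward models, and the choice of an L2 discrepancy is an assumption.

```python
import numpy as np

def synergy_intrinsic_reward(joint_model, single_model, state, a1, a2):
    """Sketch: reward the pair of agents when the effect of their joint action
    cannot be explained by composing the predicted effects of each agent
    acting on its own."""
    joint_pred = joint_model(state, a1, a2)                       # predicted joint effect
    composed = single_model(single_model(state, a1, agent=0),     # agent 0 acts alone...
                            a2, agent=1)                          # ...then agent 1 acts alone
    return float(np.linalg.norm(joint_pred - composed))
```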

Contrastive Learning Of Structured World Models

Thomas Kipf · Elise van der Pol · Max Welling

Rating: [8,8,8]

OpenReview

Abstract

A structured understanding of our world in terms of objects, relations, and hierarchies is an important component of human cognition. Learning such a structured world model from raw sensory data remains a challenge. As a step towards this goal, we introduce Contrastively-trained Structured World Models (C-SWMs). C-SWMs utilize a contrastive approach for representation learning in environments with compositional structure. We structure each state embedding as a set of object representations and their relations, modeled by a graph neural network. This allows objects to be discovered from raw pixel observations without direct supervision as part of the learning process. We evaluate C-SWMs on compositional environments involving multiple interacting objects that can be manipulated independently by an agent, simple Atari games, and a multi-object physics simulation. Our experiments demonstrate that C-SWMs can overcome limitations of models based on pixel reconstruction and outperform typical representatives of this model class in highly structured environments, while learning interpretable object-based representations.
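A minimal sketch of a contrastive transition loss in this spirit is shown below; the tensor shapes, the hinge margin, and the negative-sampling scheme are assumptions, not necessarily the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def contrastive_transition_loss(z, z_next, z_neg, transition_model, action, margin=1.0):
    """z, z_next, z_neg: object-factored state embeddings of shape
    [batch, num_objects, embed_dim]. The transition model (e.g., a GNN)
    predicts a latent delta; the true next embedding should be close to the
    prediction while negative samples are pushed away."""
    z_pred = z + transition_model(z, action)
    positive = ((z_pred - z_next) ** 2).sum(dim=(1, 2))   # pull prediction toward the truth
    negative = ((z_neg - z_next) ** 2).sum(dim=(1, 2))    # hinge pushes negatives away
    return (positive + F.relu(margin - negative)).mean()
```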

Maximum Likelihood Constraint Inference For Inverse Reinforcement Learning

Dexter R.R. Scobee · S. Shankar Sastry

Rating: [8,6,3,6]

OpenReview

Abstract

While most approaches to the problem of Inverse Reinforcement Learning (IRL) focus on estimating a reward function that best explains an expert agent’s policy or demonstrated behavior on a control task, it is often the case that such behavior is more succinctly represented by a simple reward combined with a set of hard constraints. In this setting, the agent is attempting to maximize cumulative rewards subject to these given constraints on their behavior. We reformulate the problem of IRL on Markov Decision Processes (MDPs) such that, given a nominal model of the environment and a nominal reward function, we seek to estimate state, action, and feature constraints in the environment that motivate an agent’s behavior. Our approach is based on the Maximum Entropy IRL framework, which allows us to reason about the likelihood of an expert agent’s demonstrations given our knowledge of an MDP. Using our method, we can infer which constraints can be added to the MDP to most increase the likelihood of observing these demonstrations. We present an algorithm which iteratively infers the Maximum Likelihood Constraint to best explain observed behavior, and we evaluate its efficacy using both simulated behavior and recorded data of humans navigating around an obstacle.
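The iterative selection the abstract describes can be sketched as a greedy loop over candidate constraints. The demo_likelihood helper (likelihood of the demonstrations under a constrained MDP) is hypothetical; the stopping rule is my assumption.

```python
def greedy_constraint_inference(candidates, demo_likelihood, mdp, max_constraints=10):
    """Sketch: repeatedly add the single candidate constraint that most increases
    the (maximum-entropy) likelihood of the observed demonstrations under the
    constrained MDP, stopping when no candidate helps any further."""
    selected = []
    for _ in range(max_constraints):
        best = max(candidates, key=lambda c: demo_likelihood(mdp, selected + [c]))
        if demo_likelihood(mdp, selected + [best]) <= demo_likelihood(mdp, selected):
            break  # adding another constraint no longer explains the demos better
        selected.append(best)
    return selected
```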

Watch, Try, Learn: Meta-learning From Demonstrations And Rewards

Allan Zhou · Eric Jang · Daniel Kappler · Alex Herzog · Mohi Khansari · Paul Wohlhart · Yunfei Bai · Mrinal Kalakrishnan · Sergey Levine · Chelsea Finn

Rating: [8,3,6]

OpenReview

Abstract

Imitation learning allows agents to learn complex behaviors from demonstrations. However, learning a complex vision-based task may require an impractical number of demonstrations. Meta-imitation learning is a promising approach towards enabling agents to learn a new task from one or a few demonstrations by leveraging experience from learning similar tasks. In the presence of task ambiguity or unobserved dynamics, demonstrations alone may not provide enough information; an agent must also try the task to successfully infer a policy. In this work, we propose a method that can learn to learn from both demonstrations and trial-and-error experience with sparse reward feedback. In comparison to meta-imitation, this approach enables the agent to effectively and efficiently improve itself autonomously beyond the demonstration data. In comparison to meta-reinforcement learning, we can scale to substantially broader distributions of tasks, as the demonstration reduces the burden of exploration. Our experiments show that our method significantly outperforms prior approaches on a set of challenging, vision-based control tasks.

Autoq: Automated Kernel-wise Neural Network Quantization

Qian Lou · Feng Guo · Minje Kim · Lantao Liu · Lei Jiang

Rating: [6,6,8,3]

OpenReview

Abstract

Network quantization is one of the most hardware friendly techniques to enable the deployment of convolutional neural networks (CNNs) on low-power mobile devices. Recent network quantization techniques quantize each weight kernel in a convolutional layer independently for higher inference accuracy, since the weight kernels in a layer exhibit different variances and hence have different amounts of redundancy. The quantization bitwidth or bit number (QBN) directly decides the inference accuracy, latency, energy and hardware overhead. To effectively reduce the redundancy and accelerate CNN inferences, various weight kernels should be quantized with different QBNs. However, prior works use only one QBN to quantize each convolutional layer or the entire CNN, because the design space of searching a QBN for each weight kernel is too large. The hand-crafted heuristic of the kernel-wise QBN search is so sophisticated that domain experts can obtain only sub-optimal results. It is difficult for even deep reinforcement learning (DRL) DDPG-based agents to find a kernel-wise QBN configuration that can achieve reasonable inference accuracy. In this paper, we propose a hierarchical-DRL-based kernel-wise network quantization technique, AutoQ, to automatically search a QBN for each weight kernel, and choose another QBN for each activation layer. Compared to the models quantized by the state-of-the-art DRL-based schemes, on average, the same models quantized by AutoQ reduce the inference latency by 54.06%, and decrease the inference energy consumption by 50.69%, while achieving the same inference accuracy.

Sqil: Imitation Learning Via Reinforcement Learning With Sparse Rewards

Siddharth Reddy · Anca D. Dragan · Sergey Levine

Rating: [8,6,6]

OpenReview

Abstract

Learning to imitate expert behavior from demonstrations can be challenging, especially in environments with high-dimensional, continuous observations and unknown dynamics. Supervised learning methods based on behavioral cloning (BC) suffer from distribution shift: because the agent greedily imitates demonstrated actions, it can drift away from demonstrated states due to error accumulation. Recent methods based on reinforcement learning (RL), such as inverse RL and generative adversarial imitation learning (GAIL), overcome this issue by training an RL agent to match the demonstrations over a long horizon. Since the true reward function for the task is unknown, these methods learn a reward function from the demonstrations, often using complex and brittle approximation techniques that involve adversarial training. We propose a simple alternative that still uses RL, but does not require learning a reward function. The key idea is to provide the agent with an incentive to match the demonstrations over a long horizon, by encouraging it to return to demonstrated states upon encountering new, out-of-distribution states. We accomplish this by giving the agent a constant reward of r=+1 for matching the demonstrated action in a demonstrated state, and a constant reward of r=0 for all other behavior. Our method, which we call soft Q imitation learning (SQIL), can be implemented with a handful of minor modifications to any standard Q-learning or off-policy actor-critic algorithm. Theoretically, we show that SQIL can be interpreted as a regularized variant of BC that uses a sparsity prior to encourage long-horizon imitation. Empirically, we show that SQIL outperforms BC and achieves competitive results compared to GAIL, on a variety of image-based and low-dimensional tasks in Box2D, Atari, and MuJoCo. This paper is a proof of concept that illustrates how a simple imitation method based on RL with constant rewards can be as effective as more complex methods that use learned rewards.
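Because the abstract states the modification precisely (r = +1 on demonstrated transitions, r = 0 otherwise, on top of any standard off-policy learner), it is easy to sketch. The 50/50 mixture of demonstration and agent experience below is an assumption about how the two buffers are combined.

```python
import random

def sqil_training_batch(demo_buffer, agent_buffer, batch_size):
    """Sketch of SQIL's reward relabelling: a constant +1 for demonstration
    transitions and 0 for the agent's own transitions. The resulting batch is
    handed to an unmodified Q-learning or off-policy actor-critic update."""
    half = batch_size // 2
    demo = [(s, a, 1.0, s2, done)
            for (s, a, _, s2, done) in random.sample(demo_buffer, half)]
    agent = [(s, a, 0.0, s2, done)
             for (s, a, _, s2, done) in random.sample(agent_buffer, batch_size - half)]
    return demo + agent
```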

Exploratory Not Explanatory: Counterfactual Analysis Of Saliency Maps For Deep Rl

Akanksha Atrey · Kaleigh Clary · David Jensen

Rating: [1,3,8]

OpenReview

Abstract

Saliency maps are often used to suggest explanations of the behavior of deep reinforcement learning (RL) agents. However, the explanations derived from saliency maps are often unfalsifiable and can be highly subjective. We introduce an empirical approach grounded in counterfactual reasoning to test the hypotheses generated from saliency maps and show that explanations suggested by saliency maps are often not supported by experiments. Our experiments suggest that saliency maps are best viewed as an exploratory tool rather than an explanatory tool.

Sharing Knowledge In Multi-task Deep Reinforcement Learning

Carlo D'Eramo · Davide Tateo · Andrea Bonarini · Marcello Restelli · Jan Peters

Rating: [6,6,6]

OpenReview

Abstract

We study the benefit of sharing representations among tasks to enable the effective use of deep neural networks in Multi-Task Reinforcement Learning. We leverage the assumption that learning from different tasks, sharing common properties, helps to generalize the knowledge across them, resulting in more effective feature extraction compared to learning a single task. Intuitively, the resulting set of features offers performance benefits when used by Reinforcement Learning algorithms. We prove this by providing theoretical guarantees that highlight the conditions under which it is convenient to share representations among tasks, extending the well-known finite-time bounds of Approximate Value-Iteration to the multi-task setting. In addition, we complement our analysis by proposing multi-task extensions of three Reinforcement Learning algorithms that we empirically evaluate on widely used Reinforcement Learning benchmarks showing significant improvements over the single-task counterparts in terms of sample efficiency and performance.
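The shared-representation idea maps onto a familiar architecture: one feature extractor shared across tasks with a separate output head per task. The sketch below is a generic rendering with hypothetical layer sizes, not the paper's multi-task extensions themselves.

```python
import torch.nn as nn

class SharedFeatureQNetwork(nn.Module):
    """Sketch: a shared torso feeding one Q-value head per task."""

    def __init__(self, obs_dim, n_actions, n_tasks, hidden=128):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList([nn.Linear(hidden, n_actions) for _ in range(n_tasks)])

    def forward(self, obs, task_id):
        # All tasks share the torso; only the final head is task-specific.
        return self.heads[task_id](self.shared(obs))
```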

Reinforcement Learning With Competitive Ensembles Of Information-constrained Primitives

Anirudh Goyal · Shagun Sodhani · Jonathan Binas · Xue Bin Peng · Sergey Levine · Yoshua Bengio

Rating: [8,6,6]

OpenReview

Abstract

Reinforcement learning agents that operate in diverse and complex environments can benefit from the structured decomposition of their behavior. Often, this is addressed in the context of hierarchical reinforcement learning, where the aim is to decompose a policy into lower-level primitives or options, and a higher-level meta-policy that triggers the appropriate behaviors for a given situation. However, the meta-policy must still produce appropriate decisions in all states. In this work, we propose a policy design that decomposes into primitives, similarly to hierarchical reinforcement learning, but without a high-level meta-policy. Instead, each primitive can decide for itself whether it wishes to act in the current state. We use an information-theoretic mechanism for enabling this decentralized decision: each primitive chooses how much information it needs about the current state to make a decision, and the primitive that requests the most information about the current state acts in the world. The primitives are regularized to use as little information as possible, which leads to natural competition and specialization. We experimentally demonstrate that this policy architecture improves over both flat and hierarchical policies in terms of generalization.

Rtfm: Generalising To New Environment Dynamics Via Reading

Victor Zhong · Tim Rocktäschel · Edward Grefenstette

Rating: [6,6,6]

OpenReview

Abstract

Obtaining policies that can generalise to new environments in reinforcement learning is challenging. In this work, we demonstrate that language understanding via a reading policy learner is a promising vehicle for generalisation to new environments. We propose a grounded policy learning problem, Read to Fight Monsters (RTFM), in which the agent must jointly reason over a language goal, relevant dynamics described in a document, and environment observations. We procedurally generate environment dynamics and corresponding language descriptions of the dynamics, such that agents must read to understand new environment dynamics instead of memorising any particular information. In addition, we propose txt2π, a model that captures three-way interactions between the goal, document, and observations. On RTFM, txt2π generalises to new environments with dynamics not seen during training via reading. Furthermore, our model outperforms baselines such as FiLM and language-conditioned CNNs on RTFM. Through curriculum learning, txt2π produces policies that excel on complex RTFM tasks requiring several reasoning and coreference steps.

Sub-policy Adaptation For Hierarchical Reinforcement Learning

Alexander Li · Carlos Florensa · Ignasi Clavera · Pieter Abbeel

Rating: [3,8]

OpenReview

Abstract

Hierarchical reinforcement learning is a promising approach to tackle long-horizon decision-making problems with sparse rewards. Unfortunately, most methods still decouple the lower-level skill acquisition process and the training of a higher level that controls the skills in a new task. Leaving the skills fixed can lead to significant sub-optimality in the transfer setting. In this work, we propose a novel algorithm to discover a set of skills, and continuously adapt them along with the higher level even when training on a new task. Our main contributions are two-fold. First, we derive a new hierarchical policy gradient with an unbiased latent-dependent baseline, and we introduce Hierarchical Proximal Policy Optimization (HiPPO), an on-policy method to efficiently train all levels of the hierarchy jointly. Second, we propose a method of training time-abstractions that improves the robustness of the obtained skills to environment changes. Code and videos are available at sites.google.com/view/hippo-rl.

Meta Reinforcement Learning With Autonomous Inference Of Subtask Dependencies

Sungryull Sohn · Hyunjae Woo · Jongwook Choi · Honglak Lee

Rating: [6,6,6]

OpenReview

Abstract

We propose and address a novel few-shot RL problem, where a task is characterized by a subtask graph which describes a set of subtasks and their dependencies that are unknown to the agent. The agent needs to quickly adapt to the task over a few episodes during the adaptation phase to maximize the return in the test phase. Instead of directly learning a meta-policy, we develop a Meta-learner with Subtask Graph Inference (MSGI), which infers the latent parameter of the task by interacting with the environment and maximizes the return given the latent parameter. To facilitate learning, we adopt an intrinsic reward inspired by upper confidence bound (UCB) that encourages efficient exploration. Our experimental results on two grid-world domains and StarCraft II environments show that the proposed method is able to accurately infer the latent task parameter, and to adapt more efficiently than existing meta RL and hierarchical RL methods.

Multi-agent Interactions Modeling With Correlated Policies

Minghuan Liu · Ming Zhou · Weinan Zhang · Yuzheng Zhuang · Jun Wang · Wulong Liu · Yong Yu

Rating: [6,6,8]

OpenReview

Abstract

In multi-agent systems, complex interacting behaviors arise due to heavy correlations among agents. However, prior works on modeling multi-agent interactions from demonstrations have largely been constrained by assuming the independence among policies and their reward structures. In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework with explicit modeling of correlated policies by approximating opponents’ policies. Consequently, we develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL), which allows for decentralized training and execution. Various experiments demonstrate that CoDAIL can better fit complex interactions close to the demonstrators and outperforms state-of-the-art multi-agent imitation learning methods.

Reanalysis Of Variance Reduced Temporal Difference Learning

Tengyu Xu · Zhe Wang · Yi Zhou · Yingbin Liang

Rating: [8,8,6,3,6]

OpenReview

Abstract

Temporal difference (TD) learning is a popular algorithm for policy evaluation in reinforcement learning, but the vanilla TD can substantially suffer from the inherent optimization variance. A variance reduced TD (VRTD) algorithm was proposed by Korda and La (2015), which applies the variance reduction technique directly to the online TD learning with Markovian samples. In this work, we first point out the technical errors in the analysis of VRTD in Korda and La (2015), and then provide a mathematically solid analysis of the non-asymptotic convergence of VRTD and its variance reduction performance. We show that VRTD is guaranteed to converge to a neighborhood of the fixed-point solution of TD at a linear convergence rate. Furthermore, the variance error (for both i.i.d. and Markovian sampling) and the bias error (for Markovian sampling) of VRTD are significantly reduced by the batch size of variance reduction in comparison to those of vanilla TD.
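For readers unfamiliar with variance-reduced TD, the update has the same shape as SVRG applied to the TD(0) semi-gradient with linear features. The notation below is mine, written as a sketch rather than a transcription of the paper.

```latex
g_t(\theta) = \bigl(r_t + \gamma\,\phi(s_{t+1})^{\top}\theta - \phi(s_t)^{\top}\theta\bigr)\,\phi(s_t),
\qquad
\theta_{t+1} = \theta_t + \alpha\,\bigl(g_t(\theta_t) - g_t(\tilde{\theta}) + \bar{g}(\tilde{\theta})\bigr)
```

Here θ̃ is a periodically refreshed reference iterate and ḡ(θ̃) is its TD update averaged over a batch of samples, which is what cancels most of the per-sample variance.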

Population-guided Parallel Policy Search For Reinforcement Learning

Whiyoung Jung · Giseung Park · Youngchul Sung

Rating: [6,8,3]

OpenReview

Abstract

In this paper, a new population-guided parallel learning scheme is proposed to enhance the performance of off-policy reinforcement learning (RL). In the proposed scheme, multiple identical learners with their own value-functions and policies share a common experience replay buffer, and search for a good policy in collaboration with the guidance of the best policy information. The key point is that the information of the best policy is fused in a soft manner by constructing an augmented loss function for policy update to enlarge the overall search region by the multiple learners. The guidance by the previous best policy and the enlarged range enable faster and better policy search, and monotone improvement of the expected cumulative return by the proposed scheme is proved theoretically. Working algorithms are constructed by applying the proposed scheme to the twin delayed deep deterministic (TD3) policy gradient algorithm, and numerical results show that the constructed P3S-TD3 outperforms most of the current state-of-the-art RL algorithms, and the gain is significant in sparse-reward environments.
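The "soft fusion of best-policy information" can be sketched as an auxiliary penalty added to each learner's policy loss. The names and the squared-distance choice below are assumptions for illustration, not the paper's exact augmented loss.

```python
def guided_policy_loss(base_loss, policy, best_policy, states, beta=1.0):
    """Sketch: augment a learner's usual policy objective with a penalty on the
    distance between its actions and those of the current best learner on a
    shared batch of states, pulling learners toward what currently works while
    still letting each explore its own region."""
    guidance = ((policy(states) - best_policy(states)) ** 2).mean()
    return base_loss + beta * guidance
```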

Projection Based Constrained Policy Optimization

Tsung-yen Yang · Justinian Rosca · Karthik Narasimhan · Peter J. Ramadge

Rating: [6,6,6]

OpenReview

Abstract

In this paper, we consider the problem of learning control policies that optimize a reward function while satisfying constraints due to considerations of safety, fairness, or other costs. We propose a new algorithm - Projection Based Constrained Policy Optimization (PCPO), an iterative method for optimizing policies in a two-step process - the first step performs an unconstrained update while the second step reconciles the constraint violation by projecting the policy back onto the constraint set. We theoretically analyze PCPO and provide a lower bound on reward improvement, as well as an upper bound on constraint violation for each policy update. We further characterize the convergence of PCPO with projection based on two different metrics - L2 norm and Kullback-Leibler divergence. Our empirical results over several control tasks demonstrate that our algorithm achieves superior performance, averaging more than 3.5 times less constraint violation and around 15% higher reward compared to state-of-the-art methods.
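In symbols, the two-step update has roughly the following shape. This is a sketch in my own notation: g is the reward gradient, H a trust-region curvature matrix, J_C the constraint cost with gradient b and limit d, and the projection metric D is either the L2 norm or the KL divergence mentioned in the abstract.

```latex
\theta^{k+\frac{1}{2}} = \arg\max_{\theta}\; g^{\top}(\theta - \theta^{k})
\quad \text{s.t.}\quad \tfrac{1}{2}(\theta - \theta^{k})^{\top} H (\theta - \theta^{k}) \le \delta,
\qquad
\theta^{k+1} = \arg\min_{\theta}\; \lVert \theta - \theta^{k+\frac{1}{2}} \rVert_{D}
\quad \text{s.t.}\quad J_{C}(\theta^{k}) + b^{\top}(\theta - \theta^{k}) \le d.
```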

Influence-based Multi-agent Exploration

Tonghan Wang* · Jianhao Wang* · Yi Wu · Chongjie Zhang

Rating: [6,6,8]

OpenReview

Abstract

Intrinsically motivated reinforcement learning aims to address the exploration challenge for sparse-reward tasks. However, the study of exploration methods in transition-dependent multi-agent settings is largely absent from the literature. We aim to take a step towards solving this problem. We present two exploration methods: exploration via information-theoretic influence (EITI) and exploration via decision-theoretic influence (EDTI), by exploiting the role of interaction in coordinated behaviors of agents. EITI uses mutual information to capture the influence of one agent's behavior on the transition dynamics of other agents. EDTI uses a novel intrinsic reward, called Value of Interaction (VoI), to characterize and quantify the influence of one agent's behavior on expected returns of other agents. By optimizing the EITI or EDTI objective as a regularizer, agents are encouraged to coordinate their exploration and learn policies to optimize team performance. We show how to optimize these regularizers so that they can be easily integrated with policy gradient reinforcement learning. The resulting update rule draws a connection between coordinated exploration and intrinsic reward distribution. Finally, we empirically demonstrate the significant strength of our method in a variety of multi-agent scenarios.

Graph Constrained Reinforcement Learning For Natural Language Action Spaces

Prithviraj Ammanabrolu · Matthew Hausknecht

Rating: [6,6,6]

OpenReview

Abstract

Interactive Fiction games are text-based simulations in which an agent interacts with the world purely through natural language. They are ideal environments for studying how to extend reinforcement learning agents to meet the challenges of natural language understanding, partial observability, and action generation in combinatorially-large text-based action spaces. We present KG-A2C, an agent that builds a dynamic knowledge graph while exploring and generates actions using a template-based action space. We contend that the dual uses of the knowledge graph to reason about game state and to constrain natural language generation are the keys to scalable exploration of combinatorially large natural language actions. Results across a wide variety of IF games show that KG-A2C outperforms current IF agents despite the exponential increase in action space size.

A Simple Randomization Technique For Generalization In Deep Reinforcement Learning

Kimin Lee · Kibok Lee · Jinwoo Shin · Honglak Lee

Rating: [8,3,6]

OpenReview

Abstract

Deep reinforcement learning (RL) agents often fail to generalize to unseen environments (yet semantically similar to trained ones), particularly when they are trained on high-dimensional state spaces such as images. In this paper, we propose a simple technique to improve the generalization ability of deep RL agents by introducing a randomized (convolutional) neural network that randomly perturbs input observations. It enables trained agents to adapt to new domains by learning robust features which are invariant across varied and randomized input observations. Furthermore, we propose an inference method based on Monte Carlo approximation to reduce the variance induced by this randomization. The proposed method significantly outperforms conventional techniques, including various regularization and data augmentation techniques, across 2D CoinRun, 3D DeepMind Lab exploration, and 3D robotics control tasks.
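A minimal sketch of the input-randomization idea is shown below, assuming image observations in PyTorch; the kernel size, initialization, and re-initialization schedule are my assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

def randomize_observations(obs):
    """Sketch: push observations through an untrained, freshly re-initialized
    convolution so the agent must rely on features that are invariant to such
    perturbations. obs: float tensor of shape [batch, channels, height, width]."""
    channels = obs.shape[1]
    rand_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
    nn.init.xavier_normal_(rand_conv.weight)  # weights are re-drawn, never trained
    with torch.no_grad():
        return rand_conv(obs)
```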

On The Weaknesses Of Reinforcement Learning For Neural Machine Translation

Leshem Choshen · Lior Fox · Zohar Aizenbud · Omri Abend

Rating: [8,6,3]

OpenReview

Abstract

Reinforcement learning (RL) is frequently used to increase performance in text generation tasks, including machine translation (MT), notably through the use of Minimum Risk Training (MRT) and Generative Adversarial Networks (GAN). However, little is known about what and how these methods learn in the context of MT. We prove that one of the most common RL methods for MT does not optimize the expected reward, as well as show that other methods take an infeasibly long time to converge. In fact, our results suggest that RL practices in MT are likely to improve performance only where the pre-trained parameters are already close to yielding the correct translation. Our findings further suggest that observed gains may be due to effects unrelated to the training signal, concretely, changes in the shape of the distribution curve.

Learning The Arrow Of Time For Problems In Reinforcement Learning

Nasim Rahaman · Steffen Wolf · Anirudh Goyal · Roman Remme · Yoshua Bengio

Rating: [6,6,8]

OpenReview

Abstract

We humans have an innate understanding of the asymmetric progression of time, which we use to efficiently and safely perceive and manipulate our environment. Drawing inspiration from that, we approach the problem of learning an arrow of time in a Markov (Decision) Process. We illustrate how a learned arrow of time can capture salient information about the environment, which in turn can be used to measure reachability, detect side-effects and to obtain an intrinsic reward signal. Finally, we propose a simple yet effective algorithm to parameterize the problem at hand and learn an arrow of time with a function approximator (here, a deep neural network). Our empirical results span a selection of discrete and continuous environments, and demonstrate for a class of stochastic processes that the learned arrow of time agrees reasonably well with a well known notion of an arrow of time due to Jordan, Kinderlehrer and Otto (1998).

Model-based Reinforcement Learning For Biological Sequence Design

Christof Angermueller · David Dohan · David Belanger · Ramya Deshpande · Kevin Murphy · Lucy Colwell

Rating: [6,3,6]

OpenReview

Abstract

The ability to design biological structures such as DNA or proteins would have considerable medical and industrial impact. Doing so presents a challenging black-box optimization problem characterized by the large-batch, low-round setting due to the need for labor-intensive wet lab evaluations. In response, we propose using reinforcement learning (RL) based on proximal-policy optimization (PPO) for biological sequence design. RL provides a flexible framework for optimizing generative sequence models to achieve specific criteria, such as diversity among the high-quality sequences discovered. We propose a model-based variant of PPO, DyNA-PPO, to improve sample efficiency, where the policy for a new round is trained offline using a simulator fit on functional measurements from prior rounds. To accommodate the growing number of observations across rounds, the simulator model is automatically selected at each round from a pool of diverse models of varying capacity. On the tasks of designing DNA transcription factor binding sites, designing antimicrobial proteins, and optimizing the energy of Ising models based on protein structure, we find that DyNA-PPO performs significantly better than existing methods in settings in which modeling is feasible, while still not performing worse in situations in which a reliable model cannot be learned.

Caql: Continuous Action Q-learning

Moonkyung Ryu · Yinlam Chow · Ross Anderson · Christian Tjandraatmadja · Craig Boutilier

Rating: [6,6]

OpenReview

Abstract

Reinforcement learning (RL) with value-based methods (e.g., Q-learning) has shown success in a variety of domains such as games and recommender systems (RSs). When the action space is finite, these algorithms implicitly find a policy by learning the optimal value function, which is often very efficient. However, one major challenge of extending Q-learning to tackle continuous-action RL problems is that obtaining the optimal Bellman backup requires solving a continuous action-maximization (max-Q) problem. While it is common to restrict the parameterization of the Q-function to be concave in actions to simplify the max-Q problem, such a restriction might lead to performance degradation. Alternatively, when the Q-function is parameterized with a generic feed-forward neural network (NN), the max-Q problem can be NP-hard. In this work, we propose the CAQL method, which minimizes the Bellman residual using Q-learning with one of several plug-and-play action optimizers. In particular, leveraging the strides of optimization theories in deep NN, we show that the max-Q problem can be solved optimally with mixed-integer programming (MIP) - when the Q-function has sufficient representation power, this MIP-based optimization induces better policies and is more robust than counterparts, e.g., CEM or GA, that approximate the max-Q solution. To speed up training of CAQL, we develop three techniques, namely (i) dynamic tolerance, (ii) dual filtering, and (iii) clustering. To speed up inference of CAQL, we introduce the action function that concurrently learns the optimal policy. To demonstrate the efficiency of CAQL, we compare it with state-of-the-art RL algorithms on benchmark continuous control problems that have different degrees of action constraints and show that CAQL significantly outperforms policy-based methods in heavily constrained environments.

V-mpo: On-policy Maximum A Posteriori Policy Optimization For Discrete And Continuous Control

H. Francis Song · Abbas Abdolmaleki · Jost Tobias Springenberg · Aidan Clark · Hubert Soyer · Jack W. Rae · Seb Noury · Arun Ahuja · Siqi Liu · Dhruva Tirumala · Nicolas Heess · Dan Belov · Martin Riedmiller · Matthew M. Botvinick

Rating: [6,6,6]

OpenReview

Abstract

Some of the most successful applications of deep reinforcement learning to challenging domains in discrete and continuous control have used policy gradient methods in the on-policy setting. However, policy gradients can suffer from large variance that may limit performance, and in practice require carefully tuned entropy regularization to prevent policy collapse. As an alternative to policy gradient algorithms, we introduce V-MPO, an on-policy adaptation of Maximum a Posteriori Policy Optimization (MPO) that performs policy iteration based on a learned state-value function. We show that V-MPO surpasses previously reported scores for both the Atari-57 and DMLab-30 benchmark suites in the multi-task setting, and does so reliably without importance weighting, entropy regularization, or population-based tuning of hyperparameters. On individual DMLab and Atari levels, the proposed algorithm can achieve scores that are substantially higher than has previously been reported. V-MPO is also applicable to problems with high-dimensional, continuous action spaces, which we demonstrate in the context of learning to control simulated humanoids with 22 degrees of freedom from full state observations and 56 degrees of freedom from pixel observations, as well as example OpenAI Gym tasks where V-MPO achieves substantially higher asymptotic scores than previously reported.

Thinking While Moving: Deep Reinforcement Learning With Concurrent Control

Ted Xiao · Eric Jang · Dmitry Kalashnikov · Sergey Levine · Julian Ibarz · Karol Hausman · Alexander Herzog

Rating: [6,6,6]

OpenReview

Abstract

We study reinforcement learning in settings where sampling an action from the policy must be done concurrently with the time evolution of the controlled system, such as when a robot must decide on the next action while still performing the previous action. Much like a person or an animal, the robot must think and move at the same time, deciding on its next action before the previous one has completed. In order to develop an algorithmic framework for such concurrent control problems, we start with a continuous-time formulation of the Bellman equations, and then discretize them in a way that is aware of system delays. We instantiate this new class of approximate dynamic programming methods via a simple architectural extension to existing value-based deep reinforcement learning algorithms. We evaluate our methods on simulated benchmark tasks and a large-scale robotic grasping task where the robot must "think while moving.''

Harnessing Structures For Value-based Planning And Reinforcement Learning

Yuzhe Yang · Guo Zhang · Zhi Xu · Dina Katabi

Rating: [6,8,8]

OpenReview

Abstract

Value-based methods constitute a fundamental methodology in planning and deep reinforcement learning (RL). In this paper, we propose to exploit the underlying structures of the state-action value function, i.e., Q function, for both planning and deep RL. In particular, if the underlying system dynamics lead to some global structures of the Q function, one should be capable of inferring the function better by leveraging such structures. Specifically, we investigate the low-rank structure, which widely exists for big data matrices. We verify empirically the existence of low-rank Q functions in the context of control and deep RL tasks (Atari games). As our key contribution, by leveraging Matrix Estimation (ME) techniques, we propose a general framework to exploit the underlying low-rank structure in Q functions, leading to a more efficient planning procedure for classical control, and additionally, a simple scheme that can be applied to any value-based RL techniques to consistently achieve better performance on ''low-rank'' tasks. Extensive experiments on control tasks and Atari games confirm the efficacy of our approach.
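The structure-exploiting step can be sketched very simply on a tabular Q-function; truncated SVD below stands in for the more general matrix estimation techniques referenced in the abstract, and the rank is a hypothetical hyperparameter.

```python
import numpy as np

def low_rank_q_estimate(q_matrix, rank=3):
    """Sketch: treat Q as an |S| x |A| matrix and replace it with a low-rank
    approximation, so that values for sparsely visited (s, a) pairs are filled
    in from the global structure rather than estimated independently."""
    u, s, vt = np.linalg.svd(q_matrix, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank, :]
```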

Explain Your Move: Understanding Agent Actions Using Focused Feature Saliency

Piyush Gupta · Nikaash Puri · Sukriti Verma · Sameer Singh · Dhruv Kayastha · Shripad Deshmukh · Balaji Krishnamurthy

Rating: [6,8,8]

OpenReview

Abstract

As deep reinforcement learning (RL) is applied to more tasks, there is a need to visualize and understand the behavior of learned agents. Saliency maps explain agent behavior by highlighting the features of the input state that are most relevant for the agent in taking an action. Existing perturbation-based approaches to compute saliency often highlight regions of the input that are not relevant to the action taken by the agent. Our approach generates more focused saliency maps by balancing two aspects (specificity and relevance) that capture different desiderata of saliency. The first captures the impact of perturbation on the relative expected reward of the action to be explained. The second downweights irrelevant features that alter the relative expected rewards of actions other than the action to be explained. We compare our approach with existing approaches on agents trained to play board games (Chess and Go) and Atari games (Breakout, Pong and Space Invaders). We show through illustrative examples (Chess, Atari, Go), human studies (Chess), and automated evaluation methods (Chess) that our approach generates saliency maps that are more interpretable for humans than existing approaches.
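
The sketch below is an editorial illustration of how the two desiderata might be combined for a single perturbed feature; the exact functional form in the paper differs, and the Q-values and action index here are placeholders.

```python
import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def saliency_score(q_original, q_perturbed, action):
    """Toy combination of specificity and relevance for one perturbed feature.

    `q_original` / `q_perturbed`: Q-values before/after perturbing the feature;
    `action`: the action being explained.  Specificity: how much the perturbation
    lowers the (softmax) weight of the explained action.  Relevance: downweight
    perturbations that mostly reshuffle the *other* actions' relative weights.
    """
    p, q = softmax(q_original), softmax(q_perturbed)
    specificity = max(p[action] - q[action], 0.0)

    # Distribution over the non-explained actions before and after perturbation.
    mask = np.arange(len(p)) != action
    p_rest = p[mask] / p[mask].sum()
    q_rest = q[mask] / q[mask].sum()
    kl_rest = np.sum(p_rest * np.log(p_rest / q_rest))

    relevance = 1.0 / (1.0 + kl_rest)
    return specificity * relevance

q_before = np.array([1.0, 0.2, 0.1])
q_after = np.array([0.3, 0.2, 0.1])   # perturbation mostly hurts action 0
print(saliency_score(q_before, q_after, action=0))
```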

Implementation Matters In Deep Rl: A Case Study On Ppo And Trpo

Logan Engstrom · Andrew Ilyas · Shibani Santurkar · Dimitris Tsipras · Firdaus Janoos · Larry Rudolph · Aleksander Madry

Rating: [8,8,8]

OpenReview

Abstract

We study the roots of algorithmic progress in deep policy gradient algorithms through a case study on two popular algorithms, Proximal Policy Optimization and Trust Region Policy Optimization. We investigate the consequences of "code-level optimizations:" algorithm augmentations found only in implementations or described as auxiliary details to the core algorithm. Seemingly of secondary importance, such optimizations have a major impact on agent behavior. Our results show that they (a) are responsible for most of PPO's gain in cumulative reward over TRPO, and (b) fundamentally change how RL methods function. These insights show the difficulty, and importance, of attributing performance gains in deep reinforcement learning.
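
Value-function clipping is one example of the kind of code-level optimization the paper examines. The snippet below is a minimal sketch of that trick as it commonly appears in PPO implementations, not the authors' code.

```python
import torch

def clipped_value_loss(values, old_values, returns, clip_eps=0.2):
    """Value-function clipping, a common "code-level optimization" in PPO code.

    The new value prediction is not allowed to move more than `clip_eps`
    away from the prediction made at data-collection time, mirroring the
    clipped policy objective.
    """
    clipped = old_values + (values - old_values).clamp(-clip_eps, clip_eps)
    loss_unclipped = (values - returns) ** 2
    loss_clipped = (clipped - returns) ** 2
    return 0.5 * torch.max(loss_unclipped, loss_clipped).mean()

values = torch.tensor([1.4, 0.2, -0.5])
old_values = torch.tensor([1.0, 0.0, -0.4])
returns = torch.tensor([1.2, 0.5, -1.0])
print(clipped_value_loss(values, old_values, returns))
```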

Keep Doing What Worked: Behavior Modelling Priors For Offline Reinforcement Learning

Noah Siegel · Jost Tobias Springenberg · Felix Berkenkamp · Abbas Abdolmaleki · Michael Neunert · Thomas Lampe · Roland Hafner · Nicolas Heess · Martin Riedmiller

Rating: [6,6,6]

OpenReview

Abstract

Off-policy reinforcement learning algorithms promise to be applicable in settings where only a fixed data-set (batch) of environment interactions is available and no new experience can be acquired. This property makes these algorithms appealing for real world problems such as robot control. In practice, however, standard off-policy algorithms fail in the batch setting for continuous control. In this paper, we propose a simple solution to this problem. It admits the use of data generated by arbitrary behavior policies and uses a learned prior -- the advantage-weighted behavior model (ABM) -- to bias the RL policy towards actions that have previously been executed and are likely to be successful on the new task. Our method can be seen as an extension of recent work on batch-RL that enables stable learning from conflicting data-sources. We find improvements on competitive baselines in a variety of RL tasks -- including standard continuous control benchmarks and multi-task learning for simulated and real-world robots.
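
A minimal sketch of the advantage-weighted behavior model (ABM) idea follows: clone only the logged actions whose estimated advantage is non-negative, then (not shown) use the resulting prior to regularize the RL policy. The tensors and the indicator weighting are illustrative; the paper also discusses exponential weightings and an MPO-style policy update.

```python
import torch

def abm_prior_loss(log_probs, advantages):
    """Sketch of fitting an advantage-weighted behavior model (ABM) prior.

    `log_probs`: log prior_model(a_t | s_t) for logged state-action pairs.
    `advantages`: advantage estimates of those logged actions under the
    current value function.  Actions judged to have non-negative advantage
    are cloned; the rest are ignored.
    """
    weights = (advantages >= 0).float()
    return -(weights * log_probs).sum() / weights.sum().clamp(min=1.0)

log_probs = torch.tensor([-0.2, -1.3, -0.7, -2.0])
advantages = torch.tensor([0.5, -0.1, 0.0, 1.2])
print(abm_prior_loss(log_probs, advantages))
```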

Imitation Learning Via Off-policy Distribution Matching

Ilya Kostrikov · Ofir Nachum · Jonathan Tompson

Rating: [6,6,6]

OpenReview

Abstract

When performing imitation learning from expert demonstrations, distribution matching is a popular approach, in which one alternates between estimating distribution ratios and then using these ratios as rewards in a standard reinforcement learning (RL) algorithm. Traditionally, estimation of the distribution ratio requires on-policy data, which has caused previous work to either be exorbitantly data-inefficient or alter the original objective in a manner that can drastically change its optimum. In this work, we show how the original distribution ratio estimation objective may be transformed in a principled manner to yield a completely off-policy objective. In addition to the data-efficiency that this provides, we are able to show that this objective also renders the use of a separate RL optimization unnecessary. Rather, an imitation policy may be learned directly from this objective without the use of explicit rewards. We call the resulting algorithm ValueDICE and evaluate it on a suite of popular imitation learning benchmarks, finding that it can achieve state-of-the-art sample efficiency and performance.

Learning Nearly Decomposable Value Functions Via Communication Minimization

Tonghan Wang* · Jianhao Wang* · Chongyi Zheng · Chongjie Zhang

Rating: [6,6,3]

OpenReview

Abstract

Reinforcement learning encounters major challenges in multi-agent settings, such as scalability and non-stationarity. Recently, value function factorization learning has emerged as a promising way to address these challenges in collaborative multi-agent systems. However, existing methods have focused on learning fully decentralized value functions, which are not efficient for tasks requiring communication. To address this limitation, this paper presents a novel framework for learning nearly decomposable value functions with communication, with which agents act on their own most of the time but occasionally send messages to other agents in order to coordinate effectively. This framework hybridizes value function factorization learning and communication learning by introducing two information-theoretic regularizers. These regularizers maximize mutual information between decentralized Q functions and communication messages while minimizing the entropy of messages between agents. We show how to optimize these regularizers in a way that is easily integrated with existing value function factorization methods such as QMIX. Finally, we demonstrate that, on the StarCraft unit micromanagement benchmark, our framework significantly outperforms baseline methods and makes it possible to cut off more than 80% of communication without sacrificing performance.

Adversarial Autoaugment

Xinyu Zhang · Qiang Wang · Jian Zhang · Zhao Zhong

Rating: [6,6,6]

OpenReview

Abstract

Data augmentation (DA) has been widely utilized to improve generalization in training deep neural networks. Recently, human-designed data augmentation has been gradually replaced by automatically learned augmentation policies. Through finding the best policy in a well-designed search space of data augmentation, AutoAugment (Cubuk et al., 2018) can significantly improve validation accuracy on image classification tasks. However, this approach is not computationally practical for large problems. In this paper, we develop an adversarial method to arrive at a computationally affordable solution called Adversarial AutoAugment, which simultaneously optimizes the target-related objective and the augmentation policy search loss. The augmentation policy network attempts to increase the training loss of a target network through generating adversarial augmentation policies, while the target network can learn more robust features from harder examples to improve generalization. In contrast to prior work, we reuse the computation in target network training for policy evaluation, and dispense with the retraining of the target network. Compared to AutoAugment, this leads to about a 12x reduction in computing cost and an 11x reduction in time overhead on ImageNet. We show experimental results of our approach on CIFAR-10/CIFAR-100 and ImageNet, and demonstrate significant performance improvements over the state of the art. On CIFAR-10, we achieve a top-1 test error of 1.36%, which is currently the best-performing single model. On ImageNet, we achieve a leading top-1 accuracy of 79.40% on ResNet-50 and 80.00% on ResNet-50-D without extra data.

Varibad: A Very Good Method For Bayes-adaptive Deep Rl Via Meta-learning

Luisa Zintgraf · Kyriacos Shiarlis · Maximilian Igl · Sebastian Schulze · Yarin Gal · Katja Hofmann · Shimon Whiteson

Rating: [8,6,8,1]

OpenReview

Abstract

Trading off exploration and exploitation in an unknown environment is key to maximising expected return during learning. A Bayes-optimal policy, which does so optimally, conditions its actions not only on the environment state but on the agent's uncertainty about the environment. Computing a Bayes-optimal policy is however intractable for all but the smallest tasks. In this paper, we introduce variational Bayes-Adaptive Deep RL (variBAD), a way to meta-learn to perform approximate inference in an unknown environment, and incorporate task uncertainty directly during action selection. In a grid-world domain, we illustrate how variBAD performs structured online exploration as a function of task uncertainty. We also evaluate variBAD on MuJoCo domains widely used in meta-RL and show that it achieves higher return during training than existing methods.

State Alignment-based Imitation Learning

Fangchen Liu · Zhan Ling · Tongzhou Mu · Hao Su

Rating: [6,8,3]

OpenReview

Abstract

Consider an imitation learning problem in which the imitator and the expert have different dynamics models. Most existing imitation learning methods fail in this setting because they focus on imitating actions. We propose a novel state alignment-based imitation learning method to train the imitator by following the state sequences in the expert demonstrations as closely as possible. The alignment of states comes from both local and global perspectives. We combine them into a reinforcement learning framework by a regularized policy update objective. We show the superiority of our method on standard imitation learning settings as well as the challenging settings in which the expert and the imitator have different dynamics models.

Amrl: Aggregated Memory For Reinforcement Learning

Jacob Beck · Kamil Ciosek · Sam Devlin · Sebastian Tschiatschek · Cheng Zhang · Katja Hofmann

Rating: [6,6,8]

OpenReview

Abstract

In many partially observable scenarios, Reinforcement Learning (RL) agents must rely on long-term memory in order to learn an optimal policy. We demonstrate that using techniques from NLP and supervised learning fails at RL tasks due to stochasticity from the environment and from exploration. Utilizing our insights on the limitations of traditional memory methods in RL, we propose AMRL, a class of models that can learn better policies with greater sample efficiency and are resilient to noisy inputs. Specifically, our models use a standard memory module to summarize short-term context, and then aggregate all prior states from the standard model without respect to order. We show that this provides advantages both in terms of gradient decay and signal-to-noise ratio over time. Evaluating in Minecraft and maze environments that test long-term memory, we find that our model improves average return by 19% over a baseline that has the same number of parameters and by 9% over a stronger baseline that has far more parameters.
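
A minimal sketch of the aggregation idea, assuming a PyTorch-style module: an LSTM provides order-sensitive short-term context while a running element-wise max over its outputs provides an order-invariant long-term summary. The paper studies several aggregators and a straight-through gradient modification not shown here.

```python
import torch
import torch.nn as nn

class MaxAggregatorMemory(nn.Module):
    """Sketch of an AMRL-style memory: an LSTM summarizes short-term context,
    while a running element-wise max over all past LSTM outputs provides an
    order-invariant long-term summary.
    """

    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)

    def forward(self, obs_seq):
        # obs_seq: (batch, time, input_dim)
        h_seq, _ = self.lstm(obs_seq)                 # short-term context
        max_seq = torch.cummax(h_seq, dim=1).values   # order-invariant aggregate
        return torch.cat([h_seq, max_seq], dim=-1)    # fed to the policy head

memory = MaxAggregatorMemory(input_dim=8, hidden_dim=16)
features = memory(torch.randn(4, 20, 8))
print(features.shape)  # torch.Size([4, 20, 32])
```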

Implementing Inductive Bias For Different Navigation Tasks Through Diverse Rnn Attractors

Tie Xu · Omri Barak

Rating: [3,6,6]

OpenReview

Abstract

Navigation is crucial for animal behavior and is assumed to require an internal representation of the external environment, termed a cognitive map. The precise form of this representation is often considered to be a metric representation of space. An internal representation, however, is judged by its contribution to performance on a given task, and may thus vary between different types of navigation tasks. Here we train a recurrent neural network that controls an agent performing several navigation tasks in a simple environment. To focus on internal representations, we split learning into a task-agnostic pre-training stage that modifies internal connectivity and a task-specific Q learning stage that controls the network's output. We show that pre-training shapes the attractor landscape of the networks, leading to either a continuous attractor, discrete attractors or a disordered state. These structures induce bias onto the Q-Learning phase, leading to a performance pattern across the tasks corresponding to metric and topological regularities. Our results show that, in recurrent networks, inductive bias takes the form of attractor landscapes -- which can be shaped by pre-training and analyzed using dynamical systems methods. Furthermore, we demonstrate that non-metric representations are useful for navigation tasks.

Toward Evaluating Robustness Of Deep Reinforcement Learning With Continuous Control

Tsui-wei Weng · Krishnamurthy (dj) Dvijotham* · Jonathan Uesato* · Kai Xiao* · Sven Gowal* · Robert Stanforth* · Pushmeet Kohli

Rating: [6,3,6]

OpenReview

Abstract

Deep reinforcement learning has achieved great success in many previously difficult reinforcement learning tasks, yet recent studies show that deep RL agents are also unavoidably susceptible to adversarial perturbations, similar to deep neural networks in classification tasks. Prior works mostly focus on model-free adversarial attacks and agents with discrete actions. In this work, we study the problem of continuous control agents in deep RL with adversarial attacks and propose the first two-step algorithm based on learned model dynamics. Extensive experiments on various MuJoCo domains (Cartpole, Fish, Walker, Humanoid) demonstrate that our proposed framework is much more effective and efficient than model-free attack baselines at degrading agent performance as well as driving agents to unsafe states.

Option Discovery Using Deep Skill Chaining

Akhil Bagaria · George Konidaris

Rating: [6,6,6]

OpenReview

Abstract

Autonomously discovering temporally extended actions, or skills, is a longstanding goal of hierarchical reinforcement learning. We propose a new algorithm that combines skill chaining with deep neural networks to autonomously discover skills in high-dimensional, continuous domains. The resulting algorithm, deep skill chaining, constructs skills with the property that executing one enables the agent to execute another. We demonstrate that deep skill chaining significantly outperforms both non-hierarchical agents and other state-of-the-art skill discovery techniques in challenging continuous control tasks.

Finding And Visualizing Weaknesses Of Deep Reinforcement Learning Agents

Christian Rupprecht · Cyril Ibrahim · Christopher J. Pal

Rating: [8,6,3]

OpenReview

Abstract

As deep reinforcement learning driven by visual perception becomes more widely used there is a growing need to better understand and probe the learned agents. Understanding the decision making process and its relationship to visual inputs can be very valuable to identify problems in learned behavior. However, this topic has been relatively under-explored in the research community. In this work we present a method for synthesizing visual inputs of interest for a trained agent. Such inputs or states could be situations in which specific actions are necessary. Further, critical states in which a very high or a very low reward can be achieved are often interesting to understand the situational awareness of the system as they can correspond to risky states. To this end, we learn a generative model over the state space of the environment and use its latent space to optimize a target function for the state of interest. In our experiments we show that this method can generate insights for a variety of environments and reinforcement learning methods. We explore results in the standard Atari benchmark games as well as in an autonomous driving simulator. Based on the efficiency with which we have been able to identify behavioural weaknesses with this technique, we believe this general approach could serve as an important tool for AI safety applications.

Learning Efficient Parameter Server Synchronization Policies For Distributed Sgd

Rong Zhu · Sheng Yang · Andreas Pfadler · Zhengping Qian · Jingren Zhou

Rating: [6,3,6]

OpenReview

Abstract

We apply a reinforcement learning (RL) based approach to learning optimal synchronization policies used for Parameter Server-based distributed training of machine learning models with Stochastic Gradient Descent (SGD). Utilizing a formal synchronization policy description in the PS-setting, we are able to derive a suitable and compact description of states and actions, allowing us to efficiently use the standard off-the-shelf deep Q-learning algorithm. As a result, we are able to learn synchronization policies which generalize to different cluster environments, different training datasets and small model variations and (most importantly) lead to considerable decreases in training time when compared to standard policies such as bulk synchronous parallel (BSP), asynchronous parallel (ASP), or stale synchronous parallel (SSP). To support our claims we present extensive numerical results obtained from experiments performed in simulated cluster environments. In our experiments, training time is reduced by 44% on average and learned policies generalize to multiple unseen circumstances.

Model-augmented Actor-critic: Backpropagating Through Paths

Ignasi Clavera · Yao Fu · Pieter Abbeel

Rating: [3,6,8]

OpenReview

Abstract

Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator to augment the data for policy optimization or value function learning. In this paper, we show how to make more effective use of the model by exploiting its differentiability. We construct a policy optimization algorithm that uses the pathwise derivative of the learned model and policy across future timesteps. Instabilities of learning across many timesteps are prevented by using a terminal value function, learning the policy in an actor-critic fashion. Furthermore, we present a derivation on the monotonic improvement of our objective in terms of the gradient error in the model and value function. We show that our approach (i) is consistently more sample efficient than existing state-of-the-art model-based algorithms, (ii) matches the asymptotic performance of model-free algorithms, and (iii) scales to long horizons, a regime where typically past model-based approaches have struggled.
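
A minimal sketch of the pathwise objective, with toy stand-ins for the learned model, reward model, and value function: rewards are summed along an imagined differentiable rollout, a terminal value closes the horizon, and gradients flow back to the policy parameters.

```python
import torch

def pathwise_policy_objective(model, reward_fn, policy, value_fn, s0, horizon, gamma=0.99):
    """Sketch of a model-augmented actor-critic objective (not the paper's code).

    `model(s, a)` -> next state and `reward_fn(s, a)` -> reward are assumed
    differentiable learned networks; `policy(s)` -> action (reparameterized);
    `value_fn(s)` -> terminal value.  Gradients flow through the whole path.
    """
    s, total = s0, 0.0
    for t in range(horizon):
        a = policy(s)
        total = total + (gamma ** t) * reward_fn(s, a)
        s = model(s, a)
    total = total + (gamma ** horizon) * value_fn(s)
    return total.mean()   # maximize w.r.t. policy parameters

# Toy differentiable stand-ins.
policy = torch.nn.Linear(3, 2)
model = lambda s, a: torch.tanh(s + a.sum(-1, keepdim=True))
reward_fn = lambda s, a: -(s ** 2).sum(-1) - 0.1 * (a ** 2).sum(-1)
value_fn = lambda s: -(s ** 2).sum(-1)

s0 = torch.randn(16, 3)
objective = pathwise_policy_objective(model, reward_fn, policy, value_fn, s0, horizon=5)
(-objective).backward()          # gradient ascent on the objective
print(policy.weight.grad.shape)
```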

Reinforced Active Learning For Image Segmentation

Arantxa Casanova · Pedro O. Pinheiro · Negar Rostamzadeh · Christopher J. Pal

Rating: [6,6]

OpenReview

Abstract

Learning-based approaches for semantic segmentation have two inherent challenges. First, acquiring pixel-wise labels is expensive and time-consuming. Second, realistic segmentation datasets are highly unbalanced: some categories are much more abundant than others, biasing the performance to the most represented ones. In this paper, we are interested in focusing human labelling effort on a small subset of a larger pool of data, minimizing this effort while maximizing performance of a segmentation model on a hold-out set. We present a new active learning strategy for semantic segmentation based on deep reinforcement learning (RL). An agent learns a policy to select a subset of small informative image regions -- as opposed to entire images -- to be labeled, from a pool of unlabeled data. The region selection decision is made based on predictions and uncertainties of the segmentation model being trained. Our method proposes a new modification of the deep Q-network (DQN) formulation for active learning, adapting it to the large-scale nature of semantic segmentation problems. We test the proof of concept in CamVid and provide results in the large-scale dataset Cityscapes. On Cityscapes, our deep RL region-based DQN approach requires roughly 30% less additional labeled data than our most competitive baseline to reach the same performance. Moreover, we find that our method asks for more labels of under-represented categories compared to the baselines, improving their performance and helping to mitigate class imbalance.

State-only Imitation With Transition Dynamics Mismatch

Tanmay Gangwani · Jian Peng

Rating: [6,6,6]

OpenReview

Abstract

Imitation Learning (IL) is a popular paradigm for training agents to achieve complicated goals by leveraging expert behavior, rather than dealing with the hardships of designing a correct reward function. With the environment modeled as a Markov Decision Process (MDP), most of the existing IL algorithms are contingent on the availability of expert demonstrations in the same MDP as the one in which a new imitator policy is to be learned. This is uncharacteristic of many real-life scenarios where discrepancies between the expert and the imitator MDPs are common, especially in the transition dynamics function. Furthermore, obtaining expert actions may be costly or infeasible, making the recent trend towards state-only IL (where expert demonstrations constitute only states or observations) ever so promising. Building on recent adversarial imitation approaches that are motivated by the idea of divergence minimization, we present a new state-only IL algorithm in this paper. It divides the overall optimization objective into two sub-problems by introducing an indirection step, and solves the sub-problems iteratively. We show that our algorithm is particularly effective when there is a transition dynamics mismatch between the expert and imitator MDPs, while the baseline IL methods suffer from performance degradation. To analyze this, we construct several interesting MDPs by modifying the configuration parameters for the MuJoCo locomotion tasks from OpenAI Gym.

Behaviour Suite For Reinforcement Learning

Ian Osband · Yotam Doron · Matteo Hessel · John Aslanides · Eren Sezener · Andre Saraiva · Katrina Mckinney · Tor Lattimore · Csaba Szepesvari · Satinder Singh · Benjamin Van Roy · Richard Sutton · David Silver · Hado Van Hasselt

Rating: [8,3,6]

OpenReview

Abstract

This paper introduces the Behaviour Suite for Reinforcement Learning, or bsuite for short. bsuite is a collection of carefully-designed experiments that investigate core capabilities of reinforcement learning (RL) agents with two objectives. First, to collect clear, informative and scalable problems that capture key issues in the design of general and efficient learning algorithms. Second, to study agent behaviour through their performance on these shared benchmarks. To complement this effort, we open-source an accompanying library, which automates evaluation and analysis of any agent on bsuite. This library facilitates reproducible and accessible research on the core issues in RL, and ultimately the design of superior learning algorithms. Our code is Python, and easy to use within existing projects. We include examples with OpenAI Baselines and Dopamine, as well as new reference implementations. Going forward, we hope to incorporate more excellent experiments from the research community, and commit to a periodic review of bsuite from a committee of prominent researchers.

The Gambler's Problem And Beyond

Baoxiang Wang · Shuai Li · Jiajin Li · Siu On Chan

Rating: [6,6,6]

OpenReview

Abstract

We analyze the Gambler's problem, a simple reinforcement learning problem where the gambler has the chance to double or lose their bets until the target is reached. This is an early example introduced in the reinforcement learning textbook by Sutton and Barto (2018), where they mention an interesting pattern of the optimal value function, with high-frequency components and repeating non-smooth points, but do not investigate it further. We provide the exact formula for the optimal value function for both the discrete and the continuous case. Simple as it might seem, the value function is pathological: fractal, self-similar, non-smooth on any interval, with zero derivative almost everywhere, and not expressible in terms of elementary functions. Sharing these properties with the Cantor function, it holds a complexity that has been uncharted thus far. With this analysis, our work could lend insights into improving value function approximation, Q-learning, and gradient-based algorithms in real applications and implementations.
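
The problem itself is easy to reproduce: the value-iteration sketch below (the classic formulation from Sutton and Barto, with an assumed head probability of 0.4) yields the non-smooth, self-similar optimal value function that the paper characterizes in closed form.

```python
import numpy as np

def gamblers_value_iteration(p_head=0.4, goal=100, tol=1e-12):
    """Value iteration for the classic Gambler's problem (Sutton & Barto).

    The gambler bets on coin flips with head probability `p_head`; reaching
    `goal` gives reward 1, going broke gives 0.  The optimal value function
    produced here exhibits the self-similar, non-smooth shape the paper
    characterizes exactly.
    """
    v = np.zeros(goal + 1)
    v[goal] = 1.0
    while True:
        delta = 0.0
        for s in range(1, goal):
            stakes = range(1, min(s, goal - s) + 1)
            best = max(p_head * v[s + b] + (1 - p_head) * v[s - b] for b in stakes)
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < tol:
            return v

v = gamblers_value_iteration()
print(v[25], v[50], v[75])
```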

Model Based Reinforcement Learning For Atari

Łukasz Kaiser · Mohammad Babaeizadeh · Piotr Miłos · Błażej Osiński · Roy H Campbell · Konrad Czechowski · Dumitru Erhan · Chelsea Finn · Piotr Kozakowski · Sergey Levine · Afroz Mohiuddin · Ryan Sepassi · George Tucker · Henryk Michalewski

Rating: [6,8,6]

OpenReview

Abstract

Model-free reinforcement learning (RL) can be used to learn effective policies for complex tasks, such as Atari games, even from image observations. However, this typically requires very large amounts of interaction -- substantially more, in fact, than a human would need to learn the same games. How can people learn so quickly? Part of the answer may be that people can learn how the game works and predict which actions will lead to desirable outcomes. In this paper, we explore how video prediction models can similarly enable agents to solve Atari games with fewer interactions than model-free methods. We describe Simulated Policy Learning (SimPLe), a complete model-based deep RL algorithm based on video prediction models and present a comparison of several model architectures, including a novel architecture that yields the best results in our setting. Our experiments evaluate SimPLe on a range of Atari games in the low-data regime of 100k interactions between the agent and the environment, which corresponds to two hours of real-time play. In most games SimPLe outperforms state-of-the-art model-free algorithms, in some games by over an order of magnitude.

Meta-q-learning

Rasool Fakoor · Pratik Chaudhari · Stefano Soatto · Alexander J. Smola

Rating: [8,8,6]

OpenReview

Abstract

This paper introduces Meta-Q-Learning (MQL), a new off-policy algorithm for meta-Reinforcement Learning (meta-RL). MQL builds upon three simple ideas. First, we show that Q-learning is competitive with state of the art meta-RL algorithms if given access to a context variable that is a representation of the past trajectory. Second, using a multi-task objective to maximize the average reward across the training tasks is an effective method to meta-train RL policies. Third, past data from the meta-training replay buffer can be recycled to adapt the policy on a new task using off-policy updates. MQL draws upon ideas in propensity estimation to do so and thereby amplifies the amount of available data for adaptation. Experiments on standard continuous-control benchmarks suggest that MQL compares favorably with state of the art meta-RL algorithms.

Structured Object-aware Physics Prediction For Video Modeling And Planning

Jannik Kossen · Karl Stelzner · Marcel Hussing · Claas Voelcker · Kristian Kersting

Rating: [6,6,6]

OpenReview

Abstract

When humans observe a physical system, they can easily locate components, understand their interactions, and anticipate future behavior, even in settings with complicated and previously unseen interactions. For computers, however, learning such models from videos in an unsupervised fashion is an unsolved research problem. In this paper, we present STOVE, a novel state-space model for videos, which explicitly reasons about objects and their positions, velocities, and interactions. It is constructed by combining an image model and a dynamics model in a compositional manner and improves on previous work by reusing the dynamics model for inference, accelerating and regularizing training. STOVE predicts videos with convincing physical behavior over hundreds of timesteps, outperforms previous unsupervised models, and even approaches the performance of supervised baselines. We further demonstrate the strength of our model as a simulator for sample efficient model-based control, in a task with heavily interacting objects.

Single Episode Transfer For Differing Environmental Dynamics In Reinforcement Learning

Jiachen Yang · Brenden Petersen · Hongyuan Zha · Daniel Faissol

Rating: [3,8,8]

OpenReview

Abstract

Transfer and adaptation to new unknown environmental dynamics is a key challenge for reinforcement learning (RL). An even greater challenge is performing near-optimally in a single attempt at test time, possibly without access to dense rewards, which is not addressed by current methods that require multiple experience rollouts for adaptation. To achieve single episode transfer in a family of environments with related dynamics, we propose a general algorithm that optimizes a probe and an inference model to rapidly estimate underlying latent variables of test dynamics, which are then immediately used as input to a universal control policy. This modular approach enables integration of state-of-the-art algorithms for variational inference or RL. Moreover, our approach does not require access to rewards at test time, allowing it to perform in settings where existing adaptive approaches cannot. In diverse experimental domains with a single episode test constraint, our method significantly outperforms existing adaptive approaches and shows favorable performance against baselines for robust transfer.

Learning Heuristics For Quantified Boolean Formulas Through Reinforcement Learning

Gil Lederman · Markus Rabe · Sanjit Seshia · Edward A. Lee

Rating: [6,8,3]

OpenReview

Abstract

We demonstrate how to learn efficient heuristics for automated reasoning algorithms for quantified Boolean formulas through deep reinforcement learning. We focus on a backtracking search algorithm, which can already solve formulas of impressive size - up to hundreds of thousands of variables. The main challenge is to find a representation of these formulas that lends itself to making predictions in a scalable way. For a family of challenging problems, we learned a heuristic that solves significantly more formulas compared to the existing handwritten heuristics.

Discriminative Particle Filter Reinforcement Learning For Complex Partial Observations

Xiao Ma · Peter Karkus · Nan Ye · David Hsu · Wee Sun Lee

Rating: [8,6,8]

OpenReview

Abstract

Deep reinforcement learning has succeeded in sophisticated games such as Atari, Go, etc. Real-world decision making, however, often requires reasoning with partial information extracted from complex visual observations. This paper presents Discriminative Particle Filter Reinforcement Learning (DPFRL), a new reinforcement learning framework for partial and complex observations. DPFRL encodes a differentiable particle filter with learned transition and observation models in a neural network, which allows for reasoning with partial observations over multiple time steps. While a standard particle filter relies on a generative observation model, DPFRL learns a discriminatively parameterized model that is trained directly for decision making. We show that the discriminative parameterization results in significantly improved performance, especially for tasks with complex visual observations, because it circumvents the difficulty of modelling observations explicitly. In most cases, DPFRL outperforms state-of-the-art POMDP RL models in Flickering Atari Games, an existing POMDP RL benchmark, and in Natural Flickering Atari Games, a new, more challenging POMDP RL benchmark that we introduce. We further show that DPFRL performs well for visual navigation with real-world data.

Variational Recurrent Models For Solving Partially Observable Control Tasks

Dongqi Han · Kenji Doya · Jun Tani

Rating: [6,6,8]

OpenReview

Abstract

In partially observable (PO) environments, deep reinforcement learning (RL) agents often suffer from unsatisfactory performance, since two problems need to be tackled together: how to extract information from the raw observations to solve the task, and how to improve the policy. In this study, we propose an RL algorithm for solving PO tasks. Our method comprises two parts: a variational recurrent model (VRM) for modeling the environment, and an RL controller that has access to both the environment and the VRM. The proposed algorithm was tested in two types of PO robotic control tasks, those in which either coordinates or velocities were not observable and those that require long-term memorization. Our experiments show that the proposed algorithm achieved better data efficiency and/or learned more optimal policies than alternative approaches in tasks in which unobserved states cannot be inferred from raw observations in a simple manner.

Disagreement-regularized Imitation Learning

Kiante Brantley · Wen Sun · Mikael Henaff

Rating: [6,8,8]

OpenReview

Abstract

We present a simple and effective algorithm designed to address the covariate shift problem in imitation learning. It operates by training an ensemble of policies on the expert demonstration data, and using the variance of their predictions as a cost which is minimized with RL together with a supervised behavioral cloning cost. Unlike adversarial imitation methods, it uses a fixed reward function which is easy to optimize. We prove a regret bound for the algorithm in the tabular setting which is linear in the time horizon multiplied by a coefficient which we show to be low for certain problems in which behavioral cloning fails. We evaluate our algorithm empirically across multiple pixel-based Atari environments and continuous control tasks, and show that it matches or significantly outperforms behavioral cloning and generative adversarial imitation learning.

Sample Efficient Policy Gradient Methods With Recursive Variance Reduction

Pan Xu · Felicia Gao · Quanquan Gu

Rating: [6,8,6]

OpenReview

Abstract

Improving the sample efficiency in reinforcement learning has been a long-standing research problem. In this work, we aim to reduce the sample complexity of existing policy gradient methods. We propose a novel policy gradient algorithm called SRVR-PG, which only requires O(1/ε^{3/2}) episodes (the O(·) notation hides constant factors) to find an ε-approximate stationary point of the nonconcave performance function J(θ), i.e., a point θ such that ‖∇J(θ)‖₂² ≤ ε. This sample complexity improves the existing O(1/ε^{5/3}) result for stochastic variance reduced policy gradient algorithms by a factor of O(1/ε^{1/6}). In addition, we also propose a variant of SRVR-PG with parameter exploration, which explores the initial policy parameter from a prior probability distribution. We conduct numerical experiments on classic control problems in reinforcement learning to validate the performance of our proposed algorithms.
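
The recursive estimator is easiest to see on a generic finite-sum problem. The sketch below implements a SARAH/SPIDER-style recursive variance-reduced gradient loop on least squares as an analogue; SRVR-PG itself applies this recursion to policy gradients, where additional importance weights are needed because trajectories come from changing policies.

```python
import numpy as np

def sarah_style_gradients(grad_fn, theta0, data, epochs=5, inner_steps=20,
                          batch_size=8, lr=0.1, rng=None):
    """Recursive (SARAH/SPIDER-style) variance-reduced gradient descent,
    shown on a finite-sum problem as an analogue of SRVR-PG's estimator.

    `grad_fn(theta, batch)` returns the average gradient on `batch`.
    """
    rng = rng or np.random.default_rng(0)
    theta = theta0.copy()
    for _ in range(epochs):
        v = grad_fn(theta, data)                 # full (reference) gradient
        theta_prev = theta.copy()
        theta = theta - lr * v
        for _ in range(inner_steps):
            batch = data[rng.choice(len(data), size=batch_size, replace=False)]
            # Recursive update: correct the previous estimate with the
            # gradient difference measured on a small fresh batch.
            v = grad_fn(theta, batch) - grad_fn(theta_prev, batch) + v
            theta_prev = theta.copy()
            theta = theta - lr * v
    return theta

# Toy least-squares objective: f(theta) = mean_i (x_i . theta - y_i)^2 / 2
rng = np.random.default_rng(1)
x = rng.normal(size=(256, 5))
true_theta = rng.normal(size=5)
y = x @ true_theta
data = np.hstack([x, y[:, None]])

def grad_fn(theta, batch):
    xb, yb = batch[:, :-1], batch[:, -1]
    return xb.T @ (xb @ theta - yb) / len(batch)

print(np.linalg.norm(sarah_style_gradients(grad_fn, np.zeros(5), data) - true_theta))
```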

Doubly Robust Bias Reduction In Infinite Horizon Off-policy Estimation

Ziyang Tang* · Yihao Feng* · Lihong Li · Dengyong Zhou · Qiang Liu

Rating: [6,8,8]

OpenReview

Abstract

Infinite horizon off-policy policy evaluation is a highly challenging task due to the excessively large variance of typical importance sampling (IS) estimators. Recently, Liu et al. (2018) proposed an approach that significantly reduces the variance of infinite-horizon off-policy evaluation by estimating the stationary density ratio, but at the cost of introducing potentially high risks due to the error in density ratio estimation. In this paper, we develop a bias-reduced augmentation of their method, which can take advantage of a learned value function to obtain higher accuracy. Our method is doubly robust in that the bias vanishes when either the density ratio or value function estimation is perfect. In general, when either of them is accurate, the bias can also be reduced. Both theoretical and empirical results show that our method yields significant advantages over previous methods.

Exploring Model-based Planning With Policy Networks

Tingwu Wang · Jimmy Ba

Rating: [6,8,6]

OpenReview

Abstract

Model-based reinforcement learning (MBRL) with model-predictive control or online planning has shown great potential for locomotion control tasks in both sample efficiency and asymptotic performance. Despite these successes, the existing planning methods search from candidate sequences randomly generated in the action space, which is inefficient in complex high-dimensional environments. In this paper, we propose a novel MBRL algorithm, model-based policy planning (POPLIN), that combines policy networks with online planning. More specifically, we formulate action planning at each time-step as an optimization problem using neural networks. We experiment with both optimization w.r.t. the action sequences initialized from the policy network and online optimization directly w.r.t. the parameters of the policy network. We show that POPLIN obtains state-of-the-art performance in the MuJoCo benchmarking environments, being about 3x more sample efficient than the state-of-the-art algorithms, such as PETS, TD3 and SAC. To explain the effectiveness of our algorithm, we show that the optimization surface in parameter space is smoother than in action space. Furthermore, we find that the distilled policy network can be effectively applied without the expensive model predictive control during test time for some environments such as Cheetah. Code is released.
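
A rough sketch of the action-space variant of this idea (in the spirit of POPLIN-A, not the paper's implementation): CEM refines Gaussian perturbations around the action sequence proposed by a policy network inside a learned model, and only the first refined action is executed.

```python
import numpy as np

def poplin_a_style_plan(model, reward_fn, policy, state, horizon=10,
                        iters=5, pop_size=200, elites=20, noise=0.1, rng=None):
    """Sketch of action-space planning seeded by a policy network.

    CEM refines Gaussian noise that is *added to the policy's proposed
    actions*, rather than searching the raw action space from scratch.
    `model(s, a)` and `reward_fn(s, a)` are learned dynamics and reward
    models; `policy(s)` proposes an action.
    """
    rng = rng or np.random.default_rng(0)
    act_dim = policy(state).shape[0]
    mean = np.zeros((horizon, act_dim))
    std = np.full((horizon, act_dim), noise)

    for _ in range(iters):
        deltas = rng.normal(size=(pop_size, horizon, act_dim)) * std + mean
        returns = np.zeros(pop_size)
        for i in range(pop_size):
            s = state
            for t in range(horizon):
                a = policy(s) + deltas[i, t]
                returns[i] += reward_fn(s, a)
                s = model(s, a)
        elite_idx = np.argsort(returns)[-elites:]
        mean = deltas[elite_idx].mean(axis=0)
        std = deltas[elite_idx].std(axis=0) + 1e-6
    return policy(state) + mean[0]       # execute only the first action

# Toy problem: drive a 2-D point towards the origin.
model = lambda s, a: s + 0.1 * a
reward_fn = lambda s, a: -np.sum((s + 0.1 * a) ** 2)
policy = lambda s: -0.5 * s              # a crude stand-in policy network
print(poplin_a_style_plan(model, reward_fn, policy, np.array([1.0, -2.0])))
```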

Reinforcement Learning Based Graph-to-sequence Model For Natural Question Generation

Yu Chen · Lingfei Wu · Mohammed J. Zaki

Rating: [6,6,8]

OpenReview

Abstract

Natural question generation (QG) aims to generate questions from a passage and an answer. Previous works on QG either (i) ignore the rich structure information hidden in text, (ii) solely rely on cross-entropy loss that leads to issues like exposure bias and inconsistency between train/test measurement, or (iii) fail to fully exploit the answer information. To address these limitations, in this paper, we propose a reinforcement learning (RL) based graph-to-sequence (Graph2Seq) model for QG. Our model consists of a Graph2Seq generator with a novel Bidirectional Gated Graph Neural Network based encoder to embed the passage, and a hybrid evaluator with a mixed objective function that combines both the cross-entropy and RL loss to ensure the generation of syntactically and semantically valid text. We also introduce an effective Deep Alignment Network for incorporating the answer information into the passage at both the word and contextual level. Our model is end-to-end trainable and achieves new state-of-the-art scores, outperforming existing methods by a significant margin on the standard SQuAD benchmark for QG.

Seed Rl: Scalable And Efficient Deep-rl With Accelerated Central Inference

Lasse Espeholt · Raphaël Marinier · Piotr Stanczyk · Ke Wang · Marcin Michalski

Rating: [8,6,8]

OpenReview

Abstract

We present a modern scalable reinforcement learning agent called SEED (Scalable, Efficient Deep-RL). By effectively utilizing modern accelerators, we show that it is not only possible to train on millions of frames per second but also to lower the cost of experiments compared to current methods. We achieve this with a simple architecture that features centralized inference and an optimized communication layer. SEED adopts two state-of-the-art distributed algorithms, IMPALA/V-trace (policy gradients) and R2D2 (Q-learning), and is evaluated on Atari-57, DeepMind Lab and Google Research Football. We improve the state of the art on Football and are able to reach state of the art on Atari-57 twice as fast in wall-time. For the scenarios we consider, a 40% to 80% cost reduction for running experiments is achieved. The implementation along with experiments is open-sourced so results can be reproduced and novel ideas tried out.

The Variational Bandwidth Bottleneck: Stochastic Evaluation On An Information Budget

Anirudh Goyal · Yoshua Bengio · Matthew Botvinick · Sergey Levine

Rating: [6,6]

OpenReview

Abstract

In many applications, it is desirable to extract only the relevant information from complex input data, which involves making a decision about which input features are relevant. The information bottleneck method formalizes this as an information-theoretic optimization problem by maintaining an optimal tradeoff between compression (throwing away irrelevant input information), and predicting the target. In many problem settings, including the reinforcement learning problems we consider in this work, we might prefer to compress only part of the input. This is typically the case when we have a standard conditioning input, such as a state observation, and a ``privileged'' input, which might correspond to the goal of a task, the output of a costly planning algorithm, or communication with another agent. In such cases, we might prefer to compress the privileged input, either to achieve better generalization (e.g., with respect to goals) or to minimize access to costly information (e.g., in the case of communication). Practical implementations of the information bottleneck based on variational inference require access to the privileged input in order to compute the bottleneck variable, so although they perform compression, this compression operation itself needs unrestricted, lossless access. In this work, we propose the variational bandwidth bottleneck, which decides for each example on the estimated value of the privileged information before seeing it, i.e., only based on the standard input, and then accordingly chooses stochastically, whether to access the privileged input or not. We formulate a tractable approximation to this framework and demonstrate in a series of reinforcement learning experiments that it can improve generalization and reduce access to computationally costly information.

Dynamical Distance Learning For Semi-supervised And Unsupervised Skill Discovery

Kristian Hartikainen · Xinyang Geng · Tuomas Haarnoja · Sergey Levine

Rating: [6,6,6]

OpenReview

Abstract

Reinforcement learning requires manual specification of a reward function to learn a task. While in principle this reward function only needs to specify the task goal, in practice reinforcement learning can be very time-consuming or even infeasible unless the reward function is shaped so as to provide a smooth gradient towards a successful outcome. This shaping is difficult to specify by hand, particularly when the task is learned from raw observations, such as images. In this paper, we study how we can automatically learn dynamical distances: a measure of the expected number of time steps to reach a given goal state from any other state. These dynamical distances can be used to provide well-shaped reward functions for reaching new goals, making it possible to learn complex tasks efficiently. We show that dynamical distances can be used in a semi-supervised regime, where unsupervised interaction with the environment is used to learn the dynamical distances, while a small amount of preference supervision is used to determine the task goal, without any manually engineered reward function or goal examples. We evaluate our method both on a real-world robot and in simulation. We show that our method can learn to turn a valve with a real-world 9-DoF hand, using raw image observations and just ten preference labels, without any other supervision. Videos of the learned skills can be found on the project website: https://sites.google.com/view/skills-via-distance-learning.
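
A minimal sketch of the supervised part of the method, with hypothetical network and trajectory shapes: regress the number of environment steps between pairs of states from the same trajectory, then use the negative predicted distance to a chosen goal as a shaped reward (the preference-based goal selection is not shown).

```python
import torch
import torch.nn as nn

class DynamicalDistance(nn.Module):
    """Sketch of a dynamical distance network (not the paper's architecture).

    The network regresses the number of environment steps between two states
    that occurred on the same trajectory; -distance(s, goal) then serves as a
    shaped reward for reaching `goal`.
    """

    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, g):
        return self.net(torch.cat([s, g], dim=-1)).squeeze(-1)

def distance_training_step(model, optimizer, trajectory):
    """One supervised step on a single trajectory (tensor of shape (T, state_dim))."""
    T = trajectory.shape[0]
    i = torch.randint(0, T - 1, (128,))
    j = i + 1 + torch.randint(0, T, (128,)) % (T - 1 - i)   # sample j strictly after i
    target = (j - i).float()
    pred = model(trajectory[i], trajectory[j])
    loss = ((pred - target) ** 2).mean()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

model = DynamicalDistance(state_dim=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
traj = torch.cumsum(torch.randn(200, 4), dim=0)          # a fake trajectory
print(distance_training_step(model, opt, traj))
print(model(traj[:1], traj[-1:]))   # predicted distance to the final state; -distance can serve as reward
```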

Impact: Importance Weighted Asynchronous Architectures With Clipped Target Networks

Michael Luo · Jiahao Yao · Richard Liaw · Eric Liang · Ion Stoica

Rating: [6,3,6,6]

OpenReview

Abstract

The practical usage of reinforcement learning agents is often bottlenecked by the duration of training time. To accelerate training, practitioners often turn to distributed reinforcement learning architectures to parallelize and accelerate the training process. However, modern methods for scalable reinforcement learning (RL) often tradeoff between the throughput of samples that an RL agent can learn from (sample throughput) and the quality of learning from each sample (sample efficiency). In these scalable RL architectures, as one increases sample throughput (i.e. increasing parallelization in IMPALA (Espeholt et al., 2018)), sample efficiency drops significantly. To address this, we propose a new distributed reinforcement learning algorithm, IMPACT. IMPACT extends PPO with three changes: a target network for stabilizing the surrogate objective, a circular buffer, and truncated importance sampling. In discrete action-space environments, we show that IMPACT attains higher reward and, simultaneously, achieves up to a 30% decrease in training wall-time compared to IMPALA. For continuous control environments, IMPACT trains faster than existing scalable agents while preserving the sample efficiency of synchronous PPO.

Ride: Rewarding Impact-driven Exploration For Procedurally-generated Environments

Roberta Raileanu · Tim Rocktäschel

Rating: [6,6,8]

OpenReview

Abstract

Exploration in sparse reward environments remains one of the key challenges of model-free reinforcement learning (RL). Instead of solely relying on extrinsic rewards provided by the environment, many state-of-the-art methods use intrinsic rewards to encourage the agent to explore the environment. However, we show that existing methods fall short in procedurally-generated environments where an agent is unlikely to ever visit the same state more than once. We propose a novel type of intrinsic exploration bonus which rewards the agent for actions that change the agent's learned state representation. We evaluate our method on multiple challenging procedurally-generated tasks in MiniGrid, as well as on tasks used in prior curiosity-driven exploration work. Our experiments demonstrate that our approach is more sample efficient than existing exploration methods, particularly for procedurally-generated MiniGrid environments. Furthermore, we analyze the learned behavior as well as the intrinsic reward received by our agent. In contrast to previous approaches, our intrinsic reward does not diminish during the course of training and it rewards the agent substantially more for interacting with objects that it can control.
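
A minimal sketch of the intrinsic bonus, with placeholder networks: the reward is the change in a learned state embedding between consecutive observations, discounted by an episodic count of the (coarsely discretized) next state; training of the embedding with forward and inverse dynamics losses is not shown.

```python
import torch
from collections import Counter

def ride_intrinsic_reward(embed, obs, next_obs, episodic_counts, cell_fn):
    """Sketch of a RIDE-style intrinsic reward (not the authors' code).

    `embed`: learned state-embedding network (trained elsewhere with forward
    and inverse dynamics losses).  The bonus is the L2 change in embedding
    between consecutive observations, scaled down by how often the episodic
    "cell" of the next observation has already been visited this episode.
    """
    with torch.no_grad():
        impact = torch.norm(embed(next_obs) - embed(obs), p=2, dim=-1)
    cell = cell_fn(next_obs)
    episodic_counts[cell] += 1
    return impact / episodic_counts[cell] ** 0.5

embed = torch.nn.Linear(6, 16)                       # stand-in embedding network
cell_fn = lambda o: tuple(o.round().int().tolist())  # coarse discretization of obs
counts = Counter()

obs, next_obs = torch.randn(6), torch.randn(6)
print(ride_intrinsic_reward(embed, obs, next_obs, counts, cell_fn))
```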

Geometric Insights Into The Convergence Of Nonlinear Td Learning

David Brandfonbrener · Joan Bruna

Rating: [8,3,6,8]

OpenReview

Abstract

While there are convergence guarantees for temporal difference (TD) learning when using linear function approximators, the situation for nonlinear models is far less understood, and divergent examples are known. Here we take a first step towards extending theoretical convergence guarantees to TD learning with nonlinear function approximation. More precisely, we consider the expected learning dynamics of the TD(0) algorithm for value estimation. As the step-size converges to zero, these dynamics are defined by a nonlinear ODE which depends on the geometry of the space of function approximators, the structure of the underlying Markov chain, and their interaction. We find a set of function approximators that includes ReLU networks and has geometry amenable to TD learning regardless of environment, so that the solution performs about as well as linear TD in the worst case. Then, we show how environments that are more reversible induce dynamics that are better for TD learning and prove global convergence to the true value function for well-conditioned function approximators. Finally, we generalize a divergent counterexample to a family of divergent problems to demonstrate how the interaction between approximator and environment can go wrong and to motivate the assumptions needed to prove convergence.

The Ingredients Of Real World Robotic Reinforcement Learning

Henry Zhu · Justin Yu · Abhishek Gupta · Dhruv Shah · Kristian Hartikainen · Avi Singh · Vikash Kumar · Sergey Levine

Rating: [6,8,8]

OpenReview

Abstract

The success of reinforcement learning in the real world has been limited to instrumented laboratory scenarios, often requiring arduous human supervision to enable continuous learning. In this work, we discuss the required elements of a robotic system that can continually and autonomously improve with data collected in the real world, and propose a particular instantiation of such a system. Subsequently, we investigate a number of challenges of learning without instrumentation -- including the lack of episodic resets, state estimation, and hand-engineered rewards -- and propose simple, scalable solutions to these challenges. We demonstrate the efficacy of our proposed system on dexterous robotic manipulation tasks in simulation and the real world, and also provide an insightful analysis and ablation study of the challenges associated with this learning paradigm.

Watch The Unobserved: A Simple Approach To Parallelizing Monte Carlo Tree Search

Anji Liu · Jianshu Chen · Mingze Yu · Yu Zhai · Xuewen Zhou · Ji Liu

Rating: [8,6,8]

OpenReview

Abstract

Monte Carlo Tree Search (MCTS) algorithms have achieved great success on many challenging benchmarks (e.g., Computer Go). However, they generally require a large number of rollouts, making their applications to planning costly. Furthermore, it is also extremely challenging to parallelize MCTS due to its inherent sequential nature: each rollout heavily relies on the statistics (e.g., node visitation counts) estimated from previous simulations to achieve an effective exploration-exploitation tradeoff. In spite of these difficulties, we develop an algorithm, P-UCT, to effectively parallelize MCTS, which achieves linear speedup and exhibits negligible performance loss with an increasing number of workers. The key idea in P-UCT is a set of statistics that we introduce to track the number of on-going yet incomplete simulation queries (named as unobserved samples). These statistics are used to modify the UCT tree policy in the selection steps in a principled manner to retain effective exploration-exploitation tradeoff when we parallelize the most time-consuming expansion and simulation steps. Experimental results on a proprietary benchmark and the public Atari Game benchmark demonstrate the near-optimal linear speedup and the superior performance of P-UCT when compared to existing parallelization approaches.
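
The statistical modification is small enough to sketch directly: on-going but incomplete simulations are added to the visit counts used by the UCT exploration term, so new workers are steered away from nodes already being simulated. The exact form in the paper may differ; this is only an illustration.

```python
import math

def wu_uct_score(total_value, completed_visits, ongoing_visits,
                 parent_completed, parent_ongoing, c=1.41):
    """Sketch of a "watch the unobserved" selection score.

    On-going but incomplete simulation queries (`ongoing_visits`) are added
    to the visit counts used in the exploration term; the value estimate
    still uses only completed rollouts.
    """
    n = completed_visits + ongoing_visits
    n_parent = parent_completed + parent_ongoing
    exploitation = total_value / max(completed_visits, 1)
    exploration = c * math.sqrt(math.log(max(n_parent, 1)) / max(n, 1))
    return exploitation + exploration

# A node with many ongoing simulations becomes less attractive to new workers.
print(wu_uct_score(10.0, 20, 0, 100, 0))
print(wu_uct_score(10.0, 20, 10, 100, 10))
```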

Meta-learning Acquisition Functions For Transfer Learning In Bayesian Optimization

Michael Volpp · Lukas Froehlich · Kirsten Fischer · Andreas Doerr · Stefan Falkner · Frank Hutter · Christian Daniel

Rating: [8,6,8]

OpenReview

Abstract

Transferring knowledge across tasks to improve data-efficiency is one of the open key challenges in the area of global optimization algorithms. Readily available algorithms are typically designed to be universal optimizers and, thus, often suboptimal for specific tasks. We propose a novel transfer learning method to obtain customized optimizers within the well-established framework of Bayesian optimization, allowing our algorithm to utilize the proven generalization capabilities of Gaussian processes. Using reinforcement learning to meta-train an acquisition function (AF) on a set of related tasks, the proposed method learns to extract implicit structural information and to exploit it for improved data-efficiency. We present experiments on a sim-to-real transfer task as well as on several simulated functions and two hyperparameter search problems. The results show that our algorithm (1) automatically identifies structural properties of objective functions from available source tasks or simulations, (2) performs favourably in settings with both scarce and abundant source data, and (3) falls back to the performance level of general AFs if no structure is present.

Learning Expensive Coordination: An Event-based Deep Rl Approach

Zhenyu Shi* · Runsheng Yu* · Xinrun Wang* · Rundong Wang · Youzhi Zhang · Hanjiang Lai · Bo An

Rating: [6,8,6]

OpenReview

Abstract

Existing works in deep Multi-Agent Reinforcement Learning (MARL) mainly focus on coordinating cooperative agents to complete certain tasks jointly. However, in many cases of the real world, agents are self-interested such as employees in a company and clubs in a league. Therefore, the leader, i.e., the manager of the company or the league, needs to provide bonuses to followers for efficient coordination, which we call expensive coordination. The main difficulties of expensive coordination are that i) the leader has to consider the long-term effect and predict the followers' behaviors when assigning bonuses and ii) the complex interactions between followers make the training process hard to converge, especially when the leader's policy changes with time. In this work, we address this problem through an event-based deep RL approach. Our main contributions are threefold. (1) We model the leader's decision-making process as a semi-Markov Decision Process and propose a novel multi-agent event-based policy gradient to learn the leader's long-term policy. (2) We exploit the leader-follower consistency scheme to design a follower-aware module and a follower-specific attention module to predict the followers' behaviors and respond accurately to them. (3) We propose an action abstraction-based policy gradient algorithm to reduce the followers' decision space and thus accelerate the training process of followers. Experiments in resource collections, navigation, and the predator-prey game reveal that our approach outperforms the state-of-the-art methods dramatically.

Exploration In Reinforcement Learning With Deep Covering Options

Yuu Jinnai · Jee Won Park · Marlos C. Machado · George Konidaris

Rating: [6,6,6]

OpenReview

Abstract

While many option discovery methods have been proposed to accelerate exploration in reinforcement learning, they are often heuristic. Recently, covering options was proposed to discover a set of options that provably reduce the upper bound of the environment's cover time, a measure of the difficulty of exploration. Covering options are computed using the eigenvectors of the graph Laplacian, but they are constrained to tabular tasks and are not applicable to tasks with large or continuous state-spaces. We introduce deep covering options, an online method that extends covering options to large state spaces, automatically discovering task-agnostic options that encourage exploration. We evaluate our method in several challenging sparse-reward domains and we show that our approach identifies less explored regions of the state-space and successfully generates options to visit these regions, substantially improving both the exploration and the total accumulated reward.

Svqn: Sequential Variational Soft Q-learning Networks

Shiyu Huang · Hang Su · Jun Zhu · Ting Chen

Rating: [3,8]

OpenReview

Abstract

Partially Observable Markov Decision Processes (POMDPs) are popular and flexible models for real-world decision-making applications that demand the information from past observations to make optimal decisions. Standard reinforcement learning algorithms for solving Markov Decision Processes (MDP) tasks are not applicable, as they cannot infer the unobserved states. In this paper, we propose a novel algorithm for POMDPs, named sequential variational soft Q-learning networks (SVQNs), which formalizes the inference of hidden states and maximum entropy reinforcement learning (MERL) under a unified graphical model and optimizes the two modules jointly. We further design a deep recurrent neural network to reduce the computational complexity of the algorithm. Experimental results show that SVQNs can utilize past information to help decision making for efficient inference, and outperforms other baselines on several challenging tasks. Our ablation study shows that SVQNs have the generalization ability over time and are robust to the disturbance of the observation.

Evolutionary Population Curriculum For Scaling Multi-agent Reinforcement Learning

Qian Long* · Zihan Zhou* · Abhinav Gupta · Fei Fang · Yi Wu† · Xiaolong Wang†

Rating: [6,8,6]

OpenReview

Abstract

In multi-agent games, the complexity of the environment can grow exponentially as the number of agents increases, so it is particularly challenging to learn good policies when the agent population is large. In this paper, we introduce Evolutionary Population Curriculum (EPC), a curriculum learning paradigm that scales up Multi-Agent Reinforcement Learning (MARL) by progressively increasing the population of training agents in a stage-wise manner. Furthermore, EPC uses an evolutionary approach to fix an objective misalignment issue throughout the curriculum: agents successfully trained in an early stage with a small population are not necessarily the best candidates for adapting to later stages with scaled populations. Concretely, EPC maintains multiple sets of agents in each stage, performs mix-and-match and fine-tuning over these sets and promotes the sets of agents with the best adaptability to the next stage. We implement EPC on a popular MARL algorithm, MADDPG, and empirically show that our approach consistently outperforms baselines by a large margin as the number of agents grows exponentially.

Episodic Reinforcement Learning With Associative Memory

Guangxiang Zhu* · Zichuan Lin* · Guangwen Yang · Chongjie Zhang

Rating: [6,3,6]

OpenReview

Abstract

Sample efficiency has been one of the major challenges for deep reinforcement learning. Non-parametric episodic control has been proposed to speed up parametric reinforcement learning by rapidly latching onto previously successful policies. However, previous work on episodic reinforcement learning neglects the relationships between states and stores experiences only as unrelated items. To improve the sample efficiency of reinforcement learning, we propose a novel framework, called Episodic Reinforcement Learning with Associative Memory (ERLAM), which associates related experience trajectories to enable reasoning about effective strategies. We build a graph on top of states in memory based on state transitions and develop an efficient reverse-trajectory propagation strategy to allow rapid value propagation through the graph. We use the non-parametric associative memory as early guidance for a parametric reinforcement learning model. Results on Atari games show that our framework has significantly higher sample efficiency and outperforms state-of-the-art episodic reinforcement learning models.
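
Here is a small sketch (my own simplification, not the authors' code) of what reverse-trajectory value propagation through an episodic memory can look like; the dictionary-based memory and the toy episodes are illustrative assumptions.

```python
# A simplified sketch of reverse-trajectory value propagation through an
# episodic memory, in the spirit of ERLAM's associative memory. Variable
# names and the dict-based "graph" are illustrative assumptions.
import numpy as np

GAMMA = 0.99
memory = {}   # state (hashable) -> best discounted return found so far

def propagate(trajectory):
    """trajectory: list of (state, reward) pairs from one episode."""
    ret = 0.0
    # Walk the episode backwards so each state can bootstrap from its successor.
    for state, reward in reversed(trajectory):
        ret = reward + GAMMA * ret
        # If this state was reached before via a better continuation, keep
        # the higher value (non-parametric max across stored trajectories).
        ret = max(ret, memory.get(state, -np.inf))
        memory[state] = ret

# Example: two toy episodes over integer states; the second episode's high
# return propagates back into states first seen in the first episode.
propagate([(0, 0.0), (1, 0.0), (2, 1.0)])
propagate([(0, 0.0), (3, 0.0), (2, 1.0), (4, 5.0)])
print(memory)  # memory values can then act as guidance for a parametric Q-network
```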

Cm3: Cooperative Multi-goal Multi-stage Multi-agent Reinforcement Learning

Jiachen Yang · Alireza Nakhaei · David Isele · Kikuo Fujimura · Hongyuan Zha

Rating: [6,6,6]

OpenReview

Abstract

A variety of cooperative multi-agent control problems require agents to achieve individual goals while contributing to collective success. This multi-goal multi-agent setting poses difficulties for recent algorithms, which primarily target settings with a single global reward, due to two new challenges: efficient exploration for learning both individual goal attainment and cooperation for others' success, and credit-assignment for interactions between actions and goals of different agents. To address both challenges, we restructure the problem into a novel two-stage curriculum, in which single-agent goal attainment is learned prior to learning multi-agent cooperation, and we derive a new multi-goal multi-agent policy gradient with a credit function for localized credit assignment. We use a function augmentation scheme to bridge value and policy functions across the curriculum. The complete architecture, called CM3, learns significantly faster than direct adaptations of existing algorithms on three challenging multi-goal multi-agent problems: cooperative navigation in difficult formations, negotiating multi-vehicle lane changes in the SUMO traffic simulator, and strategic cooperation in a Checkers environment.

Making Sense Of Reinforcement Learning And Probabilistic Inference

Brendan O'Donoghue · Ian Osband · Catalin Ionescu

Rating: [6,6,8]

OpenReview

Abstract

Reinforcement learning (RL) combines a control problem with statistical estimation: the system dynamics are not known to the agent, but can be learned through experience. A recent line of research casts 'RL as inference' and suggests a particular framework to generalize the RL problem as probabilistic inference. Our paper surfaces key shortcomings in that approach, and clarifies the sense in which RL can be coherently cast as an inference problem. In particular, an RL agent must consider the effects of its actions upon future rewards and observations: the exploration-exploitation tradeoff. In all but the most simple settings, the resulting inference is computationally intractable so that practical RL algorithms must resort to approximation. We show that the popular 'RL as inference' approximation can perform poorly in even the simplest settings. Despite this, we demonstrate that with a small modification the RL as inference framework can provably perform well, and we connect the resulting algorithm with Thompson sampling and the recently proposed K-learning algorithm.

Gendice: Generalized Offline Estimation Of Stationary Values

Ruiyi Zhang* · Bo Dai* · Lihong Li · Dale Schuurmans

Rating: [8,8,8]

OpenReview

Abstract

An important problem that arises in reinforcement learning and Monte Carlo methods is estimating quantities defined by the stationary distribution of a Markov chain. In many real-world applications, access to the underlying transition operator is limited to a fixed set of data that has already been collected, without additional interaction with the environment being available. We show that consistent estimation remains possible in this scenario, and that effective estimation can still be achieved in important applications. Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions, derived from fundamental properties of the stationary distribution, and exploiting constraint reformulations based on variational divergence minimization. The resulting algorithm, GenDICE, is straightforward and effective. We prove the consistency of the method under general conditions, provide a detailed error analysis, and demonstrate strong empirical performance on benchmark tasks, including off-line PageRank and off-policy policy evaluation.

Learning To Coordinate Manipulation Skills Via Skill Behavior Diversification

Youngwoon Lee · Jingyun Yang · Joseph J. Lim

Rating: [6,6,6]

OpenReview

Abstract

When mastering a complex manipulation task, humans often decompose the task into sub-skills of their body parts, practice the sub-skills independently, and then execute the sub-skills together. Similarly, a robot with multiple end-effectors can perform a complex task by coordinating sub-skills of each end-effector. To realize temporal and behavioral coordination of skills, we propose a hierarchical framework that first individually trains sub-skills of each end-effector with skill behavior diversification, and learns to coordinate end-effectors using diverse behaviors of the skills. We demonstrate that our proposed framework is able to efficiently learn sub-skills with diverse behaviors and coordinate them to solve challenging collaborative control tasks such as picking up a long bar, placing a block inside a container while pushing the container with two robot arms, and pushing a box with two ant agents.

Reinforced Genetic Algorithm Learning For Optimizing Computation Graphs

Aditya Paliwal · Felix Gimeno · Vinod Nair · Yujia Li · Miles Lubin · Pushmeet Kohli · Oriol Vinyals

Rating: [8,6,6]

OpenReview

Abstract

We present a deep reinforcement learning approach to minimizing the execution cost of neural network computation graphs in an optimizing compiler. Unlike earlier learning-based works that require training the optimizer on the same graph to be optimized, we propose a learning approach that trains an optimizer offline and then generalizes to previously unseen graphs without further training. This allows our approach to produce high-quality execution decisions on real-world TensorFlow graphs in seconds instead of hours. We consider two optimization tasks for computation graphs: minimizing running time and peak memory usage. In comparison to an extensive set of baselines, our approach achieves significant improvements over classical and other learning-based methods on these two tasks.

Composing Task-agnostic Policies With Deep Reinforcement Learning

Ahmed H. Qureshi · Jacob J. Johnson · Yuzhe Qin · Taylor Henderson · Byron Boots · Michael C. Yip

Rating: [6,6,6]

OpenReview

Abstract

The composition of elementary behaviors to solve challenging transfer learning problems is one of the key elements in building intelligent machines. To date, there has been plenty of work on learning task-specific policies or skills, but almost no focus on composing the necessary, task-agnostic skills to find solutions to new problems. In this paper, we propose a novel deep reinforcement learning-based skill transfer and composition method that composes the agent's primitive policies to solve unseen tasks. We evaluate our method in difficult cases where training a policy through standard reinforcement learning (RL), or even hierarchical RL, is either not feasible or exhibits high sample complexity. We show that our method not only transfers skills to new problem settings but also solves challenging environments requiring both task planning and motion control with high data efficiency.

Frequency-based Search-control In Dyna

Yangchen Pan · Jincheng Mei · Amir-massoud Farahmand · Martha White

Rating: [6,6,6]

OpenReview

Abstract

Model-based reinforcement learning has been empirically demonstrated to be a successful strategy for improving sample efficiency. In particular, the Dyna architecture, which elegantly integrates learning and planning, provides great flexibility in how a model is used. One of the most important components of Dyna is search-control, the process of generating states or state-action pairs from which we query the model to acquire simulated experiences; search-control is critical for improving learning efficiency. In this work, we propose a simple and novel search-control strategy: search the high-frequency regions of the value function. Our main intuition is built on the Shannon sampling theorem from signal processing, which indicates that a high-frequency signal requires more samples to reconstruct, and we empirically show that a high-frequency function is more difficult to approximate. This suggests a search-control strategy: use states in high-frequency regions of the value function to query the model and acquire more samples. We develop a simple strategy that locally measures the frequency of a function by its gradient norm, and provide theoretical justification for this approach. We then apply our strategy to search-control in Dyna, and conduct experiments to show its properties and effectiveness on benchmark domains.
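
As a rough illustration of the gradient-norm heuristic described above (my own toy sketch, not the paper's implementation), the snippet below prefers search-control states where the value function's local gradient norm is large; the hand-coded value function and sampling scheme are assumptions.

```python
# A minimal sketch of the gradient-norm heuristic described in the abstract:
# prefer search-control states where |dV/ds| is large (locally "high
# frequency"). The value function and sampling scheme here are illustrative
# assumptions, not the paper's implementation.
import numpy as np

def value(s):
    # Toy value function on [0, 1]: smooth on the left, oscillatory on the right.
    return np.where(s < 0.5, s, s + 0.1 * np.sin(40.0 * s))

def grad_norm(s, eps=1e-4):
    # Finite-difference estimate of the local gradient magnitude.
    return np.abs(value(s + eps) - value(s - eps)) / (2.0 * eps)

# Sample candidate states, then keep the ones with the largest gradient norm
# as search-control states for Dyna-style planning updates.
rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 1.0, size=1000)
scores = grad_norm(candidates)
search_control = candidates[np.argsort(scores)[-50:]]
print("fraction of search-control states in the oscillatory region:",
      float(np.mean(search_control > 0.5)))
```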

Causal Discovery With Reinforcement Learning

Shengyu Zhu · Ignavier Ng · Zhitang Chen

Rating: [8,8,8]

OpenReview

Abstract

Discovering causal structure among a set of variables is a fundamental problem in many empirical sciences. Traditional score-based causal discovery methods rely on various local heuristics to search for a Directed Acyclic Graph (DAG) according to a predefined score function. While these methods, e.g., greedy equivalence search, may have attractive results with infinite samples and certain model assumptions, they are less satisfactory in practice due to finite data and possible violation of assumptions. Motivated by recent advances in neural combinatorial optimization, we propose to use Reinforcement Learning (RL) to search for the DAG with the best score. Our encoder-decoder model takes observable data as input and generates graph adjacency matrices that are used to compute rewards. The reward incorporates both the predefined score function and two penalty terms for enforcing acyclicity. In contrast with typical RL applications where the goal is to learn a policy, we use RL as a search strategy, and our final output is the graph, among all graphs generated during training, that achieves the best reward. We conduct experiments on both synthetic and real datasets, and show that the proposed approach not only has an improved search ability but also allows for a flexible score function under the acyclicity constraint.
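
For context on the acyclicity penalties mentioned above, here is a sketch of one common differentiable acyclicity measure, the NOTEARS-style h(A) = tr(exp(A ∘ A)) − d, where ∘ is the elementwise product; whether the paper's two penalty terms take exactly this form is not claimed here, and the toy graphs are illustrative.

```python
# A small sketch of a differentiable acyclicity measure of the kind such
# reward penalties can build on: h(A) = tr(exp(A * A)) - d, which is zero
# iff the weighted adjacency matrix A encodes a DAG. The toy graphs below
# are illustrative assumptions.
import numpy as np
from scipy.linalg import expm

def acyclicity(adj):
    d = adj.shape[0]
    return float(np.trace(expm(adj * adj)) - d)

dag = np.array([[0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [0.0, 0.0, 0.0]])          # 0 -> 1 -> 2, no cycles
cyclic = dag.copy()
cyclic[2, 0] = 1.0                          # adds the cycle 0 -> 1 -> 2 -> 0

print(acyclicity(dag))     # ~0.0 for a DAG
print(acyclicity(cyclic))  # > 0, grows with the strength of the cycle
```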

Q-learning With Ucb Exploration Is Sample Efficient For Infinite-horizon Mdp

Yuanhao Wang · Kefan Dong · Xiaoyu Chen · Liwei Wang

Rating: [6,6,6,6]

OpenReview

Abstract

A fundamental question in reinforcement learning is whether model-free algorithms are sample efficient. Recently, Jin et al. (2018) proposed a Q-learning algorithm with a UCB exploration policy and proved that it has a nearly optimal regret bound for finite-horizon episodic MDPs. In this paper, we adapt Q-learning with a UCB-exploration bonus to infinite-horizon MDPs with discounted rewards \emph{without} accessing a generative model. We show that the \textit{sample complexity of exploration} of our algorithm is bounded by $\tilde{O}\big(\frac{SA}{\epsilon^2(1-\gamma)^7}\big)$. This improves the previously best known result of $\tilde{O}\big(\frac{SA}{\epsilon^4(1-\gamma)^8}\big)$ in this setting, achieved by delayed Q-learning (Strehl et al., 2006), and matches the lower bound in terms of $\epsilon$ as well as $S$ and $A$ up to logarithmic factors.
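
A schematic, tabular sketch of Q-learning with a count-based optimism bonus is shown below; the bonus constant, learning-rate schedule, and toy chain MDP are my own illustrative choices and do not reproduce the paper's exact algorithm or analysis.

```python
# A schematic sketch of Q-learning with a count-based optimism bonus that
# shrinks as a state-action pair is visited more often. The bonus constant,
# learning-rate schedule, and toy MDP are illustrative assumptions; the
# paper's exact bonus and analysis are not reproduced here.
import numpy as np

S, A, GAMMA, C = 5, 2, 0.9, 1.0
rng = np.random.default_rng(0)
Q = np.ones((S, A)) / (1.0 - GAMMA)          # optimistic initialization
N = np.zeros((S, A))                         # visit counts

def step(s, a):
    # Toy chain MDP: action 1 moves right (reward 1 at the end), action 0 resets.
    if a == 1:
        s2 = min(s + 1, S - 1)
        return s2, float(s2 == S - 1)
    return 0, 0.0

s = 0
for t in range(20000):
    bonus = C / np.sqrt(N[s] + 1.0)          # UCB-style exploration bonus
    a = int(np.argmax(Q[s] + bonus))
    s2, r = step(s, a)
    N[s, a] += 1
    alpha = 1.0 / np.sqrt(N[s, a])           # decaying learning rate
    Q[s, a] += alpha * (r + GAMMA * Q[s2].max() - Q[s, a])
    s = s2

print(np.round(Q, 2))
```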

Decentralized Distributed Ppo: Mastering Pointgoal Navigation

Erik Wijmans · Abhishek Kadian · Ari Morcos · Stefan Lee · Irfan Essa · Devi Parikh · Manolis Savva · Dhruv Batra

Rating: [3,8,8]

OpenReview

Abstract

We present Decentralized Distributed Proximal Policy Optimization (DD-PPO), a method for distributed reinforcement learning in resource-intensive simulated environments. DD-PPO is distributed (uses multiple machines), decentralized (lacks a centralized server), and synchronous (no computation is ever "stale"), making it conceptually simple and easy to implement. In our experiments on training virtual robots to navigate in Habitat-Sim, DD-PPO exhibits near-linear scaling -- achieving a speedup of 107x on 128 GPUs over a serial implementation. We leverage this scaling to train an agent for 2.5 billion steps of experience (the equivalent of 80 years of human experience) -- over 6 months of GPU-time training in under 3 days of wall-clock time with 64 GPUs. This massive-scale training not only sets the state of the art on the Habitat Autonomous Navigation Challenge 2019, but essentially "solves" the task -- near-perfect autonomous navigation in an unseen environment without access to a map, directly from an RGB-D camera and a GPS+Compass sensor. Fortuitously, error vs. computation exhibits a power-law-like distribution; thus, 90% of peak performance is obtained relatively early (at 100 million steps) and relatively cheaply (under 1 day with 8 GPUs). Finally, we show that the scene understanding and navigation policies learned can be transferred to other navigation tasks -- the analog of "ImageNet pre-training + task-specific fine-tuning" for embodied AI. Our model outperforms ImageNet pre-trained CNNs on these transfer tasks and can serve as a universal resource (all models + code will be publicly available).
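
The snippet below is a single-process numpy simulation (my own toy illustration, not the authors' system) of the synchronous, decentralized gradient-averaging pattern DD-PPO relies on: each worker computes a local gradient, the gradients are averaged with an allreduce rather than sent to a parameter server, and every worker applies the same update.

```python
# A single-process simulation of synchronous, decentralized gradient averaging:
# every worker computes a local gradient, the gradients are averaged (the role
# an allreduce plays in the real system), and all workers apply the identical
# update. The quadratic "loss" and worker count are illustrative assumptions.
import numpy as np

N_WORKERS, LR = 4, 0.1
params = np.zeros(3)                       # every worker holds an identical copy

def local_gradient(worker_id, theta):
    # Stand-in for a PPO gradient computed from that worker's own rollouts.
    target = np.full(3, float(worker_id))
    return theta - target

for step in range(100):
    grads = [local_gradient(w, params) for w in range(N_WORKERS)]
    avg_grad = np.mean(grads, axis=0)      # "allreduce": average across workers
    params -= LR * avg_grad                # identical update applied everywhere

print(params)  # converges toward the mean of the workers' targets, [1.5, 1.5, 1.5]
```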

Logic And The 2-simplicial Transformer

James Clift · Dmitry Doryn · Daniel Murfet · James Wallbridge

Rating: [8,3,3]

OpenReview

Abstract

We introduce the 2-simplicial Transformer, an extension of the Transformer which includes a form of higher-dimensional attention generalising the dot-product attention, and uses this attention to update entity representations with tensor products of value vectors. We show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning.

Never Give Up: Learning Directed Exploration Strategies

Adrià Puigdomènech Badia · Pablo Sprechmann · Alex Vitvitskyi · Daniel Guo · Bilal Piot · Steven Kapturowski · Olivier Tieleman · Martin Arjovsky · Alexander Pritzel · Andrew Bolt · Charles Blundell

Rating: [6,6,8]

OpenReview

Abstract

We propose a reinforcement learning agent to solve hard exploration games by learning a range of directed exploratory policies. We construct an episodic memory-based intrinsic reward using k-nearest neighbors over the agent's recent experience to train the directed exploratory policies, thereby encouraging the agent to repeatedly revisit all states in its environment. A self-supervised inverse dynamics model is used to train the embeddings of the nearest-neighbour lookup, biasing the novelty signal towards what the agent can control. We employ the framework of Universal Value Function Approximators to simultaneously learn many directed exploration policies with the same neural network, with different trade-offs between exploration and exploitation. By using the same neural network for different degrees of exploration/exploitation, transfer is demonstrated from predominantly exploratory policies to effective exploitative policies. The proposed method can be incorporated into modern distributed RL agents that collect large amounts of experience from many actors running in parallel on separate environment instances. Our method doubles the performance of the base agent in all hard-exploration games in the Atari-57 suite while maintaining a very high score across the remaining games, obtaining a median human-normalised score of 1344.0%. Notably, the proposed method is the first algorithm to achieve non-zero rewards (with a mean score of 8,400) in the game of Pitfall! without using demonstrations or hand-crafted features.
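
Below is a simplified sketch of a k-nearest-neighbour episodic novelty bonus in the spirit of the abstract; the kernel, constants, and the use of raw embeddings in place of learned inverse-dynamics features are my own assumptions rather than the paper's exact reward.

```python
# A simplified sketch of a k-nearest-neighbour episodic novelty bonus: the
# reward is high when the current embedding is far from everything visited
# during the episode. The kernel, constants, and use of raw embeddings are
# illustrative assumptions.
import numpy as np

K, EPS = 5, 1e-3
episodic_memory = []   # embeddings visited during the current episode

def intrinsic_reward(embedding):
    episodic_memory.append(embedding)
    if len(episodic_memory) <= K:
        return 1.0                                    # too few neighbours yet; default bonus
    dists = np.linalg.norm(np.stack(episodic_memory[:-1]) - embedding, axis=1)
    knn = np.sort(dists)[:K]                          # k nearest neighbours
    kernel = EPS / (knn ** 2 / (knn.mean() ** 2 + 1e-8) + EPS)
    return 1.0 / np.sqrt(kernel.sum() + 1e-8)         # small similarity => big bonus

rng = np.random.default_rng(0)
for _ in range(50):
    intrinsic_reward(rng.normal(size=8))              # fill the episodic memory
print(intrinsic_reward(rng.normal(size=8)))           # novel embedding: larger bonus
print(intrinsic_reward(episodic_memory[0].copy()))    # revisited embedding: smaller bonus
```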

Black-box Off-policy Estimation For Infinite-horizon Reinforcement Learning

Ali Mousavi · Lihong Li · Qiang Liu · Denny Zhou

Rating: [6,6,6]

OpenReview

Abstract

Off-policy estimation for long-horizon problems is important in many real-life applications such as healthcare and robotics, where high-fidelity simulators may not be available and on-policy evaluation is expensive or impossible. Recently, \citet{liu18breaking} proposed an approach that avoids the curse of horizon suffered by typical importance-sampling-based methods. While showing promising results, this approach is limited in practice as it requires data being collected by a known behavior policy. In this work, we propose a novel approach that eliminates such limitations. In particular, we formulate the problem as solving for the fixed point of a "backward flow" operator and show that the fixed point solution gives the desired importance ratios of stationary distributions between the target and behavior policies. We analyze its asymptotic consistency and finite-sample generalization. Experiments on benchmarks verify the effectiveness of our proposed approach.

Action Semantics Network: Considering the Effects of Actions in Multiagent Systems

Weixun Wang · Tianpei Yang · Yong Liu · Jianye Hao · Xiaotian Hao · Yujing Hu · Yingfeng Chen · Changjie Fan · Yang Gao

Rating: [6,6,6]

OpenReview

Abstract

In multiagent systems (MASs), each agent makes individual decisions but all of them contribute globally to the system's evolution. Learning in MASs is difficult since each agent's action selection must take place in the presence of other co-learning agents. Moreover, environmental stochasticity and uncertainty increase exponentially with the number of agents. Previous works borrow various multiagent coordination mechanisms into deep learning architectures to facilitate multiagent coordination. However, none of them explicitly considers the action semantics between agents, i.e., that different actions have different influences on other agents. In this paper, we propose a novel network architecture, named Action Semantics Network (ASN), that explicitly represents such action semantics between agents. ASN characterizes different actions' influence on other agents using neural networks based on the action semantics between them. ASN can be easily combined with existing deep reinforcement learning (DRL) algorithms to boost their performance. Experimental results on StarCraft II micromanagement and Neural MMO show that ASN significantly improves the performance of state-of-the-art DRL approaches compared with several alternative network architectures.

Maxmin Q-learning: Controlling The Estimation Bias Of Q-learning

Qingfeng Lan · Yangchen Pan · Alona Fyshe · Martha White

Rating: [8,6,3]

OpenReview

Abstract

Q-learning suffers from overestimation bias, because it approximates the maximum action value using the maximum estimated action value. Algorithms have been proposed to reduce overestimation bias, but we lack an understanding of how bias interacts with performance, and the extent to which existing algorithms mitigate bias. In this paper, we 1) highlight that the effect of overestimation bias on learning efficiency is environment-dependent; 2) propose a generalization of Q-learning, called \emph{Maxmin Q-learning}, which provides a parameter to flexibly control bias; 3) show theoretically that there exists a parameter choice for Maxmin Q-learning that leads to unbiased estimation with a lower approximation variance than Q-learning; and 4) prove the convergence of our algorithm in the tabular case, as well as convergence of several previous Q-learning variants, using a novel Generalized Q-learning framework. We empirically verify that our algorithm better controls estimation bias in toy environments, and that it achieves superior performance on several benchmark problems.
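
The core update is easy to sketch in the tabular case: keep N independent Q estimates, bootstrap from their elementwise minimum, and update one randomly chosen estimate per step. The toy noisy-reward problem and hyperparameters below are illustrative assumptions.

```python
# A minimal tabular sketch of the Maxmin update described in the abstract:
# keep N independent Q estimates, bootstrap from the elementwise minimum of
# the estimates, and update one randomly chosen estimate per step. The toy
# single-state problem and hyperparameters are illustrative assumptions.
import numpy as np

N_EST, S, A = 4, 1, 2
GAMMA, ALPHA, EPSILON = 0.9, 0.1, 0.1
rng = np.random.default_rng(0)
Q = np.zeros((N_EST, S, A))

def env_step(a):
    # Single-state problem with noisy rewards: action 0 is better on average,
    # but action 1 has high-variance rewards that vanilla Q-learning tends to overrate.
    return 0.5 if a == 0 else rng.normal(0.0, 2.0)

s = 0
for t in range(20000):
    q_min = Q.min(axis=0)                             # min over the N estimates
    a = rng.integers(A) if rng.random() < EPSILON else int(np.argmax(q_min[s]))
    r = env_step(a)
    i = rng.integers(N_EST)                           # update one estimate only
    target = r + GAMMA * q_min[s].max()               # maxmin bootstrap target
    Q[i, s, a] += ALPHA * (target - Q[i, s, a])

print(np.round(Q.mean(axis=0), 3))                    # action 0 should come out ahead
```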

Robust Reinforcement Learning For Continuous Control With Model Misspecification

Daniel J. Mankowitz · Nir Levine · Rae Jeong · Abbas Abdolmaleki · Jost Tobias Springenberg · Yuanyuan Shi · Jackie Kay · Todd Hester · Timothy Mann · Martin Riedmiller

Rating: [6,6,8]

OpenReview

Abstract

We provide a framework for incorporating robustness -- to perturbations in the transition dynamics, which we refer to as model misspecification -- into continuous control Reinforcement Learning (RL) algorithms. We specifically focus on incorporating robustness into a state-of-the-art continuous control RL algorithm called Maximum a-posteriori Policy Optimization (MPO). We achieve this by learning a policy that optimizes a worst-case, entropy-regularized, expected return objective, and derive a corresponding robust entropy-regularized Bellman contraction operator. In addition, we introduce a less conservative, soft-robust, entropy-regularized objective with a corresponding Bellman operator. We show that both robust and soft-robust policies outperform their non-robust counterparts in nine Mujoco domains with environment perturbations. In addition, we show improved robust performance on a challenging, simulated, dexterous robotic hand. Finally, we present multiple investigative experiments that provide a deeper insight into the robustness framework, including an adaptation to another continuous control RL algorithm. Performance videos can be found online at https://sites.google.com/view/robust-rl.

A Closer Look At Deep Policy Gradients

Andrew Ilyas · Logan Engstrom · Shibani Santurkar · Dimitris Tsipras · Firdaus Janoos · Larry Rudolph · Aleksander Madry

Rating: [8,6,8]

OpenReview

Abstract

We study how the behavior of deep policy gradient algorithms reflects the conceptual framework motivating their development. To this end, we propose a fine-grained analysis of state-of-the-art methods based on key elements of this framework: gradient estimation, value prediction, and optimization landscapes. Our results show that the behavior of deep policy gradient algorithms often deviates from what their motivating framework would predict: surrogate rewards do not match the true reward landscape, learned value estimators fail to fit the true value function, and gradient estimates poorly correlate with the "true" gradient. The mismatch between predicted and empirical behavior we uncover highlights our poor understanding of current methods, and indicates the need to move beyond current benchmark-centric evaluation methods.

Dream To Control: Learning Behaviors By Latent Imagination

Danijar Hafner · Timothy Lillicrap · Jimmy Ba · Mohammad Norouzi

Rating: [8,6,6,8]

OpenReview

Abstract

To select effective actions in complex environments, intelligent agents need to generalize from past experience. World models can represent knowledge about the environment to facilitate such generalization. While learning world models from high-dimensional sensory inputs is becoming feasible through deep learning, there are many potential ways for deriving behaviors from them. We present Dreamer, a reinforcement learning agent that solves long-horizon tasks purely by latent imagination. We efficiently learn behaviors by backpropagating analytic gradients of learned state values through trajectories imagined in the compact state space of a learned world model. On 20 challenging visual control tasks, Dreamer exceeds existing approaches in data-efficiency, computation time, and final performance.

Synthesizing Programmatic Policies That Inductively Generalize

Jeevana Priya Inala · Osbert Bastani · Zenna Tavares · Armando Solar-Lezama

Rating: [6,8,6]

OpenReview

Abstract

Deep reinforcement learning has successfully solved a number of challenging control tasks. However, learned policies typically have difficulty generalizing to novel environments. We propose an algorithm for learning programmatic state machine policies that can capture repeating behaviors. By doing so, they have the ability to generalize to instances requiring an arbitrary number of repetitions, a property we call inductive generalization. However, state machine policies are hard to learn since they consist of a combination of continuous and discrete structure. We propose a learning framework called adaptive teaching, which learns a state machine policy by imitating a teacher; in contrast to traditional imitation learning, our teacher adaptively updates itself based on the structure of the student. We show how our algorithm can be used to learn policies that inductively generalize to novel environments, whereas traditional neural network policies fail to do so.

Fast Task Inference With Variational Intrinsic Successor Features

Steven Hansen · Will Dabney · Andre Barreto · David Warde-Farley · Tom Van De Wiele · Volodymyr Mnih

Rating: [8,6,8]

OpenReview

Abstract

It has been established that diverse behaviors spanning the controllable subspace of a Markov decision process can be trained by rewarding a policy for being distinguishable from other policies. However, one limitation of this formulation is the difficulty of generalizing beyond the finite set of behaviors being explicitly learned, as may be needed in subsequent tasks. Successor features provide an appealing solution to this generalization problem, but require defining the reward function as linear in some grounded feature space. In this paper, we show that these two techniques can be combined, and that each method solves the other's primary limitation. To do so we introduce Variational Intrinsic Successor FeatuRes (VISR), a novel algorithm which learns controllable features that can be leveraged to provide enhanced generalization and fast task inference through the successor features framework. We empirically validate VISR on the full Atari suite, in a novel setup wherein the rewards are only exposed briefly after a long unsupervised phase. VISR achieves human-level performance on 12 games and beats all baselines; we believe it represents a step towards agents that rapidly learn from limited feedback.

Playing The Lottery With Rewards And Multiple Languages: Lottery Tickets In Rl And Nlp

Haonan Yu · Sergey Edunov · Yuandong Tian · Ari S. Morcos

Rating: [3,3,6]

OpenReview

Abstract

The lottery ticket hypothesis proposes that over-parameterization of deep neural networks (DNNs) aids training by increasing the probability of a “lucky” sub-network initialization being present rather than by helping the optimization process (Frankle & Carbin, 2019). Intriguingly, this phenomenon suggests that initialization strategies for DNNs can be improved substantially, but the lottery ticket hypothesis has only previously been tested in the context of supervised learning for natural image tasks. Here, we evaluate whether “winning ticket” initializations exist in two different domains: natural language processing (NLP) and reinforcement learning (RL). For NLP, we examined both recurrent LSTM models and large-scale Transformer models (Vaswani et al., 2017). For RL, we analyzed a number of discrete-action space tasks, including both classic control and pixel control. Consistent with work in supervised image classification, we confirm that winning ticket initializations generally outperform parameter-matched random initializations, even at extreme pruning rates for both NLP and RL. Notably, we are able to find winning ticket initializations for Transformers which enable models one-third the size to achieve nearly equivalent performance. Together, these results suggest that the lottery ticket hypothesis is not restricted to supervised learning of natural images, but rather represents a broader phenomenon in DNNs.

Adaptive Correlated Monte Carlo For Contextual Categorical Sequence Generation

Xinjie Fan · Yizhe Zhang · Zhendong Wang · Mingyuan Zhou

Rating: [6,6,8]

OpenReview

Abstract

Sequence generation models are commonly refined with reinforcement learning over user-defined metrics. However, high gradient variance hinders the practical use of this method. To stabilize this method for contextual generation of categorical sequences, we estimate the gradient by evaluating a set of correlated Monte Carlo rollouts. Due to the correlation, the number of unique rollouts is random and adaptive to model uncertainty; those rollouts naturally become baselines for each other, and hence are combined to effectively reduce gradient variance. We also demonstrate the use of correlated MC rollouts for binary-tree softmax models which reduce the high generation cost in large vocabulary scenarios, by decomposing each categorical action into a sequence of binary actions. We evaluate our methods on both neural program synthesis and image captioning. The proposed methods yield lower gradient variance and consistent improvement over related baselines.

Dynamics-aware Embeddings

William Whitney · Rajat Agarwal · Kyunghyun Cho · Abhinav Gupta

Rating: [3,8,6,8]

OpenReview

Abstract

In this paper we consider self-supervised representation learning to improve sample efficiency in reinforcement learning (RL). We propose a forward prediction objective for simultaneously learning embeddings of states and actions. These embeddings capture the structure of the environment's dynamics, enabling efficient policy learning. We demonstrate that our action embeddings alone improve the sample efficiency and peak performance of model-free RL on control from low-dimensional states. By combining state and action embeddings, we achieve efficient learning of high-quality policies on goal-conditioned continuous control from pixel observations in only 1-2 million environment steps.
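
A minimal sketch of a forward-prediction objective for jointly learning state and action embeddings is shown below; the deterministic (non-variational) loss, network sizes, and random toy transitions are my own assumptions, not the paper's architecture.

```python
# A minimal sketch of a forward-prediction objective for jointly learning
# state and action embeddings: the two embeddings must together carry enough
# information to predict the next state. The network sizes, deterministic
# loss, and random toy transitions are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM, ACT_DIM, Z_DIM = 10, 3, 8

state_enc = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, Z_DIM))
action_enc = nn.Sequential(nn.Linear(ACT_DIM, 64), nn.ReLU(), nn.Linear(64, Z_DIM))
decoder = nn.Sequential(nn.Linear(2 * Z_DIM, 64), nn.ReLU(), nn.Linear(64, STATE_DIM))

params = list(state_enc.parameters()) + list(action_enc.parameters()) + list(decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

for step in range(200):
    # Stand-in batch of transitions (s, a, s'); a real agent would sample these
    # from a replay buffer.
    s = torch.randn(32, STATE_DIM)
    a = torch.randn(32, ACT_DIM)
    s_next = s + 0.1 * a.sum(dim=1, keepdim=True)     # toy dynamics

    z_s, z_a = state_enc(s), action_enc(a)
    s_next_pred = decoder(torch.cat([z_s, z_a], dim=1))
    loss = ((s_next_pred - s_next) ** 2).mean()       # forward-prediction loss

    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(loss))
```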

Improving Generalization In Meta Reinforcement Learning Using Neural Objectives

Louis Kirsch · Sjoerd Van Steenkiste · Juergen Schmidhuber

Rating: [6,6,8]

OpenReview

Abstract

Biological evolution has distilled the experiences of many learners into the general learning algorithms of humans. Our novel meta-reinforcement learning algorithm MetaGenRL is inspired by this process. MetaGenRL distills the experiences of many complex agents to meta-learn a low-complexity neural objective function that affects how future individuals will learn. Unlike recent meta-RL algorithms, MetaGenRL can generalize to new environments that are entirely different from those used for meta-training. In some cases, it even outperforms human-engineered RL algorithms. MetaGenRL uses off-policy second-order gradients during meta-training that greatly increase its sample efficiency.

Learning To Plan In High Dimensions Via Neural Exploration-exploitation Trees

Binghong Chen · Bo Dai · Qinjie Lin · Guo Ye · Han Liu · Le Song

Rating: [8,8,6]

OpenReview

Abstract

We propose a meta path planning algorithm named \emph{Neural Exploration-Exploitation Trees~(NEXT)} for learning from prior experience for solving new path planning problems in high dimensional continuous state and action spaces. Compared to more classical sampling-based methods like RRT, our approach achieves much better sample efficiency in high-dimensions and can benefit from prior experience of planning in similar environments. More specifically, NEXT exploits a novel neural architecture which can learn promising search directions from problem structures. The learned prior is then integrated into a UCB-type algorithm to achieve an online balance between \emph{exploration} and \emph{exploitation} when solving a new problem. We conduct thorough experiments to show that NEXT accomplishes new planning problems with more compact search trees and significantly outperforms state-of-the-art methods on several benchmarks.

Hierarchical Foresight: Self-supervised Learning Of Long-horizon Tasks Via Visual Subgoal Generation

Suraj Nair · Chelsea Finn

Rating: [6,6]

OpenReview

Abstract

Video prediction models combined with planning algorithms have shown promise in enabling robots to learn to perform many vision-based tasks through only self-supervision, reaching novel goals in cluttered scenes with unseen objects. However, due to the compounding uncertainty in long-horizon video prediction and the poor scalability of sampling-based planning optimizers, one significant limitation of these approaches is their limited ability to plan over long horizons to reach distant goals. To that end, we propose a framework for subgoal generation and planning, hierarchical visual foresight (HVF), which generates subgoal images conditioned on a goal image and uses them for planning. The subgoal images are directly optimized to decompose the task into easy-to-plan segments, and as a result, we observe that the method naturally identifies semantically meaningful states as subgoals. Across three out of four simulated vision-based manipulation tasks, we find that our method achieves nearly a 200% performance improvement over planning without subgoals and over model-free RL approaches. Further, our experiments illustrate that our approach extends to real, cluttered visual scenes.

Hypermodels For Exploration

Vikranth Dwaracherla · Xiuyuan Lu · Morteza Ibrahimi · Ian Osband · Zheng Wen · Benjamin Van Roy

Rating: [8,3,6]

OpenReview

Abstract

We study the use of hypermodels to represent epistemic uncertainty and guide exploration. This generalizes and extends the use of ensembles to approximate Thompson sampling. The computational cost of training an ensemble grows with its size, and as such, prior work has typically been limited to ensembles with tens of elements. We show that alternative hypermodels can enjoy dramatic efficiency gains, enabling behavior that would otherwise require hundreds or thousands of elements, and even succeed in situations where ensemble methods fail to learn regardless of size. This allows more accurate approximation of Thompson sampling as well as use of more sophisticated exploration schemes. In particular, we consider an approximate form of information-directed sampling and demonstrate performance gains relative to Thompson sampling. As alternatives to ensembles, we consider linear and neural network hypermodels, also known as hypernetworks. We prove that, with neural network base models, a linear hypermodel can represent essentially any distribution over functions, and as such, hypernetworks do not extend what can be represented.
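
As a small illustration of the linear-hypermodel idea (not the authors' training procedure), the sketch below represents a Gaussian posterior over base-model weights as θ(z) = μ + Az with z ~ N(0, I) and acts Thompson-style by sampling an index z; the Bayesian linear-regression toy problem is an assumption for illustration.

```python
# A minimal sketch of the linear-hypermodel idea: represent a distribution over
# base-model parameters as theta(z) = mu + A @ z with z ~ N(0, I), and act by
# sampling an index z (Thompson-style) instead of maintaining a large ensemble.
# Here the base model is Bayesian linear regression, whose exact posterior is
# Gaussian, so a linear hypermodel with A = chol(Sigma) represents it exactly;
# the toy bandit and priors are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
D, NOISE_VAR, PRIOR_VAR = 3, 0.25, 1.0
theta_true = rng.normal(size=D)

# A few observed (feature, reward) pairs.
X = rng.normal(size=(10, D))
y = X @ theta_true + np.sqrt(NOISE_VAR) * rng.normal(size=10)

# Exact Gaussian posterior over the base model's weights.
Sigma = np.linalg.inv(X.T @ X / NOISE_VAR + np.eye(D) / PRIOR_VAR)
mu = Sigma @ X.T @ y / NOISE_VAR

# Linear hypermodel: index z -> parameters mu + A @ z.
A = np.linalg.cholesky(Sigma)
def sample_params():
    z = rng.normal(size=D)
    return mu + A @ z

# Thompson-style action selection among candidate arms (feature vectors).
arms = rng.normal(size=(5, D))
theta_sample = sample_params()
chosen = int(np.argmax(arms @ theta_sample))
print("sampled parameters:", np.round(theta_sample, 2), "-> pull arm", chosen)
```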
