Towards Deep Symbolic Reinforcement Learning
Deep reinforcement learning (DRL) brings the power of deep neural networks to bear on the generic task of trial-and-error learning, and its effectiveness has been convincingly demonstrated on tasks such as Atari video games and the game of Go. However, contemporary DRL systems inherit a number of shortcomings from the current generation of deep learning techniques. For example, they require very large datasets to work effectively, entailing that they are slow to learn even when such datasets are available. Moreover, they lack the ability to reason on an abstract level, which makes it difficult to implement high-level cognitive functions such as transfer learning, analogical reasoning, and hypothesis-based reasoning. Finally, their operation is largely opaque to humans, rendering them unsuitable for domains in which verifiability is important. In this paper, we propose an end-to-end reinforcement learning architecture comprising a neural back end and a symbolic front end with the potential to overcome each of these shortcomings. As a proof of concept, we present a preliminary implementation of the architecture and apply it to several variants of a simple video game. We show that the resulting system, though just a prototype, learns effectively, and, by acquiring a set of symbolic rules that are easily comprehensible to humans, dramatically outperforms a conventional, fully neural DRL system on a stochastic variant of the game.
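The division of labour the abstract describes can be sketched in a few lines: a neural back end would map raw observations to a compact symbolic state, and the symbolic front end then learns over those interpretable states with ordinary tabular Q-learning. Everything below is a hypothetical illustration, not the paper's actual architecture; `perceive` is a stub standing in for the learned perception module, and the state encoding and hyperparameters are invented for the example.

```python
import random
from collections import defaultdict

def perceive(raw_frame):
    """Stand-in for the neural back end: in the real architecture a network
    would extract symbolic objects from pixels; here we fake it."""
    return (raw_frame["agent_x"], raw_frame["nearest_object"])

# Symbolic front end: tabular Q-learning over human-readable states.
Q = defaultdict(float)                 # Q[(symbolic_state, action)]
ACTIONS = ["left", "right"]
alpha, gamma, eps = 0.5, 0.9, 0.1     # learning rate, discount, exploration

def choose(state):
    """Epsilon-greedy action selection over the symbolic state."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning backup on the symbolic state space."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```

Because the learned table is indexed by symbolic states rather than raw pixels, its entries can be read off as rules ("when next to a negative object, move away"), which is the kind of human-comprehensible behaviour the abstract claims.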
End-to-End Reinforcement Learning of Dialogue Agents for Information Access
Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, Li Deng
This paper proposes a dialogue agent that provides users with an entity from a knowledge base (KB) by interactively asking for its attributes. All components of the KB-InfoBot are trained in an end-to-end fashion using reinforcement learning. Goal-oriented dialogue systems typically need to interact with an external database to access real-world knowledge (e.g. movies playing in a city). Previous systems achieved this by issuing a symbolic query to the database and adding retrieved results to the dialogue state. However, such symbolic operations break the differentiability of the system and prevent end-to-end training of neural dialogue agents. In this paper, we address this limitation by replacing symbolic queries with an induced “soft” posterior distribution over the KB that indicates which entities the user is interested in. We also provide a modified version of the episodic REINFORCE algorithm, which allows the KB-InfoBot to explore and learn both the policy for selecting dialogue acts and the posterior over the KB for retrieving the correct entities. Experimental results show that the end-to-end trained KB-InfoBot outperforms competitive rule-based baselines, as well as agents which are not end-to-end trainable.
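The key move in this abstract is replacing a hard symbolic database query with a "soft" posterior over KB entities, so that the lookup stays differentiable. A minimal sketch of that idea, under invented assumptions (a toy movie KB and a per-attribute belief dictionary; none of this is the paper's actual model or parameterization):

```python
# Hypothetical toy KB: each entity is a dict of attribute values.
KB = [
    {"title": "Inception",    "genre": "sci-fi",    "year": "2010"},
    {"title": "Up",           "genre": "animation", "year": "2009"},
    {"title": "Interstellar", "genre": "sci-fi",    "year": "2014"},
]

def soft_posterior(attribute_beliefs, smooth=1e-6):
    """Soft lookup: instead of issuing a hard symbolic query, score every
    entity by the product of the agent's per-attribute beliefs about what
    the user wants, then normalize into a posterior over KB entities."""
    scores = []
    for entity in KB:
        s = 1.0
        for attr, belief in attribute_beliefs.items():
            # belief maps an attribute value -> probability the user wants it
            s *= belief.get(entity[attr], smooth)
        scores.append(s)
    z = sum(scores)
    return [s / z for s in scores]

# The agent believes the user probably wants a sci-fi movie.
beliefs = {"genre": {"sci-fi": 0.9, "animation": 0.1}}
posterior = soft_posterior(beliefs)
```

Every entity retains nonzero probability mass, so gradients (and the RL credit signal) can flow through the retrieval step, which is what enables the end-to-end training the abstract describes.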