Awesome Papers: 2016-11-1

Towards Deep Symbolic Reinforcement Learning

Deep reinforcement learning (DRL) brings the power of deep neural networks to bear on the generic task of trial-and-error learning, and its effectiveness has been convincingly demonstrated on tasks such as Atari video games and the game of Go. However, contemporary DRL systems inherit a number of shortcomings from the current generation of deep learning techniques. For example, they require very large datasets to work effectively, entailing that they are slow to learn even when such datasets are available. Moreover, they lack the ability to reason on an abstract level, which makes it difficult to implement high-level cognitive functions such as transfer learning, analogical reasoning, and hypothesis-based reasoning. Finally, their operation is largely opaque to humans, rendering them unsuitable for domains in which verifiability is important. In this paper, we propose an end-to-end reinforcement learning architecture comprising a neural back end and a symbolic front end with the potential to overcome each of these shortcomings. As proof-of-concept, we present a preliminary implementation of the architecture and apply it to several variants of a simple video game. We show that the resulting system, though just a prototype, learns effectively, and, by acquiring a set of symbolic rules that are easily comprehensible to humans, dramatically outperforms a conventional, fully neural DRL system on a stochastic variant of the game.

Summary

Deep reinforcement learning applies the power of deep learning to the generic problem of trial-and-error learning, as convincingly demonstrated on Atari games and the game of Go. Contemporary deep RL systems, however, inherit many of the weaknesses of current deep learning techniques: they need very large amounts of data and learn slowly; they lack the ability to reason at an abstract level, which makes higher-level cognitive functions such as transfer learning, analogical reasoning, and hypothesis-based reasoning hard to implement; and their operation is difficult for humans to understand, which limits their use in domains where verification matters. The paper proposes an end-to-end reinforcement learning architecture built from a symbolic front end and a neural back end. As a proof of concept, the authors present a preliminary implementation, apply it to variants of a simple video game, and show that by acquiring a set of symbolic rules easily understood by humans, the system dramatically outperforms a conventional, fully neural deep RL system.
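To make the division of labour concrete, here is a minimal sketch of the hybrid idea the abstract describes: a neural back end that turns raw frames into symbolic object detections, feeding a simple tabular Q-learner over those symbols whose learned values can be inspected directly. This is an illustration under our own assumptions, not the paper's actual architecture; names such as detect_symbols and SymbolicQLearner are ours:

import random
from collections import defaultdict

def detect_symbols(frame):
    # Stand-in for the neural back end. In the paper this role is played
    # by a trained network that extracts object types and positions from
    # pixels; here we assume the frame is already a dict of
    # object name -> (x, y) position and just canonicalize it.
    return tuple(sorted((obj, pos) for obj, pos in frame.items()))

class SymbolicQLearner:
    # Symbolic front end: ordinary tabular Q-learning over discrete
    # symbolic states. Because every state-action value is an explicit
    # table entry, the learned behaviour can be read off directly,
    # which is the interpretability benefit the paper argues for.
    def __init__(self, actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)
        self.actions = actions
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        # Epsilon-greedy action selection over the symbolic state.
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, r, s_next):
        # Standard one-step Q-learning update.
        best_next = max(self.q[(s_next, a2)] for a2 in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

# Illustrative usage on hand-built symbolic frames:
agent = SymbolicQLearner(actions=["left", "right"])
s = detect_symbols({"agent": (0, 0), "goal": (3, 0)})
a = agent.act(s)
s_next = detect_symbols({"agent": (1, 0), "goal": (3, 0)})
agent.update(s, a, 0.0, s_next)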


End-to-End Reinforcement Learning of Dialogue Agents for Information Access

Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, Li Deng

This paper proposes a dialogue agent that provides users with an entity from a knowledge base (KB) by interactively asking for its attributes. All components of the KB-InfoBot are trained in an end-to-end fashion using reinforcement learning. Goal-oriented dialogue systems typically need to interact with an external database to access real-world knowledge (e.g. movies playing in a city). Previous systems achieved this by issuing a symbolic query to the database and adding retrieved results to the dialogue state. However, such symbolic operations break the differentiability of the system and prevent end-to-end training of neural dialogue agents. In this paper, we address this limitation by replacing symbolic queries with an induced “soft” posterior distribution over the KB that indicates which entities the user is interested in. We also provide a modified version of the episodic REINFORCE algorithm, which allows the KB-InfoBot to explore and learn both the policy for selecting dialogue acts and the posterior over the KB for retrieving the correct entities. Experimental results show that the end-to-end trained KB-InfoBot outperforms competitive rule-based baselines, as well as agents which are not end-to-end trainable.

Summary

The paper proposes a dialogue agent that supplies the user with an entity from a knowledge base (KB) through interactive question answering. All modules of the KB-InfoBot are trained end to end with reinforcement learning. Goal-oriented dialogue systems typically have to interact with an external database to access real-world knowledge (for example, which movies are playing in a given city). Earlier systems did this by issuing a symbolic query to the database and adding the retrieved results to the dialogue state, but such symbolic operations make the system non-differentiable and rule out end-to-end training of a neural dialogue agent. The paper addresses this by replacing the symbolic query with a "soft" posterior distribution over the KB that represents which entities the user is interested in. It also presents a modified version of the episodic REINFORCE algorithm that lets the KB-InfoBot jointly learn a policy for selecting dialogue acts and the posterior over the KB used to retrieve the correct entities. Experiments show that the end-to-end trained KB-InfoBot outperforms competitive rule-based baselines as well as agents that cannot be trained end to end.
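The key trick is that the "soft" lookup is just a product of belief vectors, so it stays differentiable. Here is a minimal sketch of that computation on a toy KB, under our own assumptions about the data layout; the names and table are illustrative, not the paper's implementation:

import numpy as np

# Toy KB: each row is an entity, each column holds the index of that
# entity's value for one attribute (e.g. genre, year bucket).
kb = np.array([
    [0, 1],   # entity 0: genre=0, year-bucket=1
    [0, 0],   # entity 1: genre=0, year-bucket=0
    [1, 1],   # entity 2: genre=1, year-bucket=1
])

# Belief over each attribute's values, as a neural state tracker
# might produce from the dialogue so far.
beliefs = [
    np.array([0.8, 0.2]),   # p(genre = 0) = 0.8
    np.array([0.3, 0.7]),   # p(year-bucket = 1) = 0.7
]

def soft_posterior(kb, beliefs):
    # p(entity) is proportional to the product, over attributes, of the
    # belief mass assigned to that entity's attribute value. Only
    # differentiable operations (indexing, products, normalization)
    # are used, so no hard symbolic query is needed.
    scores = np.ones(kb.shape[0])
    for j, b in enumerate(beliefs):
        scores *= b[kb[:, j]]
    return scores / scores.sum()

print(soft_posterior(kb, beliefs))  # highest mass on entity 0

Because gradients flow through this lookup, the paper's modified episodic REINFORCE can train the dialogue policy and the KB posterior jointly, which a hard database query would prevent.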
