How DeepMind’s UNREAL Agent Performed 9 Times Better Than Experts on Atari


Auxiliary Control Tasks

We can think of auxiliary tasks as “side quests.” Although they don’t directly help achieve the overall goal, they help the agent learn about environment dynamics and extract relevant information. In turn, that helps the agent learn how to achieve the desired overall end state. We can also view them as additional pseudo-reward functions for the agent to interact with.

Overall, the goal is to maximize the sum of two terms:

  1. The expected cumulative extrinsic reward
  2. The expected cumulative sum of auxiliary rewards

Overall Maximization Goal:

$$\max_{\theta}\;\mathbb{E}_{\pi}\!\left[R_{1:\infty}\right] \;+\; \lambda_{c}\sum_{c\in\mathcal{C}}\mathbb{E}_{\pi_{c}}\!\left[R^{(c)}_{1:\infty}\right]$$

where the superscript (c) denotes the reward for a particular auxiliary control task. Here are the two control tasks used by UNREAL:

  • Pixel Changes (Pixel Control): The agent tries to maximize changes in pixel values since these changes often correspond to important events.
  • Network Features (Feature Control): The agent tries to maximize the activation of all units in a given layer. This can force the policy and value networks to extract more task-relevant, high-level information.

For more details on how these tasks are defined and learned, feel free to skim this paper [1]. For now, just know that the agent tries to learn accurate Q-value functions for these auxiliary tasks, using auxiliary rewards defined by the user.
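To make the pixel-control pseudo-reward concrete, here is a minimal NumPy sketch. The cell size, the use of plain absolute intensity differences, and the function name `pixel_control_rewards` are illustrative assumptions, not the paper's exact preprocessing:

```python
import numpy as np

def pixel_control_rewards(prev_frame, frame, cell_size=4):
    """Pseudo-reward for the Pixel Control task: average absolute change
    in pixel intensity inside each (cell_size x cell_size) cell."""
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    change = diff.mean(axis=-1)  # average over colour channels -> (H, W)

    h, w = change.shape
    hc, wc = h // cell_size, w // cell_size
    # Crop to a whole number of cells, then average within each cell.
    cells = change[: hc * cell_size, : wc * cell_size]
    cells = cells.reshape(hc, cell_size, wc, cell_size)
    return cells.mean(axis=(1, 3))  # (hc, wc): one pseudo-reward per cell
```

Each cell's pseudo-reward is then treated just like an extrinsic reward, only for the pixel-control policy rather than the base agent.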

Okay, perfect! Now we just add the extrinsic and auxiliary rewards and run A3C with the sum as a newly defined reward, right?

How UNREAL is Clever

In actuality, UNREAL does something different. Instead of training a single policy to optimize this combined reward, it trains a separate policy for each auxiliary task on top of the base A3C policy. While all auxiliary task policies share some network components with the base A3C agent, each also adds its own components to define a separate policy.

For example, the “Pixel Control” task adds a deconvolutional network after the shared convolutional network and LSTM; its output defines the Q-values for the pixel-control policy. (Skim [1] for implementation details.)
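To show how a shared torso with separate heads might look, here is a rough PyTorch-style sketch. The layer sizes, the 84×84 input assumption, and the name `UnrealTorso` are illustrative, not the exact UNREAL configuration: a convolutional stack and LSTM are shared, while the A3C policy/value heads and a pixel-control deconvolutional head branch off them.

```python
import torch
import torch.nn as nn

class UnrealTorso(nn.Module):
    """Shared conv + LSTM torso with A3C heads and a pixel-control head.

    A sketch only: sizes are assumptions, not the exact UNREAL config.
    """
    def __init__(self, num_actions):
        super().__init__()
        # Shared components used by the base agent and all auxiliary tasks.
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.lstm = nn.LSTMCell(32 * 9 * 9, 256)  # assumes 84x84 RGB inputs
        # Base A3C heads (policy and value).
        self.policy = nn.Linear(256, num_actions)
        self.value = nn.Linear(256, 1)
        # Pixel-control head: deconvolution producing one Q-map per action.
        self.pc_fc = nn.Linear(256, 32 * 7 * 7)
        self.pc_deconv = nn.ConvTranspose2d(32, num_actions, kernel_size=4, stride=2)

    def forward(self, obs, hidden):
        x = self.conv(obs).flatten(1)
        h, c = self.lstm(x, hidden)
        logits, value = self.policy(h), self.value(h)
        # Spatial Q-values for the pixel-control policy: (batch, actions, H', W').
        pc_q = self.pc_deconv(self.pc_fc(h).view(-1, 32, 7, 7))
        return logits, value, pc_q, (h, c)
```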

Each of the policies optimizes an n-step Q-learning loss:

$$\mathcal{L}^{(c)}_{Q} = \mathbb{E}\!\left[\left(R_{t:t+n} + \gamma^{\,n}\max_{a'} Q^{(c)}\!\left(s_{t+n}, a'; \theta^{-}\right) - Q^{(c)}\!\left(s_{t}, a_{t}; \theta\right)\right)^{2}\right]$$

where $R_{t:t+n}$ is the n-step discounted sum of auxiliary rewards and $\theta^{-}$ denotes older, fixed target parameters.
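As a sketch of how this loss could be computed for a single auxiliary task, here is a small PyTorch-style function. The function name, tensor shapes, and the flat action-value layout are simplifying assumptions; pixel control actually keeps a Q-value per spatial cell.

```python
import torch

def nstep_q_loss(q_values, actions, rewards, bootstrap_q, gamma=0.99):
    """n-step Q-learning loss for one auxiliary control task (sketch).

    q_values:    Q^(c)(s_t, .) for t = 0..n-1, shape (n, num_actions)
    actions:     actions taken, shape (n,)
    rewards:     auxiliary rewards r^(c)_t, shape (n,)
    bootstrap_q: max_a Q^(c)(s_n, a) from older target parameters (scalar)
    """
    n = rewards.shape[0]
    returns = torch.empty_like(rewards)
    g = bootstrap_q
    # Accumulate the n-step return backwards from the bootstrap value.
    for t in reversed(range(n)):
        g = rewards[t] + gamma * g
        returns[t] = g
    q_taken = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)
    # Squared error between the n-step return and the predicted Q-value.
    return ((returns.detach() - q_taken) ** 2).mean()
```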

Even more surprisingly, we never explicitly use these auxiliary control task policies. Even though we learn which actions optimize each auxiliary task, only the base A3C agent's actions are executed in the environment. You may then think that all this auxiliary training was for nothing!

Not quite. The key is that parts of the architecture are shared between the A3C agent and the auxiliary control tasks! As we optimize the auxiliary task policies, we are changing parameters that are also used by the base agent. This has what I like to call a “nudging effect.”

Updating shared components not only helps learn auxiliary tasks but also better equips the agent to solve the overall problem by extracting relevant information from the environment.

In other words, we get more information from the environment than if we did not use auxiliary tasks.
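Putting it together, here is a minimal sketch of a combined update. The helper name `unreal_update` and the weighting `lambda_aux` are illustrative assumptions: the auxiliary losses are added to the base A3C loss, and a single backward pass updates the shared torso that every head depends on.

```python
import torch

def unreal_update(a3c_loss, aux_losses, optimizer, lambda_aux=0.01):
    """One optimization step combining the base and auxiliary losses (sketch).

    Because the conv/LSTM parameters sit below every head, gradients from
    the auxiliary heads also reshape the representation the base agent uses.
    """
    total_loss = a3c_loss + lambda_aux * sum(aux_losses)
    optimizer.zero_grad()
    total_loss.backward()  # gradients flow into the shared torso from all heads
    optimizer.step()
    return total_loss.item()
```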

