
Matrix Calculus for Deep Learning (Part 2)

May 29, 2020


We can’t compute partial derivatives of very complicated functions using just the basic matrix calculus rules we saw in Part 1 of this blog. For example, we can’t take the derivative of nested expressions like sum( w + x ) directly without reducing them to their scalar equivalents. We need to be able to combine our basic vector rules using the vector chain rule.

The paper defines and names three different chain rules:

  1. Single-variable chain rule
  2. Single-variable total-derivative chain rule
  3. Vector chain rule

The chain rule comes into play when we need the derivative of an expression composed of nested subexpressions. It helps us solve the problem by breaking a complicated expression into subexpressions whose derivatives are easy to compute.

Single-variable chain rule

Chain rules are defined in terms of nested functions, such as y = f(g(x)) for the single-variable chain rule.

The formula is:

dy/dx = (dy/du) · (du/dx) , where u = g(x)

There are four steps to solving a derivative with the single-variable chain rule:

  1. Introduce intermediate variables.
  2. Compute the derivatives of the intermediate variables with respect to their parameters.
  3. Combine all derivatives by multiplying them together.
  4. Substitute the intermediate variables back into the derivative equation.

Let's see an example with the nested equation y = f(x) = ln(sin(x³)²).

Introduce the intermediate variables and differentiate each one in isolation:

u₁ = x³ , du₁/dx = 3x²
u₂ = sin(u₁) , du₂/du₁ = cos(u₁)
u₃ = u₂² , du₃/du₂ = 2u₂
y = ln(u₃) , dy/du₃ = 1/u₃

Multiplying them together and substituting the intermediate variables back in:

dy/dx = (1/u₃) · 2u₂ · cos(u₁) · 3x² = (2 sin(x³) cos(x³) · 3x²) / sin(x³)² = 6x² cos(x³) / sin(x³)

The key is that we can compute the derivatives of the intermediate variables in isolation!

But the single-variable chain rule is applicable only when a single variable influences the output in only one way. As the example shows, we can handle a nested expression of a single variable x with this chain rule only when x can affect y through a single data-flow path.
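
Before moving on, here is a minimal sketch that sanity-checks the result above with SymPy (the variable names and the check itself are my own, not from the paper); SymPy's `diff` applies the same chain rule mechanically:

```python
# Minimal check of dy/dx for y = ln(sin(x^3)^2) using SymPy.
import sympy as sp

x = sp.symbols('x', positive=True)   # positive x keeps ln and the simplification clean

y = sp.log(sp.sin(x**3)**2)          # the nested expression from the example
dy_dx = sp.diff(y, x)                # SymPy applies the chain rule for us

# the hand-derived result from the four steps above
by_hand = 6 * x**2 * sp.cos(x**3) / sp.sin(x**3)

print(sp.simplify(dy_dx - by_hand))  # prints 0, so the two expressions agree
```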

Single-variable total-derivative chain rule

If we apply the single-variable chain rule to y = f(x) = x + x², we get the wrong answer, because the derivative operator does not apply to multivariate functions. A change in x affects y both as an operand of the addition and as an operand of the square, so we clearly can't apply the single-variable chain rule. So…

we move to total derivatives.

That is, to compute dy/dx, we need to sum up all possible contributions from changes in x to the change in y.

The formula for the total-derivative chain rule is:

∂f(x, u₁, …, uₙ)/∂x = ∂f/∂x + ∂f/∂u₁ · ∂u₁/∂x + ∂f/∂u₂ · ∂u₂/∂x + … + ∂f/∂uₙ · ∂uₙ/∂x

The total derivative assumes all variables are potentially codependent, whereas the partial derivative assumes all variables but x are constants.

When you take the total derivative with respect to x, the other variables might also be functions of x, so we add in their contributions as well. The left side of the equation looks like a typical partial derivative, but the right-hand side is actually the total derivative.

Let's see the example y = x + x² worked with the total-derivative chain rule:

Introduce u₁(x) = x² and u₂(x, u₁) = x + u₁, so that y = u₂. The partial derivatives are ∂u₁/∂x = 2x, ∂u₂/∂u₁ = 1, and ∂u₂/∂x = 1, so:

dy/dx = ∂u₂/∂x + ∂u₂/∂u₁ · ∂u₁/∂x = 1 + 1 · 2x = 1 + 2x

The total-derivative formula always sums the terms in the derivative; it never multiplies them. For example, given y = x × x² instead of y = x + x², the total-derivative chain-rule formula still adds partial-derivative terms; for more detail, see the demonstration in the paper.
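
To make the summing concrete, here is a small numerical sketch (function and variable names are illustrative): a central finite difference perturbs x along every path at once, so it approximates the total derivative, which should match 1 + 2x:

```python
# Numerical check of the total derivative of y = x + x^2.

def f(x):
    u1 = x * x     # u1(x) = x^2, the intermediate variable
    u2 = x + u1    # u2(x, u1): x influences y through two paths
    return u2

def numeric_derivative(f, x, h=1e-6):
    # central difference; perturbing x moves *all* of its contributions
    return (f(x + h) - f(x - h)) / (2 * h)

x = 3.0
print(numeric_derivative(f, x))  # ~7.0
print(1 + 2 * x)                 # total-derivative chain rule gives 1 + 2x = 7.0
```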

The total-derivative formula can be simplified further by introducing an alias for x, uₙ₊₁ = x:

∂f(u₁, …, uₙ₊₁)/∂x = Σᵢ ∂f/∂uᵢ · ∂uᵢ/∂x , summing over i = 1, …, n + 1, with uₙ₊₁ = x

This total-derivative chain rule degenerates to the single-variable chain rule when all intermediate variables are functions of a single variable. Notice that the sum of products is exactly a dot product, ∂f/∂u · ∂u/∂x.
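
Here is a small NumPy sketch of that dot-product form for y = x + x² with the alias u₂ = x (the values and names are my own choices for illustration):

```python
# dy/dx as a dot product: sum_i (df/du_i)(du_i/dx) for f(u1, u2) = u1 + u2,
# where u1 = x^2 and u2 = x is the alias for x itself.
import numpy as np

x = 3.0
df_du = np.array([1.0, 1.0])      # df/du1 = 1, df/du2 = 1
du_dx = np.array([2 * x, 1.0])    # du1/dx = 2x, du2/dx = 1

dy_dx = df_du @ du_dx             # the sum of products is a dot product
print(dy_dx)                      # 7.0 = 1 + 2x at x = 3
```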

Vector chain rule

Let's start with the derivative of a sample vector function with respect to a scalar, y = f(x):

y = [ y₁ ; y₂ ] = [ f₁(x) ; f₂(x) ] = [ ln(x²) ; sin(3x) ]

We introduce two intermediate variables, g₁ and g₂, one for each fᵢ, so that y looks more like y = f(g(x)):

g₁(x) = x² , g₂(x) = 3x , so y = [ f₁(g) ; f₂(g) ] = [ ln(g₁) ; sin(g₂) ]

dy₁/dx = ∂f₁/∂g₁ · ∂g₁/∂x = (1/g₁) · 2x = 2x/x² = 2/x
dy₂/dx = ∂f₂/∂g₂ · ∂g₂/∂x = cos(g₂) · 3 = 3 cos(3x)

If we split the terms, isolating them into a matrix and a vector, we get a matrix-by-vector product:

∂y/∂x = [ ∂f₁/∂g₁ , ∂f₁/∂g₂ ; ∂f₂/∂g₁ , ∂f₂/∂g₂ ] · [ ∂g₁/∂x ; ∂g₂/∂x ] = (∂f/∂g) · (∂g/∂x)

That is, the Jacobian of f with respect to g multiplied by the vector of derivatives of g with respect to x.
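
As a quick numerical check, here is a sketch that evaluates that matrix-by-vector product at a sample point and compares it against the hand-derived derivatives 2/x and 3 cos(3x) (the test point is arbitrary):

```python
# Vector chain rule dy/dx = (df/dg)(dg/dx) for f1 = ln(g1), f2 = sin(g2),
# with g1 = x^2 and g2 = 3x, evaluated numerically at x = 2.
import numpy as np

x = 2.0
g1, g2 = x**2, 3 * x

# Jacobian of f with respect to g: f1 depends only on g1 and f2 only
# on g2, so the off-diagonal entries are zero.
df_dg = np.array([[1 / g1, 0.0],
                  [0.0,    np.cos(g2)]])

# Derivatives of g with respect to the scalar x (a column vector).
dg_dx = np.array([[2 * x],
                  [3.0]])

dy_dx = df_dg @ dg_dx
print(dy_dx.ravel())               # matrix-by-vector product
print(2 / x, 3 * np.cos(3 * x))    # hand-derived: 2/x and 3cos(3x)
```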

This completes the chain rule. In the next blog, Part 3, we will see how to apply these rules to the gradients of neural-network activation and loss functions, and wrap up.

Thank you.

Useful Points:

While writing a blog in markdown, it is difficult to type superscripts and subscripts, so I have listed them below; you can copy and paste them into your markdown.

Superscript: ⁰ ¹ ² ³ ⁴ ⁵ ⁶ ⁷ ⁸ ⁹ ᵃ ᵇ ᶜ ᵈ ᵉ ᶠ ᵍ ʰ ᶦ ʲ ᵏ ˡ ᵐ ⁿ ᵒ ᵖ ʳ ˢ ᵗ ᵘ ᵛ ʷ ˣ ʸ ᶻ

Subscript: ₀ ₁ ₂ ₃ ₄ ₅ ₆ ₇ ₈ ₉ ₐ ᵦ ₑ ₕ ᵢ ⱼ ₖ ₗ ₘ ₙ ₒ ₚ ᵩ ᵣ ₛ ₜ ᵤ ᵥ ₓ ᵧ

# Blog 10


Written by Kiran U Kamath

You can follow me on

Twitter | LinkedIn

