Neural Networks for Machine Learning Lecture 7
Course page: https://www.coursera.org/learn/neural-networks
Instructor's homepage: http://www.cs.toronto.edu/~hinton
Note: the contents and figures of these notes are based on the instructor's slides.
This lecture gives only a whirlwind tour of RNNs; here I mainly review the LSTM and write up this week's quiz solutions.
Long Short-Term Memory (LSTM)
Because of exploding and vanishing gradients, an RNN has great difficulty learning from inputs that occurred far in the past. To address this, Long Short-Term Memory (LSTM) was proposed, shown below.
There are three gates: the keep gate, the write gate, and the read gate. The keep gate controls the memory cell: if its weight is $1$, the stored value is preserved unchanged. The write gate controls what is written into the cell, and the read gate controls what is read out of it.
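To make the gate arithmetic concrete, here is a minimal sketch of one step of a single (scalar) LSTM cell in plain Python. The parameter names (w_kx, b_k, etc.) and the tanh nonlinearities are illustrative choices, not taken from the lecture; in modern terminology the keep/write/read gates are the forget/input/output gates.

from math import exp, tanh

def sigmoid(k):
    return 1 / (1 + exp(-k))

def lstm_step(x, h_prev, c_prev, p):
    # Each gate is a logistic unit that sees the current input and the previous output.
    keep = sigmoid(p["w_kx"] * x + p["w_kh"] * h_prev + p["b_k"])    # keep (forget) gate
    write = sigmoid(p["w_wx"] * x + p["w_wh"] * h_prev + p["b_w"])   # write (input) gate
    read = sigmoid(p["w_rx"] * x + p["w_rh"] * h_prev + p["b_r"])    # read (output) gate
    cand = tanh(p["w_cx"] * x + p["w_ch"] * h_prev + p["b_c"])       # candidate value to store
    # A keep gate of 1 leaves the memory cell unchanged; the write gate scales what enters.
    c = keep * c_prev + write * cand
    # The read gate scales how much of the cell is exposed as output.
    h = read * tanh(c)
    return h, c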
Quiz
This week's quiz is fairly difficult, so I record the solutions here.
Problem 1
How many bits of information can be modeled by the hidden state (at some specific time) of a Hidden Markov Model with 16 hidden units?
- $2$
- $4$
- $16$
- $>16$
$n$ bits of information can represent $2^n$ binary numbers, i.e. $2^n$ states, so $m$ states correspond to $\log_2(m)$ bits of information. For an HMM, $16$ hidden units mean $16$ possible states, so the hidden state carries $\log_2(16)=4$ bits of information. The answer is $4$.
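As a quick sanity check in Python:

from math import log2

# 16 distinct hidden states carry log2(16) bits of information
print(log2(16))  # 4.0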
Problem 2
This question is about speech recognition. To accurately recognize what phoneme is being spoken at a particular time, one needs to know the sound data from 100ms before that time to 100ms after that time, i.e. a total of 200ms of sound data. Which of the following setups have access to enough sound data to recognize what phoneme was being spoken 100ms into the past?
- A Recurrent Neural Network (RNN) with 200ms of input
- A Recurrent Neural Network (RNN) with 30ms of input
- A feed forward Neural Network with 30ms of input
- A feed forward Neural Network with 200ms of input
Storing a full $200$ms of input is certainly enough, so the first and fourth options are correct. An RNN also works with only $30$ms of input per step: its hidden state can carry information from earlier frames forward in time, so it still has access to the sound from $200$ms ago up to the present. A feed-forward network with only $30$ms of input has no such memory, so it cannot recognize the phoneme.
Problem 3
The figure below shows a Recurrent Neural Network (RNN) with one input unit $x$, one logistic hidden unit $h$, and one linear output unit $y$. The RNN is unrolled in time for $T=0,1$, and $2$.
The network parameters are: $W_{xh}=0.5,W_{hh}=-1.0, W_{hy}=-0.7 , h_{bias}=-1.0$, and $y_{bias}=0.0$. Remember, $\sigma(k) = \frac{1}{1+\exp(-k)}$.
If the input $x$ takes the values $9, 4, -2$ at time steps $0, 1, 2$ respectively, what is the value of the output $y$ at $T=1$? Give your answer to at least two digits after the decimal point.
To get the output at $T=1$ we need the recurrences
$$z_t = W_{xh}x_t + W_{hh}h_{t-1} + h_{\text{bias}},\qquad h_t=\sigma(z_t),\qquad y_t = W_{hy}h_t + y_{\text{bias}}$$
(with no $W_{hh}h_{t-1}$ term at $t=0$). Substituting the given values:
$$z_0 = 0.5\times 9 - 1 = 3.5,\qquad h_0 = \sigma(3.5)\approx 0.9707$$
$$z_1 = 0.5\times 4 - 0.9707 - 1 = 0.0293,\qquad h_1 = \sigma(0.0293)\approx 0.5073$$
$$y_1 = -0.7\times 0.5073 \approx -0.3551$$
So the answer is $-0.36$.
The code for the calculation is below.
from math import exp

# Network parameters
Wxh = 0.5
Whh = -1
Why = -0.7
hbias = -1
ybias = 0

# Inputs at T = 0, 1, 2
x0 = 9
x1 = 4
x2 = -2

def f(x):
    """Logistic function."""
    return 1 / (1 + exp(-x))

# T = 0
z0 = Wxh * x0 + hbias
h0 = f(z0)

# T = 1
z1 = Wxh * x1 + Whh * h0 + hbias
h1 = f(z1)
y1 = Why * h1 + ybias

# T = 2 (not needed for this problem, computed for completeness)
z2 = Wxh * x2 + Whh * h1 + hbias
h2 = f(z2)
y2 = Why * h2 + ybias

print(y1)  # approximately -0.3551, i.e. -0.36 to two decimal places
Problem 4
The figure below shows a Recurrent Neural Network (RNN) with one input unit $x$, one logistic hidden unit $h$, and one linear output unit $y$. The RNN is unrolled in time for $T=0,1$, and $2$.
The network parameters are: $W_{xh}=-0.1,W_{hh}=0.5, W_{hy}=0.25 , h_{bias}=0.4$, and $y_{bias}=0.0$.
If the input $x$ takes the values $18 , 9 , -8$ at time steps $0, 1 ,2$ respectively, the hidden unit values will be $0.2 , 0.4 , 0.8$ and the output unit values will be $0.05 , 0.1 , 0.2$ (you can check these values as an exercise). A variable $z$ is defined as the total input to the hidden unit before the logistic nonlinearity.
If we are using the squared loss, with targets $t_0,t_1,t_2$, then the sequence of calculations required to compute the total error $E$ is:
$$z_t = W_{xh}x_t + W_{hh}h_{t-1} + h_{\text{bias}},\qquad h_t = \sigma(z_t),\qquad y_t = W_{hy}h_t + y_{\text{bias}},\qquad E = \frac{1}{2}\sum_{t=0}^{2}(y_t - t_t)^2$$
(with no $W_{hh}h_{t-1}$ term at $t=0$).
If the target output values are $t_0 = 0.1 , t_1=-0.1 , t_2=-0.2$ and the squared error loss is used, what is the value of the error derivative just before the hidden unit nonlinearity at $T=1$ (i.e. $\frac{\partial E}{\partial z_1}$)? Write your answer up to at least the fourth decimal place.
This problem is just the chain rule; note that $\frac{d\sigma(z)}{dz}=\sigma(z)(1-\sigma(z))$. The error reaches $z_1$ along two paths: directly through $y_1$, and through $z_2$ and $y_2$ (since $h_1$ feeds into $z_2$):
$$\frac{\partial E}{\partial z_1} = (y_1-t_1)\,W_{hy}\,\sigma'(z_1) + (y_2-t_2)\,W_{hy}\,\sigma'(z_2)\,W_{hh}\,\sigma'(z_1) \approx 0.0139$$
The code for the calculation is below.
from math import exp

# Network parameters
Wxh = -0.1
Whh = 0.5
Why = 0.25
hbias = 0.4
ybias = 0

# Inputs, hidden values, outputs, and targets at T = 0, 1, 2
x0, x1, x2 = 18, 9, -8
h0, h1, h2 = 0.2, 0.4, 0.8
y0, y1, y2 = 0.05, 0.1, 0.2
t0, t1, t2 = 0.1, -0.1, -0.2

def f(x):
    """Logistic function."""
    return 1 / (1 + exp(-x))

def d(x):
    """Derivative of the logistic function: sigma(x) * (1 - sigma(x))."""
    return f(x) * (1 - f(x))

# Total inputs to the hidden unit
z0 = Wxh * x0 + hbias
z1 = Wxh * x1 + Whh * h0 + hbias
z2 = Wxh * x2 + Whh * h1 + hbias

# dE/dz1: the direct path through y1, plus the path through z2 and y2
dE_dz1 = (y1 - t1) * Why * d(z1) + (y2 - t2) * Why * d(z2) * Whh * d(z1)
print(dE_dz1)  # approximately 0.0139
Problem 5
Consider a Recurrent Neural Network with one input unit, one logistic hidden unit, and one linear output unit. This RNN is for modeling sequences of length $4$ only, and the output unit exists only at the last time step, i.e. $T=3$. This diagram shows the RNN unrolled in time:
Suppose that the model has learned the following parameter values:
- $w_{xh}=1$
- $w_{hh}=2$
- $w_{hy}=1$
- All biases are 0
For one specific training case, the input is $1$ at $T=0$ and $0$ at $T=1, T=2,$ and $T=3$. The target output (at $T=3$) is $0.5$, and we’re using the squared error loss function.
We’re interested in the gradient for $w_{xh}$, i.e. $\frac{\partial E}{\partial w_{xh}}$. Because it’s only at $T=0$ that the input is not zero, and it’s only at $T=3$ that there’s an output, the error needs to be backpropagated from $T=3$ to $T=0$, and that’s the kind of situation where RNNs often get either vanishing or exploding gradients. Which of those two occurs in this situation?
You can either do the calculations, and find the answer that way, or you can find the answer with more thinking and less math, by thinking about the slope $\frac{\partial y}{\partial z}$ of the logistic function, and what role that plays in the backpropagation process.
- Vanishing gradient
- Exploding gradient
Let's compute directly. The forward pass is
$$z_0 = w_{xh}x_0 = 1,\quad h_0=\sigma(1)\approx 0.731,\qquad z_t = w_{hh}h_{t-1},\quad h_t=\sigma(z_t)\ \ (t=1,2,3),$$
giving $h_1\approx 0.812$, $h_2\approx 0.835$, $h_3\approx 0.842$, and $y=h_3$. Backpropagating from $T=3$ to $T=0$ multiplies the gradient by $w_{hh}\,\sigma'(z_t)$ at every step. Since $\sigma'(z)\le 0.25$, each factor is at most $2\times 0.25=0.5<1$ (here the factors are roughly $0.28$ to $0.39$), so the gradient shrinks at every step. The answer is Vanishing gradient.
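A quick numerical check, in the same style as the earlier snippets, makes the shrinkage visible (a sketch; the $\frac{1}{2}$ factor in the loss is an assumption, chosen so that $\frac{\partial E}{\partial y}=y-t$):

from math import exp

Wxh, Whh, Why = 1, 2, 1
x0, target = 1, 0.5

def f(k):
    return 1 / (1 + exp(-k))

def d(k):
    return f(k) * (1 - f(k))  # derivative of the logistic function

# Forward pass: input only at T=0, recurrent input afterwards.
z = [Wxh * x0]
h = [f(z[0])]
for t in range(1, 4):
    z.append(Whh * h[t - 1])
    h.append(f(z[t]))
y = Why * h[3]

# Backward pass: assuming E = (y - target)**2 / 2, so dE/dy = y - target.
grad = (y - target) * Why * d(z[3])
for t in (2, 1, 0):
    print(Whh * d(z[t]))  # roughly 0.28, 0.31, 0.39: every factor is below 1
    grad *= Whh * d(z[t])
grad *= x0  # dz0/dWxh = x0
print(grad)  # about 0.0015: the gradient has all but vanished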
Problem 6
Consider the following Recurrent Neural Network (RNN):
As you can see, the RNN has two input units, two hidden units, and one output unit.
For this question, every arrow denotes the effect of a variable at time $t$ on a variable at time $t+1$.
Which feed forward Neural Network is equivalent to this network unrolled in time?
This one is straightforward: $x_1$ connects to $h_1$ and $y$, and $h_1$ connects to $h_1$ and $h_2$, so the first diagram is the correct unrolling.