Deep Learning Part 1: Neural Networks and Deep Learning


1 Introduction

This series contains my notes on Andrew Ng's Deep Learning course, which can be studied for free on Coursera and on NetEase Cloud Classroom (网易云课堂). The course has five parts, covering deep learning, optimizing deep networks, strategies for structuring deep learning projects, convolutional neural networks, and recurrent neural networks. This series of notes is likewise split into five corresponding parts.

This post focuses on how to use numpy to implement backpropagation with gradient descent, training everything from a single neuron, to a single-layer network, and on to a multi-layer fully connected neural network.

2 Logistic Regression

2.1 Notation

$x^{(i)}$: the superscript denotes the $i$-th example in the training set
$x_j^{(i)}$: if $x \in \mathbb{R}^n$, the subscript denotes the $j$-th component of the vector $x^{(i)}$
$y^{(i)}$: the label of the $i$-th example

The training set is written as $\left[ (x^{(1)},y^{(1)}),(x^{(2)},y^{(2)}),\dots,(x^{(m)},y^{(m)}) \right]$

To make the computations convenient, X and Y are stacked into matrices, with one training example per column (a small numpy sketch of this layout follows the notation):
$X=\left[ \matrix{x^{(1)} & x^{(2)} & \dots & x^{(m)}} \right]$, of shape $(\dim(x), m)$
$Y=\left[ \matrix{y^{(1)} & y^{(2)} & \dots & y^{(m)}} \right]$, of shape $(1, m)$
$\hat y^{(i)}$: the output predicted by the algorithm for the $i$-th input
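
As a quick illustration of this layout (a minimal numpy sketch with made-up sizes, n = 3 features and m = 5 examples; not part of the course code):

import numpy as np

n, m = 3, 5                           # made-up sizes: 3 features, 5 examples
X = np.random.randn(n, m)             # each column X[:, i] is one example x^(i)
Y = np.random.randint(0, 2, (1, m))   # one 0/1 label per column

print(X.shape)   # (3, 5) -> (dim(x), m)
print(Y.shape)   # (1, 5) -> (1, m)
print(X[:, 0])   # the first training example x^(1)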

2.2 Logistic Regression

Logistic regression is a supervised learning algorithm that can be used for binary classification. Every neuron in a neural network is a logistic regression unit. The figure below illustrates the idea of logistic regression:
(Figure: schematic of an artificial neuron)

Take the binary classification problem of deciding whether an image contains a cat. The image can be represented as a vector $x$ of $n$ features, and the algorithm outputs the probability that the image is a cat given the input $x$, i.e.

$$\hat y = P(y=1|x) = \sigma(w^T x + b)$$

where $\sigma$ is the sigmoid activation function: $\sigma(z) = \frac{1}{1+e^{-z}}$
(Figure: the sigmoid function)
Its derivative is $\sigma' = \sigma(1 - \sigma)$.
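
As a small sanity check of this derivative formula (a sketch, not part of the course code), we can compare $\sigma' = \sigma(1-\sigma)$ against a central finite-difference approximation:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

z = np.linspace(-5, 5, 11)
analytic = sigmoid(z) * (1 - sigmoid(z))                      # sigma' = sigma * (1 - sigma)
eps = 1e-6
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)   # central difference
print(np.max(np.abs(analytic - numeric)))                     # tiny, ~1e-10 or smaller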

The parameters of logistic regression are $w \in \mathbb{R}^n$ and $b \in \mathbb{R}$. Once we have a set of labeled images, we can train on the image features and labels to obtain suitable values of w and b, so that the model can classify whether an image shows a cat.

2.3 The Logistic Regression Loss Function

The loss function measures the error between the neuron's output and the true label: the more accurately the model classifies, the smaller the error. Our goal is therefore to find parameters w, b that minimize the accumulated error over the data set, so that the regression unit performs the classification task as well as possible.
The loss of the logistic unit on a single sample:

$$L(\hat y, y) = -y\log\hat y - (1-y)\log(1-\hat y)$$
The overall cost over the whole training set:
$$J(w,b) = \frac{1}{m}\sum_{i=1}^m L(\hat y^{(i)}, y^{(i)}) = -\frac{1}{m}\sum_{i=1}^m \left[ y^{(i)}\log\hat y^{(i)} + (1-y^{(i)})\log(1-\hat y^{(i)}) \right]$$
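
As a sketch of how this cost can be evaluated in numpy (assuming arrays A for the predictions $\hat y$ and Y for the labels, both of shape (1, m); the helper name cross_entropy_cost is made up for illustration):

import numpy as np

def cross_entropy_cost(A, Y):
    # A, Y have shape (1, m); A holds predictions y-hat, Y holds 0/1 labels
    m = Y.shape[1]
    # J = -(1/m) * sum( y*log(y_hat) + (1-y)*log(1-y_hat) )
    return -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m

A = np.array([[0.9, 0.2, 0.7]])
Y = np.array([[1, 0, 1]])
print(cross_entropy_cost(A, Y))   # small, because the predictions match the labels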

2.4 Gradient Descent

Gradient descent is a method for finding extrema of a function. From calculus we know that, for a function of a single variable, the gradient is simply the derivative, whose geometric meaning is the slope of the tangent line at a point.

Gradient descent applied to the cost function $J(w,b)$ proceeds as follows:

  1. Initialize the parameters w and b, either randomly or to zero
  2. Update w and b using the learning rate $\alpha$ (a toy numeric example of this update loop follows the list):
    $$w := w - \alpha \frac{\partial J}{\partial w}, \qquad b := b - \alpha \frac{\partial J}{\partial b}$$
  3. Stop updating once the error is acceptably small or can no longer be reduced
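
As a toy illustration of these three steps (minimizing a one-dimensional quadratic instead of the real cost $J(w,b)$, purely for intuition):

# Toy gradient descent: minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3).
alpha = 0.1           # learning rate
w = 0.0               # step 1: initialize
for i in range(100):  # step 2: repeatedly move w in the negative gradient direction
    grad = 2 * (w - 3)
    w = w - alpha * grad
print(w)              # step 3: w has converged very close to the minimizer 3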

2.5 The Logistic Regression Computation Graph

Computing the gradient requires the chain rule to differentiate a composite function, and a computation graph is a convenient way to describe and implement that process. First, rewrite the logistic regression computation with the following symbols:
$z = w^T x + b$
$\hat y = a = \sigma(z)$
$L(a, y) = -y\log a - (1-y)\log(1-a)$

The computation graph of logistic regression is therefore:

    A["w"]-->|变量w| D
    B["b"] -->|变量b| D
    C["x"] -->|训练集输入| D
    D["z = w.T * x + b"] --> E["a = σ(z)"]
    E --> F["L(a,y) = -yloga - (1-y)log(1-a)"]

The gradient derivation is as follows:
$\frac{\partial L}{\partial a} = -\frac{y}{a} + \frac{1-y}{1-a}$
$\frac{\partial a}{\partial z} = a(1-a)$
$\frac{\partial L}{\partial z} = \frac{\partial L}{\partial a}\frac{\partial a}{\partial z} = a - y$

$\frac{\partial L}{\partial w} = \frac{\partial L}{\partial z}\frac{\partial z}{\partial w} = x(a-y)$
$\frac{\partial L}{\partial b} = \frac{\partial L}{\partial z}\frac{\partial z}{\partial b} = a-y$
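
These single-sample formulas translate directly into a few lines of numpy (a sketch with made-up values for x, y, w and b):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# made-up single example and parameters
x = np.array([[1.0], [2.0]])    # shape (n, 1) with n = 2
y = 1.0
w = np.array([[0.1], [-0.2]])   # shape (n, 1)
b = 0.0

z = np.dot(w.T, x) + b          # shape (1, 1)
a = sigmoid(z)

dz = a - y                      # dL/dz = a - y
dw = x * dz                     # dL/dw = x(a - y), shape (n, 1)
db = dz                         # dL/db = a - y
print(dz.item(), dw.ravel(), db.item())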

2.6 Implementing Logistic Regression

In the previous section we derived the gradient descent formulas for a single sample. This section shows how to implement those formulas with numpy and vectorize the algorithm over m samples.

For multiple samples, we arrange the data into matrices as described in section 2.1 to speed up the computation. The cost is the average of the per-sample losses over the whole training set, so the goal is to minimize this overall cost, again by gradient descent. The implementation is as follows:

import numpy as np


learn_rate = 0.01

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train(train_x, train_y, loops=100):
    input_dim = train_x.shape[0]
    m = train_x.shape[1]

    # initialize w and b
    w = np.zeros(shape=(input_dim,1))
    b = 0

    for i in range(loops):
        # forward pass: compute the cost
        z = np.dot(w.T, train_x) + b
        A = sigmoid(z)
        cost = - 1/m * np.sum(train_y * np.log(A) + (1-train_y)*np.log(1-A))

        # backward pass: compute gradients and update the parameters
        dZ = A - train_y
        dw = np.dot(train_x, dZ.T)/m
        db = np.sum(dZ)/m
        w -= learn_rate * dw
        b -= learn_rate * db
        print "Afte loop ", i, " get cost ", cost
    return w,b

def predict(w,b,x):
    num = x.shape[1]
    y_pred = np.zeros(shape=(1,num))
    z = np.dot(w.T,x) + b
    A = sigmoid(z)
    for i in range(num):
        y_pred[0, i] = 1 if A[0, i] > 0.5 else 0

    return y_pred

# load_data() returns the training and test sets in the format described in section 2.1
train_x, train_y, test_x, test_y = load_data()
w,b = train(train_x, train_y)
print(predict(w, b, test_x))

3 Shallow Neural Networks

3.1 Representing a Shallow Neural Network

This chapter introduces shallow neural networks, starting from a network with a single hidden layer.
As shown in the figure below, the network has three layers but is conventionally called a two-layer network (the input layer is not counted). Layer 0 is the input layer, layer 1 the hidden layer, and layer 2 the output layer.
To distinguish the parameters of different layers, a bracketed superscript is used: $w^{[i]}$ denotes the weight parameters of layer $i$. Every unit in the hidden and output layers is a logistic regression unit as introduced above.
The number of units in layer $l$ is written $n^{[l]}$, e.g. $n^{[1]} = 4$.
(Figure: a neural network with one hidden layer)

Note that the hidden layer uses the tanh activation function:
$\tanh(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}}$
(Figure: the tanh function)
Its derivative is $\tanh'(z) = 1 - \tanh^2(z)$.
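
The derivative identity can be checked numerically in the same way as for sigmoid (a small sketch, not part of the network code):

import numpy as np

z = np.linspace(-3, 3, 7)
analytic = 1 - np.tanh(z) ** 2                                # tanh' = 1 - tanh^2
eps = 1e-6
numeric = (np.tanh(z + eps) - np.tanh(z - eps)) / (2 * eps)   # central difference
print(np.max(np.abs(analytic - numeric)))                     # tiny, ~1e-10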

In the same way, we first draw the computation graph of the network above:

    A["W1"] --> D
    B["b1"] --> D
    C["x"] --> D
    D["z[1]=W[1]*x+b[1]"] --> E
    E["a[1]=tanh(z[1])"] --> F
    F["z[2]=W[2]*a[1]+b[2]"]-->G
    G["a[2]=σ(z[2])"]-->H["L(a[2],y)"]

Here W1 is a 4×2 matrix and W2 is a 1×4 matrix.

3.2 Deriving Gradient Descent for a Shallow Network

The forward-propagation computation can be read directly off the computation graph, so it is not derived again.
We first derive the backward-propagation formulas for a single sample:
$da^{[2]} = \frac{\partial L}{\partial a^{[2]}} = -\frac{y}{a^{[2]}} + \frac{1-y}{1-a^{[2]}}$
$dz^{[2]} = \frac{\partial a^{[2]}}{\partial z^{[2]}}\, da^{[2]} = a^{[2]} - y$
$dW^{[2]} = dz^{[2]}\, {a^{[1]}}^T$
$db^{[2]} = dz^{[2]}$
$dz^{[1]} = {W^{[2]}}^T dz^{[2]} * (1 - {a^{[1]}}^2)$
$dW^{[1]} = dz^{[1]} x^T$
$db^{[1]} = dz^{[1]}$

These extend readily to the backward-propagation formulas for m samples:
$dZ^{[2]} = A^{[2]} - Y$
$dW^{[2]} = \frac{1}{m} dZ^{[2]} {A^{[1]}}^T$
$db^{[2]} = \frac{1}{m}\, \mathrm{np.sum}(dZ^{[2]}, \mathrm{axis}=1, \mathrm{keepdims=True})$
$dZ^{[1]} = {W^{[2]}}^T dZ^{[2]} * (1 - {A^{[1]}}^2)$
$dW^{[1]} = \frac{1}{m} dZ^{[1]} X^T$
$db^{[1]} = \frac{1}{m}\, \mathrm{np.sum}(dZ^{[1]}, \mathrm{axis}=1, \mathrm{keepdims=True})$

3.3 Shape Alignment

| Matrix | X | W1 | A1/Z1 | W2 | A2/Z2/Y |
| --- | --- | --- | --- | --- | --- |
| Shape | ($n^{[0]}$, m) | ($n^{[1]}$, $n^{[0]}$) | ($n^{[1]}$, m) | ($n^{[2]}$, $n^{[1]}$) | ($n^{[2]}$, m) |

Differentiating these variables does not change their shapes. Follow this table when writing the code to make sure every matrix multiplication is well-defined.
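
One quick way to confirm the table (a sketch with assumed sizes $n^{[0]}=2$, $n^{[1]}=4$, $n^{[2]}=1$ and m = 5) is to assert the shape of each matrix in a dry-run forward pass:

import numpy as np

n0, n1, n2, m = 2, 4, 1, 5          # assumed layer sizes and sample count
X  = np.random.randn(n0, m)
W1 = np.random.randn(n1, n0)
b1 = np.zeros((n1, 1))
W2 = np.random.randn(n2, n1)
b2 = np.zeros((n2, 1))

Z1 = np.dot(W1, X) + b1             # (n1, n0) @ (n0, m) -> (n1, m)
A1 = np.tanh(Z1)
Z2 = np.dot(W2, A1) + b2            # (n2, n1) @ (n1, m) -> (n2, m)

assert Z1.shape == A1.shape == (n1, m)
assert Z2.shape == (n2, m)
print("all shapes line up")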

3.4 Implementing a Shallow Neural Network

The code below implements the formulas above and trains the shallow network over multiple samples.

import numpy as np

hidden_units = 4
lr = 1.2

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train(X, Y, loops=1000):
    m = X.shape[1]
    input_dim = X.shape[0]
    output_dim = Y.shape[0]
    W1 = np.random.randn(hidden_units, input_dim)
    b1 = np.zeros(shape=(hidden_units, 1))
    W2 = np.random.randn(output_dim, hidden_units)
    b2 = np.zeros(shape=(output_dim, 1))

    for i in range(loops):
        # forward propagate
        Z1 = np.dot(W1, X) + b1
        A1 = np.tanh(Z1)
        Z2 = np.dot(W2, A1) + b2
        A2 = sigmoid(Z2)

        loss = Y * np.log(A2) + (1 - Y) * np.log(1 - A2)
        loss = - np.sum(loss)/ m
        print "After loops ",i," get loss ",loss
        # backward propagate
        dZ2 = A2 - Y
        dW2 = np.dot(dZ2, A1.T) / m
        db2 = np.sum(dZ2, axis=1, keepdims=True) / m
        dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1, 2))
        dW1 = np.dot(dZ1, X.T) / m
        db1 = np.sum(dZ1, axis=1, keepdims=True) / m

        # update params
        W1 = W1 - lr * dW1
        b1 = b1 - lr * db1
        W2 = W2 - lr * dW2
        b2 = b2 - lr * db2
    return {"W1":W1,"b1":b1,"W2":W2,"b2":b2}

def predict(X, Y, params):
    W1 = params['W1']
    b1 = params['b1']
    W2 = params['W2']
    b2 = params['b2']

    Z1 = np.dot(W1, X) + b1
    A1 = np.tanh(Z1)
    Z2 = np.dot(W2, A1) + b2
    A2 = sigmoid(Z2)

    y_pred = np.round(A2)
    accuracy = float(np.dot(Y, y_pred.T) + np.dot(1 - Y, 1 - y_pred.T)) / Y.size * 100
    return y_pred, accuracy

train_x, train_y, test_x, test_y = load_data()
params = train(train_x, train_y, loops=1000)

_, accuracy = predict(test_x, test_y, params)
print "Accuracy is", accuracy

3.5 Why We Need Activation Functions

Without nonlinear activation functions, a network computes only a linear (affine) function of its input no matter how many layers it has, so we might as well drop all the hidden layers. In other words, linear hidden layers contribute nothing: a stack of linear layers is still a single linear map and can only capture linear relationships, not more complex ones.
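
A small numerical sketch of this point: stacking two purely linear layers is exactly equivalent to a single linear layer with W = W2·W1 and b = W2·b1 + b2 (the sizes below are made up for illustration).

import numpy as np

np.random.seed(0)
X  = np.random.randn(3, 5)                   # 3 features, 5 samples
W1 = np.random.randn(4, 3); b1 = np.random.randn(4, 1)
W2 = np.random.randn(1, 4); b2 = np.random.randn(1, 1)

# two linear "layers" with no activation in between
two_layers = np.dot(W2, np.dot(W1, X) + b1) + b2

# one equivalent linear layer
W = np.dot(W2, W1)
b = np.dot(W2, b1) + b2
one_layer = np.dot(W, X) + b

print(np.allclose(two_layers, one_layer))    # True: the hidden layer adds nothing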

4 Deep Neural Networks

4.1 Introducing Deep Neural Networks

In contrast with shallow networks, a deep network is one with more than two hidden layers. The number of hidden layers can be large, up to hundreds of layers. Deep networks are also much harder to train and require additional optimization techniques.
The figure below shows an example of a deep neural network:
(Figure: an example deep neural network)

4.2 Shape Alignment in a Deep Network

Consider the input layer, an arbitrary hidden layer $i$, and the output layer (denoted $L$):

| Matrix | X | Wi | bi | Zi/Ai | Y |
| --- | --- | --- | --- | --- | --- |
| Shape | ($n^{[0]}$, m) | ($n^{[i]}$, $n^{[i-1]}$) | ($n^{[i]}$, 1) | ($n^{[i]}$, m) | ($n^{[L]}$, m) |

4.3 Forward and Backward Propagation in a Deep Network

With the groundwork from chapter 3, we can write the forward and backward propagation formulas for an arbitrary layer $l$ of a deep network:
$Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$
$A^{[l]} = g^{[l]}(Z^{[l]})$
$dZ^{[l]} = dA^{[l]} * g^{[l]\prime}(Z^{[l]})$
$dW^{[l]} = \frac{1}{m} dZ^{[l]} {A^{[l-1]}}^T$
$db^{[l]} = \frac{1}{m}\, \mathrm{np.sum}(dZ^{[l]}, \mathrm{axis}=1, \mathrm{keepdims=True})$
$dA^{[l-1]} = {W^{[l]}}^T dZ^{[l]}$
For the output layer: $dA^{[L]} = -\frac{Y}{A} + \frac{1-Y}{1-A}$

4.4 Implementing a Deep Neural Network

This section implements a deep neural network whose hidden layers use the ReLU activation and whose output layer uses sigmoid. The number of layers and the size of each layer are determined by the parameter layer_dims. First, its computation graph:

    A["W1"] --> D
    B["b1"] --> D
    C["x"] --> D
    D["z[1]=W[1]*x+b[1]"] --> E
    E["a[1]=relu(z[1])"] -->|"...若干relu隐层..."| F
    F["z[l]=W[l]*a[l-1]+b[l]"]-->G
    G["a[l]=σ(z[l])"]-->H["L(a[l],y)"]

The implementation code follows:

import numpy as np


learn_rate = 0.0075
loops = 3000

def sigmoid(Z):
    A = 1 / (1 + np.exp(-Z))
    return A

def relu(Z):
    A = np.maximum(0, Z)
    return A

def init_parameters(layer_dims):
    np.random.seed(1)
    params = {}
    layers = len(layer_dims)

    for l in range(1, layers):
        w = np.random.randn(layer_dims[l], layer_dims[l - 1])
        w = w * 0.01 / np.sqrt(layer_dims[l - 1])
        params['W' + str(l)] = w 
        params['b' + str(l)] = np.zeros((layer_dims[l], 1))
    return params   

def update_parameters(parameters, grads, learn_rate):
    layers = len(parameters) // 2
    for l in range(layers):
        parameters["W" + str(l + 1)] -= learn_rate * grads["dW" + str(l + 1)]
        parameters["b" + str(l + 1)] -= learn_rate * grads["db" + str(l + 1)]

    return parameters

def compute_cost(AL, Y):
    m = Y.shape[1]
    cost =  (-np.dot(Y, np.log(AL).T) - np.dot(1 - Y, np.log(1 - AL).T)) / m
    cost = np.squeeze(cost)
    return cost

def relu_activation_forward(A_prev, W, b):
    Z = W.dot(A_prev) + b
    A = relu(Z)
    return A, (A_prev, W, b), Z

def sigmoid_activation_forward(A_prev, W, b):
    Z = W.dot(A_prev) + b
    A = sigmoid(Z)
    return A, (A_prev, W, b), Z


def deep_model_forward(X, parameters):
    caches = []
    A = X
    layers = len(parameters) // 2

    for layer in range(1, layers):
        A_prev = A
        A, awb, z = relu_activation_forward(A_prev,parameters['W' + str(layer)],parameters['b' + str(layer)])
        caches.append((awb,z))

    AL, awb, z = sigmoid_activation_forward(A,parameters['W' + str(layers)],parameters['b' + str(layers)])
    caches.append((awb, z))

    return AL, caches


def linear_backward(dZ, awb):
    A_prev, W, b = awb
    m = A_prev.shape[1]

    dW = 1. / m * np.dot(dZ, A_prev.T)
    db = 1. / m * np.sum(dZ, axis=1, keepdims=True)
    dA_prev = np.dot(W.T, dZ)

    return dA_prev, dW, db


def relu_activation_backward(dA, cache):
    awb, Z = cache
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0
    dA_prev, dW, db = linear_backward(dZ, awb)
    return dA_prev, dW, db


def sigmoid_activation_backward(dA, cache):
    awb, Z = cache
    s = sigmoid(Z)
    dZ = dA * s * (1 - s)
    dA_prev, dW, db = linear_backward(dZ, awb)
    return dA_prev, dW, db

def deep_model_backward(AL, Y, caches):
    grads = {}
    layers = len(caches) 
    Y = Y.reshape(AL.shape)

    dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
    current_cache = caches[layers - 1]
    current_layer = str(layers)
    dA_prev_temp, dW_temp, db_temp = sigmoid_activation_backward(dAL, current_cache)
    grads["dA" + current_layer] = dA_prev_temp
    grads["dW" + current_layer] = dW_temp
    grads["db" + current_layer] = db_temp

    for layer in reversed(range(layers - 1)):
        current_cache = caches[layer]
        dA_prev_temp, dW_temp, db_temp = relu_activation_backward(grads["dA" + str(layer + 2)], current_cache)
        grads["dA" + str(layer + 1)] = dA_prev_temp
        grads["dW" + str(layer + 1)] = dW_temp
        grads["db" + str(layer + 1)] = db_temp

    return grads

def deep_layer_model(X, Y, layers_dims):
    parameters = init_parameters(layers_dims)

    for i in range(0, loops):
        AL, caches = deep_model_forward(X, parameters)
        cost = compute_cost(AL, Y)
        grads = deep_model_backward(AL, Y, caches)
        parameters = update_parameters(parameters, grads, learn_rate)
        print ("Cost after loops %i: %f" % (i, cost))

    return parameters

def predict(X, y, parameters):
    m = X.shape[1]
    n = len(parameters) // 2 
    p = np.zeros((1, m))

    probas, caches = deep_model_forward(X, parameters)

    for i in range(0, probas.shape[1]):
        if probas[0, i] > 0.5:
            p[0, i] = 1
        else:
            p[0, i] = 0
    print("Accuracy: " + str(np.sum((p == y)*1. / m)))
    return p

train_x, train_y, test_x, test_y = load_data()

input_dim = 4
hidden_units = 7
output_dim = 1
layer_dims = (input_dim, hidden_units, output_dim )

model = deep_layer_model(train_x, train_y, layer_dims)
predict(test_x, test_y, model)