Deep Learning
Basic
- Neural network
- Supervised learning: one input x maps to one label y
- Sigmoid: activation function
$$
\text{sigmoid}(x)=\frac{1}{1+e^{-x}}
$$
- ReLU: rectified linear unit
Logistic Regression
→ binary classification: x → y ∈ {0, 1}
Notation
$$
(x,y),\quad x\in\mathbb{R}^{n_x},\quad y\in\{0,1\}\\
m=m_{train}\ \text{training examples},\quad m_{test}\ \text{test examples}\\
\{(x^{(1)},y^{(1)}),(x^{(2)},y^{(2)}),\ldots,(x^{(m)},y^{(m)})\}\\
X=\begin{bmatrix} x^{(1)} & x^{(2)} & \cdots & x^{(m)} \end{bmatrix}\leftarrow n_x\times m\\
\hat{y}=P(y=1\mid x),\quad \hat{y}=\sigma(w^Tx+b),\qquad w\in\mathbb{R}^{n_x},\quad b\in\mathbb{R}\\
\sigma(z)=\frac{1}{1+e^{-z}}
$$
Loss function
For a single example:
$$
\text{squared error: }\mathcal{L}(\hat{y},y)=\frac{1}{2}(\hat{y}-y)^2\\
p(y\mid x)=\hat{y}^y(1-\hat{y})^{1-y}\\
\min\ \text{cost}\rightarrow\max\ \log p(y\mid x)\\
\mathcal{L}(\hat{y},y)=-(y\log(\hat{y})+(1-y)\log(1-\hat{y}))\\
y=1:\ \mathcal{L}(\hat{y},y)=-\log\hat{y}\quad \log\hat{y}\leftarrow\text{larger}\quad \hat{y}\leftarrow\text{larger}\\
y=0:\ \mathcal{L}(\hat{y},y)=-\log(1-\hat{y})\quad \log(1-\hat{y})\leftarrow\text{larger}\quad \hat{y}\leftarrow\text{smaller}
$$
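The behavior of the cross-entropy loss can be checked numerically; a minimal sketch (the probabilities and the helper name `cross_entropy_loss` are made-up illustration choices):

```python
import numpy as np

def cross_entropy_loss(y_hat, y):
    """Cross-entropy loss for a single example: L(y_hat, y)."""
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

# When y = 1 the loss is -log(y_hat): it shrinks as y_hat grows toward 1.
loss_confident = cross_entropy_loss(0.9, 1)  # prediction close to the label
loss_wrong = cross_entropy_loss(0.1, 1)      # prediction far from the label
```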
cost function
$$
\mathcal{J}(w,b)=\frac{1}{m}\sum_{i=1}^{m}\mathcal{L}(\hat{y}^{(i)},y^{(i)})
$$
Gradient Descent
find w, b that minimize J(w,b);
Repeat:
$$
w:=w-\alpha\frac{\partial\mathcal{J}(w,b)}{\partial w}\quad(dw)\\
b:=b-\alpha\frac{\partial\mathcal{J}(w,b)}{\partial b}\quad(db)
$$
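The update rule can be illustrated on a toy one-dimensional objective; the quadratic J(w) = (w − 3)² below is a made-up example, not from the course:

```python
# Repeat w := w - alpha * dJ/dw until convergence.
# Toy objective J(w) = (w - 3)^2, minimized at w = 3.
alpha = 0.1  # learning rate
w = 0.0
for _ in range(200):
    dw = 2 * (w - 3)    # dJ/dw for this toy objective
    w = w - alpha * dw  # gradient-descent step
```

After enough iterations w converges to the minimizer.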
Computation Graph
example:
$$
J=3(a+bc)
$$
Gradient descent for one example as a computation graph:
recap:
$$
z=w^Tx+b\\
\hat{y}=a=\sigma(z)=\frac{1}{1+e^{-z}}\\
\mathcal{L}(a,y)=-(y\log(a)+(1-y)\log(1-a))
$$
The graph:
$$
'da'=\frac{d\mathcal{L}(a,y)}{da}=-\frac{y}{a}+\frac{1-y}{1-a}\\
'dz'=\frac{d\mathcal{L}(a,y)}{dz}=\frac{d\mathcal{L}}{da}\cdot\frac{da}{dz}=a-y\\
'dw_1'=x_1\cdot dz\quad\ldots\\
w_1:=w_1-\alpha\, dw_1\quad\ldots
$$
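These derivatives can be verified directly for a single example; the feature and parameter values below are made-up illustration data:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Made-up single example with two features x1, x2.
x = np.array([1.0, 2.0])
w = np.array([0.5, -0.3])
b = 0.1
y = 1.0

z = np.dot(w, x) + b
a = sigmoid(z)

da = -y / a + (1 - y) / (1 - a)  # dL/da
dz = a - y                       # dL/dz, equals da * a * (1 - a)
dw = x * dz                      # dw_i = x_i * dz
db = dz
```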
Gradient descent for m examples as a computation graph:
recap:
$$
\mathcal{J}(w,b)=\frac{1}{m}\sum_{i=1}^m\mathcal{L}(a^{(i)},y^{(i)})
$$
The graph (the pseudocode below needs two loops: one over the m examples, one over the features):
$$
\frac{\partial}{\partial w_1}\mathcal{J}(w,b)=\frac{1}{m}\sum_{i=1}^m\frac{\partial}{\partial w_1}\mathcal{L}(a^{(i)},y^{(i)})\\
\text{For } i=1 \text{ to } m:\ \{\\
\quad a^{(i)}=\sigma(w^Tx^{(i)}+b)\\
\quad \mathcal{J}\mathrel{+}=-[y^{(i)}\log a^{(i)}+(1-y^{(i)})\log(1-a^{(i)})]\\
\quad dz^{(i)}=a^{(i)}-y^{(i)}\\
\quad dw_1\mathrel{+}=x_1^{(i)}dz^{(i)}\\
\quad dw_2\mathrel{+}=x_2^{(i)}dz^{(i)}\\
\quad db\mathrel{+}=dz^{(i)}\ \}\\
\mathcal{J}/=m;\quad dw_1/=m;\quad dw_2/=m;\quad db/=m\\
dw_1=\frac{\partial\mathcal{J}}{\partial w_1}\\
w_1=w_1-\alpha\, dw_1
$$
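The loop pseudocode translates directly into Python; `logistic_grads_loop` is a hypothetical helper name, and the two-feature layout follows the notes:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_grads_loop(X, Y, w, b):
    """Explicit-loop gradient computation over m examples, mirroring the
    pseudocode above. X: (2, m) with two features; Y: (m,)."""
    m = X.shape[1]
    J = 0.0
    dw1 = dw2 = db = 0.0
    for i in range(m):
        z = w[0] * X[0, i] + w[1] * X[1, i] + b
        a = sigmoid(z)
        J += -(Y[i] * np.log(a) + (1 - Y[i]) * np.log(1 - a))
        dz = a - Y[i]
        dw1 += X[0, i] * dz
        dw2 += X[1, i] * dz
        db += dz
    return J / m, dw1 / m, dw2 / m, db / m
```

This version is slow for large m; the Vectorization section below removes both loops.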
Vectorization
vectorized
$$
z=np.dot(w,x)+b
$$
logistic regression derivatives:
change:
$$
dw_1=0,\ dw_2=0\ \rightarrow\ dw=np.zeros((n_x,1))\\
\begin{cases}dw_1\mathrel{+}=x_1^{(i)}dz^{(i)}\\ dw_2\mathrel{+}=x_2^{(i)}dz^{(i)}\end{cases}\rightarrow\ dw\mathrel{+}=x^{(i)}dz^{(i)}\\
Z=\begin{pmatrix} z^{(1)} & z^{(2)} & \ldots & z^{(m)}\end{pmatrix}=w^TX+b\\
A=\sigma(Z)\\
dZ=A-Y=\begin{pmatrix} a^{(1)}-y^{(1)} & a^{(2)}-y^{(2)} & \ldots & a^{(m)}-y^{(m)}\end{pmatrix}\\
db=\frac{1}{m}\sum_{i=1}^m dz^{(i)}=\frac{1}{m}np.sum(dZ)\\
dw=\frac{1}{m}XdZ^T=\frac{1}{m}\begin{pmatrix} x^{(1)}dz^{(1)}+x^{(2)}dz^{(2)}+\cdots+x^{(m)}dz^{(m)}\end{pmatrix}
$$
Implementing:
$$
Z=w^TX+b=np.dot(w^T,X)+b\\
A=\sigma(Z)\\
J=-\frac{1}{m}\sum_{i=1}^m\left(y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})\right)\\
dZ=A-Y\\
dw=\frac{1}{m}XdZ^T\\
db=\frac{1}{m}np.sum(dZ)\\
w:=w-\alpha\, dw\\
b:=b-\alpha\, db
$$
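Putting these vectorized steps together, one gradient-descent iteration might look like this sketch (`logistic_step` is an assumed helper name, not from the course):

```python
import numpy as np

def sigmoid(Z):
    return 1.0 / (1.0 + np.exp(-Z))

def logistic_step(X, Y, w, b, alpha):
    """One fully vectorized gradient-descent step for logistic regression.
    X: (n_x, m), Y: (1, m), w: (n_x, 1), b: scalar."""
    m = X.shape[1]
    Z = np.dot(w.T, X) + b    # (1, m)
    A = sigmoid(Z)
    J = -np.mean(Y * np.log(A) + (1 - Y) * np.log(1 - A))
    dZ = A - Y
    dw = np.dot(X, dZ.T) / m  # (n_x, 1)
    db = np.sum(dZ) / m
    return w - alpha * dw, b - alpha * db, J
```

Calling this in a loop drives the cost J down without any explicit per-example iteration.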
broadcasting
$$
np.dot(w^T,X)+b
$$
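A small illustration of the broadcasting at work in `np.dot(w.T, X) + b`: the scalar `b` is stretched across the (1, m) matrix product (the numbers below are made up):

```python
import numpy as np

w = np.array([[0.5], [-0.2]])    # (2, 1)
X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])  # (2, 3): two features, three examples
b = 0.1                          # scalar

# np.dot(w.T, X) has shape (1, 3); the scalar b is broadcast
# (conceptually copied) across all three columns before the addition.
Z = np.dot(w.T, X) + b
```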
A note on numpy
$$
a=np.random.randn(5)\quad\text{// rank-1 array, avoid}\ \rightarrow\ a=a.reshape(5,1)\\
assert(a.shape==(5,1))\\
a=np.random.randn(5,1)\ \rightarrow\ \text{column vector}
$$
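The rank-1 pitfall can be demonstrated directly:

```python
import numpy as np

a = np.random.randn(5)     # rank-1 array, shape (5,) -- avoid
# A rank-1 array is neither a row nor a column vector; a.T changes nothing.
a = a.reshape(5, 1)        # fix: make the shape explicit
assert a.shape == (5, 1)

b = np.random.randn(5, 1)  # column vector from the start
assert b.T.shape == (1, 5) # transpose now behaves as expected
```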
Shallow Neural Network
Representation
2 layer NN:
$$
\text{Input layer}\rightarrow\text{hidden layer}\rightarrow\text{output layer}\\
a^{[0]}\rightarrow a^{[1]}\rightarrow a^{[2]}\\
z^{[1]}=W^{[1]}a^{[0]}+b^{[1]}\\
a^{[1]}=\sigma(z^{[1]})\\
z^{[2]}=W^{[2]}a^{[1]}+b^{[2]}\\
a^{[2]}=\sigma(z^{[2]})=\hat{y}
$$
computing:
$$
z_i^{[1]}=w_i^{[1]T}x+b_i^{[1]}\\
a_i^{[1]}=\sigma(z_i^{[1]})\\
\begin{bmatrix} w_1^{[1]T}\\ w_2^{[1]T}\\ w_3^{[1]T}\\ w_4^{[1]T}\end{bmatrix}\cdot\begin{bmatrix} x_1\\ x_2\\ x_3\end{bmatrix}+\begin{bmatrix} b_1^{[1]}\\ b_2^{[1]}\\ b_3^{[1]}\\ b_4^{[1]}\end{bmatrix}=\begin{bmatrix} z_1^{[1]}\\ z_2^{[1]}\\ z_3^{[1]}\\ z_4^{[1]}\end{bmatrix}
$$
Vectorize:
$$
x^{(i)}\rightarrow a^{[2](i)}=\hat{y}^{(i)}\\
Z^{[1]}=W^{[1]}X+b^{[1]}\\
A^{[1]}=\sigma(Z^{[1]})\\
Z^{[2]}=W^{[2]}A^{[1]}+b^{[2]}\\
A^{[2]}=\sigma(Z^{[2]})\\
W^{[1]}\cdot\begin{bmatrix} x^{(1)} & x^{(2)} & \cdots & x^{(m)}\end{bmatrix}+b^{[1]}=\begin{bmatrix} z^{[1](1)} & z^{[1](2)} & \cdots & z^{[1](m)}\end{bmatrix}=Z^{[1]}
$$
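The vectorized forward pass can be sketched as follows (`forward_2layer` is an assumed helper name; the layer sizes are made-up, shapes follow the notes):

```python
import numpy as np

def sigmoid(Z):
    return 1.0 / (1.0 + np.exp(-Z))

def forward_2layer(X, W1, b1, W2, b2):
    """Vectorized forward pass over all m examples at once.
    X: (n0, m), W1: (n1, n0), b1: (n1, 1), W2: (n2, n1), b2: (n2, 1)."""
    Z1 = np.dot(W1, X) + b1   # (n1, m); b1 broadcast across columns
    A1 = sigmoid(Z1)
    Z2 = np.dot(W2, A1) + b2  # (n2, m)
    A2 = sigmoid(Z2)
    return A1, A2

# Made-up sizes: 3 inputs, 4 hidden units, 1 output, 7 examples.
np.random.seed(0)
X = np.random.randn(3, 7)
W1 = np.random.randn(4, 3) * 0.01
b1 = np.zeros((4, 1))
W2 = np.random.randn(1, 4) * 0.01
b2 = np.zeros((1, 1))
A1, A2 = forward_2layer(X, W1, b1, W2, b2)
```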
Activation functions
$$
\text{sigmoid: } a=\frac{1}{1+e^{-z}},\quad a'=a(1-a)\\
\text{tanh: } a=\tanh(z)=\frac{e^z-e^{-z}}{e^z+e^{-z}},\quad a\in(-1,1),\quad a'=1-a^2\\
\text{ReLU: } a=\max(0,z)\\
\text{leaky ReLU: } a=\max(0.01z,z)
$$
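The four activations above as numpy one-liners (the helper names are illustrative choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # derivative: a * (1 - a)

def tanh_prime(z):
    a = np.tanh(z)
    return 1.0 - a ** 2              # derivative of tanh

def relu(z):
    return np.maximum(0.0, z)

def leaky_relu(z, slope=0.01):
    return np.maximum(slope * z, z)
```

A finite-difference check is a quick way to confirm the derivative formulas.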
Gradient descent
computation
$$
z^{[1]}=W^{[1]}x+b^{[1]}\rightarrow a^{[1]}=\sigma(z^{[1]})\rightarrow z^{[2]}=W^{[2]}a^{[1]}+b^{[2]}\rightarrow a^{[2]}=\sigma(z^{[2]})\rightarrow\mathcal{L}(a^{[2]},y)\\
dz^{[2]}=a^{[2]}-y\\
dw^{[2]}=dz^{[2]}a^{[1]T}\\
db^{[2]}=dz^{[2]}\\
dz^{[1]}=w^{[2]T}dz^{[2]}*a^{[1]\prime}\\
dw^{[1]}=dz^{[1]}\cdot x^T\\
db^{[1]}=dz^{[1]}
$$
The derivation of $dz^{[1]}$ involves matrix calculus.
The dimensions
$$
x:(n_0,m)\quad W^{[1]}:(n_1,n_0)\rightarrow\\
a^{[1]}:(n_1,m)\quad W^{[2]}:(n_2,n_1)\rightarrow\\
a^{[2]}:(n_2,m)
$$
vectorize
$$
dZ^{[2]}=A^{[2]}-Y\\
dW^{[2]}=\frac{1}{m}dZ^{[2]}A^{[1]T}\\
db^{[2]}=\frac{1}{m}np.sum(dZ^{[2]},axis=1,keepdims=True)\\
dZ^{[1]}=W^{[2]T}dZ^{[2]}*A^{[1]\prime}\\
dW^{[1]}=\frac{1}{m}dZ^{[1]}X^T\\
db^{[1]}=\frac{1}{m}np.sum(dZ^{[1]},axis=1,keepdims=True)
$$
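The vectorized backward pass can be sketched as below. This is a minimal sketch assuming sigmoid activations in both layers (matching $a^{[1]}=\sigma(z^{[1]})$ above, so $A^{[1]\prime}=A^{[1]}(1-A^{[1]})$); `backward_2layer` is a hypothetical helper name:

```python
import numpy as np

def sigmoid(Z):
    return 1.0 / (1.0 + np.exp(-Z))

def backward_2layer(X, Y, W1, b1, W2, b2):
    """Vectorized forward + backward pass matching the equations above.
    X: (n0, m), Y: (1, m); sigmoid in both layers."""
    m = X.shape[1]
    # forward
    A1 = sigmoid(np.dot(W1, X) + b1)
    A2 = sigmoid(np.dot(W2, A1) + b2)
    # backward
    dZ2 = A2 - Y
    dW2 = np.dot(dZ2, A1.T) / m
    db2 = np.sum(dZ2, axis=1, keepdims=True) / m
    dZ1 = np.dot(W2.T, dZ2) * (A1 * (1 - A1))  # A'[1] for sigmoid
    dW1 = np.dot(dZ1, X.T) / m
    db1 = np.sum(dZ1, axis=1, keepdims=True) / m
    return dW1, db1, dW2, db2
```

Comparing `dW1` against a finite-difference estimate of the cost gradient is a good way to validate such an implementation.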