Deep Learning Notes 3: Building a Deep Neural Network (DNN) by Hand



Initialize the parameters; layer_dims holds the number of units in each layer.

import numpy as np

# layer_dims : (5,4,4,3...)
def initialize_parameters(layer_dims):
    L = len(layer_dims)
    params = {}
    for i in range(1, L):
        # Weights drawn from a standard normal distribution; biases start at zero.
        params['w'+str(i)] = np.random.randn(layer_dims[i], layer_dims[i-1])
        params['b'+str(i)] = np.zeros((layer_dims[i], 1))
    return params
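
As a quick sanity check, the weight matrix of layer i has shape (layer_dims[i], layer_dims[i-1]) and the bias has shape (layer_dims[i], 1). A minimal sketch with a made-up layer_dims of (5, 4, 3, 1):

params = initialize_parameters((5, 4, 3, 1))
for name, value in params.items():
    print(name, value.shape)
# w1 (4, 5), b1 (4, 1), w2 (3, 4), b2 (3, 1), w3 (1, 3), b3 (1, 1)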

The activation functions are sigmoid and relu; relu is the piecewise function



relu(x)=\begin{cases} x, & x \geq 0 \\ 0, & x < 0 \end{cases}

Advantages of relu over sigmoid:

  1. sigmoid involves an exponential, so it is more expensive to compute than relu;
  2. in deep networks, sigmoid tends to cause vanishing gradients during back propagation: near saturation its derivative approaches 0 and information is lost, whereas relu is linear (with a constant derivative) for positive inputs, which alleviates the vanishing-gradient problem;
  3. because relu outputs 0 for negative inputs, it makes the network sparse, reducing the interdependence between parameters and helping to mitigate overfitting.
def sigmoid(x):
    return 1/(1+np.exp(-x))
def relu(x):
    # Work on a copy so the input array is not modified in place.
    t = x.copy()
    t[t<0] = 0
    return t
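
A quick illustration of the two activations on an arbitrary array:

x = np.array([[-2.0, 0.0, 3.0]])
print(relu(x))      # [[0. 0. 3.]]
print(sigmoid(x))   # roughly [[0.119 0.5 0.953]]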

Forward propagation. Each layer's parameters and outputs are cached for later use in back propagation. The last layer uses sigmoid as its activation function; all other layers use relu.

def forward_propagation(X, params):
    caches = []
    L = len(params)//2          # number of layers (each layer has a w and a b)
    A = X.copy()
    
    # Hidden layers 1..L-1 use relu.
    for i in range(1, L):
        A, cache = linear_activation_forward(A, params['w'+str(i)], params['b'+str(i)], 'relu')
        caches.append(cache)
    
    # The output layer L uses sigmoid.
    A, cache = linear_activation_forward(A, params['w'+str(L)], params['b'+str(L)], 'sigmoid')
    caches.append(cache)
    
    return A, caches
def linear_activation_forward(A, w, b, activation):
    z = w.dot(A)+b
    
    if activation == 'relu':
        a = relu(z)
    elif activation == 'sigmoid':
        a = sigmoid(z)
    # Cache everything the backward pass needs: (w, input activation, z, output activation).
    return a, (w, A, z, a)
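
A usage sketch with made-up shapes (5 features, 8 examples, layers (5, 4, 3, 1)): the output A should have shape (1, 8) and there should be one cache per layer.

np.random.seed(0)
X = np.random.randn(5, 8)
params = initialize_parameters((5, 4, 3, 1))
A, caches = forward_propagation(X, params)
print(A.shape, len(caches))   # expected: (1, 8) 3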

Compute the loss, again using the cross-entropy loss.

def compute_cost(A, Y):
    m = Y.shape[1]              # number of examples; Y has shape (1, m)
    logprobs = Y*np.log(A)+(1-Y)*np.log(1-A)
    cost = -1/m*np.sum(logprobs)
    cost = np.squeeze(cost)
    return cost
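
A small worked check with made-up labels and predictions: the cost should be -(ln 0.9 + ln 0.8 + ln 0.8)/3 ≈ 0.184.

Y = np.array([[1, 0, 1]])
A = np.array([[0.9, 0.2, 0.8]])
print(compute_cost(A, Y))   # about 0.184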

Back propagation: compute the gradients.

def backward_propagation(A, Y, caches):
    grads = {}
    L = len(caches)
    # Derivative of the cross-entropy cost with respect to the network output A.
    dA = -(Y/A-(1-Y)/(1-A))
    current_cache = caches[L-1]
    grads["dA"+str(L)], grads["dw"+str(L)], grads["db"+str(L)] = linear_activation_backward(dA, current_cache, "sigmoid")    
    
    # grads["dA"+str(i+1)] holds the gradient with respect to the input activation
    # of layer i+1 (i.e. the output of layer i), which feeds the next step backward.
    for i in range(L-2, -1, -1):
        current_cache = caches[i]
        grads["dA"+str(i+1)], grads["dw"+str(i+1)], grads["db"+str(i+1)] = linear_activation_backward(grads["dA"+str(i+2)], current_cache, "relu")     
    
    return grads

The derivative of the sigmoid function is



\phi'(z) = \phi(z)(1-\phi(z))

and the derivative of the relu function is



relu'(x)=\begin{cases} 1, & x \geq 0 \\ 0, & x < 0 \end{cases}


def linear_activation_backward(dA, cache, activation):
    w, A, z, a = cache
    if activation == 'sigmoid':
        # sigmoid'(z) = a*(1-a), where a = sigmoid(z) was cached in the forward pass.
        dZ = dA*a*(1-a)
    elif activation == 'relu':
        # relu'(z) is 1 where z > 0 and 0 elsewhere.
        dZ = dA.copy()
        dZ[z<=0] = 0
    return linear_backward(dZ, w, A)  
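
A tiny numeric spot-check of the sigmoid-derivative rule (value chosen arbitrarily): at z = 0, sigmoid(z) = 0.5, so the derivative should be 0.5*(1-0.5) = 0.25.

z = np.array([[0.0]])
a = sigmoid(z)
print(a*(1-a))   # [[0.25]]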

For the linear function of layer L,



z = wa+b

so

dw = dz \cdot a, \quad db = dz \cdot 1, \quad da = dz \cdot w

Vectorized over m training examples and written with the cached matrices, this becomes dw = np.dot(dZ, A.T)/m, db = np.sum(dZ, axis=1, keepdims=True)/m, and dA = np.dot(w.T, dZ), which is exactly what linear_backward computes.
def linear_backward(dZ, w, A):
    m = A.shape[1]              # number of examples; A has shape (n_prev, m)

    dw = np.dot(dZ, A.T)/m
    db = np.sum(dZ, axis=1, keepdims=True)/m
    dA = np.dot(w.T, dZ)   
    return dA, dw, db
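
A rough finite-difference sanity check (not from the original post; the shapes and seed are arbitrary): perturb one weight entry, re-evaluate the cost, and compare the numerical slope with the corresponding entry of the analytic gradient.

np.random.seed(1)
X = np.random.randn(5, 8)
Y = (np.random.rand(1, 8) > 0.5).astype(float)
params = initialize_parameters((5, 4, 1))

A, caches = forward_propagation(X, params)
grads = backward_propagation(A, Y, caches)

eps = 1e-7
params['w2'][0, 0] += eps
cost_plus = compute_cost(forward_propagation(X, params)[0], Y)
params['w2'][0, 0] -= 2*eps
cost_minus = compute_cost(forward_propagation(X, params)[0], Y)
params['w2'][0, 0] += eps   # restore the original value
numeric = (cost_plus - cost_minus) / (2*eps)
print(numeric, grads['dw2'][0, 0])   # the two values should be very close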

Gradient descent update of the parameters.

def update_parameters(params, grads, learning_rate):
    L = len(params) // 2
    for l in range(L):
        params["w" + str(l+1)] = params["w"+str(l+1)] - learning_rate*grads["dw"+str(l+1)]
        params["b" + str(l+1)] = params["b"+str(l+1)] - learning_rate*grads["db"+str(l+1)]    
    return params

Assemble the L-layer deep neural network model.

def dnn(X, Y, layers_dims, learning_rate = 0.001, num_iterations = 1000):
    costs = []    

    params = initialize_parameters(layers_dims)    
    
    for i in range(num_iterations):   
        A, caches = forward_propagation(X, params)    
        cost = compute_cost(A, Y)   
        grads = backward_propagation(A, Y, caches)   
        params = update_parameters(params, grads, learning_rate)    
        if i % 100 == 0: 
            costs.append(cost) 
    
    return params, costs
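
Finally, a hypothetical end-to-end run on synthetic data (the task, layer sizes, learning rate, and iteration count are all made up); the recorded costs should trend downward, though the exact numbers depend on the random initialization.

np.random.seed(2)
X = np.random.randn(5, 200)
Y = (X.sum(axis=0, keepdims=True) > 0).astype(float)   # made-up labels
params, costs = dnn(X, Y, (5, 8, 4, 1), learning_rate=0.01, num_iterations=2000)
print(costs[0], costs[-1])                              # the cost should decrease
preds = (forward_propagation(X, params)[0] > 0.5).astype(float)
print('training accuracy:', np.mean(preds == Y))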



Copyright notice: This is an original article by kouge94, released under the CC 4.0 BY-SA license. Please include a link to the original source and this notice when reposting.