Baidu PaddlePaddle Course Series: Baidu Architects Walk You Through Deep Learning Practice from Scratch — 21-Day Learning Check-in (Week 2, Days 5 and 6)
PaddlePaddle: Baidu Architects Walk You Through Deep Learning Practice from Scratch — 21-Day Learning Check-in (Week 2, Day 5)
First, a note: this post does not walk through the code in detail; it focuses on my understanding of the course and an analysis of the assignment. (If you have code-related questions, feel free to message me.)
Course link: Baidu Architects Walk You Through Deep Learning Practice from Scratch
Previous posts:
PaddlePaddle: Baidu Architects Walk You Through Deep Learning Practice from Scratch — 21-Day Learning Check-in (Week 2, Day 1)
PaddlePaddle: Baidu Architects Walk You Through Deep Learning Practice from Scratch — 21-Day Learning Check-in (Week 2, Day 2)
PaddlePaddle: Baidu Architects Walk You Through Deep Learning Practice from Scratch — 21-Day Learning Check-in (Week 2, Day 3)
PaddlePaddle: Baidu Architects Walk You Through Deep Learning Practice from Scratch — 21-Day Learning Check-in (Week 2, Day 4 and the Week 2 practice assignment)
Course column: Deep Learning Course — PaddlePaddle
Over these two days, Mr. Sun mainly covered how the first half of the YOLOv3 algorithm is implemented, including:
- Generating candidate regions (generating anchor boxes, generating prediction boxes, labeling the candidate regions, and the concrete code for labeling anchor boxes)
- Feature extraction with a convolutional neural network (fairly simple; it all builds on material from earlier lessons)
- Computing prediction-box locations and classes from the output feature map (mostly a lot of computation)
- Building the loss function (the earlier content only linked the pixels of the output feature map to the prediction boxes conceptually; to actually train the network we also need to link the network output to the prediction boxes mathematically, i.e., establish the relationship between the loss function and the network output)
- Multi-scale detection (this part deserves careful study)
- Model prediction
That's all for these two days (course page link). Keep it up, everyone!
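To make the prediction-box step concrete, here is a small sketch of how YOLOv3 decodes a raw network output (tx, ty, tw, th) into a box using an anchor box. This is my own illustration of the standard YOLOv3 formulas, not the course's code, and the function name is mine:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_box(tx, ty, tw, th, cx, cy, anchor_w, anchor_h, grid_w, grid_h):
    # Decode raw YOLOv3 outputs for the grid cell (cx, cy) into a box.
    # The center is normalized to [0, 1]; width/height are in input-image pixels.
    bx = (sigmoid(tx) + cx) / grid_w   # sigmoid keeps the center inside its cell
    by = (sigmoid(ty) + cy) / grid_h
    bw = anchor_w * np.exp(tw)         # scale the anchor width by exp(tw)
    bh = anchor_h * np.exp(th)         # scale the anchor height by exp(th)
    return bx, by, bw, bh
```

With all-zero outputs the box simply reproduces the anchor centered in its cell, which is why the network only needs to learn small corrections to the anchors.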
Practice
- There is too much code to paste all of it; only the part below changed, so that is what I have included. (Message me with questions.)
- At the end is the code for the practice assignment. This time, instead of piecewise decay, try cosine_decay (cosine learning-rate decay).
- If no log file is generated, first check whether you modified the main function; if you did and it still fails, try restarting the environment — the code itself is fine.
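For reference, the cosine decay schedule can be computed by hand; the sketch below follows the formula documented for fluid.layers.cosine_decay (a standalone illustration, using the same base_lr=0.01, 300 steps per epoch, and 10 epochs as the code below):

```python
import math

def cosine_decay(base_lr, global_step, step_each_epoch, epochs):
    # The epoch index is derived from the global step counter.
    cur_epoch = math.floor(global_step / step_each_epoch)
    # decayed_lr = base_lr * 0.5 * (cos(cur_epoch * pi / epochs) + 1)
    return base_lr * 0.5 * (math.cos(cur_epoch * math.pi / epochs) + 1)
```

The rate starts at base_lr, falls slowly at first, fastest in mid-training, and approaches 0 at the end — a smoother schedule than piecewise decay's sudden drops.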
# -*- coding: utf-8 -*-
# ResNet-based classification of eye-disease images
# (the original header said LeNet, but the model instantiated below is ResNet)
import os
import random
import paddle
import paddle.fluid as fluid
import numpy as np
from visualdl import LogWriter

# ResNet, data_loader and valid_data_loader are defined earlier in the course notebook
DATADIR = '/home/aistudio/work/palm/PALM-Training400/PALM-Training400'
DATADIR2 = '/home/aistudio/work/palm/PALM-Validation400'
CSVFILE = '/home/aistudio/labels.csv'

# Training loop
def train(model):
    with fluid.dygraph.guard():
        print('start training ... ')
        model.train()
        iter = 0
        base_lr = 0.01
        # Cosine learning-rate decay instead of piecewise decay
        lr = fluid.layers.cosine_decay(learning_rate=base_lr, step_each_epoch=300, epochs=10)
        epoch_num = 10
        # Optimizer
        opt = fluid.optimizer.Momentum(learning_rate=lr, momentum=0.9, parameter_list=model.parameters())
        # Data readers: one for training, one for validation
        train_loader = data_loader(DATADIR, batch_size=10, mode='train')
        valid_loader = valid_data_loader(DATADIR2, CSVFILE)
        for epoch in range(epoch_num):
            for batch_id, data in enumerate(train_loader()):
                x_data, y_data = data
                img = fluid.dygraph.to_variable(x_data)
                label = fluid.dygraph.to_variable(y_data)
                # Forward pass to get predictions
                logits = model(img)
                # Compute the loss
                loss = fluid.layers.sigmoid_cross_entropy_with_logits(logits, label)
                avg_loss = fluid.layers.mean(loss)
                if batch_id % 10 == 0:
                    print("epoch: {}, batch_id: {}, loss is: {}".format(epoch, batch_id, avg_loss.numpy()))
                    # writer is the LogWriter opened in __main__
                    writer.add_scalar(tag='loss', step=iter, value=avg_loss.numpy())
                    iter = iter + 10
                # Backward pass, update weights, clear gradients
                avg_loss.backward()
                opt.minimize(avg_loss)
                model.clear_gradients()
            model.eval()
            accuracies = []
            losses = []
            for batch_id, data in enumerate(valid_loader()):
                x_data, y_data = data
                img = fluid.dygraph.to_variable(x_data)
                label = fluid.dygraph.to_variable(y_data)
                # Forward pass to get predictions
                logits = model(img)
                # Binary classification: threshold the sigmoid output at 0.5
                # Compute the sigmoid probabilities, then the loss
                pred = fluid.layers.sigmoid(logits)
                loss = fluid.layers.sigmoid_cross_entropy_with_logits(logits, label)
                # Probability of the negative class (predicted probability below 0.5)
                pred2 = pred * (-1.0) + 1.0
                # Concatenate the two class probabilities along axis 1
                pred = fluid.layers.concat([pred2, pred], axis=1)
                acc = fluid.layers.accuracy(pred, fluid.layers.cast(label, dtype='int64'))
                accuracies.append(acc.numpy())
                losses.append(loss.numpy())
            print("[validation] accuracy/loss: {}/{}".format(np.mean(accuracies), np.mean(losses)))
            model.train()
        # save params of model
        fluid.save_dygraph(model.state_dict(), 'palm')
        # save optimizer state
        fluid.save_dygraph(opt.state_dict(), 'palm')

# Evaluation
def evaluation(model, params_file_path):
    with fluid.dygraph.guard():
        print('start evaluation .......')
        # Load model parameters
        model_state_dict, _ = fluid.load_dygraph(params_file_path)
        model.load_dict(model_state_dict)
        model.eval()
        eval_loader = data_loader(DATADIR, batch_size=10, mode='eval')
        acc_set = []
        avg_loss_set = []
        for batch_id, data in enumerate(eval_loader()):
            x_data, y_data = data
            img = fluid.dygraph.to_variable(x_data)
            label = fluid.dygraph.to_variable(y_data)
            y_data = y_data.astype(np.int64)
            label_64 = fluid.dygraph.to_variable(y_data)
            # Compute predictions and accuracy
            prediction, acc = model(img, label_64)
            # Compute the loss
            loss = fluid.layers.sigmoid_cross_entropy_with_logits(prediction, label)
            avg_loss = fluid.layers.mean(loss)
            acc_set.append(float(acc.numpy()))
            avg_loss_set.append(float(avg_loss.numpy()))
        # Mean accuracy and loss
        acc_val_mean = np.array(acc_set).mean()
        avg_loss_val_mean = np.array(avg_loss_set).mean()
        print('loss={}, acc={}'.format(avg_loss_val_mean, acc_val_mean))

if __name__ == '__main__':
    with fluid.dygraph.guard():
        model = ResNet()
    with LogWriter(logdir="./log") as writer:
        train(model)
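A side note on the validation loop above: a single sigmoid probability is turned into a two-column [P(class 0), P(class 1)] matrix so that a generic accuracy op can be applied. The same idea in plain numpy (a sketch; the helper name is mine):

```python
import numpy as np

def binary_accuracy(logits, labels):
    # logits: (N, 1) raw scores; labels: (N, 1) 0/1 integers.
    p = 1.0 / (1.0 + np.exp(-logits))             # P(class 1) via sigmoid
    probs = np.concatenate([1.0 - p, p], axis=1)  # [P(class 0), P(class 1)]
    pred = np.argmax(probs, axis=1)               # same as thresholding p at 0.5
    return float(np.mean(pred == labels.reshape(-1)))
```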
Output:
start training ...
epoch: 0, batch_id: 0, loss is: [0.7137054]
epoch: 0, batch_id: 10, loss is: [3.1994414]
epoch: 0, batch_id: 20, loss is: [0.6773306]
epoch: 0, batch_id: 30, loss is: [0.9753799]
[validation] accuracy/loss: 0.6274999976158142/3.8405699729919434
epoch: 1, batch_id: 0, loss is: [0.01808791]
epoch: 1, batch_id: 10, loss is: [0.69278806]
epoch: 1, batch_id: 20, loss is: [3.7786992]
epoch: 1, batch_id: 30, loss is: [0.19659323]
[validation] accuracy/loss: 0.7300000190734863/1.3651916980743408
epoch: 2, batch_id: 0, loss is: [0.5994292]
epoch: 2, batch_id: 10, loss is: [1.8652639]
epoch: 2, batch_id: 20, loss is: [0.09375274]
epoch: 2, batch_id: 30, loss is: [0.05307168]
[validation] accuracy/loss: 0.5249999761581421/1.3727504014968872
epoch: 3, batch_id: 0, loss is: [1.6526046]
epoch: 3, batch_id: 10, loss is: [0.20086887]
epoch: 3, batch_id: 20, loss is: [0.6735779]
epoch: 3, batch_id: 30, loss is: [0.04788392]
[validation] accuracy/loss: 0.9350000619888306/0.21471740305423737
epoch: 4, batch_id: 0, loss is: [0.109744]
epoch: 4, batch_id: 10, loss is: [0.3011025]
epoch: 4, batch_id: 20, loss is: [0.36790198]
epoch: 4, batch_id: 30, loss is: [0.05984742]
[validation] accuracy/loss: 0.9600000381469727/0.13759399950504303
epoch: 5, batch_id: 0, loss is: [1.8829508]
epoch: 5, batch_id: 10, loss is: [0.09521748]
epoch: 5, batch_id: 20, loss is: [0.3708975]
epoch: 5, batch_id: 30, loss is: [0.18169342]
[validation] accuracy/loss: 0.6525000333786011/3.747347354888916
epoch: 6, batch_id: 0, loss is: [1.0479392]
epoch: 6, batch_id: 10, loss is: [1.0646671]
epoch: 6, batch_id: 20, loss is: [2.1604965]
epoch: 6, batch_id: 30, loss is: [0.10893799]
[validation] accuracy/loss: 0.8999999761581421/0.45108360052108765
epoch: 7, batch_id: 0, loss is: [0.62175167]
epoch: 7, batch_id: 10, loss is: [0.01072638]
epoch: 7, batch_id: 20, loss is: [0.09918281]
epoch: 7, batch_id: 30, loss is: [0.07638247]
[validation] accuracy/loss: 0.8675000071525574/0.3348890542984009
epoch: 8, batch_id: 0, loss is: [0.1298107]
epoch: 8, batch_id: 10, loss is: [0.99106073]
epoch: 8, batch_id: 20, loss is: [0.15924785]
epoch: 8, batch_id: 30, loss is: [0.10301425]
[validation] accuracy/loss: 0.8225000500679016/0.7368809580802917
epoch: 9, batch_id: 0, loss is: [0.12999359]
epoch: 9, batch_id: 10, loss is: [0.2441178]
epoch: 9, batch_id: 20, loss is: [0.12912634]
epoch: 9, batch_id: 30, loss is: [0.03065121]
[validation] accuracy/loss: 0.934999942779541/0.18015335500240326
The results look decent — try tuning the parameters yourselves!
If anything is wrong or could be improved, comments are welcome; let's learn from each other.