Notes on 刘二大人's "PyTorch Deep Learning Practice", P11: Convolutional Neural Networks (Advanced)

Date: 2022-09-28 17:54:44

1、GoogLeNet

I Network Structure

Neural networks include many more complex architectures. How can they be implemented, and by what means? The GoogLeNet architecture is shown in the figure below:

[Figure: the GoogLeNet network architecture]

GoogLeNet is often used as a basic backbone network; the part circled in red in the figure is called an Inception block.

II The Idea of Reducing Code Redundancy (Reducing Code Duplication)

  1. In the C language: use functions.
  2. In object-oriented programming: construct classes.
    In GoogLeNet, the identical blocks are wrapped into a class to reduce code redundancy (see the short sketch below).
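For example, a repeated convolution-plus-activation pattern can be pulled out into one reusable module and instantiated wherever it is needed. This is only a minimal sketch; the ConvBlock name and the channel numbers are made up for illustration and are not part of GoogLeNet itself:

import torch
from torch import nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    # a small reusable "conv + ReLU" unit
    def __init__(self, in_channels, out_channels):
        super(ConvBlock, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, x):
        return F.relu(self.conv(x))

# the same class is reused instead of repeating the code:
block1 = ConvBlock(3, 16)
block2 = ConvBlock(16, 32)
print(block2(block1(torch.rand(1, 3, 28, 28))).shape)   # torch.Size([1, 32, 28, 28])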

2、Inception Module

I Basic Concepts

Problem: when building a neural network, hyperparameters such as the kernel size are hard to choose.
Solution: try several kinds of convolution in parallel; the more effective convolutions end up with larger weights, so the network automatically finds a good combination, and the outputs of the branches are then combined (concatenated along the channel dimension) into a single output.

[Figure: an Inception module with parallel 1x1, 3x3, 5x5 convolution and pooling branches whose outputs are concatenated]

  • Concatenate: stitches the branch tensors together along the channel dimension; this requires all branches to have the same width and height.
  • Average pooling: a 2x2 max pooling would halve the image, whereas for the pooling branch the padding and stride can be set explicitly (kernel 3, stride 1, padding 1) so that the output image has the same size as the input.
  • Information fusion: the resulting value is obtained by combining several values through some operation, just as summing the scores of individual subjects into a total exam score makes students easy to rank, because comparing across multiple dimensions directly is hard.
  • 1x1 convolution: each 1x1 kernel has as many channels as the input tensor, and the number of kernels determines the output channel count; its main purpose is to change the number of channels and thereby reduce the amount of computation.

Here a channel transformation is being performed: the original channel count is 3, the new channel count equals the number of kernels, and the height and width stay unchanged.
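A minimal sketch of the two points above (the tensor sizes are made up for illustration): a 1x1 convolution changes only the channel count, and torch.cat along dim=1 requires the branches to share the same height and width.

import torch
from torch import nn

x = torch.rand(1, 3, 28, 28)                  # (batch, channels, height, width)

conv1x1 = nn.Conv2d(3, 16, kernel_size=1)     # 16 kernels -> 16 output channels
print(conv1x1(x).shape)                       # torch.Size([1, 16, 28, 28]): only C changes

a = torch.rand(1, 16, 28, 28)
b = torch.rand(1, 24, 28, 28)
print(torch.cat([a, b], dim=1).shape)         # torch.Size([1, 40, 28, 28]): channels add up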

[Figure: a 1x1 convolution turning a 3-channel input into a multi-channel output of the same height and width]

The amount of computation drops to roughly one tenth of the original, greatly improving efficiency.
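A back-of-the-envelope count of the multiplications involved. The tensor sizes here (a 192-channel 28x28 input mapped to 32 output channels with a 5x5 convolution) are an assumed example, not taken verbatim from the text:

# direct 5x5 convolution: 192 -> 32 channels on a 28x28 feature map
direct = 5 * 5 * 28 * 28 * 192 * 32
# 1x1 bottleneck down to 16 channels first, then the 5x5 convolution to 32 channels
bottleneck = 1 * 1 * 28 * 28 * 192 * 16 + 5 * 5 * 28 * 28 * 16 * 32
print(direct)       # 120422400
print(bottleneck)   # 12443648, roughly one tenth of the direct version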


II Code Implementation


import torch
from torch import nn
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt


# 1. Prepare the dataset
batch_size = 64
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307, ), (0.3081, ))
])

train_dataset = datasets.MNIST(root='dataset/mnist',
                               train=True,
                               download=True,
                               transform=transform)
train_loader = DataLoader(dataset=train_dataset,
                          batch_size=batch_size,
                          shuffle=True)

test_dataset = datasets.MNIST(root='dataset/mnist',
                              train=False,
                              download=True,
                              transform=transform)
test_loader = DataLoader(dataset=test_dataset,
                         batch_size=batch_size,
                         shuffle=False)


# 2. Build the model
# Define an Inception class that will be used inside the network
class InceptionA(nn.Module):
    def __init__(self, in_channels):
        super(InceptionA, self).__init__()
        self.branch1X1 = nn.Conv2d(in_channels, 16, kernel_size=1)

        # Set padding so that every branch keeps the input height and width unchanged
        self.branch5X5_1 = nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch5X5_2 = nn.Conv2d(16, 24, kernel_size=5, padding=2)

        self.branch3X3_1 = nn.Conv2d(in_channels, 16, kernel_size=1)
        self.branch3X3_2 = nn.Conv2d(16, 24, kernel_size=3, padding=1)
        self.branch3X3_3 = nn.Conv2d(24, 24, kernel_size=3, padding=1)

        self.branch_pool = nn.Conv2d(in_channels, 24, kernel_size=1)

    def forward(self, x):
        branch1X1 = self.branch1X1(x)

        branch5X5 = self.branch5X5_1(x)
        branch5X5 = self.branch5X5_2(branch5X5)

        branch3X3 = self.branch3X3_1(x)
        branch3X3 = self.branch3X3_2(branch3X3)
        branch3X3 = self.branch3X3_3(branch3X3)

        branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
        branch_pool = self.branch_pool(branch_pool)

        outputs = [branch1X1, branch5X5, branch3X3, branch_pool]
        # shape is (b, c, h, w); dim=1 means concatenating along the channel dimension
        return torch.cat(outputs, dim=1)

# Define the model
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        # 88 = 24*3 + 16, the number of output channels of InceptionA
        self.conv2 = nn.Conv2d(88, 20, kernel_size=5)

        self.incep1 = InceptionA(in_channels=10)
        self.incep2 = InceptionA(in_channels=20)

        self.mp = nn.MaxPool2d(2)
        # How to determine the size of the tensor that reaches the fc layer:
        # leave the fc layer undefined at first, push an arbitrary input through the model and inspect its shape
        # (i.e. drop self.fc in __init__ and the last two lines of forward(), check the resulting size, then size the Linear layer accordingly)
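        # Sketch of that trick: a dummy input torch.rand(1, 1, 28, 28) ends up with
        # shape (1, 88, 4, 4) after the second Inception block, so the flattened
        # size is 88 * 4 * 4 = 1408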
        self.fc = nn.Linear(1408, 10)

    def forward(self, x):
        in_size = x.size(0)
        # channels: 1 -> 10
        x = F.relu(self.mp(self.conv1(x)))
        # channels: 10 -> 88
        x = self.incep1(x)
        # channels: 88 -> 20
        x = F.relu(self.mp(self.conv2(x)))
        # channels: 20 -> 88
        x = self.incep2(x)
        x = x.view(in_size, -1)
        x = self.fc(x)
        return x


model = Net()
# Move the model to the GPU; cuda:0 means GPU number 0
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# print(torch.cuda.is_available())
model.to(device)

# 3. Define the loss function and optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)


# 4. Define the training function
def train(epoch):
    running_loss = 0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data
        # Move the tensors involved in the computation (inputs and targets) to the GPU as well
        inputs, target = inputs.to(device), target.to(device)
        optimizer.zero_grad()

        # forward, backward, update
        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if batch_idx % 300 == 299:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0


# 5. Define the test function
accuracy = []
def test():
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            # Move the test tensors to the GPU as well
            images, labels = images.to(device), labels.to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            # Comparing the two tensors gives the number of equal elements (i.e. correct predictions in the batch)
            correct += (predicted == labels).sum().item()
    print('Accuracy on test  set: %d %%' % (100 * correct / total))
    accuracy.append(100 * correct / total)


if __name__ == '__main__':
    for epoch in range(10):
        train(epoch)
        test()
    print(accuracy)
    plt.plot(range(10), accuracy)
    plt.xlabel("epoch")
    plt.ylabel("Accuracy")
    plt.show()

Output:

[1,   300] loss: 0.767
[1,   600] loss: 0.186
[1,   900] loss: 0.141
Accuracy on test  set: 96 %
[2,   300] loss: 0.109
[2,   600] loss: 0.098
[2,   900] loss: 0.096
Accuracy on test  set: 97 %
[3,   300] loss: 0.083
[3,   600] loss: 0.076
[3,   900] loss: 0.076
Accuracy on test  set: 97 %
[4,   300] loss: 0.066
[4,   600] loss: 0.066
[4,   900] loss: 0.064
Accuracy on test  set: 98 %
[5,   300] loss: 0.054
[5,   600] loss: 0.057
[5,   900] loss: 0.054
Accuracy on test  set: 98 %
[6,   300] loss: 0.049
[6,   600] loss: 0.052
[6,   900] loss: 0.049
Accuracy on test  set: 98 %
[7,   300] loss: 0.044
[7,   600] loss: 0.047
[7,   900] loss: 0.042
Accuracy on test  set: 98 %
[8,   300] loss: 0.043
[8,   600] loss: 0.039
[8,   900] loss: 0.041
Accuracy on test  set: 98 %
[9,   300] loss: 0.034
[9,   600] loss: 0.041
[9,   900] loss: 0.038
Accuracy on test  set: 98 %
[10,   300] loss: 0.034
[10,   600] loss: 0.035
[10,   900] loss: 0.033
Accuracy on test  set: 98 %
[96.51, 97.37, 97.94, 98.45, 98.31, 98.58, 98.59, 98.8, 98.73, 98.9]

[Figure: test accuracy per epoch for the Inception network]

The performance gain is modest, possibly because there is only a single fully connected layer at the end. More training epochs are not necessarily better; the network parameters can be saved to disk so that the best-performing checkpoint is kept.
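A minimal sketch of that idea (the file name is made up for illustration): keep track of the best test accuracy seen so far and write the parameters to disk whenever it improves.

best_acc = 0.0
for epoch in range(10):
    train(epoch)
    test()
    if accuracy[-1] > best_acc:
        best_acc = accuracy[-1]
        torch.save(model.state_dict(), 'inception_mnist_best.pth')   # hypothetical file name

# the best weights can later be restored with:
# model.load_state_dict(torch.load('inception_mnist_best.pth'))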

III Stacking Layers

Problem: why does making the network deeper sometimes lower the accuracy and make training worse?
Vanishing gradients: backpropagation multiplies a long chain of gradients together via the chain rule; if every factor is smaller than 1, the product approaches 0, so the weights barely get updated and the blocks closest to the input never get trained properly.
Traditional workaround: train the network layer by layer, freezing (locking) each layer once it is trained; but deep networks have far too many layers for this to be practical.
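A quick numerical illustration of that chain-rule effect (the factor 0.9 is an arbitrary stand-in for a local gradient slightly smaller than 1, not a value from the text):

print(0.9 ** 50)     # ~0.005: the product over 50 layers is already tiny
print(0.9 ** 200)    # ~7e-10: over 200 layers it is effectively zero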

3、Residual Net

I Differences Between a Plain Network and a Residual Network

A residual network adds a skip connection: before the activation that follows the block's convolutions, the block's input is added to the convolution output, and that sum is what gets activated.
[Figure: a plain block compared with a residual block that has a skip connection]

II Residual block

Because the skip connection adds the identity, the local gradient becomes dF(x)/dx + 1; even when dF(x)/dx is close to 0, each factor stays close to 1, so the chained product no longer collapses to 0 and the vanishing-gradient problem is avoided.
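A tiny autograd sketch of that argument (the slope 0.01 is an arbitrary stand-in for a small residual-branch gradient, not something from the text):

import torch

x = torch.tensor(2.0, requires_grad=True)
w = 0.01                       # pretend the residual branch has a very small slope

y_plain = w * x                # plain layer: dy/dx = 0.01
y_plain.backward()
print(x.grad)                  # tensor(0.0100) -- chaining many of these goes to 0

x.grad = None
y_res = w * x + x              # residual layer: dy/dx = 0.01 + 1 = 1.01
y_res.backward()
print(x.grad)                  # tensor(1.0100) -- the skip connection keeps it near 1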



III Code Implementation

import torch
from torch import nn
from torchvision import transforms
from torchvision import datasets
from torch.utils.data import DataLoader
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt


# 1. Prepare the dataset
batch_size = 64
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307, ), (0.3081, ))
])

train_dataset = datasets.MNIST(root='dataset/mnist',
                               train=True,
                               download=True,
                               transform=transform)
train_loader = DataLoader(dataset=train_dataset,
                          batch_size=batch_size,
                          shuffle=True)

test_dataset = datasets.MNIST(root='dataset/mnist',
                              train=False,
                              download=True,
                              transform=transform)
test_loader = DataLoader(dataset=test_dataset,
                         batch_size=batch_size,
                         shuffle=False)


# 2. Build the model
# Define a ResidualBlock class that will be used inside the network
class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super(ResidualBlock, self).__init__()
        self.channels = channels
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        y = F.relu(self.conv1(x))
        y = self.conv2(y)
        return F.relu(x + y)

# Define the model
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=5)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5)
        self.mp = nn.MaxPool2d(2)

        self.rblock1 = ResidualBlock(16)
        self.rblock2 = ResidualBlock(32)

        # 512 = 32 channels * 4 * 4 spatial size for a 28x28 MNIST input
        self.fc = nn.Linear(512, 10)

    def forward(self, x):
        in_size = x.size(0)
        x = self.mp(F.relu(self.conv1(x)))
        x = self.rblock1(x)
        x = self.mp(F.relu(self.conv2(x)))
        x = self.rblock2(x)
        x = x.view(in_size, -1)
        x = self.fc(x)
        return x


model = Net()
# Move the model to the GPU; cuda:0 means GPU number 0
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# print(torch.cuda.is_available())
model.to(device)

# 3. Define the loss function and optimizer
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)


# 4. Define the training function
def train(epoch):
    running_loss = 0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data
        # Move the tensors involved in the computation (inputs and targets) to the GPU as well
        inputs, target = inputs.to(device), target.to(device)
        optimizer.zero_grad()

        # forward, backward, update
        outputs = model(inputs)
        loss = criterion(outputs, target)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if batch_idx % 300 == 299:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0


# 5. Define the test function
accuracy = []
def test():
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            # Move the test tensors to the GPU as well
            images, labels = images.to(device), labels.to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            # Comparing the two tensors gives the number of equal elements (i.e. correct predictions in the batch)
            correct += (predicted == labels).sum().item()
    print('Accuracy on test  set: %d %%' % (100 * correct / total))
    accuracy.append(100 * correct / total)


if __name__ == '__main__':
    for epoch in range(10):
        train(epoch)
        test()
    print(accuracy)
    plt.plot(range(10), accuracy)
    plt.xlabel("epoch")
    plt.ylabel("Accuracy")
    plt.show()

Output:

[1,   300] loss: 0.520
[1,   600] loss: 0.159
[1,   900] loss: 0.118
Accuracy on test  set: 97 %
[2,   300] loss: 0.090
[2,   600] loss: 0.081
[2,   900] loss: 0.074
Accuracy on test  set: 98 %
[3,   300] loss: 0.063
[3,   600] loss: 0.058
[3,   900] loss: 0.055
Accuracy on test  set: 98 %
[4,   300] loss: 0.046
[4,   600] loss: 0.050
[4,   900] loss: 0.048
Accuracy on test  set: 98 %
[5,   300] loss: 0.044
[5,   600] loss: 0.038
[5,   900] loss: 0.038
Accuracy on test  set: 98 %
[6,   300] loss: 0.035
[6,   600] loss: 0.033
[6,   900] loss: 0.034
Accuracy on test  set: 98 %
[7,   300] loss: 0.028
[7,   600] loss: 0.029
[7,   900] loss: 0.032
Accuracy on test  set: 98 %
[8,   300] loss: 0.027
[8,   600] loss: 0.028
[8,   900] loss: 0.026
Accuracy on test  set: 98 %
[9,   300] loss: 0.021
[9,   600] loss: 0.026
[9,   900] loss: 0.022
Accuracy on test  set: 98 %
[10,   300] loss: 0.021
[10,   600] loss: 0.023
[10,   900] loss: 0.021
Accuracy on test  set: 98 %
[97.03, 98.21, 98.47, 98.8, 98.52, 98.88, 98.88, 98.98, 98.95, 98.98]

[Figure: test accuracy per epoch for the residual network]

4、Homework

Homework 1: read the paper Identity Mappings in Deep Residual Networks

It presents many different ways of implementing the residual block.

[Figure: the residual block variants compared in the paper]

Implementing constant scaling

Both the identity path and the residual branch are scaled by 0.5 before the final activation.

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super(ResidualBlock, self).__init__()
        self.channels = channels
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        y = F.relu(self.conv1(x))
        y = self.conv2(y)
        # constant scaling: both the identity path and the residual branch are halved
        z = 0.5 * (x + y)
        return F.relu(z)

Output:

[1,   300] loss: 0.947
[1,   600] loss: 0.252
[1,   900] loss: 0.173
Accuracy on test  set: 96 %
[2,   300] loss: 0.126
[2,   600] loss: 0.113
[2,   900] loss: 0.107
Accuracy on test  set: 97 %
[3,   300] loss: 0.085
[3,   600] loss: 0.084
[3,   900] loss: 0.077
Accuracy on test  set: 98 %
[4,   300] loss: 0.064
[4,   600] loss: 0.066
[4,   900] loss: 0.068
Accuracy on test  set: 98 %
[5,   300] loss: 0.057
[5,   600] loss: 0.058
[5,   900] loss: 0.055
Accuracy on test  set: 98 %
[6,   300] loss: 0.051
[6,   600] loss: 0.051
[6,   900] loss: 0.047
Accuracy on test  set: 98 %
[7,   300] loss: 0.042
[7,   600] loss: 0.044
[7,   900] loss: 0.048
Accuracy on test  set: 98 %
[8,   300] loss: 0.041
[8,   600] loss: 0.040
[8,   900] loss: 0.040
Accuracy on test  set: 98 %
[9,   300] loss: 0.035
[9,   600] loss: 0.037
[9,   900] loss: 0.037
Accuracy on test  set: 98 %
[10,   300] loss: 0.031
[10,   600] loss: 0.038
[10,   900] loss: 0.031
Accuracy on test  set: 98 %
[96.09, 97.78, 98.07, 98.29, 98.41, 98.67, 98.03, 98.86, 98.75, 98.81]

[Figure: test accuracy per epoch with constant scaling]

Implementing conv shortcut

The shortcut path performs an extra 1x1 convolution instead of passing the identity through.

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super(ResidualBlock, self).__init__()
        self.channels = channels
 
        self.conv1 = nn.Conv2d(channels, channels,
                               kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels,
                               kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(channels, channels,
                               kernel_size=1)
 
    def forward(self, x):
        y = F.relu(self.conv1(x))
        y = self.conv2(y)
        # conv shortcut: the skip path goes through a 1x1 convolution instead of the identity
        z = self.conv3(x) + y
        return F.relu(z)

Output:

[1,   300] loss: 0.686
[1,   600] loss: 0.192
[1,   900] loss: 0.137
Accuracy on test  set: 96 %
[2,   300] loss: 0.105
[2,   600] loss: 0.093
[2,   900] loss: 0.078
Accuracy on test  set: 98 %
[3,   300] loss: 0.073
[3,   600] loss: 0.065
[3,   900] loss: 0.060
Accuracy on test  set: 98 %
[4,   300] loss: 0.054
[4,   600] loss: 0.049
[4,   900] loss: 0.056
Accuracy on test  set: 98 %
[5,   300] loss: 0.042
[5,   600] loss: 0.048
[5,   900] loss: 0.040
Accuracy on test  set: 98 %
[6,   300] loss: 0.041
[6,   600] loss: 0.039
[6,   900] loss: 0.037
Accuracy on test  set: 98 %
[7,   300] loss: 0.034
[7,   600] loss: 0.033
[7,   900] loss: 0.035
Accuracy on test  set: 98 %
[8,   300] loss: 0.029
[8,   600] loss: 0.030
[8,   900] loss: 0.031
Accuracy on test  set: 98 %
[9,   300] loss: 0.025
[9,   600] loss: 0.027
[9,   900] loss: 0.028
Accuracy on test  set: 98 %
[10,   300] loss: 0.023
[10,   600] loss: 0.026
[10,   900] loss: 0.026
Accuracy on test  set: 98 %
[96.42, 98.2, 98.48, 98.7, 98.9, 98.89, 98.92, 98.99, 98.68, 98.97]

[Figure: test accuracy per epoch with the conv shortcut]

Homework 2: read the paper Densely Connected Convolutional Networks
[Figure: DenseNet's dense connectivity pattern]
How could it be implemented?
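One possible answer, as a minimal sketch rather than the full DenseNet from the paper (the growth rate and the number of layers here are made up): each convolution receives the concatenation of all earlier feature maps, and its output is concatenated back on.

import torch
from torch import nn
import torch.nn.functional as F

class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super(DenseBlock, self).__init__()
        self.convs = nn.ModuleList()
        for i in range(num_layers):
            # each layer sees the original input plus everything produced so far
            self.convs.append(nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                                        kernel_size=3, padding=1))

    def forward(self, x):
        for conv in self.convs:
            out = F.relu(conv(x))
            x = torch.cat([x, out], dim=1)   # dense connection: keep concatenating
        return x

# e.g. a 16-channel input comes out with 16 + 4 * 12 = 64 channels:
print(DenseBlock(16)(torch.rand(1, 16, 28, 28)).shape)   # torch.Size([1, 64, 28, 28])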

5、Suggested Learning Path


  1. Understand the theory behind network models; read the "flower book" (Deep Learning) and Dive into Deep Learning (《动手学深度学习》).
  2. Read the PyTorch documentation (at least once from beginning to end) so you know what functionality it provides and how the docs are organized.
  3. Reproduce classic work: not just getting the code to run, but first reading the code and studying its architecture, then trying to write it yourself, and repeating this cycle.
  4. Pick a specific research area, integrate what you have learned, broaden your horizons, and read widely (this presupposes the abilities above: when you see a paper you can already picture how the code would be written, which takes time to accumulate).