First, let's get familiar with how to implement a feedforward neural network in PyTorch. To keep things easy to follow, the example here uses a feedforward network with only one hidden layer.
The source code of such a network, with comments, is shown below. It is fairly simple, so we won't go into much detail here.
class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)   # input layer
        self.relu = nn.ReLU()                           # hidden activation: ReLU sets every element of the input tensor that is less than zero to zero
        self.fc2 = nn.Linear(hidden_size, num_classes)  # output layer

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out
Next, let's look at how to instantiate and use the feedforward network. To improve efficiency, the network should run on the GPU whenever one is available. The input size and hidden size here must be consistent with the training images.
# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = NeuralNet(input_size, hidden_size, num_classes).to(device)
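For reference, the input size must equal the number of pixels of one flattened image. These are the hyper-parameter values used for MNIST in the full source at the end of this post:

# Hyper-parameters matching 28*28 MNIST images (same values as in the full source below)
input_size = 784      # 28*28 pixels flattened into one vector
hidden_size = 500     # size of the single hidden layer
num_classes = 10      # digits 0-9
learning_rate = 0.001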
To train the network, we need to define a loss function that describes how well the model solves the problem: the smaller the loss, the smaller the gap between the model's output and the ground truth. Here we use CrossEntropyLoss(). For optimization we use Adam, an algorithm that optimizes a stochastic objective function based on first-order gradients; we will analyze the details and derivation in a later post.
criterion = nn.CrossEntropyLoss()  # for single-target classification; combines nn.LogSoftmax() and nn.NLLLoss() to compute the loss
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)  # optimizer: sets the learning rate and the model parameters to update
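The comment above says CrossEntropyLoss combines nn.LogSoftmax() and nn.NLLLoss(); a quick toy check (not part of the training script) confirms that the two routes give the same value on raw logits:

import torch
import torch.nn as nn

logits = torch.randn(4, 10)           # a toy batch: 4 samples, 10 classes
targets = torch.tensor([1, 0, 4, 9])  # ground-truth class indices

ce  = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), targets)
print(torch.allclose(ce, nll))  # True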
Next comes training the model. This part can be a little confusing, so let's look at the code first and explain the individual functions afterwards:
total_step = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # Move tensors to the configured device
        images = images.reshape(-1, 28*28).to(device)
        labels = labels.to(device)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward and optimize
        optimizer.zero_grad()  # zero the gradients, i.e. reset the derivatives of the loss w.r.t. the weights to 0
        loss.backward()
        optimizer.step()
During training, each image is first flattened into a 784-element vector (28*28 pixels), and then the tensors are moved to the configured device.
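To make the shapes concrete, here is a small illustration (a toy batch, assuming the MNIST setting of this post with batch_size = 100):

import torch

images = torch.zeros(100, 1, 28, 28)  # shape of one DataLoader batch of MNIST images
flat = images.reshape(-1, 28*28)      # flatten each image into a 784-dimensional vector
print(flat.shape)                     # torch.Size([100, 784])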
Then comes the forward pass of the network:
outputs = model(images)
Feeding the outputs together with the labels loaded earlier into the loss function gives the loss:
loss = criterion(outputs, labels)
After computing the loss, we backpropagate it. Note that this step only happens during training; at test time there is only the forward pass.
loss.backward()
Backpropagating the loss computes the gradients, and the parameters then need to be updated according to these gradients; optimizer.step() is what performs that update. After optimizer.step(), you can inspect the gradients and weights of each layer through optimizer.param_groups[0]['params'].
optimizer.step()
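For example, you can peek at the layer weights and their gradients right after the update like this (illustrative only; it assumes the model and optimizer defined above):

for p in optimizer.param_groups[0]['params']:
    # p is a weight or bias tensor of one layer; p.grad holds its latest gradient
    print(p.shape, None if p.grad is None else p.grad.norm().item())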
Now we test the model. Testing runs without gradients, which greatly reduces memory usage and improves efficiency. In fact, the test loop has only one key statement that produces the predictions: _, predicted = torch.max(outputs.data, 1).
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.reshape(-1, 28*28).to(device)
        labels = labels.to(device)
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        print(labels.size(0))
        correct += (predicted == labels).sum().item()
One question remains: how does the trained model connect to prediction?
The training output outputs is also in torch.autograd.Variable format (in current PyTorch it is simply a tensor). After obtaining the output of the network's final fully connected layer, we still want to know which class the model predicts for each sample, and that is what torch.max is for. Its first argument must be a tensor, which is why outputs.data rather than outputs is passed in; the second argument 1 is the dim, i.e. take the maximum over each row, which is just the familiar "index of the largest probability". torch.max returns two values, the maximum values themselves and their indices, and the indices are the predicted classes.
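A toy example (not from the original script) makes the return values of torch.max clear:

import torch

outputs = torch.tensor([[0.1, 2.5, 0.3],
                        [1.2, 0.4, 3.0]])
values, predicted = torch.max(outputs, 1)
print(values)     # tensor([2.5000, 3.0000])  -- the per-row maxima
print(predicted)  # tensor([1, 2])            -- the predicted class indices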
Full source code:
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hyper-parameters
input_size = 784
hidden_size = 500
num_classes = 10
#input_size = 84
#hidden_size = 50
#num_classes = 2
num_epochs = 5
batch_size = 100
learning_rate = 0.001

# MNIST dataset
train_dataset = torchvision.datasets.MNIST(root='../../data',
                                           train=True,
                                           transform=transforms.ToTensor(),
                                           download=True)
test_dataset = torchvision.datasets.MNIST(root='../../data',
                                          train=False,
                                          transform=transforms.ToTensor())

# Data loader
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)

# Fully connected neural network with one hidden layer
class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

model = NeuralNet(input_size, hidden_size, num_classes).to(device)

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # Move tensors to the configured device
        images = images.reshape(-1, 28*28).to(device)
        labels = labels.to(device)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if (i+1) % 100 == 0:
            print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                   .format(epoch+1, num_epochs, i+1, total_step, loss.item()))

# Test the model
# In test phase, we don't need to compute gradients (for memory efficiency)
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.reshape(-1, 28*28).to(device)
        labels = labels.to(device)
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        #print(predicted)
        correct += (predicted == labels).sum().item()

    print('Accuracy of the network on the 10000 test images: {} %'.format(100 * correct / total))

# Save the model checkpoint
torch.save(model.state_dict(), 'model.ckpt')
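To use the saved checkpoint later, it can be loaded back into a freshly constructed network. This is a minimal sketch, assuming the same NeuralNet definition and hyper-parameters as above:

# Load the checkpoint into a new instance of the same architecture
model = NeuralNet(input_size, hidden_size, num_classes).to(device)
model.load_state_dict(torch.load('model.ckpt'))
model.eval()  # switch to evaluation mode before inference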
Quote of the day: What others fear, one cannot but fear.
References:
1 https://blog.csdn.net/fireflychh/article/details/75516165