Building and Training Models on a Dataset with PyTorch
1.AlexNet:
1.1. Import the required libraries:
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
1.2. Data preprocessing and augmentation:
transform = transforms.Compose([
    transforms.Resize((227, 227)),  # AlexNet expects 227x227 inputs
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])  # ImageNet normalization used by the pretrained AlexNet
])
1.3. Load the dataset:
data_path = 'D:/工坊/Pytorch的框架/flower_photos'
dataset = datasets.ImageFolder(data_path, transform=transform)
1.4. Split into training and test sets:
train_size = int(0.8 * len(dataset))
test_size = len(dataset) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(dataset, [train_size, test_size])
1.5. Create the data loaders:
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)
1.6. Load the AlexNet model:
model = models.alexnet(pretrained=True)
1.7. Modify the model to match the number of classes in your dataset:
num_classes = len(dataset.classes)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
1.8. Define the loss function and optimizer:
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters())
1.9. Move the model to the GPU (if available):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
1.10. Initialize lists to record the loss and accuracy of each epoch:
train_losses = []
train_accuracies = []
1.11. Train the model:
num_epochs = 50
for epoch in range(num_epochs):
    model.train()
    running_loss = 0.0
    correct = 0
    total = 0
    for inputs, labels in train_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
    epoch_loss = running_loss / len(train_loader)
    epoch_accuracy = 100 * correct / total
    train_losses.append(epoch_loss)
    train_accuracies.append(epoch_accuracy)
    print(f'Epoch {epoch + 1}/{num_epochs}, Loss: {epoch_loss}, Accuracy: {epoch_accuracy}%')
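The test_loader built in step 1.5 is not used by the loop above; a minimal evaluation sketch that reports test-set accuracy after training (it reuses model, device, and test_loader exactly as defined in the previous steps):
model.eval()
correct = 0
total = 0
with torch.no_grad():  # no gradients needed for evaluation
    for inputs, labels in test_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        outputs = model(inputs)
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print(f'Test Accuracy: {100 * correct / total:.2f}%')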
1.12. Plot the loss and accuracy curves:
# Create the figure
plt.figure(figsize=(10, 5))
# Plot the training loss
plt.subplot(1, 2, 1)
plt.plot(range(1, len(train_losses) + 1), train_losses, 'bo-', label='Training Loss')
plt.title('Training Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
# Plot the training accuracy
plt.subplot(1, 2, 2)
plt.plot(range(1, len(train_accuracies) + 1), train_accuracies, 'ro-', label='Training Accuracy')
plt.title('Training Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy (%)')
plt.legend()
# Show the figure
plt.tight_layout()
plt.show()
2.LeNet-5:
2.1. Import the required libraries:
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
2.2. Data preprocessing and augmentation:
# Data preprocessing and augmentation
transform = transforms.Compose([
    transforms.Resize((227, 227)),  # 227x227 inputs; the 44944 input size of the first linear layer below depends on this
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])  # ImageNet normalization
])
2.3. Load the dataset:
data_path = 'D:/工坊/Pytorch的框架/flower_photos'
dataset = datasets.ImageFolder(data_path, transform=transform)
2.4. Split into training and test sets:
train_size = int(0.8 * len(dataset))
test_size = len(dataset) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(dataset, [train_size, test_size])
2.5. Create the data loaders:
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)
2.6. Define the LeNet-5 model structure:
- Two convolutional layers followed by three fully connected layers.
class LeNet5(nn.Module):
    def __init__(self, num_classes):
        super(LeNet5, self).__init__()
        self.conv_net = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5),
            nn.Tanh(),
            nn.AvgPool2d(kernel_size=2, stride=2),
            nn.Conv2d(6, 16, kernel_size=5),
            nn.Tanh(),
            nn.AvgPool2d(kernel_size=2, stride=2)
        )
        self.fc_net = nn.Sequential(
            nn.Linear(44944, 120),  # 16 channels x 53 x 53 = 44944 for 227x227 inputs; adjust if the input size changes
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, num_classes)
        )

    def forward(self, x):
        x = self.conv_net(x)
        x = x.view(x.size(0), -1)  # flatten the convolutional feature maps
        x = self.fc_net(x)
        return x
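To see where the 44944 in the first linear layer comes from, you can push a dummy batch through the convolutional stack (a quick sanity check, not part of the training script; the class count 5 here is just an example):
dummy = torch.zeros(1, 3, 227, 227)
features = LeNet5(num_classes=5).conv_net(dummy)
print(features.shape)  # torch.Size([1, 16, 53, 53]); 16 * 53 * 53 = 44944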
2.7. Initialize the LeNet-5 model:
num_classes = len(dataset.classes)
model = LeNet5(num_classes)
2.8. Define the loss function and optimizer:
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters())
2.9. Move the model to the GPU (if available):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
2.10. Initialize lists to record the loss and accuracy of each epoch:
train_losses = []
train_accuracies = []
2.11. Train the model:
num_epochs = 50
for epoch in range(num_epochs):
    model.train()
    running_loss = 0.0
    correct = 0
    total = 0
    for inputs, labels in train_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
    epoch_loss = running_loss / len(train_loader)
    epoch_accuracy = 100 * correct / total
    train_losses.append(epoch_loss)
    train_accuracies.append(epoch_accuracy)
    print(f'Epoch {epoch + 1}/{num_epochs}, Loss: {epoch_loss}, Accuracy: {epoch_accuracy}%')
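The loss and accuracy curves can be plotted with exactly the same matplotlib snippet as in step 1.12, since the lists share the same names. If you also want to keep the trained weights, a minimal sketch (the file name is just an example):
torch.save(model.state_dict(), 'lenet5_flowers.pth')
# later: model.load_state_dict(torch.load('lenet5_flowers.pth')); model.eval()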
3.ResNet:
3.1. Import the required libraries:
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
3.2. Data preprocessing and augmentation:
# Data preprocessing and augmentation
transform = transforms.Compose([
    transforms.Resize((224, 224)),  # resize to 224x224, the standard ResNet input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])  # ImageNet normalization used by the pretrained ResNet
])
3.3. Load the dataset:
# Load the dataset
data_path = 'D:/工坊/深度学习/img/weather_photos'
dataset = datasets.ImageFolder(data_path, transform=transform)
3.4. Split into training and test sets:
# Split into training and test sets
train_size = int(0.8 * len(dataset))
test_size = len(dataset) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(dataset, [train_size, test_size])
3.5. Create the data loaders:
- Provides batched loading and random shuffling of the dataset.
# Create the data loaders
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)
3.6. Use the ResNet-18 model:
- models.resnet18(pretrained=True) loads a pretrained ResNet-18; the final fully connected layer is then replaced so that its output matches the number of classes in your dataset.
# Use the ResNet-18 model
model = models.resnet18(pretrained=True)
3.7. Modify the fully connected layer to match the dataset:
num_classes = len(dataset.classes)  # dataset is the ImageFolder object defined earlier
num_ftrs = model.fc.in_features
model.fc = nn.Linear(num_ftrs, num_classes)
3.8. Define the loss function and optimizer:
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters())
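The optimizer above fine-tunes every parameter of the pretrained network. If you would rather train only the new classification head (a common alternative when the dataset is small; this is not what the script above does), a minimal sketch using the num_ftrs and num_classes defined in step 3.7:
for param in model.parameters():
    param.requires_grad = False  # freeze the pretrained backbone
model.fc = nn.Linear(num_ftrs, num_classes)  # the freshly created layer is trainable by default
optimizer = optim.Adam(model.fc.parameters())  # optimize only the new head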
3.9. Move the model to the GPU (if available):
- Checks whether a GPU is available and, if so, moves the model and data onto it to speed up training.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
3.10. Initialize lists to record the loss and accuracy of each epoch:
- Used to monitor loss and accuracy over the course of training.
train_losses = []
train_accuracies = []
3.11. Train the model and print the results:
- Trains over multiple epochs; each epoch iterates over the training set, runs the forward pass, computes the loss, backpropagates, updates the parameters, and accumulates the epoch loss and accuracy.
- The loss and accuracy of each epoch are printed so the training process can be monitored.
num_epochs = 50
for epoch in range(num_epochs):
    model.train()
    running_loss = 0.0
    correct = 0
    total = 0
    for inputs, labels in train_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
    epoch_loss = running_loss / len(train_loader)
    epoch_accuracy = 100 * correct / total
    train_losses.append(epoch_loss)
    train_accuracies.append(epoch_accuracy)
    print(f'Epoch {epoch + 1}/{num_epochs}, Loss: {epoch_loss}, Accuracy: {epoch_accuracy}%')
4.VGG-16:
4.1. Import the required libraries:
- Used for building and training the network and for handling the image data.
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
4.2. Data preprocessing and augmentation:
- Uses the standard normalization parameters expected by the pretrained VGG16 model.
transform = transforms.Compose([
    transforms.Resize((224, 224)),  # VGG16 expects 224x224 inputs
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])  # ImageNet normalization used by the pretrained VGG16
])
4.3. Load the dataset:
- Loads the dataset from the given path.
data_path = 'D:/工坊/Pytorch的框架/flower_photos'
dataset = datasets.ImageFolder(data_path, transform=transform)
4.4. Split into training and test sets:
- Randomly splits the dataset into a training set and a test set.
train_size = int(0.8 * len(dataset))
test_size = len(dataset) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(dataset, [train_size, test_size])
4.5. Create the data loaders:
- Creates data loaders for the training and test sets.
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)
4.6. Load the VGG16 model:
- Loads the pretrained VGG16 model with models.vgg16(pretrained=True).
model = models.vgg16(pretrained=True)
4.7. Modify the model to match the number of classes in the dataset:
- Replaces the last layer of the VGG16 classifier so that its output matches the number of classes in your dataset.
num_classes = len(dataset.classes)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
4.8. Define the loss function and optimizer:
- Uses the cross-entropy loss and the Adam optimizer.
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters())
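The remaining VGG-16 training steps (moving the model to the GPU, tracking per-epoch metrics, and the training loop) mirror the AlexNet and ResNet sections above; a minimal sketch reusing the same variable names:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

train_losses = []
train_accuracies = []

num_epochs = 50
for epoch in range(num_epochs):
    model.train()
    running_loss = 0.0
    correct = 0
    total = 0
    for inputs, labels in train_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
    epoch_loss = running_loss / len(train_loader)
    epoch_accuracy = 100 * correct / total
    train_losses.append(epoch_loss)
    train_accuracies.append(epoch_accuracy)
    print(f'Epoch {epoch + 1}/{num_epochs}, Loss: {epoch_loss}, Accuracy: {epoch_accuracy}%')
The VGG-19 variant follows the same recipe: load it with models.vgg19(pretrained=True), replace model.classifier[6] with nn.Linear(model.classifier[6].in_features, num_classes) as in step 4.7, and reuse the training code above.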