[cnn] Training a CNN on the MNIST dataset: a demo

Published 2023-05-25 11:55:38, author: J1nWan


Tips:

Open a conda prompt in the notebook's directory and run

jupyter nbconvert --to markdown test.ipynb

to convert the .ipynb file to Markdown. The same works for HTML and PDF:

jupyter nbconvert --to html test.ipynb

jupyter nbconvert --to pdf test.ipynb

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.utils.data as Data
from torchvision import datasets,transforms
import matplotlib.pyplot as plt
import numpy as np
input_size = 28    # image size: 28*28
num_class = 10     # number of classes
num_epochs = 3     # total training epochs
batch_size = 64    # images per batch

train_dataset = datasets.MNIST(
  root='data',
  train=True,
  transform=transforms.ToTensor(),
  download=True,
)

test_dataset = datasets.MNIST(
  root='data',
  train=False,
  transform=transforms.ToTensor(),
  download=True,
)

train_loader = torch.utils.data.DataLoader(
  dataset = train_dataset,
  batch_size = batch_size,
  shuffle = True,
)
test_loader = torch.utils.data.DataLoader(
  dataset = test_dataset,
  batch_size = batch_size,
  shuffle = False,  # no need to shuffle the test set
)


class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Sequential(       # input: (1, 28, 28)
            nn.Conv2d(
                in_channels=1,
                out_channels=16,          # number of feature maps
                kernel_size=5,            # convolution kernel size
                stride=1,                 # stride
                padding=2,
            ),                            # output: (16, 28, 28)
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),  # 2x2 pooling -> (16, 14, 14)
        )
        self.conv2 = nn.Sequential(       # input: (16, 14, 14)
            nn.Conv2d(16, 32, 5, 1, 2),   # output: (32, 14, 14)
            nn.ReLU(),
            nn.MaxPool2d(2),              # output: (32, 7, 7)
        )
        self.out = nn.Linear(32 * 7 * 7, 10)  # fully connected layer

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)  # flatten -> (batch_size, 32*7*7)
        output = self.out(x)
        return output, x
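The flatten size 32 * 7 * 7 used by the fully connected layer can be verified by pushing a dummy batch through the same conv stack:

```python
import torch
import torch.nn as nn

# The same conv layers as in the CNN above, just to check the output shapes.
conv = nn.Sequential(
    nn.Conv2d(1, 16, 5, 1, 2), nn.ReLU(), nn.MaxPool2d(2),   # -> (16, 14, 14)
    nn.Conv2d(16, 32, 5, 1, 2), nn.ReLU(), nn.MaxPool2d(2),  # -> (32, 7, 7)
)
x = torch.zeros(1, 1, 28, 28)      # dummy batch of one MNIST-sized image
y = conv(x)
print(y.shape)                     # torch.Size([1, 32, 7, 7])
print(y.view(1, -1).shape)         # torch.Size([1, 1568]), i.e. 32*7*7
```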
def accuracy(predictions, labels):
  pred = torch.max(predictions.data, 1)[1]
  rights = pred.eq(labels.data.view_as(pred)).sum()
  return rights, len(labels)
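A quick check of accuracy on toy logits (note it returns a (correct, total) pair rather than a ratio, so counts can be summed over batches):

```python
import torch

def accuracy(predictions, labels):
    pred = torch.max(predictions.data, 1)[1]           # argmax over classes
    rights = pred.eq(labels.data.view_as(pred)).sum()  # count of correct predictions
    return rights, len(labels)

# Three samples: argmax picks classes 2, 0, 1; the first two match the labels.
logits = torch.tensor([[0.1, 0.2, 0.9],
                       [0.8, 0.1, 0.1],
                       [0.2, 0.7, 0.1]])
labels = torch.tensor([2, 0, 0])
rights, total = accuracy(logits, labels)
print(rights.item(), total)   # 2 3
```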
device = 'cuda' if torch.cuda.is_available() else 'cpu'  # 'cuda' on this run
net = CNN().to(device)
criterion = nn.CrossEntropyLoss()  # loss function
# optimizer
optimizer = optim.Adam(net.parameters(), lr=0.001)
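As a side note, this network is small; rebuilding the same layers as an nn.Sequential and summing p.numel() over the parameters gives the trainable-parameter count:

```python
import torch.nn as nn

# Same layers as the CNN above, written as one Sequential for counting.
model = nn.Sequential(
    nn.Conv2d(1, 16, 5, 1, 2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 5, 1, 2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),
)
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(n_params)   # 28938 = 416 (conv1) + 12832 (conv2) + 15690 (linear)
```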

for epoch in range(num_epochs):  # range(num_epochs + 1) would run one epoch too many
  # keep per-batch results for this epoch
  train_rights = []
  for batch_idx, (data, target) in enumerate(train_loader):
    data = data.to(device)
    target = target.to(device)
    net.train()
    output = net(data)[0]
    loss = criterion(output, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    right = accuracy(output, target)
    train_rights.append(right)

    if batch_idx % 100 == 0:
      net.eval()
      val_rights = []
      with torch.no_grad():  # no gradients needed during evaluation
        for (data, target) in test_loader:
          data = data.to(device)
          target = target.to(device)
          output = net(data)[0]
          right = accuracy(output, target)
          val_rights.append(right)
      # compute accuracies
      train_r = (sum(i[0] for i in train_rights), sum(i[1] for i in train_rights))
      val_r = (sum(i[0] for i in val_rights), sum(i[1] for i in val_rights))

      print('epoch: {} [{}/{} ({:.0f}%)]\tloss: {:.2f}\ttrain acc: {:.2f}%\ttest acc: {:.2f}%'.format(
        epoch,
        batch_idx * batch_size,
        len(train_loader.dataset),
        100. * batch_idx / len(train_loader),
        loss.item(),
        100. * train_r[0].cpu().numpy() / train_r[1],
        100. * val_r[0].cpu().numpy() / val_r[1],
      ))

epoch: 0 [0/60000 (0%)]	loss: 2.31	train acc: 4.69%	test acc: 21.01%
epoch: 0 [6400/60000 (11%)]	loss: 0.51	train acc: 75.94%	test acc: 91.43%
epoch: 0 [12800/60000 (21%)]	loss: 0.28	train acc: 84.05%	test acc: 93.87%
epoch: 0 [19200/60000 (32%)]	loss: 0.15	train acc: 87.77%	test acc: 96.42%
epoch: 0 [25600/60000 (43%)]	loss: 0.08	train acc: 89.82%	test acc: 97.02%
epoch: 0 [32000/60000 (53%)]	loss: 0.14	train acc: 91.20%	test acc: 97.42%
epoch: 0 [38400/60000 (64%)]	loss: 0.04	train acc: 92.13%	test acc: 97.59%
epoch: 0 [44800/60000 (75%)]	loss: 0.08	train acc: 92.83%	test acc: 97.83%
epoch: 0 [51200/60000 (85%)]	loss: 0.12	train acc: 93.38%	test acc: 97.77%
epoch: 0 [57600/60000 (96%)]	loss: 0.19	train acc: 93.81%	test acc: 98.24%
epoch: 1 [0/60000 (0%)]	loss: 0.07	train acc: 95.31%	test acc: 97.90%
epoch: 1 [6400/60000 (11%)]	loss: 0.08	train acc: 97.96%	test acc: 98.27%
epoch: 1 [12800/60000 (21%)]	loss: 0.10	train acc: 97.99%	test acc: 98.30%
epoch: 1 [19200/60000 (32%)]	loss: 0.02	train acc: 98.07%	test acc: 98.20%
epoch: 1 [25600/60000 (43%)]	loss: 0.17	train acc: 98.09%	test acc: 98.40%
epoch: 1 [32000/60000 (53%)]	loss: 0.12	train acc: 98.11%	test acc: 98.68%
epoch: 1 [38400/60000 (64%)]	loss: 0.05	train acc: 98.11%	test acc: 98.63%
epoch: 1 [44800/60000 (75%)]	loss: 0.10	train acc: 98.14%	test acc: 98.70%
epoch: 1 [51200/60000 (85%)]	loss: 0.04	train acc: 98.19%	test acc: 98.56%
epoch: 1 [57600/60000 (96%)]	loss: 0.03	train acc: 98.23%	test acc: 98.67%
epoch: 2 [0/60000 (0%)]	loss: 0.06	train acc: 98.44%	test acc: 98.32%
epoch: 2 [6400/60000 (11%)]	loss: 0.03	train acc: 98.64%	test acc: 98.63%
epoch: 2 [12800/60000 (21%)]	loss: 0.05	train acc: 98.70%	test acc: 98.62%
epoch: 2 [19200/60000 (32%)]	loss: 0.01	train acc: 98.72%	test acc: 98.69%
epoch: 2 [25600/60000 (43%)]	loss: 0.01	train acc: 98.70%	test acc: 98.76%
epoch: 2 [32000/60000 (53%)]	loss: 0.03	train acc: 98.70%	test acc: 98.76%
epoch: 2 [38400/60000 (64%)]	loss: 0.07	train acc: 98.70%	test acc: 98.62%
epoch: 2 [44800/60000 (75%)]	loss: 0.07	train acc: 98.72%	test acc: 98.60%
epoch: 2 [51200/60000 (85%)]	loss: 0.03	train acc: 98.71%	test acc: 98.99%
epoch: 2 [57600/60000 (96%)]	loss: 0.05	train acc: 98.74%	test acc: 98.84%
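The demo stops after training. If you want to reuse the weights later, the usual next step is a state_dict save/load round trip. A sketch with a stand-in module (the real code would save `net.state_dict()`; the filename mnist_cnn.pt is just a placeholder):

```python
import torch
import torch.nn as nn

# Stand-in for the trained CNN; in the demo you would save net.state_dict().
model = nn.Linear(32 * 7 * 7, 10)
torch.save(model.state_dict(), 'mnist_cnn.pt')   # hypothetical filename

# Later: rebuild the same architecture, then restore the weights.
restored = nn.Linear(32 * 7 * 7, 10)
restored.load_state_dict(torch.load('mnist_cnn.pt'))
restored.eval()   # switch to inference mode before predicting
```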