Original source code: https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html
Transfer learning goes through the following steps:
- Initialize the pretrained model
- Reshape the final layer(s) to have the same number of outputs as the number of classes in the new dataset
- Define which parameters we want the optimization algorithm to update during training
- Run the training step
from __future__ import print_function
from __future__ import division
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
import time
import os
import copy
print("PyTorch Version: ",torch.__version__)
print("Torchvision Version: ",torchvision.__version__)
INPUTS
# Top level data directory. Here we assume the format of the directory conforms
# to the ImageFolder structure
data_dir = "/data/hymenoptera_data"
# Models to choose from [resnet, alexnet, vgg, squeezenet, densenet, inception]
model_name = "squeezenet"
# Number of classes in the dataset
# ants and bees -> 2 labels
num_classes = 2
# Batch size for training (change depending on how much memory you have)
batch_size = 8
# Number of epochs to train for
num_epochs = 15
# Flag for feature extracting. When False, we finetune the whole model,
# when True we only update the reshaped layer params
feature_extract = True
This was my first time using a model called SqueezeNet, so I looked it up and found a good reference:
https://jayhey.github.io/deep%20learning/2018/05/26/SqueezeNet/
Roughly speaking, you can think of it as the model that uses far fewer parameters while matching the performance of comparable models.
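To get a feel for that difference, here is a minimal sketch of my own (not from the tutorial) that compares raw parameter counts of SqueezeNet and AlexNet; the exact numbers can vary slightly by torchvision version.

from torchvision import models

def count_params(model):
    # total number of scalar parameters in the model
    return sum(p.numel() for p in model.parameters())

print("squeezenet1_0:", count_params(models.squeezenet1_0()))  # roughly 1.25M
print("alexnet      :", count_params(models.alexnet()))        # roughly 61M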
Model Training and Validation Code
train_model handles the training and validation of the model.
GoogLeNet reference: https://m.blog.naver.com/PostView.nhn?blogId=laonple&logNo=220710707354&proxyReferer=https%3A%2F%2Fwww.google.com%2F
Moving from sigmoid to ReLU solved the vanishing gradient problem to a first degree, but once the network gets deep enough even ReLU cannot guarantee good performance. Inception tackles this with auxiliary classifiers, which seems to be why the is_inception parameter is provided as an option for handling the aux output.
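Before the training loop, here is a quick check of my own (not from the tutorial, and the return type varies a bit across torchvision versions) showing what the is_inception branch has to deal with: in train mode inception_v3 returns both the final logits and the auxiliary logits, and the tutorial weights the auxiliary loss by 0.4.

import torch
from torchvision import models

net = models.inception_v3(pretrained=False)   # aux_logits=True by default
net.train()                                   # the aux output is only produced in train mode
outputs, aux_outputs = net(torch.randn(2, 3, 299, 299))
print(outputs.shape, aux_outputs.shape)       # torch.Size([2, 1000]) torch.Size([2, 1000])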
def train_model(model, dataloaders, criterion, optimizer, num_epochs=25, is_inception=False):
    since = time.time()

    val_acc_history = []

    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0

    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)

        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluate mode

            running_loss = 0.0
            running_corrects = 0

            # Iterate over data.
            for inputs, labels in dataloaders[phase]:
                inputs = inputs.to(device)
                labels = labels.to(device)

                # zero the parameter gradients
                optimizer.zero_grad()

                # forward
                # track history if only in train
                with torch.set_grad_enabled(phase == 'train'):
                    # Get model outputs and calculate loss
                    # Special case for inception because in training it has an auxiliary output. In train
                    # mode we calculate the loss by summing the final output and the auxiliary output
                    # but in testing we only consider the final output.
                    if is_inception and phase == 'train':
                        # From https://discuss.pytorch.org/t/how-to-optimize-inception-model-with-auxiliary-classifiers/7958
                        outputs, aux_outputs = model(inputs)
                        loss1 = criterion(outputs, labels)
                        loss2 = criterion(aux_outputs, labels)
                        loss = loss1 + 0.4*loss2
                    else:
                        outputs = model(inputs)
                        loss = criterion(outputs, labels)

                    _, preds = torch.max(outputs, 1)

                    # backward + optimize only if in training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                # statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)

            epoch_loss = running_loss / len(dataloaders[phase].dataset)
            epoch_acc = running_corrects.double() / len(dataloaders[phase].dataset)

            print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))

            # deep copy the model
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())
            if phase == 'val':
                val_acc_history.append(epoch_acc)

        print()

    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:4f}'.format(best_acc))

    # load best model weights
    model.load_state_dict(best_model_wts)
    return model, val_acc_history
Set Model Parameters’ .requires_grad attribute
Feature extracting reference:
https://discuss.pytorch.org/t/how-to-extract-features-of-an-image-from-a-trained-model/119/2
AlexNet, for example, is made up of conv1~conv5 followed by fc6 and fc7. As shown in the link above, you choose which features to extract by slicing out the layers you want.
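As a concrete example, here is a minimal sketch of my own (based on the forum thread above) that slices off AlexNet's last classifier layer so the network outputs 4096-dimensional fc7 features instead of 1000 class scores; the classifier indices assume the standard torchvision AlexNet layout.

import torch
import torch.nn as nn
from torchvision import models

alexnet = models.alexnet(pretrained=True)
# drop the final 4096 -> 1000 fc layer, keep everything up to fc7
alexnet.classifier = nn.Sequential(*list(alexnet.classifier.children())[:-1])

features = alexnet(torch.randn(1, 3, 224, 224))
print(features.shape)   # torch.Size([1, 4096])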
def set_parameter_requires_grad(model, feature_extracting):
    if feature_extracting:
        for param in model.parameters():
            param.requires_grad = False
While feature extracting, we set .requires_grad = False; the default value is True.
INITIALIZE AND RESHAPE THE NETWORKS
Let's first go over the difference between fine-tuning and feature extraction. During feature extraction we only want to update the parameters of the final layer, so there is no need to compute gradients for the parameters we are not changing, which is why we set .requires_grad to False above. The newly created final layer that we reshape has .requires_grad = True by default, so only that layer's gradients will be updated. Note that inception_v3 takes a (299,299) input size, while the other models take (224,224).
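To convince myself of this, here is a quick sketch of my own that reuses set_parameter_requires_grad from above: after freezing a pretrained resnet18 and replacing its fc layer, only the new layer's parameters are left with requires_grad == True.

model = models.resnet18(pretrained=True)
set_parameter_requires_grad(model, feature_extracting=True)   # freeze every existing parameter
model.fc = nn.Linear(model.fc.in_features, 2)                 # new layer: requires_grad=True by default

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)   # ['fc.weight', 'fc.bias']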
Resnet
Model comparison references:
http://pythonkim.tistory.com/54
https://github.com/zzsza/Deep_Learning_starting_with_the_latest_papers/blob/master/Lecture_Note/02.%20DL%20Intro/01.%20CNN(AlexNet%2CVGG%2CGoogleNet%2CResNet).md
(fc): Linear(in_features=512, out_features=1000, bias=True)
model.fc = nn.Linear(512, num_classes)
Alexnet
(classifier): Sequential(
...
(6): Linear(in_features=4096, out_features=1000, bias=True)
)
model.classifier[6] = nn.Linear(4096,num_classes)
VGG
(classifier): Sequential(
...
(6): Linear(in_features=4096, out_features=1000, bias=True)
)
model.classifier[6] = nn.Linear(4096,num_classes)
Squeezenet
(classifier): Sequential(
(0): Dropout(p=0.5)
(1): Conv2d(512, 1000, kernel_size=(1, 1), stride=(1, 1))
(2): ReLU(inplace)
(3): AvgPool2d(kernel_size=13, stride=1, padding=0)
)
model.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=(1,1), stride=(1,1))
Densenet
(classifier): Linear(in_features=1024, out_features=1000, bias=True)
model.classifier = nn.Linear(1024, num_classes)
Inception v3
Unlike the other models, Inception v3 has two output layers. If you look at the GoogLeNet architecture diagram, you can see why.
(AuxLogits): InceptionAux(
...
(fc): Linear(in_features=768, out_features=1000, bias=True)
)
...
(fc): Linear(in_features=2048, out_features=1000, bias=True)
model.AuxLogits.fc = nn.Linear(768, num_classes)
model.fc = nn.Linear(2048, num_classes)
Initialize_model
in_features is the number of inputs to the final fully connected (output) layer. So nn.Linear receives in_features on its input side and the number of classes we want on its output side.
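As a quick cross-check (my own sketch, not from the tutorial), the in_features values used in initialize_model below can be read straight off the pretrained models and match the layer printouts above.

from torchvision import models

print(models.resnet18().fc.in_features)              # 512
print(models.densenet121().classifier.in_features)   # 1024
print(models.alexnet().classifier[6].in_features)    # 4096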
def initialize_model(model_name, num_classes, feature_extract, use_pretrained=True):
    # Initialize these variables which will be set in this if statement. Each of these
    # variables is model specific.
    model_ft = None
    input_size = 0

    if model_name == "resnet":
        """ Resnet18
        """
        model_ft = models.resnet18(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        num_ftrs = model_ft.fc.in_features
        model_ft.fc = nn.Linear(num_ftrs, num_classes)
        input_size = 224

    elif model_name == "alexnet":
        """ Alexnet
        """
        model_ft = models.alexnet(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        num_ftrs = model_ft.classifier[6].in_features
        model_ft.classifier[6] = nn.Linear(num_ftrs,num_classes)
        input_size = 224

    elif model_name == "vgg":
        """ VGG11_bn
        """
        model_ft = models.vgg11_bn(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        num_ftrs = model_ft.classifier[6].in_features
        model_ft.classifier[6] = nn.Linear(num_ftrs,num_classes)
        input_size = 224

    elif model_name == "squeezenet":
        """ Squeezenet
        """
        model_ft = models.squeezenet1_0(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        model_ft.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=(1,1), stride=(1,1))
        model_ft.num_classes = num_classes
        input_size = 224

    elif model_name == "densenet":
        """ Densenet
        """
        model_ft = models.densenet121(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        num_ftrs = model_ft.classifier.in_features
        model_ft.classifier = nn.Linear(num_ftrs, num_classes)
        input_size = 224

    elif model_name == "inception":
        """ Inception v3
        Be careful, expects (299,299) sized images and has auxiliary output
        """
        model_ft = models.inception_v3(pretrained=use_pretrained)
        set_parameter_requires_grad(model_ft, feature_extract)
        # Handle the auxilary net
        num_ftrs = model_ft.AuxLogits.fc.in_features
        model_ft.AuxLogits.fc = nn.Linear(num_ftrs, num_classes)
        # Handle the primary net
        num_ftrs = model_ft.fc.in_features
        model_ft.fc = nn.Linear(num_ftrs,num_classes)
        input_size = 299

    else:
        print("Invalid model name, exiting...")
        exit()

    return model_ft, input_size
# Initialize the model for this run
model_ft, input_size = initialize_model(model_name, num_classes, feature_extract, use_pretrained=True)
# Print the model we just instantiated
print(model_ft)
Load Data
Data preprocessing and loading.
# Data augmentation and normalization for training
# Just normalization for validation
data_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(input_size),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'val': transforms.Compose([
        transforms.Resize(input_size),
        transforms.CenterCrop(input_size),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
}
print("Initializing Datasets and Dataloaders...")
# Create training and validation datasets
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']}
# Create training and validation dataloaders
dataloaders_dict = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=batch_size, shuffle=True, num_workers=4) for x in ['train', 'val']}
# Detect if we have a GPU available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
Create The Optimizer
# Send the model to GPU
model_ft = model_ft.to(device)
# Gather the parameters to be optimized/updated in this run. If we are
# finetuning we will be updating all parameters. However, if we are
# doing feature extract method, we will only update the parameters
# that we have just initialized, i.e. the parameters with requires_grad
# is True.
params_to_update = model_ft.parameters()
print("Params to learn:")
if feature_extract:
    params_to_update = []
    for name,param in model_ft.named_parameters():
        if param.requires_grad == True:
            params_to_update.append(param)
            print("\t",name)
else:
    for name,param in model_ft.named_parameters():
        if param.requires_grad == True:
            print("\t",name)
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(params_to_update, lr=0.001, momentum=0.9)
If feature_extract is True, only the parameters of the newly created layer are added to the list so that only they are updated; if feature_extract is False, all parameters are updated.
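As a sanity check (my own sketch, reusing initialize_model and num_classes from above), you can compare how many parameter tensors end up trainable in the two modes for squeezenet: feature extraction leaves only the reshaped classifier's weight and bias, while finetuning keeps every layer trainable.

feat_model, _ = initialize_model("squeezenet", num_classes, feature_extract=True)
tune_model, _ = initialize_model("squeezenet", num_classes, feature_extract=False)

n_feat = sum(1 for p in feat_model.parameters() if p.requires_grad)
n_tune = sum(1 for p in tune_model.parameters() if p.requires_grad)
print(n_feat, n_tune)   # e.g. 2 trainable tensors vs. the full parameter list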
Run training and Validation step
# Setup the loss fxn
criterion = nn.CrossEntropyLoss()
# Train and evaluate
model_ft, hist = train_model(model_ft, dataloaders_dict, criterion, optimizer_ft, num_epochs=num_epochs, is_inception=(model_name=="inception"))