{ "cells": [ { "cell_type": "markdown", "id": "21081332", "metadata": {}, "source": [ "# Introduction to Pytorch on a CPU\n", "\n", "In this notebook demonstrates how to use [Pytorch](https://pytorch.org). Pytorch is an open-source optimized tensor library for deep learning using CPUs and GPUs (hardware accelerators).\n", "\n", "To run this notebook you need the following packages:\n", "\n", "- Pytorch\n", "- Scikit-learn\n", "- Numpy\n", "- Matplotlib" ] }, { "cell_type": "markdown", "id": "923565f5", "metadata": {}, "source": [ "You can run this on the SCC using OnDemand Jupyter notebook or Desktop sessions." ] }, { "cell_type": "markdown", "id": "79f19029", "metadata": {}, "source": [ "## Notebook outline\n", "\n", "1. Neural network\n", "2. CPU vs GPU\n", "3. Autograd\n", "4. Convoluational neural networks\n", "5. Natural language processing" ] }, { "cell_type": "markdown", "id": "a2d680b0", "metadata": {}, "source": [ "### Neural network\n", "\n", "This example implements a neural network to classify a tumor as malignant or benign. This is an example of supervised learning. The neural network is built from scratch. \n", "\n", "The following essential objects of the Pytorch library are introduced:\n", "\n", "- Dataset\n", "- Dataloader\n", "- Module\n", "- Loss functions\n", "- Optimizers" ] }, { "cell_type": "code", "execution_count": null, "id": "b696469a", "metadata": {}, "outputs": [], "source": [ "# Import the modules needed for our example.\n", "\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "\n", "# torch imports\n", "import torch\n", "import torch.nn as nn\n", "\n", "# sklearn imports\n", "from sklearn.datasets import load_breast_cancer\n", "from sklearn.model_selection import train_test_split\n", "from sklearn.preprocessing import StandardScaler\n", "\n", "# torch.utils.data imports\n", "from torch.utils.data import Dataset, DataLoader" ] }, { "cell_type": "markdown", "id": "73dd4f2b", "metadata": {}, "source": [ "#### Load and prepare the dataset\n", "\n", "We use the Wisconsin breast cancer dataset. The features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image. 
" ] }, { "cell_type": "code", "execution_count": null, "id": "7e23987f", "metadata": {}, "outputs": [], "source": [ "# Load the breast cancer dataset as a dataframe\n", "dataset = load_breast_cancer(as_frame=True)\n", "\n", "# Get a description of the datasetb\n", "print(dataset.DESCR)" ] }, { "cell_type": "code", "execution_count": null, "id": "582128e9-c26d-42d0-9d54-998909e0eae0", "metadata": {}, "outputs": [], "source": [ "# Get the feature names\n", "print(\"Feature names: \", dataset.feature_names)\n", "\n", "# Get the target names\n", "print(\"Target names: \", dataset.target_names)" ] }, { "cell_type": "code", "execution_count": null, "id": "371db605", "metadata": {}, "outputs": [], "source": [ "# X is a Pandas dataframe\n", "# The columns are the features \n", "X = dataset.data\n", "# y is a Pandas series with the target class labels (0 - malignant, 1 - benign)\n", "y = dataset.target" ] }, { "cell_type": "code", "execution_count": null, "id": "e246693e", "metadata": {}, "outputs": [], "source": [ "X.head()" ] }, { "cell_type": "code", "execution_count": null, "id": "b021a15a", "metadata": {}, "outputs": [], "source": [ "# Get information on the dataframe\n", "# Column names\n", "# NaN counts for each column\n", "# Data type of each column \n", "X.info()" ] }, { "cell_type": "code", "execution_count": null, "id": "0a59023d", "metadata": {}, "outputs": [], "source": [ "# Show the last five entries of the targets\n", "y.tail()" ] }, { "cell_type": "code", "execution_count": null, "id": "9c3f1dd2", "metadata": { "scrolled": true }, "outputs": [], "source": [ "y.info()" ] }, { "cell_type": "code", "execution_count": null, "id": "6d51e70a", "metadata": {}, "outputs": [], "source": [ "y.value_counts()" ] }, { "cell_type": "code", "execution_count": null, "id": "ab3df8b5", "metadata": {}, "outputs": [], "source": [ "# Using the train_test_split method we split 80% of the data into the X_train, y_train numpy arrays\n", "# The remaining 20% is our X_test and y_test \n", "X_train, X_test, y_train, y_test = train_test_split(X.to_numpy(), y.to_numpy(), test_size=0.20, random_state=10)\n", "\n", "# Create a StandardScaler object\n", "sc = StandardScaler()\n", "\n", "# The StandardScaler standardizes features by removing the mean and scaling to unit variance\n", "# Prevents features with larger variances to dominate\n", "# We only need to apply this to our training/testing input data since the output is binary 0/1\n", "X_train = sc.fit_transform(X_train)\n", "X_test = sc.fit_transform(X_test)" ] }, { "cell_type": "markdown", "id": "ffb47263", "metadata": {}, "source": [ "#### Dataset class\n", "\n", "The dataset class retrieves the dataset's features and labels one sample at a time. 
{ "cell_type": "markdown", "id": "ffb47263", "metadata": {}, "source": [ "#### Dataset class\n", "\n", "The `Dataset` class retrieves the dataset's features and labels one sample at a time. To create your own dataset class, subclass `Dataset` and override three methods:\n", "\n", "- `__init__(self, params)`, which runs when the Dataset object is created\n", "- `__len__(self)`, which returns the number of samples in the dataset\n", "- `__getitem__(self, index)`, which returns a sample from the dataset at a given index" ] },
{ "cell_type": "code", "execution_count": null, "id": "4e33fe6c", "metadata": {}, "outputs": [], "source": [ "# Create a dataset class called WisconsinDataset\n", "class WisconsinDataset(Dataset):\n", "    def __init__(self, X_train, y_train):\n", "        # Need to convert float64 to float32\n", "        self.X = torch.from_numpy(X_train.astype(np.float32))\n", "        # Need to convert int64 to float32 and use the\n", "        # unsqueeze function to create an output vector with appropriate dimensions\n", "        self.y = torch.from_numpy(y_train.astype(np.float32)).unsqueeze(1)\n", "        self.len = self.X.shape[0]\n", "\n", "    def __len__(self):\n", "        return self.len\n", "\n", "    def __getitem__(self, index):\n", "        return self.X[index], self.y[index]" ] },
{ "cell_type": "code", "execution_count": null, "id": "26ffbe4c", "metadata": {}, "outputs": [], "source": [ "# Instantiate the training data\n", "traindata = WisconsinDataset(X_train, y_train)\n", "\n", "print(traindata[0])\n", "print(len(traindata))" ] },
{ "attachments": {}, "cell_type": "markdown", "id": "dc2ab229", "metadata": {}, "source": [ "#### Dataloader class\n", "\n", "The DataLoader class is an iterable object that abstracts the complex process of passing data from the dataset to the model. It passes samples in minibatches, it can reshuffle the data at every epoch to reduce model overfitting, and it can use Python's multiprocessing to speed up data retrieval." ] },
{ "cell_type": "code", "execution_count": null, "id": "89e4e71f", "metadata": {}, "outputs": [], "source": [ "# Instantiate the dataloader object\n", "batch_size = 4\n", "trainloader = DataLoader(traindata, batch_size=batch_size)" ] },
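{ "cell_type": "markdown", "id": "a1a1a105", "metadata": {}, "source": [ "As a quick illustration (our addition), we can pull a single minibatch from the `trainloader` to see what it yields: a pair of input and target tensors whose first dimension is the batch size." ] },
{ "cell_type": "code", "execution_count": null, "id": "a1a1a106", "metadata": {}, "outputs": [], "source": [ "# Fetch one minibatch from the dataloader (illustration only)\n", "batch_X, batch_y = next(iter(trainloader))\n", "print(\"Batch input shape:  \", batch_X.shape)  # (batch_size, number of features)\n", "print(\"Batch target shape: \", batch_y.shape)  # (batch_size, 1)" ] },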
{ "cell_type": "markdown", "id": "85cccdfa", "metadata": {}, "source": [ "#### Module class\n", "\n", "The torch `nn.Module` class is the base class for implementing a model. If you are creating a neural network model from scratch, this is where you define the model architecture. You only need to override the following two methods:\n", "\n", "- `__init__(self)`, which sets up the architecture\n", "- `forward(self, x)`, which defines the forward propagation\n", "\n", "You can also define other methods in this class. For instance, you could create methods to train, validate, or test the model.\n", "\n", "For details on the kinds of layers you can create, see the documentation on the `torch.nn` module:\n", "\n", "https://pytorch.org/docs/stable/nn.html\n", "\n", "For details on the kinds of activation functions you can use, see:\n", "\n", "https://pytorch.org/docs/stable/nn.html#non-linear-activations-weighted-sum-nonlinearity" ] },
{ "cell_type": "code", "execution_count": null, "id": "fca4a13d", "metadata": {}, "outputs": [], "source": [ "# Set the number of features (number of columns of X)\n", "input_dim = X_train.shape[1]\n", "\n", "# Set the number of units in the hidden layer\n", "hidden_layer_dim = 4\n", "\n", "# Set the output dimension (one unit for binary classification)\n", "output_dim = 1" ] },
{ "cell_type": "code", "execution_count": null, "id": "9394ea81", "metadata": {}, "outputs": [], "source": [ "# Define the Neural Network class\n", "class NeuralNetwork(torch.nn.Module):\n", "    def __init__(self, input_dim, hidden_layer_dim, output_dim):\n", "        super(NeuralNetwork, self).__init__()\n", "        self.linear1 = nn.Linear(input_dim, hidden_layer_dim)\n", "        self.linear2 = nn.Linear(hidden_layer_dim, output_dim)\n", "\n", "    def forward(self, x):\n", "        x = torch.relu(self.linear1(x))\n", "        x = torch.sigmoid(self.linear2(x))\n", "        return x" ] },
{ "cell_type": "code", "execution_count": null, "id": "9fae9c97", "metadata": {}, "outputs": [], "source": [ "# Set a random seed for reproducibility\n", "torch.manual_seed(42);\n", "\n", "# Instantiate the Neural Network class\n", "clf = NeuralNetwork(input_dim, hidden_layer_dim, output_dim)\n", "\n", "# Print out details about the layers of the model\n", "print(clf.parameters)" ] },
{ "cell_type": "code", "execution_count": null, "id": "17365129", "metadata": {}, "outputs": [], "source": [ "# Access the layers of a model\n", "\n", "# Unpack the weight and bias parameters of layer 1 into a list\n", "[w, b] = clf.linear1.parameters()\n", "\n", "# Print the weights of layer 1\n", "print(w)\n", "\n", "# Print the biases in layer 1\n", "print(b)\n", "\n", "print(\"Type of w: \", type(w))" ] },
{ "cell_type": "code", "execution_count": null, "id": "cade67a7", "metadata": {}, "outputs": [], "source": [ "# To access the tensor data we use the .data attribute\n", "print(w.data)\n", "\n", "print(\"Type of w.data \", type(w.data))" ] },
{ "attachments": {}, "cell_type": "markdown", "id": "5ccf08c4", "metadata": {}, "source": [ "#### Loss function\n", "\n", "During training, the loss function tells you how close your predicted value is to the actual output value. The choice of loss function depends on the activation function used in the output layer and what your model is predicting.\n", "\n", "For example:\n", "- Regression\n", "    - Mean squared error, `nn.MSELoss`\n", "- Binary classification\n", "    - Binary cross-entropy, `nn.BCELoss`\n", "\n", "Here is documentation on the loss functions available in PyTorch:\n", "\n", "https://pytorch.org/docs/stable/nn.html#loss-functions" ] },
{ "cell_type": "code", "execution_count": null, "id": "db3f3b0c", "metadata": {}, "outputs": [], "source": [ "# Define the loss function\n", "loss_function = nn.BCELoss()" ] },
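{ "cell_type": "markdown", "id": "a1a1a107", "metadata": {}, "source": [ "To build some intuition for binary cross-entropy (an illustration we added, separate from the training pipeline), the cell below evaluates `loss_function` on hand-made predictions: predictions near the targets give a small loss, while confident wrong predictions are penalized heavily." ] },
{ "cell_type": "code", "execution_count": null, "id": "a1a1a108", "metadata": {}, "outputs": [], "source": [ "# Illustration: BCELoss penalizes confident wrong predictions heavily\n", "example_targets = torch.tensor([[1.0], [0.0]])\n", "good_preds = torch.tensor([[0.9], [0.1]])  # close to the targets\n", "bad_preds = torch.tensor([[0.1], [0.9]])   # confidently wrong\n", "print(\"Loss for good predictions: \", loss_function(good_preds, example_targets).item())\n", "print(\"Loss for bad predictions:  \", loss_function(bad_preds, example_targets).item())" ] },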
\n", "\n", "The process of training a model requires multiple steps of forward propagation (data flowing through the model), backward propagation (updating model weights to minimize the cost function). One step of forward and backward propagation is called an epoch. As you increase the number of epochs to train your model, the model should become more accurate making predicting on the training data.\n", "\n", "For details on the optimizers that you can choose:\n", "\n", "https://pytorch.org/docs/stable/optim.html\n", "\n", "Backward propagation requires calculating partial derivatives of tensors. This is done efficiently using Pytroch Autograd capabilities. This is discussed further in the Autograd section of this notebook." ] }, { "cell_type": "code", "execution_count": null, "id": "56e293a8", "metadata": {}, "outputs": [], "source": [ "# Set the optimzer as Stochastic Gradient Descent with a learning rate of 0.01\n", "optimizer = torch.optim.SGD(clf.parameters(), lr=0.01)" ] }, { "cell_type": "code", "execution_count": null, "id": "3ea9a5c9", "metadata": {}, "outputs": [], "source": [ "# Set the number of epochs\n", "epochs = 10" ] }, { "cell_type": "code", "execution_count": null, "id": "a5144186", "metadata": {}, "outputs": [], "source": [ "# Define correct and total variables that are initially set to zero\n", "# These variables are used to compute the training accuracy\n", "correct, total = 0, 0\n", "\n", "# Define empty list variables that store the losses and accuracies at each epoch\n", "losses = []\n", "accuracies = []\n", "\n", "# Loop over the number of epochs\n", "for epoch in range(epochs):\n", " # Iterate over the minibatches in the trainloader\n", " for data in trainloader:\n", " # Get the input and target values in the minibatch\n", " inputs, targets = data\n", " # Forward propagation step\n", " outputs = clf(inputs)\n", " # Compute the loss\n", " loss = loss_function(outputs, targets)\n", " \n", " # Compute prediction, anything greater than 0.5 is rounded up to 1, less than 0.5 rounded down to 0\n", " predicted = torch.round(outputs.data)\n", " total += targets.size(0)\n", " correct += (predicted == targets).sum().item()\n", " \n", " # Zero out previous epoch gradients\n", " optimizer.zero_grad() \n", " # Backward propagation\n", " loss.backward() \n", " # Update model parameters \n", " optimizer.step()\n", " # Compute accuracy \n", " acc = correct / total\n", " \n", " losses.append(loss.item())\n", " accuracies.append(acc)\n", " print(\"epoch {} loss : {:.5f} accuracy : {:.5f}\".format(epoch, loss, acc))" ] }, { "cell_type": "code", "execution_count": null, "id": "98707752", "metadata": {}, "outputs": [], "source": [ "fig, (ax1, ax2) = plt.subplots(1, 2, sharex=True)\n", "# Adjust figure and axes properties\n", "fig.set_size_inches(10, 5)\n", "ax1.spines[['top','bottom','left','right']].set_linewidth(2)\n", "ax1.tick_params(width=2, labelsize=12)\n", "ax2.spines[['top','bottom','left','right']].set_linewidth(2)\n", "ax2.tick_params(width=2, labelsize=12)\n", "\n", "# Give the plot title and axis labels\n", "ax1.plot(losses)\n", "ax1.set_title('Loss vs Epochs', fontsize=20)\n", "ax1.set_xlabel('Epochs', fontsize=16)\n", "ax1.set_ylabel('Loss', fontsize=16)\n", "ax2.plot(accuracies)\n", "ax2.set_title('Accuracy vs Epochs', fontsize=20)\n", "ax2.set_xlabel('Epochs', fontsize=16)\n", "ax1.set_ylabel('Accuracy', fontsize=16)\n", "\n", "# Adjust the subplots to look nice\n", "plt.subplots_adjust(left=0.1,\n", " bottom=0.1,\n", " right=0.9,\n", " top=0.9,\n", " wspace=0.4,\n", 
" hspace=0.4)\n", "# Display the plot\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "c3e88060", "metadata": {}, "source": [ "#### Test the model" ] }, { "cell_type": "code", "execution_count": null, "id": "7746b7f9", "metadata": {}, "outputs": [], "source": [ "testdata = WisconsinDataset(X_test, y_test)\n", "testloader = DataLoader(testdata, batch_size=batch_size)" ] }, { "cell_type": "code", "execution_count": null, "id": "319c87b0", "metadata": {}, "outputs": [], "source": [ "correct, total = 0, 0\n", "# no need to calculate gradients when making predictions\n", "with torch.no_grad():\n", " for data in testloader:\n", " inputs, labels = data\n", " # calculate output by running through the network\n", " outputs = clf(inputs)\n", " # get the predictions\n", " predicted = torch.round(outputs.data)\n", " # update results\n", " total += labels.size(0)\n", " correct += (predicted == labels).sum().item()\n", " print(f'Accuracy of the network on the {len(testdata)} test data: {100 * correct // total} %')" ] }, { "cell_type": "markdown", "id": "9b58ddc6", "metadata": {}, "source": [ "### CPU vs GPU\n", "\n", "The above code blocks were executed on a central processing unit (CPU) of our computer. The CPU interprets, processes, and executes instructions from the software that is running on the machine. As model and datasest sizes scale up, a CPU takes longer to execute the mathematical operations used in the forward and backward propagation steps during training. This leads to longer training times.\n", "\n", "Speical hardware called a graphics processing unit (GPU) exists that can speed up training times. GPUs are optimized for parallelizing the computations during model training. This helps to speed up computations. It is why GPUs on the SCC are a high-demand resource.\n", "\n", "We will now demonstrate how to adapt the previous code to run on a GPU. The only modifications to the previous code are calling functions to send the model parameters and the minibatches to the GPU." 
{ "cell_type": "markdown", "id": "9b58ddc7", "metadata": {}, "source": [ "We will now demonstrate how to adapt the previous code to run on a GPU. The only modifications to the previous code are the calls that send the model parameters and the minibatches to the GPU." ] },
{ "cell_type": "code", "execution_count": null, "id": "632a65d3", "metadata": {}, "outputs": [], "source": [ "# Function returns True if a GPU is available (False otherwise)\n", "torch.cuda.is_available()" ] },
{ "cell_type": "code", "execution_count": null, "id": "7e3aba69", "metadata": {}, "outputs": [], "source": [ "# Set the device name to cuda when a GPU is available, cpu otherwise\n", "if torch.cuda.is_available():\n", "    device = torch.device(\"cuda\")\n", "else:\n", "    device = torch.device(\"cpu\")\n", "print(\"Code is running on :\", \"CPU\" if device.type==\"cpu\" else \"GPU\")" ] },
{ "cell_type": "code", "execution_count": null, "id": "eed834f4", "metadata": {}, "outputs": [], "source": [ "# Set a random seed for reproducibility\n", "torch.manual_seed(42);\n", "\n", "# Instantiate a second Neural Network class\n", "clf2 = NeuralNetwork(input_dim, hidden_layer_dim, output_dim)\n", "loss_function2 = nn.BCELoss()\n", "optimizer2 = torch.optim.SGD(clf2.parameters(), lr=0.01)" ] },
{ "cell_type": "code", "execution_count": null, "id": "7da85041", "metadata": {}, "outputs": [], "source": [ "# Send the model to the GPU\n", "clf2.to(device)" ] },
{ "cell_type": "code", "execution_count": null, "id": "45c68046", "metadata": {}, "outputs": [], "source": [ "# Define correct and total variables that are initially set to zero\n", "# These variables are used to compute the training accuracy\n", "correct, total = 0, 0\n", "\n", "# Define empty list variables that store the losses and accuracies at each epoch\n", "losses = []\n", "accuracies = []\n", "\n", "# Loop over the number of epochs\n", "for epoch in range(epochs):\n", "    # Iterate over the minibatches in the trainloader\n", "    for data in trainloader:\n", "        # Get the input and target values in the minibatch\n", "        inputs, targets = data\n", "        # Send the inputs and targets minibatches to the device\n", "        # Note that .to() is not in-place for tensors, so we must reassign\n", "        inputs = inputs.to(device)\n", "        targets = targets.to(device)\n", "        # Forward propagation step\n", "        outputs = clf2(inputs)\n", "        # Compute the loss\n", "        loss = loss_function2(outputs, targets)\n", "\n", "        # Compute prediction, anything greater than 0.5 is rounded up to 1, less than 0.5 rounded down to 0\n", "        predicted = torch.round(outputs.data)\n", "        total += targets.size(0)\n", "        correct += (predicted == targets).sum().item()\n", "\n", "        # Zero out gradients from the previous step\n", "        optimizer2.zero_grad()\n", "        # Backward propagation\n", "        loss.backward()\n", "        # Update model parameters\n", "        optimizer2.step()\n", "        # Compute accuracy\n", "        acc = correct / total\n", "\n", "    losses.append(loss.item())\n", "    accuracies.append(acc)\n", "    print(\"epoch {} loss : {:.5f} accuracy : {:.5f}\".format(epoch, loss, acc))" ] },
{ "cell_type": "markdown", "id": "93f5b711", "metadata": {}, "source": [ "### Autograd\n", "\n", "Autograd is the feature of PyTorch that makes backpropagation-based neural network training flexible and fast. It allows quick and easy computation of gradients (partial derivatives). We will illustrate some of the basic features of autograd in this section.\n", "\n", "Understanding how to enable and disable autograd is important. For example, when you want to finetune a pretrained model, you can add downstream layers to it and then disable gradient computation for the pretrained weights, so that only the downstream layers have their gradients computed and their weights updated." ] },
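{ "cell_type": "markdown", "id": "a1a1a111", "metadata": {}, "source": [ "Before the longer example below, here is a minimal illustration (our addition): for `y = x**2`, autograd computes the derivative `dy/dx = 2*x`." ] },
{ "cell_type": "code", "execution_count": null, "id": "a1a1a112", "metadata": {}, "outputs": [], "source": [ "# Minimal autograd illustration (our addition): d(x**2)/dx = 2*x\n", "x = torch.tensor(3.0, requires_grad=True)\n", "y = x ** 2\n", "# backward() computes gradients of y with respect to tensors that have requires_grad=True\n", "y.backward()\n", "# x.grad holds dy/dx evaluated at x = 3.0, which should be 6.0\n", "print(x.grad)" ] },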
\n", "\n", "#### Example\n", "\n", "Consider the following sequence of operations:\n", "\n", "```\n", "a = torch.linspace(0., 2.*math.pi, steps=25, requires_grad=True)\n", "b = torch.sin(a)\n", "c = 2 * b\n", "d = c + 1\n", "out = d.sum()\n", "```" ] }, { "cell_type": "code", "execution_count": null, "id": "ec8b6d99", "metadata": {}, "outputs": [], "source": [ "import math \n", "\n", "a = torch.linspace(0., 2.*math.pi, steps=25, requires_grad=True)\n", "b = torch.sin(a)\n", "c = 2 * b\n", "d = c + 1\n", "out = d.sum()" ] }, { "cell_type": "code", "execution_count": null, "id": "80ae0d46", "metadata": { "scrolled": true }, "outputs": [], "source": [ "plt.plot(a.detach(), b.detach())\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "id": "150cbd73", "metadata": { "scrolled": true }, "outputs": [], "source": [ "print(b)\n", "\n", "print(c)\n", "\n", "print(d)\n", "\n", "print(out)" ] }, { "cell_type": "code", "execution_count": null, "id": "33bac632", "metadata": {}, "outputs": [], "source": [ "print('d:')\n", "print(d.grad_fn)\n", "print(d.grad_fn.next_functions)\n", "print(d.grad_fn.next_functions[0][0].next_functions)\n", "print(d.grad_fn.next_functions[0][0].next_functions[0][0].next_functions)\n", "print(d.grad_fn.next_functions[0][0].next_functions[0][0].next_functions[0][0].next_functions)\n", "print('\\nc:')\n", "print(c.grad_fn)\n", "print('\\nb:')\n", "print(b.grad_fn)\n", "print('\\na:')\n", "print(a.grad_fn)" ] }, { "cell_type": "code", "execution_count": null, "id": "170dd1a2", "metadata": {}, "outputs": [], "source": [ "out.backward()\n", "print(a.grad)\n", "plt.plot(a.detach(), a.grad.detach())\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "1b05f928", "metadata": {}, "source": [ "#### Disabling autograd\n", "\n", "The way to disable a gradient being calculated is by setting the property `requires_grad = False`. To achieve this in a model you can adapt the following code:\n", "\n", "```\n", "for param in model.parameters():\n", " param.requires_grad = False\n", "```\n", "\n", "We will show a more explicit example of this when we work with a CNN model." ] }, { "cell_type": "markdown", "id": "ff1f69a0", "metadata": {}, "source": [ "### Convolutional neural networks\n", "\n", "Convolutional neural networks make predictions on images directly. In this example we will do the following:\n", "\n", "- **Finetuning the ConvNet**: Instead of random initialization, we\n", " initialize the network with a pretrained network, like the one that is\n", " trained on imagenet 1000 dataset. Rest of the training looks as\n", " usual.\n", "- **ConvNet as fixed feature extractor**: Here, we will freeze the weights\n", " for all of the network except that of the final fully connected\n", " layer. 
{ "cell_type": "markdown", "id": "ff1f69a0", "metadata": {}, "source": [ "### Convolutional neural networks\n", "\n", "Convolutional neural networks make predictions on images directly. In this example we will demonstrate two transfer-learning scenarios:\n", "\n", "- **Finetuning the ConvNet**: Instead of random initialization, we\n", "  initialize the network with a pretrained network, like one that was\n", "  trained on the ImageNet-1000 dataset. The rest of the training looks as\n", "  usual.\n", "- **ConvNet as fixed feature extractor**: Here, we will freeze the weights\n", "  for all of the network except that of the final fully connected\n", "  layer. This last fully connected layer is replaced with a new one\n", "  with random weights and only this layer is trained.\n" ] },
{ "cell_type": "code", "execution_count": null, "id": "6b1006b3", "metadata": {}, "outputs": [], "source": [ "import torch\n", "import torch.nn as nn\n", "import torch.optim as optim\n", "from torch.optim import lr_scheduler\n", "import torch.backends.cudnn as cudnn\n", "import numpy as np\n", "import torchvision\n", "from torchvision import datasets, models, transforms\n", "import matplotlib.pyplot as plt\n", "import time\n", "import os\n", "from PIL import Image\n", "from tempfile import TemporaryDirectory\n", "\n", "cudnn.benchmark = True\n", "plt.ion()   # interactive mode" ] },
{ "cell_type": "markdown", "id": "2a3e2692", "metadata": {}, "source": [ "#### Load Data\n", "\n", "We will use the torchvision and torch.utils.data packages for loading the\n", "data.\n", "\n", "The problem we're going to solve today is to train a model to classify\n", "**ants** and **bees**. We have about 120 training images each for ants and bees.\n", "There are 75 validation images for each class. Usually, this is a very\n", "small dataset to generalize from if trained from scratch. Since we\n", "are using transfer learning, we should be able to generalize reasonably\n", "well.\n", "\n", "This dataset is a very small subset of ImageNet. The dataset is in the tutorial directory." ] },
{ "cell_type": "code", "execution_count": null, "id": "c1f95524", "metadata": {}, "outputs": [], "source": [ "# Data augmentation and normalization for training\n", "# Just normalization for validation\n", "data_transforms = {\n", "    'train': transforms.Compose([\n", "        transforms.RandomResizedCrop(224),\n", "        transforms.RandomHorizontalFlip(),\n", "        transforms.ToTensor(),\n", "        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n", "    ]),\n", "    'val': transforms.Compose([\n", "        transforms.Resize(256),\n", "        transforms.CenterCrop(224),\n", "        transforms.ToTensor(),\n", "        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n", "    ]),\n", "}\n", "\n", "data_dir = 'data/hymenoptera_data'\n", "image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),\n", "                                          data_transforms[x])\n", "                  for x in ['train', 'val']}\n", "dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,\n", "                                              shuffle=True, num_workers=4)\n", "               for x in ['train', 'val']}\n", "dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}\n", "class_names = image_datasets['train'].classes\n", "\n", "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")" ] },
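{ "cell_type": "markdown", "id": "a1a1a113", "metadata": {}, "source": [ "As a quick check (our addition), we can confirm what was just loaded: the number of images in each split, the class names, and the device that will be used." ] },
{ "cell_type": "code", "execution_count": null, "id": "a1a1a114", "metadata": {}, "outputs": [], "source": [ "# Quick check of what was just loaded\n", "print(\"Dataset sizes: \", dataset_sizes)\n", "print(\"Class names:   \", class_names)\n", "print(\"Device:        \", device)" ] },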
{ "cell_type": "markdown", "id": "544f9b85", "metadata": {}, "source": [ "#### Visualize a few images\n", "\n", "Let's visualize a few training images so as to understand the data\n", "augmentations." ] },
{ "cell_type": "code", "execution_count": null, "id": "55416289", "metadata": {}, "outputs": [], "source": [ "def imshow(inp, title=None):\n", "    \"\"\"Display image for Tensor.\"\"\"\n", "    inp = inp.numpy().transpose((1, 2, 0))\n", "    mean = np.array([0.485, 0.456, 0.406])\n", "    std = np.array([0.229, 0.224, 0.225])\n", "    inp = std * inp + mean\n", "    inp = np.clip(inp, 0, 1)\n", "    plt.imshow(inp)\n", "    if title is not None:\n", "        plt.title(title)\n", "    plt.pause(0.001)  # pause a bit so that plots are updated\n", "\n", "\n", "# Get a batch of training data\n", "inputs, classes = next(iter(dataloaders['train']))\n", "\n", "# Make a grid from batch\n", "out = torchvision.utils.make_grid(inputs)\n", "\n", "imshow(out, title=[class_names[x] for x in classes])" ] },
{ "cell_type": "markdown", "id": "ebd0596a", "metadata": {}, "source": [ "#### Train the model\n", "\n", "Now, let's write a general function to train a model. Here, we will\n", "illustrate:\n", "\n", "- Scheduling the learning rate\n", "- Saving the best model\n", "\n", "In the following, parameter `scheduler` is an LR scheduler object from\n", "`torch.optim.lr_scheduler`." ] },
{ "cell_type": "code", "execution_count": null, "id": "81bea117", "metadata": {}, "outputs": [], "source": [ "def train_model(model, criterion, optimizer, scheduler, num_epochs=25):\n", "    since = time.time()\n", "\n", "    # Create a temporary directory to save training checkpoints\n", "    with TemporaryDirectory() as tempdir:\n", "        best_model_params_path = os.path.join(tempdir, 'best_model_params.pt')\n", "\n", "        torch.save(model.state_dict(), best_model_params_path)\n", "        best_acc = 0.0\n", "\n", "        for epoch in range(num_epochs):\n", "            print(f'Epoch {epoch}/{num_epochs - 1}')\n", "            print('-' * 10)\n", "\n", "            # Each epoch has a training and validation phase\n", "            for phase in ['train', 'val']:\n", "                if phase == 'train':\n", "                    model.train()  # Set model to training mode\n", "                else:\n", "                    model.eval()   # Set model to evaluate mode\n", "\n", "                running_loss = 0.0\n", "                running_corrects = 0\n", "\n", "                # Iterate over data.\n", "                for inputs, labels in dataloaders[phase]:\n", "                    inputs = inputs.to(device)\n", "                    labels = labels.to(device)\n", "\n", "                    # zero the parameter gradients\n", "                    optimizer.zero_grad()\n", "\n", "                    # forward\n", "                    # track history if only in train\n", "                    with torch.set_grad_enabled(phase == 'train'):\n", "                        outputs = model(inputs)\n", "                        _, preds = torch.max(outputs, 1)\n", "                        loss = criterion(outputs, labels)\n", "\n", "                        # backward + optimize only if in training phase\n", "                        if phase == 'train':\n", "                            loss.backward()\n", "                            optimizer.step()\n", "\n", "                    # statistics\n", "                    running_loss += loss.item() * inputs.size(0)\n", "                    running_corrects += torch.sum(preds == labels.data)\n", "                if phase == 'train':\n", "                    scheduler.step()\n", "\n", "                epoch_loss = running_loss / dataset_sizes[phase]\n", "                epoch_acc = running_corrects.double() / dataset_sizes[phase]\n", "\n", "                print(f'{phase} Loss: {epoch_loss:.4f} Acc: {epoch_acc:.4f}')\n", "\n", "                # deep copy the model\n", "                if phase == 'val' and epoch_acc > best_acc:\n", "                    best_acc = epoch_acc\n", "                    torch.save(model.state_dict(), best_model_params_path)\n", "\n", "            print()\n", "\n", "        time_elapsed = time.time() - since\n", "        print(f'Training complete in {time_elapsed // 60:.0f}m {time_elapsed % 60:.0f}s')\n", "        print(f'Best val Acc: {best_acc:4f}')\n", "\n", "        # load best model weights\n", "        model.load_state_dict(torch.load(best_model_params_path))\n", "    return model" ] },
"8de62153", "metadata": {}, "source": [ "#### Visualizing the model predictions\n", "\n", "Generic function to display predictions for a few images\n" ] }, { "cell_type": "code", "execution_count": null, "id": "33e6b298", "metadata": {}, "outputs": [], "source": [ "def visualize_model(model, num_images=6):\n", " was_training = model.training\n", " model.eval()\n", " images_so_far = 0\n", " fig = plt.figure()\n", "\n", " with torch.no_grad():\n", " for i, (inputs, labels) in enumerate(dataloaders['val']):\n", " inputs = inputs.to(device)\n", " labels = labels.to(device)\n", "\n", " outputs = model(inputs)\n", " _, preds = torch.max(outputs, 1)\n", "\n", " for j in range(inputs.size()[0]):\n", " images_so_far += 1\n", " ax = plt.subplot(num_images//2, 2, images_so_far)\n", " ax.axis('off')\n", " ax.set_title(f'predicted: {class_names[preds[j]]}')\n", " imshow(inputs.cpu().data[j])\n", "\n", " if images_so_far == num_images:\n", " model.train(mode=was_training)\n", " return\n", " model.train(mode=was_training)" ] }, { "cell_type": "markdown", "id": "2a52a3eb", "metadata": {}, "source": [ "#### Finetuning the CNN\n", "\n", "Load a pretrained model and reset final fully connected layer.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "8a5b0100", "metadata": {}, "outputs": [], "source": [ "model_ft = models.resnet18(weights='IMAGENET1K_V1')\n", "num_ftrs = model_ft.fc.in_features\n", "# Here the size of each output sample is set to 2.\n", "# Alternatively, it can be generalized to ``nn.Linear(num_ftrs, len(class_names))``.\n", "model_ft.fc = nn.Linear(num_ftrs, 2)\n", "\n", "model_ft = model_ft.to(device)\n", "\n", "criterion = nn.CrossEntropyLoss()\n", "\n", "# Observe that all parameters are being optimized\n", "optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)\n", "\n", "# Decay LR by a factor of 0.1 every 7 epochs\n", "exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)" ] }, { "cell_type": "markdown", "id": "d2289a09", "metadata": {}, "source": [ "#### Train and evaluate\n", "\n", "It should take around 15-25 min on CPU. On GPU though, it takes less than a\n", "minute.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "dab6dd1e", "metadata": {}, "outputs": [], "source": [ "model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=25)" ] }, { "cell_type": "code", "execution_count": null, "id": "809f83f3", "metadata": {}, "outputs": [], "source": [ "visualize_model(model_ft)" ] }, { "cell_type": "markdown", "id": "4bcb8b97", "metadata": {}, "source": [ "#### ConvNet as fixed feature extractor\n", "\n", "Here, we need to freeze all the network except the final layer. 
{ "cell_type": "code", "execution_count": null, "id": "1bf68e1a", "metadata": {}, "outputs": [], "source": [ "model_conv = torchvision.models.resnet18(weights='IMAGENET1K_V1')\n", "for param in model_conv.parameters():\n", "    param.requires_grad = False\n", "\n", "# Parameters of newly constructed modules have requires_grad=True by default\n", "num_ftrs = model_conv.fc.in_features\n", "model_conv.fc = nn.Linear(num_ftrs, 2)\n", "\n", "model_conv = model_conv.to(device)\n", "\n", "criterion = nn.CrossEntropyLoss()\n", "\n", "# Observe that only parameters of the final layer are being optimized, as\n", "# opposed to before.\n", "optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)\n", "\n", "# Decay LR by a factor of 0.1 every 7 epochs\n", "exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)" ] },
{ "cell_type": "markdown", "id": "059dc85b", "metadata": {}, "source": [ "#### Train and evaluate\n", "\n", "On CPU this will take about half the time compared to the previous scenario.\n", "This is expected as gradients don't need to be computed for most of the\n", "network. However, the forward pass does still need to be computed." ] },
{ "cell_type": "code", "execution_count": null, "id": "f8d9c8d2", "metadata": {}, "outputs": [], "source": [ "model_conv = train_model(model_conv, criterion, optimizer_conv,\n", "                         exp_lr_scheduler, num_epochs=25)" ] },
{ "cell_type": "code", "execution_count": null, "id": "8679d523", "metadata": {}, "outputs": [], "source": [ "visualize_model(model_conv)\n", "\n", "plt.ioff()\n", "plt.show()" ] },
{ "cell_type": "markdown", "id": "d1c8c484", "metadata": {}, "source": [ "#### Inference on custom images\n", "\n", "Use the trained model to make predictions on custom images and visualize\n", "the predicted class labels along with the images." ] },
{ "cell_type": "code", "execution_count": null, "id": "1cdb9cae", "metadata": {}, "outputs": [], "source": [ "def visualize_model_predictions(model, img_path):\n", "    was_training = model.training\n", "    model.eval()\n", "\n", "    img = Image.open(img_path)\n", "    img = data_transforms['val'](img)\n", "    img = img.unsqueeze(0)\n", "    img = img.to(device)\n", "\n", "    with torch.no_grad():\n", "        outputs = model(img)\n", "        _, preds = torch.max(outputs, 1)\n", "\n", "        ax = plt.subplot(2, 2, 1)\n", "        ax.axis('off')\n", "        ax.set_title(f'Predicted: {class_names[preds[0]]}')\n", "        imshow(img.cpu().data[0])\n", "\n", "        model.train(mode=was_training)" ] },
{ "cell_type": "code", "execution_count": null, "id": "10767162", "metadata": {}, "outputs": [], "source": [ "visualize_model_predictions(\n", "    model_conv,\n", "    img_path='data/hymenoptera_data/val/bees/72100438_73de9f17af.jpg'\n", ")\n", "\n", "plt.ioff()\n", "plt.show()" ] },
{ "cell_type": "markdown", "id": "7df10cb9", "metadata": {}, "source": [ "### Natural language processing" ] },
{ "cell_type": "markdown", "id": "a64e6f34-e677-4771-bf9f-fefc14bb0a91", "metadata": {}, "source": [ "Link to tutorial evaluation: http://scv.bu.edu/eval" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.6" } }, "nbformat": 4, "nbformat_minor": 5 }