{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", "# Inmas Machine Learning Workshop January 2023\n", "Instructor: Christian Kuemmerle - kuemmerle@uncc.edu
\n", "Teaching Assistants: Emily Shinkle, Yuxuan Li, Derek Kielty, Yashil Sukurdeep, Tim Wang, Ben Brindle." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Neural Network & Deep Learning\n", "This workbook is divided into three parts.
\n", "\n", "In the first part, you'll learn to build a basic type of neural network (NN) from scratch.
\n", "\n", "In the second part, you'll use PyTorch's functionality to build a more complicated NN.
\n", "\n", "In the third part, you'll be able to build a NN of your own design and test it out.\n", "\n", "All three sections will use the `FashionMNIST` dataset for training and testing." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import torch as th\n", "import torch.nn as nn\n", "import PIL\n", "PIL.PILLOW_VERSION = PIL.__version__" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from torch.utils.data import Dataset\n", "from torchvision import datasets\n", "from torchvision.transforms import ToTensor\n", "import matplotlib.pyplot as plt\n", "from torch.utils.data import DataLoader" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Tensors" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, let's make sure we understand PyTorch's main data structure, the **tensor**.
\n", "Torch tensors work very similarly to Numpy arrays. In fact, most of the same syntax has been implemented for tensors, so you can use a lot of the same commands." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Fill in some numbers to create a 3x2 tensor.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "first_tensor = th.tensor([])\n", "print(first_tensor)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Access the (1,2)th element of the tensor the same way you would for a numpy array.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "element = \n", "print(element)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**\"Flatten\" the tensor `first_tensor` above into 1xN shape. See [its documentation](https://pytorch.org/docs/stable/generated/torch.flatten.html) for some context.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "first_flat = \n", "print(first_flat)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The main difference is that Torch tracks every operation that is performed on a **tensor**. \n", "This is done so that we can easily compute derivatives later.
\n", "As such, using tensors is more computationally intensive, so we should use np arrays whenever we don't need gradients." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Cast the tensor `first_tensor` above into a numpy array.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "first_array = \n", "print(first_array)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we go the other way round: We create a `numpy.array` and then cast it as a `torch.Tensor`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# create any np array \n", "second_array = np.array([1,2,3,4])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Create the tensor `second_tensor` by casting `second_array` into a `torch.Tensor`.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "second_tensor = " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We print the results:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(first_array, second_tensor)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Data\n", "\n", "Here we load up the `FashionMNIST` dataset, which is included with PyTorch and which we used in previous sessions.\n", "Your computer will download it automatically online if it is not yet in your working folder." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "training_data = datasets.FashionMNIST(\n", " root=\"./\",\n", " train=True,\n", " download=True,\n", " transform=ToTensor()\n", ")\n", "\n", "test_data = datasets.FashionMNIST(\n", " root=\"./\",\n", " train=False,\n", " download=True,\n", " transform=ToTensor()\n", ")\n", "print(training_data)\n", "print(test_data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is a dictionary of the different types of images and their corresponding labels." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "labels_map = {\n", " 0: \"T-Shirt\",\n", " 1: \"Trouser\",\n", " 2: \"Pullover\",\n", " 3: \"Dress\",\n", " 4: \"Coat\",\n", " 5: \"Sandal\",\n", " 6: \"Shirt\",\n", " 7: \"Sneaker\",\n", " 8: \"Bag\",\n", " 9: \"Ankle Boot\",\n", "}\n", "print(labels_map)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We create [`DataLoader`](https://pytorch.org/tutorials/beginner/basics/data_tutorial.html) objects for each set.
\n", "Basically, these are sophisticated ways of parsing through the data in batches." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "train_dataloader = DataLoader(training_data, batch_size=50, shuffle=True)\n", "test_dataloader = DataLoader(test_data, batch_size=1, shuffle=True)\n", "vars(train_dataloader)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Basic NN From Scratch" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this section, we will build a neural network from scratch.
\n", "This means we'll be doing all of the multiplications and updates by hand.
\n", "We will still use PyTorch's autograd to compute derivatives!
\n", "\n", "We first create a **feed-forward, fully connected NN with one hidden layer**." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In Pytorch, neural networks are instances of a class we will create.
\n", "If you are unfamiliar with classes - we basically create a type of object (a class), and imbue that type with methods (functions that it knows).
\n", "Subsequently, we create an instance of this class to \"create\" an actual NN." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class first_NN(nn.Module):\n", " # This method creates the needed parameters for the NN\n", " def __init__(self, input_size, hidden_size, output_size):\n", " super(first_NN, self).__init__()\n", " \n", " # Set up parameters based on inputs of init function\n", " self.input_size = input_size\n", " self.hidden_size = hidden_size\n", " self.output_size = output_size\n", " \n", " # Set up the weights.\n", " # Since our network is fully connected, each layer needs one matrix of weights.\n", " # They can be initialized to random values.\n", " self.W1 = th.randn(self.input_size, self.hidden_size, requires_grad=True)\n", " self.W2 = th.randn(self.hidden_size, self.output_size, requires_grad=True)\n", "\n", " # Here we create the sigmoid activation function.\n", " # Look up the sigmoid function and fill in the missing value here.\n", " # You will need th.exp(), the exponential function for tensors.\n", " def sigmoid(self, x):\n", " return 1 / (1 + th.exp(-x))\n", " \n", " # We will need the softmax function. Look it up and fill it in here.\n", " def softmax(self,x):\n", " return th.exp(x)/sum(th.exp(x))\n", " \n", " # We will also need Cross-Entropy Loss.\n", " def loss(self,probs,label):\n", " return -th.log(probs[label]+(1e-15)) # Note that we add a small number to the probs to avoid calculating log(0)\n", " \n", " # This function defines a \"forward pass\" of the network; we take input X, send it through the net, and output class probabilities.\n", " # In between each layer of the network, we include the sigmoid activation function.\n", " # At the final layer, the softmax function turns the outputs into a vector of probabilities - one per class.\n", " def forward(self, x):\n", " x_flat = th.flatten(x) # We need to turn our images into 1D vectors\n", " layer1 = th.matmul(x_flat,self.W1)\n", " activ1 = self.sigmoid(layer1)\n", " layer2 = th.matmul(activ1,self.W2)\n", " soft = self.softmax(layer2)\n", " return soft\n", " \n", " # This function takes an input image x and a correct class label y.\n", " # It performs a forward pass, then compares the results to the true label.\n", " # It then computes the value of a loss function, takes the derivative with respect to the loss function, and uses the derivative to update the parameters (W1,W2,W3).\n", " # We do this with multiple samples before performing an update for stability.\n", " # This is called Stochastic Gradient Descent.\n", " def train(self, x_list, y_list, gamma, display=0):\n", " batch_size = x_list.shape[0]\n", " update_W1 = th.zeros(self.input_size,self.hidden_size,batch_size)\n", " update_W2 = th.zeros(self.hidden_size,self.output_size,batch_size)\n", " accumulated_loss = 0\n", " \n", " for i in range(batch_size):\n", " input_data = x_list[i,:,:]\n", " label = y_list[i]\n", " \n", " # Perform a forward pass here using the function we created above\n", " output_probs = self.forward(input_data)\n", " \n", " # Compute the loss function\n", " loss_value = self.loss(output_probs, label)\n", " accumulated_loss += loss_value\n", " \n", " # Perform backpropogation using autograd. This is the power of PyTorch!\n", " # All PyTorch tensors initialized with parameter `requires_grad=True` store a gradient value which can be accessed\n", " # using the `.grad` method. 
When we call `loss_value.backward()` in the line below, the gradients for all tensors \n", " # involved in the calculation of `loss_value` are calculated and stored.\n", " loss_value.backward()\n", " nn.utils.clip_grad_norm_([self.W1, self.W2], 10000) # This rescales the gradients if their norm exceeds 10,000, to prevent taking too large of a step.\n", " update_W1[:,:,i] = self.W1.grad.clone().detach() #Extract the calculated gradients and store for later.\n", " update_W2[:,:,i] = self.W2.grad.clone().detach()\n", " \n", " # Reset the gradients in preparation for the next loop. This is an essential step; future gradients will not be correct otherwise.\n", " self.W1.grad.data.zero_()\n", " self.W2.grad.data.zero_()\n", "\n", " # Sum up the updates from our batch, scale them, and add them to the paramter matrices.\n", " sum_W1 = th.sum(update_W1,2)\n", " sum_W2 = th.sum(update_W2,2)\n", " \n", " with th.no_grad(): # This line prevents this operation from being considered when calculating future gradients.\n", " self.W1 += -gamma*sum_W1\n", " self.W2 += -gamma*sum_W2\n", " \n", " if display:\n", " print(output_probs)\n", " return accumulated_loss.item()\n", " \n", " # This function will be what we use to actually classify an image.\n", " # Set up the function to perform a forward pass, then choose the most likely class as the classification.\n", " def classify(self,x,display=0):\n", " with th.no_grad():\n", " probs = self.forward(x)\n", " most_likely_class = th.argmax(probs).item()\n", " if display:\n", " print(\"This image is a \" + str(labels_map[most_likely_class]) + \".\")\n", " return most_likely_class" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We recall the batch size from above - it was defined in the data loader section." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(train_dataloader.batch_size)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we create the actual neural network.
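\n", "In the notation of the class above, a forward pass computes\n", "\n", "$$\\hat{p} = \\mathrm{softmax}\\left(\\sigma(x W_1) W_2\\right),$$\n", "\n", "where $x$ is the flattened image (a row vector with `input_size` entries), $W_1$ is an `input_size` $\\times$ `hidden_size` matrix, $W_2$ is a `hidden_size` $\\times$ `output_size` matrix, $\\sigma$ is the sigmoid applied entrywise, and $\\hat{p}$ is the resulting vector of class probabilities.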
\n", "We note that the input size is the number of pixels in the image.
\n", "The hidden size is the dimension of our hidden layers and is chosen arbitrarily.
\n", "The output size is the number of image classes." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "net = first_NN(input_size=784, hidden_size=20, output_size=10)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "num_batches = 12000 # the number of batches provides a maximum to the the number of optimization \"steps\" to be taken\n", "tol = 20 # We define a tolerance for the training process. If the value of the evaluated loss is smaller than this number, we stop training\n", "\n", "returned_losses = [] # list of returned losses (initialize as empty list)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After having defined the neural network architecture we want to use and having specified its main parameters, we can *train* the network in the following for loop. The losses per iteration are collected in `returned_losses`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for i in range(num_batches):\n", "\n", " # This line grabs the next bit of data from our set\n", " x_batch, y_batch = next(iter(train_dataloader))\n", " x_batch = x_batch[:,0,:,:]\n", " \n", " # Run a training instance on the NN and store the returned loss. Print value every 100 iterations.\n", " if i % 100 == 0:\n", " returned_losses.append(net.train(x_batch, y_batch, gamma=.01, display=1))\n", " print(\"The accumulated loss value at iteration \" +str(i) + \" is \" + str(returned_losses[i]) + \".\")\n", " else:\n", " returned_losses.append(net.train(x_batch, y_batch, gamma=.01))\n", " \n", " # If we ever do really well in terms of loss, we quit training!\n", " # Note that we set a pretty high tolerance here since we don't want to spend the entire workshop training.\n", " if returned_losses[i] < tol:\n", " break" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Use matplotlib or other plotting software to plot your accumulated loss function versus time.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "### Add your code here ###\n", "plt.figure()\n", "plt.plot(returned_losses)\n", "plt.xlabel(\"Iterations\")\n", "plt.ylabel(\"Value of Loss Function\")\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we test the NN using the test set." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "size_testset = 10000\n", "errors = 0\n", "\n", "for i in range(size_testset):\n", " x_test, y_test = next(iter(test_dataloader))\n", " \n", " # Call the prediction function\n", " if (i%1000) == 0:\n", " prediction = net.classify(x_test, display=1)\n", " else:\n", " prediction = net.classify(x_test)\n", " \n", " # Count how many misclassifications we make\n", " if prediction != y_test:\n", " errors += 1\n", " \n", "print(\"The misclassification rate is \" + str(errors/size_testset) + \".\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise:\n", "* **Create a new neural network class `first_NN_mod`. Compared to `first_NN`, add another hidden layer of shape `hidden_size` x `hidden_size`. Add it after the first sigmoid function. Add an additional sigmoid function immediately following the new hidden layer. This will involve updating the code in a number of places. Consider making a copy of the cells above to edit.**
\n", "Start by copying the entire code of the class `first_NN` above, and then proceeding with the modifications by adding code at certain locations wherever necessary." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class first_NN_mod(nn.Module):\n", "### Add your code here ###\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* **Train `first_NN_mod` and evaluate the performance of it after the training process with respect to the test set. Visualize the `returned_losses`.
Compare the resulting misclassification rate compared to the one of `first_NN`.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "### Add your code here ###\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "### Add your code here ###\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "### Add your code here ###\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# A more intricate Neural Network Architecture" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this section, we use the full functionality of PyTorch to build a neural network with more intricate architechtural elements.\n", "\n", "**Make sure to identify the parts of the next cell defining `second_NN` that need additional code to be written.**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class second_NN(nn.Module): \n", " def __init__(self):\n", " super(second_NN, self).__init__()\n", " \n", " # We set up each layer using PyTorch language. \n", " # We start with a convolutional layer, then a normalization, then a ReLU activation function, and finally some pooling.\n", " self.layer1 = nn.Sequential(\n", " nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3, padding=1), # Kernel size is the size of the convolution filter. out_channels is the number of outputs produced by the layer.\n", " nn.BatchNorm2d(32), # A normalization to keep our values in check. Good practice.\n", " nn.ReLU(), # An activation function, probably the most popular one.\n", " nn.MaxPool2d(kernel_size=2, stride=2) # A pooling layer.\n", " )\n", " \n", " # Do it all again!\n", " self.layer2 = nn.Sequential(\n", " nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3),\n", " # Finish the layer! Keep in mind that the number of out_channels is different here, so the shape of the input\n", " # to the next layer will also be different.\n", " )\n", " \n", " # End with a few linear (fully connected) layers, plus a dropout layer.\n", " self.fc1 = nn.Linear(in_features=64*6*6, out_features=600) # This is the same thing as our fully connected layers above, ie. multiplication by a matrix.\n", " self.drop = nn.Dropout2d(0.25) # We remove some features that do not contain useful information.\n", " self.fc2 = nn.Linear(in_features=600, out_features=120)\n", " self.fc3 = nn.Linear(in_features=120, out_features=10)\n", " \n", " # We've already defined what every layer will be; now we just put them in order.\n", " def forward(self, x):\n", " x = self.layer1(x)\n", " x = self.layer2(x)\n", " x = x.view(x.size(0), -1) # Here we reshape the tensor so it can go into the linear layer.\n", " x = self.fc1(x)\n", " x = self.drop(x)\n", " x = self.fc2(x)\n", " out = self.fc3(x)\n", " return out\n", " \n", " def classify(self,x,display=0):\n", " with th.no_grad():\n", " # Create the softmax function using built-in classes.\n", " f = nn.Softmax(dim=1)\n", " probs = f(self.forward(x)) # Call softmax on the output of the forward function.\n", " most_likely_class = th.argmax(probs).item() # Choose the most likely class.\n", " if display:\n", " print(\"This image is a \" + str(labels_map[most_likely_class]) + \".\")\n", " return most_likely_class" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A NOTE: How does one know what types of layers to use for a particular application?
Often, researchers begin by \n", "looking for similar problems to determine a starting point. Then one can experiement by adding or removing layers,\n", "resizing layers, and adjusting other layer parameters. Networks can be compared by looking at their performances\n", "on the test data." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# This package allows us to easily add progress bars. Look for the use of `trange` below.\n", "try: \n", " from tqdm.notebook import trange\n", "except ModuleNotFoundError:\n", " !conda install --yes tqdm\n", " from tqdm.notebook import trange" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Here we create an instance of the net\n", "net = second_NN()\n", "\n", "# Here we select a loss function that's built into PyTorch\n", "loss = nn.CrossEntropyLoss()\n", "\n", "# Here we select an optimizer. This replaces all of the manual backprop we did before.\n", "# This is usually done externally to the net, unlike earlier.\n", "# There are several choices, but Adam is the most popular.\n", "learning_rate = 0.001\n", "optimizer = th.optim.Adam(net.parameters(), lr=learning_rate)\n", "\n", "\n", "num_batches = 200 # A small number, for speed\n", "returned_losses = []\n", "\n", "for i in trange(num_batches): # Notice the replacement of `range` with `trange`\n", " # As before\n", " x_batch, y_batch = next(iter(train_dataloader))\n", " print(y_batch.shape)\n", "\n", " # Perform a forward pass\n", " outputs = net.forward(x_batch)\n", " print(outputs.shape)\n", " \n", " # Make sure we zero out the gradient each time - we don't want leftover gradients from the previous iteration to be added in!\n", " optimizer.zero_grad()\n", "\n", " # Perform backpropogation\n", " losses = loss(outputs, y_batch)\n", " losses.backward()\n", " returned_losses.append(losses.item())\n", " if (i % 10) == 0:\n", " print(\"The accumulated loss value at iteration \" + str(i) + \" is \" + str(losses.item()))\n", "\n", " # Adam automatically updates all the parameters!\n", " optimizer.step()\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Test the NN out on the test set!\n", "# No guidance here - you can program this yourself!\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercises:\n", "* (Optional) Adjust the parameters of the convolutional steps in each layer (`nn.Conv2d`). Make note of how this impacts the network performance and training time.\n", "* (Optional) Try using a different optimizer." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Build Your Own Neural Network" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this section, you can design your own neural work to test on the same dataset.\n", "\n", "**Set up the net as before, train it, and test it out!**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create your net!\n", "\n", "class my_NN(nn.Module):\n", " # This method creates the needed parameters for the NN\n", " def __init__(self, input_size, hidden_size, output_size):\n", " super(my_NN, self).__init__()\n", " # Set up the basic parameters of the network\n", "\n", " \n", " \n", " # Be sure to define any math functions you may need here.\n", " \n", " \n", "\n", " # Create a forward pass function. Most of the creativity happens here! 
\n", " # You can use any combination of linear, convolution, pooling layers etc., and any activation functions.\n", " # You may need to look at the PyTorch documentation to learn how to set the dimensions of each layer etc.\n", " # It is probably good to end with a softmax function, unless you have another way to convert the output into a probability vector!\n", " def forward(self, x):\n", " x = th.flatten(x, start_dim=-3) #`start_dim=-3` means we only flatten the last three layers, to keep separate images distinct \n", "\n", " \n", " return x\n", " \n", " # Create a function that performs the needed classification.\n", " def classify(self,x,display=0):\n", " \n", " \n", " return \n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Train your network!\n", "\n", "batch = 50\n", "num_batches = \n", "returned_losses = []\n", "tol = \n", "\n", "# Actually create the neural network.\n", "# The hidden size is the dimension of our hidden layers.\n", "# Let's see what happens if we increase the size of the hidden layer\n", "net3 = my_NN(input_size=784, hidden_size= , output_size = 10)\n", "\n", "\n", "\n", "\n", "for i in trange(num_batches):\n", " x_batch, y_batch = next(iter(train_dataloader))\n", " \n", "\n", " \n", "\n", " # Perform backpropogation\n", "\n", " \n", " \n", " # Quit if better than your chosen tolerance\n", " if returned_losses[i] < tol:\n", " break\n", " " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Use matplotlib or other plotting software to plot your accumulated loss function versus time\n", "\n", "\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Test your network!\n", "\n", "\n", "\n", "\n", "\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercises:\n", "* (Optional) Alter your training loop so that, at each step, the loss is also calculated for the test set data. Record this data in an additional list named `returned_test_losses`. Make a plot showing the evolution of both `returned_loss` and `returned_test_loss` over time. What do you observe?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" } }, "nbformat": 4, "nbformat_minor": 4 }