{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "LZULE2do86zk" }, "source": [ "
\n", "\n", "# Inmas Machine Learning Workshop January 2023\n", "Instructor: Christian Kuemmerle - kuemmerle@uncc.edu
\n", "Teaching Assistants: Emily Shinkle, Yuxuan Li, Derek Kielty, Yashil Sukurdeep, Tim Wang, Ben Brindle.\n", "\n", "\n", "## Session II - Classification of Fashion MNIST data" ] }, { "cell_type": "markdown", "metadata": { "id": "LZULE2do86zk" }, "source": [ "In this workshop we will explore classification on the [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset containing 10 different types of clothing.\n", "\n", "Let's begin by importing our common libraries as well as the datasets. Notice that we set the numpy seed to 0 for reproducibility. It's a good idea to set random seeds to specific values while you are experimenting." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "cXzfnLB38tMG" }, "outputs": [], "source": [ "import numpy as np; np.random.seed(0)\n", "import pandas as pd\n", "import matplotlib.pyplot as plt\n", "\n", "import torchvision\n", "import torch\n", "import torchvision.transforms as transforms\n", "train_set = torchvision.datasets.FashionMNIST(root=\"./\", download=True, \n", " train=True,\n", " transform=transforms.Compose([transforms.ToTensor()]))\n", "\n", "test_set = torchvision.datasets.FashionMNIST(root=\"./\", download=True, \n", " train=False,\n", " transform=transforms.Compose([transforms.ToTensor()]))" ] }, { "cell_type": "markdown", "metadata": { "id": "tEpRGdFz9HxZ" }, "source": [ "We used [torchvision](https://pytorch.org/vision/stable/index.html), a part of the [PyTorch](https://pytorch.org) framework (we will come back to that later) to load the dataset in a way that seperates it already into training and test set. I highly recommend only looking at training data. In real life this is the data that you actually have, and in many cases the test data doesn't even exist yet. \n", "\n", "A huge problem in machine learning is leaking testing data into training data. You'll find that models are REALLY good at predicting data that they've already seen. In our case this leakage into the model is unlikely to happen since we were given preseparated data (though how well do you trust the person who gave you it?), but I assure you it definitely happens when you are separating data yourself. Even if you don't leak data directly into your models you can still have indirect leakage because any knowledge you have about the test data from taking a peek could influence how you design your models.\n", "\n", "We HIGHLY recommend you scrutinize your data separation over and over again for any nontrivial data leaks in any project you undertake." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "Qtg2WL_29N8x", "outputId": "899e6243-0cd2-4cb7-a9d2-018fc482dfc6" }, "outputs": [], "source": [ "#select training data tensor and convert the type to a numpy array\n", "X_train = train_set.data.numpy()\n", "Y_train = train_set.targets.numpy()\n", "\n", "#do the same with the test data\n", "X_test = \n", "Y_test = " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the above cell, we converted the data already into our more familiar format of multidimensional [numpy](https://numpy.org/doc/stable/) arrays (numpy.ndarray).\n", "Let's get a sense for the size and shape of our data; how many images are in the training set and what are their dimensions?
With the following commands, we obtain an overview of the size of the data:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(X_train.shape)\n", "print(Y_train.shape)\n", "print(X_test.shape)\n", "print(Y_test.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let us have a glimpse at some of the target variables. We observe that they consist of 10 different numbers, corresponding to 10 different class labels." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Y_train[0:50]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The class labels correspond to the following object classes, see [here](https://github.com/zalandoresearch/fashion-mnist#labels)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "UYHf1S-O9Rc8" }, "outputs": [], "source": [ "label_dict= {\n", "0 : \"tshirt\",\n", "1 : \"pants\",\n", "2 : \"sweater\",\n", "3 : \"dress\",\n", "4 : \"long sleeve\",\n", "5 : \"sandal\",\n", "6 : \"jacket\",\n", "7 : \"sneaker\",\n", "8 : \"bag\",\n", "9 : \"shoe\"}\n", "data = list(label_dict.items())\n", "label_array = np.array(data)[:,1]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following visualizes some data samples. We first define a function since we might re-use this code snippet further below." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 358 }, "id": "6TZWud1M9XGu", "outputId": "44aafdc4-9138-42d0-9896-3ecbf79d9f0b" }, "outputs": [], "source": [ "def visualize_images(X,Y,label_dict,n_row=3,n_col=5,fsize=(12,10)):\n", "    fig, axs = plt.subplots(n_row, n_col, figsize=fsize)\n", "\n", "    for i, ax in zip(label_dict, axs.ravel()):\n", "\n", "        # set the class label as the subplot title and remove axis ticks\n", "        # since these are images\n", "        ax.set_title(\"{}: {}\".format(i, label_dict[i]))\n", "        ax.set_xticks([])\n", "        ax.set_yticks([])\n", "\n", "        # try to parse this. for each i we find a\n", "        # random image in X which has label i\n", "        if X.ndim == 2:\n", "            ax.imshow(X[np.random.choice(np.argwhere(Y == i).flatten())] \\\n", "                .reshape(np.sqrt(X.shape[1]).astype(int),np.sqrt(X.shape[1]).astype(int)), \n", "                cmap='gray' )\n", "        else:\n", "            ax.imshow(X[np.random.choice(np.argwhere(Y == i).flatten())], \n", "                cmap='gray' )\n", "    # hide the remaining axes: we created n_row*n_col subplots\n", "    # but only len(label_dict) of them show images\n", "    for ax in axs.ravel()[len(label_dict):]:\n", "        ax.set_visible(False)\n", "\n", "    plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Y_train.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "visualize_images(X_train,Y_train,label_dict)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now have a look at the distribution of the different class labels in the training set by counting and visualizing how many occurrences each label has." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 265 }, "id": "340fwaVT9eVN", "outputId": "883fcbec-8c61-4147-935c-4399db4a080b" }, "outputs": [], "source": [ "labels_train, counts_train = np.unique(Y_train, return_counts=True)\n", "plt.bar(labels_train, counts_train)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We recall the size and shape of the images." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(X_train.shape[1:])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Because our data are images they are stored in 3D arrays, there are many ways to approach machine learning problems on images, and we will use a basic one here. We will use the classification methods that we learned, which do not take the spatial 2D structure of the image into account, so we need to reshape the data to transform the (28 x 28) pixels into 784 predictor variables. Thus, we will think of each image as a string of pixel values with no spatial relation, allowing us to flatten it into a vector of length width*height. \n", "\n", "We can flatten our entire 2D array into a 3D array using the reshape method." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "0C6kOXgV9gyM" }, "outputs": [], "source": [ "im_dims = X_train.shape[1:]\n", "\n", "X_train_flat = # use reshape to flatten 3D array to 2D array (training data)\n", "X_test_flat = # use reshape to flatten 3D array to 2D array (t data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In order to find the \"best\" classification model for our purpose, we will split the training set further into a (smaller) training set and a validation set. The value of \"test_size\" here corresponds to the fraction of samples that will go into the validation set." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "FE1kYz_Q9ja2" }, "outputs": [], "source": [ "from sklearn.model_selection import train_test_split, KFold, cross_validate, GridSearchCV\n", "\n", "x_train, x_val, y_train, y_val = train_test_split(X_train_flat, Y_train, test_size=0.1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our training data has a certain distribution of labels (if it was sampled well) which models the natural distribution of whatever data we expect to find in the wild. This doesn't neccesarily mean that this distribution is uniform, but we should keep this in mind whenever we sample our data as it plays a big role in how our models generalize.\n", "\n", "In general, models tend to work better on more abundant labels. This may or may not be desirable, and one should look into ways to deal with class imbalance for their specific case.\n", "\n", "Let's say we are using the MNIST digit dataset. The train_test_split() function provided us with a random sampling of evaluation data and training data, but suppose we were really unlucky and our training data consisted of only 2s and 3s, while our evaluation data was all 8s and 9s. In this case our models would be pretty good at recognizing 2s and 3s, but wouldn't score very well on the evaluation data.\n", "\n", "Of course this is unlikely to happen for us, but in many cases of great class imbalance train_test_split may miss or underrepresent classes. Let's check to see if our train labels y_train and our evaluation labels y_val have the same distribution, by visualizing the label counts for the two subsets of samples." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 550 }, "id": "rlz7YmGZ9nrs", "outputId": "74a6987b-2156-4a34-d0e1-5ab86d5ef34f" }, "outputs": [], "source": [ "# bar graph labels distribution for train\n", "labels_train, counts_train = np.unique(y_train, return_counts=True)\n", "plt.bar(labels_train, counts_train)\n", "plt.show()\n", "\n", "# bar graph labels distribution for val\n", "labels_val, counts_val = np.unique(y_val, return_counts=True)\n", "plt.bar(labels_val, counts_val)\n", "plt.show()\n", "\n", "# ratio of class labels in train and in val\n", "(counts_val/counts_val.sum()) / (counts_train/counts_train.sum())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We observe that the distribution of samples in the different label classes is not balanced anymore (even though it is not too imbalanced). In order to avoid any imbalance, one case use the optional parameter `stratify` and set it to equate the label vector, see also the documentation of [sklearn.model_selection.train_test_split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html?highlight=train_test_split#sklearn.model_selection.train_test_split). " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 531 }, "id": "LfYjzyWo9pMx", "outputId": "57cb806d-9495-423e-e018-676c7051bace" }, "outputs": [], "source": [ "x_train, x_val, y_train, y_val = train_test_split(X_train, Y_train, test_size=0.3, stratify=Y_train)\n", "\n", "# distribution of stratified y_train\n", "labels_train, counts_train = np.unique(y_train, return_counts=True)\n", "plt.bar(labels_train, counts_train)\n", "plt.show()\n", "\n", "# distribution of stratified y_val\n", "labels_val, counts_val = np.unique(y_val, return_counts=True)\n", "plt.bar(labels_val, counts_val)\n", "plt.show()\n", "\n", "# ratio of class labels in train/val very close to 1\n", "(counts_val/counts_val.sum()) / (counts_train/counts_train.sum())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ideally, we should work with the image data using the full available resolution.\n", "However, it might be computationally demanding to work with the full 28x28 pixel images for small exercise today. For that reason, we \"subsample\" the images by deleting the pixels in every other row and column." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_train_orig = X_train_flat\n", "Y_train_orig = Y_train\n", "X_test_orig = X_test_flat\n", "Y_test_orig = Y_test" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "v9Fm-VBr9sW8" }, "outputs": [], "source": [ "# delete every other row and column of images\n", "X_train_tmp = X_train_orig[:, np.vstack([2*i*28 + np.arange(0,28) for i in range(0, 14)]).ravel()]\n", "X_train1414_ = X_train_tmp[:, [2*i for i in range(X_train_tmp.shape[-1]//2)]]\n", "\n", "X_test_tmp = X_test_orig[:, np.vstack([2*i*28 + np.arange(0,28) for i in range(0, 14)]).ravel()]\n", "X_test1414_ = X_test_tmp[:, [2*i for i in range(X_test_tmp.shape[-1]//2)]]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Ideally, we would work with all 60000 samples in the training set, for example. 
For computational reasons, however, we also restrict ourselves to a subset of 12000 random samples of these (and similarly, for the test set).\n", "\n", "This kind of downsampling may be useful while you experiment, but of course you should make sure your results generalize to your real dataset if you are working on a \"real\" problem." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "v9Fm-VBr9sW8" }, "outputs": [], "source": [ "# throw away 80% of our data\n", "X_train1414, _, Y_train_subsampled, _ = train_test_split(X_train1414_, Y_train, train_size=0.2, stratify=Y_train, random_state=0)\n", "X_test1414, _, Y_test_subsampled, _ = train_test_split(X_test1414_, Y_test, train_size=0.2, stratify=Y_test, random_state=0)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Y_train_subsampled.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_train1414.shape" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "visualize_images(X_train1414,Y_train_subsampled,label_dict)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Classification of Fashion MNIST: Using Logistic Regression\n", "Now, we have obtained an intuition about the dataset and processed it so that we can feed it into some of the classification methods we learned about. \n", "\n", "We start with logistic regression." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "y3M9KfRc91Wt" }, "outputs": [], "source": [ "from sklearn.linear_model import LogisticRegression" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We split the processed dataset into a training set and a validation set. The split is, as above, done in a stratified manner, which means that the class label frequencies remain balanced in both sets." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "zG2obScw94SS" }, "outputs": [], "source": [ "x_train, x_val, y_train, y_val = train_test_split(X_train1414, Y_train_subsampled, test_size=0.3, stratify=Y_train_subsampled)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# print the dimensions of x_train to find the size of the training set and the number of predictor variables\n", "print(x_train.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Discuss the following questions with your group members:**\n", "1. How many parameters are there in our logistic regression model?\n", "2. Is the size of the training set appropriate compared to the number of parameters to fit a predictive model? If you are not sure, consider the more concrete situation of doing a least squares regression to fit a single-variable polynomial to a training set of 2-dimensional points. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Due to the defaults in sklearn.linear_model.LogisticRegression, we need to pass in the parameters `penalty='none'` and `multi_class='multinomial'` to get the version described in class." 
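] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Side note: in more recent versions of scikit-learn, passing the string 'none' for the penalty is deprecated in favor of the Python value None. If the cell below raises a warning or an error about the penalty argument, the following variant (a sketch that is otherwise identical) may work instead:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# sketch: alternative call for newer scikit-learn versions where penalty='none' is deprecated\n", "# lr = LogisticRegression(penalty=None, multi_class='multinomial', max_iter=1000)" 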
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "0pXIQkml95Wg" }, "outputs": [], "source": [ "lr = LogisticRegression(penalty='none',multi_class='multinomial',max_iter=1000)\n", "lr.fit(x_train, y_train)\n", "y_train_lr_pred = lr.predict(x_train)\n", "y_val_lr_pred = lr.predict(x_val)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you see a warning here, this is an encounter with the realities and limitations of optimization in the context even of simple statistical models such as logistic regression. You can increase the maximal number of iterations and see if it significantly changes something in the resulting accuracies.\n", "\n", "There are many metrics one can use on a classification problem, indeed in the real world you'll often find that choosing the correct metric is THE problem itself. For now we will stick to accuracy, which corresponds to the quotient between correctly labeled samples and all samples.\n", "\n", "Why might you want a different metric? Suppose you have a binary classification problem with 90% of your labels 0 and 10% 1. A model which always predicts 0 will have a great score of 90%! In this case you might chose a different metric which takes this imbalance into account, for example the f1 score. Worse yet, maybe a 0 is \"all normal\" and 1 is \"catastrophic failure\", then you better be sure you get those 1 predictions correct..." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "-5KTnh2d98Na", "outputId": "aff12afe-1cf1-46db-e951-804863fbdc7e" }, "outputs": [], "source": [ "from sklearn.metrics import accuracy_score\n", "acc_train_lr = accuracy_score(y_train, y_train_lr_pred)\n", "acc_val_lr = accuracy_score(y_val, y_val_lr_pred)\n", "\n", "print(\"Logistic Regression: Training Accuracy: {:.4f}, Validation Accuracy: {:.4f}\".format(acc_train_lr, acc_val_lr))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You should find that the training accuracy is higher than the accuracy on the validation set.\n", "\n", "The discrepency between these two scores is important to keep in mind. A much higher training score could imply overfitting of your model to the training data, suggesting a form of regularization. A much higher validation accuracy score might make you rethink if your validation and training dataset were sampled well." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The Confusion Matrix\n", "\n", "If fitting models was the easy part of data science, one of the hardest parts is figuring out why models are giving you certain results. If your model didn't give you 100% accuracy then it is making mistakes. Are these mistakes random? Is there a pattern to them? Does the model make the same mistake every time?\n", "\n", "For example, in the [MNIST digit dataset](https://en.wikipedia.org/wiki/MNIST_database) you might find that logistic regression mixes up 3s and 8s a lot, or 1s and 7s. This is pretty reasonable since handwritten versions of these digits look similar. Is there anything you can do about this? Should you be worried?\n", "\n", "We can use the so-called _confusion matrix_ (see [sklearn.metrics.confusion_matrix](https://scikit-learn.org/stable/modules/model_evaluation.html#confusion-matrix)) to see exactly how our predictions are going wrong." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 363 }, "id": "phCiJIRj9-pD", "outputId": "c23de375-c064-48f6-96b3-78c7c5dd1517" }, "outputs": [], "source": [ "from sklearn.metrics import confusion_matrix, make_scorer, ConfusionMatrixDisplay\n", "#confusion_frame = pd.DataFrame(confusion_matrix(y_val, y_val_lr_pred), index=label_dict.values(), columns=label_dict.values())\n", "confusionmatrix = confusion_matrix(y_val, y_val_lr_pred)\n", "disp = ConfusionMatrixDisplay(confusionmatrix,display_labels=label_array)\n", "fig, ax = plt.subplots(figsize=(10,10))\n", "disp.plot(ax=ax)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise Part: K-Nearest Neighbor Classifier & Model Selection\n", "\n", "Now, it is your turn!\n", "Let's run a different classification algorithm, the K Nearest Neighbors classifier. Feel free to choose how many neighbors you want to consider." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### (a) K-Nearest Neighbor for fixed k." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "QFIHukhK-AsJ", "outputId": "dc93d482-bd73-4f4f-c412-c7ccc3b97014" }, "outputs": [], "source": [ "from sklearn.neighbors import KNeighborsClassifier\n", "# integer parameter for how many neighbors to use\n", "n_neighbors = \n", "\n", "# instantiate KNeighborsClassifier with n_neighbors\n", "# and fit to data\n", "knn = \n", "\n", "# use the model.predict() method to make predictions on train and val data\n", "y_train_pred = \n", "y_val_pred = \n", "\n", "# compute accuracies of train and val using accuracy_score)\n", "acc_train = \n", "acc_val = \n", "\n", "# print results in nice format\n", "print(\"KNN {}: Train Accuracy: {:.4f}, Test Accuracy: {:.4f}\".format(n_neighbors, acc_train, acc_val))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Discuss the following questions with your group members:\n", "1. Calculate the Train and Test accuracy for n_neighbors = 1, 5, 1000, 2000. What do you notice about the accuracies in these cases?\n", "2. The KNN algorithm has approximately N/K effective parameters, where N is the size of the training set and K is the number of nearest neighbors (i.e. the n_neighbors variable). Given this effective number of parameters, do your results from question 1 make sense? " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Depending on which dataset you used you might find that k-Nearest Neighbors does about the same as Logistic Regression or much better.\n", "\n", "When one algorithm does significantly better than another it is worth thinking about why that algorithm should be better on your data. For example perhaps you are using image data and one algorithm takes into account the spatial properties of the image (this doesn't apply to us in the current state because we flattened our images).\n", "\n", "In the case that two models do about the same it becomes more complicated to decide which is better. There is some noise in our evaluation accuracy depending on which portion of the data was randomly assigned to x_val. \n", "\n", "The most common way to deal with this is through [cross validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics)), and the most popular form of cross validation is [K-fold cross validation](https://scikit-learn.org/stable/modules/cross_validation.html). 
In K-fold cross validation, the data is separated into k chunks. For each of these chunks, a model is trained on the other k-1 chunks and evaluated on the held-out chunk. Because each data point is assigned to only one chunk, it has a unique evaluation score, and these scores are used to determine the effectiveness of the model.\n", "\n", "Scikit-learn offers both normal and stratified versions of [K-fold cross validation (documentation)](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html?highlight=kfold#sklearn.model_selection.KFold). It is really easy to write your own version of cross validation, and in many cases your data will require a non-trivial form of validation that forces you to do this." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### (b) Cross Validation of logistic regression\n", "\n", "Let's use 5-fold cross validation on our logistic regression model via scikit-learn's K-fold object." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# we use this to suppress any warning that might occur during the training of the logistic regression models\n", "import warnings\n", "warnings.filterwarnings(\"ignore\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "EoxiMf8o-Iw9", "outputId": "3159fc69-bf35-4adb-a18f-02914ffd3bac" }, "outputs": [], "source": [ "# create a KFold cross validation object with, say, 5 folds\n", "cv =\n", "\n", "# note: when we use cross validation we\n", "# pass in the whole X_train1414 instead of using the manually made split\n", "# into x_train, x_val from above!\n", "\n", "# complete the loop below to train a logistic regression classifier\n", "# and make predictions on each held out fold\n", "for train_index, val_index in cv.split(X_train1414, Y_train_subsampled):\n", "    x_train, x_val = X_train1414[train_index], X_train1414[val_index]\n", "    y_train, y_val = Y_train_subsampled[train_index], Y_train_subsampled[val_index]\n", "\n", "    # instantiate and fit a LogisticRegression object\n", "    # with multi_class='multinomial' and penalty='none'\n", "    # fit it on the training folds x_train\n", "    lr = \n", "\n", "    # make predictions on the in-fold training data x_train\n", "    # and the out-of-fold validation data x_val\n", "    y_train_lr_pred = \n", "    y_val_lr_pred = \n", "\n", "    acc_train_lr = \n", "    acc_val_lr = \n", "\n", "    print(\"Logistic Regression: Train Accuracy: {:.4f}, Validation Accuracy: {:.4f}\".format(acc_train_lr, acc_val_lr))\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Because K-fold cross validation loops all look the same, scikit-learn offers a higher level function called cross_validate to take care of this for you. All you have to do is pass in an estimator, a cross validation strategy, and a dictionary of scores." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "UXXdBAQ7-JZc" }, "outputs": [], "source": [ "# create the LogisticRegression object, the cv object, and the score dictionary\n", "lr = \n", "cv = \n", "scores = {'acc' : make_scorer(accuracy_score)} # filled out for you because slightly weird with make_scorer\n", "\n", "# run the cross_validate function on X_train1414, Y_train_subsampled using your \n", "# estimator, cv, scores above\n", "cv_results =" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "cross_validate produces a helpful table of times and test scores. Notice that the scores for each fold vary slightly. 
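\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A quick way to summarize the spread across folds is to report the mean and standard deviation of the fold scores. A minimal sketch, assuming the cross_validate call above was completed and its result stored in cv_results:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# sketch: summarize the per-fold accuracies returned by cross_validate\n", "fold_accs = cv_results['test_acc']\n", "print(\"Accuracy per fold:\", np.round(fold_accs, 4))\n", "print(\"Mean: {:.4f}, Std: {:.4f}\".format(fold_accs.mean(), fold_accs.std()))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "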
When we cross validate a model, we are looking not only for high scores but for CONSISTENCY of scores.\n", "\n", "If your results across folds look like 90%, 90%, 90%, 60%, 90%, your job as a data scientist is to figure out what happened in that fourth fold to throw your model so far off. \n", "\n", "In general, consistent cross validation scores will lead to results on the test data which are also consistent with what you've seen so far. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 206 }, "id": "gQI4eHfC-RQA", "outputId": "c38860de-c74f-4d49-c376-94af344c7a07" }, "outputs": [], "source": [ "pd.DataFrame(cv_results).round(4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### (c) Cross Validation of k-nearest neighbors classifier\n", "\n", "Before we cross validate our KNeighborsClassifier, note that above we chose the value of 'n_neighbors' somewhat arbitrarily. Was this a good choice? We should try various values of this parameter to see what the optimal value is, i.e. the one that achieves the highest cross validation score. By hand this could be done in a nested loop, with the outer loop running through choices of n_neighbors and the inner loop through folds.\n", "\n", "Luckily for us, scikit-learn packages this up into the super useful [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html?highlight=gridsearchcv#sklearn.model_selection.GridSearchCV) object. GridSearchCV takes an estimator, a cross validation strategy, and a dictionary of parameters, and finds the best parameters for the estimator. Unlike cross_validate, GridSearchCV acts as an estimator itself with fit and predict methods, so you don't need to retrain a new estimator with the optimal parameters. What a bargain!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 36 }, "id": "vXB19C6F-U8w", "outputId": "3500a555-a72c-48a6-d542-8d1b0616c736" }, "outputs": [], "source": [ "# create the KNeighborsClassifier object, the cv object, and the parameter dictionary\n", "# I would recommend using n_folds=3 and optimizing over n_neighbors=3,5,10\n", "# for speed reasons\n", "knn = \n", "cv = \n", "params = # this should be a dictionary {\"param_name\" : [param, options, ...]}\n", "\n", "gcv = " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Fitting the GridSearchCV estimator might take a minute..." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "9StNy36E-XFl", "outputId": "562c052b-9011-412e-a38e-e7170d4008cc" }, "outputs": [], "source": [ "# fit the GridSearchCV object on the subsampled training data and time it\n", "%time gcv.fit(X_train1414, Y_train_subsampled)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The GridSearchCV object holds a dictionary cv_results_ with the full results from the fitting, which you should look at carefully for consistency." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "GNRscilg-bAs", "outputId": "69f8a866-f737-4e89-8893-8dbdeb40eceb" }, "outputs": [], "source": [ "# print and examine the variable cv_results_ in the class object of GridSearchCV from above\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "best_estimator_ holds the information for the optimal configuration of parameters found by the grid search." 
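] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Two related attributes of the fitted GridSearchCV object that are often handy are best_params_ (just the winning parameter values) and best_score_ (the corresponding mean cross validation score). A minimal sketch, assuming the fitted grid search object from above is called gcv:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# sketch: best parameter setting found by the grid search and its mean cross validation score\n", "print(gcv.best_params_)\n", "print(gcv.best_score_)" 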
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "a4wtV9ITHzlS", "outputId": "84a920e2-f145-4349-ef61-c828d8b19e6b" }, "outputs": [], "source": [ "# print the best_estimator_ to see which n_neighbors did the best\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have a good idea of where Logistic Regression and K Nearest Neighbors stand, let's train them on the entire training data (the 14x14 pixel one, but you can also later try the full 28x28 pixel data!) and evaluate them on the held out, never looked at testing data.\n", "\n", "K-Nearest Neighbors has already been trained via GridSearchCV, so we only need to train logistic regression." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "bkUIFRmCP02M" }, "outputs": [], "source": [ "# one more time: Create a LogisticRegression object with the usual parameters\n", "# and fit it on the entire X_train1414, Y_train_subsampled\n", "\n", "# make predictions from the GridsearchCV object and LogisticRegression object\n", "# on the held out X_test1414 testing set\n", "y_pred_knn = \n", "y_pred_lr = " ] }, { "cell_type": "markdown", "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 388 }, "id": "yR0YpaHVP4cR", "outputId": "2876e91b-e424-43ff-b7de-1d850f250af4" }, "source": [ "Let's take a look at the labels assigned by k-nearest neighbors and logistic regression on some of our images. Do you see any disagreements? You can run the cell below multiple times to produce different images." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 388 }, "id": "yR0YpaHVP4cR", "outputId": "2876e91b-e424-43ff-b7de-1d850f250af4" }, "outputs": [], "source": [ "n_row = 3\n", "n_col = 5\n", "\n", "shuff = np.random.permutation(len(Y_test_subsampled))\n", "\n", "fig, axs = plt.subplots(n_row, n_col, figsize=fsize)\n", "for i, ax in zip(shuff, axs.ravel()):\n", " ax.set_title('True label: \\n %s\\n Prediction by KNN: \\n %s \\n Prediction by LR: \\n %s' % \\\n", " (label_array[Y_test_subsampled[i]], label_array[y_pred_knn[i]], label_array[y_pred_lr[i]]))\n", " ax.set_xticks([])\n", " ax.set_yticks([])\n", " ax.imshow(X_test1414[i].reshape(14,14), \n", " cmap='gray' )\n", "\n", "for ax in axs.ravel()[len(label_dict):]:\n", " ax.set_visible(False)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We compute the scores on the never seen testing data set. You should hopefully see some consistency with your cross validation results. If not you need to think about what could cause a discrepancy. Maybe the characterstics of the testing data has changes since you collected the training data. Perhaps people stopped wearing pants and so these are missing from the testing data?\n", "\n", "In general the test scores should not surprise you at this point." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "xzNJiOcDP8C1", "outputId": "2da00450-a29c-4a86-abc6-016d4bb5ba29" }, "outputs": [], "source": [ "# compute the knn and lr scored using accuracy_score\n", "knn_acc = \n", "lr_acc = \n", "\n", "# print the accuracy scores\n", "print(\"KNN acc: {:.4f}, lr acc: {:.4f}\".format(knn_acc, lr_acc))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There is more than one way to reach a certain accuracy score. 
Perhaps, regardless of their overall performance, one of these models was better than the other at a particular article of clothing. We can check the fraction of test images on which the two models agree to see how similar these models are." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "2b2JICXOQBjw", "outputId": "6ba013ce-3827-402b-ee63-e283a26f16e1" }, "outputs": [], "source": [ "(y_pred_knn == y_pred_lr).mean()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For a more in-depth look, we can form a confusion matrix between the K Nearest Neighbors predictions and the Logistic Regression predictions. Were there specific items that they tended to disagree on? In these cases, who was correct?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# create a confusion_matrix between y_pred_knn and y_pred_lr. You can see further above how this was done in a similar case.\n", "# Note that sklearn.metrics.confusion_matrix considers the first argument as the \"true label\" data and the second\n", "# argument as the \"predicted label\". However, in this case we are just comparing the labels predicted by KNN and LR, respectively!\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### (d) BONUS Problem\n", "\n", "If we have two or more estimators that are fairly uncorrelated, we can leverage their different strengths in a new model via a procedure known as stacking.\n", "\n", "The idea goes like this. Model A and Model B are both looking at pictures of clothes and making predictions. They receive an image which Model A confidently identifies as pants. Model B thinks that the image is of a dress, but is less sure. Model C receives these results from Models A and B and knows from experience that Model A is an expert on things that look more like pants. Model C gives us our final prediction \"pants\".\n", "\n", "Stacking can be thought of as a form of feature engineering or nonlinear transformation, which we haven't used so far in this notebook. From the original features (pixel values) we generate new features which are the outputs of our models. We then train a new model on these outputs to give us our overall prediction. \n", "\n", "Many models such as Random Forests and Neural Networks can be thought of as kinds of stacked models. Indeed, you can even stack stacked models! In practice the best performance on machine learning problems tends to come from large ensembles of stacked models.\n", "\n", "How to train and evaluate stacked models is a complex task that is very prone to overfitting. I would highly recommend reading into various strategies, particularly from the Kaggle competition community (these people love stacking), before using this in your own work.\n", "\n", "For now we illustrate the power of stacking using scikit-learn's [StackingClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.StackingClassifier.html?highlight=stackingclassifier#sklearn.ensemble.StackingClassifier). Using this object is quite simple: you need to pass in the base estimators, the meta estimator (Model C above), and a cross validation strategy. Under the hood, StackingClassifier trains the base estimators on the entire training set, and also produces out-of-fold class probability estimates using the cross validation strategy. 
The meta estimator is then trained on these held-out probability estimates instead of the actual training set predictions, to help avoid overfitting.\n", "\n", "Once again, using and understanding are very different things. I recommend doing your own research. This section is vague on purpose." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.ensemble import StackingClassifier" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "estimators = \n", "final_estimator = \n", "cv = \n", "\n", "sc = StackingClassifier(estimators=estimators, final_estimator=final_estimator, cv=cv)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "StackingClassifier can be fit like any estimator in the scikit-learn API." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sc.fit(X_train1414, Y_train_subsampled)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Likewise, we can use the predict method to get results from our stacked model on the test data X_test1414." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "y_pred_stack = sc.predict(X_test1414)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You might find an improvement over the individual models. Isn't that cool? It might be worth digging through the models, which are stored in the StackingClassifier object, to get a better understanding of how this stacked model is working." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "stack_acc = accuracy_score(Y_test_subsampled, y_pred_stack)\n", "print(\"KNN acc: {:.4f}, lr acc: {:.4f}, Stack acc: {:.4f}\".format(knn_acc, lr_acc, stack_acc))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have an overall increase in accuracy, but to get a better understanding of why the accuracy improved, we can look at the individual categories using the confusion matrices. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#make confusion matrices for the lr, knn, and stacked models\n", "#recall: y_pred_lr, y_pred_knn, y_pred_stack are the predictions from the test data X_test1414\n", "\n", "confusionmatrix_lr = confusion_matrix(Y_test_subsampled, y_pred_lr)\n", "disp_lr = ConfusionMatrixDisplay(confusionmatrix_lr,display_labels=label_array)\n", "\n", "confusionmatrix_knn = confusion_matrix(Y_test_subsampled, y_pred_knn)\n", "disp_knn = ConfusionMatrixDisplay(confusionmatrix_knn,display_labels=label_array)\n", "\n", "confusionmatrix_stack = confusion_matrix(Y_test_subsampled, y_pred_stack)\n", "disp_stack = ConfusionMatrixDisplay(confusionmatrix_stack,display_labels=label_array)\n", "\n", "fig, (ax1, ax2, ax3) = plt.subplots(1,3, figsize = (15,5))\n", "disp_lr.plot(ax=ax1)\n", "disp_knn.plot(ax=ax2)\n", "disp_stack.plot(ax=ax3)\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To get a better idea of how our predictions have changed after stacking, we can examine the accuracies for each category individually. \n", "\n", "Let $x$ be a data point in the test set and $\\ell$ be a category label (tshirt, pants, etc.). Calculate the accuracies: \n", "$$\\frac{\\#\\{x : x \\text{ has label } \\ell \\text{ and } x \\text{ predicted to have label } \\ell\\}}{\\#\\{x : x \\text{ predicted to be } \\ell\\}}, \\quad \\text{ for each label } \\ell,$$ as an array of length 10. 
(Hint: There is a straightforward way to compute the accuracy arrays from the confusion matrix.) Do this calculation for each model: logistic regression, KNN, and the stacked model. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# compute category-wise accuracies for each model \n", "lr_cw_acc = #numpy array of accuracies for each category\n", "knn_cw_acc = \n", "stack_cw_acc = \n", "\n", "#dataframes are easier on the eyes\n", "acc_df = pd.DataFrame([lr_cw_acc,knn_cw_acc,stack_cw_acc], index = ['LR', 'KNN', 'LR+KNN'])\n", "acc_df.rename(columns = label_dict, inplace = True)\n", "acc_df.round(3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Examine each category to see by how much the accuracies improved after stacking." ] } ], "metadata": { "colab": { "collapsed_sections": [], "name": "fashionMNIST_solutions.ipynb", "provenance": [] }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" } }, "nbformat": 4, "nbformat_minor": 1 }