{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "![MLU Logo](../data/MLU_Logo.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Machine Learning Accelerator - Tabular Data - Lecture 2\n", "\n", "\n", "## Decision Tree Model\n", "\n", "In this notebook, we build, train, and tune by [__GridSearchCV__](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) a [__Decision Tree Classifier__](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) to predict the __Outcome Type__ field of our review dataset.\n", "\n", "\n", "1. Read the dataset\n", "2. Exploratory Data Analysis\n", "3. Select features to build the model\n", "4. Training and test datasets\n", "5. Data processing with Pipeline and ColumnTransformer\n", "6. Train and tune a classifier\n", "7. Test the classifier\n", "8. Improvement ideas\n", "\n", "__Austin Animal Center Dataset__:\n", "\n", "In this exercise, we are working with pet adoption data from __Austin Animal Center__. We have two datasets that cover intake and outcome of animals. Intake data is available from [here](https://data.austintexas.gov/Health-and-Community-Services/Austin-Animal-Center-Intakes/wter-evkm) and outcome is from [here](https://data.austintexas.gov/Health-and-Community-Services/Austin-Animal-Center-Outcomes/9t4d-g238). \n", "\n", "In order to work with a single table, we joined the intake and outcome tables using the \"Animal ID\" column and created a single __review.csv__ file. We also didn't consider animals with multiple entries to the facility to keep our dataset simple. If you want to see the original datasets and the merged data with multiple entries, they are available under data/review folder: Austin_Animal_Center_Intakes.csv, Austin_Animal_Center_Outcomes.csv and Austin_Animal_Center_Intakes_Outcomes.csv.\n", "\n", "__Dataset schema:__ \n", "- __Pet ID__ - Unique ID of pet\n", "- __Outcome Type__ - State of pet at the time of recording the outcome (0 = not placed, 1 = placed). This is the field to predict.\n", "- __Sex upon Outcome__ - Sex of pet at outcome\n", "- __Name__ - Name of pet \n", "- __Found Location__ - Found location of pet before entered the center\n", "- __Intake Type__ - Circumstances bringing the pet to the center\n", "- __Intake Condition__ - Health condition of pet when entered the center\n", "- __Pet Type__ - Type of pet\n", "- __Sex upon Intake__ - Sex of pet when entered the center\n", "- __Breed__ - Breed of pet \n", "- __Color__ - Color of pet \n", "- __Age upon Intake Days__ - Age of pet when entered the center (days)\n", "- __Age upon Outcome Days__ - Age of pet at outcome (days)\n" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[33mWARNING: You are using pip version 21.3.1; however, version 22.3.1 is available.\n", "You should consider upgrading via the '/home/ec2-user/anaconda3/envs/pytorch_p39/bin/python -m pip install --upgrade pip' command.\u001b[0m\n", "Note: you may need to restart the kernel to use updated packages.\n" ] } ], "source": [ "%pip install -q -r ../requirements.txt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Read the dataset\n", "(Go to top)\n", "\n", "Let's read the dataset into a dataframe, using Pandas." 
] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The shape of the dataset is: (95485, 13)\n" ] } ], "source": [ "import pandas as pd\n", "\n", "import warnings\n", "warnings.filterwarnings(\"ignore\")\n", " \n", "df = pd.read_csv('../data/review/review_dataset.csv')\n", "\n", "print('The shape of the dataset is:', df.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Exploratory Data Analysis\n", "(Go to top)\n", "\n", "We will look at number of rows, columns and some simple statistics of the dataset." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
Pet IDOutcome TypeSex upon OutcomeNameFound LocationIntake TypeIntake ConditionPet TypeSex upon IntakeBreedColorAge upon Intake DaysAge upon Outcome Days
0A7940111.0Neutered MaleChunkAustin (TX)Owner SurrenderNormalCatNeutered MaleDomestic Shorthair MixBrown Tabby/White730730
1A7763591.0Neutered MaleGizmo7201 Levander Loop in Austin (TX)StrayNormalDogIntact MaleChihuahua Shorthair MixWhite/Brown365365
2A6747540.0Intact MaleNaN12034 Research in Austin (TX)StrayNursingCatIntact MaleDomestic Shorthair MixOrange Tabby66
3A6897241.0Neutered Male*Donatello2300 Waterway Bnd in Austin (TX)StrayNormalCatIntact MaleDomestic Shorthair MixBlack6060
4A6809691.0Neutered Male*Zeus4701 Staggerbrush Rd in Austin (TX)StrayNursingCatIntact MaleDomestic Shorthair MixWhite/Orange Tabby760
\n", "
" ], "text/plain": [ " Pet ID Outcome Type Sex upon Outcome Name \\\n", "0 A794011 1.0 Neutered Male Chunk \n", "1 A776359 1.0 Neutered Male Gizmo \n", "2 A674754 0.0 Intact Male NaN \n", "3 A689724 1.0 Neutered Male *Donatello \n", "4 A680969 1.0 Neutered Male *Zeus \n", "\n", " Found Location Intake Type Intake Condition \\\n", "0 Austin (TX) Owner Surrender Normal \n", "1 7201 Levander Loop in Austin (TX) Stray Normal \n", "2 12034 Research in Austin (TX) Stray Nursing \n", "3 2300 Waterway Bnd in Austin (TX) Stray Normal \n", "4 4701 Staggerbrush Rd in Austin (TX) Stray Nursing \n", "\n", " Pet Type Sex upon Intake Breed Color \\\n", "0 Cat Neutered Male Domestic Shorthair Mix Brown Tabby/White \n", "1 Dog Intact Male Chihuahua Shorthair Mix White/Brown \n", "2 Cat Intact Male Domestic Shorthair Mix Orange Tabby \n", "3 Cat Intact Male Domestic Shorthair Mix Black \n", "4 Cat Intact Male Domestic Shorthair Mix White/Orange Tabby \n", "\n", " Age upon Intake Days Age upon Outcome Days \n", "0 730 730 \n", "1 365 365 \n", "2 6 6 \n", "3 60 60 \n", "4 7 60 " ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Print the first five rows\n", "# NaN means missing data\n", "df.head()" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "RangeIndex: 95485 entries, 0 to 95484\n", "Data columns (total 13 columns):\n", " # Column Non-Null Count Dtype \n", "--- ------ -------------- ----- \n", " 0 Pet ID 95485 non-null object \n", " 1 Outcome Type 95485 non-null float64\n", " 2 Sex upon Outcome 95484 non-null object \n", " 3 Name 59138 non-null object \n", " 4 Found Location 95485 non-null object \n", " 5 Intake Type 95485 non-null object \n", " 6 Intake Condition 95485 non-null object \n", " 7 Pet Type 95485 non-null object \n", " 8 Sex upon Intake 95484 non-null object \n", " 9 Breed 95485 non-null object \n", " 10 Color 95485 non-null object \n", " 11 Age upon Intake Days 95485 non-null int64 \n", " 12 Age upon Outcome Days 95485 non-null int64 \n", "dtypes: float64(1), int64(2), object(10)\n", "memory usage: 9.5+ MB\n" ] } ], "source": [ "# Let's see the data types and non-null values for each column\n", "df.info()" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
Outcome TypeAge upon Intake DaysAge upon Outcome Days
count95485.00000095485.00000095485.000000
mean0.564005703.436959717.757313
std0.4958891052.2521971055.023160
min0.0000000.0000000.000000
25%0.00000030.00000060.000000
50%1.000000365.000000365.000000
75%1.000000730.000000730.000000
max1.0000009125.0000009125.000000
\n", "
" ], "text/plain": [ " Outcome Type Age upon Intake Days Age upon Outcome Days\n", "count 95485.000000 95485.000000 95485.000000\n", "mean 0.564005 703.436959 717.757313\n", "std 0.495889 1052.252197 1055.023160\n", "min 0.000000 0.000000 0.000000\n", "25% 0.000000 30.000000 60.000000\n", "50% 1.000000 365.000000 365.000000\n", "75% 1.000000 730.000000 730.000000\n", "max 1.000000 9125.000000 9125.000000" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# This prints basic statistics for numerical columns\n", "df.describe()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's separate model features and model target. " ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Index(['Pet ID', 'Outcome Type', 'Sex upon Outcome', 'Name', 'Found Location',\n", " 'Intake Type', 'Intake Condition', 'Pet Type', 'Sex upon Intake',\n", " 'Breed', 'Color', 'Age upon Intake Days', 'Age upon Outcome Days'],\n", " dtype='object')\n" ] } ], "source": [ "print(df.columns)" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Model features: Index(['Pet ID', 'Sex upon Outcome', 'Name', 'Found Location', 'Intake Type',\n", " 'Intake Condition', 'Pet Type', 'Sex upon Intake', 'Breed', 'Color',\n", " 'Age upon Intake Days', 'Age upon Outcome Days'],\n", " dtype='object')\n", "Model target: Outcome Type\n" ] } ], "source": [ "model_features = df.columns.drop('Outcome Type')\n", "model_target = 'Outcome Type'\n", "\n", "print('Model features: ', model_features)\n", "print('Model target: ', model_target)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can explore the features set further, figuring out first what features are numerical or categorical. Beware that some integer-valued features could actually be categorical features, and some categorical features could be text features. " ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Numerical columns: Index(['Age upon Intake Days', 'Age upon Outcome Days'], dtype='object')\n", "\n", "Categorical columns: Index(['Pet ID', 'Sex upon Outcome', 'Name', 'Found Location', 'Intake Type',\n", " 'Intake Condition', 'Pet Type', 'Sex upon Intake', 'Breed', 'Color'],\n", " dtype='object')\n" ] } ], "source": [ "import numpy as np\n", "numerical_features_all = df[model_features].select_dtypes(include=np.number).columns\n", "print('Numerical columns:',numerical_features_all)\n", "\n", "print('')\n", "\n", "categorical_features_all = df[model_features].select_dtypes(include='object').columns\n", "print('Categorical columns:',categorical_features_all)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Target distribution\n", "\n", "Let's check our target distribution." 
] }, { "cell_type": "code", "execution_count": 9, "metadata": { "scrolled": true }, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYMAAAD+CAYAAADYr2m5AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuNCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8QVMy6AAAACXBIWXMAAAsTAAALEwEAmpwYAAAPLUlEQVR4nO3df6zddX3H8efLVpTMISB3Hestu2TcxFQTEZvSxf2xSVZaNCt/KIEsa0MauwRINFky6/5pUEngn7GRoFszOluzWYkbo8OyrimaZVkKvSgDC2O9QwltkF5pgRkjDnzvj/spHK7n9p5C7zmXnucjOTnf7/vz+X7P+yQ3fZ3vj3OaqkKSNNzeMegGJEmDZxhIkgwDSZJhIEnCMJAkAYsH3cCbdcEFF9TY2Nig25Ckt42HH374x1U10m3sbRsGY2NjTExMDLoNSXrbSPL0bGOeJpIkGQaSJMNAkoRhIEnCMJAkYRhIkjAMJEkYBpIkDANJEm/jbyC/HYxt/tagWzij/PDWjw+6BemM5ZGBJMkwkCQZBpIkDANJEoaBJAnDQJKEYSBJwjCQJGEYSJIwDCRJ9BgGSX6Y5LEkjySZaLXzk+xNcqg9n9fqSXJHkskkjya5rGM/G9r8Q0k2dNQ/0vY/2bbN6X6jkqTZncqRwe9V1aVVtaKtbwb2VdU4sK+tA6wFxttjE/AVmA4PYAtwObAS2HIiQNqcT3dst+ZNvyNJ0il7K6eJ1gHb2/J24OqO+o6ath84N8mFwJXA3qo6VlXHgb3AmjZ2TlXtr6oCdnTsS5LUB72GQQH/muThJJtabUlVPduWfwQsactLgWc6tj3caierH+5SlyT1Sa8/Yf07VXUkya8Be5P8V+dgVVWSOv3tvVELok0AF1100Xy/nCQNjZ6ODKrqSHs+CtzD9Dn/59opHtrz0Tb9CLCsY/PRVjtZfbRLvVsfW6tqRVWtGBkZ6aV1SVIP5gyDJL+S5FdPLAOrge8Du4ATdwRtAO5ty7uA9e2uolXAi+100h5gdZLz2oXj1cCeNvZSklXtLqL1HfuSJPVBL6eJlgD3tLs9FwN/X1X/kuQAcHeSjcDTwDVt/m7gKmAS+ClwPUBVHUvyReBAm/eFqjrWlm8AvgqcDdzfHpKkPpkzDKrqKeBDXerPA1d0qRdw4yz72gZs61KfAD7YQ7+SpHngN5AlSYaBJMkwkCRhGEiSMAwkSRgGkiQMA0kShoEkCcNAkoRhIEnCMJAkYRhIkjAMJEkYBpIkDANJEoaBJAnDQJKEYSBJwjCQJGEYSJIwDCRJGAaSJAwDSRKGgSQJw0CShGEgScIwkCQBiwfdgKTBGNv8rUG3cEb54a0fH3QLb4lHBpIkw0CSdAphkGRRku8lua+tX5zkwSSTSb6R5KxWf1dbn2zjYx37+HyrP5nkyo76mlabTLL5NL4/SVIPTuXI4DPAEx3rtwG3V9UlwHFgY6tvBI63+u1tHkmWA9cCHwDWAF9uAbMIuBNYCywHrmtzJUl90lMYJBkFPg78TVsP8DHgm23KduDqtryurdPGr2jz1wE7q+rlqvoBMAmsbI/Jqnqqqn4O7GxzJUl90uuRwV8Afwr8oq2/D3ihql5p64eBpW15KfAMQBt/sc1/rT5jm9nqvyTJpiQTSSampqZ6bF2SNJc5wyDJJ4CjVfVwH/o5qaraWlUrqmrFyMjIoNuRpDNGL98z+CjwB0muAt4NnAP8JXBuksXt0/8ocKTNPwIsAw4nWQy8F3i+o35C5zaz1SVJfTDnkUFVfb6qRqtqjOkLwA9U1R8C3wY+2aZtAO5ty7vaOm38gaqqVr+23W10MTAOPAQcAMbb3UlntdfYdVrenSSpJ2/lG8ifA3Ym+RLwPeCuVr8L+FqSSeAY0/+4U1UHk9wNPA68AtxYVa8CJLkJ2AMsArZV1cG30Jck6RSdUhhU1XeA77Tlp5i+E2jmnJ8Bn5pl+1uAW7rUdwO7T6UXSdLp4zeQJUmGgSTJMJAkYRhIkjAMJEkYBpIkDANJEoaBJAnDQJKEYSBJwjCQJGEYSJIwDCRJGAaSJAwDSRKGgSQJw0CShGEgScIwkCRhGEiSMAwkSRgGkiQMA0kShoEkCcNAkoRhIEnCMJAkYRhIkjAMJEn0EAZJ3p3koST/meRgkptb/eIkDyaZTPKNJGe1+rva+mQbH+vY1+db/ckkV3bU17TaZJLN8/A+JUkn0cuRwcvAx6rqQ8ClwJokq4DbgNur6hLgOLCxzd8IHG/129s8kiwHrgU+AKwBvpxkUZJFwJ3AWmA5cF2bK0nqkznDoKb9pK2+sz0K+BjwzVbfDlzdlte1ddr4FUnS6jur6uWq+gEwCaxsj8mqeqqqfg7sbHMlSX3S0zWD9gn+EeAosBf4H+CFqnqlTTkMLG3LS4FnANr4i8D7OusztpmtLknqk57CoKperapLgVGmP8m/fz6bmk2STUkmkkxMTU0NogVJOiOd0t1EVfUC8G3gt4FzkyxuQ6PAkbZ8BFgG0MbfCzzfWZ+xzWz1bq+/tapWVNWKkZGRU2ldknQSvdxNNJLk3LZ8NvD7wBNMh8In27QNwL1teVdbp40/UFXV6te2u40uBsaBh4ADwHi7O+kspi8y7zoN702S1KPFc0/hQmB7u+vnHcDdVXVfkseBnUm+BHwPuKvNvwv4WpJJ4BjT/7hTVQeT3A08DrwC3FhVrwIkuQnYAywCtlXVwdP2DiVJc5ozDKrqUeDDXepPMX39YGb9Z8CnZtnXLcAtXeq7gd099CtJmgd+A1mSZBhIkgwDSRKGgSQJw0CShGEgScIwkCRhGEiSMAwkSRgGkiQMA0kShoEkCcNAkoRhIEnCMJAkYRhIkjAMJEkYBpIkDANJEoaBJAnDQJKEYSBJwjCQJGEYSJIwDCRJGAaSJAwDSRKGgSQJw0CShGEgSaKHMEiyLMm3kzye5GCSz7T6+Un2JjnUns9r9SS5I8lkkkeTXNaxrw1t/qEkGzrqH0nyWNvmjiSZjzcrSequlyODV4A/qarlwCrgxiTLgc3AvqoaB/a1dYC1wHh7bAK+AtPhAWwBLgdWAltOBEib8+mO7da89bcmSerVnGFQVc9W1Xfb8v8CTwBLgXXA9jZtO3B1W14H7Khp+4Fzk1wIXAnsrapjVXUc2AusaWPnVNX+qipgR8e+JEl9cErXDJKMAR8GHgSWVNWzbehHwJK2vBR4pmOzw612svrhLvVur78pyUSSiampqVNpXZJ0Ej2HQZL3AP8AfLaqXuoca5/o6zT39kuqamtVraiqFSMjI/P9cpI0NHoKgyTvZDoI/q6q/rGVn2uneGjPR1v9CLCsY/PRVjtZfbRLXZLUJ73cTRTgLuCJqvrzjqFdwIk7gjYA93bU17e7ilYBL7bTSXuA1UnOaxeOVwN72thLSVa111rfsS9JUh8s7mHO
R4E/Ah5L8kir/RlwK3B3ko3A08A1bWw3cBUwCfwUuB6gqo4l+SJwoM37QlUda8s3AF8Fzgbubw9JUp/MGQZV9e/AbPf9X9FlfgE3zrKvbcC2LvUJ4INz9SJJmh9+A1mSZBhIkgwDSRKGgSQJw0CShGEgScIwkCRhGEiSMAwkSRgGkiQMA0kShoEkCcNAkoRhIEnCMJAkYRhIkjAMJEkYBpIkDANJEoaBJAnDQJKEYSBJwjCQJGEYSJIwDCRJGAaSJAwDSRKGgSQJw0CSRA9hkGRbkqNJvt9ROz/J3iSH2vN5rZ4kdySZTPJokss6ttnQ5h9KsqGj/pEkj7Vt7kiS0/0mJUkn18uRwVeBNTNqm4F9VTUO7GvrAGuB8fbYBHwFpsMD2AJcDqwEtpwIkDbn0x3bzXwtSdI8mzMMqurfgGMzyuuA7W15O3B1R31HTdsPnJvkQuBKYG9VHauq48BeYE0bO6eq9ldVATs69iVJ6pM3e81gSVU925Z/BCxpy0uBZzrmHW61k9UPd6l3lWRTkokkE1NTU2+ydUnSTG/5AnL7RF+noZdeXmtrVa2oqhUjIyP9eElJGgpvNgyea6d4aM9HW/0IsKxj3mirnaw+2qUuSeqjNxsGu4ATdwRtAO7tqK9vdxWtAl5sp5P2AKuTnNcuHK8G9rSxl5KsancRre/YlySpTxbPNSHJ14HfBS5Icpjpu4JuBe5OshF4GrimTd8NXAVMAj8FrgeoqmNJvggcaPO+UFUnLkrfwPQdS2cD97eHJKmP5gyDqrpulqEruswt4MZZ9rMN2NalPgF8cK4+JEnzx28gS5IMA0mSYSBJwjCQJGEYSJIwDCRJGAaSJAwDSRKGgSQJw0CShGEgScIwkCRhGEiSMAwkSRgGkiQMA0kShoEkCcNAkoRhIEnCMJAkYRhIkjAMJEkYBpIkDANJEoaBJAnDQJKEYSBJwjCQJGEYSJIwDCRJLKAwSLImyZNJJpNsHnQ/kjRMFkQYJFkE3AmsBZYD1yVZPtiuJGl4LIgwAFYCk1X1VFX9HNgJrBtwT5I0NBYPuoFmKfBMx/ph4PKZk5JsAja11Z8kebIPvQ2DC4AfD7qJueS2QXegAfHv8/T5zdkGFkoY9KSqtgJbB93HmSbJRFWtGHQfUjf+ffbHQjlNdARY1rE+2mqSpD5YKGFwABhPcnGSs4BrgV0D7kmShsaCOE1UVa8kuQnYAywCtlXVwQG3NUw89aaFzL/PPkhVDboHSdKALZTTRJKkATIMJEmGgSTJMJC0ACU5P8n5g+5jmBgGkhaEJBcl2ZlkCngQeCjJ0VYbG3B7ZzzDYEglWZLksvZYMuh+JOAbwD3Ar1fVeFVdAlwI/BPTv1emeeStpUMmyaXAXwHv5fVveY8CLwA3VNV3B9OZhl2SQ1U1fqpjOj0MgyGT5BHgj6vqwRn1VcBfV9WHBtKYhl6SncAxYDuv/3DlMmADcEFVXTOo3oaBYTBk5vj0NdkOzaW+az9Fs5Hpn69f2sqHgX8G7qqqlwfV2zAwDIZMkjuA3wJ28MZPX+uBH1TVTYPqTdLgGAZDKMla3vjp6wiwq6p2D64raXZJPlFV9w26jzOZYSBpwUtyc1VtGXQfZzLDQK9Jsqn9B0LSQCR5P92PWp8YXFfDwe8ZqFMG3YCGV5LPMf19ggAPtUeAryfZPMjehoFHBnpNkuur6m8H3YeGU5L/Bj5QVf83o34WcNDvGcwvjwzU6eZBN6Ch9gvgN7rUL2xjmkcL4n86U/8keXS2IcCfpdAgfRbYl+QQr9/2fBFwCeAtz/PM00RDJslzwJXA8ZlDwH9UVbdPZlJfJHkHsJI3XkA+UFWvDq6r4eCRwfC5D3hPVT0ycyDJd/rejdShqn4B7B90H8PIIwNJkheQJUmGgSQJw0CShGEgSQL+HwXgTl2dbdU4AAAAAElFTkSuQmCC\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "import matplotlib.pyplot as plt\n", "%matplotlib inline\n", "\n", "df[model_target].value_counts().plot.bar()\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From the target plots we can identify whether or not we are dealing with imbalanced datasets - this means one result type is dominating the other one(s). \n", "\n", "Handling class imbalance is highly recommended, as the model performance can be greatly impacted. In particular the model may not work well for the infrequent classes, as there are not enough samples to learn patterns from, and so it would be hard for the classifier to identify and match those patterns. \n", "\n", "We might want to downsample the dominant class or upsample the rare the class, to help with learning its patterns. However, we should only fix the imbalance in training set, without changing the validation and test sets, as these should follow the original distribution. We will perform this task after train/test split. \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Select features to build the model\n", "(Go to top)\n", "\n", "This time we build a model using all features. That is, we build a classifier including __numerical, categorical__ and __text__ features. " ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "# Grab model features/inputs and target/output\n", "\n", "# can also grab less numerical features, as some numerical data might not be very useful\n", "numerical_features = ['Age upon Intake Days', 'Age upon Outcome Days']\n", "\n", "# dropping the IDs features, RescuerID and PetID here \n", "categorical_features = ['Sex upon Outcome', 'Intake Type',\n", " 'Intake Condition', 'Pet Type', 'Sex upon Intake']\n", "\n", "# from EDA, select the text features\n", "text_features = ['Name', 'Found Location', 'Breed', 'Color']\n", " \n", "model_features = numerical_features + categorical_features + text_features\n", "model_target = 'Outcome Type'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Cleaning numerical features " ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Age upon Intake Days\n" ] }, { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAZEAAAD4CAYAAAAtrdtxAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuNCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8QVMy6AAAACXBIWXMAAAsTAAALEwEAmpwYAAAaB0lEQVR4nO3df7RV5X3n8fcnEBRtFNA7jOEyBSeMGWKrwVslyzbTSgXUVpwZY8mkwx2HkbbiTNLMWi22s4ZWY5fO6sSETmJCAxWcJEjID5mIZRBd7Zo/QK4/qgKx3PiLS1BuBSHRVoP9zh/7e2V7vReO+7LP5XA/r7XOOs/+Ps/e59nbc/2yn/2cvRURmJmZVfG+4e6AmZm1LicRMzOrzEnEzMwqcxIxM7PKnETMzKyy0cPdgWY7++yzY8qUKcPdDTOzlvHoo4/+XUS0DVQ34pLIlClT6OrqGu5umJm1DEkvDFbn4SwzM6vMScTMzCpzEjEzs8qcRMzMrDInETMzq8xJxMzMKnMSMTOzypxEzMysMicRMzOrbMT9Yn0opiy5f7i70HTP337VcHfBzE5gPhMxM7PKnETMzKyyWpOIpN+VtF3S05K+KelUSVMlbZXULeleSWOy7Sm53J31U0rbuTnjz0iaU4rPzVi3pCV17ouZmb1bbUlE0iTgvwAdEXE+MAqYD9wB3BkRHwIOAAtzlYXAgYzfme2QND3X+wgwF/iypFGSRgFfAq4ApgOfzLZmZtYkdQ9njQbGShoNnAbsBS4D1mX9KuCaLM/LZbJ+liRlfE1EvBERzwHdwMX56o6IZyPiTWBNtjUzsyapLYlExB7gT4EXKZLHQeBR4NWIOJzNeoBJWZ4E7M51D2f7s8rxfusMFjczsyapczhrPMWZwVTgg8DpFMNRTSdpkaQuSV29vb3D0QUzs5NSncNZvwo8FxG9EfFT4DvApcC4HN4CaAf2ZHkPMBkg688EXinH+60zWPxdImJ5RHREREdb24BPeDQzswrqTCIvAjMlnZbXNmYBO4CHgWuzTSdwX5bX5zJZ/1BERMbn5+ytqcA04BFgGzAtZ3uNobj4vr7G/TEzs35q+8V6RGyVtA54DDgMPA4sB+4H1kj6XMZW5CorgHskdQP7KZICEbFd0lqKBHQYWBwRbwFIugnYSDHza2VEbK9rf8zM7N1qve1JRCwFlvYLP0sxs6p/238APjHIdm4DbhsgvgHYMPSemplZFf7FupmZVeYkYmZmlTmJmJlZZU4iZmZWmZOImZlV5iRiZmaVOYmYmVllTiJmZlaZk4iZmVXmJGJmZpU5iZiZWWVOImZmVpmTiJmZVeYkYmZmlTmJmJlZZU4iZmZWWW1JRNJ5kp4ovQ5J+oykCZI2SdqV7+OzvSQtk9Qt6UlJM0rb6sz2uyR1luIXSXoq11mWj+E1M7MmqS2JRMQzEXFhRFwIXAS8DnwXWAJsjohpwOZcBriC4vnp04BFwF0AkiZQPB3xEoonIi7tSzzZ5obSenPr2h8zM3u3Zg1nzQJ+GBEvAPOAVRlfBVyT5XnA6ihsAcZJOgeYA2yKiP0RcQDYBMzNujMiYktEBLC6tC0zM2uCZiWR+cA3szwxIvZm+SVgYpYnAbtL6/Rk7GjxngHiZmbWJLUnEUljgKuBb/WvyzOIaEIfFknqktTV29tb98eZmY0YzTgTuQJ4LCJezuWXcyiKfN+X8T3A5NJ67Rk7Wrx9gPi7RMTyiOiIiI62trYh7o6ZmfVpRhL5JEeGsgDWA30zrDqB+0rxBTlLayZwMIe9NgKzJY3PC+qzgY1Zd0jSzJyVtaC0LTMza4LRdW5c0unA5cBvlcK3A2slLQReAK7L+AbgSqCbYibX9QARsV/SrcC2bHdLROzP8o3A3cBY4IF8mZlZk9SaRCLiNeCsfrFXKGZr9W8bwOJBtrMSWDlAvAs4/7h01szM3jP/Yt3MzCpzEjEzs8qcRMzMrDInETMzq8xJxMzMKnMSMTOzypxEzMysMicRMzOrzEnEzMwqcxIxM7PKnETMzKwyJxEzM6vMScTMzCpzEjEzs8qcRMzMrDInETMzq6zWJCJpnKR1kn4gaaekj0maIGmTpF35Pj7bStIySd2SnpQ0o7Sdzmy/S1JnKX6RpKdynWX5mFwzM2uSus9Evgj8ZUR8GLgA2AksATZHxDRgcy4DXAFMy9ci4C4ASROApcAlwMXA0r7Ek21uKK03t+b9MTOzktqSiKQzgY8DKwAi4s2IeBWYB6zKZquAa7I8D1gdhS3AOEnnAHOATRGxPyIOAJuAuVl3RkRsyUfrri5ty8zMmqDOM5GpQC/wF5Iel/Q1SacDEyNib7Z5CZiY5UnA7tL6PRk7WrxngLiZmTVJnUlkNDADuCsiPgq8xpGhKwDyDCJq7AMAkhZJ6pLU1dvbW/fHmZmNGHUmkR6gJyK25vI6iqTycg5Fke/7sn4PMLm0fnvGjhZvHyD+LhGxPCI6IqKjra1tSDtlZmZH1JZEIuIlYLek8zI0C9gBrAf6Zlh1AvdleT2wIGdpzQQO5rDXRmC2pPF5QX02sDHrDkmambOyFpS2ZWZmTTC65u3/Z+DrksYAzwLXUySutZIWAi8A12XbDcCVQDfwerYlIvZLuhXYlu1uiYj9Wb4RuBsYCzyQLzMza5Jak0hEPAF0DFA1a4C2ASweZDsrgZUDxLuA84fWSzMzq8q/WDczs8qcRMzMrDInETMzq8xJxMzMKnMSMTOzypxEzMysMicRMzOrzEnEzMwqcxIxM7PKnETMzKwyJxEzM6vMScTMzCpzEjEzs8oaSiKSfq7ujpiZWetp9Ezky5IekXSjpDNr7ZGZmbWMhpJIRPwS8CmKx9Q+Kukbki6vtWdmZnbCa/iaSETsAv4b8PvAvwKWSfqBpH8z2DqSnpf0lKQnJHVlbIKkTZJ25fv4jEvSMkndkp6UNKO0nc5sv0tSZyl+UW6/O9fVez8EZmZWVaPXRH5e0p3ATuAy4Ncj4l9m+c5jrP4rEXFhRPQ94XAJsDkipgGbcxngCmBavhYBd+VnTwCWApcAFwNL+xJPtrmhtN7cRvbHzMyOj0bPRP4MeAy4ICIWR8RjABHxI4qzk/diHrAqy6uAa0rx1VHYAoyTdA4wB9gUEfsj4gCwCZibdWdExJZ8tO7q0rbMzKwJGn3G+lXA30fEWwCS3gecGhGvR8Q9R1kvgP8rKYCvRsRyYGJE7M36l4CJWZ4E7C6t25Oxo8V7BoibmVmTNHom8iAwtrR8WsaO5RcjYgbFUNViSR8vV+YZRDTYh8okLZLUJamrt7e37o8zMxsxGk0ip0bET/oWsnzasVaKiD35vg/4LsU1jZdzKIp835fN91DM/urTnrGjxdsHiA/Uj+UR0RERHW1tbcfqtpmZNajRJPJav9lSFwF/f7QVJJ0u6QN9ZWA28DSwHuibYdUJ3Jfl9cCCnKU1EziYw14bgdmSxucF9dnAxqw7JGlmzspaUNqWmZk1QaPXRD4DfEvSjwAB/xT4jWOsMxH4bs66HQ18IyL+UtI2YK2khcALwHXZfgNwJdANvA
5cDxAR+yXdCmzLdrdExP4s3wjcTTHU9kC+zMysSRpKIhGxTdKHgfMy9ExE/PQY6zwLXDBA/BVg1gDxABYPsq2VwMoB4l3A+cfcATMzq0WjZyIAvwBMyXVmSCIiVtfSKzMzawkNJRFJ9wD/HHgCeCvDfb/NMDOzEarRM5EOYHoOOZmZmQGNz856muJiupmZ2dsaPRM5G9gh6RHgjb5gRFxdS6/MzKwlNJpE/qjOTpiZWWtqdIrvX0n6WWBaRDwo6TRgVL1dMzOzE12jt4K/AVgHfDVDk4Dv1dQnMzNrEY1eWF8MXAocgrcfUPVP6uqUmZm1hkaTyBsR8WbfgqTRNOHuu2ZmdmJrNIn8laQ/AMbms9W/Bfyf+rplZmatoNEksgToBZ4CfoviZonv9YmGZmZ2kml0dtY/An+eLzMzM6Dxe2c9xwDXQCLi3OPeIzMzaxnv5d5ZfU4FPgFMOP7dMTOzVtLQNZGIeKX02hMRXwCuqrdrZmZ2omv0x4YzSq8OSb9N40NhoyQ9Lun7uTxV0lZJ3ZLulTQm46fkcnfWTylt4+aMPyNpTik+N2Pdkpa8lx03M7Oha3Q463+WyoeB5znyWNtj+TSwEzgjl+8A7oyINZK+AiwE7sr3AxHxIUnzs91vSJoOzAc+AnwQeFDSv8htfQm4HOgBtklaHxE7GuyXmZkNUaOzs36lysYltVMMe90GfFbFA9cvA/5dNllFcXPHu4B5HLnR4zrgf2X7ecCaiHgDeE5SN3BxtuvOx/AiaU22dRIxM2uSRoekPnu0+oj4/CBVXwB+D/hALp8FvBoRh3O5h+I+XOT77tzeYUkHs/0kYEtpm+V1dveLX3KsfTEzs+On0R8bdgC/Q/E/70nAbwMzKJLDBwZaQdKvAfsi4tHj0M8hkbRIUpekrt7e3uHujpnZSaPRayLtwIyI+DGApD8C7o+I3zzKOpcCV0u6kmJa8BnAF4Fxkkbn2Ug7sCfb7wEmAz15b64zgVdK8XJf+tYZLP4OEbEcWA7Q0dHhe36ZmR0njZ6JTATeLC2/mbFBRcTNEdEeEVMoLow/FBGfAh4Grs1mncB9WV6fy2T9Q/lM9/XA/Jy9NRWYBjwCbAOm5WyvMfkZ6xvcHzMzOw4aPRNZDTwi6bu5fA3FRfEqfh9YI+lzwOPAioyvAO7JC+f7KZICEbFd0lqKC+aHgcUR8RaApJuAjRQPyFoZEdsr9snMzCpQ8Y/9BhpKM4BfysW/jojHa+tVjTo6OqKrq6vSulOW3H+ce3Pie/52/6bUbKST9GhEdAxU1+hwFsBpwKGI+CLFdYupx6V3ZmbWshr9xfpSimGomzP0fuB/19UpMzNrDY2eifxr4GrgNYCI+BGDTO01M7ORo9Ek8mbOlAoASafX1yUzM2sVjSaRtZK+SvEbjxuAB/EDqszMRrxjTvHN+1fdC3wYOAScB/z3iNhUc9/MzOwEd8wkEhEhaUNE/BzgxGFmZm9rdDjrMUm/UGtPzMys5TT6i/VLgN+U9DzFDC1RnKT8fF0dMzOzE99Rk4ikfxYRLwJzjtbOzMxGpmOdiXyP4u69L0j6dkT82yb0yczMWsSxromoVD63zo6YmVnrOVYSiUHKZmZmxxzOukDSIYozkrFZhiMX1s+otXdmZnZCO2oSiYhRzeqImZm1nvdyK3gzM7N3cBIxM7PKaksikk6V9Iikv5G0XdIfZ3yqpK2SuiXdm89HJ5+hfm/Gt0qaUtrWzRl/RtKcUnxuxrolLalrX8zMbGB1nom8AVwWERcAFwJzJc0E7gDujIgPAQeAhdl+IXAg43dmOyRNp3je+keAucCXJY2SNAr4EnAFMB34ZLY1M7MmqS2JROEnufj+fAVwGbAu46uAa7I8L5fJ+ll5B+F5wJqIeCMingO6gYvz1R0Rz0bEm8CabGtmZk1S6zWRPGN4AthHcQfgHwKvRsThbNIDTMryJGA3QNYfBM4qx/utM1h8oH4sktQlqau3t/c47JmZmUHNSSQi3oqIC4F2ijOHD9f5eUfpx/KI6IiIjra2tuHogpnZSakps7Mi4lXgYeBjFE9H7Pt9SjuwJ8t7gMkAWX8m8Eo53m+dweJmZtYkdc7OapM0LstjgcuBnRTJ5Nps1gncl+X1uUzWP5TPdV8PzM/ZW1OBacAjwDZgWs72GkNx8X19XftjZmbv1ujzRKo4B1iVs6jeB6yNiO9L2gGskfQ54HFgRbZfAdwjqRvYT5EUiIjtktYCO4DDwOKIeAtA0k3ARmAUsDIitte4P2Zm1k9tSSQingQ+OkD8WYrrI/3j/wB8YpBt3QbcNkB8A7BhyJ01M7NK/It1MzOrzEnEzMwqcxIxM7PKnETMzKwyJxEzM6vMScTMzCpzEjEzs8qcRMzMrDInETMzq8xJxMzMKnMSMTOzypxEzMysMicRMzOrzEnEzMwqcxIxM7PKnETMzKyyOh+PO1nSw5J2SNou6dMZnyBpk6Rd+T4+45K0TFK3pCclzShtqzPb75LUWYpfJOmpXGeZJNW1P2Zm9m51nokcBv5rREwHZgKLJU0HlgCbI2IasDmXAa6geH76NGARcBcUSQdYClxC8UTEpX2JJ9vcUFpvbo37Y2Zm/dSWRCJib0Q8luUfAzuBScA8YFU2WwVck+V5wOoobAHGSToHmANsioj9EXEA2ATMzbozImJLRASwurQtMzNrgqZcE5E0heJ561uBiRGxN6teAiZmeRKwu7RaT8aOFu8ZID7Q5y+S1CWpq7e3d2g7Y2Zmb6s9iUj6GeDbwGci4lC5Ls8gou4+RMTyiOiIiI62tra6P87MbMSoNYlIej9FAvl6RHwnwy/nUBT5vi/je4DJpdXbM3a0ePsAcTMza5I6Z2cJWAHsjIjPl6rWA30zrDqB+0rxBTlLayZwMIe9NgKzJY3PC+qzgY1Zd0jSzPysBaVtmZlZE4yucduXAv8eeErSExn7A+B2YK2khcALwHVZtwG4EugGXgeuB4iI/ZJuBbZlu1siYn+WbwTuBsYCD+TLzMyapLYkEhH/DxjsdxuzBmgfwOJBtrUSWDlAvAs4fwjdNDOzIfAv1s3MrDInETMzq8xJxMzMKnMSMTOzypxEzMysMicRMzOrzEnEzMwqcxIxM7PKnETMzKwyJxEzM6vMScTMzCpzEjEzs8qcRMzMrDInETMzq8xJxMzMKnMSMTOzyup8PO5KSfskPV2KTZC0SdKufB+fcUlaJqlb0pOSZpTW6cz2uyR1luIXSXoq11mWj8g1M7MmqvNM5G5gbr/YEmBzREwDNucywBXAtHwtAu6CIukAS4FLgIuBpX2JJ9vcUFqv/2eZmVnNaksiEfHXwP5+4XnAqiyvAq4pxVdHYQswTtI5wBxgU0Tsj4gDwCZgbtadERFb8rG6q0vbMjOzJmn2NZGJEbE3yy8BE7M8CdhdateTsaPFewaID0jSIkldkrp6e3uHtgdmZva2YbuwnmcQ0aTPWh4RHRHR0dbW1oyPNDMbEZqdRF7OoSjyfV/G9wCTS+3aM3a0ePsAcTMza6LRTf689UAncHu+31eK3yRpDcVF9
IMRsVfSRuBPShfTZwM3R8R+SYckzQS2AguAP2vmjowUU5bcP9xdaLrnb79quLtg1jJqSyKSvgn8MnC2pB6KWVa3A2slLQReAK7L5huAK4Fu4HXgeoBMFrcC27LdLRHRd7H+RooZYGOBB/JlZmZNVFsSiYhPDlI1a4C2ASweZDsrgZUDxLuA84fSRzMzGxr/Yt3MzCpzEjEzs8qcRMzMrDInETMzq8xJxMzMKnMSMTOzypxEzMysMicRMzOrzEnEzMwqcxIxM7PKmn0DRrMT3ki76aRvOGlD4TMRMzOrzEnEzMwqcxIxM7PKnETMzKwyJxEzM6us5ZOIpLmSnpHULWnJcPfHzGwkaekpvpJGAV8CLgd6gG2S1kfEjuHtmVnrGGlTmsHTmo+nVj8TuRjojohnI+JNYA0wb5j7ZGY2YrT0mQgwCdhdWu4BLunfSNIiYFEu/kTSMxU/72zg7yque7LxsSj4OBzRMsdCd9T+ES1zLBr0s4NVtHoSaUhELAeWD3U7kroiouM4dKnl+VgUfByO8LE4YiQdi1YfztoDTC4tt2fMzMyaoNWTyDZgmqSpksYA84H1w9wnM7MRo6WHsyLisKSbgI3AKGBlRGyv8SOHPCR2EvGxKPg4HOFjccSIORaKiOHug5mZtahWH84yM7Nh5CRiZmaVOYk0YCTcWkXSZEkPS9ohabukT2d8gqRNknbl+/iMS9KyPCZPSppR2lZntt8lqXO49mkoJI2S9Lik7+fyVElbc3/vzYkcSDoll7uzfkppGzdn/BlJc4ZpV4ZE0jhJ6yT9QNJOSR8bwd+J382/jaclfVPSqSP1e/EOEeHXUV4UF+x/CJwLjAH+Bpg+3P2qYT/PAWZk+QPA3wLTgf8BLMn4EuCOLF8JPAAImAlszfgE4Nl8H5/l8cO9fxWOx2eBbwDfz+W1wPwsfwX4nSzfCHwly/OBe7M8Pb8rpwBT8zs0arj3q8JxWAX8pyyPAcaNxO8ExQ+bnwPGlr4P/2Gkfi/KL5+JHNuIuLVKROyNiMey/GNgJ8UfzjyK/5GQ79dkeR6wOgpbgHGSzgHmAJsiYn9EHAA2AXObtydDJ6kduAr4Wi4LuAxYl036H4e+47MOmJXt5wFrIuKNiHgO6Kb4LrUMSWcCHwdWAETEmxHxKiPwO5FGA2MljQZOA/YyAr8X/TmJHNtAt1aZNEx9aYo89f4osBWYGBF7s+olYGKWBzsuJ8Px+gLwe8A/5vJZwKsRcTiXy/v09v5m/cFsfzIch6lAL/AXObT3NUmnMwK/ExGxB/hT4EWK5HEQeJSR+b14BycRewdJPwN8G/hMRBwq10VxPn5SzwmX9GvAvoh4dLj7cgIYDcwA7oqIjwKvUQxfvW0kfCcA8rrPPIrE+kHgdFrzbOq4cxI5thFzaxVJ76dIIF+PiO9k+OUckiDf92V8sOPS6sfrUuBqSc9TDF1eBnyRYmim78e55X16e3+z/kzgFVr/OEDxr+SeiNiay+sokspI+04A/CrwXET0RsRPge9QfFdG4vfiHZxEjm1E3Folx2tXADsj4vOlqvVA32yaTuC+UnxBzsiZCRzMIY6NwGxJ4/Nfb7Mz1hIi4uaIaI+IKRT/rR+KiE8BDwPXZrP+x6Hv+Fyb7SPj83OWzlRgGvBIk3bjuIiIl4Ddks7L0CxgByPsO5FeBGZKOi3/VvqOxYj7XrzLcF/Zb4UXxayTv6WYSfGHw92fmvbxFymGJZ4EnsjXlRTjuJuBXcCDwIRsL4oHgv0QeAroKG3rP1JcMOwGrh/ufRvCMflljszOOpfij70b+BZwSsZPzeXurD+3tP4f5vF5BrhiuPen4jG4EOjK78X3KGZXjcjvBPDHwA+Ap4F7KGZYjcjvRfnl256YmVllHs4yM7PKnETMzKwyJxEzM6vMScTMzCpzEjEzs8qcRMzMrDInETMzq+z/A3zg+ytDvr8kAAAAAElFTkSuQmCC\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "Age upon Outcome Days\n" ] }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAZEAAAD4CAYAAAAtrdtxAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuNCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8QVMy6AAAACXBIWXMAAAsTAAALEwEAmpwYAAAaCElEQVR4nO3df7QX9X3n8ecrEBRtFNC7rOFiwQ1rlthq8FbJsc22UgG1K+6usWTT5a7LSltxN2n2nBbbPUsb4x7d040J3cSEDVRwkyAhP2QjlkX0tGf/ALn+qArEcuMvLkG5FYREWw32vX/M+8p4uRe+zmW+l6/39Thnzvcz7/nMfD8zzvXNfObznVFEYGZmVsX7hrsBZmbWupxEzMysMicRMzOrzEnEzMwqcxIxM7PKRg93A5rt7LPPjilTpgx3M8zMWsajjz76txHRNtCyEZdEpkyZQldX13A3w8ysZUh6YbBl7s4yM7PKnETMzKwyJxEzM6vMScTMzCpzEjEzs8qcRMzMrDInETMzq8xJxMzMKnMSMTOzykbcL9aHYsqS+4e7CU33/O1XD3cTzOwk5isRMzOrzEnEzMwqqzWJSPo9SdslPS3pW5JOlTRV0lZJ3ZLulTQm656S8925fEppO7dk/BlJc0rxuRnrlrSkzn0xM7Oj1ZZEJE0C/hPQEREXAKOA+cAdwJ0R8SHgALAwV1kIHMj4nVkPSdNzvY8Ac4GvSBolaRTwZeBKYDrwyaxrZmZNUnd31mhgrKTRwGnAXuByYF0uXwVcm+V5OU8unyVJGV8TEW9ExHNAN3BJTt0R8WxEvAmsybpmZtYktSWRiNgD/CnwIkXyOAg8CrwaEYezWg8wKcuTgN257uGsf1Y53m+dweJHkbRIUpekrt7e3qHvnJmZAfV2Z42nuDKYCnwQOJ2iO6rpImJ5RHREREdb24Av5zIzswrq7M76deC5iOiNiJ8B3wUuA8Zl9xZAO7Any3uAyQC5/EzglXK83zqDxc3MrEnqTCIvAjMlnZb3NmYBO4CHgeuyTidwX5bX5zy5/KGIiIzPz9FbU4FpwCPANmBajvYaQ3HzfX2N+2NmZv3U9ov1iNgqaR3wGHAYeBxYDtwPrJH0+YytyFVWAPdI6gb2UyQFImK7pLUUCegwsDgi3gKQdDOwkWLk18qI2F7X/piZ2dFqfexJRCwFlvYLP0sxsqp/3b8HPjHIdm4DbhsgvgHYMPSWmplZFf7FupmZVeYkYmZmlTmJmJlZZU4iZmZWmZOImZlV5iRiZmaVOYmYmVllTiJmZlaZk4iZmVXmJGJmZpU5iZiZWWVOImZmVpmTiJmZVeYkYmZmlTmJmJlZZU4iZmZWWW1JRNL5kp4oTYckfUbSBEmbJO3Kz/FZX5KWSeqW9KSkGaVtdWb9XZI6S/GLJT2V6yzL1/CamVmT1JZEIuKZiLgoIi4CLgZeB74HLAE2R8Q0YHPOA1xJ8f70acAi4C4ASRMo3o54KcUbEZf2JZ6sc2Npvbl17Y+ZmR2tWd1Zs4AfRcQLwDxgVcZXAddmeR6wOgpbgHGSzgHmAJsiYn9EHAA2AXNz2RkRsSUiAlhd2paZmTVBs5LIfOBbWZ4YEXuz/BIwMcuTgN2ldXoydqx4zwDxo0haJKlLUldvb+9Q9sPMzEpqTyKSxgDXAN/uvyyvIKLuNkTE8ojoiIiOtra2ur/OzGzEaMaVyJXAYxHxcs6/nF1R5Oe+jO8BJpfWa8/YseLtA8TNzKxJmpFEPsmRriyA9UDfCKtO4L5SfEGO0poJHMxur43AbEnj84b6bGBjLjskaWaOylpQ2paZmTXB6Do3Lul04Argt0vh24G1khYCLwDXZ3wDcBXQTTGS6waAiNgv6VZgW9b7XETsz/JNwN3AWOCBnMzMrElqTSIR8RpwVr/YKxSjtfrXDWDxINtZCawcIN4FXHBCGmtmZu+af7FuZmaVOYmYmVllTiJmZlaZk4iZmVXmJGJmZpU5iZiZWWVOImZmVpmTiJmZVeYkYmZmlTmJmJlZZU4iZmZWmZOImZlV5iRiZmaVOYmYmVllTiJmZlaZk4iZmVVWaxKRNE7SOkk/lLRT0sckTZC0SdKu/ByfdSVpmaRuSU9KmlHaTmfW3yWpsxS/WNJTuc6yfE2umZk1Sd1XIl8C/iIiPgxcCOwElgCbI2IasDnnAa4EpuW0CLgLQNIEYClwKXAJsLQv8WSdG0vrza15f8zMrKS2JCLpTODjwAqAiHgzIl4F5gGrstoq4NoszwNWR2ELME7SOcAcYFNE7I+IA8AmYG4uOyMituSrdVeXtmVmZk1Q55XIVKAX+HNJj0v6uqTTgYkRsTfrvARMzPIkYHdp/Z6MHSveM0D8KJIWSeqS1NXb2zvE3TIzsz51JpHRwAzgroj4KPAaR7quAMgriKixDX3fszwiOiKio62tre6vMzMbMepMIj1AT0Rszfl1FEnl5eyKIj/35fI9wOTS+u0ZO1a8fYC4mZk1SW1JJCJeAnZLOj9Ds4AdwHqgb4RVJ3BfltcDC3KU1kzgYHZ7bQRmSxqfN9RnAxtz2SFJM3NU1oLStszMrAlG17z9/wh8Q9IY4FngBorEtVbSQuAF4PqsuwG4CugGXs+6RMR+SbcC27Le5yJif5ZvAu4GxgIP5GRmZk1SaxKJiCeAjgEWzRqgbgCLB9nOSmDlAPEu4IKhtdLMzKryL9bNzKwyJxEzM6vMScTMzCpzEjEzs8qcRMzMrDInETMzq8xJxMzMKnMSMTOzypxEzMysMicRMzOrzEnEzMwqcxIxM7PKnETMzKyyhpKIpF+ouyFmZtZ6Gr0S+YqkRyTdJOnMWltkZmYto6EkEhG/AnyK4jW1j0r6pqQram2ZmZmd9Bq+JxIRu4D/AvwB8M+BZZJ+KOlfDbaOpOclPSXpCUldGZsgaZOkXfk5PuOStExSt6QnJc0obacz6++S1FmKX5zb78519e4PgZmZVdXoPZFflHQnsBO4HPgXEfHPsnzncVb/tYi4KCL63nC4BNgcEdOAzTkPcCUwLadFwF353ROApcClwCXA0r7Ek3VuLK03t5H9MTOzE6PRK5E/Ax4DLoyIxRHxGEBE/Jji6uTdmAesyvIq4NpSfHUUtgDjJJ0DzAE2RcT+iDgAbALm5rIzImJLvlp3dWlbZmbWBI2+Y/1q4O8i4i0ASe8DTo2I1yPinmOsF8D/lRTA1yJiOTAxIvbm8peAiVmeBOwurduTsWPFewaIH0XSIoqrG84999zj7KqZmTWq0SuRB4GxpfnTMnY8vxwRMyi6qhZL+nh5YV5BRINtqCwilkdER0R0tLW11f11ZmYjRqNJ5NSI+GnfTJZPO95KEbEnP/cB36
O4p/FydkWRn/uy+h6K0V992jN2rHj7AHEzM2uSRpPIa/1GS10M/N2xVpB0uqQP9JWB2cDTwHqgb4RVJ3BfltcDC3KU1kzgYHZ7bQRmSxqfN9RnAxtz2SFJM3NU1oLStszMrAkavSfyGeDbkn4MCPjHwG8eZ52JwPdy1O1o4JsR8ReStgFrJS0EXgCuz/obgKuAbuB14AaAiNgv6VZgW9b7XETsz/JNwN0UXW0P5GRmZk3SUBKJiG2SPgycn6FnIuJnx1nnWeDCAeKvALMGiAeweJBtrQRWDhDvAi447g6YmVktGr0SAfglYEquM0MSEbG6llaZmVlLaCiJSLoH+CfAE8BbGe77bYaZmY1QjV6JdADTs8vJzMwMaHx01tMUN9PNzMze1uiVyNnADkmPAG/0BSPimlpaZWZmLaHRJPLHdTbCzMxaU6NDfP9S0s8D0yLiQUmnAaPqbZqZmZ3sGn0U/I3AOuBrGZoEfL+mNpmZWYto9Mb6YuAy4BC8/YKqf1RXo8zMrDU0mkTeiIg3+2YkjaYJT981M7OTW6NJ5C8l/SEwNt+t/m3g/9TXLDMzawWNJpElQC/wFPDbFA9LfLdvNDQzs/eYRkdn/QPwv3IyMzMDGn921nMMcA8kIs474S0yM7OW8W6endXnVOATwIQT3xwzM2slDd0TiYhXStOeiPgicHW9TTMzs5Ndoz82nFGaOiT9Do13hY2S9LikH+T8VElbJXVLulfSmIyfkvPduXxKaRu3ZPwZSXNK8bkZ65a05N3suJmZDV2j3Vn/o1Q+DDzPkdfaHs+ngZ3AGTl/B3BnRKyR9FVgIXBXfh6IiA9Jmp/1flPSdGA+8BHgg8CDkv5pbuvLwBVAD7BN0vqI2NFgu8zMbIgaHZ31a1U2LqmdotvrNuCzKl64fjnwb7LKKoqHO94FzOPIgx7XAf8z688D1kTEG8BzkrqBS7Jed76GF0lrsq6TiJlZkzTaJfXZYy2PiC8MsuiLwO8DH8j5s4BXI+JwzvdQPIeL/Nyd2zss6WDWnwRsKW2zvM7ufvFLB2n/ImARwLnnnnusXTEzs3eh0R8bdgC/S/E/70nA7wAzKJLDBwZaQdJvAPsi4tET0M4hiYjlEdERER1tbW3D3Rwzs/eMRu+JtAMzIuInAJL+GLg/In7rGOtcBlwj6SqKYcFnAF8CxkkanVcj7cCerL8HmAz05LO5zgReKcXLbelbZ7C4mZk1QaNXIhOBN0vzb2ZsUBFxS0S0R8QUihvjD0XEp4CHgeuyWidwX5bX5zy5/KF8p/t6YH6O3poKTAMeAbYB03K015j8jvUN7o+ZmZ0AjV6JrAYekfS9nL+W4qZ4FX8ArJH0eeBxYEXGVwD35I3z/RRJgYjYLmktxQ3zw8DiiHgLQNLNwEaKF2StjIjtFdtkZmYVqPjHfgMVpRnAr+TsX0XE47W1qkYdHR3R1dVVad0pS+4/wa05+T1/u39TajbSSXo0IjoGWtZodxbAacChiPgSxX2LqSekdWZm1rIa/cX6UopuqFsy9H7gf9fVKDMzaw2NXon8S+Aa4DWAiPgxgwztNTOzkaPRJPJmjpQKAEmn19ckMzNrFY0mkbWSvkbxG48bgQfxC6rMzEa84w7xzedX3Qt8GDgEnA/814jYVHPbzMzsJHfcJBIRIWlDRPwC4MRhZmZva7Q76zFJv1RrS8zMrOU0+ov1S4HfkvQ8xQgtUVyk/GJdDTMzs5PfMZOIpHMj4kVgzrHqmZnZyHS8K5HvUzy99wVJ34mIf92ENpmZWYs43j0Rlcrn1dkQMzNrPcdLIjFI2czM7LjdWRdKOkRxRTI2y3DkxvoZtbbOzMxOasdMIhExqlkNMTOz1vNuHgVvZmb2DrUlEUmnSnpE0l9L2i7pTzI+VdJWSd2S7s1X25Kvv70341slTSlt65aMPyNpTik+N2PdkpbUtS9mZjawOq9E3gAuj4gLgYuAuZJmAncAd0bEh4ADwMKsvxA4kPE7sx6SplO8KvcjwFzgK5JGSRoFfBm4EpgOfDLrmplZk9SWRKLw05x9f04BXA6sy/gqive1A8zjyHvb1wGz8uGP84A1EfFGRDwHdAOX5NQdEc9GxJvAmqxrZmZNUus9kbxieALYR/Hwxh8Br0bE4azSA0zK8iRgN0AuPwicVY73W2ewuJmZNUmtSSQi3oqIi4B2iiuHD9f5fYORtEhSl6Su3t7e4WiCmdl7UlNGZ0XEq8DDwMcoXmzVN7S4HdiT5T3AZIBcfibwSjneb53B4gN9//KI6IiIjra2thOxS2ZmRr2js9okjcvyWOAKYCdFMrkuq3UC92V5fc6Tyx/KV/KuB+bn6K2pwDTgEWAbMC1He42huPm+vq79MTOzozX6KPgqzgFW5Siq9wFrI+IHknYAayR9HngcWJH1VwD3SOoG9lMkBSJiu6S1wA7gMLA4It4CkHQzsBEYBayMiO017o+ZmfVTWxKJiCeBjw4Qf5bi/kj/+N8DnxhkW7cBtw0Q3wBsGHJjzcysEv9i3czMKnMSMTOzypxEzMysMicRMzOrzEnEzMwqcxIxM7PKnETMzKwyJxEzM6vMScTMzCpzEjEzs8qcRMzMrDInETMzq8xJxMzMKnMSMTOzypxEzMysMicRMzOrrM7X406W9LCkHZK2S/p0xidI2iRpV36Oz7gkLZPULelJSTNK2+rM+rskdZbiF0t6KtdZJkl17Y+ZmR2tziuRw8B/jojpwExgsaTpwBJgc0RMAzbnPMCVFO9PnwYsAu6CIukAS4FLKd6IuLQv8WSdG0vrza1xf8zMrJ/akkhE7I2Ix7L8E2AnMAmYB6zKaquAa7M8D1gdhS3AOEnnAHOATRGxPyIOAJuAubnsjIjYEhEBrC5ty8zMmqAp90QkTaF43/pWYGJE7M1FLwETszwJ2F1arSdjx4r3DBA3M7MmqT2JSPo54DvAZyLiUHlZXkFEE9qwSFKXpK7e3t66v87MbMSoNYlIej9FAvlGRHw3wy9nVxT5uS/je4DJpdXbM3asePsA8aNExPKI6IiIjra2tqHtlJmZva3O0VkCVgA7I+ILpUXrgb4RVp3AfaX4ghylNRM4mN1eG4HZksbnDfXZwMZcdkjSzPyuBaVtmZlZE4yucduXAf8WeErSExn7Q+B2YK2khcALwPW5bANwFdANvA7cABAR+yXdCmzLep+LiP1Zvgm4GxgLPJCTmZk1SW1JJCL+HzDY7zZmDVA/gMWDbGslsHKAeBdwwRCaaWZmQ+BfrJuZWWVOImZmVpmTiJmZVeYkYmZmlTmJmJlZZU4iZmZWmZOImZlV5iRiZmaVOYmYmVllTiJmZlaZk4iZmVXmJGJmZpU5iZiZWWVOImZmVpmTiJmZVeYkYmZmlTmJmJlZZXW+Y32lpH2Sni7FJkjaJGlXfo7PuCQtk9Qt6UlJM0rrdGb9XZI6S/GLJT2V6yzL96ybmVkT1Xklcjcwt19sCbA5IqYBm3Me4EpgWk6LgLugSDrAUuBS4BJgaV/iyTo3ltbr/11mZlaz2pJIRPwVsL9feB6wKsurgGtL8dVR2
AKMk3QOMAfYFBH7I+IAsAmYm8vOiIgt+W721aVtmZlZkzT7nsjEiNib5ZeAiVmeBOwu1evJ2LHiPQPEByRpkaQuSV29vb1D2wMzM3vbsN1YzyuIaNJ3LY+IjojoaGtra8ZXmpmNCM1OIi9nVxT5uS/je4DJpXrtGTtWvH2AuJmZNdHoJn/feqATuD0/7yvFb5a0huIm+sGI2CtpI/DfSjfTZwO3RMR+SYckzQS2AguAP2vmjowUU5bcP9xNaLrnb796uJtg1jJqSyKSvgX8KnC2pB6KUVa3A2slLQReAK7P6huAq4Bu4HXgBoBMFrcC27Le5yKi72b9TRQjwMYCD+RkZmZNVFsSiYhPDrJo1gB1A1g8yHZWAisHiHcBFwyljWZmNjT+xbqZmVXmJGJmZpU5iZiZWWVOImZmVpmTiJmZVeYkYmZmlTmJmJlZZU4iZmZWmZOImZlV5iRiZmaVNfsBjGYnvZH20Ek/cNKGwlciZmZWmZOImZlV5iRiZmaVOYmYmVllTiJmZlZZyycRSXMlPSOpW9KS4W6PmdlI0tJDfCWNAr4MXAH0ANskrY+IHcPbMrPWMdKGNIOHNZ9IrX4lcgnQHRHPRsSbwBpg3jC3ycxsxGjpKxFgErC7NN8DXNq/kqRFwKKc/amkZyp+39nA31Zc973Gx6Lg43BEyxwL3VH7V7TMsWjQzw+2oNWTSEMiYjmwfKjbkdQVER0noEktz8ei4ONwhI/FESPpWLR6d9YeYHJpvj1jZmbWBK2eRLYB0yRNlTQGmA+sH+Y2mZmNGC3dnRURhyXdDGwERgErI2J7jV855C6x9xAfi4KPwxE+FkeMmGOhiBjuNpiZWYtq9e4sMzMbRk4iZmZWmZNIA0bCo1UkTZb0sKQdkrZL+nTGJ0jaJGlXfo7PuCQty2PypKQZpW11Zv1dkjqHa5+GQtIoSY9L+kHOT5W0Nff33hzIgaRTcr47l08pbeOWjD8jac4w7cqQSBonaZ2kH0raKeljI/ic+L3823ha0rcknTpSz4t3iAhPx5gobtj/CDgPGAP8NTB9uNtVw36eA8zI8geAvwGmA/8dWJLxJcAdWb4KeAAQMBPYmvEJwLP5OT7L44d7/yocj88C3wR+kPNrgflZ/irwu1m+CfhqlucD92Z5ep4rpwBT8xwaNdz7VeE4rAL+Q5bHAONG4jlB8cPm54CxpfPh343U86I8+Urk+EbEo1UiYm9EPJblnwA7Kf5w5lH8j4T8vDbL84DVUdgCjJN0DjAH2BQR+yPiALAJmNu8PRk6Se3A1cDXc17A5cC6rNL/OPQdn3XArKw/D1gTEW9ExHNAN8W51DIknQl8HFgBEBFvRsSrjMBzIo0GxkoaDZwG7GUEnhf9OYkc30CPVpk0TG1pirz0/iiwFZgYEXtz0UvAxCwPdlzeC8fri8DvA/+Q82cBr0bE4Zwv79Pb+5vLD2b998JxmAr0An+eXXtfl3Q6I/CciIg9wJ8CL1Ikj4PAo4zM8+IdnETsHST9HPAd4DMRcai8LIrr8ff0mHBJvwHsi4hHh7stJ4HRwAzgroj4KPAaRffV20bCOQGQ933mUSTWDwKn05pXUyeck8jxjZhHq0h6P0UC+UZEfDfDL2eXBPm5L+ODHZdWP16XAddIep6i6/Jy4EsUXTN9P84t79Pb+5vLzwReofWPAxT/Su6JiK05v44iqYy0cwLg14HnIqI3In4GfJfiXBmJ58U7OIkc34h4tEr2164AdkbEF0qL1gN9o2k6gftK8QU5ImcmcDC7ODYCsyWNz3+9zc5YS4iIWyKiPSKmUPy3figiPgU8DFyX1fofh77jc13Wj4zPz1E6U4FpwCNN2o0TIiJeAnZLOj9Ds4AdjLBzIr0IzJR0Wv6t9B2LEXdeHGW47+y3wkQx6uRvKEZS/NFwt6emffxlim6JJ4EncrqKoh93M7ALeBCYkPVF8UKwHwFPAR2lbf17ihuG3cANw71vQzgmv8qR0VnnUfyxdwPfBk7J+Kk5353Lzyut/0d5fJ4Brhzu/al4DC4CuvK8+D7F6KoReU4AfwL8EHgauIdihNWIPC/Kkx97YmZmlbk7y8zMKnMSMTOzypxEzMysMicRMzOrzEnEzMwqcxIxM7PKnETMzKyy/w8EA/hZrUEU8QAAAABJRU5ErkJggg==\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "import matplotlib.pyplot as plt\n", "%matplotlib inline\n", "\n", "for c in numerical_features:\n", " print(c)\n", " df[c].plot.hist(bins=5)\n", " plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If for some histograms the values are heavily placed in the first bin, it is good to check for outliers, either checking the min-max values of those particular features and/or explore value ranges." ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Age upon Intake Days\n", "min: 0 max: 9125\n", "Age upon Outcome Days\n", "min: 0 max: 9125\n" ] } ], "source": [ "for c in numerical_features:\n", " print(c)\n", " print('min:', df[c].min(), 'max:', df[c].max())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With __value_counts()__ function, we can increase the number of histogram bins to 10 for more bins for a more refined view of the numerical features." ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Age upon Intake Days\n", "(-9.126, 912.5] 74835\n", "(912.5, 1825.0] 10647\n", "(1825.0, 2737.5] 3471\n", "(2737.5, 3650.0] 3998\n", "(3650.0, 4562.5] 1234\n", "(4562.5, 5475.0] 1031\n", "(5475.0, 6387.5] 183\n", "(6387.5, 7300.0] 79\n", "(7300.0, 8212.5] 5\n", "(8212.5, 9125.0] 2\n", "Name: Age upon Intake Days, dtype: int64\n", "Age upon Outcome Days\n", "(-9.126, 912.5] 74642\n", "(912.5, 1825.0] 10699\n", "(1825.0, 2737.5] 3465\n", "(2737.5, 3650.0] 4080\n", "(3650.0, 4562.5] 1263\n", "(4562.5, 5475.0] 1061\n", "(5475.0, 6387.5] 187\n", "(6387.5, 7300.0] 81\n", "(7300.0, 8212.5] 5\n", "(8212.5, 9125.0] 2\n", "Name: Age upon Outcome Days, dtype: int64\n" ] } ], "source": [ "for c in numerical_features: \n", " print(c)\n", " print(df[c].value_counts(bins=10, sort=False))\n", " plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If any outliers are identified as very likely wrong values, dropping them could improve the numerical values histograms, and later overall model performance. While a good rule of thumb is that anything not in the range of (Q1 - 1.5 IQR) and (Q3 + 1.5 IQR) is an outlier, other rules for removing 'outliers' should be considered as well. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's check missing values for these numerical features." ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Age upon Intake Days 0\n", "Age upon Outcome Days 0\n", "dtype: int64\n" ] } ], "source": [ "print(df[numerical_features].isna().sum())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If any missing values, as a quick fix, we can apply mean imputation. This will replace the missing values with the mean value of the corresponding column.\n", "\n", "__Note__: The statistically correct way to perform mean/mode imputation before training an ML model is to compute the column-wise means on the training data only, and then use these values to impute missing data in both the train and test sets. So, you'll need to split your dataset first. Same goes for any other transformations we would like to apply to these numerical features, such as scaling. 
\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Cleaning categorical features \n", "\n", "Let's also examine the categorical features." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Sex upon Outcome\n", "['Neutered Male' 'Intact Male' 'Intact Female' 'Unknown' 'Spayed Female'\n", " nan]\n", "Intake Type\n", "['Owner Surrender' 'Stray' 'Wildlife' 'Public Assist' 'Euthanasia Request'\n", " 'Abandoned']\n", "Intake Condition\n", "['Normal' 'Nursing' 'Sick' 'Injured' 'Aged' 'Feral' 'Pregnant' 'Other'\n", " 'Behavior' 'Medical']\n", "Pet Type\n", "['Cat' 'Dog' 'Other' 'Bird' 'Livestock']\n", "Sex upon Intake\n", "['Neutered Male' 'Intact Male' 'Intact Female' 'Unknown' 'Spayed Female'\n", " nan]\n" ] } ], "source": [ "for c in categorical_features:\n", " print(c)\n", " print(df[c].unique()) #value_counts())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Note on boolean type features__: Some categories might be of boolean type, like __False__ and __True__. The booleans will raise errors when trying to encode the categoricals with sklearn encoders, none of which accept boolean types. If using pandas get_dummies to one-hot encode the categoricals, there's no need to convert the booleans. However, get_dummies is trickier to use with sklearn's Pipeline and GridSearch. \n", "\n", "One way to deal with the booleans is to convert them to strings, by using a mask and a map changing only the booleans. Another way to handle the booleans is to convert them to strings by changing the type of all categoricals to 'str'. This will also affect the nans, basically performing imputation of the nans with a 'nans' placeholder value! \n", "\n", "Applying the type conversion to both categoricals and text features, takes care of the nans in the text fields as well. In case other imputations are planned for the categoricals and/or test fields, notice that the masking shown above leaves the nans unchanged." ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "df[categorical_features + text_features] = df[categorical_features + text_features].astype('str')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's have a check on missing values for the categorical features (and text features here)." ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Sex upon Outcome 0\n", "Intake Type 0\n", "Intake Condition 0\n", "Pet Type 0\n", "Sex upon Intake 0\n", "Name 0\n", "Found Location 0\n", "Breed 0\n", "Color 0\n", "dtype: int64\n" ] } ], "source": [ "print(df[categorical_features + text_features].isna().sum())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Converting categoricals into useful numerical features will also have to wait until after the train/test split." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Cleaning text features \n", "\n", "Also a good idea to look at the text fields. Text cleaning can be performed here, before train/test split, with less code. " ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Name\n", "['Chunk' 'Gizmo' 'nan' ... '*Lingonberry' 'Guawp' '*Squanchy']\n", "Found Location\n", "['Austin (TX)' '7201 Levander Loop in Austin (TX)'\n", " '12034 Research in Austin (TX)' ... 
'4612 Sherwyn Drive in Austin (TX)'\n", " '16010 Voelker Ln in Austin (TX)' '2211 Santa Rita Street in Austin (TX)']\n", "Breed\n", "['Domestic Shorthair Mix' 'Chihuahua Shorthair Mix' 'Domestic Shorthair'\n", " ... 'Unknown' 'Bichon Frise/Lhasa Apso' 'Treeing Cur']\n", "Color\n", "['Brown Tabby/White' 'White/Brown' 'Orange Tabby' 'Black'\n", " 'White/Orange Tabby' 'Blue/White' 'Brown Tabby' 'Gray' 'Calico'\n", " 'Brown/Black' 'White/Tan' 'White' 'Brown' 'Black/White' 'Brown/White'\n", " 'Black/Brown' 'Chocolate/White' 'Red' 'White/White' 'Brown Brindle/White'\n", " 'Gray/Black' 'Tortie' 'Tan' 'White/Blue Tabby' 'Brown/Brown' 'Black/Gray'\n", " 'Blue' 'Cream Tabby' 'Brown/Gray' 'Blue Tabby/White' 'Red/White'\n", " 'Orange Tabby/White' 'Brown Merle/White' 'Tricolor' 'Apricot' 'Black/Tan'\n", " 'Tortie Point' 'Tan/Black' 'Torbie/Brown Tabby' 'White/Black'\n", " 'Blue Tabby' 'Blue Tick' 'White/Gray' 'Black/Tricolor' 'Chocolate/Tan'\n", " 'White/Brown Tabby' 'White/Brown Brindle' 'Lynx Point' 'Buff' 'Torbie'\n", " 'White/Buff' 'Brown Brindle' 'Cream' 'White/Blue' 'Blue/Tan'\n", " 'Black Brindle/White' 'Black/Yellow Brindle' 'Chocolate/Black'\n", " 'Black/Red' 'Fawn/White' 'Chocolate' 'Blue/Brown Brindle' 'Tan/White'\n", " 'Cream Tabby/White' 'Tan/Gray' 'Sable' 'Red/Buff' 'Blue Merle'\n", " 'Lynx Point/White' 'Yellow' 'Black/Brown Brindle' 'Brown/Tan'\n", " 'Silver Tabby' 'White/Red' 'Brown/Orange' 'Tricolor/White' 'Sable/Black'\n", " 'Gray/White' 'Orange/White' 'Brown Tiger/Brown' 'Brown Tabby/Black'\n", " 'Torbie/White' 'Yellow Brindle' 'Cream/White' 'Brown Brindle/Black'\n", " 'Black/Chocolate' 'Gold/White' 'White/Orange' 'Black Tabby'\n", " 'Tricolor/Brown' 'Seal Point/Gray' 'White/Tricolor' 'Silver/Tan'\n", " 'Gray Tabby/White' 'Black Brindle' 'Brown/Tricolor' 'Black Tabby/White'\n", " 'Yellow/White' 'Cream/Black' 'Gray/Tortie' 'Flame Point' 'Sable/White'\n", " 'Seal Point' 'Chocolate Point' 'Red/Tan' 'Gray/Tan' 'Calico Point/Gray'\n", " 'Black/Black' 'Red/Cream' 'White/Red Merle' 'Tortie/White' 'Red/Black'\n", " 'Green/Silver' 'Brown/Red' 'Gray Tabby' 'Green/Gray' 'Gray/Brown'\n", " 'Gray/Blue Merle' 'White/Blue Merle' 'Blue Cream' 'Silver Tabby/White'\n", " 'Black/Cream' 'Lilac Point' 'Gray Tabby/Black' 'Brown Merle' 'Gold/Cream'\n", " 'Gold' 'Blue Merle/Tricolor' 'Buff/White' 'White/Cream' 'Red Merle/White'\n", " 'Fawn/Black' 'Yellow/Yellow' 'Sable/Brown' 'Black/Black Tabby'\n", " 'White/Gray Tabby' 'Calico/White' 'Tricolor/Black' 'Fawn' 'Blue/Blue'\n", " 'White/Chocolate' 'Tan/Fawn' 'Blue Merle/White' 'Lynx Point/Blue'\n", " 'Blue Merle/Brown' 'Blue Merle/Tan' 'White/Seal Point' 'Liver/Tan'\n", " 'Blue Point/White' 'Liver/White' 'Chocolate Point/White' 'Gray/Pink'\n", " 'Black Brindle/Blue' 'Black Smoke' 'Brown Brindle/Red Tick' 'Cream/Brown'\n", " 'Black/Blue Tick' 'Red Tick/Blue Tick' 'White/Yellow' 'Orange'\n", " 'Torbie/Brown' 'Sable/Tan' 'Yellow Brindle/White' 'Gray/Orange'\n", " 'Calico Point' 'Red Tick' 'White/Red Tick' 'Blue Tick/Tan' 'Brown/Cream'\n", " 'Cream/Tan' 'Tan/Brown' 'Buff/Brown' 'Tan/Tan' 'Chocolate/Tricolor'\n", " 'Calico Point/White' 'Brown Brindle/Brown Brindle' 'Green/Brown'\n", " 'Black/Silver' 'White/Calico' 'Brown/Chocolate' 'Cream/Silver'\n", " 'White/Brown Merle' 'Red/Brown' 'White/Fawn' 'White/Gray Tiger'\n", " 'Tan/Gold' 'Tan/Red' 'Tan/Silver' 'Lilac Point/White' 'Buff/Black'\n", " 'Cream/Brown Merle' 'White/Black Brindle' 'Silver' 'Lilac Point/Gray'\n", " 'Black Smoke/White' 'Pink' 'Blue Tick/Tricolor' 'Blue Tick/Black'\n", " 'Seal Point/White' 'Blue 
Point' 'Silver/Brown' 'Fawn/Brown'\n", " 'Black Brindle/Brown' 'Blue Merle/Black' 'Blue Cream/White'\n", " 'White/Blue Cream' 'Gray/Gray' 'Tortie/Tortie' 'Green' 'Brown/Buff'\n", " 'Chocolate/Brown Tabby' 'Tortie/Blue Cream' 'Brown/Fawn' 'White/Tortie'\n", " 'Orange Tabby/Orange Tabby' 'Tortie/Black' 'White/Cream Tabby'\n", " 'Tan/Cream' 'Red/Yellow' 'Blue/Tortie' 'Lynx Point/Brown Tabby'\n", " 'Black Tabby/Orange' 'Blue/Tricolor' 'Black/Blue' 'White/Agouti'\n", " 'Gold/Yellow' 'Chocolate/Fawn' 'Orange/Orange Tabby'\n", " 'Tricolor/Blue Merle' 'Brown/Brown Tabby' 'Black/Orange'\n", " 'Cream/Blue Point' 'Calico/Tricolor' 'Agouti' 'Calico/Black'\n", " 'Brown Brindle/Brown' 'Lilac Point/Black' 'Tan/Blue Merle'\n", " 'Blue Tabby/Black' 'Silver/Black' 'Tan/Blue' 'Black/Yellow'\n", " 'Yellow/Green' 'Flame Point/Cream' 'Agouti/White' 'Brown/Green'\n", " 'Yellow/Black' 'Torbie/Blue Tabby' 'White/Black Tabby'\n", " 'Blue Merle/Red Merle' 'Cream/Orange' 'Gray/Cream' 'Silver/Chocolate'\n", " 'Tan/Tricolor' 'Red Merle' 'Chocolate/Cream' 'Tan/Buff'\n", " 'Brown Tiger/White' 'Blue/Gray' 'Gray Tabby/Gray' 'Chocolate/Red'\n", " 'Black Brindle/Black' 'Gray/Blue' 'Tricolor/Blue' 'Red Tick/Brown'\n", " 'Yellow/Blue' 'Gray/Yellow' 'Brown/Liver' 'Red/Red' 'Silver/Orange'\n", " 'Black/Pink' 'Tricolor/Tan' 'Calico/Brown' 'Tortie Point/Lynx Point'\n", " 'Cream/Blue' 'Green/Red' 'Tortie/Orange' 'Gray/Red' 'Black/Black Smoke'\n", " 'Buff/Tan' 'Brown/Yellow' 'Fawn/Blue' 'Red/Tricolor' 'Fawn/Cream'\n", " 'Silver/Red' 'Brown Tabby/Calico' 'Black/Blue Merle' 'Yellow/Orange'\n", " 'Brown Tabby/Brown' 'Blue Tick/Red' 'Seal Point/Cream'\n", " 'Cream Tabby/Orange' 'Chocolate/Blue Tick' 'Red/Blue'\n", " 'Blue Cream/Blue Tiger' 'Blue/Black' 'Brown Tiger' 'Brown Merle/Tan'\n", " 'Fawn/Tan' 'Brown Tabby/Orange' 'Blue Smoke' 'Red Tick/White'\n", " 'White/Apricot' 'White/Liver' 'Red/Gold' 'Green/Yellow' 'Red/Red Merle'\n", " 'Gold/Tan' 'Fawn/Tricolor' 'Blue Merle/Blue Merle' 'Red/Silver'\n", " 'Calico/Orange' 'Brown Tabby/Silver' 'Blue Tick/Red Tick'\n", " 'Orange Tabby/Brown' 'Apricot/Brown' 'Red/Red Tick' 'Gray/Tricolor'\n", " 'Black/Buff' 'Black/Green' 'Black/Black Brindle' 'Orange/Black'\n", " 'Tortie Point/White' 'White/Liver Tick' 'Gold/Brown' 'Red Merle/Black'\n", " 'Apricot/White' 'Brown Tabby/Brown Tabby' 'Cream/Seal Point'\n", " 'Tan/Red Merle' 'Gray/Green' 'Chocolate/Brown' 'Buff/Cream' 'Buff/Red'\n", " 'Brown/Silver' 'Blue Tick/Brown' 'Brown Tabby/Tortie' 'Blue Tiger/White'\n", " 'Tan/Chocolate Point' 'Black/Fawn' 'Blue/Brown' 'White/Blue Tick'\n", " 'Blue Tick/White' 'Orange Tabby/Orange' 'Blue Tick/Brown Brindle'\n", " 'White/Gold' 'Black Smoke/Brown Tabby' 'Red/Gray'\n", " 'Brown Brindle/Tricolor' 'Orange/Brown' 'Gray/Gold' 'Liver Tick/White'\n", " 'Black Smoke/Black Tiger' 'Brown/Red Merle' 'Sable/Buff'\n", " 'Gray Tabby/Brown Tabby' 'Gold/Silver' 'Seal Point/Brown'\n", " 'Silver Lynx Point' 'Black/Gold' 'Liver' 'Yellow/Tan' 'Blue Tiger'\n", " 'Tan/Yellow' 'Orange/Tan' 'Lynx Point/Tortie Point' 'Brown Tabby/Gray'\n", " 'Sable/Cream' 'White/Chocolate Point' 'White/Yellow Brindle'\n", " 'Black Tiger/White' 'Calico/Gray Tabby' 'Buff/Gray' 'Tricolor/Silver'\n", " 'Cream/Red' 'Gold/Buff' 'Liver Tick' 'Brown Brindle/Tan'\n", " 'Tricolor/Blue Tick' 'White/Pink' 'Sable/Gray' 'Brown/Brown Brindle'\n", " 'Orange Tabby/Tortie Point' 'Chocolate/Chocolate' 'Yellow/Gray'\n", " 'Chocolate Point/Cream' 'Black Brindle/Brown Brindle' 'Yellow/Cream'\n", " 'Gold/Black' 'Tan/Yellow Brindle' 'Red Tick/Tricolor'\n", " 'Brown 
Merle/Brown Tabby' 'Flame Point/White' 'Calico/Calico'\n", " 'Orange Tabby/Apricot' 'Blue/Calico' 'Brown/Black Smoke' 'Green/Black'\n", " 'Calico Point/Lynx Point' 'Torbie/Gray' 'Tortie/Calico'\n", " 'Brown Tabby/Black Tabby' 'Tortie/Blue' 'White/Lynx Point'\n", " 'Red Tick/Brown Brindle' 'Gold/Gray' 'Silver/White' 'Blue/Cream'\n", " 'Blue Tabby/Cream' 'Black Smoke/Blue Tick' 'Tricolor/Cream'\n", " 'Gray Tabby/Orange' 'Brown Tabby/Blue' 'Blue Tabby/Buff' 'Tricolor/Red'\n", " 'Chocolate/Gold' 'Brown Merle/Brown' 'Cream/Gray' 'Torbie/Calico'\n", " 'Yellow/Red' 'Tricolor/Gray' 'White/Silver Tabby' 'Red Tick/Tan'\n", " 'Orange/Gray' 'Cream Tabby/Cream Tabby' 'Black Tabby/Black'\n", " 'Cream/Tricolor' 'Yellow/Orange Tabby' 'Orange Tabby/Cream'\n", " 'Green/Orange' 'Gray/Silver' 'Tricolor/Brown Brindle' 'Black/Tortie'\n", " 'White/Lilac Point' 'Black/Brown Merle' 'Blue Tabby/Tan' 'Fawn/Chocolate'\n", " 'Gold/Gold' 'White/Silver' 'Blue/Green' 'Blue Merle/Gray'\n", " 'Black Smoke/Black' 'Tan/Red Tick' 'Tan/Brown Brindle' 'Orange Tiger'\n", " 'Green/Blue' 'Gray/Gray Tabby' 'Blue Cream/Tortie' 'Blue Merle/Cream'\n", " 'Silver Lynx Point/White' 'Brown/Pink' 'Tricolor/Chocolate'\n", " 'Red Merle/Tricolor' 'Calico/Blue Cream' 'Red Tick/Red'\n", " 'Lilac Point/Cream' 'Tan/Apricot' 'Calico/Brown Tabby' 'Blue Smoke/Brown'\n", " 'Brown Tabby/Gray Tabby' 'Brown Brindle/Blue Tick' 'Brown/Red Tick'\n", " 'Blue Point/Cream' 'Agouti/Gray' 'Blue Smoke/White' 'Agouti/Brown Tabby'\n", " 'Blue/Silver' 'Yellow Brindle/Blue' 'Seal Point/Buff'\n", " 'Tortie/Black Smoke' 'Torbie/Black' 'Red Merle/Brown Merle' 'Silver/Gray'\n", " 'Green/White' 'Brown Brindle/Blue' 'Black Tiger' 'Black/Brown Tabby'\n", " 'Sable/Red' 'White/Black Smoke' 'Lynx Point/Tan' 'Black/Gray Tabby'\n", " 'Black Smoke/Brown' 'Chocolate/Brown Merle' 'Red/Green' 'Tricolor/Calico'\n", " 'Chocolate/Yellow' 'Black Brindle/Blue Tick' 'Gray/Buff'\n", " 'Brown/Blue Merle' 'Brown/Blue' 'Black Brindle/Tan' 'Brown/Black Tabby'\n", " 'Brown Merle/Black' 'Cream/Red Tick' 'Blue/Yellow' 'Chocolate/Gray'\n", " 'Brown Merle/Chocolate' 'White/Brown Tiger' 'Gray/Fawn'\n", " 'Red Merle/Red Merle' 'Tricolor/Orange' 'Yellow/Brown' 'Red Tick/Black'\n", " 'Red Tick/Brown Merle' 'Silver/Blue' 'Ruddy/Cream' 'Orange/Blue'\n", " 'Lynx Point/Gray' 'Fawn/Gray' 'Blue Merle/Brown Brindle'\n", " 'Black Smoke/Chocolate' 'Black Tabby/Gray Tabby' 'Blue Tabby/Orange'\n", " 'Brown/Brown Merle' 'Tricolor/Tricolor' 'Chocolate/Red Tick'\n", " 'Chocolate/Liver Tick' 'Tortie/Brown' 'Silver Tabby/Black'\n", " 'Tan/Cream Tabby' 'Tortie Point/Cream' 'Liver/Liver Tick' 'Cream/Cream'\n", " 'Brown Brindle/Brown Merle' 'Tan/Brown Merle' 'Blue/Orange' 'Liver/Buff'\n", " 'Brown Tabby/Orange Tabby' 'Tricolor/Brown Merle' 'Lynx Point/Cream'\n", " 'Torbie/Blue Cream' 'Blue Smoke/Gray' 'White/Black Tiger'\n", " 'Lynx Point/Gray Tabby' 'White/Calico Point' 'Brown Tabby/Black Brindle'\n", " 'Tricolor/Red Tick' 'Blue/Yellow Brindle' 'Silver/Cream'\n", " 'Brown/Black Brindle' 'Brown Brindle/Blue Cream'\n", " 'Cream Tabby/Orange Tabby' 'Brown/Apricot' 'Tortie Point/Blue'\n", " 'Blue Cream/Buff' 'Tortie/Blue Tabby' 'Sable/Red Merle'\n", " 'Black/Seal Point' 'Agouti/Cream' 'Blue Tabby/Blue Cream' 'White/Green'\n", " 'Blue Cream/Blue Tabby' 'Brown Brindle/Gray' 'Torbie/Silver Tabby'\n", " 'Red Merle/Tan' 'Buff/Yellow' 'Brown Tabby/Lynx Point' 'Black Tabby/Gray'\n", " 'Black/Silver Tabby' 'Chocolate/Brown Brindle' 'Red/Brown Brindle'\n", " 'Cream Tiger' 'Orange Tabby/Black' 'Brown Brindle/Liver Tick'\n", " 'Blue 
Tabby/Tortie' 'White/Flame Point' 'Tortie Point/Seal Point']\n" ] } ], "source": [ "for c in text_features:\n", "    print(c)\n", "    print(df[c].unique())  # or: print(df[c].value_counts())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Below, we re-use the helper functions from the 'Text processing' notebook.\n", "\n", "__Warning__: the cleaning stage can take a few minutes, depending on how much text there is to process." ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Text cleaning: Name\n", "Text cleaning: Found Location\n", "Text cleaning: Breed\n", "Text cleaning: Color\n" ] } ], "source": [ "# Prepare cleaning functions\n", "import re, string\n", "\n", "from nltk.stem import SnowballStemmer\n", "\n", "stop_words = [\"a\", \"an\", \"the\", \"this\", \"that\", \"is\", \"it\", \"to\", \"and\"]\n", "\n", "stemmer = SnowballStemmer('english')\n", "\n", "def preProcessText(text):\n", "    # lowercase and strip leading/trailing white space\n", "    text = text.lower().strip()\n", "    \n", "    # remove HTML tags\n", "    text = re.compile('<.*?>').sub('', text)\n", "    \n", "    # remove punctuation\n", "    text = re.compile('[%s]' % re.escape(string.punctuation)).sub(' ', text)\n", "    \n", "    # collapse extra white space\n", "    text = re.sub(r'\s+', ' ', text)\n", "    \n", "    return text\n", "\n", "def lexiconProcess(text, stop_words, stemmer):\n", "    # drop stop words and stem the remaining words\n", "    filtered_sentence = []\n", "    words = text.split(\" \")\n", "    for w in words:\n", "        if w not in stop_words:\n", "            filtered_sentence.append(stemmer.stem(w))\n", "    text = \" \".join(filtered_sentence)\n", "    \n", "    return text\n", "\n", "def cleanSentence(text, stop_words, stemmer):\n", "    return lexiconProcess(preProcessText(text), stop_words, stemmer)\n", "\n", "# Clean the text features\n", "for c in text_features:\n", "    print('Text cleaning: ', c)\n", "    df[c] = [cleanSentence(item, stop_words, stemmer) for item in df[c].values]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The cleaned text features are ready to be vectorized after the train/test split." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Note__: more exploratory data analysis might reveal other important hidden attributes and/or relationships of the model features considered. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. Training and test datasets\n", "(Go to top)\n", "\n", "We split our dataset into training (90%) and test (10%) subsets using sklearn's [__train_test_split()__](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function." ] }
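, { "cell_type": "markdown", "metadata": {}, "source": [ "One optional refinement: with an imbalanced target, train_test_split's stratify parameter keeps the class proportions identical in both subsets. The cell below is a minimal sketch of that variant for reference (strat_train and strat_test are illustrative names); the actual split used in the rest of this notebook follows in the next cell, and we instead rebalance the training set by upsampling.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch only: a stratified alternative to the plain shuffled split used below.\n", "# stratify=df[model_target] preserves the 0/1 proportions in both subsets.\n", "from sklearn.model_selection import train_test_split\n", "\n", "strat_train, strat_test = train_test_split(\n", "    df, test_size=0.1, shuffle=True, random_state=23, stratify=df[model_target]\n", ")\n", "\n", "print('Train class ratio:', strat_train[model_target].mean())\n", "print('Test class ratio: ', strat_test[model_target].mean())" ] }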
] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "from sklearn.model_selection import train_test_split\n", "\n", "train_data, test_data = train_test_split(df, test_size=0.1, shuffle=True, random_state=23)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Target balancing" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Training set shape: (85936, 13)\n", "Class 0 samples in the training set: 37499\n", "Class 1 samples in the training set: 48437\n", "Class 0 samples in the test set: 4132\n", "Class 1 samples in the test set: 5417\n" ] } ], "source": [ "print('Training set shape:', train_data.shape)\n", "\n", "print('Class 0 samples in the training set:', sum(train_data[model_target] == 0))\n", "print('Class 1 samples in the training set:', sum(train_data[model_target] == 1))\n", "\n", "print('Class 0 samples in the test set:', sum(test_data[model_target] == 0))\n", "print('Class 1 samples in the test set:', sum(test_data[model_target] == 1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "__Important note:__ We want to fix the imbalance only in training set. We shouldn't change the validation and test sets, as these should follow the original distribution." ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [], "source": [ "from sklearn.utils import shuffle\n", "\n", "class_0_no = train_data[train_data[model_target] == 0]\n", "class_1_no = train_data[train_data[model_target] == 1]\n", "\n", "upsampled_class_0_no = class_0_no.sample(n=len(class_1_no), replace=True, random_state=42)\n", "\n", "train_data = pd.concat([class_1_no, upsampled_class_0_no])\n", "train_data = shuffle(train_data)" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Training set shape: (96874, 13)\n", "Class 1 samples in the training set: 48437\n", "Class 0 samples in the training set: 48437\n" ] } ], "source": [ "print('Training set shape:', train_data.shape)\n", "\n", "print('Class 1 samples in the training set:', sum(train_data[model_target] == 1))\n", "print('Class 0 samples in the training set:', sum(train_data[model_target] == 0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5. Data processing with Pipeline and ColumnTransformer\n", "(Go to top)\n", "\n", "Let's build a more complex pipeline today. We first build separate pipelines to handle the numerical, categorical, and text features, and then combine them into a composite pipeline along with an estimator, a [Decision Tree Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) here.\n", "\n", " * For the numerical features pipeline, the __numerical_processor__ below, we impute missing values with the mean using sklearn's SimpleImputer, followed by a MinMaxScaler (don't have to scale features when using Decision Trees, but it's a good idea to see how to use more data transforms). If different processing is desired for different numerical features, different pipelines should be built - just like shown below for the two text features.\n", " \n", " \n", " * In the categoricals pipeline, the __categorical_processor__ below, we impute with a placeholder value (no effect here as we already encoded the 'nan's), and encode with sklearn's OneHotEncoder. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## 5. Data processing with Pipeline and ColumnTransformer\n", "(Go to top)\n", "\n", "Let's build a more complex pipeline today. We first build separate pipelines to handle the numerical, categorical, and text features, and then combine them into a composite pipeline along with an estimator, here a [Decision Tree Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html).\n", "\n", " * For the numerical features pipeline, the __numerical_processor__ below, we impute missing values with the mean using sklearn's SimpleImputer, followed by a MinMaxScaler (we don't have to scale features when using Decision Trees, but it's a good opportunity to see how to use more data transforms). If different processing is desired for different numerical features, separate pipelines should be built - just as shown below for the two text features.\n", " \n", " \n", " * In the categoricals pipeline, the __categorical_processor__ below, we impute with a placeholder value (no effect here, as we already encoded the missing values as 'nan' strings), and encode with sklearn's OneHotEncoder. If computing memory is an issue, it is a good idea to check the categoricals' unique values beforehand, to get an estimate of how many dummy features one-hot encoding will create. Note the __handle_unknown__ parameter, which tells the encoder to ignore (rather than throw an error for) any value that shows up in the validation and/or test set but was not present in the initial training set.\n", " \n", " \n", " * And, finally, also with memory usage in mind, we build two more pipelines, one for each of our text features, trying different vocabulary sizes.\n", " \n", "These selective preparations of the dataset features are then put together into a collective __ColumnTransformer__, which is finally used in a __Pipeline__ along with an estimator. This ensures that the transforms are performed automatically on the raw data when fitting the model and when making predictions, such as when evaluating the model on a validation dataset via cross-validation, or when making predictions on a test dataset in the future." ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ "Pipeline(steps=[('data_preprocessing',\n", " ColumnTransformer(transformers=[('numerical_pre',\n", " Pipeline(steps=[('num_imputer',\n", " SimpleImputer()),\n", " ('num_scaler',\n", " MinMaxScaler())]),\n", " ['Age upon Intake Days',\n", " 'Age upon Outcome Days']),\n", " ('categorical_pre',\n", " Pipeline(steps=[('cat_imputer',\n", " SimpleImputer(fill_value='missing',\n", " strategy='constant')),\n", " ('cat_encoder',\n", " OneHotEncoder(handle_unknown='ignore'))]),\n", " ['Sex upon Outcome',\n", " 'Intake Type',\n", " 'Intake Condition',\n", " 'Pet Type',\n", " 'Sex upon Intake']),\n", " ('text_pre_0',\n", " Pipeline(steps=[('text_vect_0',\n", " CountVectorizer(binary=True,\n", " max_features=50))]),\n", " 'Name'),\n", " ('text_pre_1',\n", " Pipeline(steps=[('text_vect_1',\n", " CountVectorizer(binary=True,\n", " max_features=150))]),\n", " 'Found Location')])),\n", " ('dt', DecisionTreeClassifier())])" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from sklearn.impute import SimpleImputer\n", "from sklearn.preprocessing import OneHotEncoder, MinMaxScaler\n", "from sklearn.feature_extraction.text import CountVectorizer\n", "from sklearn.pipeline import Pipeline\n", "from sklearn.compose import ColumnTransformer\n", "from sklearn.tree import DecisionTreeClassifier\n", "from sklearn.metrics import classification_report\n", "from sklearn.metrics import accuracy_score\n", "\n", "\n", "### COLUMN_TRANSFORMER ###\n", "##########################\n", "\n", "# Preprocess the numerical features\n", "numerical_processor = Pipeline([\n", " ('num_imputer', SimpleImputer(strategy='mean')),\n", " ('num_scaler', MinMaxScaler()) # Shown in case is needed, not a must with Decision Trees\n", " ])\n", " \n", "# Preprocess the categorical features\n", "categorical_processor = Pipeline([\n", " ('cat_imputer', SimpleImputer(strategy='constant', fill_value='missing')), # Shown in case is needed, no effect here as we already imputed with 'nan' strings\n", " ('cat_encoder', OneHotEncoder(handle_unknown='ignore')) # handle_unknown tells it to ignore (rather than throw an error for) any value that was not present in the initial training set.\n", " ])\n", "\n", "# Preprocess 1st text feature\n", "text_processor_0 = Pipeline([\n", " ('text_vect_0', CountVectorizer(binary=True, max_features=50))\n", " ])\n", "\n", "# Preprocess 2nd text feature (larger vocabulary)\n", "text_precessor_1 = Pipeline([\n", " ('text_vect_1', CountVectorizer(binary=True, max_features=150))\n", " ])\n", "\n", "# Combine all data preprocessors from above (add more, if you choose to define more!)\n", "# For each processor/step specify: a name, the actual process, and finally the features to be processed\n", "data_preprocessor = ColumnTransformer([\n", " ('numerical_pre', numerical_processor, numerical_features),\n", " ('categorical_pre', categorical_processor, categorical_features),\n", " ('text_pre_0', text_processor_0, text_features[0]),\n", " ('text_pre_1', text_precessor_1, text_features[1])\n", " ]) \n", "\n", "### PIPELINE ###\n", "################\n", "\n", "# Pipeline desired all data transformers, along with an estimator at the end\n", "# Later you can set/reach the parameters using the names issued - for hyperparameter tuning, for example\n", "pipeline = Pipeline([\n", " ('data_preprocessing', data_preprocessor),\n", " ('dt', DecisionTreeClassifier())\n", " ])\n", "\n", "# Visualize the pipeline\n", "# This will come in handy especially when building more complex 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## 6. Train and tune a classifier\n", "(Go to top)\n", "\n", "Let's first train and test the above composite pipeline on the train and the test sets." ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[45605 2832]\n", " [ 1996 46441]]\n", " precision recall f1-score support\n", "\n", " 0.0 0.96 0.94 0.95 48437\n", " 1.0 0.94 0.96 0.95 48437\n", "\n", " accuracy 0.95 96874\n", " macro avg 0.95 0.95 0.95 96874\n", "weighted avg 0.95 0.95 0.95 96874\n", "\n", "Accuracy (training): 0.9501620661890703\n", "[[3169 963]\n", " [ 864 4553]]\n", " precision recall f1-score support\n", "\n", " 0.0 0.79 0.77 0.78 4132\n", " 1.0 0.83 0.84 0.83 5417\n", "\n", " accuracy 0.81 9549\n", " macro avg 0.81 0.80 0.80 9549\n", "weighted avg 0.81 0.81 0.81 9549\n", "\n", "Accuracy (test): 0.8086710650329878\n" ] } ], "source": [ "from sklearn.metrics import confusion_matrix, classification_report, accuracy_score\n", "\n", "# Get train data to train the pipeline\n", "X_train = train_data[model_features]\n", "y_train = train_data[model_target]\n", "\n", "# Fit the Pipeline to training data\n", "pipeline.fit(X_train, y_train)\n", "\n", "# Use the fitted pipeline to make predictions on the train dataset\n", "train_predictions = pipeline.predict(X_train)\n", "print(confusion_matrix(y_train, train_predictions))\n", "print(classification_report(y_train, train_predictions))\n", "print(\"Accuracy (training):\", accuracy_score(y_train, train_predictions))\n", "\n", "# Get test data to test the pipeline\n", "X_test = test_data[model_features]\n", "y_test = test_data[model_target]\n", "\n", "# Use the fitted pipeline to make predictions on the test dataset\n", "test_predictions = pipeline.predict(X_test)\n", "print(confusion_matrix(y_test, test_predictions))\n", "print(classification_report(y_test, test_predictions))\n", "print(\"Accuracy (test):\", accuracy_score(y_test, test_predictions))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Hyperparameter Tuning\n", "\n", "We next use sklearn's [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) to look for hyperparameter combinations that improve the accuracy on the test set (and reduce the generalization gap). Because GridSearchCV performs its train/validation splits internally, keeping our data transformers inside the Pipeline enforces the correct behavior: transformations are learned on each cross-validation training fold, then applied to the corresponding validation fold, and later to the test set when running test predictions.\n", "\n", "Also, the Pipeline's step names give easy access to hyperparameters while cross-validating: parameters of the estimators in the pipeline can be addressed with the `estimator__parameter` syntax. Note the __double underscores__ connecting the estimator name and the parameter name!" ] },
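{ "cell_type": "markdown", "metadata": {}, "source": [ "For example, all tunable parameters of the tree are exposed under the 'dt' step name (a quick illustrative check):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# List the pipeline parameters the grid below can address, e.g. 'dt__max_depth'\n", "sorted(p for p in pipeline.get_params() if p.startswith('dt__'))" ] },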
{ "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Fitting 5 folds for each of 27 candidates, totalling 135 fits\n" ] }, { "data": { "text/html": [ "
" ], "text/plain": [ "GridSearchCV(cv=5,\n", " estimator=Pipeline(steps=[('data_preprocessing',\n", " ColumnTransformer(transformers=[('numerical_pre',\n", " Pipeline(steps=[('num_imputer',\n", " SimpleImputer()),\n", " ('num_scaler',\n", " MinMaxScaler())]),\n", " ['Age '\n", " 'upon '\n", " 'Intake '\n", " 'Days',\n", " 'Age '\n", " 'upon '\n", " 'Outcome '\n", " 'Days']),\n", " ('categorical_pre',\n", " Pipeline(steps=[('cat_imputer',\n", " SimpleImputer(fill_value='missing',\n", " strategy='cons...\n", " Pipeline(steps=[('text_vect_0',\n", " CountVectorizer(binary=True,\n", " max_features=50))]),\n", " 'Name'),\n", " ('text_pre_1',\n", " Pipeline(steps=[('text_vect_1',\n", " CountVectorizer(binary=True,\n", " max_features=150))]),\n", " 'Found '\n", " 'Location')])),\n", " ('dt', DecisionTreeClassifier())]),\n", " n_jobs=-1,\n", " param_grid={'dt__max_depth': [100, 200, 300],\n", " 'dt__min_samples_leaf': [5, 10, 15],\n", " 'dt__min_samples_split': [2, 5, 15]},\n", " verbose=1)" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from sklearn.model_selection import GridSearchCV, RandomizedSearchCV\n", "\n", "### PIPELINE GRID_SEARCH ###\n", "############################\n", "\n", "# Parameter grid for GridSearch\n", "param_grid={'dt__max_depth': [100, 200, 300],#, 50, 75, 100, 125, 150, 200, 250], \n", " 'dt__min_samples_leaf': [5, 10, 15],#, 25, 30],\n", " 'dt__min_samples_split': [2, 5, 15]#, 25, 30, 45, 50]\n", " }\n", "\n", "grid_search = GridSearchCV(pipeline, # Base model\n", " param_grid, # Parameters to try\n", " cv = 5, # Apply 5-fold cross validation\n", " verbose = 1, # Print summary\n", " n_jobs = -1 # Use all available processors\n", " )\n", "\n", "# Fit the GridSearch to our training data\n", "grid_search.fit(X_train, y_train)" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'dt__max_depth': 100, 'dt__min_samples_leaf': 5, 'dt__min_samples_split': 5}\n", "0.8399983605563826\n" ] } ], "source": [ "print(grid_search.best_params_)\n", "print(grid_search.best_score_)" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
{ "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ "Pipeline(steps=[('data_preprocessing',\n", " ColumnTransformer(transformers=[('numerical_pre',\n", " Pipeline(steps=[('num_imputer',\n", " SimpleImputer()),\n", " ('num_scaler',\n", " MinMaxScaler())]),\n", " ['Age upon Intake Days',\n", " 'Age upon Outcome Days']),\n", " ('categorical_pre',\n", " Pipeline(steps=[('cat_imputer',\n", " SimpleImputer(fill_value='missing',\n", " strategy='constant')),\n", " ('cat_encoder',\n", " OneHotEncoder(han...\n", " 'Intake Type',\n", " 'Intake Condition',\n", " 'Pet Type',\n", " 'Sex upon Intake']),\n", " ('text_pre_0',\n", " Pipeline(steps=[('text_vect_0',\n", " CountVectorizer(binary=True,\n", " max_features=50))]),\n", " 'Name'),\n", " ('text_pre_1',\n", " Pipeline(steps=[('text_vect_1',\n", " CountVectorizer(binary=True,\n", " max_features=150))]),\n", " 'Found Location')])),\n", " ('dt',\n", " DecisionTreeClassifier(max_depth=100, min_samples_leaf=5,\n", " min_samples_split=5))])" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Get the best model out of GridSearchCV\n", "classifier = grid_search.best_estimator_\n", "\n", "# Fit the best model to the train data once more\n", "classifier.fit(X_train, y_train)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 7. Test the classifier\n", "(Go to top)\n", "\n", "Now we test the best model with the best parameters on \"unseen\" data (our test data).\n", "\n", "Before that, let's first see how the model works on the training dataset." ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Model performance on the train set:\n", "[[41294 7143]\n", " [ 4468 43969]]\n", " precision recall f1-score support\n", "\n", " 0.0 0.90 0.85 0.88 48437\n", " 1.0 0.86 0.91 0.88 48437\n", "\n", " accuracy 0.88 96874\n", " macro avg 0.88 0.88 0.88 96874\n", "weighted avg 0.88 0.88 0.88 96874\n", "\n", "Train accuracy: 0.8801432788983629\n" ] } ], "source": [ "from sklearn.metrics import confusion_matrix, classification_report, accuracy_score\n", "\n", "# Use the fitted model to make predictions on the train dataset\n", "train_predictions = classifier.predict(X_train)\n", "\n", "print('Model performance on the train set:')\n", "print(confusion_matrix(y_train, train_predictions))\n", "print(classification_report(y_train, train_predictions))\n", "print(\"Train accuracy:\", accuracy_score(y_train, train_predictions)) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "And now, let's evaluate the performance of the classifier on the test set." 
] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Model performance on the test set:\n", "[[3214 918]\n", " [ 699 4718]]\n", " precision recall f1-score support\n", "\n", " 0.0 0.82 0.78 0.80 4132\n", " 1.0 0.84 0.87 0.85 5417\n", "\n", " accuracy 0.83 9549\n", " macro avg 0.83 0.82 0.83 9549\n", "weighted avg 0.83 0.83 0.83 9549\n", "\n", "Test accuracy: 0.8306628966383914\n" ] } ], "source": [ "from sklearn.metrics import confusion_matrix, classification_report, accuracy_score\n", "\n", "# Get test data to test the classifier\n", "X_test = test_data[model_features]\n", "y_test = test_data[model_target]\n", "\n", "# Use the fitted model to make predictions on the test dataset\n", "# Test data going through the Pipeline it's first imputed (with means from the train), scaled (with the min/max from the train data), and finally used to make predictions\n", "test_predictions = classifier.predict(X_test)\n", "\n", "print('Model performance on the test set:')\n", "print(confusion_matrix(y_test, test_predictions))\n", "print(classification_report(y_test, test_predictions))\n", "print(\"Test accuracy:\", accuracy_score(y_test, test_predictions))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 8. Improvement Ideas\n", "(Go to top)\n", "\n", "* Try target enconding for categorical variables. (check slides)\n", "* Feature selection: Not all features may be important, we can achieve higher performance by selecting a subset of our features.\n", "* Try ensemble models such as Random Forests, along with RandomizedSearchCV to speed up the hyperparameter tuning task. (check slides)\n" ] } ], "metadata": { "kernelspec": { "display_name": "conda_pytorch_p39", "language": "python", "name": "conda_pytorch_p39" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" } }, "nbformat": 4, "nbformat_minor": 4 }