{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Aprendiendo AWS: AutoML con SageMaker Autopilot\n",
"\n",
"Nota: este notebook debe ser corrido con el kernel `Python 3 (Data Science)` de Amazon SageMaker."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Contenido\n",
"\n",
"1. [Introducción](#Introduction)\n",
"2. [Pre-requisitos](#Pre-requisites)\n",
"3. [Entrenamiento del modelo con Autopilot](#Settingup)\n",
"4. [Evaluación del modelo: inferencias *online*](#OnlineInference)\n",
"5. [Evaluación del modelo: inferencias *batch*](#BatchInference)\n",
"6. [Eliminación/borrado de recursos](#Cleanup)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Introducción\n",
"\n",
"Amazon SageMaker Autopilot es una solución que permite automatizar tareas de machine learning para juegos de datos en formato tabular (este tipo de soluciones también se conocen comúnmente como AutoML). En este notebook, vamos a usar los SDKs de AWS para acompañar el tutorial de video en el sitio de Aprendiendo AWS, y de esa manera generar un caso que te permita comprender el funcionamiento de SageMaker Autopilot: descargar los datos, crear el modelo, desplegarlo, y realizar inferencias sobre el mismo.\n",
"\n",
"A lo largo de este ejemplo, vamos a usar un dataset para entrenar un modelo que nos permita predecir si un cliente determinado aceptará una propuesta del equipo de televentas para suscribir a un nuevo producto financiero (depósito o plazo fijo) que está siendo lanzado por el banco. Vamos a usar una colección de datos denominada [Bank Marketing Data Set](https://archive.ics.uci.edu/ml/datasets/bank+marketing), que nos permitirá modelar los datos que típicamente están disponibles en el momento de planificar y ejecutar este tipo de campañas. Para más información sobre el dataset, visitar este [enlace](https://archive.ics.uci.edu/ml/datasets/bank+marketing) .\n",
"\n",
"Las campañas de marketing directo a través de correo, llamadas telefóncas, etc., son una táctica común que permite capturar nuevos clientes. Debido a que los recursos que posee el equipo de televentas son limitados, el objetivo de nuestro análisis será el de identificar un subconjunto de potenciales clientes que tengan la mayor posibilidad de aceptar la oferta ofrecida por el equipo de televentas. Para esto, vamos a entrenar un modelo que permita realizar la predicción a partir de información ya disponible en la organización, como ser por ejemplo información demográfica, interacciones anteriores e información de contexto.\n",
"\n",
"Este *notebook* demuestra cómo puedes aplicar Autopilot y así producir un *pipeline* de procesamiento de machine learning para explorar una gran cantidad de potenciales modelos o \"candidatos\". Cada candidato generado por Autopilot consta de dos grandes pasos: el primer paso se encarga de realizar la ingeniería de features sobre el juego de datos, mientras que el segundo paso es donde se entrena un algoritmo específico para producir el modelo. Al desplegar el modelo podremos realizar inferencias siguiendo estos pasos de manera análoga.\n",
"\n",
"Este notebook contiene todas las instrucciones necesarias para entrenar el modelo, así como también desplegar el mismo para realizar predicciones sobre un conjunto de datos y calcular la matriz de confusión. Usaremos el SDK de Python para AWS ([boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html)), un SDK de alto nivel que nos permitirá interactuar con Autopilot y el resto de los servicios que vamos a necesitar."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Pre-requisitos\n",
"\n",
"Comencemos creando los siguientes objetos en la cuenta:\n",
"\n",
"- El repositorio de Amazon S3 junto con el prefijo a utilizar donde almacenaremos la información de entrenamiento y los artefactos del modelo. El mismo debería pertenecer a la misma región donde estamos corriendo el proceso de entrenamiento en SageMaker y Autopilot. El código de más abajo se encarga entonces de crear el bucket, o (si existe), usar el objeto por defecto.\n",
"\n",
"- El rol de ejecución a asignarle a Autopilot para que pueda correr con un nivel de privilegios suficiente que le permita acceder a nuestros datos en S3. Para más información, visitar la documentación de Amazon SageMaker referente a roles de IAM: https://docs.aws.amazon.com/sagemaker/latest/dg/security-iam.html"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import boto3\n",
"import sagemaker\n",
"from time import gmtime, strftime, sleep\n",
"from sagemaker import get_execution_role\n",
"\n",
"region = boto3.Session().region_name\n",
"\n",
"session = sagemaker.Session()\n",
"bucket = session.default_bucket()\n",
"prefix = \"sagemaker/autopilot-dm\"\n",
"\n",
"role = get_execution_role()\n",
"\n",
"sm = boto3.Session().client(service_name=\"sagemaker\", region_name=region)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"auto_ml_job_name: AutoPilotExperiment-22-01-56-30\n"
]
}
],
"source": [
"auto_ml_job_time = strftime(\"%d-%H-%M-%S\", gmtime())\n",
"auto_ml_job_name = \"AutoPilotExperiment-{}\".format(auto_ml_job_time)\n",
"print(\"auto_ml_job_name: \" + auto_ml_job_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.1 Descarga de datos\n",
"\n",
"Descargamos el [dataset de marketing directo](https://archive.ics.uci.edu/ml/datasets/bank+marketing) descargando una copia disponible desde el [repositorio S3 de origen](https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip). Para más información sobre esta colección de datos, visitar \\[Moro et al., 2014\\] S. Moro, P. Cortez and P. Rita. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Reading package lists... Done\n",
"Building dependency tree \n",
"Reading state information... Done\n",
"unzip is already the newest version (6.0-23+deb10u2).\n",
"0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.\n",
"--2022-02-22 01:56:30-- https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip\n",
"Resolving sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com (sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com)... 52.92.196.34\n",
"Connecting to sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com (sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com)|52.92.196.34|:443... connected.\n",
"HTTP request sent, awaiting response... 304 Not Modified\n",
"File ‘bank-additional.zip’ not modified on server. Omitting download.\n",
"\n",
"Archive: bank-additional.zip\n",
" inflating: bank-additional/bank-additional-names.txt \n",
" inflating: bank-additional/bank-additional.csv \n",
" inflating: bank-additional/bank-additional-full.csv \n"
]
}
],
"source": [
"!apt-get install unzip\n",
"!wget -N https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip\n",
"!unzip -o bank-additional.zip\n",
"\n",
"local_data_path = \"./bank-additional/bank-additional-full.csv\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.2 Carga de datos en Amazon S3\n",
"\n",
"Para poder entrenar el modelo con Autopilot, necesitamos colocar los datos de entrenamiento en S3.\n",
"\n",
"Primero, vamos a realizar una verificación de los datos para estar seguros de que no contenga determinados errores. En este caso, como el juego de datos es particularmente pequeño, también puede ser una opción realizar una inspección visual del mismo. Cuando los datos a procesar resultan más grandes (pudiendo incluso tener un tamaño mayor a la cantidad de memoria disponible en la instancia del bloc), se puede realizar la inspección fuera de línea usando herramientas como Apache Spark: [Deequ](https://github.com/awslabs/deequ) es un componente construido sobre Apache Spark que puede resultar útil para realizar validación de grandes colecciones de datos.\n",
"\n",
"Nota: Autopilot es capaz de manejar juegos de datos de hasta 5 GB de tamaño. Para más información, visite [Cuotas de Autopilot de Amazon SageMaker](https://docs.aws.amazon.com/es_es/sagemaker/latest/dg/autopilot-quotas.html)."
]
},
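{
"cell_type": "markdown",
"metadata": {},
"source": [
"The sanity checks mentioned above can be sketched as a small helper. This is only an illustrative sketch: the function name `validate_marketing_data` is ours, and the checks encode facts about this particular dataset (21 columns, no missing values, a binary target):\n",
"\n",
"```python\n",
"import pandas as pd\n",
"\n",
"def validate_marketing_data(df: pd.DataFrame) -> None:\n",
"    \"\"\"Basic sanity checks to run before uploading the dataset to S3.\"\"\"\n",
"    assert df.shape[1] == 21, \"expected 20 features plus the target column 'y'\"\n",
"    assert not df.isnull().any().any(), \"unexpected missing values\"\n",
"    assert set(df[\"y\"].unique()) <= {\"yes\", \"no\"}, \"target must be binary\"\n",
"```\n",
"\n",
"After loading the CSV below, calling `validate_marketing_data(data)` would raise if any of these expectations were violated."
]
},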
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Realicemos una inspección visual de los datos cargando el dataset en un dataframe de Pandas en memoria."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"
\n",
"\n",
"
\n",
" \n",
" \n",
" | \n",
" age | \n",
" job | \n",
" marital | \n",
" education | \n",
" default | \n",
" housing | \n",
" loan | \n",
" contact | \n",
" month | \n",
" day_of_week | \n",
" duration | \n",
" campaign | \n",
" pdays | \n",
" previous | \n",
" poutcome | \n",
" emp.var.rate | \n",
" cons.price.idx | \n",
" cons.conf.idx | \n",
" euribor3m | \n",
" nr.employed | \n",
" y | \n",
"
\n",
" \n",
" \n",
" \n",
" 0 | \n",
" 56 | \n",
" housemaid | \n",
" married | \n",
" basic.4y | \n",
" no | \n",
" no | \n",
" no | \n",
" telephone | \n",
" may | \n",
" mon | \n",
" 261 | \n",
" 1 | \n",
" 999 | \n",
" 0 | \n",
" nonexistent | \n",
" 1.1 | \n",
" 93.994 | \n",
" -36.4 | \n",
" 4.857 | \n",
" 5191.0 | \n",
" no | \n",
"
\n",
" \n",
" 1 | \n",
" 57 | \n",
" services | \n",
" married | \n",
" high.school | \n",
" unknown | \n",
" no | \n",
" no | \n",
" telephone | \n",
" may | \n",
" mon | \n",
" 149 | \n",
" 1 | \n",
" 999 | \n",
" 0 | \n",
" nonexistent | \n",
" 1.1 | \n",
" 93.994 | \n",
" -36.4 | \n",
" 4.857 | \n",
" 5191.0 | \n",
" no | \n",
"
\n",
" \n",
" 2 | \n",
" 37 | \n",
" services | \n",
" married | \n",
" high.school | \n",
" no | \n",
" yes | \n",
" no | \n",
" telephone | \n",
" may | \n",
" mon | \n",
" 226 | \n",
" 1 | \n",
" 999 | \n",
" 0 | \n",
" nonexistent | \n",
" 1.1 | \n",
" 93.994 | \n",
" -36.4 | \n",
" 4.857 | \n",
" 5191.0 | \n",
" no | \n",
"
\n",
" \n",
" 3 | \n",
" 40 | \n",
" admin. | \n",
" married | \n",
" basic.6y | \n",
" no | \n",
" no | \n",
" no | \n",
" telephone | \n",
" may | \n",
" mon | \n",
" 151 | \n",
" 1 | \n",
" 999 | \n",
" 0 | \n",
" nonexistent | \n",
" 1.1 | \n",
" 93.994 | \n",
" -36.4 | \n",
" 4.857 | \n",
" 5191.0 | \n",
" no | \n",
"
\n",
" \n",
" 4 | \n",
" 56 | \n",
" services | \n",
" married | \n",
" high.school | \n",
" no | \n",
" no | \n",
" yes | \n",
" telephone | \n",
" may | \n",
" mon | \n",
" 307 | \n",
" 1 | \n",
" 999 | \n",
" 0 | \n",
" nonexistent | \n",
" 1.1 | \n",
" 93.994 | \n",
" -36.4 | \n",
" 4.857 | \n",
" 5191.0 | \n",
" no | \n",
"
\n",
" \n",
" ... | \n",
" ... | \n",
" ... | \n",
" ... | \n",
" ... | \n",
" ... | \n",
" ... | \n",
" ... | \n",
" ... | \n",
" ... | \n",
" ... | \n",
" ... | \n",
" ... | \n",
" ... | \n",
" ... | \n",
" ... | \n",
" ... | \n",
" ... | \n",
" ... | \n",
" ... | \n",
" ... | \n",
" ... | \n",
"
\n",
" \n",
" 41183 | \n",
" 73 | \n",
" retired | \n",
" married | \n",
" professional.course | \n",
" no | \n",
" yes | \n",
" no | \n",
" cellular | \n",
" nov | \n",
" fri | \n",
" 334 | \n",
" 1 | \n",
" 999 | \n",
" 0 | \n",
" nonexistent | \n",
" -1.1 | \n",
" 94.767 | \n",
" -50.8 | \n",
" 1.028 | \n",
" 4963.6 | \n",
" yes | \n",
"
\n",
" \n",
" 41184 | \n",
" 46 | \n",
" blue-collar | \n",
" married | \n",
" professional.course | \n",
" no | \n",
" no | \n",
" no | \n",
" cellular | \n",
" nov | \n",
" fri | \n",
" 383 | \n",
" 1 | \n",
" 999 | \n",
" 0 | \n",
" nonexistent | \n",
" -1.1 | \n",
" 94.767 | \n",
" -50.8 | \n",
" 1.028 | \n",
" 4963.6 | \n",
" no | \n",
"
\n",
" \n",
" 41185 | \n",
" 56 | \n",
" retired | \n",
" married | \n",
" university.degree | \n",
" no | \n",
" yes | \n",
" no | \n",
" cellular | \n",
" nov | \n",
" fri | \n",
" 189 | \n",
" 2 | \n",
" 999 | \n",
" 0 | \n",
" nonexistent | \n",
" -1.1 | \n",
" 94.767 | \n",
" -50.8 | \n",
" 1.028 | \n",
" 4963.6 | \n",
" no | \n",
"
\n",
" \n",
" 41186 | \n",
" 44 | \n",
" technician | \n",
" married | \n",
" professional.course | \n",
" no | \n",
" no | \n",
" no | \n",
" cellular | \n",
" nov | \n",
" fri | \n",
" 442 | \n",
" 1 | \n",
" 999 | \n",
" 0 | \n",
" nonexistent | \n",
" -1.1 | \n",
" 94.767 | \n",
" -50.8 | \n",
" 1.028 | \n",
" 4963.6 | \n",
" yes | \n",
"
\n",
" \n",
" 41187 | \n",
" 74 | \n",
" retired | \n",
" married | \n",
" professional.course | \n",
" no | \n",
" yes | \n",
" no | \n",
" cellular | \n",
" nov | \n",
" fri | \n",
" 239 | \n",
" 3 | \n",
" 999 | \n",
" 1 | \n",
" failure | \n",
" -1.1 | \n",
" 94.767 | \n",
" -50.8 | \n",
" 1.028 | \n",
" 4963.6 | \n",
" no | \n",
"
\n",
" \n",
"
\n",
"
41188 rows × 21 columns
\n",
"
"
],
"text/plain": [
" age job marital education default housing loan \\\n",
"0 56 housemaid married basic.4y no no no \n",
"1 57 services married high.school unknown no no \n",
"2 37 services married high.school no yes no \n",
"3 40 admin. married basic.6y no no no \n",
"4 56 services married high.school no no yes \n",
"... ... ... ... ... ... ... ... \n",
"41183 73 retired married professional.course no yes no \n",
"41184 46 blue-collar married professional.course no no no \n",
"41185 56 retired married university.degree no yes no \n",
"41186 44 technician married professional.course no no no \n",
"41187 74 retired married professional.course no yes no \n",
"\n",
" contact month day_of_week duration campaign pdays previous \\\n",
"0 telephone may mon 261 1 999 0 \n",
"1 telephone may mon 149 1 999 0 \n",
"2 telephone may mon 226 1 999 0 \n",
"3 telephone may mon 151 1 999 0 \n",
"4 telephone may mon 307 1 999 0 \n",
"... ... ... ... ... ... ... ... \n",
"41183 cellular nov fri 334 1 999 0 \n",
"41184 cellular nov fri 383 1 999 0 \n",
"41185 cellular nov fri 189 2 999 0 \n",
"41186 cellular nov fri 442 1 999 0 \n",
"41187 cellular nov fri 239 3 999 1 \n",
"\n",
" poutcome emp.var.rate cons.price.idx cons.conf.idx euribor3m \\\n",
"0 nonexistent 1.1 93.994 -36.4 4.857 \n",
"1 nonexistent 1.1 93.994 -36.4 4.857 \n",
"2 nonexistent 1.1 93.994 -36.4 4.857 \n",
"3 nonexistent 1.1 93.994 -36.4 4.857 \n",
"4 nonexistent 1.1 93.994 -36.4 4.857 \n",
"... ... ... ... ... ... \n",
"41183 nonexistent -1.1 94.767 -50.8 1.028 \n",
"41184 nonexistent -1.1 94.767 -50.8 1.028 \n",
"41185 nonexistent -1.1 94.767 -50.8 1.028 \n",
"41186 nonexistent -1.1 94.767 -50.8 1.028 \n",
"41187 failure -1.1 94.767 -50.8 1.028 \n",
"\n",
" nr.employed y \n",
"0 5191.0 no \n",
"1 5191.0 no \n",
"2 5191.0 no \n",
"3 5191.0 no \n",
"4 5191.0 no \n",
"... ... ... \n",
"41183 4963.6 yes \n",
"41184 4963.6 no \n",
"41185 4963.6 no \n",
"41186 4963.6 yes \n",
"41187 4963.6 no \n",
"\n",
"[41188 rows x 21 columns]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import pandas as pd\n",
"\n",
"data = pd.read_csv(local_data_path)\n",
"pd.set_option(\"display.max_columns\", 500) # Make sure we can see all of the columns\n",
"pd.set_option(\"display.max_rows\", 10) # Keep the output on one page\n",
"data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Observar que tenemos 20 características o *features* para explotar en la predicción de la variable \"y\".\n",
"\n",
"Amazon SageMaker Autopilot se encargará de realizar el pre-procesamiento por nosotros: no es necesario implementar técnicas convencionales de pre-procesamiento como por ejemplo manejo de valores faltantes, conversión de variables categóricas en numéricas, re-escalamiento o manejo de tipos de datos más complejos.\n",
"\n",
"Por el mismo motivo, tampoco necesitamos particionar el conjunto de datos para entrenamiento y validación del modelo. Sin embargo, vamos a dejar una porción de los datos fuera del alcance de Autopilot para realizar una prueba del modelo más abajo."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.2.1 Reserva de algunos datos para realizar la validación del modelo\n",
"\n",
"Vamos a dividir los datos en 2 conjuntos mutuamente excluyentes y complementarios: entrenamiento y validación. El subconjunto de entrenamiento representa el 80% del juego de datos, y es usado para entrenar el modelo con Autopilot. En cambio, el segmento de validación (20%), será reservado para realizar inferencias de prueba sobre el modelo generado por Autopilot, más abajo."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"train_data = data.sample(frac=0.8, random_state=200)\n",
"test_data = data.drop(train_data.index)\n",
"test_data_no_target = test_data.drop(columns=[\"y\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.2.2 Carga de datos en Amazon S3\n",
"\n",
"Aquí copiamos los datos CSV en S3, para que puedan ser usados por los procesos de entrenamiento de modelos de SageMaker."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Train data uploaded to: s3://sagemaker-us-east-1-082349764434/sagemaker/autopilot-dm/train/train_data.csv\n",
"Test data uploaded to: s3://sagemaker-us-east-1-082349764434/sagemaker/autopilot-dm/test/test_data.csv\n"
]
}
],
"source": [
"train_file = \"train_data.csv\"\n",
"train_data.to_csv(train_file, index=False, header=True)\n",
"train_data_s3_path = session.upload_data(path=train_file, key_prefix=prefix + \"/train\")\n",
"print(\"Train data uploaded to: \" + train_data_s3_path)\n",
"\n",
"test_file = \"test_data.csv\"\n",
"test_data_no_target.to_csv(test_file, index=False, header=False)\n",
"test_data_s3_path = session.upload_data(path=test_file, key_prefix=prefix + \"/test\")\n",
"print(\"Test data uploaded to: \" + test_data_s3_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Entrenamiento del modelo con Autopilot"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.1 Configuración\n",
"\n",
"Ahora que ya subimos los datos en Amazon S3, podemos invocar a Autopilot para entrenar el mejor modelo usando el subconjunto de datos de entrenamiento. Como vimos en el tutorial de video, las entradas requeridas para Autopilot son:\n",
"\n",
"* La ubicación de los datos en S3 (datos de entrenamiento y artefactos de salida)\n",
"* El nombre de la columna a predecir (en nuestro caso es la columna \"y\")\n",
"* El rol de IAM, que nos permite elevar los privilegios de Autopilot para que pueda acceder a los datos alojados en S3.\n",
"\n",
"En este caso los datos de entrenamiento se encuentran en formato CSV, pero Autopilot también permite entradas en formatos columnares como por ejemplo Parquet. Para más información, visite en enlace [conjuntos de datos de Autopilot de Amazon SageMaker](https://docs.aws.amazon.com/es_es/sagemaker/latest/dg/autopilot-datasets-problem-types.html#autopilot-datasets) en la [documentación de Amazon SageMaker](https://docs.aws.amazon.com/es_es/sagemaker/index.html).\n",
"\n",
"*Nota: al entrenar con archivos CSV, Autopilot necesita o bien que todos los archivos posean un encabezado en la primera línea; o que el primer archivo de la colección en orden lexicográfico posea el encabezado.*"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"auto_ml_job_config = {\"CompletionCriteria\": {\"MaxCandidates\": 5}}\n",
"auto_ml_endpoint_name = \"\"\n",
"\n",
"input_data_config = [\n",
" {\n",
" \"DataSource\": {\n",
" \"S3DataSource\": {\n",
" \"S3DataType\": \"S3Prefix\",\n",
" \"S3Uri\": \"s3://{}/{}/train\".format(bucket, prefix),\n",
" }\n",
" },\n",
" \"TargetAttributeName\": \"y\",\n",
" }\n",
"]\n",
"\n",
"output_data_config = {\"S3OutputPath\": \"s3://{}/{}/output\".format(bucket, prefix)}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Aquí también puedes también especificar el tipo de problema a resolver: (`Regression, MulticlassClassification, BinaryClassification`). En aquellos casos en el cual esto no esté especificado (como aquí ocurre), Autopilot va a inferir el tipo de problema a partir de las estadísticas generadas sobre la columna *target* del modelo.\n",
"\n",
"También tienes la opción de limintar el tiempo de corrida del ciclo de entrenamiento de Autopilot: esto puede hacerse especificando la cantidad máxima de modelos/candidatos, o bien acotando la cantidad total de tiempo del trabajo de entrenamiento. Asimismo, ten en cuenta que la cantidad de tiempo del proceso de entrenamiento puede variar entre una corrida y otra debido a la naturaleza misma del proceso. Para más información, visite la documentación del [SDK de SageMaker para Python](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.create_auto_ml_job)."
]
},
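{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sketch of those options (not used by this notebook's job): `MaxAutoMLJobRuntimeInSeconds`, `ProblemType`, and `AutoMLJobObjective` are documented parameters of `create_auto_ml_job`, while the concrete values below are illustrative assumptions:\n",
"\n",
"```python\n",
"# Illustrative job config that bounds both candidate count and total runtime:\n",
"auto_ml_job_config_limited = {\n",
"    \"CompletionCriteria\": {\n",
"        \"MaxCandidates\": 5,                    # explore at most 5 candidates\n",
"        \"MaxAutoMLJobRuntimeInSeconds\": 3600,  # cap the whole job at one hour\n",
"    }\n",
"}\n",
"\n",
"# Pinning the problem type explicitly, optionally with an objective metric:\n",
"# sm.create_auto_ml_job(..., ProblemType=\"BinaryClassification\",\n",
"#                       AutoMLJobObjective={\"MetricName\": \"F1\"})\n",
"```"
]
},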
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.2 Lanzamiento del proceso de entrenamiento\n",
"\n",
"Para lanzar el proceso, invocamos la operación de la API `create_auto_ml_job`. "
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'AutoMLJobArn': 'arn:aws:sagemaker:us-east-1:082349764434:automl-job/autopilotexperiment-22-01-56-30',\n",
" 'ResponseMetadata': {'RequestId': 'bf4beb36-44f4-40e6-b169-8fe7795d490a',\n",
" 'HTTPStatusCode': 200,\n",
" 'HTTPHeaders': {'x-amzn-requestid': 'bf4beb36-44f4-40e6-b169-8fe7795d490a',\n",
" 'content-type': 'application/x-amz-json-1.1',\n",
" 'content-length': '102',\n",
" 'date': 'Tue, 22 Feb 2022 01:56:32 GMT'},\n",
" 'RetryAttempts': 0}}"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sm.create_auto_ml_job(\n",
" AutoMLJobName=auto_ml_job_name,\n",
" InputDataConfig=input_data_config,\n",
" OutputDataConfig=output_data_config,\n",
" AutoMLJobConfig=auto_ml_job_config,\n",
" RoleArn=role,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.3 Seguimiento del progreso del proceso de entrenamiento\n",
"\n",
"A alto nivel, el proceso consta de los siguientes pasos:\n",
"\n",
"* Análisis de datos. Aquí, el juego de datos de entrenamiento es usado por Autopilot para construir una lista de *pipelines* de procesamiento y diferentes algoritmos para aplicar a los datos de entrada. Asimismo, los datos son particionados en colecciones para entrenamiento y validación.\n",
"* Ingeniería de *features*, en donde Autopilot va a realizar transformaciones tanto de manera individual como así también en forma agregada.\n",
"* Ajustes del modelo, para encontrar el conjunto óptimo de hiperparámetros específicos para cada modelo.\n",
"* Identificación/selección del modelo con mejor desempeño a partir de la métrica objetivo especificada."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"JobStatus - Secondary Status\n",
"----------------------------\n",
"InProgress - Starting\n",
"InProgress - AnalyzingData\n",
"InProgress - AnalyzingData\n",
"InProgress - AnalyzingData\n",
"InProgress - AnalyzingData\n",
"InProgress - AnalyzingData\n",
"InProgress - AnalyzingData\n",
"InProgress - AnalyzingData\n",
"InProgress - AnalyzingData\n",
"InProgress - AnalyzingData\n",
"InProgress - AnalyzingData\n",
"InProgress - AnalyzingData\n",
"InProgress - AnalyzingData\n",
"InProgress - AnalyzingData\n",
"InProgress - AnalyzingData\n",
"InProgress - AnalyzingData\n",
"InProgress - AnalyzingData\n",
"InProgress - AnalyzingData\n",
"InProgress - AnalyzingData\n",
"InProgress - AnalyzingData\n",
"InProgress - FeatureEngineering\n",
"InProgress - FeatureEngineering\n",
"InProgress - FeatureEngineering\n",
"InProgress - FeatureEngineering\n",
"InProgress - FeatureEngineering\n",
"InProgress - FeatureEngineering\n",
"InProgress - FeatureEngineering\n",
"InProgress - FeatureEngineering\n",
"InProgress - FeatureEngineering\n",
"InProgress - FeatureEngineering\n",
"InProgress - FeatureEngineering\n",
"InProgress - FeatureEngineering\n",
"InProgress - FeatureEngineering\n",
"InProgress - FeatureEngineering\n",
"InProgress - FeatureEngineering\n",
"InProgress - FeatureEngineering\n",
"InProgress - FeatureEngineering\n",
"InProgress - FeatureEngineering\n",
"InProgress - FeatureEngineering\n",
"InProgress - FeatureEngineering\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - ModelTuning\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingExplainabilityReport\n",
"InProgress - GeneratingModelInsightsReport\n",
"InProgress - GeneratingModelInsightsReport\n",
"InProgress - GeneratingModelInsightsReport\n",
"InProgress - GeneratingModelInsightsReport\n",
"InProgress - GeneratingModelInsightsReport\n",
"InProgress - GeneratingModelInsightsReport\n",
"InProgress - GeneratingModelInsightsReport\n",
"InProgress - GeneratingModelInsightsReport\n",
"InProgress - GeneratingModelInsightsReport\n",
"InProgress - GeneratingModelInsightsReport\n",
"InProgress - GeneratingModelInsightsReport\n",
"InProgress - GeneratingModelInsightsReport\n",
"InProgress - GeneratingModelInsightsReport\n",
"InProgress - GeneratingModelInsightsReport\n",
"InProgress - GeneratingModelInsightsReport\n",
"InProgress - GeneratingModelInsightsReport\n",
"InProgress - GeneratingModelInsightsReport\n",
"Completed - Completed\n"
]
}
],
"source": [
"print(\"JobStatus - Secondary Status\")\n",
"print(\"----------------------------\")\n",
"\n",
"describe_response = sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)\n",
"\n",
"while describe_response[\"AutoMLJobStatus\"] not in (\"Failed\", \"Completed\", \"Stopped\"):\n",
" print(\"{} - {}\".format(describe_response[\"AutoMLJobStatus\"], describe_response[\"AutoMLJobSecondaryStatus\"]))\n",
" sleep(30)\n",
" describe_response = sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)\n",
"\n",
"print(\"{} - {}\".format(describe_response[\"AutoMLJobStatus\"], describe_response[\"AutoMLJobSecondaryStatus\"]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.4 Cantidados explorados por SageMaker Autopilot\n",
"\n",
"Puedes enumerar la lista completa de candidatos explorados Autopilot, ordenados según la métrica objetivo de performance definida en el momento de la configuración del trabajo. Aquí, entendemos por *candidato*, a una combinación de *pipeline* de procesamiento de datos, algoritmo y conjunto específico de hiperparámetros con sus valores respectivos."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1 AutoPilotExperiment-22-01-56-3T5-001-edc1815b 0.6068300008773804\n",
"2 AutoPilotExperiment-22-01-56-3T5-002-0796d645 0.6040700078010559\n",
"3 AutoPilotExperiment-22-01-56-3T5-005-289f0c15 0.5935699939727783\n",
"4 AutoPilotExperiment-22-01-56-3T5-004-3cd99a51 0.5883200168609619\n",
"5 AutoPilotExperiment-22-01-56-3T5-003-f5a7714a 0.5880500078201294\n"
]
}
],
"source": [
"candidates = sm.list_candidates_for_auto_ml_job(\n",
" AutoMLJobName=auto_ml_job_name, SortBy=\"FinalObjectiveMetricValue\"\n",
")[\"Candidates\"]\n",
"index = 1\n",
"for candidate in candidates:\n",
" print(\n",
" str(index)\n",
" + \" \"\n",
" + candidate[\"CandidateName\"]\n",
" + \" \"\n",
" + str(candidate[\"FinalAutoMLJobObjectiveMetric\"][\"Value\"])\n",
" )\n",
" index += 1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.4.1 Selección del mejor candidato"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Best candidate: AutoPilotExperiment-22-01-56-3T5-001-edc1815b\n"
]
}
],
"source": [
"best_candidate = describe_response[\"BestCandidate\"]\n",
"model_containers = best_candidate[\"InferenceContainers\"]\n",
"\n",
"print(\"Best candidate: {}\".format(best_candidate[\"CandidateName\"]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.5 Creación del modelo"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Model ARN: arn:aws:sagemaker:us-east-1:082349764434:model/autopilotexperiment-22-01-56-30\n"
]
}
],
"source": [
"model_name = auto_ml_job_name\n",
"model = sm.create_model(ModelName=model_name, Containers=model_containers, ExecutionRoleArn=role)\n",
"print(\"Model ARN: {}\".format(model[\"ModelArn\"]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.6 *Notebook* generado para el candidato\n",
" \n",
"Como explicamos en el tutorial de video, Sagemaker AutoPilot también genera un bloc de notas para especificar las definiciones del candidato. El mismo puede ejecutarse en forma interactiva, paso a paso para lograr una idea detallada de cómo Autopilot llega a identificar el mejor candidato; como así también para realizar todo tipo de ajustes o personalizaciones sobre el mismo: paralelismo, tipo de *hardware*, algoritmos a explorar, lógicas de extracción de *features*, etc.\n",
" \n",
"El *notebook* puede descargarse tanto en la UI de SageMaker Studio como también de la siguiente ubicación en S3:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'s3://sagemaker-us-east-1-082349764434/sagemaker/autopilot-dm/output/AutoPilotExperiment-22-01-56-30/sagemaker-automl-candidates/AutoPilotExperiment-22-01-56-30-pr-1-6614cc44f08c4d9bb35f61fd52/notebooks/SageMakerAutopilotCandidateDefinitionNotebook.ipynb'"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)[\"AutoMLJobArtifacts\"][\n",
" \"CandidateDefinitionNotebookLocation\"\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.7 *Notebook* de exploración de datos\n",
"\n",
"SageMaker Autopilot también genera un bloc de notas para exploración de datos, que también está accesible tanto a través de la UI como en la siguiente ubicación de S3:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'s3://sagemaker-us-east-1-082349764434/sagemaker/autopilot-dm/output/AutoPilotExperiment-22-01-56-30/sagemaker-automl-candidates/AutoPilotExperiment-22-01-56-30-pr-1-6614cc44f08c4d9bb35f61fd52/notebooks/SageMakerAutopilotDataExplorationNotebook.ipynb'"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)[\"AutoMLJobArtifacts\"][\n",
" \"DataExplorationNotebookLocation\"\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4 Evaluación del modelo: inferencias *online*\n",
"\n",
"En esta sección, vamos a desplegar el modelo de mejor desempeño generado por SageMaker Autopilot en la sección 3. Para ello, vamos a tomar el conjunto de datos de validación, realizar inferencias en línea sobre el mismo, y calcular la matriz de confusión resultante.\n",
"\n",
"Comencemos desplegando el *endpoint*: el mismo nos proveerá de infraestructura de cómputo administrada donde correr el modelo, junto con una API para realizar las inferencias en línea."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.1 Despliegue del *endpoint* para inferencias en línea"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"auto_ml_endpoint_name = auto_ml_job_name\n",
"auto_ml_endpoint_config_name = auto_ml_endpoint_name"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'EndpointConfigArn': 'arn:aws:sagemaker:us-east-1:082349764434:endpoint-config/autopilotexperiment-22-01-56-30', 'ResponseMetadata': {'RequestId': '851065a6-f773-4fbb-9642-5d6bced05fb2', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': '851065a6-f773-4fbb-9642-5d6bced05fb2', 'content-type': 'application/x-amz-json-1.1', 'content-length': '112', 'date': 'Tue, 22 Feb 2022 02:47:15 GMT'}, 'RetryAttempts': 0}}\n"
]
}
],
"source": [
"variant_name = auto_ml_job_name\n",
"\n",
"production_variants = [\n",
" {\n",
" \"InstanceType\": \"ml.t2.medium\",\n",
" \"InitialInstanceCount\": 1,\n",
" \"ModelName\": model_name,\n",
" \"VariantName\": variant_name\n",
" }\n",
"]\n",
"\n",
"ep_config = sm.create_endpoint_config(EndpointConfigName=auto_ml_endpoint_config_name, ProductionVariants=production_variants)\n",
"print(str(ep_config))"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"ep = sm.create_endpoint(EndpointName=auto_ml_endpoint_name, EndpointConfigName=auto_ml_endpoint_config_name)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: Creating\n",
"EndpointStatus: InService\n"
]
}
],
"source": [
"ep_desc = sm.describe_endpoint(EndpointName=auto_ml_endpoint_name)\n",
"\n",
"while ep_desc[\"EndpointStatus\"] in (\"Creating\"):\n",
" print(\"EndpointStatus: {}\".format(ep_desc[\"EndpointStatus\"]))\n",
" sleep(30)\n",
" ep_desc = sm.describe_endpoint(EndpointName=auto_ml_endpoint_name)\n",
"\n",
"print(\"EndpointStatus: {}\".format(ep_desc[\"EndpointStatus\"]))"
]
},
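{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an aside, the polling loop above can be factored into a small reusable helper. This is just a sketch under our own naming (`wait_for_status` is not part of any SageMaker SDK):\n",
"\n",
"```python\n",
"from time import sleep\n",
"\n",
"def wait_for_status(describe_fn, status_key, transient, poll_seconds=30):\n",
"    # Poll describe_fn() until the status leaves the transient set\n",
"    desc = describe_fn()\n",
"    while desc[status_key] in transient:\n",
"        print(\"{}: {}\".format(status_key, desc[status_key]))\n",
"        sleep(poll_seconds)\n",
"        desc = describe_fn()\n",
"    print(\"{}: {}\".format(status_key, desc[status_key]))\n",
"    return desc\n",
"```\n",
"\n",
"boto3 also ships a built-in waiter for this exact case: `sm.get_waiter(\"endpoint_in_service\").wait(EndpointName=auto_ml_endpoint_name)` blocks until the endpoint is ready."
]
},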
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.2 Inferencia *online* usando el subconjunto de validación"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import pandas as pd\n",
"import sagemaker\n",
"\n",
"from sagemaker.serializers import CSVSerializer\n",
"from sagemaker.deserializers import CSVDeserializer"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"predictor = sagemaker.predictor.Predictor(endpoint_name=auto_ml_endpoint_name,\n",
" sagemaker_session=session,\n",
" serializer=CSVSerializer(),\n",
" deserializer=CSVDeserializer())\n",
"predictor.serializer = sagemaker.serializers.CSVSerializer()"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"predictions = predictor.predict(test_data.drop([\"y\"], axis=1).to_csv(sep=\",\", header=False, index=False))\n",
"predictions_df = pd.DataFrame(predictions)"
]
},
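{
"cell_type": "markdown",
"metadata": {},
"source": [
"What `CSVSerializer` and `CSVDeserializer` do here can be sketched locally: the request becomes a headerless `text/csv` body with one record per line, and the CSV response body is split back into rows. A minimal illustration with made-up data, not the real endpoint traffic:\n",
"\n",
"```python\n",
"import pandas as pd\n",
"\n",
"# Serialization: the DataFrame becomes a headerless CSV payload\n",
"df = pd.DataFrame({\"age\": [35, 52], \"job\": [\"admin.\", \"retired\"]})\n",
"payload = df.to_csv(sep=\",\", header=False, index=False)  # \"35,admin.\\n52,retired\\n\"\n",
"\n",
"# Deserialization: the CSV response body is split into rows\n",
"response_body = \"no\\nyes\\n\"\n",
"rows = [line.split(\",\") for line in response_body.strip().split(\"\\n\")]  # [[\"no\"], [\"yes\"]]\n",
"```"
]
},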
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.3 Cálculo de la matriz de confusión"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Calculamos la matriz de confusión. Si todo salió bien, el resultado debería promediar:\n",
" \n",
" predictions 0 1\n",
" actuals\n",
" 0 6595 680\n",
" 1 160 803"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"
\n",
" \n",
" \n",
" online predictions | \n",
" 0 | \n",
" 1 | \n",
"
\n",
" \n",
" actuals | \n",
" | \n",
" | \n",
"
\n",
" \n",
" \n",
" \n",
" 0 | \n",
" 6858 | \n",
" 417 | \n",
"
\n",
" \n",
" 1 | \n",
" 323 | \n",
" 640 | \n",
"
\n",
" \n",
"
\n",
"
"
],
"text/plain": [
"online predictions 0 1\n",
"actuals \n",
"0 6858 417\n",
"1 323 640"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pd.crosstab(index=pd.DataFrame(pd.get_dummies(test_data)[\"y_yes\"].to_numpy())[0], \n",
" columns=pd.DataFrame(pd.get_dummies(predictions_df[0])[\"yes\"].to_numpy())[0], \n",
" rownames=[\"actuals\"], \n",
" colnames=[\"online predictions\"])"
]
},
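{
"cell_type": "markdown",
"metadata": {},
"source": [
"From the confusion matrix above we can derive the usual summary metrics. A quick sketch using the counts from this particular run (6858 true negatives, 417 false positives, 323 false negatives, 640 true positives; your exact numbers will vary between runs):\n",
"\n",
"```python\n",
"tn, fp, fn, tp = 6858, 417, 323, 640\n",
"\n",
"accuracy = (tp + tn) / (tp + tn + fp + fn)  # fraction of correct predictions\n",
"precision = tp / (tp + fp)  # of the predicted \"yes\", how many were right\n",
"recall = tp / (tp + fn)  # of the actual \"yes\", how many we caught\n",
"\n",
"print(\"accuracy={:.3f} precision={:.3f} recall={:.3f}\".format(accuracy, precision, recall))\n",
"```\n",
"\n",
"For this campaign use case, recall on the \"yes\" class is especially relevant: it measures how many of the genuinely interested customers the model manages to find."
]
},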
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.4 Eliminación del endpoint de inferencia en línea\n",
"\n",
"El *endpoint* de inferencia en línea posee infraestructura dedicada, teniendo [costos](https://aws.amazon.com/es/sagemaker/pricing/?nc1=h_ls) asociados que dependen del tipo de instancia, cantidad de almacenamiento y tiempo durante el cual estuvo activo. Es por esto que el mismo debe ser eliminado en cuanto ya no sea necesario."
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Endpoint deleted: AutoPilotExperiment-22-01-56-30\n"
]
}
],
"source": [
"# Endpoint config\n",
"desc = sm.describe_endpoint(EndpointName=auto_ml_endpoint_name)\n",
"sm.delete_endpoint_config(EndpointConfigName=desc[\"EndpointConfigName\"])\n",
"\n",
"# Endpoint\n",
"sm.delete_endpoint(EndpointName=auto_ml_endpoint_name)\n",
"print(\"Endpoint deleted: {}\".format(auto_ml_endpoint_name))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 5 Evaluación del modelo: inferencias *batch*\n",
"\n",
"Repitamos ahora el proceso de validación del modelo, pero usando un esquema de procesamiento fuera de línea o *batch* usando la función de [pipelines de inferencia](https://docs.aws.amazon.com/sagemaker/latest/dg/inference-pipelines.html) de SageMaker. Para esto, vamos a generar un proceso de transformación apuntando a los datos de validación que tenemos en S3."
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'TransformJobArn': 'arn:aws:sagemaker:us-east-1:082349764434:transform-job/autopilotexperiment-22-01-56-3022-01-56-30',\n",
" 'ResponseMetadata': {'RequestId': 'fcf4ae59-0c3d-4a91-b03c-c19ff724c769',\n",
" 'HTTPStatusCode': 200,\n",
" 'HTTPHeaders': {'x-amzn-requestid': 'fcf4ae59-0c3d-4a91-b03c-c19ff724c769',\n",
" 'content-type': 'application/x-amz-json-1.1',\n",
" 'content-length': '119',\n",
" 'date': 'Tue, 22 Feb 2022 03:00:53 GMT'},\n",
" 'RetryAttempts': 0}}"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"transform_job_name = auto_ml_job_name + auto_ml_job_time\n",
"\n",
"transform_input = {\n",
" \"DataSource\": {\n",
" \"S3DataSource\": {\n",
" \"S3DataType\": \"S3Prefix\", \n",
" \"S3Uri\": test_data_s3_path}\n",
" },\n",
" \"ContentType\": \"text/csv\",\n",
" \"CompressionType\": \"None\",\n",
" \"SplitType\": \"Line\",\n",
"}\n",
"\n",
"transform_output = {\n",
" \"S3OutputPath\": \"s3://{}/{}/inference-results\".format(bucket, prefix),\n",
"}\n",
"\n",
"transform_resources = {\n",
" \"InstanceType\": \"ml.m5.large\", \n",
" \"InstanceCount\": 1,\n",
"}\n",
"\n",
"sm.create_transform_job(\n",
" ModelName=model_name,\n",
" TransformJobName=transform_job_name,\n",
" TransformInput=transform_input,\n",
" TransformOutput=transform_output,\n",
" TransformResources=transform_resources,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Inference JobStatus\n",
"-------------------\n",
"InProgress\n",
"InProgress\n",
"InProgress\n",
"InProgress\n",
"InProgress\n",
"InProgress\n",
"InProgress\n",
"InProgress\n",
"InProgress\n",
"InProgress\n",
"InProgress\n",
"InProgress\n",
"InProgress\n",
"Completed\n"
]
}
],
"source": [
"print(\"Inference JobStatus\")\n",
"print(\"-------------------\")\n",
"\n",
"describe_response = sm.describe_transform_job(TransformJobName=transform_job_name)\n",
"\n",
"while describe_response[\"TransformJobStatus\"] not in (\"Failed\", \"Completed\", \"Stopped\"):\n",
" print(describe_response[\"TransformJobStatus\"])\n",
" sleep(30)\n",
" describe_response = sm.describe_transform_job(TransformJobName=transform_job_name)\n",
"\n",
"print(describe_response[\"TransformJobStatus\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 5.1 Carga y visualización de resultados"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"
\n",
" \n",
" \n",
" | \n",
" 0 | \n",
"
\n",
" \n",
" \n",
" \n",
" 0 | \n",
" no | \n",
"
\n",
" \n",
" 1 | \n",
" no | \n",
"
\n",
" \n",
" 2 | \n",
" no | \n",
"
\n",
" \n",
" 3 | \n",
" no | \n",
"
\n",
" \n",
" 4 | \n",
" no | \n",
"
\n",
" \n",
" ... | \n",
" ... | \n",
"
\n",
" \n",
" 8233 | \n",
" yes | \n",
"
\n",
" \n",
" 8234 | \n",
" yes | \n",
"
\n",
" \n",
" 8235 | \n",
" no | \n",
"
\n",
" \n",
" 8236 | \n",
" yes | \n",
"
\n",
" \n",
" 8237 | \n",
" yes | \n",
"
\n",
" \n",
"
\n",
"
8238 rows × 1 columns
\n",
"
"
],
"text/plain": [
" 0\n",
"0 no\n",
"1 no\n",
"2 no\n",
"3 no\n",
"4 no\n",
"... ...\n",
"8233 yes\n",
"8234 yes\n",
"8235 no\n",
"8236 yes\n",
"8237 yes\n",
"\n",
"[8238 rows x 1 columns]"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"s3_output_key = \"{}/inference-results/test_data.csv.out\".format(prefix)\n",
"local_inference_results_path = \"inference_results.csv\"\n",
"\n",
"s3 = boto3.resource(\"s3\")\n",
"inference_results_bucket = s3.Bucket(session.default_bucket())\n",
"\n",
"inference_results_bucket.download_file(s3_output_key, local_inference_results_path)\n",
"\n",
"offline_df = pd.read_csv(local_inference_results_path, header=None, sep=\";\")\n",
"pd.set_option(\"display.max_rows\", 10) # Mantiene la salida en una única página\n",
"offline_df"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 5.2 Cálculo de la matriz de confusión\n",
"\n",
"En esta sección, volemos a calcular la matriz de confusión, pero esta vez usando los resultados de la inferencia fuera de línea. Si todo salió bien, el resultado debería coincidir con las predicciones en línea realizadas en la sección 4.3. Es decir, el resultado debería promediar:\n",
" \n",
" predictions 0 1\n",
" actuals\n",
" 0 6595 680\n",
" 1 160 803"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"\n",
"\n",
"
\n",
" \n",
" \n",
" offline predictions | \n",
" 0 | \n",
" 1 | \n",
"
\n",
" \n",
" actuals | \n",
" | \n",
" | \n",
"
\n",
" \n",
" \n",
" \n",
" 0 | \n",
" 6858 | \n",
" 417 | \n",
"
\n",
" \n",
" 1 | \n",
" 323 | \n",
" 640 | \n",
"
\n",
" \n",
"
\n",
"
"
],
"text/plain": [
"offline predictions 0 1\n",
"actuals \n",
"0 6858 417\n",
"1 323 640"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pd.crosstab(index=pd.DataFrame(pd.get_dummies(test_data[\"y\"])[\"yes\"].to_numpy())[0], \n",
" columns=pd.DataFrame(pd.get_dummies(offline_df)[\"0_yes\"].to_numpy())[0], \n",
" rownames=['actuals'], \n",
" colnames=['offline predictions'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6. Eliminación/borrado de recursos"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 6.1 Borrado del modelo en SageMaker\n",
"\n",
"Aquí borramos el modelo que fue creado en SageMaker cuando llamamos a create_model() más arriba; es decir, no estamos borrando los artefactos del modelo, código de inferencias, o el rol asociado. Para más información, visitar la documentación de [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html#SageMaker.Client.delete_model)."
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [],
"source": [
"# sm.delete_model(ModelName=model_name)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 6.2 Borrado de artefactos en S3.\n",
"\n",
"La corrida de Autopilot crea una cantidad de artefactos como ser por ejemplo particiones del conjunto de datos, scripts de procesamiento, etc. Activando este código, podemos eliminar estos archivos. Asimismo esta operación elimina todos los modelos y notebooks generados durante la corrida."
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"# s3 = boto3.resource(\"s3\")\n",
"# job_outputs_bucket = s3.Bucket(bucket)\n",
"# \n",
"# job_outputs_prefix = \"{}/output/{}\".format(prefix,auto_ml_job_name)\n",
"# job_outputs_bucket.objects.filter(Prefix=job_outputs_prefix).delete()"
]
}
],
"metadata": {
"instance_type": "ml.t3.medium",
"kernelspec": {
"display_name": "Python 3 (Data Science)",
"language": "python",
"name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/datascience-1.0"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.10"
}
},
"nbformat": 4,
"nbformat_minor": 4
}