{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"# Inference Pipeline with Scikit-learn and Linear Learner\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"\n",
"This notebook's CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook. \n",
"\n",
"\n",
"\n",
"---"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"Typically a Machine Learning (ML) process consists of few steps: data gathering with various ETL jobs, pre-processing the data, featurizing the dataset by incorporating standard techniques or prior knowledge, and finally training an ML model using an algorithm. \n",
"In many cases, when the trained model is used for processing real time or batch prediction requests, the model receives data in a format which needs to pre-processed (e.g. featurized) before it can be passed to the algorithm. In the following notebook, we will demonstrate how you can build your ML Pipeline leveraging the Sagemaker Scikit-learn container and SageMaker Linear Learner algorithm & after the model is trained, deploy the Pipeline (Data preprocessing and Lineara Learner) as an Inference Pipeline behind a single Endpoint for real time inference and for batch inferences using Amazon SageMaker Batch Transform.\n",
"\n",
"We will demonstrate this using the Abalone Dataset to guess the age of Abalone with physical features. The dataset is available from [UCI Machine Learning](https://archive.ics.uci.edu/ml/datasets/abalone); the aim for this task is to determine age of an Abalone (a kind of shellfish) from its physical measurements. We'll use Sagemaker's Scikit-learn container to featurize the dataset so that it can be used for training with Linear Learner.\n",
"\n",
"### Table of contents\n",
"* [Preprocessing data and training the model](#training)\n",
" * [Upload the data for training](#upload_data)\n",
" * [Create a Scikit-learn script to train with](#create_sklearn_script)\n",
" * [Create SageMaker Scikit Estimator](#create_sklearn_estimator)\n",
" * [Batch transform our training data](#preprocess_train_data)\n",
" * [Fit a LinearLearner Model with the preprocessed data](#training_model)\n",
"* [Inference Pipeline with Scikit preprocessor and Linear Learner](#inference_pipeline)\n",
" * [Set up the inference pipeline](#pipeline_setup)\n",
" * [Make a request to our pipeline endpoint](#pipeline_inference_request)\n",
" * [Delete Endpoint](#delete_endpoint)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"Let's first create our Sagemaker session and role, and create a S3 prefix to use for the notebook example."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"!pip install -U sagemaker"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"import sagemaker\n",
"from sagemaker import get_execution_role\n",
"\n",
"sagemaker_session = sagemaker.Session()\n",
"region = sagemaker_session.boto_region_name\n",
"\n",
"# Get a SageMaker-compatible role used by this Notebook Instance.\n",
"role = get_execution_role()\n",
"\n",
"# S3 prefix\n",
"bucket = sagemaker_session.default_bucket()\n",
"prefix = \"Scikit-LinearLearner-pipeline-abalone-example\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"# Preprocessing data and training the model \n",
"## Downloading dataset \n",
"SageMaker team has downloaded the dataset from UCI and uploaded to one of the S3 buckets in our account."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"! mkdir abalone_data\n",
"\n",
"import boto3\n",
"\n",
"s3 = boto3.client(\"s3\")\n",
"s3.download_file(\n",
" f\"sagemaker-example-files-prod-{region}\",\n",
" \"datasets/tabular/uci_abalone/abalone.csv\",\n",
" \"abalone_data/abalone.csv\",\n",
")"
]
},
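  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick optional check, here is a minimal sketch that peeks at the downloaded file. The UCI Abalone data is a headerless CSV with the categorical `sex` column first, seven numeric measurements, and the `rings` label last (4,177 rows in total); the column names used below are taken from the featurizer script shown later in this notebook.\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# Peek at the raw, headerless CSV downloaded above.\n",
    "df = pd.read_csv(\"abalone_data/abalone.csv\", header=None)\n",
    "df.columns = [\n",
    "    \"sex\", \"length\", \"diameter\", \"height\", \"whole_weight\",\n",
    "    \"shucked_weight\", \"viscera_weight\", \"shell_weight\", \"rings\",\n",
    "]\n",
    "print(df.shape)  # expected: (4177, 9)\n",
    "print(df.head())\n",
    "```"
   ]
  },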
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"## Upload the data for training \n",
"\n",
"When training large models with huge amounts of data, you'll typically use big data tools, like Amazon Athena, AWS Glue, or Amazon EMR, to create your data in S3. We can use the tools provided by the SageMaker Python SDK to upload the data to a default bucket. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"WORK_DIRECTORY = \"abalone_data\"\n",
"\n",
"train_input = sagemaker_session.upload_data(\n",
" path=\"{}/{}\".format(WORK_DIRECTORY, \"abalone.csv\"),\n",
" bucket=bucket,\n",
" key_prefix=\"{}/{}\".format(prefix, \"train\"),\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"## Create a Scikit-learn script to train with \n",
"To run Scikit-learn on Sagemaker `SKLearn` Estimator with a script as an entry point. The training script is very similar to a training script you might run outside of SageMaker, but you can access useful properties about the training environment through various environment variables, such as:\n",
"\n",
"* SM_MODEL_DIR: A string representing the path to the directory to write model artifacts to. These artifacts are uploaded to S3 for model hosting.\n",
"* SM_OUTPUT_DIR: A string representing the filesystem path to write output artifacts to. Output artifacts may include checkpoints, graphs, and other files to save, not including model artifacts. These artifacts are compressed and uploaded to S3 to the same S3 prefix as the model artifacts.\n",
"\n",
"Supposing two input channels, 'train' and 'test', were used in the call to the Chainer estimator's fit() method, the following will be set, following the format SM_CHANNEL_[channel_name]:\n",
"\n",
"* SM_CHANNEL_TRAIN: A string representing the path to the directory containing data in the 'train' channel\n",
"* SM_CHANNEL_TEST: Same as above, but for the 'test' channel.\n",
"\n",
"A typical training script loads data from the input channels, configures training with hyperparameters, trains a model, and saves a model to model_dir so that it can be hosted later. Hyperparameters are passed to your script as arguments and can be retrieved with an argparse.ArgumentParser instance. For example, the script run by this notebook:\n",
"\n",
"```python\n",
"from __future__ import print_function\n",
"\n",
"import time\n",
"import sys\n",
"from io import StringIO\n",
"import os\n",
"import shutil\n",
"\n",
"import argparse\n",
"import csv\n",
"import json\n",
"import joblib\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"from sklearn.compose import ColumnTransformer, make_column_selector\n",
"from sklearn.impute import SimpleImputer\n",
"from sklearn.pipeline import make_pipeline\n",
"from sklearn.preprocessing import Binarizer, StandardScaler, OneHotEncoder\n",
"\n",
"from sagemaker_containers.beta.framework import (\n",
" content_types, encoders, env, modules, transformer, worker)\n",
"\n",
"# Since we get a headerless CSV file we specify the column names here.\n",
"feature_columns_names = [\n",
" 'sex', # M, F, and I (infant)\n",
" 'length', # Longest shell measurement\n",
" 'diameter', # perpendicular to length\n",
" 'height', # with meat in shell\n",
" 'whole_weight', # whole abalone\n",
" 'shucked_weight', # weight of meat\n",
" 'viscera_weight', # gut weight (after bleeding)\n",
" 'shell_weight'] # after being dried\n",
"\n",
"label_column = 'rings'\n",
"\n",
"feature_columns_dtype = {\n",
" 'sex': \"category\",\n",
" 'length': \"float64\",\n",
" 'diameter': \"float64\",\n",
" 'height': \"float64\",\n",
" 'whole_weight': \"float64\",\n",
" 'shucked_weight': \"float64\",\n",
" 'viscera_weight': \"float64\",\n",
" 'shell_weight': \"float64\"}\n",
"\n",
"label_column_dtype = {'rings': \"float64\"} # +1.5 gives the age in years\n",
"\n",
"def merge_two_dicts(x, y):\n",
" z = x.copy() # start with x's keys and values\n",
" z.update(y) # modifies z with y's keys and values & returns None\n",
" return z\n",
"\n",
"if __name__ == '__main__':\n",
"\n",
" parser = argparse.ArgumentParser()\n",
"\n",
" # Sagemaker specific arguments. Defaults are set in the environment variables.\n",
" parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])\n",
" parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])\n",
" parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])\n",
"\n",
" args = parser.parse_args()\n",
"\n",
" # Take the set of files and read them all into a single pandas dataframe\n",
" input_files = [ os.path.join(args.train, file) for file in os.listdir(args.train) ]\n",
" if len(input_files) == 0:\n",
" raise ValueError(('There are no files in {}.\\n' +\n",
" 'This usually indicates that the channel ({}) was incorrectly specified,\\n' +\n",
" 'the data specification in S3 was incorrectly specified or the role specified\\n' +\n",
" 'does not have permission to access the data.').format(args.train, \"train\"))\n",
" \n",
" raw_data = [ pd.read_csv(\n",
" file, \n",
" header=None, \n",
" names=feature_columns_names + [label_column],\n",
" dtype=merge_two_dicts(feature_columns_dtype, label_column_dtype)) for file in input_files ]\n",
" concat_data = pd.concat(raw_data)\n",
"\n",
" # Labels should not be preprocessed. predict_fn will reinsert the labels after featurizing.\n",
" concat_data.drop(label_column, axis=1, inplace=True)\n",
"\n",
" # This section is adapted from the scikit-learn example of using preprocessing pipelines:\n",
" #\n",
" # https://scikit-learn.org/stable/auto_examples/compose/plot_column_transformer_mixed_types.html\n",
" #\n",
" # We will train our classifier with the following features:\n",
" # Numeric Features:\n",
" # - length: Longest shell measurement\n",
" # - diameter: Diameter perpendicular to length\n",
" # - height: Height with meat in shell\n",
" # - whole_weight: Weight of whole abalone\n",
" # - shucked_weight: Weight of meat\n",
" # - viscera_weight: Gut weight (after bleeding)\n",
" # - shell_weight: Weight after being dried\n",
" # Categorical Features:\n",
" # - sex: categories encoded as strings {'M', 'F', 'I'} where 'I' is Infant\n",
" numeric_transformer = make_pipeline(\n",
" SimpleImputer(strategy='median'),\n",
" StandardScaler())\n",
"\n",
" categorical_transformer = make_pipeline(\n",
" SimpleImputer(strategy='constant', fill_value='missing'),\n",
" OneHotEncoder(handle_unknown='ignore'))\n",
"\n",
" preprocessor = ColumnTransformer(transformers=[\n",
" (\"num\", numeric_transformer, make_column_selector(dtype_exclude=\"category\")),\n",
" (\"cat\", categorical_transformer, make_column_selector(dtype_include=\"category\"))])\n",
" \n",
" preprocessor.fit(concat_data)\n",
"\n",
" joblib.dump(preprocessor, os.path.join(args.model_dir, \"model.joblib\"))\n",
"\n",
" print(\"saved model!\")\n",
" \n",
" \n",
"def input_fn(input_data, content_type):\n",
" \"\"\"Parse input data payload\n",
" \n",
" We currently only take csv input. Since we need to process both labelled\n",
" and unlabelled data we first determine whether the label column is present\n",
" by looking at how many columns were provided.\n",
" \"\"\"\n",
" if content_type == 'text/csv':\n",
" # Read the raw input data as CSV.\n",
" df = pd.read_csv(StringIO(input_data), \n",
" header=None)\n",
" \n",
" if len(df.columns) == len(feature_columns_names) + 1:\n",
" # This is a labelled example, includes the ring label\n",
" df.columns = feature_columns_names + [label_column]\n",
" elif len(df.columns) == len(feature_columns_names):\n",
" # This is an unlabelled example.\n",
" df.columns = feature_columns_names\n",
" \n",
" return df\n",
" else:\n",
" raise ValueError(\"{} not supported by script!\".format(content_type))\n",
" \n",
"\n",
"def output_fn(prediction, accept):\n",
" \"\"\"Format prediction output\n",
" \n",
" The default accept/content-type between containers for serial inference is JSON.\n",
" We also want to set the ContentType or mimetype as the same value as accept so the next\n",
" container can read the response payload correctly.\n",
" \"\"\"\n",
" if accept == \"application/json\":\n",
" instances = []\n",
" for row in prediction.tolist():\n",
" instances.append({\"features\": row})\n",
"\n",
" json_output = {\"instances\": instances}\n",
"\n",
" return worker.Response(json.dumps(json_output), mimetype=accept)\n",
" elif accept == 'text/csv':\n",
" return worker.Response(encoders.encode(prediction, accept), mimetype=accept)\n",
" else:\n",
" raise RuntimeException(\"{} accept type is not supported by this script.\".format(accept))\n",
"\n",
"\n",
"def predict_fn(input_data, model):\n",
" \"\"\"Preprocess input data\n",
" \n",
" We implement this because the default predict_fn uses .predict(), but our model is a preprocessor\n",
" so we want to use .transform().\n",
"\n",
" The output is returned in the following order:\n",
" \n",
" rest of features either one hot encoded or standardized\n",
" \"\"\"\n",
" features = model.transform(input_data)\n",
"\n",
" if label_column in input_data:\n",
" # Return the label (as the first column) and the set of features.\n",
" return np.insert(features, 0, input_data[label_column], axis=1)\n",
" else:\n",
" # Return only the set of features\n",
" return features\n",
" \n",
"\n",
"def model_fn(model_dir):\n",
" \"\"\"Deserialize fitted model\n",
" \"\"\"\n",
" preprocessor = joblib.load(os.path.join(model_dir, \"model.joblib\"))\n",
" return preprocessor\n",
"```"
]
},
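  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The featurizer above does not define any hyperparameters of its own, but the hyperparameter mechanism mentioned earlier works as sketched below: values passed to the estimator's `hyperparameters` argument arrive in the script as command-line arguments, so they can be read with `argparse` alongside the `SM_*` environment-variable defaults. The `n_bins` name here is purely hypothetical and is not used anywhere in this notebook.\n",
    "\n",
    "```python\n",
    "import argparse\n",
    "import os\n",
    "\n",
    "# SKLearn(entry_point=script_path, ..., hyperparameters={\"n_bins\": 10}) would\n",
    "# invoke the script roughly as: python sklearn_abalone_featurizer.py --n_bins 10\n",
    "parser = argparse.ArgumentParser()\n",
    "parser.add_argument(\"--n_bins\", type=int, default=10)  # hypothetical hyperparameter\n",
    "parser.add_argument(\"--model-dir\", type=str, default=os.environ.get(\"SM_MODEL_DIR\", \".\"))\n",
    "args, _ = parser.parse_known_args()\n",
    "print(args.n_bins, args.model_dir)\n",
    "```"
   ]
  },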
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"## Create SageMaker Scikit Estimator \n",
"\n",
"To run our Scikit-learn training script on SageMaker, we construct a `sagemaker.sklearn.estimator.sklearn` estimator, which accepts several constructor arguments:\n",
"\n",
"* __entry_point__: The path to the Python script SageMaker runs for training and prediction.\n",
"* __role__: Role ARN\n",
"* __framework_version__: Scikit-learn version you want to use for executing your model training code.\n",
"* __train_instance_type__ *(optional)*: The type of SageMaker instances for training. __Note__: Because Scikit-learn does not natively support GPU training, Sagemaker Scikit-learn does not currently support training on GPU instance types.\n",
"* __sagemaker_session__ *(optional)*: The session used to train on Sagemaker.\n",
"\n",
"To see the code for the SKLearn Estimator, see here: https://github.com/aws/sagemaker-python-sdk/tree/master/src/sagemaker/sklearn"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"from sagemaker.sklearn.estimator import SKLearn\n",
"\n",
"FRAMEWORK_VERSION = \"1.2-1\"\n",
"script_path = \"sklearn_abalone_featurizer.py\"\n",
"\n",
"sklearn_preprocessor = SKLearn(\n",
" entry_point=script_path,\n",
" role=role,\n",
" framework_version=FRAMEWORK_VERSION,\n",
" instance_type=\"ml.c4.xlarge\",\n",
" sagemaker_session=sagemaker_session,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"sklearn_preprocessor.fit({\"train\": train_input})"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"## Batch transform our training data \n",
"Now that our proprocessor is properly fitted, let's go ahead and preprocess our training data. Let's use batch transform to directly preprocess the raw data and store right back into s3."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Define a SKLearn Transformer from the trained SKLearn Estimator\n",
"transformer = sklearn_preprocessor.transformer(\n",
" instance_count=1, instance_type=\"ml.m5.xlarge\", assemble_with=\"Line\", accept=\"text/csv\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"# Preprocess training input\n",
"transformer.transform(train_input, content_type=\"text/csv\")\n",
"print(\"Waiting for transform job: \" + transformer.latest_transform_job.job_name)\n",
"transformer.wait()\n",
"preprocessed_train = transformer.output_path"
]
},
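  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an optional sanity check, the sketch below reads the first featurized row that the batch transform job wrote to S3. Batch transform names each output object `<input object name>.out`, so the key is assumed to be `abalone.csv.out` here; each output row should contain 11 comma-separated values, the `rings` label followed by the 10 features.\n",
    "\n",
    "```python\n",
    "import boto3\n",
    "\n",
    "# Parse the bucket and prefix out of the transformer's S3 output path.\n",
    "out_bucket, _, out_prefix = preprocessed_train.replace(\"s3://\", \"\").partition(\"/\")\n",
    "obj = boto3.client(\"s3\").get_object(Bucket=out_bucket, Key=f\"{out_prefix}/abalone.csv.out\")\n",
    "first_row = obj[\"Body\"].read().decode(\"utf-8\").splitlines()[0]\n",
    "print(first_row)\n",
    "print(len(first_row.split(\",\")))  # expected: 11 (rings label + 10 features)\n",
    "```"
   ]
  },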
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"## Fit a LinearLearner Model with the preprocessed data \n",
"Let's take the preprocessed training data and fit a LinearLearner Model. Sagemaker provides prebuilt algorithm containers that can be used with the Python SDK. The previous Scikit-learn job preprocessed the raw Titanic dataset into labeled, useable data that we can now use to fit a binary classifier Linear Learner model.\n",
"\n",
"For more on Linear Learner see: https://docs.aws.amazon.com/sagemaker/latest/dg/linear-learner.html"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"import boto3\n",
"from sagemaker.image_uris import retrieve\n",
"\n",
"ll_image = retrieve(\"linear-learner\", boto3.Session().region_name)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"s3_ll_output_key_prefix = \"ll_training_output\"\n",
"s3_ll_output_location = \"s3://{}/{}/{}/{}\".format(\n",
" bucket, prefix, s3_ll_output_key_prefix, \"ll_model\"\n",
")\n",
"\n",
"ll_estimator = sagemaker.estimator.Estimator(\n",
" ll_image,\n",
" role,\n",
" instance_count=1,\n",
" instance_type=\"ml.m4.2xlarge\",\n",
" volume_size=20,\n",
" max_run=3600,\n",
" input_mode=\"File\",\n",
" output_path=s3_ll_output_location,\n",
" sagemaker_session=sagemaker_session,\n",
")\n",
"\n",
"ll_estimator.set_hyperparameters(feature_dim=10, predictor_type=\"regressor\", mini_batch_size=32)\n",
"\n",
"ll_train_data = sagemaker.inputs.TrainingInput(\n",
" preprocessed_train,\n",
" distribution=\"FullyReplicated\",\n",
" content_type=\"text/csv\",\n",
" s3_data_type=\"S3Prefix\",\n",
")\n",
"\n",
"data_channels = {\"train\": ll_train_data}\n",
"ll_estimator.fit(inputs=data_channels, logs=True)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"# Serial Inference Pipeline with Scikit preprocessor and Linear Learner \n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"## Set up the inference pipeline \n",
"Setting up a Machine Learning pipeline can be done with the Pipeline Model. This sets up a list of models in a single endpoint; in this example, we configure our pipeline model with the fitted Scikit-learn inference model and the fitted Linear Learner model. Deploying the model follows the same ```deploy``` pattern in the SDK."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"from sagemaker.model import Model\n",
"from sagemaker.pipeline import PipelineModel\n",
"import boto3\n",
"from time import gmtime, strftime\n",
"\n",
"timestamp_prefix = strftime(\"%Y-%m-%d-%H-%M-%S\", gmtime())\n",
"\n",
"scikit_learn_inferencee_model = sklearn_preprocessor.create_model()\n",
"linear_learner_model = ll_estimator.create_model()\n",
"\n",
"model_name = \"inference-pipeline-\" + timestamp_prefix\n",
"endpoint_name = \"inference-pipeline-ep-\" + timestamp_prefix\n",
"sm_model = PipelineModel(\n",
" name=model_name, role=role, models=[scikit_learn_inferencee_model, linear_learner_model]\n",
")\n",
"\n",
"sm_model.deploy(initial_instance_count=1, instance_type=\"ml.c4.xlarge\", endpoint_name=endpoint_name)"
]
},
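  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once deployed, the pipeline is registered as a single SageMaker model with two containers that are invoked in order. A small sketch to confirm this with the low-level API:\n",
    "\n",
    "```python\n",
    "import boto3\n",
    "\n",
    "# The pipeline model exposes a Containers list: the Scikit-learn featurizer\n",
    "# image followed by the Linear Learner image.\n",
    "model_desc = boto3.client(\"sagemaker\").describe_model(ModelName=model_name)\n",
    "for i, container in enumerate(model_desc[\"Containers\"]):\n",
    "    print(i, container[\"Image\"])\n",
    "```"
   ]
  },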
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"## Make a request to our pipeline endpoint \n",
"\n",
"Here we just grab the first line from the test data (you'll notice that the inference python script is very particular about the ordering of the inference request data). The ```ContentType``` field configures the first container, while the ```Accept``` field configures the last container. You can also specify each container's ```Accept``` and ```ContentType``` values using environment variables.\n",
"\n",
"We make our request with the payload in ```'text/csv'``` format, since that is what our script currently supports. If other formats need to be supported, this would have to be added to the ```output_fn()``` method in our entry point. Note that we set the ```Accept``` to ```application/json```, since Linear Learner does not support ```text/csv``` ```Accept```. The prediction output in this case is trying to guess the number of rings the abalone specimen would have given its other physical features; the actual number of rings is 10."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"from sagemaker.predictor import Predictor\n",
"from sagemaker.serializers import CSVSerializer\n",
"\n",
"payload = \"M, 0.44, 0.365, 0.125, 0.516, 0.2155, 0.114, 0.155\"\n",
"actual_rings = 10\n",
"predictor = Predictor(\n",
" endpoint_name=endpoint_name, sagemaker_session=sagemaker_session, serializer=CSVSerializer()\n",
")\n",
"\n",
"print(predictor.predict(payload))"
]
},
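  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, the same request can be made with the low-level SageMaker runtime API, which makes the ```ContentType``` and ```Accept``` headers discussed above explicit. This sketch is equivalent to the `Predictor` call above:\n",
    "\n",
    "```python\n",
    "import boto3\n",
    "\n",
    "# ContentType configures the input of the first container (the featurizer);\n",
    "# Accept configures the output of the last container (Linear Learner).\n",
    "runtime = boto3.client(\"sagemaker-runtime\")\n",
    "response = runtime.invoke_endpoint(\n",
    "    EndpointName=endpoint_name,\n",
    "    ContentType=\"text/csv\",\n",
    "    Accept=\"application/json\",\n",
    "    Body=payload,\n",
    ")\n",
    "print(response[\"Body\"].read().decode(\"utf-8\"))\n",
    "```"
   ]
  },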
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": [
"## Delete Endpoint \n",
"Once we are finished with the endpoint, we clean up the resources!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"sm_client = sagemaker_session.boto_session.client(\"sagemaker\")\n",
"predictor.delete_model()\n",
"sm_client.delete_endpoint(EndpointName=endpoint_name)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": [
"transformer.delete_model()"
]
},
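  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `delete_endpoint` call above removes the endpoint but leaves its endpoint configuration behind. A small sketch to remove that as well, assuming the configuration shares the endpoint's name (the default when the SDK creates it during `deploy`):\n",
    "\n",
    "```python\n",
    "# Remove the leftover endpoint configuration created by deploy().\n",
    "sm_client.delete_endpoint_config(EndpointConfigName=endpoint_name)\n",
    "```"
   ]
  },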
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Notebook CI Test Results\n",
"\n",
"This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n"
]
}
],
"metadata": {
"celltoolbar": "Tags",
"kernelspec": {
"display_name": "Python 3 (Data Science 3.0)",
"language": "python",
"name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-east-1:081325390199:image/sagemaker-data-science-310-v1"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}