{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Loading Word Embeddings in SageMaker for Text Classification with TensorFlow 2\n", "\n", "In this notebook, two aspects of Amazon SageMaker will be demonstrated. First, we'll use SageMaker Script Mode with a prebuilt TensorFlow 2 framework container, which enables you to use a training script similar to one you would use outside SageMaker. Second, we'll see how to use the concept of SageMaker input channels to load word embeddings into the container for training. The word embeddings will be used with a Convolutional Neural Net (CNN) in TensorFlow 2 to perform text classification. \n", "\n", "We'll begin with some necessary imports." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "import sys\n", "import numpy as np\n", "import tensorflow as tf\n", "\n", "from tensorflow.keras.preprocessing.text import Tokenizer\n", "from tensorflow.keras.preprocessing.sequence import pad_sequences\n", "from tensorflow.keras.utils import to_categorical" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Prepare Dataset and Embeddings" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Initially, we download the 20 Newsgroups dataset. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!mkdir ./20_newsgroup\n", "!wget -O ./20_newsgroup/news20.tar.gz http://www.cs.cmu.edu/afs/cs.cmu.edu/project/theo-20/www/data/news20.tar.gz\n", "!tar -xvzf ./20_newsgroup/news20.tar.gz" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The next step is to download the GloVe word embeddings that we will load in the neural net." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!mkdir ./glove.6B\n", "!wget https://nlp.stanford.edu/data/glove.6B.zip\n", "!unzip glove.6B.zip -d ./glove.6B" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have to map the GloVe embedding vectors into an index." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "BASE_DIR = ''\n", "GLOVE_DIR = os.path.join(BASE_DIR, 'glove.6B')\n", "TEXT_DATA_DIR = os.path.join(BASE_DIR, '20_newsgroup')\n", "MAX_SEQUENCE_LENGTH = 1000\n", "MAX_NUM_WORDS = 20000\n", "EMBEDDING_DIM = 100\n", "VALIDATION_SPLIT = 0.2\n", "\n", "embeddings_index = {}\n", "with open(os.path.join(GLOVE_DIR, 'glove.6B.100d.txt')) as f:\n", " for line in f:\n", " values = line.split()\n", " word = values[0]\n", " coefs = np.asarray(values[1:], dtype='float32')\n", " embeddings_index[word] = coefs\n", "\n", "print('Found %s word vectors.' % len(embeddings_index))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The 20 Newsgroups text also must be preprocessed. For example, the labels for each sample must be extracted and mapped to a numeric index." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "texts = [] # list of text samples\n", "labels_index = {} # dictionary mapping label name to numeric id\n", "labels = [] # list of label ids\n", "for name in sorted(os.listdir(TEXT_DATA_DIR)):\n", " path = os.path.join(TEXT_DATA_DIR, name)\n", " if os.path.isdir(path):\n", " label_id = len(labels_index)\n", " labels_index[name] = label_id\n", " for fname in sorted(os.listdir(path)):\n", " if fname.isdigit():\n", " fpath = os.path.join(path, fname)\n", " args = {} if sys.version_info < (3,) else {'encoding': 'latin-1'}\n", " with open(fpath, **args) as f:\n", " t = f.read()\n", " i = t.find('\\n\\n') # skip header\n", " if 0 < i:\n", " t = t[i:]\n", " texts.append(t)\n", " labels.append(label_id)\n", "\n", "print('Found %s texts.' % len(texts))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can use Keras text preprocessing functions to tokenize the text, limit the sequence length of the samples, and pad shorter sequences as necessary. Additionally, the preprocessed dataset must be split into training and validation sets." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "tokenizer = Tokenizer(num_words=MAX_NUM_WORDS)\n", "tokenizer.fit_on_texts(texts)\n", "sequences = tokenizer.texts_to_sequences(texts)\n", "\n", "word_index = tokenizer.word_index\n", "print('Found %s unique tokens.' % len(word_index))\n", "\n", "data = pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH)\n", "\n", "labels = to_categorical(np.asarray(labels))\n", "print('Shape of data tensor:', data.shape)\n", "print('Shape of label tensor:', labels.shape)\n", "\n", "# split the data into a training set and a validation set\n", "indices = np.arange(data.shape[0])\n", "np.random.shuffle(indices)\n", "data = data[indices]\n", "labels = labels[indices]\n", "num_validation_samples = int(VALIDATION_SPLIT * data.shape[0])\n", "\n", "x_train = data[:-num_validation_samples]\n", "y_train = labels[:-num_validation_samples]\n", "x_val = data[-num_validation_samples:]\n", "y_val = labels[-num_validation_samples:]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After the dataset text preprocessing is complete, we can now map the 20 Newsgroup vocabulary words to their GloVe embedding vectors for use in an embedding matrix. This matrix will be loaded in an Embedding layer of the neural net." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "num_words = min(MAX_NUM_WORDS, len(word_index)) + 1\n", "embedding_matrix = np.zeros((num_words, EMBEDDING_DIM))\n", "for word, i in word_index.items():\n", " if i > MAX_NUM_WORDS:\n", " continue\n", " embedding_vector = embeddings_index.get(word)\n", " if embedding_vector is not None:\n", " # words not found in embedding index will be all-zeros.\n", " embedding_matrix[i] = embedding_vector\n", "\n", "print('Number of words:', num_words)\n", "print('Shape of embeddings:', embedding_matrix.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now the data AND embeddings are saved to file to prepare for training.\n", "\n", "Note that we will not be loading the original, unprocessed set of embeddings into the training container — instead, to save loading time, we just save the embedding matrix, which at 16MB is much smaller than the original set of embeddings at 892MB. 
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data_dir = os.path.join(os.getcwd(), 'data')\n", "os.makedirs(data_dir, exist_ok=True)\n", "\n", "train_dir = os.path.join(os.getcwd(), 'data/train')\n", "os.makedirs(train_dir, exist_ok=True)\n", "\n", "val_dir = os.path.join(os.getcwd(), 'data/val')\n", "os.makedirs(val_dir, exist_ok=True)\n", "\n", "embedding_dir = os.path.join(os.getcwd(), 'data/embedding')\n", "os.makedirs(embedding_dir, exist_ok=True)\n", "\n", "np.save(os.path.join(train_dir, 'x_train.npy'), x_train)\n", "np.save(os.path.join(train_dir, 'y_train.npy'), y_train)\n", "np.save(os.path.join(val_dir, 'x_val.npy'), x_val)\n", "np.save(os.path.join(val_dir, 'y_val.npy'), y_val)\n", "np.save(os.path.join(embedding_dir, 'embedding.npy'), embedding_matrix)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "# SageMaker Hosted Training\n", "\n", "Now that we've prepared the embedding matrix, we can move on to SageMaker's hosted training functionality. Hosted training is preferred over local notebook prototyping for full training runs, especially large-scale, distributed training. Before starting hosted training, the data must be uploaded to S3, along with the word embedding matrix. We'll do that now and confirm the uploads were successful." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import sagemaker\n", "\n", "sess = sagemaker.Session()\n", "\n", "s3_prefix = 'tf-20-newsgroups'\n", "\n", "traindata_s3_prefix = '{}/data/train'.format(s3_prefix)\n", "valdata_s3_prefix = '{}/data/val'.format(s3_prefix)\n", "embeddingdata_s3_prefix = '{}/data/embedding'.format(s3_prefix)\n", "\n", "train_s3 = sess.upload_data(path='./data/train/', key_prefix=traindata_s3_prefix)\n", "val_s3 = sess.upload_data(path='./data/val/', key_prefix=valdata_s3_prefix)\n", "embedding_s3 = sess.upload_data(path='./data/embedding/', key_prefix=embeddingdata_s3_prefix)\n", "\n", "inputs = {'train': train_s3, 'val': val_s3, 'embedding': embedding_s3}\n", "print(inputs)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We're now ready to set up an Estimator object for hosted training. Hyperparameters are passed in as a dictionary. Importantly, for a model such as this one that takes word embeddings as an input, various aspects of the embeddings (vocabulary size, embedding dimension, maximum sequence length) can be passed in via this dictionary, so the embedding layer can be constructed flexibly rather than hardcoded. This allows easier tuning without code modifications. A sketch of how the training script can consume these values follows." ] },
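{ "cell_type": "markdown", "metadata": {}, "source": [ "To make the flow of channels and hyperparameters concrete, here is a minimal sketch of what the relevant parts of the training script (`train.py` in the `code` directory) might look like. The actual script is not shown in this notebook, so treat this as illustrative rather than definitive: SageMaker passes the hyperparameters to the script as command-line arguments, exposes each input channel's local directory through the `SM_CHANNEL_TRAIN`, `SM_CHANNEL_VAL`, and `SM_CHANNEL_EMBEDDING` environment variables, and expects the model to be saved under `SM_MODEL_DIR`. The exact model architecture below is an assumption for illustration.\n", "\n", "```python\n", "# Sketch only; the actual code/train.py may differ.\n", "import argparse\n", "import os\n", "import numpy as np\n", "import tensorflow as tf\n", "\n", "if __name__ == '__main__':\n", "    parser = argparse.ArgumentParser()\n", "    # Hyperparameters arrive as command-line arguments.\n", "    parser.add_argument('--epochs', type=int, default=20)\n", "    parser.add_argument('--batch_size', type=int, default=128)\n", "    parser.add_argument('--num_words', type=int)\n", "    parser.add_argument('--word_index_len', type=int)\n", "    parser.add_argument('--labels_index_len', type=int)\n", "    parser.add_argument('--embedding_dim', type=int)\n", "    parser.add_argument('--max_sequence_len', type=int)\n", "    # Input channels are exposed as local directories via environment variables.\n", "    parser.add_argument('--train', type=str, default=os.environ.get('SM_CHANNEL_TRAIN'))\n", "    parser.add_argument('--val', type=str, default=os.environ.get('SM_CHANNEL_VAL'))\n", "    parser.add_argument('--embedding', type=str, default=os.environ.get('SM_CHANNEL_EMBEDDING'))\n", "    args, _ = parser.parse_known_args()\n", "\n", "    # Load the datasets and the precomputed embedding matrix from the channel directories.\n", "    x_train = np.load(os.path.join(args.train, 'x_train.npy'))\n", "    y_train = np.load(os.path.join(args.train, 'y_train.npy'))\n", "    x_val = np.load(os.path.join(args.val, 'x_val.npy'))\n", "    y_val = np.load(os.path.join(args.val, 'y_val.npy'))\n", "    embedding_matrix = np.load(os.path.join(args.embedding, 'embedding.npy'))\n", "\n", "    # Build the embedding layer from hyperparameters rather than hardcoded values.\n", "    model = tf.keras.Sequential([\n", "        tf.keras.layers.Embedding(args.num_words,\n", "                                  args.embedding_dim,\n", "                                  embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),\n", "                                  input_length=args.max_sequence_len,\n", "                                  trainable=False),\n", "        tf.keras.layers.Conv1D(128, 5, activation='relu'),\n", "        tf.keras.layers.GlobalMaxPooling1D(),\n", "        tf.keras.layers.Dense(128, activation='relu'),\n", "        tf.keras.layers.Dense(args.labels_index_len, activation='softmax')])\n", "    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n", "    model.fit(x_train, y_train,\n", "              batch_size=args.batch_size,\n", "              epochs=args.epochs,\n", "              validation_data=(x_val, y_val))\n", "\n", "    # Save as a TensorFlow SavedModel so TensorFlow Serving can host it.\n", "    model.save(os.path.join(os.environ.get('SM_MODEL_DIR', '/opt/ml/model'), '1'))\n", "```" ] },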
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import sagemaker\n", "from sagemaker.tensorflow import TensorFlow\n", "\n", "train_instance_type = 'ml.p3.2xlarge'\n", "hyperparameters = {'epochs': 20, \n", " 'batch_size': 128, \n", " 'num_words': num_words,\n", " 'word_index_len': len(word_index),\n", " 'labels_index_len': len(labels_index),\n", " 'embedding_dim': EMBEDDING_DIM,\n", " 'max_sequence_len': MAX_SEQUENCE_LENGTH\n", " }\n", "\n", "estimator = TensorFlow(entry_point='train.py',\n", " source_dir='code',\n", " model_dir=model_dir,\n", " instance_type=train_instance_type,\n", " instance_count=1,\n", " hyperparameters=hyperparameters,\n", " role=sagemaker.get_execution_role(),\n", " base_job_name='tf-20-newsgroups',\n", " framework_version='2.1',\n", " py_version='py3',\n", " script_mode=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To start the training job, simply call the `fit` method of the `Estimator` object. The `inputs` parameter is the dictionary we created above, which defines three channels. Besides the usual channels for the training and validation datasets, there is a channel for the embedding matrix. This illustrates one aspect of the flexibility of SageMaker for setting up training jobs: in addition to data, you can pass in arbitrary files needed for training. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "estimator.fit(inputs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# SageMaker hosted endpoint\n", "\n", "If we wish to deploy the model to production, the next step is to create a SageMaker hosted endpoint. The endpoint will retrieve the TensorFlow SavedModel created during training and deploy it within a TensorFlow Serving container. This all can be accomplished with one line of code, an invocation of the Estimator's deploy method." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictor = estimator.deploy(initial_instance_count=1,instance_type='ml.m5.xlarge')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now compare the predictions generated by the endpoint with a sample of the validation data. The results are shown as integer labels from 0 to 19 corresponding to the 20 different newsgroups." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "results = predictor.predict(x_val[:10])['predictions'] \n", "\n", "print('predictions: \\t{}'.format(np.argmax(results, axis=1)))\n", "print('target values: \\t{}'.format(np.argmax(y_val[:10], axis=1)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When you're finished with your review of this notebook, you can delete the prediction endpoint to release the instance(s) associated with it." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sagemaker.Session().delete_endpoint(predictor.endpoint_name)" ] } ], "metadata": { "kernelspec": { "display_name": "conda_tensorflow2_p36", "language": "python", "name": "conda_tensorflow2_p36" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.13" } }, "nbformat": 4, "nbformat_minor": 2 }