{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Fine-tune and deploy Wav2Vec2 model for speech recognition with Hugging Face and SageMaker\n", "\n", "## Background\n", "\n", "Wav2Vec2 is a transformer-based architecture for ASR tasks and was released in September 2020. We show its simplified architecture diagram below. For more details, see the [original paper](https://arxiv.org/abs/2006.11477). The model is composed of a multi-layer convolutional network (CNN) as feature extractor, which takes input audio signal and outputs audio representations, also considered as features. They are fed into a transformer network to generate contextualized representations. This part of training can be self-supervised, it means that the transformer can be trained with a mass of unlabeled speech and learn from them. Then the model is fine-tuned on labeled data with Connectionist Temporal Classification (CTC) algorithm for specific ASR tasks. The base model we use in this post is [Wav2Vec2-Base-960h](https://huggingface.co/facebook/wav2vec2-base-960h), it is fine-tuned on 960 hours of Librispeech on 16kHz sampled speech audio. \n", "<img src=\"images/wav2vec2.png\">\n", "\n", "Connectionist Temporal Classification (CTC) is character-based algorithm. During the training, it’s able to demarcate each character of the transcription in the speech automatically, so the timeframe alignment is not required between audio signal and transcription. For example, one audio clip says “Hello Worldâ€, we don’t need to know in which second word “hello†is located. It saves a lot of labeling effort for ASR use cases. If you are interested in how the algorithm works underneath, see [this article](https://distill.pub/2017/ctc/) for more information. \n", "\n", "\n", "## Notebook Overview \n", "\n", "In this notebook, we use [SUPERB \n", "(Speech processing Universal PERformance Benchmark) dataset](https://huggingface.co/datasets/superb) that available from Hugging Face Datasets library, and fine-tune the Wav2Vec2 model and deploy it as SageMaker endpoint for real-time inference for an ASR task. \n", "<img src=\"images/solution_overview.png\">\n", "\n", "First of all, we show how to load and preprocess the SUPERB dataset in SageMaker environment in order to obtain tokenizer and feature extractor, which are required for fine-tuning the Wav2Vec2 model. Then we use SageMaker Script Mode for training and inference steps, that allows you to define and use custom training and inference scripts and SageMaker provides supported Hugging Face framework Docker containers. For more information about training and serving Hugging Face models on SageMaker, see Use [Hugging Face with Amazon SageMaker](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html). This functionality is available through the development of Hugging Face [AWS Deep Learning Container (DLC)](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/what-is-dlc.html). \n", "\n", "This notebook is tested in both SageMaker Studio and SageMaker Notebook environments. Below shows detailed setup. \n", "- SageMaker Studio: **ml.m5.xlarge** instance with **Data Science** kernel.\n", "- SageMaker Notebook: **ml.m5.xlarge** instance with **conda_python3** kernel. \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set up \n", "First, install the dependencies." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "!pip install sagemaker --upgrade\n", "!pip install \"transformers>=4.4.2\" \n", "!pip install s3fs --upgrade\n", "!pip install datasets --upgrade \n", "!pip install librosa\n", "!pip install torch # framework is required for transformer " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**soundfile** library will be used to read raw audio files and convert them into arrays. Before installing **soundfile** python library, package **libsndfile** needs to be installed. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!conda install -c conda-forge libsndfile -y\n", "!pip install soundfile" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Following let's import common python libraries. Create a S3 bucket in AWS console for this project, and replace **[BUCKET_NAME]** with your bucket. \n", "Get the execution role which allows training and servering jobs to access your data. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import json\n", "import time\n", "import boto3\n", "import numpy as np\n", "import random\n", "import soundfile \n", "import sagemaker\n", "import sagemaker.huggingface\n", "\n", "BUCKET=\"[BUCKET_NAME]\" # please use your bucket name\n", "PREFIX = \"huggingface-blog\" \n", "ROLE = sagemaker.get_execution_role()\n", "sess = sagemaker.Session(default_bucket=BUCKET)\n", "\n", "print(f\"sagemaker role arn: {ROLE}\")\n", "print(f\"sagemaker bucket: {sess.default_bucket()}\")\n", "print(f\"sagemaker session region: {sess.boto_region_name}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Data Pre-processing\n", "We are using SUPERB dataset for this notebook, which can be loaded from Hugging Face [dataset library](https://huggingface.co/datasets/superb) directly using `load_dataset` function. SUPERB is a leaderboard to benchmark the performance of a shared model across a wide range of speech processing tasks with minimal architecture changes and labeled data. It also includes speaker_id and chapter_id etc., these columns are removed from the dataset, and we only keep audio files and transcriptions to fine-tune the Wav2Vec2 model for an audio recognition task, which transcribes speech to text. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from datasets import load_dataset, DatasetDict\n", "data = load_dataset(\"superb\", 'asr', ignore_verifications=True) \n", "data = data.remove_columns(['speaker_id', 'chapter_id', 'id'])\n", "# reduce the data volume for this example. 
"data = data['test'] \n", "\n", "train_test = data.train_test_split(test_size=0.2)\n", "dataset = DatasetDict({\n", " 'train': train_test['train'],\n", " 'test': train_test['test']})\n", "\n", "# helper function to remove special characters and convert text to lower case\n", "def remove_special_characters(batch):\n", " import re\n", " chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\\"]'\n", " \n", " batch[\"text\"] = re.sub(chars_to_ignore_regex, '', batch[\"text\"]).lower()\n", " return batch\n", "\n", "dataset = dataset.map(remove_special_characters)\n", "print(dataset)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dataset['train'][0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Build vocabulary file \n", "The Wav2Vec2 model uses the [CTC](https://en.wikipedia.org/wiki/Connectionist_temporal_classification) algorithm to train deep neural networks on sequence problems, and its output is a single letter or a blank token. It uses a character-based tokenizer. Hence, we extract the distinct letters from the dataset and build the vocabulary file. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def extract_characters(batch):\n", " texts = \" \".join(batch[\"text\"])\n", " vocab = list(set(texts))\n", " return {\"vocab\": [vocab], \"texts\": [texts]}\n", "\n", "vocabs = dataset.map(extract_characters, batched=True, batch_size=-1, \n", " keep_in_memory=True, remove_columns=dataset.column_names[\"train\"])\n", "\n", "vocab_list = list(set(vocabs[\"train\"][\"vocab\"][0]) | set(vocabs[\"test\"][\"vocab\"][0]))\n", "\n", "vocab_dict = {v: k for k, v in enumerate(vocab_list)}\n", "\n", "vocab_dict[\"|\"] = vocab_dict[\" \"]\n", "del vocab_dict[\" \"]\n", "\n", "vocab_dict[\"[UNK]\"] = len(vocab_dict) # add \"unknown\" token \n", "vocab_dict[\"[PAD]\"] = len(vocab_dict) # add a padding token that corresponds to CTC's \"blank token\"\n", "\n", "with open('vocab.json', 'w') as vocab_file:\n", " json.dump(vocab_dict, vocab_file)\n", " \n", "# the vocab.json file will be used in the training container, so upload it to the S3 bucket for later steps \n", "s3 = boto3.client('s3')\n", "s3.upload_file('vocab.json', BUCKET, f'{PREFIX}/vocab.json')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create tokenizer with vocabulary file and feature extractor \n", "The Wav2Vec2 model bundles a tokenizer and a feature extractor. We can use the vocab.json file created in the previous step to create the Wav2Vec2CTCTokenizer. The Wav2Vec2FeatureExtractor makes sure that the dataset used for fine-tuning has the same audio sampling rate as the dataset used for pretraining. Finally, a Wav2Vec2Processor wraps the feature extractor and the tokenizer into a single processor.\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor, Wav2Vec2Processor\n", "\n", "# create the Wav2Vec2 tokenizer\n", "tokenizer = Wav2Vec2CTCTokenizer(\"vocab.json\", unk_token=\"[UNK]\", pad_token=\"[PAD]\", word_delimiter_token=\"|\")\n", "\n", "# create the Wav2Vec2 feature extractor\n", "feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, \n", " padding_value=0.0, do_normalize=True, return_attention_mask=False)\n", "# create a processor pipeline \n", "processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Prepare train and test datasets" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def extract_array_samplingrate(batch):\n", " batch[\"speech\"] = batch['audio']['array'].tolist()\n", " batch[\"sampling_rate\"] = batch['audio']['sampling_rate']\n", " batch[\"target_text\"] = batch[\"text\"]\n", " return batch\n", "\n", "dataset = dataset.map(extract_array_samplingrate, remove_columns=dataset.column_names[\"train\"])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# check one audio file from the training dataset\n", "import IPython.display as ipd\n", "\n", "rand_int = random.randint(0, len(dataset[\"train\"]) - 1)\n", "print(dataset[\"train\"][rand_int][\"target_text\"])\n", "ipd.Audio(data=np.asarray(dataset[\"train\"][rand_int][\"speech\"]), autoplay=True, rate=16000)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# process the dataset with the processor pipeline created above\n", "def process_dataset(batch): \n", " batch[\"input_values\"] = processor(batch[\"speech\"], sampling_rate=batch[\"sampling_rate\"][0]).input_values\n", "\n", " with processor.as_target_processor():\n", " batch[\"labels\"] = processor(batch[\"target_text\"]).input_ids\n", " return batch\n", "\n", "data_processed = dataset.map(process_dataset, remove_columns=dataset.column_names[\"train\"], batch_size=8, batched=True)\n", "\n", "train_dataset = data_processed['train']\n", "test_dataset = data_processed['test']" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we upload the train and test datasets to S3.\n" ] },
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from datasets.filesystems import S3FileSystem\n", "s3 = S3FileSystem()\n", "\n", "# save train_dataset to s3\n", "training_input_path = f's3://{BUCKET}/{PREFIX}/train'\n", "train_dataset.save_to_disk(training_input_path,fs=s3)\n", "\n", "# save test_dataset to s3\n", "test_input_path = f's3://{BUCKET}/{PREFIX}/test'\n", "test_dataset.save_to_disk(test_input_path,fs=s3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Fine-tune the HuggingFace model (Wav2Vec2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Training script\n", "\n", "Here we are using SageMaker HuggingFace DLC (Deep Learning Container) script mode to construct the training and inference job, which allows you to write custom trianing and serving code and using HuggingFace framework containers that maintained and supported by AWS. \n", "\n", "When we create a training job using the script mode, the `entry_point` script, hyperparameters, its dependencies (inside requirements.txt) and input data (train and test datasets) will be copied into the container. Then it invokes the `entry_point` training script, where the train and test datasets will be loaded, training steps will be executed and model artifacts will be saved in `/opt/ml/model` in the container. After training, artifacts in this directory are uploaded to S3 for later model hosting.\n", "\n", "This script is saved in directory `scripts`, and you can inspect the training script by running the next cell. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pygmentize scripts/train.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Creating an Estimator and start a training job\n", "\n", "Worth to highlight that, when you create a Hugging Face Estimator, you can configure hyperparameters and provide a custom parameter into the training script, such as `vocab_url` in this example. Also you can specify the metrics in the Estimator, and parse the logs of metrics and send them to CloudWatch to monitor and track the training performance. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.huggingface import HuggingFace\n", "\n", "#create an unique id to tag training job, model name and endpoint name. \n", "id = int(time.time())\n", "\n", "TRAINING_JOB_NAME = f\"huggingface-wav2vec2-training-{id}\"\n", "print('Training job name: ', TRAINING_JOB_NAME)\n", "\n", "vocab_url = f\"s3://{BUCKET}/{PREFIX}/vocab.json\"\n", "hyperparameters = {'epochs':10, # you can increase the epoch number to improve model accuracy\n", " 'train_batch_size': 8,\n", " 'model_name': \"facebook/wav2vec2-base\",\n", " 'vocab_url': vocab_url\n", " }\n", "\n", "# define metrics definitions\n", "metric_definitions=[\n", " {'Name': 'eval_loss', 'Regex': \"'eval_loss': ([0-9]+(.|e\\-)[0-9]+),?\"},\n", " {'Name': 'eval_wer', 'Regex': \"'eval_wer': ([0-9]+(.|e\\-)[0-9]+),?\"},\n", " {'Name': 'eval_runtime', 'Regex': \"'eval_runtime': ([0-9]+(.|e\\-)[0-9]+),?\"},\n", " {'Name': 'eval_samples_per_second', 'Regex': \"'eval_samples_per_second': ([0-9]+(.|e\\-)[0-9]+),?\"},\n", " {'Name': 'epoch', 'Regex': \"'epoch': ([0-9]+(.|e\\-)[0-9]+),?\"}]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We use the [HuggingFace estimator class](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/sagemaker.huggingface.html) to train our model. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "We use the [HuggingFace estimator class](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/sagemaker.huggingface.html) to train our model. When creating the estimator, the following parameters need to be specified: \n", "\n", "* **entry_point**: the name of the training script. It loads data from the input channels, configures training with hyperparameters, trains a model, and saves the model. \n", "* **source_dir**: the location of the training scripts. \n", "* **transformers_version**: the Hugging Face transformers library version we want to use.\n", "* **pytorch_version**: the PyTorch version that is compatible with the transformers library. \n", "\n", "**Instance Selection**: For this use case and dataset, we use one ml.p3.2xlarge instance, and the training job is able to finish within two hours. You can select a more powerful instance to reduce the training time; however, it will incur more cost. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "OUTPUT_PATH= f's3://{BUCKET}/{PREFIX}/{TRAINING_JOB_NAME}/output/'\n", "\n", "huggingface_estimator = HuggingFace(entry_point='train.py',\n", " source_dir='./scripts',\n", " output_path= OUTPUT_PATH, \n", " instance_type='ml.p3.2xlarge',\n", " instance_count=1,\n", " transformers_version='4.6.1',\n", " pytorch_version='1.7.1',\n", " py_version='py36',\n", " role=ROLE,\n", " hyperparameters = hyperparameters,\n", " metric_definitions = metric_definitions,\n", " )\n", "\n", "# start the training job using the fit function; training takes approximately 2 hours to complete\n", "huggingface_estimator.fit({'train': training_input_path, 'test': test_input_path},\n", " job_name=TRAINING_JOB_NAME)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From the training logs you can see that, after 10 epochs of training, the model evaluation metric WER (word error rate) reaches around 0.32 on this subset of the SUPERB dataset. You can increase the number of epochs or use the full dataset to improve the model further. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Deploy the model as a SageMaker endpoint and run inference" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Inference script\n", "\n", "We use the [SageMaker HuggingFace inference toolkit](https://github.com/aws/sagemaker-huggingface-inference-toolkit) to host our fine-tuned model. It provides default preprocessing, predict, and postprocessing functions for certain tasks. However, the default capabilities cannot serve our model properly, so we define the functions below in the `inference.py` script to override the default settings with our custom requirements; an illustrative sketch of such a script follows the list.\n", "\n", "* `model_fn(model_dir)`: overrides the default method for loading the model; the returned model is used in `predict_fn()` for predictions. It receives the argument `model_dir`, the path to your unzipped model.tar.gz.\n", "* `input_fn(input_data, content_type)`: overrides the default method for preprocessing; the returned data is used in `predict_fn()` for predictions. Its inputs are `input_data`, the raw body of your request, and `content_type`, the content type from the request header.\n", "* `predict_fn(processed_data, model)`: overrides the default method for predictions; the returned predictions are used in `output_fn()` for postprocessing. Its inputs are `processed_data`, the result of `input_fn()`, and the model returned by `model_fn()`.\n", "* `output_fn(prediction, accept)`: overrides the default method for postprocessing; the returned result will be the response body of your request (e.g. JSON). Its inputs are `prediction`, the result of `predict_fn()`, and `accept`, the response content type requested in the HTTP request, e.g. `application/json`.\n" ] },
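{ "cell_type": "markdown", "metadata": {}, "source": [ "The actual script lives in `scripts/inference.py` and is shown by the `pygmentize` cell below. Purely as a reference, the sketch here illustrates what such overrides could look like for the JSON payload format used later in this notebook (a dictionary with `speech_array` and `sampling_rate` keys); it is an illustrative assumption, not the exact contents of the script. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# illustrative sketch of inference.py overrides (an assumption, not the exact script in scripts/)\n", "import json\n", "\n", "import torch\n", "from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor\n", "\n", "def model_fn(model_dir):\n", "    # load the fine-tuned processor and model from the unzipped model.tar.gz\n", "    processor = Wav2Vec2Processor.from_pretrained(model_dir)\n", "    model = Wav2Vec2ForCTC.from_pretrained(model_dir)\n", "    return model, processor\n", "\n", "def input_fn(input_data, content_type):\n", "    # deserialize the JSON request body sent by predictor.predict(...)\n", "    if content_type == \"application/json\":\n", "        return json.loads(input_data)\n", "    raise ValueError(f\"Unsupported content type: {content_type}\")\n", "\n", "def predict_fn(processed_data, model_and_processor):\n", "    model, processor = model_and_processor\n", "    inputs = processor(processed_data[\"speech_array\"],\n", "                       sampling_rate=processed_data[\"sampling_rate\"],\n", "                       return_tensors=\"pt\", padding=True)\n", "    with torch.no_grad():\n", "        logits = model(inputs.input_values).logits\n", "    predicted_ids = torch.argmax(logits, dim=-1)\n", "    # decode the predicted token ids back to text\n", "    return processor.batch_decode(predicted_ids)\n", "\n", "def output_fn(prediction, accept):\n", "    # serialize the transcription as the response body\n", "    return json.dumps(prediction)" ] },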
{ "cell_type": "markdown", "metadata": {}, "source": [ "**Note**: The inference toolkit can serve tasks from architectures whose names end with: 'TapasForQuestionAnswering', 'ForQuestionAnswering', 'ForTokenClassification', 'ForSequenceClassification', 'ForMultipleChoice', 'ForMaskedLM', 'ForCausalLM', 'ForConditionalGeneration', 'MTModel', 'EncoderDecoderModel', 'GPT2LMHeadModel', 'T5WithLMHeadModel', as of January 2022. \n", "\n", "This script is saved in the `scripts` directory; you can inspect the inference script by running the next cell. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pygmentize scripts/inference.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create a HuggingFaceModel from the estimator " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We use the [HuggingFaceModel class](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/sagemaker.huggingface.html#hugging-face-model) to create a model object, which can be deployed to a SageMaker endpoint. When creating the model, the following parameters need to be specified: \n", "\n", "* **entry_point**: the name of the inference script. The methods defined in the inference script are applied to the endpoint. \n", "* **source_dir**: the location of the inference scripts. \n", "* **transformers_version**: the Hugging Face transformers library version we want to use. It should be consistent with the training step. \n", "* **pytorch_version**: the PyTorch version that is compatible with the transformers library. It should be consistent with the training step.\n", "* **model_data**: the Amazon S3 location of a SageMaker model data `.tar.gz` file.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sagemaker.huggingface import HuggingFaceModel\n", "\n", "huggingface_model = HuggingFaceModel(\n", " entry_point = 'inference.py',\n", " source_dir='./scripts',\n", " name = f'huggingface-wav2vec2-model-{id}',\n", " transformers_version='4.6.1', \n", " pytorch_version='1.7.1', \n", " py_version='py36',\n", " model_data=huggingface_estimator.model_data,\n", " role=ROLE,\n", " )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Deploy the model to an endpoint \n", "\n", "Next, we create a predictor by using the `model.deploy` function. You can change the instance count and instance type based on your performance requirements. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictor = huggingface_model.deploy(\n", " initial_instance_count=1,\n", " instance_type=\"ml.g4dn.xlarge\", \n", " endpoint_name = f'huggingface-wav2vec2-endpoint-{id}'\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Inference audio files \n", "\n", "After the endpoint is deployed, you can run the prediction tests below to check the model's performance.\n" ] },
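{ "cell_type": "markdown", "metadata": {}, "source": [ "Note that the fine-tuned Wav2Vec2 model expects audio sampled at 16kHz. If you want to test your own recordings and they use a different sampling rate, you can resample them first with **librosa** (installed during setup), as in the sketch below; the file name used here is only a placeholder for your own audio file. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# optional: resample your own audio file to the 16kHz rate the model expects\n", "# 'my_audio.wav' is a placeholder file name, not part of this notebook's assets\n", "import librosa\n", "\n", "# speech_array, sampling_rate = librosa.load('my_audio.wav', sr=16000)" ] },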
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# inference audio file that download from S3 bucket or inference local audio file \n", "import soundfile\n", "\n", "# s3.download_file(BUCKET, 'huggingface-blog/sample_audio/xxxxxx.wav', 'downloaded.wav')\n", "# file_name ='downloaded.wav'\n", "\n", "# download a sample audio file by using below link\n", "!wget https://datashare.ed.ac.uk/bitstream/handle/10283/343/MKH800_19_0001.wav\n", " \n", "file_name ='MKH800_19_0001.wav'\n", "\n", "speech_array, sampling_rate = soundfile.read(file_name)\n", "\n", "ipd.Audio(data=np.asarray(speech_array), autoplay=True, rate=16000)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "json_request_data = {\"speech_array\": speech_array.tolist(),\n", " \"sampling_rate\": sampling_rate}\n", "\n", "prediction = predictor.predict(json_request_data)\n", "print(prediction)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Please note**, as we are using real-time inference endpoint, the maximum payload size is 6MB. If you see any error message like \"Received client error (413) from primary and could not load the entire response body\", please use blow code to check your payload size. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import sys\n", "sys.getsizeof(speech_array) " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Cleanup\n", "\n", "Finally, please remember to delete the Amazon SageMaker endpoint to avoid charges:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictor.delete_endpoint()" ] } ], "metadata": { "instance_type": "ml.m5.xlarge", "kernelspec": { "display_name": "Python 3 (Data Science)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:eu-west-1:470317259841:image/datascience-1.0" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" } }, "nbformat": 4, "nbformat_minor": 4 }