{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Fine-tuning and deploying ProtBert Model for Protein Classification using Amazon SageMaker\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Contents\n", "1. [Motivation](#Motivation)\n", "2. [What is ProtBert?](#What-is-ProtBert?)\n", "3. [Notebook Overview](#Notebook-Overview)\n", " - [Setup](#Setup)\n", "4. [Dataset](#Dataset)\n", " - [Download Data](#Download-Data)\n", "5. [Data Exploration](#Data-Exploration)\n", " - [Upload Data to S3](#Upload-Data-to-S3)\n", "6. [Training script](#Training-script)\n", "7. [Train on Amazon SageMaker](#Train-on-Amazon-SageMaker)\n", "8. [Deploy the Model on Amazon SageMaker](#Deploy-the-model-on-Amazon-SageMaker)\n", " - [Create a model object](#Create-a-model-object)\n", " - [Deploy the model on an endpoint](#Deploy-the-model-on-an-endpoint)\n", "9. [Predicting SubCellular Localization of Protein Sequences](#Predicting-SubCellular-Localization-of-Protein-Sequences)\n", "10. [References](#References)\n", "---" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Motivation\n", "\"Protein\n", "**Proteins** are the key fundamental macromolecules governing in biological bodies. The study of protein localization is important to comprehend the function of protein and has great importance for drug design and other applications. It also plays an important role in characterizing the cellular function of hypothetical and newly discovered proteins [1]. There are several research endeavours that aim to localize whole proteomes by using high-throughput approaches [2–4]. These large datasets provide important information about protein function, and more generally global cellular processes. However, they currently do not achieve 100% coverage of proteomes, and the methodology used can in some cases cause mislocalization of subsets of proteins [5,6]. Therefore, complementary methods are necessary to address these problems. In this notebook, we will leverage Natural Language Processing (NLP) techniques for protein sequence classification. The idea is to interpret protein sequences as sentences and their constituent – amino acids –\n", "as single words [7]. More specifically we will fine tune Pytorch ProtBert model from Hugging Face library." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## What is ProtBert?\n", "\n", "ProtBert is a pretrained model on protein sequences using a masked language modeling (MLM) objective. It is based on Bert model which is pretrained on a large corpus of protein sequences in a self-supervised fashion. This means it was pretrained on the raw protein sequences only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those protein sequences [8]. For more information about ProtBert, see [`ProtTrans: Towards Cracking the Language of Life’s Code Through Self-Supervised Deep Learning and High Performance Computing`](https://www.biorxiv.org/content/10.1101/2020.07.12.199554v2.full).\n", "\n", "---\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Notebook Overview\n", "\n", "This example notebook focuses on fine-tuning the Pytorch ProtBert model and deploying it using Amazon SageMaker, which is the most comprehensive and fully managed machine learning service. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Notebook Overview\n", "\n", "This example notebook focuses on fine-tuning the PyTorch ProtBert model and deploying it using Amazon SageMaker, a fully managed machine learning service. With SageMaker, data scientists and developers can quickly and easily build and train machine learning models, and then directly deploy them into a production-ready hosted environment.\n", "During training, we will leverage the SageMaker distributed data parallel (SMDataParallel) feature, which extends SageMaker's training capabilities for deep learning models with near-linear scaling efficiency, achieving fast time-to-train with minimal code changes.\n", "\n", "_**Note**_: Please select the kernel `Python 3 (PyTorch 1.6 Python 3.6 CPU Optimized)`.\n", "\n", "---" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Setup\n", "\n", "To start, we import some Python libraries and initialize a SageMaker session, S3 bucket and prefix, and IAM role.\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install --upgrade pip -q\n", "!pip install -U boto3 sagemaker -q\n", "!pip install seaborn -q" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Next, let's import the common libraries needed for the operations performed later." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import json\n", "import os\n", "import re\n", "import time\n", "\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "import pandas as pd\n", "import sagemaker\n", "import seaborn as sns\n", "import torch\n", "from torch import nn, optim\n", "from torch.utils.data import Dataset, DataLoader\n", "from tqdm import tqdm" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Next, let's verify the SageMaker SDK version, create a SageMaker session, and get the execution role, which is the IAM role ARN used to give training and hosting access to your data." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(sagemaker.__version__)\n", "sagemaker_session = sagemaker.Session()\n", "role = sagemaker.get_execution_role()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Now we will specify the S3 bucket and prefix where you will store your training data and model artifacts. These should be within the same region as the notebook instance, training, and hosting." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "bucket = sagemaker_session.default_bucket()\n", "prefix = \"sagemaker/DEMO-pytorch-bert\"\n" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "As the last step of setting up the environment, let's set a random seed so that we can reproduce the same results later." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "RANDOM_SEED = 43\n", "np.random.seed(RANDOM_SEED)\n", "torch.manual_seed(RANDOM_SEED)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "---" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Dataset" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We are going to use an open-source public dataset of protein sequences available [here](http://www.cbs.dtu.dk/services/DeepLoc-1.0/data.php). The dataset is a `fasta` file composed of a header and a protein sequence for each record. The header contains the accession number from UniProt, the annotated subcellular localization, and possibly a description field indicating whether the protein was part of the test set. ",
"The subcellular localization is followed by an additional membrane label, where S indicates soluble, M membrane-bound and U unknown [9].\n", "A sample of the data is as follows:" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "```\n", ">Q9SMX3 Mitochondrion-M test\n", "MVKGPGLYTEIGKKARDLLYRDYQGDQKFSVTTYSSTGVAITTTGTNKGSLFLGDVATQVKNNNFTADVKVST\n", "DSSLLTTLTFDEPAPGLKVIVQAKLPDHKSGKAEVQYFHDYAGISTSVGFTATPIVNFSGVVGTNGLSLGTDV\n", "AYNTESGNFKHFNAGFNFTKDDLTASLILNDKGEKLNASYYQIVSPSTVVGAEISHNFTTKENAITVGTQHAL\n", "DPLTTVKARVNNAGVANALIQHEWRPKSFFTVSGEVDSKAIDKSAKVGIALALKP\n", "```" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "A sequence in FASTA format begins with a single-line description, followed by lines of sequence data. The definition line (defline) is distinguished from the sequence data by a greater-than (>) symbol at the beginning. The word following the \">\" symbol is the identifier of the sequence, and the rest of the line is the description." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Download Data" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!wget https://services.healthtech.dtu.dk/services/DeepLoc-1.0/deeploc_data.fasta -P ./data -q" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Since the data is in FASTA format, we can leverage the `Bio.SeqIO.FastaIO` module to read the dataset. Let's install the Bio package." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip install Bio -q\n", "import Bio" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Using the Bio package, we will read the data and keep only the columns that are of interest. We will also add a space separator between each character in the sequence field, which will be useful for the BERT tokenizer during model training." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def read_fasta(file_path, columns):\n", "    from Bio.SeqIO.FastaIO import SimpleFastaParser\n", "    with open(file_path) as fasta_file:  # will close the handle cleanly\n", "        records = []\n", "        for title, sequence in SimpleFastaParser(fasta_file):\n", "            record = []\n", "            title_splits = title.split(None)\n", "            record.append(title_splits[0])  # first word is the ID\n", "            sequence = \" \".join(sequence)  # space-separate the amino acids\n", "            record.append(sequence)\n", "            record.append(len(sequence))  # length of the space-separated sequence\n", "            location_splits = title_splits[1].split(\"-\")\n", "            record.append(location_splits[0])  # second word, first part, is the location\n", "            record.append(location_splits[1])  # second word, second part, is the membrane label\n", "\n", "            # a third word (\"test\") marks the record as part of the test set\n", "            if len(title_splits) > 2:\n", "                record.append(0)\n", "            else:\n", "                record.append(1)\n", "\n", "            records.append(record)\n", "    return pd.DataFrame(records, columns=columns)\n", "\n", "data = read_fasta(\"./data/deeploc_data.fasta\", columns=[\"id\", \"sequence\", \"sequence_length\", \"location\", \"membrane\", \"is_train\"])\n", "data.head()" ] },
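{ "cell_type": "markdown", "metadata": {}, "source": [ "Since each amino acid will be treated as a single \"word\", it is worth a quick, optional sanity check of the amino-acid alphabet present in the parsed sequences (this cell is an added illustration and is not required for the rest of the notebook). As a side note, the ProtBert model card states that the rare amino acids U, Z, O and B were mapped to X during pretraining." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional sanity check: list the distinct amino-acid letters in the dataset\n", "amino_acids = sorted(set(\"\".join(data[\"sequence\"]).replace(\" \", \"\")))\n", "print(\"Number of distinct amino-acid letters:\", len(amino_acids))\n", "print(amino_acids)" ] },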
\n", "* _**location**_ : Classification given each sequence.\n", "* _**is_train**_ : Indicates whether the record be used for training or test. Will be used to seperate the dataset for traning and validation.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, let's verify if there are any missing values in the dataset." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.info()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "data.isnull().values.any()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As you can see, there are **no** missing values in this dataset. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Second, we will see the number of available classes (subcellular localization), which will be used for protein classification. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "unique_classes = data.location.unique()\n", "print(\"Number of classes: \", len(unique_classes))\n", "unique_classes" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can see that there are 10 unique classes in the dataset." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Third, lets check the sequence length." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%matplotlib inline\n", "%config InlineBackend.figure_format='retina'\n", "sns.set(style='whitegrid', palette='muted', font_scale=1.2)\n", "ax = sns.distplot(data['sequence_length'].values)\n", "ax.set_xlim(0, 3000)\n", "plt.title(f'sequence length distribution')\n", "plt.grid(True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is an important observation as PROTBERT model receives a fixed length of sentence as input. Usually the maximum length of a sentence depends on the data we are working on. For sentences that are shorter than this maximum length, we will have to add paddings (empty tokens) to the sentences to make up the length.\n", "\n", "As you can see from the above plot that most of the sequences lie under the length of around 1500, therefore, its a good idea to select the `max_length = 1536` but that will increase the training time for this sample notebook, therefore, we will use `max_length = 512`. You can experiment it with the bigger length and it does improves the accuracy as most of the subcellular localization information of protiens is stored at the end of the sequence. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next let's factorize the protein classes. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "categories = data.location.astype('category').cat\n", "data['location'] = categories.codes\n", "class_names = categories.categories\n", "num_classes = len(class_names)\n", "print(class_names)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, let's devide the dataset into training and test. We can leverage the `is_train` column to do the split. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_train = data[data.is_train == 1]\n", "df_train = df_train.drop([\"is_train\"], axis = 1)\n", "df_train.shape[0]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df_test = data[data.is_train == 0]\n", "df_test = df_test.drop([\"is_train\"], axis = 1)\n", "df_test.shape[0]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We got **11231** records as training set and **2773** records as the test set which is about 75:25 data split between the train and test. Also, the composition between multiple classes remains uniform between both datasets. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Upload Data to S3\n", "In order to accomodate model training on SageMaker we need to upload the data to s3 location. We are going to use the `sagemaker.Session.upload_data` function to upload our datasets to an S3 location. The return value inputs identifies the location -- we will use later when we start the training job." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "train_dataset_path = './data/deeploc_per_protein_train.csv'\n", "test_dataset_path = './data/deeploc_per_protein_test.csv'\n", "df_train.to_csv(train_dataset_path)\n", "df_test.to_csv(test_dataset_path)\n", "inputs_train = sagemaker_session.upload_data(train_dataset_path, bucket=bucket, key_prefix=prefix)\n", "inputs_test = sagemaker_session.upload_data(test_dataset_path, bucket=bucket, key_prefix=prefix)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"S3 location for training data: \", inputs_train )\n", "print(\"S3 location for testing data: \", inputs_test )" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Training script\n", "We use the [PyTorch-Transformers library](https://pytorch.org/hub/huggingface_pytorch-transformers), which contains PyTorch implementations and pre-trained model weights for many NLP models, including BERT. As mentioned above, we will use `ProtBert model` which is pre-trained on protein sequences. \n", "\n", "Our training script should save model artifacts learned during training to a file path called `model_dir`, as stipulated by the SageMaker PyTorch image. Upon completion of training, model artifacts saved in `model_dir` will be uploaded to S3 by SageMaker and will be used for deployment.\n", "\n", "We save this script in a file named `train.py`, and put the file in a directory named `code/`. The full training script can be viewed under `code/`.\n", "\n", "It also has the code required for distributed data parallel (DDP) training using SMDataParallel. It is very similar to a PyTorch training script you might run outside of SageMaker, but modified to run with SMDataParallel, which is a new capability in Amazon SageMaker to train deep learning models faster and cheaper. SMDataParallel's PyTorch client provides an alternative to PyTorch's native DDP. For details about how to use SMDataParallel's DDP in your native PyTorch script, see the [Getting Started with SMDataParallel tutorials](https://docs.aws.amazon.com/sagemaker/latest/dg/distributed-training.html#distributed-training-get-started)." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pygmentize code/train.py" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Train on Amazon SageMaker\n", "We use Amazon SageMaker to train and deploy a model using our custom PyTorch code. The Amazon SageMaker Python SDK makes it easier to run a PyTorch script in Amazon SageMaker using its PyTorch estimator. After that, we can use the SageMaker Python SDK to deploy the trained model and run predictions. For more information on how to use this SDK with PyTorch, see [the SageMaker Python SDK documentation](https://sagemaker.readthedocs.io/en/stable/using_pytorch.html).\n", "\n", "To start, we use the `PyTorch` estimator class to train our model. When creating our estimator, we make sure to specify a few things:\n", "\n", "* `entry_point`: the name of our PyTorch script. It contains our training script, which loads data from the input channels, configures training with hyperparameters, trains a model, and saves a model. It also contains code to load and run the model during inference.\n", "* `source_dir`: the location of our training scripts and requirements.txt file. \"requirements.txt\" lists packages you want to use with your script.\n", "* `framework_version`: the PyTorch version we want to use.\n", "\n", "The PyTorch estimator supports both single-machine & multi-machine, distributed PyTorch training using SMDataParallel. _Our training script supports distributed training for only GPU instances_. \n", "\n", "#### Instance types\n", "\n", "SMDataParallel supports model training on SageMaker with the following instance types only:\n", "\n", "- ml.p3.16xlarge\n", "- ml.p3dn.24xlarge [Recommended]\n", "- ml.p4d.24xlarge [Recommended]\n", "\n", "#### Instance count\n", "\n", "To get the best performance and the most out of SMDataParallel, you should use at least 2 instances, but you can also use 1 for testing this example.\n", "\n", "#### Distribution strategy\n", "\n", "Note that to use DDP mode, you update the the distribution strategy, and set it to use smdistributed dataparallel.\n", "\n", "After creating the estimator, we then call fit(), which launches a training job. We use the Amazon S3 URIs where we uploaded the training data earlier." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Training job will take around 20-25 mins to execute. 
\n", "\n", "from sagemaker.pytorch import PyTorch\n", "\n", "\n", "TRAINING_JOB_NAME=\"protbert-training-pytorch-{}\".format(time.strftime(\"%m-%d-%Y-%H-%M-%S\")) \n", "print('Training job name: ', TRAINING_JOB_NAME)\n", "\n", "estimator = PyTorch(\n", " entry_point=\"train.py\",\n", " source_dir=\"code\",\n", " role=role,\n", " framework_version=\"1.6.0\",\n", " py_version=\"py36\",\n", " instance_count=1, # this script support distributed training for only GPU instances.\n", " instance_type=\"ml.p3.16xlarge\",\n", " distribution={'smdistributed':{\n", " 'dataparallel':{\n", " 'enabled': True\n", " }\n", " }\n", " },\n", " debugger_hook_config=False,\n", " hyperparameters={\n", " \"epochs\": 3,\n", " \"num_labels\": num_classes,\n", " \"batch-size\": 4,\n", " \"test-batch-size\": 4,\n", " \"log-interval\": 100,\n", " \"frozen_layers\": 15,\n", " },\n", " metric_definitions=[\n", " {'Name': 'train:loss', 'Regex': 'Training Loss: ([0-9\\\\.]+)'},\n", " {'Name': 'test:accuracy', 'Regex': 'Validation Accuracy: ([0-9\\\\.]+)'},\n", " {'Name': 'test:loss', 'Regex': 'Validation loss: ([0-9\\\\.]+)'},\n", " ]\n", ")\n", "estimator.fit({\"training\": inputs_train, \"testing\": inputs_test}, job_name=TRAINING_JOB_NAME)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With `max_length=512` and running the model for only 3 epochs we get the validation accuracy of around 65%, which is pretty decent. You can optimize it further by trying bigger sequence length, increasing the number of epochs and tuning other hyperparamters. For details you can refer to the research paper: \n", "[`ProtTrans: Towards Cracking the Language of Life’s Code Through Self-Supervised Deep Learning and High Performance Computing`](https://arxiv.org/pdf/2007.06225.pdf)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Before, we deploy the model to an endpoint, let's first store the model to S3." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model_data = estimator.model_data\n", "print(\"Storing {} as model_data\".format(model_data))\n", "%store model_data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%store -r model_data\n", "\n", "# If no model was found, set it manually here.\n", "# model_data = 's3://sagemaker-{region}-XXX/protbert-training-pytorch-XX-XX-XXXX-XX-XX-XX/output/model.tar.gz'\n", "\n", "print(\"Using this model: {}\".format(model_data))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Deploy the model on Amazon SageMaker" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After training our model, we host it on an Amazon SageMaker Endpoint. To make the endpoint load the model and serve predictions, we implement a few methods in inference.py.\n", "\n", "- `model_fn()`: function defined to load the saved model and return a model object that can be used for model serving. The SageMaker PyTorch model server loads our model by invoking model_fn.\n", "- `input_fn()`: deserializes and prepares the prediction input. In this example, our request body is first serialized to JSON and then sent to model serving endpoint. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Create a model object\n", "You define the model object by using the SageMaker SDK's `PyTorchModel`, passing in the model artifacts from training (`model_data`) and the inference entry point. At serving time, `model_fn` loads the model and sets it to use a GPU, if available." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import sagemaker\n", "\n", "from sagemaker.pytorch import PyTorchModel\n", "ENDPOINT_NAME = \"protbert-inference-pytorch-1-{}\".format(time.strftime(\"%m-%d-%Y-%H-%M-%S\"))\n", "print(\"Endpoint name: \", ENDPOINT_NAME)\n", "model = PyTorchModel(model_data=model_data, source_dir='code',\n", "                     entry_point='inference.py', role=role, framework_version='1.6.0', py_version='py3')" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Deploy the model on an endpoint\n", "You create a predictor by using the `model.deploy` function. You can optionally change both the instance count and the instance type." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "predictor = model.deploy(initial_instance_count=1, instance_type='ml.m5.2xlarge', endpoint_name=ENDPOINT_NAME)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Predicting SubCellular Localization of Protein Sequences" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import boto3\n", "\n", "runtime = boto3.client('runtime.sagemaker')\n", "client = boto3.client('sagemaker')\n", "\n", "endpoint_desc = client.describe_endpoint(EndpointName=ENDPOINT_NAME)\n", "print(endpoint_desc)\n", "print('---'*30)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "We then configure the predictor to use `application/json` as the content type when sending requests to our endpoint:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictor.serializer = sagemaker.serializers.JSONSerializer()\n", "predictor.deserializer = sagemaker.deserializers.JSONDeserializer()" ] },
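{ "cell_type": "markdown", "metadata": {}, "source": [ "The predictor is a convenience wrapper around the SageMaker runtime API. As an optional aside (not required for the rest of the notebook), the cell below shows the equivalent call with the low-level `runtime` client created above, assuming the endpoint both accepts and returns JSON, as configured here:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional: invoke the endpoint directly via the low-level SageMaker runtime client\n", "sample_sequence = 'M G K K D A S T T R T P V D Q Y R K Q I G R Q D Y K K N K P V L K A T R L K A E A K K A A I G I K E V I L V T I A I L V L L F A F Y A F F F L N L T K T D I Y E D S N N'\n", "\n", "response = runtime.invoke_endpoint(\n", "    EndpointName=ENDPOINT_NAME,\n", "    ContentType='application/json',\n", "    Body=json.dumps(sample_sequence),\n", ")\n", "print(json.loads(response['Body'].read().decode('utf-8')))" ] },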
{ "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we use the returned predictor object to call the endpoint:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "protein_sequence = 'M G K K D A S T T R T P V D Q Y R K Q I G R Q D Y K K N K P V L K A T R L K A E A K K A A I G I K E V I L V T I A I L V L L F A F Y A F F F L N L T K T D I Y E D S N N'\n", "prediction = predictor.predict(protein_sequence)\n", "print(prediction)\n", "print(f'Protein Sequence: {protein_sequence}')\n", "print(\"Sequence Localization Ground Truth is: {} - prediction is: {}\".format('Endoplasmic.reticulum', class_names[prediction[0]]))" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "protein_sequence = 'M S M T I L P L E L I D K C I G S N L W V I M K S E R E F A G T L V G F D D Y V N I V L K D V T E Y D T V T G V T E K H S E M L L N G N G M C M L I P G G K P E'\n", "prediction = predictor.predict(protein_sequence)\n", "print(prediction)\n", "print(f'Protein Sequence: {protein_sequence}')\n", "print(\"Sequence Localization Ground Truth is: {} - prediction is: {}\".format('Nucleus', class_names[prediction[0]]))" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "seq = 'M G G P T R R H Q E E G S A E C L G G P S T R A A P G P G L R D F H F T T A G P S K A D R L G D A A Q I H R E R M R P V Q C G D G S G E R V F L Q S P G S I G T L Y I R L D L N S Q R S T C C C L L N A G T K G M C'\n", "prediction = predictor.predict(seq)\n", "print(prediction)\n", "print(f'Protein Sequence: {seq}')\n", "print(\"Sequence Localization Ground Truth is: {} - prediction is: {}\".format('Cytoplasm', class_names[prediction[0]]))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Cleanup\n", "\n", "Lastly, please remember to delete the Amazon SageMaker endpoint to avoid charges:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "predictor.delete_endpoint()" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## References\n", "- [1] Refining Protein Subcellular Localization (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1289393/)\n", "- [2] Kumar A, Agarwal S, Heyman JA, Matson S, Heidtman M, et al. Subcellular localization of the yeast proteome. Genes Dev. 2002;16:707–719.\n", "- [3] Huh WK, Falvo JV, Gerke LC, Carroll AS, Howson RW, et al. Global analysis of protein localization in budding yeast. Nature. 2003;425:686–691.\n", "- [4] Wiemann S, Arlt D, Huber W, Wellenreuther R, Schleeger S, et al. From ORFeome to biology: A functional genomics pipeline. Genome Res. 2004;14:2136–2144.\n", "- [5] Davis TN. Protein localization in proteomics. Curr Opin Chem Biol. 2004;8:49–53.\n", "- [6] Scott MS, Thomas DY, Hallett MT. Predicting subcellular localization via protein motif co-occurrence. Genome Res. 2004;14:1957–1966.\n", "- [7] ProtTrans: Towards Cracking the Language of Life's Code Through Self-Supervised Deep Learning and High Performance Computing (https://www.biorxiv.org/content/10.1101/2020.07.12.199554v2.full.pdf)\n", "- [8] ProtBert Hugging Face (https://huggingface.co/Rostlab/prot_bert)\n", "- [9] DeepLoc-1.0: Eukaryotic protein subcellular localization predictor (http://www.cbs.dtu.dk/services/DeepLoc-1.0/data.php)" ] } ], "metadata": { "instance_type": "ml.m5.4xlarge", "kernelspec": { "display_name": "Python 3 (PyTorch 1.6 Python 3.6 CPU Optimized)", "language": "python", "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:us-west-2:236514542706:image/pytorch-1.6-cpu-py36-ubuntu16.04-v1" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.10" } }, "nbformat": 4, "nbformat_minor": 4 }