{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Container Basics for Research\n", "\n", "\n", "There are several container technologies available, but Docker container technology is the most popular one. In this workshop, we will start with a simple application running in a Docker container. We will take a closer look at the key components and environments that are needed. We will also explore different ways of running Docker containers in AWS with different services. \n", "\n", "Why should we use containers for research?\n", "- Repeatable and shareable tools and applications\n", "- Portable - run in different environments (develop on laptop, test on-premises, run large scale in the cloud)\n", "- Stackable - run different stages of a pipeline/applications with different OS settings and libraries without conflicts\n", "- Easier development - each part of an analysis pipeline can be developed independently by different teams and with most appropriate technologies \n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Getting started\n", "\n", "In this workshop, we will use Jupyter/JupyterLab notebooks (with conda_python3 or similar kernels) to experiment with containers through the AWS SageMaker platform. Furthermore, we are going to setup the learning environment by taking advantage of the AWS Python SDK (boto3 library), which allows us to interact with the necessary AWS services via API calls. For standalone applications (e.g., scripts), we can either use the AWS SDK for the respective language (if available) or the AWS CLI. This notebook uses both to illustrate their use in the respective contexts." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import boto3\n", "import botocore\n", "import json\n", "import time\n", "import os\n", "import base64\n", "import docker\n", "import pandas as pd\n", "\n", "import project_path # path to helper methods\n", "from lib import workshop\n", "from botocore.exceptions import ClientError" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The containers we are building in this notebook will be ephemeral. Thus, we need a place to store our output files so that we can later inspect them from this notebook. We will first create a boto3 session and then an S3 bucket (command line tools would internally establish a session in a similar fashion before being able to interact with the AWS APIs). " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "session = boto3.session.Session()\n", "region = session.region_name\n", "bucket_name = workshop.create_bucket_name('sagemaker-container-ws-')\n", "\n", "bucket = workshop.create_bucket(region, session, bucket_name, False)\n", "print(bucket)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We'll also create a helper magic to easily create and save a file from the notebook." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from IPython.core.magic import register_line_cell_magic\n", "\n", "@register_line_cell_magic\n", "def writetemplate(line, cell):\n", " with open(line, 'w+') as f:\n", " f.write(cell.format(**globals()))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Running an application in a container locally.\n", "\n", "This SageMaker Jupyter notebook runs on an EC2 instance with Docker daemon pre-installed. Thus, we can build and test Docker containers on the very same instance. \n", "\n", "We are going to build a simple web server container that says \"Hello World!\". \n", "\n", "## The Dockerfile\n", "\n", "The Dockerfile is similar with a cooking recipe. It describes the steps necessary to prepare the environment of your application and get is started. \n", "\n", "If you already have automation scripts that prepare a new machine (VM or physical) for your application, then you are 90% done here as Docker would just run these inside a container. \n", "\n", "In the more general case, we start with a base image (i.e., well known initial configuration, such as a fresh install of the underlying OS, in this case ubuntu:18.04). We write this in the first line (FROM ...) of the Dockerfile below.\n", "\n", "We then install, configue, compile or build all the software we need (including libraries and other dependencies the application might have). Each \"RUN\" line below (note that some lines are longer and we use the backspace character to avoid horizontal scrolling) has one or more commands and arguments that would be executed in sequence. Each of these lines creates a \"layer\" or stage for building the application environment.\n", "\n", "For this example, we need the webserver (Apache) to listen to port 80/HTTP, thus we use the EXPOSE statement to instruct Docker to setup the appropriate networking environment.\n", "\n", "Lastly, we start the webserver with the \"CMD\" statement pointing to the \"run_apache.sh\" shell script.\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%writetemplate Dockerfile\n", "FROM public.ecr.aws/ubuntu/ubuntu:latest\n", "\n", " \n", "# Install dependencies and apache web server\n", "RUN apt-get update && apt-get -y install apache2\n", "\n", "# Create the index html\n", "RUN echo 'Hello World!' > /var/www/html/index.html\n", "\n", "# Configure apache \n", "RUN echo '. /etc/apache2/envvars' > /root/run_apache.sh && \\\n", " echo 'mkdir -p /var/run/apache2' >> /root/run_apache.sh && \\\n", " echo 'mkdir -p /var/lock/apache2' >> /root/run_apache.sh && \\\n", " echo '/usr/sbin/apache2 -D FOREGROUND' >> /root/run_apache.sh && \\\n", " chmod 755 /root/run_apache.sh\n", "\n", "EXPOSE 80\n", "\n", "CMD /root/run_apache.sh" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Now let's build the container. \n", "\n", "We already have the \"docker\" runtime installed in this Jupyter environment, so we can easily test our container locally. In other environments, we would need to install docker and its dependencies (e.g., \"sudo yum install docker\", or \"sudo apt-get install docker\" depending on the underlying OS).\n", "\n", "Before we test the container, we need to actually build it by following the recipe instructions from the Dockerfile above. We use the \"-t\" flag to build and tag the image. The resulting container image will be stored in the local Docker image registry. \n", "\n", "We will later learn how to use an external image registry (e.g., AWS Elastic Container Registry/ECR) to push and save the image there. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "!docker build -t simple_server ." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Run the container \n", "\n", "Recall that the webserver inside our container listens on port 80 for HTTP connections. When we run the container locally, we will bind (i.e., create a mapping) the container port 80 to the localhsot port 8080 (\"-d\" runs detached/background). Thus, we can use \"curl\" to access the webserver within the container on port 8080 of our local machine.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "c_id = !docker run -d -p 8080:80 simple_server\n", " \n", "! sleep 3 && docker ps \n", "! curl http://localhost:8080\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Our \"Hello World\" example should show up. \n", "\n", "We used the command line above to do all this work, but we can use the \"boto3\" library (or similar SDK in the desired language) as well to achieve the same thing. Below, we build a simple function that lists the running containers. Then, we stop our webserver example using its container id." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def list_all_running_containers():\n", " docker_client = docker.from_env()\n", " container_list = docker_client.containers.list()\n", " for c in container_list:\n", " print(c.attrs['Id'], c.attrs['State']['Status'])\n", " return container_list\n", "\n", "\n", "docker_client = docker.from_env()\n", "running_containers = list_all_running_containers()\n", "\n", "print(\"Stopping container... \", c_id)\n", "simple_server_container = docker_client.containers.get(c_id[0])\n", "simple_server_container.stop()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Let's run some real workload\n", "\n", "We are going to use a genomics example here and run the NCBI SRA (Sequence Read Archive) Tool (https://github.com/ncbi/sra-tools), fasterq-dump (https://github.com/ncbi/sra-tools/wiki/HowTo:-fasterq-dump) to extract fastq (sequence of nucleotides, such as GATTATTATTATTACCTTACA, https://en.wikipedia.org/wiki/FASTQ_format) from SRA-accessions.\n", "\n", "The command takes a package name as an argument:\n", "```\n", "$ fasterq-dump SRR000001\n", "```\n", "\n", "We will use the official NCBI container base image from https://hub.docker.com/r/ncbi/sra-tools as our starting point instead of building one from scratch. Third party versions exist too, but they might not be up to date (e.g., https://hub.docker.com/r/pegi3s/sratoolkit/)\n", "\n", "The workflow implemented by our container would be: \n", "1. Upon start, the container runs a script \"sratest.sh\".\n", "3. sratest.sh will \"prefetch\" the data package, whose name is passed via an environment variable. \n", "4. sratest.sh then run \"fasterq-dump\" on the data package\n", "5. sratest.sh will then upload the result to our s3://{bucket}\n", "\n", "The output of the fasterq-dump will be stored in s3://{bucket}/data/sra-toolkit/fasterq/{PACKAGE_NAME}\n", "\n", "We first need to setup our environment with the necessary credentials and input/ouput options.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "PACKAGE_NAME='SRR000002'\n", "\n", "# this is where the output will be stored\n", "sra_prefix = 'data/sra-toolkit/fasterq'\n", "sra_output = f\"s3://{bucket}/{sra_prefix}\"\n", "\n", "# to run the docker container locally, you need the access credentials inside the container when using AWS CLI\n", "# pass the current keys and session token to the container via environment variables\n", "credentials = boto3.session.Session().get_credentials()\n", "current_credentials = credentials.get_frozen_credentials() \n", "\n", "# Please don't print these out or store them in files: \n", "access_key=current_credentials.access_key\n", "secret_key=current_credentials.secret_key\n", "token=current_credentials.token\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We then build our \"sratest.sh\" script following the workflow above. Note that we use the settings above (PACKAGE_NAME, SRA_OUTPUT) to contextualize our build. We use the \"aws s3 sync\" command to push our output files into S3 for later use." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%writetemplate sratest.sh\n", "#!/bin/bash\n", "set -x\n", "\n", "## Prefetch accession\n", "prefetch $PACKAGE_NAME --output-directory /tmp\n", "\n", "## Perform conversion using 8 threads (more will lead to I/O issues)\n", "fasterq-dump $PACKAGE_NAME -e 8\n", "\n", "## Upload results to S3 bucket\n", "aws s3 sync . $SRA_OUTPUT/$PACKAGE_NAME\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We are now ready to build our container starting from the available NCBI official image instead of the bare OS. We still need python and the AWS CLI besides the \"sratest.sh\" script we created above. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%writetemplate Dockerfile.ncbi\n", "FROM ncbi/sra-tools\n", "\n", "RUN apk add aws-cli\n", "RUN export PATH=/usr/local/bin/aws/bin:$PATH\n", "ADD sratest.sh /usr/local/bin/sratest.sh\n", "RUN chmod +x /usr/local/bin/sratest.sh\n", "WORKDIR /tmp\n", "ENTRYPOINT [\"/bin/sh\",\"/usr/local/bin/sratest.sh\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now build a new container image with the above Dockerfile and tag it as \"myncbi/sra-tools\" so that we can use it later." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!docker build -t myncbi/sra-tools -f Dockerfile.ncbi ." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can give this new container a try. We will provide our runtime settings via environment variables specified in the Docker command line." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "PACKAGE_NAME='SRR000002'\n", "\n", "# only run this when you need to clean up the registry and storage\n", "#!docker system prune -a -f\n", "!docker run --env SRA_OUTPUT=$sra_output --env PACKAGE_NAME=$PACKAGE_NAME --env PACKAGE_NAME=$PACKAGE_NAME --env AWS_ACCESS_KEY_ID=$access_key \\\n", " --env AWS_SECRET_ACCESS_KEY=$secret_key --env AWS_SESSION_TOKEN=$token myncbi/sra-tools:latest\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's try something else by changing the PACKAGE_NAME and running our container again." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Now try a differnet package\n", "PACKAGE_NAME = 'SRR000003'\n", "!docker run --env SRA_OUTPUT=$sra_output --env PACKAGE_NAME=$PACKAGE_NAME --env PACKAGE_NAME=$PACKAGE_NAME --env AWS_ACCESS_KEY_ID=$access_key \\\n", " --env AWS_SECRET_ACCESS_KEY=$secret_key --env AWS_SESSION_TOKEN=$token myncbi/sra-tools:latest" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Starting from scratch with own Docker image\n", "\n", "So far, we have been using the existing NCBI base image. This was a great time saver, but perhaps we have some special code optimizations, need additional tools, or have certain settings that we want to take advantage of. Let's build our own image starting with the base Ubuntu Linux image.\n", "\n", "Workflow:\n", "1. Install tzdata - this is a dependency for other packages we need. Under normal circumstances, we do not need to explicitly install it; however, there is an issue with \"tzdata\" requiring an interaction to select timezone during the installation process, which would halt the docker built. Thus we install it separately with -y. \n", "2. Install wget and awscli.\n", "3. Download sratookit ubuntu binary and unzip into /opt. Need to generate an UUID for configuration to avoid issues with vdb-config interactive mode on sratoolkit version 2.10.3 and above.\n", "4. Set the PATH to include sratoolkit/bin and HOME to /tmp in order to setup the base configuration.\n", "5. USER nobody is needed to set the permission for sratookit configuration. \n", "6. Use the same sratest.sh script \n", "\n", "We will build a new Dockerfile for this:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%writetemplate Dockerfile.myown\n", "#FROM ubuntu:18.04 \n", "FROM public.ecr.aws/ubuntu/ubuntu:latest\n", "\n", "RUN apt-get update \n", "RUN DEBIAN_FRONTEND=\"noninteractive\" apt-get -y install tzdata \\\n", " && apt-get install -y curl wget libxml-libxml-perl awscli uuid-runtime\n", "\n", "### Known older version that is preconfigured\n", "#RUN wget -q https://ftp-trace.ncbi.nlm.nih.gov/sra/sdk/2.10.0/sratoolkit.2.10.0-ubuntu64.tar.gz -O /tmp/sratoolkit.tar.gz \\\n", "# && tar zxf /tmp/sratoolkit.tar.gz -C /opt/ && rm /tmp/sratoolkit.tar.gz && ln -s /opt/sratoolkit.2.10.0-ubuntu64 /opt/sratoolkit\n", "\n", "### Latest version requires workaround below\n", "RUN wget -q https://ftp-trace.ncbi.nlm.nih.gov/sra/sdk/current/sratoolkit.current-ubuntu64.tar.gz -O /tmp/sratoolkit.tar.gz \\\n", " && tar zxf /tmp/sratoolkit.tar.gz -C /opt/ && rm /tmp/sratoolkit.tar.gz && \\\n", " ln -s /opt/sratoolkit.$(curl -s https://ftp-trace.ncbi.nlm.nih.gov/sra/sdk/current/sratoolkit.current.version)-ubuntu64 /opt/sratoolkit\n", " \n", "ENV PATH=\"/opt/sratoolkit/bin/:${{PATH}}\"\n", "\n", "ADD sratest.sh /usr/local/bin/sratest.sh\n", "RUN chmod +x /usr/local/bin/sratest.sh\n", "\n", "### Workaround for vdb-config --interactive\n", "RUN mkdir /tmp/.ncbi && printf '/LIBS/GUID = \"%s\"\\n' `uuidgen` > /tmp/.ncbi/user-settings.mkfg\n", "\n", "ENV HOME=/tmp\n", "WORKDIR /tmp\n", "\n", "USER nobody\n", "ENTRYPOINT [\"/usr/local/bin/sratest.sh\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will use this Dockerfile to build a new image and tag it with \"myownncbi/sra-tools\" to differentiate it from the previous build." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "!docker build -t myownncbi/sra-tools -f Dockerfile.myown ." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now test it out in a similar manner using the PACKAGE_NAME='SRR000004' and \"myownncbi/sra-tools:latest\" arguments." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "PACKAGE_NAME='SRR000004'\n", "\n", "!docker run --env SRA_OUTPUT=$sra_output --env PACKAGE_NAME=$PACKAGE_NAME --env AWS_ACCESS_KEY_ID=$access_key \\\n", " --env AWS_SECRET_ACCESS_KEY=$secret_key --env AWS_SESSION_TOKEN=$token myownncbi/sra-tools:latest" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Recall that we used the \"aws s3 sync\" command in our \"sratest.sh\" script above to save our output into the desired bucket... Now we can check the result of our Docker experiments.\n", "\n", "We will use the boto3 session we created above to list the objects in our bucket and then download them. We could do the same with a simple \"aws s3 cp -r s3://{bucket}/ .\" or our favorite S3 browser tool." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# checkout the outfiles on S3\n", "s3_client = session.client('s3')\n", "objs = s3_client.list_objects(Bucket=bucket, Prefix=sra_prefix)\n", "for obj in objs['Contents']:\n", " fn = obj['Key']\n", " p = os.path.dirname(fn)\n", " if not os.path.exists(p):\n", " os.makedirs(p)\n", " s3_client.download_file(bucket, fn , fn)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We now have the files locally (in this Jupyter notebook environment) and can do a quick inspection." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "#You only need to do this once per kernel - used in analyzing fastq data. If you don't want to run the last inspection step below, then you don't need this.\n", "!pip install bioinfokit " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "from bioinfokit.analys import fastq\n", "fastq_iter = fastq.fastq_reader(file=f\"{sra_prefix}/{PACKAGE_NAME}/{PACKAGE_NAME}.fastq\") \n", "# read fastq file and print out the first 10, \n", "i = 0\n", "for record in fastq_iter:\n", " # get sequence headers, sequence, and quality values\n", " header_1, sequence, header_2, qual = record\n", " # get sequence length\n", " sequence_len = len(sequence)\n", " # count A bases\n", " a_base = sequence.count('A')\n", " if i < 10:\n", " print(sequence, qual, a_base, sequence_len)\n", " i +=1\n", "\n", "print(f\"Total number of records for package {PACKAGE_NAME} : {i}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Cleanup\n", "\n", "We are done! We built and ran multiple containers, got the results saved for later, and did a quick analysis. Let's do some cleanup." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!aws s3 rb s3://$bucket --force \n", "!rm -rf $sra_prefix" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Other ways to run the container \n", "\n", "We looked at creating and running containers locally in this notebook. While this is great for small examples, we often need more computing or storage resources than our local machine can provide. \n", "\n", "Please check the \"notebook/hpc/batch-fastqc\" notebook for running containers with the AWS Batch service. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "conda_python3", "language": "python", "name": "conda_python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.13" } }, "nbformat": 4, "nbformat_minor": 4 }