{ "cells": [ { "cell_type": "markdown", "metadata": { "run_control": { "frozen": false, "read_only": false } }, "source": [ "# Benchmarking Amazon S3 performance with PyWren and AWS Lambda\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Step by Step instructions\n", "\n", "### Setup Logging (optional)\n", "Only run the cell below if you want to see all debug messages from PyWren. _Note: The output will be rather chatty and lengthy._\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import logging\n", "logger = logging.getLogger()\n", "logger.setLevel(logging.INFO)\n", "%env PYWREN_LOGLEVEL=INFO" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's import all the necessary libraries:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2017-02-28T15:23:00.249109", "start_time": "2017-02-28T15:22:58.624921" }, "run_control": { "frozen": false, "read_only": false }, "scrolled": true }, "outputs": [], "source": [ "%pylab inline\n", "import numpy as np\n", "import time\n", "import s3_benchmark\n", "import pandas as pd\n", "import cPickle as pickle\n", "import seaborn as sns\n", "sns.set_style('whitegrid')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**IMPORTANT** - We need to update the S3 bucket variable with the Amazon S3 bucket that was created earlier by the AWS CloudFormation template. Please update the following variable with the bucket name that you copied from the Outputs tab.\n", "\n", "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "S3BUCKET = 'pywren-workshop-s3bucket-12apl9us09h1d'" ] }, { "cell_type": "markdown", "metadata": { "run_control": { "frozen": false, "read_only": false } }, "source": [ "## Running the benchmark\n", "\n", "_**IMPORTANT** - This lab will write and then read 100 files of 200 MB each in your Amazon S3 bucket, which will incur costs according to [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/) - make sure to clean out the bucket after the lab to avoid unnecessary cost_\n", "\n", "We are going to benchmark an Amazon S3 bucket by writing a large amount of data to the bucket and then reading that data back. We use the S3BUCKET variable, which points to the bucket that was created with the CloudFormation template. We will run 100 AWS Lambda functions in parallel (_your account might have a soft limit on concurrent executions; if you encounter an error, change the `--number` parameter to a smaller value - also feel free to try a higher number of workers to see the performance boost of parallel execution_).\n", "\n", "All of the actual benchmark code is in a stand-alone Python file, which you can call as follows. It places its output in `write.pickle`. If you are interested in the details, you can inspect the [s3_benchmark.py](/edit/Lab-1-Hello-World/benchmark_s3/s3_benchmark.py) file.\n",
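"\n", "Conceptually, each worker function receives a single key name, writes a file of the requested size into the bucket under that key, and returns its start time, end time and achieved throughput. The following is only an illustrative sketch of such a worker, assuming `boto3`, a fixed 200 MB payload and a hard-coded bucket name - the actual `run_command` in [s3_benchmark.py](/edit/Lab-1-Hello-World/benchmark_s3/s3_benchmark.py) takes these values from the command-line arguments and also covers the read case:\n", "\n", "```python\n", "import time\n", "import boto3\n", "import numpy as np\n", "\n", "BUCKET = 'your-benchmark-bucket'  # placeholder; the real script uses --bucket_name\n", "\n", "def run_command(key):\n", "    # Illustrative sketch only: write 200 MB of random bytes to the\n", "    # benchmark bucket under the given key and record the timings.\n", "    mb_per_file = 200\n", "    data = np.random.bytes(mb_per_file * 1024 * 1024)\n", "    client = boto3.client('s3')\n", "\n", "    start = time.time()\n", "    client.put_object(Bucket=BUCKET, Key=key, Body=data)\n", "    end = time.time()\n", "\n", "    mb_rate = mb_per_file / (end - start)  # MB/sec achieved by this worker\n", "    return start, end, mb_rate\n", "```\n", "\n",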
"Here is the relevant code snippet that invokes the distributed PyWren functions:\n", "\n", "```python\n", "wrenexec = pywren.default_executor()\n", "\n", "# create list of random keys\n", "keynames = [ key_prefix + str(uuid.uuid4().get_hex().upper()) for _ in range(number)]\n", "futures = wrenexec.map(run_command, keynames)\n", "\n", "results = [f.result() for f in futures]\n", "```\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2017-02-28T13:34:46.018500", "start_time": "2017-02-28T13:34:34.009541" }, "run_control": { "frozen": false, "read_only": false } }, "outputs": [], "source": [ "!python s3_benchmark.py write --mb_per_file=200 --bucket_name={S3BUCKET} --number=100 --outfile=write.pickle" ] }, { "cell_type": "markdown", "metadata": { "run_control": { "frozen": false, "read_only": false } }, "source": [ "Let's have a quick look at our bucket to see what files have been created. Here's a direct link to your [S3 Management Console](https://s3.console.aws.amazon.com/s3/buckets/?region=us-west-2&tab=overview).\n", "\n", "We then run the read test:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2017-02-28T13:35:52.454900", "start_time": "2017-02-28T13:34:46.022541" }, "run_control": { "frozen": false, "read_only": false } }, "outputs": [], "source": [ "!python s3_benchmark.py read --key_file=write.pickle --outfile=read.pickle" ] }, { "cell_type": "markdown", "metadata": { "run_control": { "frozen": false, "read_only": false } }, "source": [ "Now let's plot the results and see the distribution of read and write rates to Amazon S3 across our 200 AWS Lambda function executions:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2017-02-28T15:53:29.810421", "start_time": "2017-02-28T15:53:28.177439" }, "run_control": { "frozen": false, "read_only": false } }, "outputs": [], "source": [ "current_palette = sns.color_palette()\n", "read_color = current_palette[0]\n", "write_color = current_palette[1]\n", "\n", "write_data = pickle.load(open(\"write.pickle\", 'r'))\n", "write = s3_benchmark.compute_times_rates(write_data['results'])\n", "\n", "read_data = pickle.load(open(\"read.pickle\", 'r'))\n", "read_time_results = [r[:3] for r in read_data['results']]\n", "read = s3_benchmark.compute_times_rates(read_time_results)\n", "\n", "fig = pylab.figure(figsize=(8, 6))\n", "sns.distplot(read['rate'], label='read', color=read_color)\n", "sns.distplot(write['rate'], label='write', color=write_color)\n", "pylab.legend()\n", "pylab.xlabel(\"MB/sec\")\n", "pylab.ylabel(\"Function count (per 1000)\")\n", "pylab.grid(True)\n", "pylab.title(\"Read/Write rates per single AWS Lambda function\")\n", "sns.despine()" ] }, { "cell_type": "markdown", "metadata": { "run_control": { "frozen": false, "read_only": false } }, "source": [ "We can investigate when jobs start and how long they run. Each horizontal line is one job, and plotted on top is the total number of jobs running at each moment. 
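
The aggregate curve is computed by binning each job's start/end interval into fixed time bins and counting, for every bin, how many jobs were running; in this lab that work is done by `s3_benchmark.compute_times_rates`. The sketch below is only a rough illustration of that idea, assuming one-second bins - it is not the actual helper:

```python
import numpy as np

# Rough sketch (not the actual s3_benchmark helper): mark, for every job, the
# time bins during which it was running; summing over jobs gives the
# concurrency curve plotted below.
runtime_bins = np.arange(0, 600, 1.0)  # assumed one-second bins

def jobs_histogram(start_times, end_times, bins=runtime_bins):
    hist = np.zeros((len(start_times), len(bins)))
    for i, (s, e) in enumerate(zip(start_times, end_times)):
        hist[i, (bins >= s) & (bins < e)] = 1
    return hist  # hist.sum(axis=0) = number of jobs active in each bin
```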
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2017-02-28T15:53:30.921481", "start_time": "2017-02-28T15:53:30.308690" }, "run_control": { "frozen": false, "read_only": false } }, "outputs": [], "source": [ "from matplotlib.collections import LineCollection\n", "fig = pylab.figure(figsize=(10, 6))\n", "\n", "for plot_i, (datum, l, c) in enumerate([(read, 'read', read_color), \n", " (write, 'write', write_color)]):\n", " ax = fig.add_subplot(1, 2, 1 + plot_i)\n", "\n", " N = len(datum['start_time'])\n", " line_segments = LineCollection([[[datum['start_time'][i], i], \n", " [datum['end_time'][i], i]] for i in range(N)],\n", " linestyles='solid', color='k', alpha=0.4, linewidth=0.2)\n", " #line_segments.set_array(x)\n", "\n", " ax.add_collection(line_segments)\n", "\n", " ax.plot(s3_benchmark.runtime_bins, datum['runtime_jobs_hist'].sum(axis=0), \n", " c=c, label='active jobs total', \n", " zorder=-1)\n", "\n", "\n", " ax.set_xlim(0, np.max(datum['end_time']))\n", " ax.set_ylim(0, len(datum['start_time'])*1.05)\n", " ax.set_xlabel(\"time (sec)\")\n", " if plot_i == 0:\n", " ax.set_ylabel(\"AWS Lambda function execution\")\n", " ax.grid(False)\n", " ax.legend(loc='upper right')\n", "fig.tight_layout()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lastly let's plot all the values in aggregate over time:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "ExecuteTime": { "end_time": "2017-02-28T15:53:31.143363", "start_time": "2017-02-28T15:53:30.923109" }, "run_control": { "frozen": false, "read_only": false }, "scrolled": true }, "outputs": [], "source": [ "fig = pylab.figure(figsize=(8, 4))\n", "ax = fig.add_subplot(1, 1, 1)\n", "for d, l, c in [(read, 'read', read_color), (write, 'write', write_color)]:\n", " ax.plot(d['runtime_rate_hist'].sum(axis=0)/1000, label=l, c=c)\n", "ax.set_xlabel('time (sec)')\n", "ax.set_ylabel(\"GB/sec\")\n", "pylab.legend()\n", "sns.despine()\n", "fig.tight_layout()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Please note that the performance will increase with the amount of workers (parallel AWS Lambda function executions) you will use. We tried this with 2800 workers, and here's the performance chart:\n", "\n", "![2800 Workers](performance_2800workers.png)\n" ] } ], "metadata": { "anaconda-cloud": {}, "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.13" }, "toc": { "nav_menu": { "height": "66px", "width": "252px" }, "navigate_menu": true, "number_sections": true, "sideBar": true, "threshold": 4, "toc_cell": false, "toc_section_display": "block", "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 1 }