{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "### Overview\n", "\n", "This notebook is tested using SageMaker `Studio SparkMagic - PySpark Kernel`. Please ensure that you see `PySpark (SparkMagic)` in the top right on your notebook.\n", "\n", "This notebook does the following:\n", "\n", "* Demonstrates how you can visually connect Amazon SageMaker Studio Sparkmagic kernel to an EMR cluster\n", "* Explore and query data from a Hive table \n", "* Use the data locally\n", "\n", "----------\n", "\n", "\n", "When using PySpark kernel notebooks, there is no need to create a SparkContext or a HiveContext; those are all created for you automatically when you run the first code cell, and you'll be able to see the progress printed. The contexts are created with the following variable names:\n", "- SparkContext \n", "- sqlContext \n", "\n", "----------\n", "### PySpark magics \n", "\n", "The PySpark kernel provides some predefined “magics”, which are special commands that you can call with `%%` (e.g. `%%MAGIC` ). The magic command must be the first word in a code cell and allow for multiple lines of content. You can’t put comments before a cell magic.\n", "\n", "For more information on magics, see [here](http://ipython.readthedocs.org/en/stable/interactive/magics.html).\n", "\n", "#### Running locally (%%local)\n", "\n", "You can use the `%%local` magic to run your code locally on the Jupyter server without going to Spark. When you use %%local all subsequent lines in the cell will be executed locally. The code in the cell must be valid Python code.\n", "\n", "----------\n", "### Livy Connection\n", " \n", "Apache Livy is a service that enables easy interaction with a Spark cluster over a REST interface. It enables easy submission of Spark jobs or snippets of Spark code, synchronous or asynchronous result retrieval, as well as Spark Context management, all via a simple REST interface or an RPC client library. \n", " \n", "![image](https://livy.incubator.apache.org/assets/images/livy-architecture.png)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%local\n", "print(\"Demo Notebook\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Connection to EMR Cluster\n", "\n", "In the cell below, the code block is autogenerated. You can generate this code by clicking on the \"Cluster\" link on the top of the notebook and select the EMR cluster. This should auto populate a cell similar to the code below. The \"j-xxxxxxxxxxxx\" is the cluster id of the cluster selected. \n", "\n", "```\n", "%load_ext sagemaker_studio_analytics_extension.magics\n", "%sm_analytics emr connect --cluster-id j-xxxxxxxxxxxx --auth-type None \n", "```\n", "\n", "For our workshop we used a no-auth cluster for simplicity, but this works equally well for Kerberos, LDAP and HTTP auth mechanisms" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## SparkUI\n", "The above connection generates a presigned url for viewing the SparkUI and debugging commands throughout this notebook" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Lets start by viewing the available SparkMagic Commands" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%help" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the next cell, we will use the sqlContext that was return to use through the connection to query Hive and look at the databases and tables" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dbs = sqlContext.sql(\"show databases\")\n", "dbs.show()\n", "\n", "tables = sqlContext.sql(\"show tables\")\n", "tables.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pyspark.sql.functions import regexp_replace, col, concat, lit\n", "movie_reviews = sqlContext.sql(\"select * from movie_reviews\").cache()\n", "movie_reviews= movie_reviews.where(col('sentiment') != \"sentiment\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Shape\n", "print((movie_reviews.count(), len(movie_reviews.columns)))\n", "\n", "# Count of both positive and negative sentiments\n", "movie_reviews.groupBy('sentiment').count().show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the data size and size of each class (positive and negative) and visualize it. You can see that we have a balanced dataset with equal number on both classes (25000 each)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "pos_reviews = movie_reviews.filter(movie_reviews.sentiment == 'positive').collect()\n", "neg_reviews = movie_reviews.filter(movie_reviews.sentiment == 'negative').collect()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "def plot_counts(positive,negative):\n", " plt.rcParams['figure.figsize']=(6,6)\n", " plt.bar(0,positive,width=0.6,label='Positive Reviews',color='Green')\n", " plt.bar(2,negative,width=0.6,label='Negative Reviews',color='Red')\n", " handles, labels = plt.gca().get_legend_handles_labels()\n", " by_label = dict(zip(labels, handles))\n", " plt.legend(by_label.values(), by_label.keys())\n", " plt.ylabel('Count')\n", " plt.xlabel('Type of Review')\n", " plt.tick_params(\n", " axis='x', \n", " which='both', \n", " bottom=False, \n", " top=False, \n", " labelbottom=False) \n", " plt.show()\n", " \n", "plot_counts(len(pos_reviews),len(neg_reviews))\n", "%matplot plt" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, Let's inspect length of reviews using the pyspark.sql.functions module" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from pyspark.sql.functions import length\n", "reviewlengthDF = movie_reviews.select(length('review').alias('Length of Review')) \n", "reviewlengthDF.show() " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can also execute SparkSQL queries using the %%sql magic and pass results to a local data frame using the `-o` option. This allows for quick data exploration. Max rows returned by default is 2500. You can set the max rows by using the `-n` argument. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%sql -o movie_reviews_sparksql_df -n 10\n", "select * from movie_reviews " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can access and explore the data in the dataframe locally" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%local \n", "movie_reviews_sparksql_df.head(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Session logs (%%logs)\n", "\n", "Instead of the SparkUI, you can also get the logs of your current Livy session to debug any issues you encounter." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%logs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Session information (%%info)\n", "\n", "Livy is an open source REST server for Spark. When you execute a code cell in a sparkmagic notebook, it creates a Livy session to execute your code. `%%info` magic will display the current Livy session information." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%info" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Terminating all Livy Sessions\n", "\n", "You can terminate all livy sessions by using the `%%cleanup -f` command. Keep in mind that this will terminate all sessions, including ones used by other users or jobs that are connected to this cluster. To end your livy session, you can shut down the notebook kernel. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%cleanup -f" ] } ], "metadata": { "instance_type": "ml.t3.medium", "kernelspec": { "display_name": "PySpark (SparkMagic)", "language": "python", "name": "pysparkkernel__SAGEMAKER_INTERNAL__arn:aws:sagemaker:ap-southeast-2:452832661640:image/sagemaker-sparkmagic" }, "language_info": { "codemirror_mode": { "name": "python", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "pyspark", "pygments_lexer": "python3" } }, "nbformat": 4, "nbformat_minor": 4 }