{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Retail Demo Store Experimentation Workshop - Multi-Armed Bandit Experiment Exercise\n", "\n", "In this exercise we will define and launch an experiment using a solution to the [multi-armed bandit problem](https://en.wikipedia.org/wiki/Multi-armed_bandit) to evaluate multiple recommendation approaches concurrently. If you have not already stepped through the **[3.1-Overview](./3.1-Overview.ipynb)** workshop notebook, please do so now as it provides the foundation built upon in this exercise. It is also suggested, but not required, to complete the **[3.2-AB-Experiment](./3.2-AB-Experiment.ipynb)** and **[3.3-Interleaving-Experiment](./3.3-Interleaving-Experiment.ipynb)** workshop notebooks.\n", "\n", "Recommended Time: 30 minutes\n", "\n", "## Prerequisites\n", "\n", "Since this module uses the Retail Demo Store's [Recommendations](https://github.com/aws-samples/retail-demo-store/tree/master/src/recommendations) microservice to run experiments across variations that depend on the search and personalization features of the Retail Demo Store, it is assumed that you have either completed the [Search](../0-StartHere/Search.ipynb) and [Personalization](../1-Personalization/Lab-1-Introduction-and-data-preparation.ipynb) workshops or those resources have been pre-provisioned in your AWS environment. If you are unsure and attending an AWS managed event such as a workshop, check with your event lead." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 3: Multi-Armed Bandit Experiment\n", "\n", "In the first two exercises we demonstrated how to create and run experiments using a traditional A/B test and an interleaving recommendations test. Both of those approaches require the sequential steps of creating the experiment, running the experiment, and then evaluating the results to determine if a statistically valid preference emerged. The time when the experiment is running is typically referred to as the **exploration** phase since we are exposing users to two variations and gathering the data necessary to draw a conclusion. Only after the test has completed can we use the winning variation across all users to maximize conversion. This is referred to the **exploitation** phase. Minimizing exploration without jeopardizing the integrity of the results in order to maximize exploitation are fundamental elements to any successful experimentation strategy.\n", "\n", "For this exercise, we will take a entirely different approach to evaluating multiple recommendation implementations concurrently where the best performing variation is measured and exploited in real-time based on user feedback but the other variations are still ocassionally explored should the probabilities for conversion change over time. This approach is often referred to as the multi-armed bandit problem since it is analagous to the gambler entering a room of slot machines (i.e. one-armed bandits) and having to decide which arm to pull, how many times to pull each arm, and when to try other machines to maximize the payout.\n", "\n", "The multi-armed bandit approach is ideal for experimentation use-cases in short-lived and dynamic environments with many variations where longer drawn out testing approaches are unfeasible.\n", "\n", "Although bandit testing can support tens of variations in a single experiment, we will use three variations for this exercise. 
The first variation will represent our current implementation using the [**Default Product Resolver**](https://github.com/aws-samples/retail-demo-store/blob/master/src/recommendations/src/recommendations-service/experimentation/resolvers.py), the second variation will use the [**Similar Products Resolver**](https://github.com/aws-samples/retail-demo-store/blob/master/src/recommendations/src/recommendations-service/experimentation/resolvers.py), and the third variation will use the [**Personalize Recommendation Resolver**](https://github.com/aws-samples/retail-demo-store/blob/master/src/recommendations/src/recommendations-service/experimentation/resolvers.py). We will simulate this experiment using the related products feature on the product detail page. The following screenshot illustrates what an active multi-armed bandit test would look like on the product detail page with experiment annotations.\n", "\n", "![Multi-Armed Bandit](./images/ui-related-mab.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### MultiArmedBanditExperiment Class\n", "\n", "Before stepping through creating and executing our multi-armed bandit test, let's look at the relevant source code for the [**MultiArmedBanditExperiment**](https://github.com/aws-samples/retail-demo-store/blob/master/src/recommendations/src/recommendations-service/experimentation/experiment_mab.py) class that implements this experiment type in the Retail Demo Store project.\n", "\n", "As noted in the **3.1-Overview** notebook, all experiment types are subclasses of the abstract [**Experiment**](https://github.com/aws-samples/retail-demo-store/blob/master/src/recommendations/src/recommendations-service/experimentation/experiment.py) class. See **[3.1-Overview](./3.1-Overview.ipynb)** for more details on the experimentation framework.\n", "\n", "The `MultiArmedBanditExperiment.get_items()` method is where item recommendations are retrieved for the experiment. This method selects the variation using [Thompson Sampling](https://en.wikipedia.org/wiki/Thompson_sampling) with a [Beta-Bernoulli](https://en.wikipedia.org/wiki/Bernoulli_distribution) sampler. Thompson Sampling is just one of many possible multi-armed bandit algorithms; two other common algorithms are Epsilon Greedy and Upper Confidence Bound 1 (UCB-1).\n", "\n", "Thompson Sampling can yield more balanced results in marginal cases. A probability (beta) distribution is maintained for each variation based on the conversion rate observed from user behavior. For each exposure, we sample one possible conversion rate from each variation's beta distribution and select the variation with the highest sampled conversion rate.
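\n\nAs a minimal illustration (this sketch is not part of the Retail Demo Store code and uses made-up counts), the selection step boils down to a few lines of NumPy:\n\n```python\nimport numpy as np\n\n# Illustrative exposure and conversion counts for three variations\nexposures = np.array([120, 130, 110])\nconversions = np.array([9, 12, 18])\n\n# Draw one plausible conversion rate per variation from its beta distribution\n# (the same parameterization used by MultiArmedBanditExperiment below)\ntheta = np.random.beta(conversions + 1, exposures + 1)\n\nprint(theta, '-> selected variation:', np.argmax(theta))\n```\n\n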
The more data that is gathered, the more confident the algorithm becomes.\n", "\n", "```python\n", "# from src/recommendations/src/recommendations-service/experimentation/experiment_mab.py\n", "\n",
"class MultiArmedBanditExperiment(Experiment):\n", "    \"\"\" Implementation of the multi-armed bandit problem using the Thompson Sampling approach\n", "    to exploring variations to identify and exploit the best performing variation\n", "    \"\"\"\n", "    def __init__(self, table, **data):\n", "        super(MultiArmedBanditExperiment, self).__init__(table, **data)\n", "\n",
"    def get_items(self, user_id, current_item_id = None, item_list = None, num_results = 10, tracker = None):\n", "        ...\n", "\n", "        # Determine the variation to use.\n", "        variation_idx = self._select_variation_index()\n", "\n", "        # Increment exposure count for variation\n", "        self._increment_exposure_count(variation_idx)\n", "\n", "        # Fetch recommendations using the variation's resolver\n", "        variation = self.variations[variation_idx]\n", "\n",
"        resolve_params = {\n", "            'user_id': user_id,\n", "            'product_id': current_item_id,\n", "            'product_list': item_list,\n", "            'num_results': num_results\n", "        }\n", "        items = variation.resolver.get_items(**resolve_params)\n", "\n",
"        # Inject experiment details into recommended items list\n", "        rank = 1\n", "        for item in items:\n", "            correlation_id = self._create_correlation_id(user_id, variation_idx, rank)\n", "\n", "            item_experiment = {\n", "                'id': self.id,\n", "                'feature': self.feature,\n", "                'name': self.name,\n", "                'type': self.type,\n", "                'variationIndex': variation_idx,\n", "                'resultRank': rank,\n", "                'correlationId': correlation_id\n", "            }\n", "\n", "            item.update({\n", "                'experiment': item_experiment\n", "            })\n", "\n", "            rank += 1\n", "\n", "        ...\n", "\n", "        return items\n", "\n",
"    def _select_variation_index(self):\n", "        \"\"\" Selects the variation using Thompson Sampling \"\"\"\n", "        variation_count = len(self.variations)\n", "        exposures = np.zeros(variation_count)\n", "        conversions = np.zeros(variation_count)\n", "\n", "        for i in range(variation_count):\n", "            variation = self.variations[i]\n", "            exposures[i] = int(variation.config.get('exposures', 0))\n", "            conversions[i] = int(variation.config.get('conversions', 0))\n", "\n", "        # Sample from the posterior (this is the Thompson Sampling approach)\n", "        # This leads to more exploration because variations with greater uncertainty can still be selected\n", "        theta = np.random.beta(conversions + 1, exposures + 1)\n", "\n", "        # Select the variation index with the highest posterior probability of converting\n", "        return np.argmax(theta)\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Setup - Import Dependencies\n", "\n", "Throughout this workshop we will need access to some common libraries and clients for connecting to AWS services. Let's set those up now."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import boto3\n", "import json\n", "import uuid\n", "import numpy as np\n", "import requests\n", "import pandas as pd\n", "import random\n", "import scipy.stats as scs\n", "import time\n", "import decimal\n", "import seaborn as sns\n", "from scipy.stats import beta\n", "import matplotlib.pyplot as plt\n", "\n", "from boto3.dynamodb.conditions import Key\n", "from random import randint\n", "\n", "%matplotlib inline\n", "plt.style.use('ggplot')\n", "\n", "cmap = plt.get_cmap(\"tab10\", 3)\n", "sns.set_style(\"whitegrid\")\n", "\n", "# We will be using a DynamoDB table to store configuration info for our experiments.\n", "dynamodb = boto3.resource('dynamodb')\n", "\n", "# Service discovery will allow us to dynamically discover Retail Demo Store resources\n", "servicediscovery = boto3.client('servicediscovery')\n", "# Retail Demo Store config parameters are stored in SSM\n", "ssm = boto3.client('ssm')\n", "\n", "# Utility class to convert types for printing as JSON.\n", "class CompatEncoder(json.JSONEncoder):\n", " def default(self, obj):\n", " if isinstance(obj, decimal.Decimal):\n", " if obj % 1 > 0:\n", " return float(obj)\n", " else:\n", " return int(obj)\n", " else:\n", " return super(CompatEncoder, self).default(obj)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Experiment Strategy Datastore\n", "\n", "Let's create an experiment using the multi-armed bandit technique.\n", "\n", "A DynamoDB table was created by the Retail Demo Store CloudFormation template that we will use to store the configuration information for our experiments. The table name can be found in a system parameter." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = ssm.get_parameter(Name='retaildemostore-experiment-strategy-table-name')\n", "\n", "table_name = response['Parameter']['Value'] # Do Not Change\n", "print('Experiments DDB table: ' + table_name)\n", "table = dynamodb.Table(table_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next we need to lookup the Amazon Personalize campaign/recommender ARN for product recommendations. This is the campaign/recommender that was created in the Personalization workshop." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = ssm.get_parameter(Name = '/retaildemostore/personalize/related-items-arn')\n", "\n", "inference_arn = response['Parameter']['Value'] # Do Not Change\n", "print('Personalize product recommendations ARN: ' + inference_arn)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create Multi-Armed Bandit Experiment\n", "\n", "The Retail Demo Store supports running multiple experiments concurrently. For this workshop we will create a single multi-armed bandit test/experiment that will expose users of a single group to the variation selected by the [**MultiArmedBanditExperiment**](https://github.com/aws-samples/retail-demo-store/blob/master/src/recommendations/src/recommendations-service/experimentation/experiment_mab.py) class. 
The Recommendations service already has logic that supports this experiment type when an active experiment is detected.\n", "\n", "Experiment configurations are stored in a DynamoDB table where each item in the table represents an experiment and has the following fields.\n", "\n", "- **id** - Uniquely identifies this experiment (UUID).\n", "- **feature** - Identifies the Retail Demo Store feature where the experiment should be applied. The name for the product detail related products feature is `product_detail_related`.\n", "- **name** - The name of the experiment. Keep the name short but descriptive. It will be used in the UI for demo purposes and when logging events for experiment result tracking.\n", "- **status** - The status of the experiment (`ACTIVE`, `EXPIRED`, or `PENDING`).\n", "- **type** - The type of test (`ab` for an A/B test, `interleaving` for interleaved recommendations, or `mab` for a multi-armed bandit test).\n", "- **variations** - List of configurations representing variations applicable for the experiment. For this experiment, we will configure three variations." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "feature = 'product_detail_related'\n", "experiment_name = 'product_detail_related_mab'\n", "\n",
"# First, make sure there are no other active experiments so we can isolate\n", "# this experiment for the exercise.\n", "response = table.scan(\n", "    ProjectionExpression='#k',\n", "    ExpressionAttributeNames={'#k' : 'id'},\n", "    FilterExpression=Key('status').eq('ACTIVE')\n", ")\n", "for item in response['Items']:\n", "    response = table.update_item(\n", "        Key=item,\n", "        UpdateExpression='SET #s = :inactive',\n", "        ExpressionAttributeNames={\n", "            '#s' : 'status'\n", "        },\n", "        ExpressionAttributeValues={\n", "            ':inactive' : 'INACTIVE'\n", "        }\n", "    )\n", "\n",
"# Query the experiment strategy table to see if our experiment already exists\n", "response = table.query(\n", "    IndexName='feature-name-index',\n", "    KeyConditionExpression=Key('feature').eq(feature) & Key('name').eq(experiment_name),\n", "    FilterExpression=Key('status').eq('ACTIVE')\n", ")\n", "\n",
"if response.get('Items') and len(response.get('Items')) > 0:\n", "    print('Experiment already exists')\n", "    product_detail_experiment = response['Items'][0]\n", "else:\n", "    print('Creating experiment')\n", "\n", "    # Default product resolver\n", "    variation_0 = {\n", "        'type': 'product'\n", "    }\n", "\n", "    # Similar products resolver\n", "    variation_1 = {\n", "        'type': 'similar'\n", "    }\n", "\n", "    # Amazon Personalize resolver\n", "    variation_2 = {\n", "        'type': 'personalize-recommendations',\n", "        'inference_arn': inference_arn\n", "    }\n", "\n",
"    product_detail_experiment = {\n", "        'id': uuid.uuid4().hex,\n", "        'feature': feature,\n", "        'name': experiment_name,\n", "        'status': 'ACTIVE',\n", "        'type': 'mab',\n", "        'variations': [ variation_0, variation_1, variation_2 ]\n", "    }\n", "\n", "    response = table.put_item(\n", "        Item=product_detail_experiment\n", "    )\n", "\n", "    print(json.dumps(response, indent=4))\n", "\n", "print('Experiment item:')\n", "print(json.dumps(product_detail_experiment, indent=4, cls=CompatEncoder))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Load Users\n", "\n", "For our experiment simulation, we will load all Retail Demo Store users and run the experiment for a fixed number of trials.\n", "\n", "First, let's discover the IP address for the Retail Demo Store's 
[Users](https://github.com/aws-samples/retail-demo-store/tree/master/src/users) service." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = servicediscovery.discover_instances(\n", " NamespaceName='retaildemostore.local',\n", " ServiceName='users',\n", " MaxResults=1,\n", " HealthStatus='HEALTHY'\n", ")\n", "\n", "users_service_instance = response['Instances'][0]['Attributes']['AWS_INSTANCE_IPV4']\n", "print('Users Service Instance IP: {}'.format(users_service_instance))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, let's load all users into a local data frame." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Load all users so we have enough to satisfy our sample size requirements.\n", "response = requests.get('http://{}/users/all?count=10000'.format(users_service_instance))\n", "users = response.json()\n", "users_df = pd.DataFrame(users)\n", "pd.set_option('display.max_rows', 5)\n", "\n", "users_df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load Products\n", "\n", "Next let's load products from the [Products](https://github.com/aws-samples/retail-demo-store/tree/master/src/products) microservice so we can represent a \"current product\" to the [Recommendations](https://github.com/aws-samples/retail-demo-store/tree/master/src/recommendations) service." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = servicediscovery.discover_instances(\n", " NamespaceName='retaildemostore.local',\n", " ServiceName='products',\n", " MaxResults=1,\n", " HealthStatus='HEALTHY'\n", ")\n", "\n", "products_service_instance = response['Instances'][0]['Attributes']['AWS_INSTANCE_IPV4']\n", "print('Products Service Instance IP: {}'.format(products_service_instance))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Load all products.\n", "response = requests.get('http://{}/products/all'.format(products_service_instance))\n", "products = response.json()\n", "products_df = pd.DataFrame(products)\n", "pd.set_option('display.max_rows', 5)\n", "\n", "products_df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Discover Recommendations Service\n", "\n", "Next, let's discover the IP address for the Retail Demo Store's [Recommendations](https://github.com/aws-samples/retail-demo-store/tree/master/src/recommendations) service." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = servicediscovery.discover_instances(\n", " NamespaceName='retaildemostore.local',\n", " ServiceName='recommendations',\n", " MaxResults=1,\n", " HealthStatus='HEALTHY'\n", ")\n", "\n", "recommendations_service_instance = response['Instances'][0]['Attributes']['AWS_INSTANCE_IPV4']\n", "print('Recommendation Service Instance IP: {}'.format(recommendations_service_instance))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Simulate Experiment\n", "\n", "Next we will simulate our multi-armed bandit experiment by making calls to the [Recommendations](https://github.com/aws-samples/retail-demo-store/tree/master/src/recommendations) service across the users we just loaded." 
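, "\n", "\n", "Before running the full simulation, it can help to see what a single exposure and outcome round trip looks like. The sketch below is illustrative only; it assumes the `users`, `products`, `feature`, and `recommendations_service_instance` variables defined earlier in this notebook and mirrors the calls that the `simulate_experiment` function makes in the next section.\n", "\n", "```python\n", "# Ask the Recommendations service for related products for one user/product pair\n", "user = users[0]\n", "product = products[0]\n", "url = 'http://{}/recommendations?userID={}&currentItemID={}&feature={}'.format(\n", "    recommendations_service_instance, user['id'], product['id'], feature)\n", "recommendations = requests.get(url).json()\n", "\n", "# Each recommended item carries experiment metadata injected by the MultiArmedBanditExperiment class\n", "first = recommendations[0]['experiment']\n", "print('variation:', first['variationIndex'], 'correlation id:', first['correlationId'])\n", "\n", "# A conversion is reported by POSTing the correlation id to the outcome endpoint\n", "# (running this records a conversion against the experiment)\n", "requests.post('http://{}/experiment/outcome'.format(recommendations_service_instance),\n", "              data={'correlationId': first['correlationId']})\n", "```"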
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Simulation Function\n", "\n", "The following `simulate_experiment` function is supplied with the number of trials we want to run and the probability of conversion for each variation for our simulation. It runs the simulation long enough to satisfy the number of trials and calls the Recommendations service for each trial in the experiment." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def simulate_experiment(n_trials, probs):\n", " \"\"\"Simulates experiment based on pre-determined probabilities\n", "\n", " Example:\n", "\n", " Parameters:\n", " n_trials (int): number of trials to perform\n", " probs (array float): array of floats containing probability/conversion \n", " rate for each variation\n", "\n", " Returns:\n", " df (df) - data frame of simulation data/results\n", " \"\"\"\n", "\n", " # will hold exposure/outcome data\n", " data = []\n", "\n", " print('Simulating experiment for {} users... this may take a few minutes'.format(n_trials))\n", "\n", " for idx in range(n_trials):\n", " if idx > 0 and idx % 500 == 0:\n", " print('Simulated experiment for {} users so far'.format(idx))\n", " \n", " row = {}\n", "\n", " # Get random user\n", " user = users[randint(0, len(users)-1)]\n", "\n", " # Get random product\n", " product = products[randint(0, len(products)-1)]\n", "\n", " # Call Recommendations web service to get recommendations for the user\n", " response = requests.get('http://{}/recommendations?userID={}¤tItemID={}&feature={}'.format(recommendations_service_instance, user['id'], product['id'], feature))\n", "\n", " recommendations = response.json()\n", " recommendation = recommendations[randint(0, len(recommendations)-1)]\n", " \n", " variation = recommendation['experiment']['variationIndex']\n", " exposures[variation] += 1\n", " row['variation'] = variation\n", " \n", " # Conversion based on probability of variation\n", " row['converted'] = np.random.binomial(1, p=probs[variation])\n", "\n", " if row['converted'] == 1:\n", " # Update experiment with outcome/conversion\n", " correlation_id = recommendation['experiment']['correlationId']\n", " requests.post('http://{}/experiment/outcome'.format(recommendations_service_instance), data={'correlationId':correlation_id})\n", " conversions[variation] += 1\n", " \n", " data.append(row)\n", " \n", " theta = np.random.beta(conversions + 1, exposures + 1)\n", " thetas[idx] = theta[variation]\n", " thetaregret[idx] = np.max(thetas) - theta[variation]\n", "\n", " ad_i[idx] = variation\n", " \n", " # convert data into pandas dataframe\n", " df = pd.DataFrame(data)\n", " \n", " print('Done')\n", "\n", " return df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Run Simulation\n", "\n", "Next we run the simulation by defining our simulation parameters for the number of trials and probabilities and then call `simulate_experiment`. This will take a few minutes to run." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "\n", "# Number of users/trials\n", "N = 2000\n", "# Probabilities/payouts for variations\n", "probs = [ 0.08, 0.09, 0.15 ]\n", "\n", "# Structures used for experiment analysis\n", "exposures = np.zeros(len(probs))\n", "conversions = np.zeros(len(probs))\n", "theta = np.zeros(len(probs))\n", "thetas = np.zeros(N)\n", "thetaregret = np.zeros(N)\n", "ad_i = np.zeros(N)\n", "\n", "# Run the simulation\n", "exp_data = simulate_experiment(N, probs)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Display some of the data\n", "exp_data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Inspect Experiment Summary Statistics\n", "\n", "Since the **Experiment** class updates statistics on the experiment in the experiment strategy table when a user is exposed to an experiment (\"exposure\") and when a user converts (\"outcome\"), we should see updated counts on our experiment." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = table.get_item(Key={'id': product_detail_experiment['id']})\n", "\n", "print(json.dumps(response['Item'], indent=4, cls=CompatEncoder))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note the `conversions` and `exposures` counts for each variation above. These counts were incremented by the experiment class each time a trial was run (exposure) and a user converted in the `simulate_experiment` function above." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Analyze Simulation Results\n", "\n", "To wrap up, let's analyze some of the results from our simulated A/B test by inspecting the actual conversion rate and verifying our target confidence interval and power.\n", "\n", "First, let's summarize the results." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "exp_summary = exp_data.pivot_table(values='converted', index='variation', aggfunc=np.sum)\n", "# add additional columns to the pivot table\n", "exp_summary['total'] = exp_data.pivot_table(values='converted', index='variation', aggfunc=lambda x: len(x))\n", "exp_summary['rate'] = exp_data.pivot_table(values='converted', index='variation')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "exp_summary" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Plot Variation Selections\n", "\n", "Let's take a closer look at how our experiment optimized for the best performing variation (2) yet continued to explore variations 0 and 1 through the experiment." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(20,5))\n", "x = np.arange(0, N)\n", "plt.scatter(x, ad_i, cmap=cmap, c=ad_i, marker=\".\", alpha=1)\n", "plt.title(\"Thompson Sampler - variation selections\")\n", "plt.xlabel(\"Trial\")\n", "plt.ylabel(\"Variation\")\n", "plt.yticks(list(range(len(probs))))\n", "cbar = plt.colorbar()\n", "cbar.ax.locator_params(nbins=len(probs))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Regret\n", "\n", "An additional means of assessing the algorithm's performance is through the concept of regret. Intuitively, regret is quite simple. The algorithm’s regret concerning its action (what variation to show) should be as low as possible. 
Put simply, regret is the difference between the best performance observed from any variation so far and the performance of the variation chosen for the current trial." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "best_arm = exp_data.groupby('variation')['converted'].mean().idxmax()\n", "best_value = exp_data.groupby('variation')['converted'].mean().max()\n", "theregret = np.cumsum(best_value - exp_data.converted)\n", "worstregret = np.cumsum(best_value - exp_data.converted*0)" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(16,4))\n", "plt.plot(theregret / (1+np.arange(len(theregret))), label='true regret')\n", "plt.plot(worstregret / (1+np.arange(len(worstregret))), '--', label='avoid linear regret')\n", "plt.ylim(best_value*-0.2, best_value*1.2)\n", "plt.legend()\n", "plt.xlabel(\"Trial #\")\n", "plt.ylabel(\"Regret\")\n", "plt.title(\"Thompson Sampler regret\")" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Another nice property of the Thompson Sampling algorithm is that its Bayesian formulation means we can fully inspect the uncertainty around each variation's payout rate. Let's plot the posterior distributions. You can see how the distributions gradually converge towards the variation with the best payout rate." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plt.figure(figsize=(16,4))\n", "cmapi = iter(plt.cm.tab10(list(range(len(probs)))))\n", "x = np.arange(0, max(theta) + 0.2, 0.0001)\n", "for i in range(len(probs)):\n", "    pdf = beta(conversions[i], exposures[i]).pdf(x)\n", "    c = next(cmapi)\n", "    plt.plot(x, pdf, c=c, label='variation {}'.format(i), linewidth=3, alpha=.6)\n", "plt.title('Beta distributions after {} trials'.format(N))\n", "plt.legend();" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Conclusion\n", "\n", "You have completed all three exercises for the Retail Demo Store Experimentation workshop. Although we focused on experimentation around different approaches to personalization, these experimentation techniques can be applied to many other user experiences on your website.\n", "\n", "We started with a traditional A/B test where a default product recommendation approach was tested against personalized product recommendations from Amazon Personalize. Then we used an interleaved experiment to test two product recommendation approaches concurrently to shorten the testing duration. Finally, we deployed a multi-armed bandit approach to test three personalization approaches, maximizing exploitation of the best performing variation while still exploring the other variations. It's important to note that these techniques are not mutually exclusive. In other words, it's common to use interleaving or multi-armed bandit experiments as a preliminary step to identify the best performing variations from a larger pool, followed by A/B experiments of the top performers."
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### References and Further Reading\n", "\n", "- [Multi-armed Bandit Problem](https://en.wikipedia.org/wiki/Multi-armed_bandit), Wikipedia\n", "- [Thompson sampling](https://en.wikipedia.org/wiki/Thompson_sampling), Wikipedia\n", "- [Beta distribtion](https://en.wikipedia.org/wiki/Beta_distribution), Wikipedia\n", "- [Understanding the beta distribution](http://varianceexplained.org/statistics/beta_distribution_and_baseball/), David Robinson\n", "- [Solving multiarmed bandits: A comparison of epsilon-greedy and Thompson sampling](https://towardsdatascience.com/solving-multiarmed-bandits-a-comparison-of-epsilon-greedy-and-thompson-sampling-d97167ca9a50), Conor McDonald" ] } ], "metadata": { "kernelspec": { "display_name": "conda_python3", "language": "python", "name": "conda_python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.13" } }, "nbformat": 4, "nbformat_minor": 4 }