{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Retail Demo Store Experimentation Workshop - Interleaving Recommendation Exercise\n", "\n", "In this exercise we will define, launch, and evaluate the results of an experiment using recommendation interleaving using the experimentation framework implemented in the Retail Demo Store project. If you have not already stepped through the **[3.1-Overview](./3.1-Overview.ipynb)** workshop notebook, please do so now as it provides the foundation built upon in this exercise. It is also recommended, but not required, to complete the **[3.2-AB-Experiment](./3.2-AB-Experiment.ipynb)** workshop notebook.\n", "\n", "Recommended Time: 30 minutes\n", "\n", "## Prerequisites\n", "\n", "Since this module uses the Retail Demo Store's Recommendation microservice to run experiments across variations that depend on the personalization features of the Retail Demo Store, it is assumed that you have either completed the [Personalization](../1-Personalization/Lab-1-Introduction-and-data-preparation.ipynb) workshop or those resources have been pre-provisioned in your AWS environment. If you are unsure and attending an AWS managed event such as a workshop, check with your event lead." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 2: Interleaving Recommendations Experiment\n", "\n", "For the first exercise, **[3.2-AB-Experiment](./3.2-AB-Experiment.ipynb)**, we demonstrated how to create and run an A/B experiment using two different variations for making product recommendations. We calculated the sample sizes of users needed to reach a statistically significant result comparing the two variations. Then we ran the experiment using a simulation until the sample sizes were reached for both variations. In real-life, depending on the baseline and minimum detectable effect rate combined with your site's user traffic, the amount of time necessary to complete an experiment can take several days to a few weeks. This can be expensive from both an opportunity cost perspective as well as negatively impacting the pace at which experiments and changes can be rolled out to your site.\n", "\n", "In this exercise we will look at an alternative approach to evaluating product recommendation variations that requires a smaller sample size and shorter experiment durations. This technique is often used as a preliminary step before formal A/B testing to reduce a larger number of variations to just the top performers. Traditional A/B testing is then done against the best performing variations, significantly reducing the overall time necessary for experimentation.\n", "\n", "We will use the same two variations as the last exercise. The first variation will represent our current implementation using the [**Default Product Resolver**](https://github.com/aws-samples/retail-demo-store/blob/master/src/recommendations/src/recommendations-service/experimentation/resolvers.py) and the second variation will use the [**Personalize Recommendation Resolver**](https://github.com/aws-samples/retail-demo-store/blob/master/src/recommendations/src/recommendations-service/experimentation/resolvers.py). The scenario we are simulating is adding product recommendations powered by Amazon Personalize to the home page and measuring the impact/uplift in click-throughs for products as a result of deploying a personalization strategy. 
We will use the same hypothesis from our A/B test: the conversion rate of our existing approach is 15% and we expect a 25% relative lift in this rate (0.15 × 1.25 = 0.1875, or 18.75%) by adding personalized recommendations." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### What is Interleaving Recommendation Testing?\n", "\n", "The approach of interleaving recommendations is to take the recommendations from two or more variations and interleave, or blend, them into a single set of recommendations for *every user in the experiment*. Because each user in the sample is exposed to recommendations from all variations, we gain some key benefits. First, the sample size can be smaller since we don't need separate groups of users for each variation. This also results in a shorter experiment duration. Additionally, this approach is less susceptible to variances in user type and behavior that could throw off the results of an experiment. For example, it's not uncommon to have power users who shop/watch/listen/read much more than a typical user. With multiple sample groups, the behavior of these users can throw off results for their group, particularly with smaller sample sizes.\n", "\n", "Care must be taken in how recommendations are interleaved, though, to account for position bias in the recommendations and to track variation attribution. There are two common methods for interleaving recommendations. The first is a balanced approach where recommendations are taken from each variation in an alternating style and the starting variation is selected randomly. The other approach follows a team-draft analogy where team captains select their \"best player\" (recommendation) from the variations in random selection order. The two methods can produce different interleaved outputs.\n", "\n", "Interleaving recommendations as an approach to experimentation got its start with information retrieval systems and search engines (Yahoo! & Bing) where different approaches to ranking results could be measured concurrently. More recently, [Netflix has adopted the interleaving technique](https://medium.com/netflix-techblog/interleaving-in-online-experiments-at-netflix-a04ee392ec55) to rapidly evaluate different approaches to making movie recommendations to its users. The image below depicts the recommendations from two different recommenders/variations (Ranker A and Ranker B) and examples of how they are interleaved.\n", "\n", "![Interleaving at Netflix](./images/netflix-interleaving.png)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### InterleavingExperiment Class\n", "\n", "Before stepping through creating and executing our interleaving test, let's look at the relevant source code for the [**InterleavingExperiment**](https://github.com/aws-samples/retail-demo-store/blob/master/src/recommendations/src/recommendations-service/experimentation/experiment_interleaving.py) class that implements this experiment type in the Retail Demo Store project.\n", "\n", "As noted in the **[3.1-Overview](./3.1-Overview.ipynb)** notebook, all experiment types are subclasses of the abstract **Experiment** class. See **[3.1-Overview](./3.1-Overview.ipynb)** for more details on the experimentation framework.\n", "\n", "The `InterleavingExperiment.get_items()` method is where item recommendations are retrieved for the experiment. This method will retrieve recommendations from the resolvers for all variations and then use the configured interleaving method (balanced or team-draft) to interleave the recommendations to produce the final result.\n",
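 "\n", "To make the difference between the two methods concrete, here is a minimal, illustrative sketch of balanced and team-draft interleaving (simplified stand-ins for the project's actual `_interleave_balanced` and `_interleave_team_draft` implementations; item lists are assumed to contain hashable IDs):\n", "\n", "```python\n", "import random\n", "\n", "def interleave_balanced(variations, num_results):\n", "    \"\"\"Alternate between variations, starting with a randomly chosen one.\"\"\"\n", "    order = list(range(len(variations)))\n", "    random.shuffle(order)  # random starting variation, then strict alternation\n", "    interleaved, seen = [], set()\n", "    candidates = [list(v) for v in variations]\n", "    while len(interleaved) < num_results and any(candidates):\n", "        for v in order:\n", "            # skip items already taken from another variation\n", "            while candidates[v] and candidates[v][0] in seen:\n", "                candidates[v].pop(0)\n", "            if candidates[v] and len(interleaved) < num_results:\n", "                item = candidates[v].pop(0)\n", "                interleaved.append(item)\n", "                seen.add(item)\n", "    return interleaved\n", "\n", "def interleave_team_draft(variations, num_results):\n", "    \"\"\"Each round, captains pick their best remaining item in random order.\"\"\"\n", "    interleaved, seen = [], set()\n", "    while len(interleaved) < num_results:\n", "        order = list(range(len(variations)))\n", "        random.shuffle(order)  # random pick order each round\n", "        picked = False\n", "        for v in order:\n", "            item = next((i for i in variations[v] if i not in seen), None)\n", "            if item is not None and len(interleaved) < num_results:\n", "                interleaved.append(item)\n", "                seen.add(item)\n", "                picked = True\n", "        if not picked:\n", "            break  # all variations exhausted\n", "    return interleaved\n", "\n", "ranker_a = ['A1', 'A2', 'A3', 'A4']\n", "ranker_b = ['B1', 'A1', 'B2', 'B3']\n", "print(interleave_team_draft([ranker_a, ranker_b], 4))  # e.g. ['A1', 'B1', 'A2', 'B2']\n", "```\n", "\n", "In the real implementation each interleaved item also records which variation supplied it so that clicks can be attributed back to a variation.\n", "\n", 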
"Exposure tracking is also implemented to facilitate measuring the outcome of an experiment. The implementations of the balanced and team-draft interleaving methods are not included in the listing below (the sketch above is only an approximation) but are available in the source code for the Recommendations service.\n", "\n", "```python\n", "# from src/recommendations/src/recommendations-service/experimentation/experiment_interleaving.py\n", "\n", "class InterleavingExperiment(Experiment):\n", "    \"\"\" Implements interleaving technique described in research paper by\n", "    Chapelle et al http://olivier.chapelle.cc/pub/interleaving.pdf\n", "    \"\"\"\n", "    METHOD_BALANCED = 'balanced'\n", "    METHOD_TEAM_DRAFT = 'team-draft'\n", "\n", "    def __init__(self, table, **data):\n", "        super(InterleavingExperiment, self).__init__(table, **data)\n", "        self.method = data.get('method', InterleavingExperiment.METHOD_BALANCED)\n", "\n", "    def get_items(self, user_id, current_item_id = None, item_list = None, num_results = 10, tracker = None):\n", "        ...\n", "\n", "        # Initialize array structure to hold item recommendations for each variation\n", "        variations_data = [[] for x in range(len(self.variations))]\n", "\n", "        # Get recommended items for each variation\n", "        for i in range(len(self.variations)):\n", "            resolve_params = {\n", "                'user_id': user_id,\n", "                'product_id': current_item_id,\n", "                'product_list': item_list,\n", "                'num_results': num_results * 3  # account for overlaps\n", "            }\n", "            variation = self.variations[i]\n", "            items = variation.resolver.get_items(**resolve_params)\n", "            variations_data[i] = items\n", "\n", "        # Interleave items to produce result\n", "        interleaved = []\n", "        if self.method == InterleavingExperiment.METHOD_TEAM_DRAFT:\n", "            interleaved = self._interleave_team_draft(user_id, variations_data, num_results)\n", "        else:\n", "            interleaved = self._interleave_balanced(user_id, variations_data, num_results)\n", "\n", "        # Increment exposure for each variation (can be optimized)\n", "        for i in range(len(self.variations)):\n", "            self._increment_exposure_count(i)\n", "\n", "        ...\n", "\n", "        return interleaved\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Setup - Import Dependencies\n", "\n", "Throughout this workshop we will need access to some common libraries and clients for connecting to AWS services. Let's set those up now."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import boto3\n", "import json\n", "import uuid\n", "import numpy as np\n", "import requests\n", "import pandas as pd\n", "import random\n", "import scipy.stats as scs\n", "import time\n", "import decimal\n", "import matplotlib.pyplot as plt\n", "\n", "from boto3.dynamodb.conditions import Key\n", "from random import randint\n", "\n", "# import custom scripts for plotting results\n", "from src.plot import *\n", "from src.stats import *\n", "\n", "%matplotlib inline\n", "plt.style.use('ggplot')\n", "\n", "# We will be using a DynamoDB table to store configuration info for our experiments.\n", "dynamodb = boto3.resource('dynamodb')\n", "\n", "# Service discovery will allow us to dynamically discover Retail Demo Store resources\n", "servicediscovery = boto3.client('servicediscovery')\n", "# Retail Demo Store config parameters are stored in SSM\n", "ssm = boto3.client('ssm')\n", "\n", "# Utility class to convert types for printing as JSON.\n", "class CompatEncoder(json.JSONEncoder):\n", " def default(self, obj):\n", " if isinstance(obj, decimal.Decimal):\n", " if obj % 1 > 0:\n", " return float(obj)\n", " else:\n", " return int(obj)\n", " else:\n", " return super(CompatEncoder, self).default(obj)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Experiment Strategy Datastore\n", "\n", "Let's create an experiment using the interleaving technique.\n", "\n", "A DynamoDB table was created by the Retail Demo Store CloudFormation template that we will use to store the configuration information for our experiments. The table name can be found in a system parameter." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = ssm.get_parameter(Name='retaildemostore-experiment-strategy-table-name')\n", "\n", "table_name = response['Parameter']['Value'] # Do Not Change\n", "print('Experiments DDB table: ' + table_name)\n", "table = dynamodb.Table(table_name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next we need to lookup the Amazon Personalize campaign ARN for product recommendations. This is the campaign that was created in the Personalization workshop." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = ssm.get_parameter(Name = '/retaildemostore/personalize/recommended-for-you-arn')\n", "\n", "inference_arn = response['Parameter']['Value'] # Do Not Change\n", "print('Personalize product recommendations ARN: ' + inference_arn)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Create Interleaving Experiment\n", "\n", "The Retail Demo Store supports running multiple experiments concurrently. For this workshop we will create a single interleaving test/experiment that will expose users of a single group to recommendations from the default behavior and recommendations from Amazon Personalize. The [Recommendations](https://github.com/aws-samples/retail-demo-store/tree/master/src/recommendations) microservice already has logic that supports interleaving experiments when an active experiment is detected.\n", "\n", "Experiment configurations are stored in a DynamoDB table where each item in the table represents an experiment and has the following fields.\n", "\n", "- **id** - Uniquely identified this experience (UUID).\n", "- **feature** - Identifies the Retail Demo Store feature where the experiment should be applied. 
 ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "feature = 'home_product_recs'\n", "experiment_name = 'home_personalize_interleaving'\n", "\n", "# First, make sure there are no other active experiments so we can isolate\n", "# this experiment for the exercise.\n", "response = table.scan(\n", "    ProjectionExpression='#k',\n", "    ExpressionAttributeNames={'#k' : 'id'},\n", "    FilterExpression=Key('status').eq('ACTIVE')\n", ")\n", "for item in response['Items']:\n", "    response = table.update_item(\n", "        Key=item,\n", "        UpdateExpression='SET #s = :inactive',\n", "        ExpressionAttributeNames={\n", "            '#s' : 'status'\n", "        },\n", "        ExpressionAttributeValues={\n", "            ':inactive' : 'INACTIVE'\n", "        }\n", "    )\n", "\n", "# Query the experiment strategy table to see if our experiment already exists\n", "response = table.query(\n", "    IndexName='feature-name-index',\n", "    KeyConditionExpression=Key('feature').eq(feature) & Key('name').eq(experiment_name),\n", "    FilterExpression=Key('status').eq('ACTIVE')\n", ")\n", "\n", "if response.get('Items') and len(response.get('Items')) > 0:\n", "    print('Experiment already exists')\n", "    home_page_experiment = response['Items'][0]\n", "else:\n", "    print('Creating experiment')\n", "\n", "    # Default product resolver\n", "    variation_0 = {\n", "        'type': 'product'\n", "    }\n", "\n", "    # Amazon Personalize resolver\n", "    variation_1 = {\n", "        'type': 'personalize-recommendations',\n", "        'inference_arn': inference_arn\n", "    }\n", "\n", "    home_page_experiment = {\n", "        'id': uuid.uuid4().hex,\n", "        'feature': feature,\n", "        'name': experiment_name,\n", "        'status': 'ACTIVE',\n", "        'type': 'interleaving',\n", "        'method': 'team-draft',\n", "        'analytics': {},\n", "        'variations': [ variation_0, variation_1 ]\n", "    }\n", "\n", "    response = table.put_item(\n", "        Item=home_page_experiment\n", "    )\n", "\n", "    print(json.dumps(response, indent=4))\n", "\n", "print(json.dumps(home_page_experiment, indent=4, cls=CompatEncoder))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load Users\n", "\n", "For our experiment simulation, we will load all Retail Demo Store users and run the experiment until the sample size has been met.\n", "\n", "First, let's discover the IP address for the Retail Demo Store's [Users](https://github.com/aws-samples/retail-demo-store/tree/master/src/users) service."
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = servicediscovery.discover_instances(\n", " NamespaceName='retaildemostore.local',\n", " ServiceName='users',\n", " MaxResults=1,\n", " HealthStatus='HEALTHY'\n", ")\n", "\n", "users_service_instance = response['Instances'][0]['Attributes']['AWS_INSTANCE_IPV4']\n", "print('Users Service Instance IP: {}'.format(users_service_instance))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, let's load all users into a local data frame." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Load all users so we have enough to satisfy our sample size requirements.\n", "response = requests.get('http://{}/users/all?count=10000'.format(users_service_instance))\n", "users = response.json()\n", "users_df = pd.DataFrame(users)\n", "pd.set_option('display.max_rows', 5)\n", "\n", "users_df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Discover Recommendations Service\n", "\n", "Next, let's discover the IP address for the Retail Demo Store's [Recommendations](https://github.com/aws-samples/retail-demo-store/tree/master/src/recommendations) service." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = servicediscovery.discover_instances(\n", " NamespaceName='retaildemostore.local',\n", " ServiceName='recommendations',\n", " MaxResults=1,\n", " HealthStatus='HEALTHY'\n", ")\n", "\n", "recommendations_service_instance = response['Instances'][0]['Attributes']['AWS_INSTANCE_IPV4']\n", "print('Recommendation Service Instance IP: {}'.format(recommendations_service_instance))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Simulate Experiment\n", "\n", "Next we will simulate our interleaving recommendation experiment by making calls to the [Recommendations](https://github.com/aws-samples/retail-demo-store/tree/master/src/recommendations) service across the users we just loaded." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Simulation Function\n", "\n", "The following `simulate_experiment` function is supplied with the number of trials we want to run and the probability of conversion for each variation for our simulation. It runs the simulation long enough to satisfy the number of trials and calls the Recommendations service for each trial in the experiment." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def simulate_experiment(n_trials, probs):\n", " \"\"\"Simulates experiment based on pre-determined probabilities\n", "\n", " Example:\n", "\n", " Parameters:\n", " n_trials (int): number of trials to run for experiment\n", " probs (array float): array of floats containing probability/conversion \n", " rate for each variation\n", "\n", " Returns:\n", " df (df) - data frame of simulation data/results\n", " \"\"\"\n", "\n", " # will hold exposure/outcome data\n", " data = []\n", "\n", " print('Simulating experiment for {} users... 
 ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def simulate_experiment(n_trials, probs):\n", "    \"\"\"Simulates experiment based on pre-determined probabilities\n", "\n", "    Parameters:\n", "        n_trials (int): number of trials to run for experiment\n", "        probs (array float): array of floats containing the probability/conversion\n", "            rate for each variation\n", "\n", "    Returns:\n", "        df (DataFrame): data frame of simulation data/results\n", "    \"\"\"\n", "\n", "    # will hold exposure/outcome data\n", "    data = []\n", "\n", "    print('Simulating experiment for {} users... this may take a few minutes'.format(n_trials))\n", "\n", "    for idx in range(n_trials):\n", "        if idx > 0 and idx % 500 == 0:\n", "            print('Simulated experiment for {} users so far'.format(idx))\n", "\n", "        row = {}\n", "\n", "        # Get random user\n", "        user = users[randint(0, len(users)-1)]\n", "\n", "        # Call Recommendations web service to get recommendations for the user\n", "        response = requests.get('http://{}/recommendations?userID={}&feature={}'.format(recommendations_service_instance, user['id'], feature))\n", "\n", "        recommendations = response.json()\n", "        recommendation = recommendations[randint(0, len(recommendations)-1)]\n", "\n", "        variation = recommendation['experiment']['variationIndex']\n", "        row['variation'] = variation\n", "\n", "        # Conversion based on probability of variation\n", "        row['converted'] = np.random.binomial(1, p=probs[variation])\n", "\n", "        if row['converted'] == 1:\n", "            # Update experiment with outcome/conversion\n", "            correlation_id = recommendation['experiment']['correlationId']\n", "            requests.post('http://{}/experiment/outcome'.format(recommendations_service_instance), data={'correlationId':correlation_id})\n", "\n", "        data.append(row)\n", "\n", "    # convert data into pandas dataframe\n", "    df = pd.DataFrame(data)\n", "\n", "    print('Done')\n", "\n", "    return df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Run Simulation\n", "\n", "Next we run the simulation by defining our simulation parameters for the number of trials and conversion probabilities and then calling `simulate_experiment`. This will take a few minutes to run." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "%%time\n", "\n", "# Number of trials to run\n", "N = 2000\n", "\n", "# Baseline conversion rate for the control variation (from our hypothesis)\n", "p_A = 0.15\n", "# Expected conversion rate for the treatment variation: a 25% relative lift\n", "# over the 15% baseline (0.15 * 1.25 = 0.1875)\n", "p_B = 0.1875\n", "\n", "ab_data = simulate_experiment(N, [p_A, p_B])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ab_data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Inspect Experiment Summary Statistics\n", "\n", "Since the **Experiment** class updates statistics on the experiment in the experiment strategy table when a user is exposed to an experiment (\"exposure\") and when a user converts (\"outcome\"), we should see updated counts on our experiment. Let's reload our experiment and inspect the exposure and conversion counts for our simulation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "response = table.get_item(Key={'id': home_page_experiment['id']})\n", "\n", "print(json.dumps(response['Item'], indent=4, cls=CompatEncoder))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note the `conversions` and `exposures` counts for each variation above. These counts were incremented by the experiment class each time a trial was run (an exposure) and each time a simulated user converted (an outcome) in the `simulate_experiment` function above." ] },
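 { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick cross-check, you can read the counters straight off the item (a sketch — the field layout is inferred from the item printed above, so adjust the keys if your output differs). Note that with interleaving every trial exposes *all* variations, so each variation's `exposures` should be close to the total number of trials rather than a per-group sample size:\n", "\n", "```python\n", "# Read the exposure/conversion counters from the experiment item (illustrative)\n", "for i, v in enumerate(response['Item']['variations']):\n", "    exposures = int(v.get('exposures', 0))\n", "    conversions = int(v.get('conversions', 0))\n", "    print('Variation {}: {} conversions over {} exposures'.format(i, conversions, exposures))\n", "```" ] },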
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ab_summary = ab_data.pivot_table(values='converted', index='variation', aggfunc=np.sum)\n", "# add additional columns to the pivot table\n", "ab_summary['total'] = ab_data.pivot_table(values='converted', index='variation', aggfunc=lambda x: len(x))\n", "ab_summary['rate'] = ab_data.pivot_table(values='converted', index='variation')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ab_summary" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next let's isolate data for each variation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "A_group = ab_data[ab_data['variation'] == 0]\n", "B_group = ab_data[ab_data['variation'] == 1]\n", "A_converted, B_converted = A_group['converted'].sum(), B_group['converted'].sum()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "A_converted, B_converted" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Determine the actual sample size for each variation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "A_total, B_total = len(A_group), len(B_group)\n", "A_total, B_total" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Calculate the actual conversion rates and uplift from our simulation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p_A, p_B = A_converted / A_total, B_converted / B_total\n", "p_A, p_B" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "p_B - p_A" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Determining Statistical Significance\n", "\n", "For simplicity we will use the same approach as our A/B test to determine statistical significance. \n", "\n", "Let's plot the data from both groups as binomial distributions." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=(12,6))\n", "xA = np.linspace(A_converted-49, A_converted+50, 100)\n", "yA = scs.binom(A_total, p_A).pmf(xA)\n", "ax.scatter(xA, yA, s=10)\n", "xB = np.linspace(B_converted-49, B_converted+50, 100)\n", "yB = scs.binom(B_total, p_B).pmf(xB)\n", "ax.scatter(xB, yB, s=10)\n", "plt.xlabel('converted')\n", "plt.ylabel('probability')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Based the probabilities from our hypothesis, we should see that the test group in blue (B) converted more users than the control group in red (A). However, the plot above is not a plot of the null and alternate hypothesis. The null hypothesis is a plot of the difference between the probability of the two groups.\n", "\n", "> Given the randomness of our user selection, group hashing, and probabilities, your simulation results should be different for each simulation run and therefore may or may not be statistically significant.\n", "\n", "In order to calculate the difference between the two groups, we need to standardize the data. Because the number of samples can be different between the two groups, we should compare the probability of successes, p.\n", "\n", "According to the central limit theorem, by calculating many sample means we can approximate the true mean of the population from which the data for the control group was taken. 
 ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "SE_A = np.sqrt(p_A * (1-p_A)) / np.sqrt(A_total)\n", "SE_B = np.sqrt(p_B * (1-p_B)) / np.sqrt(B_total)\n", "SE_A, SE_B" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "fig, ax = plt.subplots(figsize=(12,6))\n", "xA = np.linspace(0, .3, A_total)\n", "yA = scs.norm(p_A, SE_A).pdf(xA)\n", "ax.plot(xA, yA)\n", "ax.axvline(x=p_A, c='red', alpha=0.5, linestyle='--')\n", "\n", "xB = np.linspace(0, .3, B_total)\n", "yB = scs.norm(p_B, SE_B).pdf(xB)\n", "ax.plot(xB, yB)\n", "ax.axvline(x=p_B, c='blue', alpha=0.5, linestyle='--')\n", "\n", "plt.xlabel('Converted Proportion')\n", "plt.ylabel('PDF')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Next Steps\n", "\n", "You have completed the exercise for implementing an interleaving recommendation experiment using the experimentation framework in the Retail Demo Store. Close this notebook and open the notebook for the next exercise, **[3.4-Multi-Armed-Bandit-Experiment](./3.4-Multi-Armed-Bandit-Experiment.ipynb)**." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### References and Further Reading\n", "\n", "- [Large Scale Validation and Analysis of Interleaved Search Evaluation](http://olivier.chapelle.cc/pub/interleaving.pdf), Chapelle et al.\n", "- [Innovating Faster on Personalization Algorithms at Netflix Using Interleaving](https://medium.com/netflix-techblog/interleaving-in-online-experiments-at-netflix-a04ee392ec55), Netflix Technology Blog" ] } ], "metadata": { "kernelspec": { "display_name": "conda_python3", "language": "python", "name": "conda_python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.13" } }, "nbformat": 4, "nbformat_minor": 4 }