{ "cells": [ { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "\n", "# re:Invent 2022 workshop AIM342: Advancing Responsible AI\n" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 0. Setup\n", "\n", "Make sure you select the `conda_rai_kernel` in your notebook instance.\n", "\n", "This kernel includes pytorch, numpy, pillow, matplotlib, torch, torchvision, cv2, seaborn, and pandas, which you need\n", "for this notebook.\n", "\n", "You may run this notebook by selecting Run->Run All Cells from the menu above. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# Imports\n", "from utils.dataset_utils import *\n", "from utils.eval_utils import *\n", "from utils.plotting_utils import *\n", "from utils.train_utils import *\n", "\n", "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## 1. Introduction\n", "\n", "### 1.1 Assess bias in the world of handwritten numbers\n", "\n", "Building and operating machine learning applications responsibly requires an active, consistent approach to prevent, assess, and mitigate bias. This workshop takes attendees through a computer vision case study in assessing unwanted bias, covering demographic group selection, evaluation dataset selection, bias metric selection, bias reporting, and root cause analysis – and concludes by discussing prevention, mitigation and communication strategies. Attendees will follow along with a Jupyter notebook. The workshop will be run by an AWS team of experts on Responsible AI that develops science, tools, and best practices to advance Responsible AI within AWS.\n", "\n", "You will conduct experiments on a synthetic task that has parallels to real-world bias in AI: the handwritten numbers dataset. This dataset contains images of blue and orange handwritten numbers in [0,9]. In AI systems that represent people, such as face, voice, or speech recognition, we can use demographic attributes like gender, age, or race to identify groups of people. In the world of handwritten numbers, similar attributes exist. In addition to its digit identity 0-9, an instance from this dataset can be linear or curvy, orange or blue. We call the instances *handwritten numbers*. \n", "\n", "Attributes of handwritten numbers:\n", "* digit = {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9'}\n", "* curviness = {curvy, neutral, linear}\n", "* color = {orange, blue}\n", "\n", "We define our groups with these attributes. For example, \"all numbers where color==orange\" is a group. Using these groups, we can practice the bias measurement techniques that apply to real-world AI systems. We will not draw a 1:1 correspondence between numbers and people. Although this is a simplification, the handwritten numbers dataset gives us a practical and objective sandbox. \n", "\n", "The handwritten numbers dataset is derived from MNIST [1] and inspired by other derivatives, such as biasedMNIST [2].\n", "\n", "1. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. (1998) \"Gradient-based learning applied to document recognition.\" Proceedings of the IEEE, 86(11):2278-2324, November 1998. Online (http://yann.lecun.com/exdb/publis/index.html#lecun-98)\n", "2. Hyojin Bahng, Sanghyuk Chun, Sangdoo Yun, Jaegul Choo, and Seong Joon Oh. (2020) Learning de-biased representations with biased representations (http://proceedings.mlr.press/v119/bahng20a/bahng20a.pdf). 
In Proceedings of the 37th International Conference on Machine Learning (ICML'20). JMLR.org, Article 50, 528–539.\n", "\n", "### 1.2 Learning Objectives\n", "In this workshop you will learn to:\n", "* Measure and understand bias\n", "* Apply bias metrics\n", "* Explore bias between intersectional groups\n", "* Compare results across evaluation datasets\n", "* Compare results across training datasets\n", "\n", "\n", "### 1.3 Runtime\n", "This module takes about 60 minutes to run." ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 2. Define the problem\n", "\n", "Your team is ready to launch an image recognition service for its customers in the world of handwritten numbers. The service will take number selfies as input and produce the identity of the number, 0-9, as output. The model uses computer vision technology to perform this recognition. You will review the model for unwanted bias, using an evaluation dataset gathered internally. \n", "\n", "**Your task is to run the model on the evaluation set and measure bias in the output, in order to make a business decision: does the model treat all numbers fairly, at least in the groups you have identified? Is this service fair enough?**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 2.1 Download the model and evaluation data " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# This takes about 15s\n", "train_data, eval_data = create_starting_data()\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# This takes about 10s\n", "base_model = train_baseline_model(df=train_data)" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 3. Understand your evaluation dataset\n", "\n", "#### 3.1 Learn what the model should do by looking at the evaluation data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, look at the structure. See how your groups are captured as annotations." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(eval_data[['digit','curviness','color']][:5])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 3.2 Look at sample images to understand the input." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# View an example from each digit group in the evaluation set. \n", "# Run this cell multiple times to view different examples.\n", "plot_one_each(eval_data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 3.3 Look at the groups to understand what analyses you can run\n", "\n", "* Our data is annotated with digits, so we can measure whether the model performs better for 9's or for 3's. \n", "* Our data is annotated with color, so we can measure whether the model performs better for blues or for oranges.\n", "* Our data is annotated with curviness, so we can measure whether the model performs better for curvy or for linear instances. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 3.4 Find out whether groups you care about are well represented in the evaluation data.\n", "\n", "Use a histogram to view representation. In this exercise, your customers are numbers and their attributes are digit, color, and curviness."
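] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In addition to the per-attribute histograms below, you can check how well group *intersections* are represented (we return to intersections in Section 9). The next cell is an optional sketch in plain pandas using the annotation columns; it is not one of the workshop utilities." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional sketch (not a workshop utility): counts for digit-color intersections.\n", "# Assumes eval_data is a pandas DataFrame with 'digit' and 'color' annotation columns.\n", "intersection_counts = eval_data.groupby(['digit', 'color']).size().unstack(fill_value=0)\n", "print(intersection_counts)"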
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "eval_data.digit.value_counts().sort_index().plot(kind='bar', xlabel='Digit', ylabel=\"Count\", rot=0)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "eval_data.color.value_counts().sort_index().plot(kind='bar', xlabel='Color', ylabel=\"Count\", rot=0)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "eval_data.curviness.value_counts().sort_index().plot(kind='bar', xlabel='curviness', ylabel=\"Count\", rot=0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. Run the model on evaluation data\n", "\n", "\n", "Now that you understand the evaluation data, run the model on this data to see what the outputs are.\n", "\n", "For some outputs, the prediction matches the true digit. When the model makes an error, the prediction doesn't match." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "results_df = run_baseline_model(base_model, eval_data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.1 View some examples where the model was correct" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = results_df.loc[(results_df['label'] == results_df['pred1'])][:5]\n", "plot_five(df)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.2 View examples where the model made an error" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "df = results_df.loc[(results_df['label'] != results_df['pred1'])][:5]\n", "plot_five(df)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5. Measure overall accuracy\n", "\n", "#### 5.1 Measure the accuracy of these outputs: how often did the model correctly recognize the number?\n", "\n", "Accuracy is the percentage of instances for which the model prediction matches ground truth.\n", "\n", "Once we know how to measure accuracy, we will measure bias as differences in accuracy across groups." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# Compute accuracy on the full validation set\n", "acc = compute_accuracy(results_df, 'label', 'pred1')\n", "print('Accuracy: ', acc)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 5.2 Calculate a confidence interval (CI) on the accuracy, to understand how strong the finding is.\n", "\n", "A one-time calculation of accuracy on a sample of data is really an estimate for the accuracy of the model on any data drawn from the same distribution as your sample. \n", "\n", "A confidence interval is a standard statistical technique based on variability in the data, to compute upper and lower bounds on accuracy for any such in-distribution data. We calculate 95% intervals: the range where we can be 95% certain that accuracy falls.\n", "\n", "If the confidence interval is small, then we can be confident that the accuracy is a good estimate of model performance on this evaluation set. While the point estimate of accuracy informs us about the model, the CI tells us more about the evaluation data. \n", "\n", "Later we will use CIs to understand whether differences in accuracy (i.e., unwanted bias) are meaningful." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "lb, ub = compute_confidence_bounds(results_df, 'label', 'pred1')\n", "print(f'95% confidence interval for accuracy is [{round(lb, 4)}, {round(ub, 4)}]')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 6.Measure group-level accuracy\n", "\n", "Your goal is to see whether the model treats all groups fairly. Next, see how accuracy breaks down into groups.\n", "\n", "#### 6.1 Start with groups defined by their digit. Does the model have good accuracy for all 9’s? For all 3’s?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# Accuracy broken down by digit\n", "print_accuracy_by_group(results_df, group_col='digit', prediction_col='pred1', ground_truth_col='label')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# Plot accuracy broken down by digit identity\n", "plot_confidence_intervals_from_df(df_with_preds=results_df,\n", " label_col='label',\n", " pred_col='pred1',\n", " dataset_name='eval_data',\n", " group_name='digit',\n", " show_error=True,\n", " marker='o',\n", " use_legend=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 6.2 Next, look at color. Does the model have good accuracy for orange and blue?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# Print out accuracy broken down by color\n", "print_accuracy_by_group(results_df, group_col='color', prediction_col='pred1', ground_truth_col='label')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# Plot accuracy broken down by color\n", "plot_confidence_intervals_from_df(df_with_preds=results_df,\n", " label_col='label',\n", " pred_col='pred1',\n", " dataset_name='eval_data',\n", " group_name='color',\n", " show_error=True,\n", " use_legend=False)" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "#### 6.3. Finally, look at curviness. Does the model have good accuracy for all groups?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# Print accuracy broken down by curviness\n", "print_accuracy_by_group(results_df, group_col='curviness', prediction_col='pred1', ground_truth_col='label')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# Plot accuracy broken down by curviness\n", "plot_confidence_intervals_from_df(df_with_preds=results_df,\n", " label_col='label',\n", " pred_col='pred1',\n", " dataset_name='eval_data',\n", " group_name='curviness',\n", " use_legend=False,\n", " show_error=True)" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "#### 6.4. Review initial findings. You found disparities in accuracy. Is it bias?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "✅ **Check-in:** *Your task is to measure bias in the model by comparing accuracy across number groups that you care about. So far, you have explored a dataset that is annotated for these groups, and run your model on the data. 
You calculated overall accuracy to get an idea of how well the model performs on average, and you measured accuracy at the group level to compare performance for customer groups. You found disparities in accuracy. Is it bias?*" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 7. Compare bias metrics using digit groups\n", "\n", "Start with your fairness goal: all of your customers should experience comparable performance from your service. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "#### 7.1 Calculate disparity in accuracy between best- and worst-served numbers to get a feel for the bias in your evaluation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Difference in accuracy between best-served and worst-served groups\n", "digit_accuracy = get_group_accuracy_by_value(results_df, group_col='digit', prediction_col='pred1', ground_truth_col='label')\n", "print_absolute_vs_relative_accuracy(digit_accuracy)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You found a disparity. Because the error rate is simply one minus accuracy, the absolute disparity is the same whether you compute it on accuracy or on errors. \n", "\n", "#### 7.2 Calculate the ratio of errors.\n", "\n", "The ratio of errors tells you how often a disadvantaged group experiences negative outcomes, compared to another group." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print_absolute_vs_ratio_error(digit_accuracy, 'digit')" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "If the handwritten numbers have to be classified before passing through airport security, the error ratio tells you how much more often numbers from the worst-performing group get stopped for a pat-down, compared to the best-performing group." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 8. Apply bias metrics to color and curviness groups\n", "\n", "#### 8.1 Calculate absolute disparity in accuracy for groups by color and groups by curviness." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "color_accuracy = get_group_accuracy_by_value(results_df, group_col='color', prediction_col='pred1', ground_truth_col='label')\n", "print_absolute_vs_ratio_error(color_accuracy)\n", "print_absolute_vs_relative_accuracy(color_accuracy)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "curve_accuracy = get_group_accuracy_by_value(results_df, group_col='curviness', prediction_col='pred1', ground_truth_col='label')\n", "print_absolute_vs_ratio_error(curve_accuracy)\n", "print_absolute_vs_relative_accuracy(curve_accuracy)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 8.2 Interpret your results: is this model biased?" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "✅ **Check-in:** *Your task is to measure bias across several groups. You measured bias using absolute and relative differences in both accuracy and error. You found that there is disparity across digit and curviness groups. You may drive business decisions based on your tolerance for this level of perceived bias.*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 9. Apply metrics to group intersections\n", "\n", "You have measured bias based on one attribute at a time. Next, apply bias metrics to the intersections of groups, like \"blue 8's\", to understand how group disparities interact."
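] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `print_accuracy_by_intersection` and related helpers used below handle this for you. Conceptually, an intersectional breakdown is accuracy grouped by two annotation columns at once; the next cell is a rough sketch of that idea in plain pandas, not the helpers' exact implementation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Conceptual sketch: accuracy per digit-color intersection via a two-column groupby.\n", "# Assumes results_df carries the 'digit' and 'color' annotations alongside 'label' and 'pred1'.\n", "correct = results_df['label'] == results_df['pred1']\n", "intersection_accuracy = correct.groupby([results_df['digit'], results_df['color']]).mean()\n", "print(intersection_accuracy)"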
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 9.1 View accuracy by digit-color intersections" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# Print out the accuracy on images by digit-color intersections\n", "print_accuracy_by_intersection(results_df, group1_col='digit', group2_col='color', prediction_col='pred1', ground_truth_col='label')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 9.2 Plot accuracy on digit-color intersections" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# Plot accuracy on images by digit-color intersections\n", "plot_intersectional_confidence_intervals_from_df(results_df, label_col='label', pred_col='pred1', \n", " dataset_name='validation data', group_1='digit', group_2='color',\n", " use_legend=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 9.3 Calculate disparity on color-digit intersections" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "color_digit_accuracy = get_accuracy_by_intersection(results_df, group1_col='digit', \n", " group2_col='color', prediction_col='pred1', \n", " ground_truth_col='label')\n", "print_absolute_vs_ratio_error(color_digit_accuracy)\n", "print_absolute_vs_relative_accuracy(color_digit_accuracy)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 9.4 Interpret intersectional results\n", "\n", "Oranges get similar accuracy to blues.\n", "\n", "8's get above-average accuracy.\n", "\n", "However orange 8's get lower accuracy than the average, and much lower accuracy than blue 8's.\n", "\n", "Orange 3's get reasonable accuracy, but blue 3's drive the last-place performance of 3's overall." ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 10.Measure bias on a new evaluation set\n", "\n", "Corporate customers of your service may want to use it in their own applications. First, they need to verify that the service is not biased on its own end user data. Use what you learned to explore bias on the new evaluation set. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 10.1 Pull up the customer data and explore it: look at sample images\n", "\n", "The file structure and groups are the same as in the original. Look at samples to understand whether the images are similar." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "customer_data = get_val2(eval_data)\n", "plot_one_each(customer_data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You find that all the 4’s used terrible cameras, and their selfies are corrupted. 
Next, check whether this affects your determination of bias.\n", "\n", "#### 10.2 Run the baseline model on your customer data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "customer_results_df = run_baseline_model(base_model, customer_data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 10.3 Calculate accuracy on customer data: overall and broken down by digit" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "acc = compute_accuracy(customer_results_df, 'label', 'pred1')\n", "print(f'overall accuracy on customer data = {acc}')\n", "print_accuracy_by_group(customer_results_df, group_col='digit', prediction_col='pred1', ground_truth_col='label')\n", "plot_confidence_intervals_from_df(customer_results_df, 'label', 'pred1', 'eval_data', 'digit', use_legend=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 10.4 Compute disparity metrics" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "digit_accuracy = get_group_accuracy_by_value(customer_results_df, group_col='digit', prediction_col='pred1', ground_truth_col='label')\n", "print_absolute_vs_ratio_error(digit_accuracy)\n", "print_absolute_vs_relative_accuracy(digit_accuracy)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The corrupted images you found in 10.1 seem to affect the accuracy and disparity of the model. \n", "\n", "The worst-performing group has changed, and the overall disparity of best-vs-worst has increased." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "✅ **Check-in:** *Your task is to measure bias on multiple evaluation datasets. You evaluated the same model on an internal evaluation set, and on a dataset provided by your customer. While overall accuracy was stable, group-level differences and overall bias changed.*" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## 11. Attempt to mitigate bias against 3's\n", "\n", "We found disparities on 3's in the original dataset, and on 4's in the customer data. \n", "\n", "Mitigation strategies for these two situations may be different.\n", "\n", "Next, we will collect more training data for 3's and re-test on the original dataset." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 11.1 Collect more training data and re-train" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# This takes about 12s\n", "augmented_train_data = create_augmented_data_2()\n", "# This takes about 10s\n", "augmented_model = train_baseline_model(df=augmented_train_data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 11.2 Run the new model on your original evaluation set" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# Run the augmented model on the original eval_data\n", "augmented_results_df = run_baseline_model(augmented_model, eval_data)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 11.3 Calculate accuracy of the new model\n", "\n", "Does accuracy improve?"
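] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Small differences in overall accuracy can fall inside the confidence intervals from Section 5. As an optional check, the next cell reuses the `compute_confidence_bounds` utility to print a 95% interval for each model; if the intervals overlap heavily, a change in overall accuracy may not be meaningful." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional check: 95% confidence intervals for overall accuracy of each model,\n", "# reusing the compute_confidence_bounds utility from Section 5.\n", "lb_orig, ub_orig = compute_confidence_bounds(results_df, 'label', 'pred1')\n", "lb_aug, ub_aug = compute_confidence_bounds(augmented_results_df, 'label', 'pred1')\n", "print(f'Original model 95% CI:  [{round(lb_orig, 4)}, {round(ub_orig, 4)}]')\n", "print(f'Augmented model 95% CI: [{round(lb_aug, 4)}, {round(ub_aug, 4)}]')"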
] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# Original training data\n", "acc = compute_accuracy(results_df, 'label', 'pred1')\n", "print(f'Overall accuracy on original training + eval = {acc}')\n", "\n", "acc2 = compute_accuracy(augmented_results_df, 'label', 'pred1')\n", "print(f'Overall accuracy on augmented training + eval = {acc2}')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 11.4 Plot accuracy by digit\n", "\n", "Does performance on 3's improve? " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_confidence_intervals_from_two_dfs(results_df, augmented_results_df,\n", " label_col='label', pred_col='pred1',\n", " dataset_name_1='original data', dataset_name_2='augmented data',\n", " group_name='digit',\n", " use_legend=False, figsize=(12, 6), fontsize=16, tick_spacing=1,\n", " show_error=True, marker_1='o', marker_2='^')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 11.5 Calculate disparity\n", "\n", "Does disparity improve?" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "digit_accuracy = get_group_accuracy_by_value(augmented_results_df, group_col='digit', prediction_col='pred1', ground_truth_col='label')\n", "print_absolute_vs_ratio_error(digit_accuracy)\n", "print_absolute_vs_relative_accuracy(digit_accuracy)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "✅ **Check-in:** *You have assessed bias in the digit recognition service, using multiple evaluation datasets and two models. You have discovered disparities that vary based on both model and dataset, and you have interpreted the results. Great work!*" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3.7.10 ('rein-venv': venv)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.10" }, "vscode": { "interpreter": { "hash": "649521b0a185222e78d29b8363ccdc870673c68af113653d0fcb956939307f98" } } }, "nbformat": 4, "nbformat_minor": 1 }