{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Key Identity Verification APIs\n", "\n", "-------\n", "\n", "This lab provides hands-on experience with the key Amazon Rekognition APIs for identity verification. \n", "\n", "## Introduction \n", "\n", "-------\n", "\n", "In-person user identity verification is slow to scale, costly, and high friction for users. Machine-learning-powered facial biometrics can enable online user identity verification. Amazon Rekognition offers pre-trained facial recognition and analysis capabilities that you can quickly add to your user onboarding and authentication workflows to verify opted-in users' identities online. \n", "\n", "In this notebook, we'll use Amazon Rekognition's key APIs for identity verification. After running this notebook, you should be able to use the following APIs:\n", "\n", "- DetectFaces: Detects the 100 largest faces in the input image. \n", "- CompareFaces: Compares a face in the source input image with each of the 100 largest faces detected in the target input image.\n", "- CreateCollection: Creates a searchable index (collection) of faces. \n", "- IndexFaces: Detects faces in the input image, adds them to the specified collection, and returns face IDs that can be used in subsequent searches. \n", "- SearchFacesByImage: For a given input image, first detects the largest face in the image, then searches the specified collection for matching faces.\n", "- SearchFaces: For a given input face ID, searches the collection that the face belongs to for matching faces.\n", "- DeleteFaces: Deletes faces from a collection. 
You specify a collection ID and an array of face IDs to remove from the collection.\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import io\n", "import boto3\n", "import json\n", "from IPython.display import Image as IImage\n", "import pandas as pd\n", "\n", "# Retrieve the bucket name stored by the earlier setup notebook\n", "%store -r bucket_name\n", "mySession = boto3.session.Session()\n", "aws_region = mySession.region_name\n", "print(\"AWS Region: {}\".format(aws_region))\n", "print(\"AWS Bucket: {}\".format(bucket_name))\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Setup Clients \n", "-----\n", "Here we create clients for both the Amazon S3 and Amazon Rekognition APIs." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "s3_client = boto3.client('s3')\n", "rek_client = boto3.client('rekognition')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Display a Face" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "## Image of a face used throughout this lab\n", "face_image = \"face_6.jpeg\"\n", "print(face_image)\n", "display(IImage(url=s3_client.generate_presigned_url('get_object', \n", " Params={'Bucket': bucket_name, \n", " 'Key' : face_image})))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## DetectFaces API\n", "----\n", "\n", "[DetectFaces](https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DetectFaces.html) detects the 100 largest faces in the image. For each face detected, the operation returns face details. These details include a bounding box of the face, a confidence value (that the bounding box contains a face), and a fixed set of attributes such as facial landmarks (for example, coordinates of eyes and mouth), presence of beard, sunglasses, and so on.\n", "\n", "\n", "
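\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick, hedged sketch of what a DetectFaces call looks like, the cell below uses the `rek_client`, `bucket_name`, and `face_image` variables defined in the setup cells above. The response fields accessed (`FaceDetails`, `BoundingBox`, `Confidence`) follow the DetectFaces response shape; treat this as an illustration rather than the lab's required code.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch: detect faces in the sample S3 image and print a few details\n", "detect_response = rek_client.detect_faces(\n", "    Image={'S3Object': {'Bucket': bucket_name, 'Name': face_image}},\n", "    Attributes=['ALL'])\n", "\n", "for face_detail in detect_response['FaceDetails']:\n", "    # Each FaceDetail carries a bounding box and a detection confidence\n", "    print('BoundingBox:', face_detail['BoundingBox'])\n", "    print('Confidence :', face_detail['Confidence'])\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "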