{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Face detection using Amazon Rekognition" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***\n", "This notebook provides a walkthrough of [face detection API](https://docs.aws.amazon.com/rekognition/latest/dg/faces.html) in Amazon Rekognition to identify faces.\n", "***" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "# Initialize dependencies\n", "import boto3\n", "import botocore\n", "from IPython.display import HTML, display, Image as IImage\n", "import time\n", "\n", "# Initialize clients\n", "REGION = boto3.session.Session().region_name\n", "rekognition = boto3.client('rekognition', REGION)\n", "s3 = boto3.client('s3', REGION)\n", "\n", "%store -r bucket_name" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Detect faces in image\n", "***" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Show image\n", "image_name = \"media/looking_at_screen.jpg\"\n", "display(IImage(url=s3.generate_presigned_url('get_object', Params={'Bucket': bucket_name, 'Key': image_name})))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Call Rekognition to detect faces in the image\n", "\n", "" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "detect_faces_response = rekognition.detect_faces(\n", " Attributes=['ALL'],\n", " Image={\n", " 'S3Object': {\n", " 'Bucket': bucket_name,\n", " 'Name': image_name,\n", " }\n", " }\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Review the raw JSON reponse from Rekognition\n", "\n", "In the JSON response below, you will see faces, detected attributes, confidence score and additional information." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "display(detect_faces_response)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Display number of faces detected" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(\"Number of faces detected: {}\".format(len(detect_faces_response['FaceDetails'])))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Recognize faces in video\n", "Face recognition in video is an async operation. \n", "https://docs.aws.amazon.com/rekognition/latest/dg/faces-sqs-video.html. \n", "\n", "- First we start a face detection job which returns a Job Id.\n", "- We can then call `get_face_detection` to get the job status and after job is complete, we can get object metadata.\n", "- In production use cases, you would usually use StepFunction or SNS topic to get notified when job is complete.\n", "***" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Show video in the player" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "video_name = \"media/leaving.mp4\"\n", "s3_video_url = s3.generate_presigned_url('get_object', Params={'Bucket': bucket_name, 'Key': video_name})\n", "\n", "video_tag = \"\".format(s3_video_url)\n", "video_ui = \"
{ "cell_type": "markdown", "metadata": {}, "source": [ "# Detect faces in video\n", "Face detection in video is an asynchronous operation: \n", "https://docs.aws.amazon.com/rekognition/latest/dg/faces-sqs-video.html \n", "\n", "- First we start a face detection job, which returns a Job Id.\n", "- We can then call `get_face_detection` to get the job status and, after the job is complete, retrieve the face detection metadata.\n", "- In production use cases, you would typically use AWS Step Functions or an Amazon SNS topic to get notified when the job is complete.\n", "***" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Show video in the player" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "video_name = \"media/leaving.mp4\"\n", "s3_video_url = s3.generate_presigned_url('get_object', Params={'Bucket': bucket_name, 'Key': video_name})\n", "\n", "# Render the video inline using an HTML5 video tag that points to the presigned URL\n", "video_tag = \"<video controls='controls' autoplay width='640' height='360' name='Video' src='{0}'></video>\".format(s3_video_url)\n", "video_ui = \"<table><tr><td style='vertical-align: top'>{}</td></tr></table>\".format(video_tag)\n", "display(HTML(video_ui))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Call Rekognition to start a job for face detection" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "start_face_detection = rekognition.start_face_detection(\n", "    Video={\n", "        'S3Object': {\n", "            'Bucket': bucket_name,\n", "            'Name': video_name,\n", "        }\n", "    },\n", ")\n", "\n", "faces_job_id = start_face_detection['JobId']\n", "display(\"Job Id: {0}\".format(faces_job_id))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Additional (Optional) Request Attributes\n", "\n", "`StartFaceDetection` also accepts the optional request parameters `ClientRequestToken`, `FaceAttributes`, `JobTag`, and `NotificationChannel`:\n", "https://docs.aws.amazon.com/rekognition/latest/APIReference/API_StartFaceDetection.html\n", "\n", "The attributes returned for each detected face are described in `FaceDetail`:\n", "https://docs.aws.amazon.com/rekognition/latest/APIReference/API_FaceDetail.html\n" ] },
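{ "cell_type": "markdown", "metadata": {}, "source": [ "The cell below is a minimal sketch of how these optional parameters can be passed to `start_face_detection`. The `JobTag` value is illustrative, and the `ClientRequestToken` and `NotificationChannel` arguments are commented out because they require your own idempotency token, SNS topic, and IAM role ARN. Running it starts a separate job; the rest of this notebook continues to use the `faces_job_id` from the earlier request." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Start a face detection job with optional request parameters (values are illustrative)\n", "start_face_detection_with_options = rekognition.start_face_detection(\n", "    Video={\n", "        'S3Object': {\n", "            'Bucket': bucket_name,\n", "            'Name': video_name,\n", "        }\n", "    },\n", "    FaceAttributes='ALL',  # return all face attributes instead of the default subset\n", "    JobTag='face-detection-walkthrough',  # free-form tag to identify the job in notifications\n", "    # ClientRequestToken='my-idempotency-token',  # optional idempotency token (placeholder)\n", "    # NotificationChannel={  # optional: publish job completion to an SNS topic (placeholder ARNs)\n", "    #     'SNSTopicArn': 'arn:aws:sns:region:account-id:topic-name',\n", "    #     'RoleArn': 'arn:aws:iam::account-id:role/rekognition-sns-role'\n", "    # },\n", ")\n", "\n", "display(\"Job Id: {0}\".format(start_face_detection_with_options['JobId']))" ] },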
\".format(video_tag)\n", "display(HTML(video_ui))\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Call Rekognition to start a job for face detection" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "start_face_detection = rekognition.start_face_detection(\n", " Video={\n", " 'S3Object': {\n", " 'Bucket': bucket_name,\n", " 'Name': video_name,\n", " }\n", " },\n", ")\n", "\n", "faces_job_id = start_face_detection['JobId']\n", "display(\"Job Id: {0}\".format(faces_job_id))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Additional (Optional) Request Attributes\n", "\n", "ClientRequestToken, JobTag, MinConfidence, and NotificationChannel:\n", "https://docs.aws.amazon.com/rekognition/latest/APIReference/API_StartFaceDetection.html\n", "\n", "FaceDetail:\n", "https://docs.aws.amazon.com/rekognition/latest/APIReference/API_FaceDetail.html\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Wait for object detection job to complete\n", "\n", "In production use cases, you would usually use StepFunction or SNS topic to get notified when job is complete." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "get_face_detection = rekognition.get_face_detection(\n", " JobId=faces_job_id\n", ")\n", "\n", "while(get_face_detection['JobStatus'] == 'IN_PROGRESS'):\n", " time.sleep(5)\n", " print('.', end='')\n", " \n", " get_face_detection = rekognition.get_face_detection(\n", " JobId=faces_job_id)\n", " \n", "display(get_face_detection['JobStatus'])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Review raw JSON reponse from Rekognition\n", "\n", "In the JSON response below, you will see list of detected faces and attributes.\n", "For each detected face, you will see information like Timestamp" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "display(get_face_detection)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Display face detected by timestamp and alert when faces are not detected" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Faces detected in each frame\n", "prev_ts = 0\n", "threshold = 1000 # ms\n", "for face in get_face_detection['Faces']:\n", " ts = face[\"Timestamp\"]\n", " cconfidence = face[\"Face\"][\"Confidence\"]\n", " if ts-prev_ts>threshold:\n", " print(\"ALERT - no face detected for {} seconds\".format((ts-prev_ts)/1000))\n", " print(\"Detected face on Timestamp: {} with confidence: {}\".format(ts, cconfidence))\n", " prev_ts = ts" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "***\n", "### References\n", "- https://docs.aws.amazon.com/rekognition/latest/APIReference/API_DetectFaces.html\n", "- https://docs.aws.amazon.com/rekognition/latest/APIReference/API_StartFaceDetection.html\n", "- https://docs.aws.amazon.com/rekognition/latest/APIReference/API_GetFaceDetection.html\n", "\n", "***" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You have successfully used Amazon Rekognition to detect faces in images and videos." 
] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.5" } }, "nbformat": 4, "nbformat_minor": 4 }