/* * Copyright 2010-2023 Amazon.com, Inc. or its affiliates. All Rights Reserved. * * Licensed under the Apache License, Version 2.0 (the "License"). * You may not use this file except in compliance with the License. * A copy of the License is located at * * http://aws.amazon.com/apache2.0 * * or in the "license" file accompanying this file. This file is distributed * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either * express or implied. See the License for the specific language governing * permissions and limitations under the License. */ package com.amazonaws.services.rekognition.model; import java.io.Serializable; import com.amazonaws.AmazonWebServiceRequest; /** *
* Detects faces in the input image and adds them to the specified collection. *
** Amazon Rekognition doesn't save the actual faces that are detected. Instead, * the underlying detection algorithm first detects the faces in the input * image. For each face, the algorithm extracts facial features into a feature * vector, and stores it in the backend database. Amazon Rekognition uses * feature vectors when it performs face match and search operations using the * SearchFaces and SearchFacesByImage operations. *
** For more information, see Adding faces to a collection in the Amazon * Rekognition Developer Guide. *
** To get the number of faces in a collection, call DescribeCollection. *
*
* If you're using version 1.0 of the face detection model,
* IndexFaces
indexes the 15 largest faces in the input image.
* Later versions of the face detection model index the 100 largest faces in the
* input image.
*
* If you're using version 4 or later of the face model, image orientation
* information is not returned in the OrientationCorrection
field.
*
* To determine which version of the model you're using, call
* DescribeCollection and supply the collection ID. You can also get the
* model version from the value of FaceModelVersion
in the response
* from IndexFaces
*
* For more information, see Model Versioning in the Amazon Rekognition * Developer Guide. *
*
* If you provide the optional ExternalImageId
for the input image
* you provided, Amazon Rekognition associates this ID with all faces that it
* detects. When you call the ListFaces operation, the response returns
* the external ID. You can use this external image ID to create a client-side
* index to associate the faces with each image. You can then use the index to
* find all faces in an image.
*
* You can specify the maximum number of faces to index with the
* MaxFaces
input parameter. This is useful when you want to index
* the largest faces in an image and don't want to index smaller faces, such as
* those belonging to people standing in the background.
*
* The QualityFilter
input parameter allows you to filter out
* detected faces that don’t meet a required quality bar. The quality bar is
* based on a variety of common use cases. By default, IndexFaces
* chooses the quality bar that's used to filter faces. You can also explicitly
* choose the quality bar. Use QualityFilter
, to set the quality
* bar by specifying LOW
, MEDIUM
, or HIGH
* . If you do not want to filter detected faces, specify NONE
.
*
* To use quality filtering, you need a collection associated with version 3 of * the face model or higher. To get the version of the face model associated * with a collection, call DescribeCollection. *
*
* Information about faces detected in an image, but not indexed, is returned in
* an array of UnindexedFace objects, UnindexedFaces
. Faces
* aren't indexed for reasons such as:
*
* The number of faces detected exceeds the value of the MaxFaces
* request parameter.
*
* The face is too small compared to the image dimensions. *
** The face is too blurry. *
** The image is too dark. *
** The face has an extreme pose. *
** The face doesn’t have enough detail to be suitable for face search. *
*
* In response, the IndexFaces
operation returns an array of
* metadata for all detected faces, FaceRecords
. This includes:
*
* The bounding box, BoundingBox
, of the detected face.
*
* A confidence value, Confidence
, which indicates the confidence
* that the bounding box contains a face.
*
* A face ID, FaceId
, assigned by the service for each face that's
* detected and stored.
*
* An image ID, ImageId
, assigned by the service for the input
* image.
*
* If you request ALL
or specific facial attributes (e.g.,
* FACE_OCCLUDED
) by using the detectionAttributes parameter,
* Amazon Rekognition returns detailed facial attributes, such as facial
* landmarks (for example, location of eye and mouth), facial occlusion, and
* other facial attributes.
*
* If you provide the same image, specify the same collection, and use the same
* external ID in the IndexFaces
operation, Amazon Rekognition
* doesn't save duplicate face metadata.
*
* The input image is passed either as base64-encoded image bytes, or as a * reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call * Amazon Rekognition operations, passing image bytes isn't supported. The image * must be formatted as a PNG or JPEG file. *
*
* This operation requires permissions to perform the
* rekognition:IndexFaces
action.
*
* The ID of an existing collection to which you want to add the faces that * are detected in the input images. *
*
* Constraints:
* Length: 1 - 255
* Pattern: [a-zA-Z0-9_.\-]+
*/
private String collectionId;
/**
*
* The input image as base64-encoded bytes or an S3 object. If you use the * AWS CLI to call Amazon Rekognition operations, passing base64-encoded * image bytes isn't supported. *
*
* If you are using an AWS SDK to call Amazon Rekognition, you might not
* need to base64-encode image bytes passed using the Bytes
* field. For more information, see Images in the Amazon Rekognition
* developer guide.
*
* The ID you want to assign to all the faces detected in the image. *
*
* Constraints:
* Length: 1 - 255
* Pattern: [a-zA-Z0-9_.\-:]+
*/
private String externalImageId;
/**
*
* An array of facial attributes you want to be returned. A
* DEFAULT
subset of facial attributes -
* BoundingBox
, Confidence
, Pose
,
* Quality
, and Landmarks
- will always be
* returned. You can request for specific facial attributes (in addition to
* the default list) - by using ["DEFAULT", "FACE_OCCLUDED"]
or
* just ["FACE_OCCLUDED"]
. You can request for all facial
* attributes by using ["ALL"]
. Requesting more attributes may
* increase response time.
*
* If you provide both, ["ALL", "DEFAULT"]
, the service uses a
* logical AND operator to determine which attributes to return (in this
* case, all attributes).
*
* The maximum number of faces to index. The value of MaxFaces
* must be greater than or equal to 1. IndexFaces
returns no
* more than 100 detected faces in an image, even if you specify a larger
* value for MaxFaces
.
*
* If IndexFaces
detects more faces than the value of
* MaxFaces
, the faces with the lowest quality are filtered out
* first. If there are still more faces than the value of
* MaxFaces
, the faces with the smallest bounding boxes are
* filtered out (up to the number that's needed to satisfy the value of
* MaxFaces
). Information about the unindexed faces is
* available in the UnindexedFaces
array.
*
* The faces that are returned by IndexFaces
are sorted by the
* largest face bounding box size to the smallest size, in descending order.
*
* MaxFaces
can be used with a collection associated with any
* version of the face model.
*
* Constraints:
* Range: 1 -
*/
private Integer maxFaces;
/**
*
* A filter that specifies a quality bar for how much filtering is done to
* identify faces. Filtered faces aren't indexed. If you specify
* AUTO
, Amazon Rekognition chooses the quality bar. If you
* specify LOW
, MEDIUM
, or HIGH
,
* filtering removes all faces that don’t meet the chosen quality bar. The
* default value is AUTO
. The quality bar is based on a variety
* of common use cases. Low-quality detections can occur for a number of
* reasons. Some examples are an object that's misidentified as a face, a
* face that's too blurry, or a face with a pose that's too extreme to use.
* If you specify NONE
, no filtering is performed.
*
* To use quality filtering, the collection you are using must be associated * with version 3 of the face model or higher. *
*
* Constraints:
* Allowed Values: NONE, AUTO, LOW, MEDIUM, HIGH
*/
private String qualityFilter;
/**
* Default constructor for IndexFacesRequest object. Callers should use the
* setter or fluent setter (with...) methods to initialize any additional
* object members.
*/
public IndexFacesRequest() {
}
/**
* Constructs a new IndexFacesRequest object. Callers should use the setter
* or fluent setter (with...) methods to initialize any additional object
* members.
*
* @param collectionId
* The ID of an existing collection to which you want to add the * faces that are detected in the input images. *
* @param image* The input image as base64-encoded bytes or an S3 object. If * you use the AWS CLI to call Amazon Rekognition operations, * passing base64-encoded image bytes isn't supported. *
*
* If you are using an AWS SDK to call Amazon Rekognition, you
* might not need to base64-encode image bytes passed using the
* Bytes
field. For more information, see Images in
* the Amazon Rekognition developer guide.
*
* The ID of an existing collection to which you want to add the faces that * are detected in the input images. *
*
* Constraints:
* Length: 1 - 255
* Pattern: [a-zA-Z0-9_.\-]+
*
* @return
* The ID of an existing collection to which you want to add the * faces that are detected in the input images. *
*/ public String getCollectionId() { return collectionId; } /** ** The ID of an existing collection to which you want to add the faces that * are detected in the input images. *
*
* Constraints:
* Length: 1 - 255
* Pattern: [a-zA-Z0-9_.\-]+
*
* @param collectionId
* The ID of an existing collection to which you want to add the * faces that are detected in the input images. *
*/ public void setCollectionId(String collectionId) { this.collectionId = collectionId; } /** ** The ID of an existing collection to which you want to add the faces that * are detected in the input images. *
** Returns a reference to this object so that method calls can be chained * together. *
* Constraints:
* Length: 1 - 255
* Pattern: [a-zA-Z0-9_.\-]+
*
* @param collectionId
* The ID of an existing collection to which you want to add the * faces that are detected in the input images. *
* @return A reference to this updated object so that method calls can be * chained together. */ public IndexFacesRequest withCollectionId(String collectionId) { this.collectionId = collectionId; return this; } /** ** The input image as base64-encoded bytes or an S3 object. If you use the * AWS CLI to call Amazon Rekognition operations, passing base64-encoded * image bytes isn't supported. *
*
* If you are using an AWS SDK to call Amazon Rekognition, you might not
* need to base64-encode image bytes passed using the Bytes
* field. For more information, see Images in the Amazon Rekognition
* developer guide.
*
* The input image as base64-encoded bytes or an S3 object. If you * use the AWS CLI to call Amazon Rekognition operations, passing * base64-encoded image bytes isn't supported. *
*
* If you are using an AWS SDK to call Amazon Rekognition, you might
* not need to base64-encode image bytes passed using the
* Bytes
field. For more information, see Images in the
* Amazon Rekognition developer guide.
*
* The input image as base64-encoded bytes or an S3 object. If you use the * AWS CLI to call Amazon Rekognition operations, passing base64-encoded * image bytes isn't supported. *
*
* If you are using an AWS SDK to call Amazon Rekognition, you might not
* need to base64-encode image bytes passed using the Bytes
* field. For more information, see Images in the Amazon Rekognition
* developer guide.
*
* The input image as base64-encoded bytes or an S3 object. If * you use the AWS CLI to call Amazon Rekognition operations, * passing base64-encoded image bytes isn't supported. *
*
* If you are using an AWS SDK to call Amazon Rekognition, you
* might not need to base64-encode image bytes passed using the
* Bytes
field. For more information, see Images in
* the Amazon Rekognition developer guide.
*
* The input image as base64-encoded bytes or an S3 object. If you use the * AWS CLI to call Amazon Rekognition operations, passing base64-encoded * image bytes isn't supported. *
*
* If you are using an AWS SDK to call Amazon Rekognition, you might not
* need to base64-encode image bytes passed using the Bytes
* field. For more information, see Images in the Amazon Rekognition
* developer guide.
*
* Returns a reference to this object so that method calls can be chained * together. * * @param image
* The input image as base64-encoded bytes or an S3 object. If * you use the AWS CLI to call Amazon Rekognition operations, * passing base64-encoded image bytes isn't supported. *
*
* If you are using an AWS SDK to call Amazon Rekognition, you
* might not need to base64-encode image bytes passed using the
* Bytes
field. For more information, see Images in
* the Amazon Rekognition developer guide.
*
* The ID you want to assign to all the faces detected in the image. *
*
* Constraints:
* Length: 1 - 255
* Pattern: [a-zA-Z0-9_.\-:]+
*
* @return
* The ID you want to assign to all the faces detected in the image. *
*/ public String getExternalImageId() { return externalImageId; } /** ** The ID you want to assign to all the faces detected in the image. *
*
* Constraints:
* Length: 1 - 255
* Pattern: [a-zA-Z0-9_.\-:]+
*
* @param externalImageId
* The ID you want to assign to all the faces detected in the * image. *
*/ public void setExternalImageId(String externalImageId) { this.externalImageId = externalImageId; } /** ** The ID you want to assign to all the faces detected in the image. *
** Returns a reference to this object so that method calls can be chained * together. *
* Constraints:
* Length: 1 - 255
* Pattern: [a-zA-Z0-9_.\-:]+
*
* @param externalImageId
* The ID you want to assign to all the faces detected in the * image. *
* @return A reference to this updated object so that method calls can be * chained together. */ public IndexFacesRequest withExternalImageId(String externalImageId) { this.externalImageId = externalImageId; return this; } /** *
* An array of facial attributes you want to be returned. A
* DEFAULT
subset of facial attributes -
* BoundingBox
, Confidence
, Pose
,
* Quality
, and Landmarks
- will always be
* returned. You can request for specific facial attributes (in addition to
* the default list) - by using ["DEFAULT", "FACE_OCCLUDED"]
or
* just ["FACE_OCCLUDED"]
. You can request for all facial
* attributes by using ["ALL"]
. Requesting more attributes may
* increase response time.
*
* If you provide both, ["ALL", "DEFAULT"]
, the service uses a
* logical AND operator to determine which attributes to return (in this
* case, all attributes).
*
* An array of facial attributes you want to be returned. A
* DEFAULT
subset of facial attributes -
* BoundingBox
, Confidence
,
* Pose
, Quality
, and
* Landmarks
- will always be returned. You can request
* for specific facial attributes (in addition to the default list)
* - by using ["DEFAULT", "FACE_OCCLUDED"]
or just
* ["FACE_OCCLUDED"]
. You can request for all facial
* attributes by using ["ALL"]
. Requesting more
* attributes may increase response time.
*
* If you provide both, ["ALL", "DEFAULT"]
, the service
* uses a logical AND operator to determine which attributes to
* return (in this case, all attributes).
*
* An array of facial attributes you want to be returned. A
* DEFAULT
subset of facial attributes -
* BoundingBox
, Confidence
, Pose
,
* Quality
, and Landmarks
- will always be
* returned. You can request for specific facial attributes (in addition to
* the default list) - by using ["DEFAULT", "FACE_OCCLUDED"]
or
* just ["FACE_OCCLUDED"]
. You can request for all facial
* attributes by using ["ALL"]
. Requesting more attributes may
* increase response time.
*
* If you provide both, ["ALL", "DEFAULT"]
, the service uses a
* logical AND operator to determine which attributes to return (in this
* case, all attributes).
*
* An array of facial attributes you want to be returned. A
* DEFAULT
subset of facial attributes -
* BoundingBox
, Confidence
,
* Pose
, Quality
, and
* Landmarks
- will always be returned. You can
* request for specific facial attributes (in addition to the
* default list) - by using
* ["DEFAULT", "FACE_OCCLUDED"]
or just
* ["FACE_OCCLUDED"]
. You can request for all facial
* attributes by using ["ALL"]
. Requesting more
* attributes may increase response time.
*
* If you provide both, ["ALL", "DEFAULT"]
, the
* service uses a logical AND operator to determine which
* attributes to return (in this case, all attributes).
*
* An array of facial attributes you want to be returned. A
* DEFAULT
subset of facial attributes -
* BoundingBox
, Confidence
, Pose
,
* Quality
, and Landmarks
- will always be
* returned. You can request for specific facial attributes (in addition to
* the default list) - by using ["DEFAULT", "FACE_OCCLUDED"]
or
* just ["FACE_OCCLUDED"]
. You can request for all facial
* attributes by using ["ALL"]
. Requesting more attributes may
* increase response time.
*
* If you provide both, ["ALL", "DEFAULT"]
, the service uses a
* logical AND operator to determine which attributes to return (in this
* case, all attributes).
*
* Returns a reference to this object so that method calls can be chained * together. * * @param detectionAttributes
* An array of facial attributes you want to be returned. A
* DEFAULT
subset of facial attributes -
* BoundingBox
, Confidence
,
* Pose
, Quality
, and
* Landmarks
- will always be returned. You can
* request for specific facial attributes (in addition to the
* default list) - by using
* ["DEFAULT", "FACE_OCCLUDED"]
or just
* ["FACE_OCCLUDED"]
. You can request for all facial
* attributes by using ["ALL"]
. Requesting more
* attributes may increase response time.
*
* If you provide both, ["ALL", "DEFAULT"]
, the
* service uses a logical AND operator to determine which
* attributes to return (in this case, all attributes).
*
* An array of facial attributes you want to be returned. A
* DEFAULT
subset of facial attributes -
* BoundingBox
, Confidence
, Pose
,
* Quality
, and Landmarks
- will always be
* returned. You can request for specific facial attributes (in addition to
* the default list) - by using ["DEFAULT", "FACE_OCCLUDED"]
or
* just ["FACE_OCCLUDED"]
. You can request for all facial
* attributes by using ["ALL"]
. Requesting more attributes may
* increase response time.
*
* If you provide both, ["ALL", "DEFAULT"]
, the service uses a
* logical AND operator to determine which attributes to return (in this
* case, all attributes).
*
* Returns a reference to this object so that method calls can be chained * together. * * @param detectionAttributes
* An array of facial attributes you want to be returned. A
* DEFAULT
subset of facial attributes -
* BoundingBox
, Confidence
,
* Pose
, Quality
, and
* Landmarks
- will always be returned. You can
* request for specific facial attributes (in addition to the
* default list) - by using
* ["DEFAULT", "FACE_OCCLUDED"]
or just
* ["FACE_OCCLUDED"]
. You can request for all facial
* attributes by using ["ALL"]
. Requesting more
* attributes may increase response time.
*
* If you provide both, ["ALL", "DEFAULT"]
, the
* service uses a logical AND operator to determine which
* attributes to return (in this case, all attributes).
*
* The maximum number of faces to index. The value of MaxFaces
* must be greater than or equal to 1. IndexFaces
returns no
* more than 100 detected faces in an image, even if you specify a larger
* value for MaxFaces
.
*
* If IndexFaces
detects more faces than the value of
* MaxFaces
, the faces with the lowest quality are filtered out
* first. If there are still more faces than the value of
* MaxFaces
, the faces with the smallest bounding boxes are
* filtered out (up to the number that's needed to satisfy the value of
* MaxFaces
). Information about the unindexed faces is
* available in the UnindexedFaces
array.
*
* The faces that are returned by IndexFaces
are sorted by the
* largest face bounding box size to the smallest size, in descending order.
*
* MaxFaces
can be used with a collection associated with any
* version of the face model.
*
* Constraints:
* Range: 1 -
*
* @return
* The maximum number of faces to index. The value of
* MaxFaces
must be greater than or equal to 1.
* IndexFaces
returns no more than 100 detected faces
* in an image, even if you specify a larger value for
* MaxFaces
.
*
* If IndexFaces
detects more faces than the value of
* MaxFaces
, the faces with the lowest quality are
* filtered out first. If there are still more faces than the value
* of MaxFaces
, the faces with the smallest bounding
* boxes are filtered out (up to the number that's needed to satisfy
* the value of MaxFaces
). Information about the
* unindexed faces is available in the UnindexedFaces
* array.
*
* The faces that are returned by IndexFaces
are sorted
* by the largest face bounding box size to the smallest size, in
* descending order.
*
* MaxFaces
can be used with a collection associated
* with any version of the face model.
*
* The maximum number of faces to index. The value of MaxFaces
* must be greater than or equal to 1. IndexFaces
returns no
* more than 100 detected faces in an image, even if you specify a larger
* value for MaxFaces
.
*
* If IndexFaces
detects more faces than the value of
* MaxFaces
, the faces with the lowest quality are filtered out
* first. If there are still more faces than the value of
* MaxFaces
, the faces with the smallest bounding boxes are
* filtered out (up to the number that's needed to satisfy the value of
* MaxFaces
). Information about the unindexed faces is
* available in the UnindexedFaces
array.
*
* The faces that are returned by IndexFaces
are sorted by the
* largest face bounding box size to the smallest size, in descending order.
*
* MaxFaces
can be used with a collection associated with any
* version of the face model.
*
* Constraints:
* Range: 1 -
*
* @param maxFaces
* The maximum number of faces to index. The value of
* MaxFaces
must be greater than or equal to 1.
* IndexFaces
returns no more than 100 detected
* faces in an image, even if you specify a larger value for
* MaxFaces
.
*
* If IndexFaces
detects more faces than the value
* of MaxFaces
, the faces with the lowest quality
* are filtered out first. If there are still more faces than the
* value of MaxFaces
, the faces with the smallest
* bounding boxes are filtered out (up to the number that's
* needed to satisfy the value of MaxFaces
).
* Information about the unindexed faces is available in the
* UnindexedFaces
array.
*
* The faces that are returned by IndexFaces
are
* sorted by the largest face bounding box size to the smallest
* size, in descending order.
*
* MaxFaces
can be used with a collection associated
* with any version of the face model.
*
* The maximum number of faces to index. The value of MaxFaces
* must be greater than or equal to 1. IndexFaces
returns no
* more than 100 detected faces in an image, even if you specify a larger
* value for MaxFaces
.
*
* If IndexFaces
detects more faces than the value of
* MaxFaces
, the faces with the lowest quality are filtered out
* first. If there are still more faces than the value of
* MaxFaces
, the faces with the smallest bounding boxes are
* filtered out (up to the number that's needed to satisfy the value of
* MaxFaces
). Information about the unindexed faces is
* available in the UnindexedFaces
array.
*
* The faces that are returned by IndexFaces
are sorted by the
* largest face bounding box size to the smallest size, in descending order.
*
* MaxFaces
can be used with a collection associated with any
* version of the face model.
*
* Returns a reference to this object so that method calls can be chained * together. *
* Constraints:
* Range: 1 -
*
* @param maxFaces
* The maximum number of faces to index. The value of
* MaxFaces
must be greater than or equal to 1.
* IndexFaces
returns no more than 100 detected
* faces in an image, even if you specify a larger value for
* MaxFaces
.
*
* If IndexFaces
detects more faces than the value
* of MaxFaces
, the faces with the lowest quality
* are filtered out first. If there are still more faces than the
* value of MaxFaces
, the faces with the smallest
* bounding boxes are filtered out (up to the number that's
* needed to satisfy the value of MaxFaces
).
* Information about the unindexed faces is available in the
* UnindexedFaces
array.
*
* The faces that are returned by IndexFaces
are
* sorted by the largest face bounding box size to the smallest
* size, in descending order.
*
* MaxFaces
can be used with a collection associated
* with any version of the face model.
*
* A filter that specifies a quality bar for how much filtering is done to
* identify faces. Filtered faces aren't indexed. If you specify
* AUTO
, Amazon Rekognition chooses the quality bar. If you
* specify LOW
, MEDIUM
, or HIGH
,
* filtering removes all faces that don’t meet the chosen quality bar. The
* default value is AUTO
. The quality bar is based on a variety
* of common use cases. Low-quality detections can occur for a number of
* reasons. Some examples are an object that's misidentified as a face, a
* face that's too blurry, or a face with a pose that's too extreme to use.
* If you specify NONE
, no filtering is performed.
*
* To use quality filtering, the collection you are using must be associated * with version 3 of the face model or higher. *
*
* Constraints:
* Allowed Values: NONE, AUTO, LOW, MEDIUM, HIGH
*
* @return
* A filter that specifies a quality bar for how much filtering is
* done to identify faces. Filtered faces aren't indexed. If you
* specify AUTO
, Amazon Rekognition chooses the quality
* bar. If you specify LOW
, MEDIUM
, or
* HIGH
, filtering removes all faces that don’t meet
* the chosen quality bar. The default value is AUTO
.
* The quality bar is based on a variety of common use cases.
* Low-quality detections can occur for a number of reasons. Some
* examples are an object that's misidentified as a face, a face
* that's too blurry, or a face with a pose that's too extreme to
* use. If you specify NONE
, no filtering is performed.
*
* To use quality filtering, the collection you are using must be * associated with version 3 of the face model or higher. *
* @see QualityFilter */ public String getQualityFilter() { return qualityFilter; } /** *
* A filter that specifies a quality bar for how much filtering is done to
* identify faces. Filtered faces aren't indexed. If you specify
* AUTO
, Amazon Rekognition chooses the quality bar. If you
* specify LOW
, MEDIUM
, or HIGH
,
* filtering removes all faces that don’t meet the chosen quality bar. The
* default value is AUTO
. The quality bar is based on a variety
* of common use cases. Low-quality detections can occur for a number of
* reasons. Some examples are an object that's misidentified as a face, a
* face that's too blurry, or a face with a pose that's too extreme to use.
* If you specify NONE
, no filtering is performed.
*
* To use quality filtering, the collection you are using must be associated * with version 3 of the face model or higher. *
*
* Constraints:
* Allowed Values: NONE, AUTO, LOW, MEDIUM, HIGH
*
* @param qualityFilter
* A filter that specifies a quality bar for how much filtering
* is done to identify faces. Filtered faces aren't indexed. If
* you specify AUTO
, Amazon Rekognition chooses the
* quality bar. If you specify LOW
,
* MEDIUM
, or HIGH
, filtering removes
* all faces that don’t meet the chosen quality bar. The default
* value is AUTO
. The quality bar is based on a
* variety of common use cases. Low-quality detections can occur
* for a number of reasons. Some examples are an object that's
* misidentified as a face, a face that's too blurry, or a face
* with a pose that's too extreme to use. If you specify
* NONE
, no filtering is performed.
*
* To use quality filtering, the collection you are using must be * associated with version 3 of the face model or higher. *
* @see QualityFilter */ public void setQualityFilter(String qualityFilter) { this.qualityFilter = qualityFilter; } /** *
* A filter that specifies a quality bar for how much filtering is done to
* identify faces. Filtered faces aren't indexed. If you specify
* AUTO
, Amazon Rekognition chooses the quality bar. If you
* specify LOW
, MEDIUM
, or HIGH
,
* filtering removes all faces that don’t meet the chosen quality bar. The
* default value is AUTO
. The quality bar is based on a variety
* of common use cases. Low-quality detections can occur for a number of
* reasons. Some examples are an object that's misidentified as a face, a
* face that's too blurry, or a face with a pose that's too extreme to use.
* If you specify NONE
, no filtering is performed.
*
* To use quality filtering, the collection you are using must be associated * with version 3 of the face model or higher. *
** Returns a reference to this object so that method calls can be chained * together. *
* Constraints:
* Allowed Values: NONE, AUTO, LOW, MEDIUM, HIGH
*
* @param qualityFilter
* A filter that specifies a quality bar for how much filtering
* is done to identify faces. Filtered faces aren't indexed. If
* you specify AUTO
, Amazon Rekognition chooses the
* quality bar. If you specify LOW
,
* MEDIUM
, or HIGH
, filtering removes
* all faces that don’t meet the chosen quality bar. The default
* value is AUTO
. The quality bar is based on a
* variety of common use cases. Low-quality detections can occur
* for a number of reasons. Some examples are an object that's
* misidentified as a face, a face that's too blurry, or a face
* with a pose that's too extreme to use. If you specify
* NONE
, no filtering is performed.
*
* To use quality filtering, the collection you are using must be * associated with version 3 of the face model or higher. *
* @return A reference to this updated object so that method calls can be * chained together. * @see QualityFilter */ public IndexFacesRequest withQualityFilter(String qualityFilter) { this.qualityFilter = qualityFilter; return this; } /** *
* A filter that specifies a quality bar for how much filtering is done to
* identify faces. Filtered faces aren't indexed. If you specify
* AUTO
, Amazon Rekognition chooses the quality bar. If you
* specify LOW
, MEDIUM
, or HIGH
,
* filtering removes all faces that don’t meet the chosen quality bar. The
* default value is AUTO
. The quality bar is based on a variety
* of common use cases. Low-quality detections can occur for a number of
* reasons. Some examples are an object that's misidentified as a face, a
* face that's too blurry, or a face with a pose that's too extreme to use.
* If you specify NONE
, no filtering is performed.
*
* To use quality filtering, the collection you are using must be associated * with version 3 of the face model or higher. *
*
* Constraints:
* Allowed Values: NONE, AUTO, LOW, MEDIUM, HIGH
*
* @param qualityFilter
* A filter that specifies a quality bar for how much filtering
* is done to identify faces. Filtered faces aren't indexed. If
* you specify AUTO
, Amazon Rekognition chooses the
* quality bar. If you specify LOW
,
* MEDIUM
, or HIGH
, filtering removes
* all faces that don’t meet the chosen quality bar. The default
* value is AUTO
. The quality bar is based on a
* variety of common use cases. Low-quality detections can occur
* for a number of reasons. Some examples are an object that's
* misidentified as a face, a face that's too blurry, or a face
* with a pose that's too extreme to use. If you specify
* NONE
, no filtering is performed.
*
* To use quality filtering, the collection you are using must be * associated with version 3 of the face model or higher. *
* @see QualityFilter */ public void setQualityFilter(QualityFilter qualityFilter) { this.qualityFilter = qualityFilter.toString(); } /** *
* A filter that specifies a quality bar for how much filtering is done to
* identify faces. Filtered faces aren't indexed. If you specify
* AUTO
, Amazon Rekognition chooses the quality bar. If you
* specify LOW
, MEDIUM
, or HIGH
,
* filtering removes all faces that don’t meet the chosen quality bar. The
* default value is AUTO
. The quality bar is based on a variety
* of common use cases. Low-quality detections can occur for a number of
* reasons. Some examples are an object that's misidentified as a face, a
* face that's too blurry, or a face with a pose that's too extreme to use.
* If you specify NONE
, no filtering is performed.
*
* To use quality filtering, the collection you are using must be associated * with version 3 of the face model or higher. *
** Returns a reference to this object so that method calls can be chained * together. *
* Constraints:
* Allowed Values: NONE, AUTO, LOW, MEDIUM, HIGH
*
* @param qualityFilter
* A filter that specifies a quality bar for how much filtering
* is done to identify faces. Filtered faces aren't indexed. If
* you specify AUTO
, Amazon Rekognition chooses the
* quality bar. If you specify LOW
,
* MEDIUM
, or HIGH
, filtering removes
* all faces that don’t meet the chosen quality bar. The default
* value is AUTO
. The quality bar is based on a
* variety of common use cases. Low-quality detections can occur
* for a number of reasons. Some examples are an object that's
* misidentified as a face, a face that's too blurry, or a face
* with a pose that's too extreme to use. If you specify
* NONE
, no filtering is performed.
*
* To use quality filtering, the collection you are using must be * associated with version 3 of the face model or higher. *
* @return A reference to this updated object so that method calls can be * chained together. * @see QualityFilter */ public IndexFacesRequest withQualityFilter(QualityFilter qualityFilter) { this.qualityFilter = qualityFilter.toString(); return this; } /** * Returns a string representation of this object; useful for testing and * debugging. * * @return A string representation of this object. * @see java.lang.Object#toString() */ @Override public String toString() { StringBuilder sb = new StringBuilder(); sb.append("{"); if (getCollectionId() != null) sb.append("CollectionId: " + getCollectionId() + ","); if (getImage() != null) sb.append("Image: " + getImage() + ","); if (getExternalImageId() != null) sb.append("ExternalImageId: " + getExternalImageId() + ","); if (getDetectionAttributes() != null) sb.append("DetectionAttributes: " + getDetectionAttributes() + ","); if (getMaxFaces() != null) sb.append("MaxFaces: " + getMaxFaces() + ","); if (getQualityFilter() != null) sb.append("QualityFilter: " + getQualityFilter()); sb.append("}"); return sb.toString(); } @Override public int hashCode() { final int prime = 31; int hashCode = 1; hashCode = prime * hashCode + ((getCollectionId() == null) ? 0 : getCollectionId().hashCode()); hashCode = prime * hashCode + ((getImage() == null) ? 0 : getImage().hashCode()); hashCode = prime * hashCode + ((getExternalImageId() == null) ? 0 : getExternalImageId().hashCode()); hashCode = prime * hashCode + ((getDetectionAttributes() == null) ? 0 : getDetectionAttributes().hashCode()); hashCode = prime * hashCode + ((getMaxFaces() == null) ? 0 : getMaxFaces().hashCode()); hashCode = prime * hashCode + ((getQualityFilter() == null) ? 0 : getQualityFilter().hashCode()); return hashCode; } @Override public boolean equals(Object obj) { if (this == obj) return true; if (obj == null) return false; if (obj instanceof IndexFacesRequest == false) return false; IndexFacesRequest other = (IndexFacesRequest) obj; if (other.getCollectionId() == null ^ this.getCollectionId() == null) return false; if (other.getCollectionId() != null && other.getCollectionId().equals(this.getCollectionId()) == false) return false; if (other.getImage() == null ^ this.getImage() == null) return false; if (other.getImage() != null && other.getImage().equals(this.getImage()) == false) return false; if (other.getExternalImageId() == null ^ this.getExternalImageId() == null) return false; if (other.getExternalImageId() != null && other.getExternalImageId().equals(this.getExternalImageId()) == false) return false; if (other.getDetectionAttributes() == null ^ this.getDetectionAttributes() == null) return false; if (other.getDetectionAttributes() != null && other.getDetectionAttributes().equals(this.getDetectionAttributes()) == false) return false; if (other.getMaxFaces() == null ^ this.getMaxFaces() == null) return false; if (other.getMaxFaces() != null && other.getMaxFaces().equals(this.getMaxFaces()) == false) return false; if (other.getQualityFilter() == null ^ this.getQualityFilter() == null) return false; if (other.getQualityFilter() != null && other.getQualityFilter().equals(this.getQualityFilter()) == false) return false; return true; } }