/*
 * Copyright 2010-2023 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 *  http://aws.amazon.com/apache2.0
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */

package com.amazonaws.services.rekognition.model;

import java.io.Serializable;

public class IndexFacesResult implements Serializable {
    /**
     * <p>
     * An array of faces detected and added to the collection. For more
     * information, see Searching Faces in a Collection in the Amazon
     * Rekognition Developer Guide.
     * </p>
     */
    private java.util.List<FaceRecord> faceRecords;
    /**
     * <p>
     * If your collection is associated with a face detection model that's
     * later than version 3.0, the value of <code>OrientationCorrection</code>
     * is always null and no orientation information is returned.
     * </p>
     * <p>
     * If your collection is associated with a face detection model that's
     * version 3.0 or earlier, the following applies:
     * </p>
     * <p>
     * If the input image is in .jpeg format, it might contain exchangeable
     * image file format (Exif) metadata that includes the image's orientation.
     * Amazon Rekognition uses this orientation information to perform image
     * correction - the bounding box coordinates are translated to represent
     * object locations after the orientation information in the Exif metadata
     * is used to correct the image orientation. Images in .png format don't
     * contain Exif metadata. The value of <code>OrientationCorrection</code>
     * is null.
     * </p>
     * <p>
     * If the image doesn't contain orientation information in its Exif
     * metadata, Amazon Rekognition returns an estimated orientation (ROTATE_0,
     * ROTATE_90, ROTATE_180, ROTATE_270). Amazon Rekognition doesn't perform
     * image correction in this case. The bounding box coordinates aren't
     * translated and represent the object locations before the image is
     * rotated.
     * </p>
     * <p>
     * Bounding box information is returned in the <code>FaceRecords</code>
     * array. You can get the version of the face detection model by calling
     * <a>DescribeCollection</a>.
     * </p>
     * <p>
     * <b>Constraints:</b><br/>
     * <b>Allowed Values: </b>ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270
     */
    private String orientationCorrection;
    /**
     * <p>
     * The version number of the face detection model that's associated with
     * the input collection (<code>CollectionId</code>).
     * </p>
     */
    private String faceModelVersion;
    /**
     * <p>
     * An array of faces that were detected in the image but weren't indexed.
     * They weren't indexed because the quality filter identified them as low
     * quality, or the <code>MaxFaces</code> request parameter filtered them
     * out. To use the quality filter, you specify the
     * <code>QualityFilter</code> request parameter.
     * </p>
     */
    private java.util.List<UnindexedFace> unindexedFaces;
    /**
     * <p>
     * An array of faces detected and added to the collection. For more
     * information, see Searching Faces in a Collection in the Amazon
     * Rekognition Developer Guide.
     * </p>
     *
     * @return <p>
     *         An array of faces detected and added to the collection. For more
     *         information, see Searching Faces in a Collection in the Amazon
     *         Rekognition Developer Guide.
     *         </p>
     */
    public java.util.List<FaceRecord> getFaceRecords() {
        return faceRecords;
    }

    /**
     * <p>
     * An array of faces detected and added to the collection. For more
     * information, see Searching Faces in a Collection in the Amazon
     * Rekognition Developer Guide.
     * </p>
     *
     * @param faceRecords <p>
     *            An array of faces detected and added to the collection. For
     *            more information, see Searching Faces in a Collection in the
     *            Amazon Rekognition Developer Guide.
     *            </p>
     */
    public void setFaceRecords(java.util.Collection<FaceRecord> faceRecords) {
        if (faceRecords == null) {
            this.faceRecords = null;
            return;
        }
        this.faceRecords = new java.util.ArrayList<FaceRecord>(faceRecords);
    }

    /**
     * <p>
     * An array of faces detected and added to the collection. For more
     * information, see Searching Faces in a Collection in the Amazon
     * Rekognition Developer Guide.
     * </p>
     * <p>
     * Returns a reference to this object so that method calls can be chained
     * together.
     *
     * @param faceRecords <p>
     *            An array of faces detected and added to the collection. For
     *            more information, see Searching Faces in a Collection in the
     *            Amazon Rekognition Developer Guide.
     *            </p>
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public IndexFacesResult withFaceRecords(FaceRecord... faceRecords) {
        if (getFaceRecords() == null) {
            this.faceRecords = new java.util.ArrayList<FaceRecord>(faceRecords.length);
        }
        for (FaceRecord value : faceRecords) {
            this.faceRecords.add(value);
        }
        return this;
    }

    /**
     * <p>
     * An array of faces detected and added to the collection. For more
     * information, see Searching Faces in a Collection in the Amazon
     * Rekognition Developer Guide.
     * </p>
     * <p>
     * Returns a reference to this object so that method calls can be chained
     * together.
     *
     * @param faceRecords <p>
     *            An array of faces detected and added to the collection. For
     *            more information, see Searching Faces in a Collection in the
     *            Amazon Rekognition Developer Guide.
     *            </p>
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public IndexFacesResult withFaceRecords(java.util.Collection<FaceRecord> faceRecords) {
        setFaceRecords(faceRecords);
        return this;
    }
    /**
     * <p>
     * If your collection is associated with a face detection model that's
     * later than version 3.0, the value of <code>OrientationCorrection</code>
     * is always null and no orientation information is returned.
     * </p>
     * <p>
     * If your collection is associated with a face detection model that's
     * version 3.0 or earlier, the following applies:
     * </p>
     * <p>
     * If the input image is in .jpeg format, it might contain exchangeable
     * image file format (Exif) metadata that includes the image's orientation.
     * Amazon Rekognition uses this orientation information to perform image
     * correction - the bounding box coordinates are translated to represent
     * object locations after the orientation information in the Exif metadata
     * is used to correct the image orientation. Images in .png format don't
     * contain Exif metadata. The value of <code>OrientationCorrection</code>
     * is null.
     * </p>
     * <p>
     * If the image doesn't contain orientation information in its Exif
     * metadata, Amazon Rekognition returns an estimated orientation (ROTATE_0,
     * ROTATE_90, ROTATE_180, ROTATE_270). Amazon Rekognition doesn't perform
     * image correction in this case. The bounding box coordinates aren't
     * translated and represent the object locations before the image is
     * rotated.
     * </p>
     * <p>
     * Bounding box information is returned in the <code>FaceRecords</code>
     * array. You can get the version of the face detection model by calling
     * <a>DescribeCollection</a>.
     * </p>
     * <p>
     * <b>Constraints:</b><br/>
     * <b>Allowed Values: </b>ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270
     *
     * @return <p>
     *         If your collection is associated with a face detection model
     *         that's later than version 3.0, the value of
     *         <code>OrientationCorrection</code> is always null and no
     *         orientation information is returned.
     *         </p>
     *         <p>
     *         If your collection is associated with a face detection model
     *         that's version 3.0 or earlier, the following applies:
     *         </p>
     *         <p>
     *         If the input image is in .jpeg format, it might contain
     *         exchangeable image file format (Exif) metadata that includes the
     *         image's orientation. Amazon Rekognition uses this orientation
     *         information to perform image correction - the bounding box
     *         coordinates are translated to represent object locations after
     *         the orientation information in the Exif metadata is used to
     *         correct the image orientation. Images in .png format don't
     *         contain Exif metadata. The value of
     *         <code>OrientationCorrection</code> is null.
     *         </p>
     *         <p>
     *         If the image doesn't contain orientation information in its Exif
     *         metadata, Amazon Rekognition returns an estimated orientation
     *         (ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270). Amazon
     *         Rekognition doesn't perform image correction in this case. The
     *         bounding box coordinates aren't translated and represent the
     *         object locations before the image is rotated.
     *         </p>
     *         <p>
     *         Bounding box information is returned in the
     *         <code>FaceRecords</code> array. You can get the version of the
     *         face detection model by calling <a>DescribeCollection</a>.
     *         </p>
     * @see OrientationCorrection
     */
    public String getOrientationCorrection() {
        return orientationCorrection;
    }
    /**
     * <p>
     * If your collection is associated with a face detection model that's
     * later than version 3.0, the value of <code>OrientationCorrection</code>
     * is always null and no orientation information is returned.
     * </p>
     * <p>
     * If your collection is associated with a face detection model that's
     * version 3.0 or earlier, the following applies:
     * </p>
     * <p>
     * If the input image is in .jpeg format, it might contain exchangeable
     * image file format (Exif) metadata that includes the image's orientation.
     * Amazon Rekognition uses this orientation information to perform image
     * correction - the bounding box coordinates are translated to represent
     * object locations after the orientation information in the Exif metadata
     * is used to correct the image orientation. Images in .png format don't
     * contain Exif metadata. The value of <code>OrientationCorrection</code>
     * is null.
     * </p>
     * <p>
     * If the image doesn't contain orientation information in its Exif
     * metadata, Amazon Rekognition returns an estimated orientation (ROTATE_0,
     * ROTATE_90, ROTATE_180, ROTATE_270). Amazon Rekognition doesn't perform
     * image correction in this case. The bounding box coordinates aren't
     * translated and represent the object locations before the image is
     * rotated.
     * </p>
     * <p>
     * Bounding box information is returned in the <code>FaceRecords</code>
     * array. You can get the version of the face detection model by calling
     * <a>DescribeCollection</a>.
     * </p>
     * <p>
     * <b>Constraints:</b><br/>
     * <b>Allowed Values: </b>ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270
     *
     * @param orientationCorrection <p>
     *            If your collection is associated with a face detection model
     *            that's later than version 3.0, the value of
     *            <code>OrientationCorrection</code> is always null and no
     *            orientation information is returned.
     *            </p>
     *            <p>
     *            If your collection is associated with a face detection model
     *            that's version 3.0 or earlier, the following applies:
     *            </p>
     *            <p>
     *            If the input image is in .jpeg format, it might contain
     *            exchangeable image file format (Exif) metadata that includes
     *            the image's orientation. Amazon Rekognition uses this
     *            orientation information to perform image correction - the
     *            bounding box coordinates are translated to represent object
     *            locations after the orientation information in the Exif
     *            metadata is used to correct the image orientation. Images in
     *            .png format don't contain Exif metadata. The value of
     *            <code>OrientationCorrection</code> is null.
     *            </p>
     *            <p>
     *            If the image doesn't contain orientation information in its
     *            Exif metadata, Amazon Rekognition returns an estimated
     *            orientation (ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270).
     *            Amazon Rekognition doesn't perform image correction in this
     *            case. The bounding box coordinates aren't translated and
     *            represent the object locations before the image is rotated.
     *            </p>
     *            <p>
     *            Bounding box information is returned in the
     *            <code>FaceRecords</code> array. You can get the version of
     *            the face detection model by calling
     *            <a>DescribeCollection</a>.
     *            </p>
     * @see OrientationCorrection
     */
    public void setOrientationCorrection(String orientationCorrection) {
        this.orientationCorrection = orientationCorrection;
    }
    /**
     * <p>
     * If your collection is associated with a face detection model that's
     * later than version 3.0, the value of <code>OrientationCorrection</code>
     * is always null and no orientation information is returned.
     * </p>
     * <p>
     * If your collection is associated with a face detection model that's
     * version 3.0 or earlier, the following applies:
     * </p>
     * <p>
     * If the input image is in .jpeg format, it might contain exchangeable
     * image file format (Exif) metadata that includes the image's orientation.
     * Amazon Rekognition uses this orientation information to perform image
     * correction - the bounding box coordinates are translated to represent
     * object locations after the orientation information in the Exif metadata
     * is used to correct the image orientation. Images in .png format don't
     * contain Exif metadata. The value of <code>OrientationCorrection</code>
     * is null.
     * </p>
     * <p>
     * If the image doesn't contain orientation information in its Exif
     * metadata, Amazon Rekognition returns an estimated orientation (ROTATE_0,
     * ROTATE_90, ROTATE_180, ROTATE_270). Amazon Rekognition doesn't perform
     * image correction in this case. The bounding box coordinates aren't
     * translated and represent the object locations before the image is
     * rotated.
     * </p>
     * <p>
     * Bounding box information is returned in the <code>FaceRecords</code>
     * array. You can get the version of the face detection model by calling
     * <a>DescribeCollection</a>.
     * </p>
     * <p>
     * Returns a reference to this object so that method calls can be chained
     * together.
     * <p>
     * <b>Constraints:</b><br/>
     * <b>Allowed Values: </b>ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270
     *
     * @param orientationCorrection <p>
     *            If your collection is associated with a face detection model
     *            that's later than version 3.0, the value of
     *            <code>OrientationCorrection</code> is always null and no
     *            orientation information is returned.
     *            </p>
     *            <p>
     *            If your collection is associated with a face detection model
     *            that's version 3.0 or earlier, the following applies:
     *            </p>
     *            <p>
     *            If the input image is in .jpeg format, it might contain
     *            exchangeable image file format (Exif) metadata that includes
     *            the image's orientation. Amazon Rekognition uses this
     *            orientation information to perform image correction - the
     *            bounding box coordinates are translated to represent object
     *            locations after the orientation information in the Exif
     *            metadata is used to correct the image orientation. Images in
     *            .png format don't contain Exif metadata. The value of
     *            <code>OrientationCorrection</code> is null.
     *            </p>
     *            <p>
     *            If the image doesn't contain orientation information in its
     *            Exif metadata, Amazon Rekognition returns an estimated
     *            orientation (ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270).
     *            Amazon Rekognition doesn't perform image correction in this
     *            case. The bounding box coordinates aren't translated and
     *            represent the object locations before the image is rotated.
     *            </p>
     *            <p>
     *            Bounding box information is returned in the
     *            <code>FaceRecords</code> array. You can get the version of
     *            the face detection model by calling
     *            <a>DescribeCollection</a>.
     *            </p>
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     * @see OrientationCorrection
     */
    public IndexFacesResult withOrientationCorrection(String orientationCorrection) {
        this.orientationCorrection = orientationCorrection;
        return this;
    }
    /**
     * <p>
     * If your collection is associated with a face detection model that's
     * later than version 3.0, the value of <code>OrientationCorrection</code>
     * is always null and no orientation information is returned.
     * </p>
     * <p>
     * If your collection is associated with a face detection model that's
     * version 3.0 or earlier, the following applies:
     * </p>
     * <p>
     * If the input image is in .jpeg format, it might contain exchangeable
     * image file format (Exif) metadata that includes the image's orientation.
     * Amazon Rekognition uses this orientation information to perform image
     * correction - the bounding box coordinates are translated to represent
     * object locations after the orientation information in the Exif metadata
     * is used to correct the image orientation. Images in .png format don't
     * contain Exif metadata. The value of <code>OrientationCorrection</code>
     * is null.
     * </p>
     * <p>
     * If the image doesn't contain orientation information in its Exif
     * metadata, Amazon Rekognition returns an estimated orientation (ROTATE_0,
     * ROTATE_90, ROTATE_180, ROTATE_270). Amazon Rekognition doesn't perform
     * image correction in this case. The bounding box coordinates aren't
     * translated and represent the object locations before the image is
     * rotated.
     * </p>
     * <p>
     * Bounding box information is returned in the <code>FaceRecords</code>
     * array. You can get the version of the face detection model by calling
     * <a>DescribeCollection</a>.
     * </p>
     * <p>
     * <b>Constraints:</b><br/>
     * <b>Allowed Values: </b>ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270
     *
     * @param orientationCorrection <p>
     *            If your collection is associated with a face detection model
     *            that's later than version 3.0, the value of
     *            <code>OrientationCorrection</code> is always null and no
     *            orientation information is returned.
     *            </p>
     *            <p>
     *            If your collection is associated with a face detection model
     *            that's version 3.0 or earlier, the following applies:
     *            </p>
     *            <p>
     *            If the input image is in .jpeg format, it might contain
     *            exchangeable image file format (Exif) metadata that includes
     *            the image's orientation. Amazon Rekognition uses this
     *            orientation information to perform image correction - the
     *            bounding box coordinates are translated to represent object
     *            locations after the orientation information in the Exif
     *            metadata is used to correct the image orientation. Images in
     *            .png format don't contain Exif metadata. The value of
     *            <code>OrientationCorrection</code> is null.
     *            </p>
     *            <p>
     *            If the image doesn't contain orientation information in its
     *            Exif metadata, Amazon Rekognition returns an estimated
     *            orientation (ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270).
     *            Amazon Rekognition doesn't perform image correction in this
     *            case. The bounding box coordinates aren't translated and
     *            represent the object locations before the image is rotated.
     *            </p>
     *            <p>
     *            Bounding box information is returned in the
     *            <code>FaceRecords</code> array. You can get the version of
     *            the face detection model by calling
     *            <a>DescribeCollection</a>.
     *            </p>
     * @see OrientationCorrection
     */
    public void setOrientationCorrection(OrientationCorrection orientationCorrection) {
        this.orientationCorrection = orientationCorrection.toString();
    }
    /**
     * <p>
     * If your collection is associated with a face detection model that's
     * later than version 3.0, the value of <code>OrientationCorrection</code>
     * is always null and no orientation information is returned.
     * </p>
     * <p>
     * If your collection is associated with a face detection model that's
     * version 3.0 or earlier, the following applies:
     * </p>
     * <p>
     * If the input image is in .jpeg format, it might contain exchangeable
     * image file format (Exif) metadata that includes the image's orientation.
     * Amazon Rekognition uses this orientation information to perform image
     * correction - the bounding box coordinates are translated to represent
     * object locations after the orientation information in the Exif metadata
     * is used to correct the image orientation. Images in .png format don't
     * contain Exif metadata. The value of <code>OrientationCorrection</code>
     * is null.
     * </p>
     * <p>
     * If the image doesn't contain orientation information in its Exif
     * metadata, Amazon Rekognition returns an estimated orientation (ROTATE_0,
     * ROTATE_90, ROTATE_180, ROTATE_270). Amazon Rekognition doesn't perform
     * image correction in this case. The bounding box coordinates aren't
     * translated and represent the object locations before the image is
     * rotated.
     * </p>
     * <p>
     * Bounding box information is returned in the <code>FaceRecords</code>
     * array. You can get the version of the face detection model by calling
     * <a>DescribeCollection</a>.
     * </p>
     * <p>
     * Returns a reference to this object so that method calls can be chained
     * together.
     * <p>
     * <b>Constraints:</b><br/>
     * <b>Allowed Values: </b>ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270
     *
     * @param orientationCorrection <p>
     *            If your collection is associated with a face detection model
     *            that's later than version 3.0, the value of
     *            <code>OrientationCorrection</code> is always null and no
     *            orientation information is returned.
     *            </p>
     *            <p>
     *            If your collection is associated with a face detection model
     *            that's version 3.0 or earlier, the following applies:
     *            </p>
     *            <p>
     *            If the input image is in .jpeg format, it might contain
     *            exchangeable image file format (Exif) metadata that includes
     *            the image's orientation. Amazon Rekognition uses this
     *            orientation information to perform image correction - the
     *            bounding box coordinates are translated to represent object
     *            locations after the orientation information in the Exif
     *            metadata is used to correct the image orientation. Images in
     *            .png format don't contain Exif metadata. The value of
     *            <code>OrientationCorrection</code> is null.
     *            </p>
     *            <p>
     *            If the image doesn't contain orientation information in its
     *            Exif metadata, Amazon Rekognition returns an estimated
     *            orientation (ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270).
     *            Amazon Rekognition doesn't perform image correction in this
     *            case. The bounding box coordinates aren't translated and
     *            represent the object locations before the image is rotated.
     *            </p>
     *            <p>
     *            Bounding box information is returned in the
     *            <code>FaceRecords</code> array. You can get the version of
     *            the face detection model by calling
     *            <a>DescribeCollection</a>.
     *            </p>
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     * @see OrientationCorrection
     */
    public IndexFacesResult withOrientationCorrection(OrientationCorrection orientationCorrection) {
        this.orientationCorrection = orientationCorrection.toString();
        return this;
    }
    /**
     * <p>
     * The version number of the face detection model that's associated with
     * the input collection (<code>CollectionId</code>).
     * </p>
     *
     * @return <p>
     *         The version number of the face detection model that's associated
     *         with the input collection (<code>CollectionId</code>).
     *         </p>
     */
    public String getFaceModelVersion() {
        return faceModelVersion;
    }
    /**
     * <p>
     * The version number of the face detection model that's associated with
     * the input collection (<code>CollectionId</code>).
     * </p>
     *
     * @param faceModelVersion <p>
     *            The version number of the face detection model that's
     *            associated with the input collection
     *            (<code>CollectionId</code>).
     *            </p>
     */
    public void setFaceModelVersion(String faceModelVersion) {
        this.faceModelVersion = faceModelVersion;
    }
    /**
     * <p>
     * The version number of the face detection model that's associated with
     * the input collection (<code>CollectionId</code>).
     * </p>
     * <p>
     * Returns a reference to this object so that method calls can be chained
     * together.
     *
     * @param faceModelVersion <p>
     *            The version number of the face detection model that's
     *            associated with the input collection
     *            (<code>CollectionId</code>).
     *            </p>
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public IndexFacesResult withFaceModelVersion(String faceModelVersion) {
        this.faceModelVersion = faceModelVersion;
        return this;
    }
    /**
     * <p>
     * An array of faces that were detected in the image but weren't indexed.
     * They weren't indexed because the quality filter identified them as low
     * quality, or the <code>MaxFaces</code> request parameter filtered them
     * out. To use the quality filter, you specify the
     * <code>QualityFilter</code> request parameter.
     * </p>
     *
     * @return <p>
     *         An array of faces that were detected in the image but weren't
     *         indexed. They weren't indexed because the quality filter
     *         identified them as low quality, or the <code>MaxFaces</code>
     *         request parameter filtered them out. To use the quality filter,
     *         you specify the <code>QualityFilter</code> request parameter.
     *         </p>
     */
    public java.util.List<UnindexedFace> getUnindexedFaces() {
        return unindexedFaces;
    }
    /**
     * <p>
     * An array of faces that were detected in the image but weren't indexed.
     * They weren't indexed because the quality filter identified them as low
     * quality, or the <code>MaxFaces</code> request parameter filtered them
     * out. To use the quality filter, you specify the
     * <code>QualityFilter</code> request parameter.
     * </p>
     *
     * @param unindexedFaces <p>
     *            An array of faces that were detected in the image but weren't
     *            indexed. They weren't indexed because the quality filter
     *            identified them as low quality, or the <code>MaxFaces</code>
     *            request parameter filtered them out. To use the quality
     *            filter, you specify the <code>QualityFilter</code> request
     *            parameter.
     *            </p>
     */
    public void setUnindexedFaces(java.util.Collection<UnindexedFace> unindexedFaces) {
        if (unindexedFaces == null) {
            this.unindexedFaces = null;
            return;
        }
        this.unindexedFaces = new java.util.ArrayList<UnindexedFace>(unindexedFaces);
    }
    /**
     * <p>
     * An array of faces that were detected in the image but weren't indexed.
     * They weren't indexed because the quality filter identified them as low
     * quality, or the <code>MaxFaces</code> request parameter filtered them
     * out. To use the quality filter, you specify the
     * <code>QualityFilter</code> request parameter.
     * </p>
     * <p>
     * Returns a reference to this object so that method calls can be chained
     * together.
     *
     * @param unindexedFaces <p>
     *            An array of faces that were detected in the image but weren't
     *            indexed. They weren't indexed because the quality filter
     *            identified them as low quality, or the <code>MaxFaces</code>
     *            request parameter filtered them out. To use the quality
     *            filter, you specify the <code>QualityFilter</code> request
     *            parameter.
     *            </p>
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public IndexFacesResult withUnindexedFaces(UnindexedFace... unindexedFaces) {
        if (getUnindexedFaces() == null) {
            this.unindexedFaces = new java.util.ArrayList<UnindexedFace>(unindexedFaces.length);
        }
        for (UnindexedFace value : unindexedFaces) {
            this.unindexedFaces.add(value);
        }
        return this;
    }
    /**
     * <p>
     * An array of faces that were detected in the image but weren't indexed.
     * They weren't indexed because the quality filter identified them as low
     * quality, or the <code>MaxFaces</code> request parameter filtered them
     * out. To use the quality filter, you specify the
     * <code>QualityFilter</code> request parameter.
     * </p>
     * <p>
     * Returns a reference to this object so that method calls can be chained
     * together.
     *
     * @param unindexedFaces <p>
     *            An array of faces that were detected in the image but weren't
     *            indexed. They weren't indexed because the quality filter
     *            identified them as low quality, or the <code>MaxFaces</code>
     *            request parameter filtered them out. To use the quality
     *            filter, you specify the <code>QualityFilter</code> request
     *            parameter.
     *            </p>
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public IndexFacesResult withUnindexedFaces(java.util.Collection<UnindexedFace> unindexedFaces) {
        setUnindexedFaces(unindexedFaces);
        return this;
    }
}
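A minimal, hypothetical sketch (not part of this class or the AWS SDK) of how a caller might interpret the `OrientationCorrection` value documented above. The string values come from the "Allowed Values" constraint (ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270), and a null value corresponds to a collection whose face detection model is later than version 3.0, where no orientation information is returned. The class and method names here are illustrative only.

```java
// Hypothetical helper, not part of the SDK: maps the OrientationCorrection
// string returned by IndexFacesResult to a clockwise rotation in degrees.
public final class OrientationCorrectionDegrees {
    private OrientationCorrectionDegrees() {
    }

    // Returns 0 for null, since face detection models later than version 3.0
    // return no orientation information and the image needs no correction.
    public static int toDegrees(String orientationCorrection) {
        if (orientationCorrection == null) {
            return 0;
        }
        switch (orientationCorrection) {
            case "ROTATE_0":
                return 0;
            case "ROTATE_90":
                return 90;
            case "ROTATE_180":
                return 180;
            case "ROTATE_270":
                return 270;
            default:
                throw new IllegalArgumentException(
                        "Unknown OrientationCorrection: " + orientationCorrection);
        }
    }
}
```

A caller would pass `result.getOrientationCorrection()` to `toDegrees` before deciding whether to rotate the image or leave the `FaceRecords` bounding boxes untouched.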