/*
* Copyright 2018-2023 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with
* the License. A copy of the License is located at
*
* http://aws.amazon.com/apache2.0
*
* or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions
* and limitations under the License.
*/
package com.amazonaws.services.rekognition.model;
import java.io.Serializable;
import javax.annotation.Generated;
@Generated("com.amazonaws:aws-java-sdk-code-generator")
public class IndexFacesResult extends com.amazonaws.AmazonWebServiceResult<com.amazonaws.ResponseMetadata> implements Serializable, Cloneable {

    /**
     * <p>
     * An array of faces detected and added to the collection. For more information, see Searching Faces in a
     * Collection in the Amazon Rekognition Developer Guide.
     * </p>
     */
    private java.util.List<FaceRecord> faceRecords;
    /**
     * <p>
     * If your collection is associated with a face detection model that's later than version 3.0, the value of
     * <code>OrientationCorrection</code> is always null and no orientation information is returned.
     * </p>
     * <p>
     * If your collection is associated with a face detection model that's version 3.0 or earlier, the following
     * applies:
     * </p>
     * <ul>
     * <li>
     * <p>
     * If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that
     * includes the image's orientation. Amazon Rekognition uses this orientation information to perform image
     * correction - the bounding box coordinates are translated to represent object locations after the orientation
     * information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain
     * Exif metadata. The value of <code>OrientationCorrection</code> is null.
     * </p>
     * </li>
     * <li>
     * <p>
     * If the image doesn't contain orientation information in its Exif metadata, Amazon Rekognition returns an
     * estimated orientation (ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270). Amazon Rekognition doesn't perform image
     * correction in this case. The bounding box coordinates aren't translated and represent the object locations
     * before the image is rotated.
     * </p>
     * </li>
     * </ul>
     * <p>
     * Bounding box information is returned in the <code>FaceRecords</code> array. You can get the version of the face
     * detection model by calling <a>DescribeCollection</a>.
     * </p>
     */
    private String orientationCorrection;
    /**
     * <p>
     * The version number of the face detection model that's associated with the input collection
     * (<code>CollectionId</code>).
     * </p>
     */
    private String faceModelVersion;
    /**
     * <p>
     * An array of faces that were detected in the image but weren't indexed. They weren't indexed because the quality
     * filter identified them as low quality, or the <code>MaxFaces</code> request parameter filtered them out. To use
     * the quality filter, you specify the <code>QualityFilter</code> request parameter.
     * </p>
     */
    private java.util.List<UnindexedFace> unindexedFaces;

    /**
     * <p>
     * An array of faces detected and added to the collection. For more information, see Searching Faces in a
     * Collection in the Amazon Rekognition Developer Guide.
     * </p>
     * 
     * @return An array of faces detected and added to the collection.
     */
    public java.util.List<FaceRecord> getFaceRecords() {
        return faceRecords;
    }

    /**
     * <p>
     * An array of faces detected and added to the collection. For more information, see Searching Faces in a
     * Collection in the Amazon Rekognition Developer Guide.
     * </p>
     * 
     * @param faceRecords
     *        An array of faces detected and added to the collection.
     */
    public void setFaceRecords(java.util.Collection<FaceRecord> faceRecords) {
        if (faceRecords == null) {
            this.faceRecords = null;
            return;
        }
        this.faceRecords = new java.util.ArrayList<FaceRecord>(faceRecords);
    }

    /**
     * <p>
     * An array of faces detected and added to the collection. For more information, see Searching Faces in a
     * Collection in the Amazon Rekognition Developer Guide.
     * </p>
     * <p>
     * <b>NOTE:</b> This method appends the values to the existing list (if any). Use
     * {@link #setFaceRecords(java.util.Collection)} or {@link #withFaceRecords(java.util.Collection)} if you want to
     * override the existing values.
     * </p>
     * 
     * @param faceRecords
     *        An array of faces detected and added to the collection.
     * @return Returns a reference to this object so that method calls can be chained together.
     */
    public IndexFacesResult withFaceRecords(FaceRecord... faceRecords) {
        if (this.faceRecords == null) {
            setFaceRecords(new java.util.ArrayList<FaceRecord>(faceRecords.length));
        }
        for (FaceRecord ele : faceRecords) {
            this.faceRecords.add(ele);
        }
        return this;
    }

    /**
     * <p>
     * An array of faces detected and added to the collection. For more information, see Searching Faces in a
     * Collection in the Amazon Rekognition Developer Guide.
     * </p>
     * 
     * @param faceRecords
     *        An array of faces detected and added to the collection.
     * @return Returns a reference to this object so that method calls can be chained together.
     */
    public IndexFacesResult withFaceRecords(java.util.Collection<FaceRecord> faceRecords) {
        setFaceRecords(faceRecords);
        return this;
    }

    /**
     * <p>
     * If your collection is associated with a face detection model that's later than version 3.0, the value of
     * <code>OrientationCorrection</code> is always null and no orientation information is returned. For version 3.0
     * and earlier models, bounding box information is returned in the <code>FaceRecords</code> array; you can get the
     * version of the face detection model by calling <a>DescribeCollection</a>.
     * </p>
     * 
     * @param orientationCorrection
     *        The orientation correction value, if any, that applies to the input image.
     * @see OrientationCorrection
     */
public void setOrientationCorrection(String orientationCorrection) {
this.orientationCorrection = orientationCorrection;
}
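The Javadoc above distinguishes cases where Rekognition translates bounding box coordinates itself from cases where it only returns an estimated orientation and leaves the coordinates untranslated. The following standalone sketch (not part of this generated class; the class name and rotation conventions are assumptions for illustration) shows how a client might translate a normalized bounding box for each `OrientationCorrection` value. The ROTATE_90/ROTATE_270 mappings assume "rotate the image clockwise by that amount to make it upright"; verify the convention against your own image pipeline before relying on them.

```java
import java.util.Arrays;

// Hypothetical client-side helper: translate a normalized bounding box
// (left, top, width, height as fractions of image dimensions, as in the
// Rekognition API) when the service returns an estimated orientation and
// does NOT translate the coordinates itself (face model 3.0 or earlier,
// image without Exif orientation metadata).
public class BoundingBoxRotation {

    /** Returns {left, top, width, height} for the upright image. */
    static double[] rotate(String orientation, double left, double top, double width, double height) {
        switch (orientation) {
            case "ROTATE_0":
                return new double[] { left, top, width, height };
            case "ROTATE_90": // assumed convention: rotate image 90 degrees clockwise
                return new double[] { 1 - top - height, left, height, width };
            case "ROTATE_180":
                return new double[] { 1 - left - width, 1 - top - height, width, height };
            case "ROTATE_270": // assumed convention: rotate image 270 degrees clockwise
                return new double[] { top, 1 - left - width, height, width };
            default:
                throw new IllegalArgumentException("Unknown orientation: " + orientation);
        }
    }

    public static void main(String[] args) {
        // A box in the lower-right quadrant of an image that needs a 180-degree turn.
        double[] box = rotate("ROTATE_180", 0.1, 0.2, 0.3, 0.4);
        System.out.println(Arrays.toString(box));
    }
}
```

ROTATE_180 is convention-independent: the box at (0.1, 0.2) with size 0.3 x 0.4 maps to (0.6, 0.4) with the same size, which is a useful sanity check for any implementation of this translation.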
    /**
     * <p>
     * If your collection is associated with a face detection model that's later than version 3.0, the value of
     * <code>OrientationCorrection</code> is always null and no orientation information is returned. For version 3.0
     * and earlier models, bounding box information is returned in the <code>FaceRecords</code> array; you can get the
     * version of the face detection model by calling <a>DescribeCollection</a>.
     * </p>
     * 
     * @return The orientation correction value, if any, that applies to the input image.
     * @see OrientationCorrection
     */
public String getOrientationCorrection() {
return this.orientationCorrection;
}
    /**
     * <p>
     * If your collection is associated with a face detection model that's later than version 3.0, the value of
     * <code>OrientationCorrection</code> is always null and no orientation information is returned. For version 3.0
     * and earlier models, bounding box information is returned in the <code>FaceRecords</code> array; you can get the
     * version of the face detection model by calling <a>DescribeCollection</a>.
     * </p>
     * 
     * @param orientationCorrection
     *        The orientation correction value, if any, that applies to the input image.
     * @return Returns a reference to this object so that method calls can be chained together.
     * @see OrientationCorrection
     */
public IndexFacesResult withOrientationCorrection(String orientationCorrection) {
setOrientationCorrection(orientationCorrection);
return this;
}
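The `with*` methods above delegate to the corresponding setter and return `this`, which is what makes result objects chainable. This standalone mock (a hypothetical class, not the real SDK type, so it can run without the AWS SDK on the classpath) sketches that pattern:

```java
// Minimal sketch of the fluent "with*" accessor pattern used by generated
// AWS SDK model classes: each with* method calls the setter and returns
// this, so configuration calls chain left to right.
public class FluentResult {
    private String orientationCorrection;
    private String faceModelVersion;

    public void setOrientationCorrection(String v) { this.orientationCorrection = v; }
    public String getOrientationCorrection() { return orientationCorrection; }
    public FluentResult withOrientationCorrection(String v) { setOrientationCorrection(v); return this; }

    public void setFaceModelVersion(String v) { this.faceModelVersion = v; }
    public String getFaceModelVersion() { return faceModelVersion; }
    public FluentResult withFaceModelVersion(String v) { setFaceModelVersion(v); return this; }

    public static void main(String[] args) {
        // Both properties set in one chained expression.
        FluentResult r = new FluentResult()
                .withOrientationCorrection("ROTATE_0")
                .withFaceModelVersion("4.0");
        System.out.println(r.getOrientationCorrection() + " " + r.getFaceModelVersion());
    }
}
```

The same chaining works on the real `IndexFacesResult`, since every `with*` overload here returns the result object itself.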
    /**
     * <p>
     * If your collection is associated with a face detection model that's later than version 3.0, the value of
     * <code>OrientationCorrection</code> is always null and no orientation information is returned. For version 3.0
     * and earlier models, bounding box information is returned in the <code>FaceRecords</code> array; you can get the
     * version of the face detection model by calling <a>DescribeCollection</a>.
     * </p>
     * 
     * @param orientationCorrection
     *        The orientation correction value, if any, that applies to the input image.
     * @see OrientationCorrection
     */
public void setOrientationCorrection(OrientationCorrection orientationCorrection) {
withOrientationCorrection(orientationCorrection);
}
    /**
     * <p>
     * If your collection is associated with a face detection model that's later than version 3.0, the value of
     * <code>OrientationCorrection</code> is always null and no orientation information is returned. For version 3.0
     * and earlier models, bounding box information is returned in the <code>FaceRecords</code> array; you can get the
     * version of the face detection model by calling <a>DescribeCollection</a>.
     * </p>
     * 
     * @param orientationCorrection
     *        The orientation correction value, if any, that applies to the input image.
     * @return Returns a reference to this object so that method calls can be chained together.
     * @see OrientationCorrection
     */
public IndexFacesResult withOrientationCorrection(OrientationCorrection orientationCorrection) {
this.orientationCorrection = orientationCorrection.toString();
return this;
}
    /**
     * <p>
     * The version number of the face detection model that's associated with the input collection
     * (<code>CollectionId</code>).
     * </p>
     * 
     * @param faceModelVersion
     *        The version number of the face detection model that's associated with the input collection
     *        (<code>CollectionId</code>).
     */
public void setFaceModelVersion(String faceModelVersion) {
this.faceModelVersion = faceModelVersion;
}
    /**
     * <p>
     * The version number of the face detection model that's associated with the input collection
     * (<code>CollectionId</code>).
     * </p>
     * 
     * @return The version number of the face detection model that's associated with the input collection
     *         (<code>CollectionId</code>).
     */
public String getFaceModelVersion() {
return this.faceModelVersion;
}
    /**
     * <p>
     * The version number of the face detection model that's associated with the input collection
     * (<code>CollectionId</code>).
     * </p>
     * 
     * @param faceModelVersion
     *        The version number of the face detection model that's associated with the input collection
     *        (<code>CollectionId</code>).
     * @return Returns a reference to this object so that method calls can be chained together.
     */
public IndexFacesResult withFaceModelVersion(String faceModelVersion) {
setFaceModelVersion(faceModelVersion);
return this;
}
    /**
     * <p>
     * An array of faces that were detected in the image but weren't indexed. They weren't indexed because the quality
     * filter identified them as low quality, or the <code>MaxFaces</code> request parameter filtered them out. To use
     * the quality filter, you specify the <code>QualityFilter</code> request parameter.
     * </p>
     * 
     * @return An array of faces that were detected in the image but weren't indexed because the quality filter
     *         identified them as low quality, or the <code>MaxFaces</code> request parameter filtered them out.
     */
    public java.util.List<UnindexedFace> getUnindexedFaces() {
        return unindexedFaces;
    }
    /**
     * <p>
     * An array of faces that were detected in the image but weren't indexed. They weren't indexed because the quality
     * filter identified them as low quality, or the <code>MaxFaces</code> request parameter filtered them out. To use
     * the quality filter, you specify the <code>QualityFilter</code> request parameter.
     * </p>
     * 
     * @param unindexedFaces
     *        An array of faces that were detected in the image but weren't indexed because the quality filter
     *        identified them as low quality, or the <code>MaxFaces</code> request parameter filtered them out.
     */
    public void setUnindexedFaces(java.util.Collection<UnindexedFace> unindexedFaces) {
        if (unindexedFaces == null) {
            this.unindexedFaces = null;
            return;
        }
        this.unindexedFaces = new java.util.ArrayList<UnindexedFace>(unindexedFaces);
    }
    /**
     * <p>
     * An array of faces that were detected in the image but weren't indexed. They weren't indexed because the quality
     * filter identified them as low quality, or the <code>MaxFaces</code> request parameter filtered them out. To use
     * the quality filter, you specify the <code>QualityFilter</code> request parameter.
     * </p>
     * <p>
     * <b>NOTE:</b> This method appends the values to the existing list (if any). Use
     * {@link #setUnindexedFaces(java.util.Collection)} or {@link #withUnindexedFaces(java.util.Collection)} if you
     * want to override the existing values.
     * </p>
     * 
     * @param unindexedFaces
     *        An array of faces that were detected in the image but weren't indexed because the quality filter
     *        identified them as low quality, or the <code>MaxFaces</code> request parameter filtered them out.
     * @return Returns a reference to this object so that method calls can be chained together.
     */
    public IndexFacesResult withUnindexedFaces(UnindexedFace... unindexedFaces) {
        if (this.unindexedFaces == null) {
            setUnindexedFaces(new java.util.ArrayList<UnindexedFace>(unindexedFaces.length));
        }
        for (UnindexedFace ele : unindexedFaces) {
            this.unindexedFaces.add(ele);
        }
        return this;
    }
    /**
     * <p>
     * An array of faces that were detected in the image but weren't indexed. They weren't indexed because the quality
     * filter identified them as low quality, or the <code>MaxFaces</code> request parameter filtered them out. To use
     * the quality filter, you specify the <code>QualityFilter</code> request parameter.
     * </p>
     * 
     * @param unindexedFaces
     *        An array of faces that were detected in the image but weren't indexed because the quality filter
     *        identified them as low quality, or the <code>MaxFaces</code> request parameter filtered them out.
     * @return Returns a reference to this object so that method calls can be chained together.
     */
    public IndexFacesResult withUnindexedFaces(java.util.Collection<UnindexedFace> unindexedFaces) {
        setUnindexedFaces(unindexedFaces);
        return this;
    }
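The NOTE in the Javadoc above is the one behavioral subtlety of these list properties: the varargs `with*` overload appends to any existing list, while the `Collection` setter replaces it. This standalone mock (a hypothetical class using `String` elements instead of `UnindexedFace`, so it runs without the SDK) demonstrates the difference:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

// Sketch of the append-vs-override contract on generated SDK list
// properties: withX(varargs) APPENDS to the existing list (if any),
// while setX(Collection) REPLACES it with a defensive copy.
public class ListAppendDemo {
    private List<String> unindexedFaces;

    public List<String> getUnindexedFaces() { return unindexedFaces; }

    public void setUnindexedFaces(Collection<String> faces) {
        this.unindexedFaces = (faces == null) ? null : new ArrayList<>(faces);
    }

    public ListAppendDemo withUnindexedFaces(String... faces) {
        if (this.unindexedFaces == null) {
            this.unindexedFaces = new ArrayList<>(faces.length);
        }
        for (String f : faces) {
            this.unindexedFaces.add(f); // appends; never clears prior values
        }
        return this;
    }

    public static void main(String[] args) {
        ListAppendDemo d = new ListAppendDemo();
        d.withUnindexedFaces("a").withUnindexedFaces("b"); // list is now [a, b]
        d.setUnindexedFaces(Arrays.asList("c"));           // list replaced: [c]
        System.out.println(d.getUnindexedFaces());
    }
}
```

Because the varargs overload never clears prior values, call `setUnindexedFaces` (or the `Collection` overload of `withUnindexedFaces`) when you intend to replace the list rather than grow it.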