'2.0', 'service' => '
Amazon Comprehend is an Amazon Web Services service for gaining insight into the content of documents. Use these actions to determine the topics contained in your documents, the predominant sentiment expressed in them, the predominant language used, and more.
', 'operations' => [ 'BatchDetectDominantLanguage' => 'Determines the dominant language of the input text for a batch of documents. For a list of languages that Amazon Comprehend can detect, see Amazon Comprehend Supported Languages.
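// Hedged example (editor's addition, not part of the generated model): a minimal
// sketch of calling BatchDetectDominantLanguage with the AWS SDK for PHP. The
// region and the sample strings are illustrative assumptions.
//
//     use Aws\Comprehend\ComprehendClient;
//
//     $client = new ComprehendClient(['region' => 'us-east-1', 'version' => 'latest']);
//     $result = $client->batchDetectDominantLanguage([
//         'TextList' => ['Hello, how are you today?', 'Bonjour, comment allez-vous ?'],
//     ]);
//     foreach ($result['ResultList'] as $item) {
//         print_r($item['Languages']); // each entry has a LanguageCode and a Score
//     }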
', 'BatchDetectEntities' => 'Inspects the text of a batch of documents for named entities and returns information about them. For more information about named entities, see Entities in the Comprehend Developer Guide.
', 'BatchDetectKeyPhrases' => 'Detects the key noun phrases found in a batch of documents.
', 'BatchDetectSentiment' => 'Inspects a batch of documents and returns an inference of the prevailing sentiment (POSITIVE, NEUTRAL, MIXED, or NEGATIVE) in each one.
', 'BatchDetectSyntax' => 'Inspects the text of a batch of documents for the syntax and part of speech of the words in the document and returns information about them. For more information, see Syntax in the Comprehend Developer Guide.
', 'BatchDetectTargetedSentiment' => 'Inspects a batch of documents and returns a sentiment analysis for each entity identified in the documents.
For more information about targeted sentiment, see Targeted sentiment.
', 'ClassifyDocument' => 'Creates a new document classification request to analyze a single document in real-time, using a previously created and trained custom model and an endpoint.
You can input plain text or you can upload a single-page input document (text, PDF, Word, or image).
If the system detects errors while processing a page in the input document, the API response includes an entry in Errors that describes the errors.
If the system detects a document-level error in your input document, the API returns an InvalidRequestException error response. For details about this exception, see Errors in semi-structured documents in the Comprehend Developer Guide.
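// Hedged example: a minimal sketch of a real-time ClassifyDocument call against a
// custom-classifier endpoint, assuming $client was created as in the first sketch
// above. The endpoint ARN is a placeholder assumption.
//
//     $result = $client->classifyDocument([
//         'Text' => 'Dear support team, my order arrived damaged.',
//         'EndpointArn' => 'arn:aws:comprehend:us-west-2:111122223333:document-classifier-endpoint/example',
//     ]);
//     print_r($result['Classes']); // each class has a Name and a Score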
', 'ContainsPiiEntities' => 'Analyzes input text for the presence of personally identifiable information (PII) and returns the labels of identified PII entity types such as name, address, bank account number, or phone number.
', 'CreateDataset' => 'Creates a dataset to upload training or test data for a model associated with a flywheel. For more information about datasets, see Flywheel overview in the Amazon Comprehend Developer Guide.
', 'CreateDocumentClassifier' => 'Creates a new document classifier that you can use to categorize documents. To create a classifier, you provide a set of training documents that are labeled with the categories that you want to use. For more information, see Training classifier models in the Comprehend Developer Guide.
', 'CreateEndpoint' => 'Creates a model-specific endpoint for synchronous inference for a previously trained custom model. For information about endpoints, see Managing endpoints.
', 'CreateEntityRecognizer' => 'Creates an entity recognizer using submitted files. After your CreateEntityRecognizer request is submitted, you can check job status using the DescribeEntityRecognizer API.
', 'CreateFlywheel' => 'A flywheel is an Amazon Web Services resource that orchestrates the ongoing training of a model for custom classification or custom entity recognition. You can create a flywheel to start with an existing trained model, or Comprehend can create and train a new model.
When you create the flywheel, Comprehend creates a data lake in your account. The data lake holds the training data and test data for all versions of the model.
To use a flywheel with an existing trained model, you specify the active model version. Comprehend copies the model\'s training data and test data into the flywheel\'s data lake.
To use the flywheel with a new model, you need to provide a dataset for training data (and optional test data) when you create the flywheel.
For more information about flywheels, see Flywheel overview in the Amazon Comprehend Developer Guide.
', 'DeleteDocumentClassifier' => 'Deletes a previously created document classifier.
Only those classifiers that are in terminated states (IN_ERROR, TRAINED) will be deleted. If an active inference job is using the model, a ResourceInUseException will be returned.
This is an asynchronous action that puts the classifier into a DELETING state, and it is then removed by a background job. Once removed, the classifier disappears from your account and is no longer available for use.
', 'DeleteEndpoint' => 'Deletes a model-specific endpoint for a previously-trained custom model. All endpoints must be deleted in order for the model to be deleted. For information about endpoints, see Managing endpoints.
', 'DeleteEntityRecognizer' => 'Deletes an entity recognizer.
Only those recognizers that are in terminated states (IN_ERROR, TRAINED) will be deleted. If an active inference job is using the model, a ResourceInUseException will be returned.
This is an asynchronous action that puts the recognizer into a DELETING state, and it is then removed by a background job. Once removed, the recognizer disappears from your account and is no longer available for use.
', 'DeleteFlywheel' => 'Deletes a flywheel. When you delete the flywheel, Amazon Comprehend does not delete the data lake or the model associated with the flywheel.
For more information about flywheels, see Flywheel overview in the Amazon Comprehend Developer Guide.
', 'DeleteResourcePolicy' => 'Deletes a resource-based policy that is attached to a custom model.
', 'DescribeDataset' => 'Returns information about the dataset that you specify. For more information about datasets, see Flywheel overview in the Amazon Comprehend Developer Guide.
', 'DescribeDocumentClassificationJob' => 'Gets the properties associated with a document classification job. Use this operation to get the status of a classification job.
', 'DescribeDocumentClassifier' => 'Gets the properties associated with a document classifier.
', 'DescribeDominantLanguageDetectionJob' => 'Gets the properties associated with a dominant language detection job. Use this operation to get the status of a detection job.
', 'DescribeEndpoint' => 'Gets the properties associated with a specific endpoint. Use this operation to get the status of an endpoint. For information about endpoints, see Managing endpoints.
', 'DescribeEntitiesDetectionJob' => 'Gets the properties associated with an entities detection job. Use this operation to get the status of a detection job.
', 'DescribeEntityRecognizer' => 'Provides details about an entity recognizer including status, S3 buckets containing training data, recognizer metadata, metrics, and so on.
', 'DescribeEventsDetectionJob' => 'Gets the status and details of an events detection job.
', 'DescribeFlywheel' => 'Provides configuration information about the flywheel. For more information about flywheels, see Flywheel overview in the Amazon Comprehend Developer Guide.
', 'DescribeFlywheelIteration' => 'Retrieve the configuration properties of a flywheel iteration. For more information about flywheels, see Flywheel overview in the Amazon Comprehend Developer Guide.
', 'DescribeKeyPhrasesDetectionJob' => 'Gets the properties associated with a key phrases detection job. Use this operation to get the status of a detection job.
', 'DescribePiiEntitiesDetectionJob' => 'Gets the properties associated with a PII entities detection job. For example, you can use this operation to get the job status.
', 'DescribeResourcePolicy' => 'Gets the details of a resource-based policy that is attached to a custom model, including the JSON body of the policy.
', 'DescribeSentimentDetectionJob' => 'Gets the properties associated with a sentiment detection job. Use this operation to get the status of a detection job.
', 'DescribeTargetedSentimentDetectionJob' => 'Gets the properties associated with a targeted sentiment detection job. Use this operation to get the status of the job.
', 'DescribeTopicsDetectionJob' => 'Gets the properties associated with a topic detection job. Use this operation to get the status of a detection job.
', 'DetectDominantLanguage' => 'Determines the dominant language of the input text. For a list of languages that Amazon Comprehend can detect, see Amazon Comprehend Supported Languages.
', 'DetectEntities' => 'Detects named entities in input text when you use the pre-trained model. Detects custom entities if you have a custom entity recognition model.
When detecting named entities using the pre-trained model, use plain text as the input. For more information about named entities, see Entities in the Comprehend Developer Guide.
When you use a custom entity recognition model, you can input plain text or you can upload a single-page input document (text, PDF, Word, or image).
If the system detects errors while processing a page in the input document, the API response includes an entry in Errors for each error.
If the system detects a document-level error in your input document, the API returns an InvalidRequestException error response. For details about this exception, see Errors in semi-structured documents in the Comprehend Developer Guide.
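// Hedged example: a minimal sketch of DetectEntities with the pre-trained model
// (plain text plus LanguageCode), assuming $client was created as in the first
// sketch. The sample sentence is an illustrative assumption.
//
//     $result = $client->detectEntities([
//         'Text' => 'Jane moved to Seattle in 2019.',
//         'LanguageCode' => 'en',
//     ]);
//     foreach ($result['Entities'] as $entity) {
//         echo $entity['Type'] . ': ' . $entity['Text'] . PHP_EOL; // e.g. PERSON: Jane
//     }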
', 'DetectKeyPhrases' => 'Detects the key noun phrases found in the text.
', 'DetectPiiEntities' => 'Inspects the input text for entities that contain personally identifiable information (PII) and returns information about them.
', 'DetectSentiment' => 'Inspects text and returns an inference of the prevailing sentiment (POSITIVE, NEUTRAL, MIXED, or NEGATIVE).
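// Hedged example: a minimal DetectSentiment sketch, assuming $client was created
// as in the first sketch; the response carries the overall Sentiment plus a
// SentimentScore for each of the four labels.
//
//     $result = $client->detectSentiment([
//         'Text' => 'I love this product!',
//         'LanguageCode' => 'en',
//     ]);
//     echo $result['Sentiment']; // POSITIVE, NEGATIVE, NEUTRAL, or MIXED
//     print_r($result['SentimentScore']);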
', 'DetectSyntax' => 'Inspects text for syntax and the part of speech of words in the document. For more information, see Syntax in the Comprehend Developer Guide.
', 'DetectTargetedSentiment' => 'Inspects the input text and returns a sentiment analysis for each entity identified in the text.
For more information about targeted sentiment, see Targeted sentiment.
', 'ImportModel' => 'Creates a new custom model that replicates a source custom model that you import. The source model can be in your Amazon Web Services account or another one.
If the source model is in another Amazon Web Services account, then it must have a resource-based policy that authorizes you to import it.
The source model must be in the same Amazon Web Services Region that you\'re using when you import. You can\'t import a model that\'s in a different Region.
', 'ListDatasets' => 'List the datasets that you have configured in this Region. For more information about datasets, see Flywheel overview in the Amazon Comprehend Developer Guide.
', 'ListDocumentClassificationJobs' => 'Gets a list of the document classification jobs that you have submitted.
', 'ListDocumentClassifierSummaries' => 'Gets a list of summaries of the document classifiers that you have created.
', 'ListDocumentClassifiers' => 'Gets a list of the document classifiers that you have created.
', 'ListDominantLanguageDetectionJobs' => 'Gets a list of the dominant language detection jobs that you have submitted.
', 'ListEndpoints' => 'Gets a list of all existing endpoints that you\'ve created. For information about endpoints, see Managing endpoints.
', 'ListEntitiesDetectionJobs' => 'Gets a list of the entity detection jobs that you have submitted.
', 'ListEntityRecognizerSummaries' => 'Gets a list of summaries for the entity recognizers that you have created.
', 'ListEntityRecognizers' => 'Gets a list of the properties of all entity recognizers that you created, including recognizers currently in training. Allows you to filter the list of recognizers based on criteria such as status and submission time. This call returns up to 500 entity recognizers, with a default page size of 100.
The results are not in any particular order. Retrieve the list and sort locally if needed.
', 'ListEventsDetectionJobs' => 'Gets a list of the events detection jobs that you have submitted.
', 'ListFlywheelIterationHistory' => 'Gets information about the history of a flywheel iteration. For more information about flywheels, see Flywheel overview in the Amazon Comprehend Developer Guide.
', 'ListFlywheels' => 'Gets a list of the flywheels that you have created.
', 'ListKeyPhrasesDetectionJobs' => 'Gets a list of key phrase detection jobs that you have submitted.
', 'ListPiiEntitiesDetectionJobs' => 'Gets a list of the PII entity detection jobs that you have submitted.
', 'ListSentimentDetectionJobs' => 'Gets a list of sentiment detection jobs that you have submitted.
', 'ListTagsForResource' => 'Lists all tags associated with a given Amazon Comprehend resource.
', 'ListTargetedSentimentDetectionJobs' => 'Gets a list of targeted sentiment detection jobs that you have submitted.
', 'ListTopicsDetectionJobs' => 'Gets a list of the topic detection jobs that you have submitted.
', 'PutResourcePolicy' => 'Attaches a resource-based policy to a custom model. You can use this policy to authorize an entity in another Amazon Web Services account to import the custom model, which replicates it in Amazon Comprehend in their account.
', 'StartDocumentClassificationJob' => 'Starts an asynchronous document classification job. Use the DescribeDocumentClassificationJob operation to track the progress of the job.
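// Hedged example: a sketch of starting an asynchronous classification job and then
// polling it with DescribeDocumentClassificationJob, assuming $client was created
// as in the first sketch. The S3 URIs, role ARN, and classifier ARN are placeholder
// assumptions.
//
//     $start = $client->startDocumentClassificationJob([
//         'DocumentClassifierArn' => 'arn:aws:comprehend:us-west-2:111122223333:document-classifier/example',
//         'InputDataConfig' => ['S3Uri' => 's3://amzn-s3-demo-bucket/input/', 'InputFormat' => 'ONE_DOC_PER_LINE'],
//         'OutputDataConfig' => ['S3Uri' => 's3://amzn-s3-demo-bucket/output/'],
//         'DataAccessRoleArn' => 'arn:aws:iam::111122223333:role/ComprehendDataAccessRole',
//     ]);
//     $job = $client->describeDocumentClassificationJob(['JobId' => $start['JobId']]);
//     echo $job['DocumentClassificationJobProperties']['JobStatus'];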
', 'StartDominantLanguageDetectionJob' => 'Starts an asynchronous dominant language detection job for a collection of documents. Use the DescribeDominantLanguageDetectionJob operation to track the status of a job.
', 'StartEntitiesDetectionJob' => 'Starts an asynchronous entity detection job for a collection of documents. Use the DescribeEntitiesDetectionJob operation to track the status of a job.
This API can be used for either standard entity detection or custom entity recognition. To use it for custom entity recognition, you must provide the optional EntityRecognizerArn to grant access to the recognizer used to detect the custom entities.
', 'StartEventsDetectionJob' => 'Starts an asynchronous event detection job for a collection of documents.
', 'StartFlywheelIteration' => 'Starts a flywheel iteration. This operation uses any new datasets to train a new model version. For more information about flywheels, see Flywheel overview in the Amazon Comprehend Developer Guide.
', 'StartKeyPhrasesDetectionJob' => 'Starts an asynchronous key phrase detection job for a collection of documents. Use the DescribeKeyPhrasesDetectionJob operation to track the status of a job.
', 'StartPiiEntitiesDetectionJob' => 'Starts an asynchronous PII entity detection job for a collection of documents.
', 'StartSentimentDetectionJob' => 'Starts an asynchronous sentiment detection job for a collection of documents. Use the DescribeSentimentDetectionJob operation to track the status of a job.
', 'StartTargetedSentimentDetectionJob' => 'Starts an asynchronous targeted sentiment detection job for a collection of documents. Use the DescribeTargetedSentimentDetectionJob operation to track the status of a job.
', 'StartTopicsDetectionJob' => 'Starts an asynchronous topic detection job. Use the DescribeTopicsDetectionJob operation to track the status of a job.
', 'StopDominantLanguageDetectionJob' => 'Stops a dominant language detection job in progress.
If the job state is IN_PROGRESS, the job is marked for termination and put into the STOP_REQUESTED state. If the job completes before it can be stopped, it is put into the COMPLETED state; otherwise the job is stopped and put into the STOPPED state.
If the job is in the COMPLETED or FAILED state when you call the StopDominantLanguageDetectionJob operation, the operation returns a 400 Internal Request Exception.
When a job is stopped, any documents already processed are written to the output location.
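// Hedged example: a minimal sketch of the stop call, assuming $client was created
// as in the first sketch; the job ID is a placeholder. If the job is still
// IN_PROGRESS, the returned JobStatus is STOP_REQUESTED.
//
//     $result = $client->stopDominantLanguageDetectionJob([
//         'JobId' => '1234abcd12ab34cd56ef1234567890ab',
//     ]);
//     echo $result['JobStatus'];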
', 'StopEntitiesDetectionJob' => 'Stops an entities detection job in progress.
If the job state is IN_PROGRESS, the job is marked for termination and put into the STOP_REQUESTED state. If the job completes before it can be stopped, it is put into the COMPLETED state; otherwise the job is stopped and put into the STOPPED state.
If the job is in the COMPLETED or FAILED state when you call the StopEntitiesDetectionJob operation, the operation returns a 400 Internal Request Exception.
When a job is stopped, any documents already processed are written to the output location.
', 'StopEventsDetectionJob' => 'Stops an events detection job in progress.
', 'StopKeyPhrasesDetectionJob' => 'Stops a key phrases detection job in progress.
If the job state is IN_PROGRESS, the job is marked for termination and put into the STOP_REQUESTED state. If the job completes before it can be stopped, it is put into the COMPLETED state; otherwise the job is stopped and put into the STOPPED state.
If the job is in the COMPLETED or FAILED state when you call the StopKeyPhrasesDetectionJob operation, the operation returns a 400 Internal Request Exception.
When a job is stopped, any documents already processed are written to the output location.
', 'StopPiiEntitiesDetectionJob' => 'Stops a PII entities detection job in progress.
', 'StopSentimentDetectionJob' => 'Stops a sentiment detection job in progress.
If the job state is IN_PROGRESS, the job is marked for termination and put into the STOP_REQUESTED state. If the job completes before it can be stopped, it is put into the COMPLETED state; otherwise the job is stopped and put into the STOPPED state.
If the job is in the COMPLETED or FAILED state when you call the StopSentimentDetectionJob operation, the operation returns a 400 Internal Request Exception.
When a job is stopped, any documents already processed are written to the output location.
', 'StopTargetedSentimentDetectionJob' => 'Stops a targeted sentiment detection job in progress.
If the job state is IN_PROGRESS, the job is marked for termination and put into the STOP_REQUESTED state. If the job completes before it can be stopped, it is put into the COMPLETED state; otherwise the job is stopped and put into the STOPPED state.
If the job is in the COMPLETED or FAILED state when you call the StopTargetedSentimentDetectionJob operation, the operation returns a 400 Internal Request Exception.
When a job is stopped, any documents already processed are written to the output location.
', 'StopTrainingDocumentClassifier' => 'Stops a document classifier training job while in progress.
If the training job state is TRAINING, the job is marked for termination and put into the STOP_REQUESTED state. If the training job completes before it can be stopped, it is put into the TRAINED state; otherwise the training job is stopped and put into the STOPPED state and the service sends back an HTTP 200 response with an empty HTTP body.
', 'StopTrainingEntityRecognizer' => 'Stops an entity recognizer training job while in progress.
If the training job state is TRAINING, the job is marked for termination and put into the STOP_REQUESTED state. If the training job completes before it can be stopped, it is put into the TRAINED state; otherwise the training job is stopped and put into the STOPPED state and the service sends back an HTTP 200 response with an empty HTTP body.
', 'TagResource' => 'Associates a specific tag with an Amazon Comprehend resource. A tag is a key-value pair that adds metadata to a resource used by Amazon Comprehend. For example, a tag with "Sales" as the key might be added to a resource to indicate its use by the sales department.
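// Hedged example: a minimal sketch of tagging a resource, mirroring the "Sales"
// example above and assuming $client was created as in the first sketch; the
// resource ARN is a placeholder assumption.
//
//     $client->tagResource([
//         'ResourceArn' => 'arn:aws:comprehend:us-west-2:111122223333:document-classifier/example',
//         'Tags' => [['Key' => 'Department', 'Value' => 'Sales']],
//     ]);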
', 'UntagResource' => 'Removes a specific tag associated with an Amazon Comprehend resource.
', 'UpdateEndpoint' => 'Updates information about the specified endpoint. For information about endpoints, see Managing endpoints.
', 'UpdateFlywheel' => 'Update the configuration information for an existing flywheel.
', ], 'shapes' => [ 'AnyLengthString' => [ 'base' => NULL, 'refs' => [ 'DatasetProperties$Message' => 'A description of the status of the dataset.
', 'DocumentClassificationJobProperties$Message' => 'A description of the status of the job.
', 'DocumentClassifierProperties$Message' => 'Additional information about the status of the classifier.
', 'DominantLanguageDetectionJobProperties$Message' => 'A description for the status of a job.
', 'EndpointProperties$Message' => 'Specifies a reason for failure in cases of Failed status.
', 'EntitiesDetectionJobProperties$Message' => 'A description of the status of a job.
', 'EntityRecognizerMetadataEntityTypesListItem$Type' => 'Type of entity from the list of entity types in the metadata of an entity recognizer.
', 'EntityRecognizerProperties$Message' => 'A description of the status of the recognizer.
', 'EventsDetectionJobProperties$Message' => 'A description of the status of the events detection job.
', 'FlywheelIterationProperties$Message' => 'A description of the status of the flywheel iteration.
', 'FlywheelProperties$Message' => 'A description of the status of the flywheel.
', 'FlywheelSummary$Message' => 'A description of the status of the flywheel.
', 'KeyPhrasesDetectionJobProperties$Message' => 'A description of the status of a job.
', 'PiiEntitiesDetectionJobProperties$Message' => 'A description of the status of a job.
', 'SentimentDetectionJobProperties$Message' => 'A description of the status of a job.
', 'TargetedSentimentDetectionJobProperties$Message' => 'A description of the status of a job.
', 'TopicsDetectionJobProperties$Message' => 'A description for the status of a job.
', ], ], 'AttributeNamesList' => [ 'base' => NULL, 'refs' => [ 'AugmentedManifestsListItem$AttributeNames' => 'The JSON attribute that contains the annotations for your training documents. The number of attribute names that you specify depends on whether your augmented manifest file is the output of a single labeling job or a chained labeling job.
If your file is the output of a single labeling job, specify the LabelAttributeName key that was used when the job was created in Ground Truth.
If your file is the output of a chained labeling job, specify the LabelAttributeName key for one or more jobs in the chain. Each LabelAttributeName key provides the annotations from an individual job.
', 'DatasetAugmentedManifestsListItem$AttributeNames' => 'The JSON attribute that contains the annotations for your training documents. The number of attribute names that you specify depends on whether your augmented manifest file is the output of a single labeling job or a chained labeling job.
If your file is the output of a single labeling job, specify the LabelAttributeName key that was used when the job was created in Ground Truth.
If your file is the output of a chained labeling job, specify the LabelAttributeName key for one or more jobs in the chain. Each LabelAttributeName key provides the annotations from an individual job.
', ], ], 'AttributeNamesListItem' => [ 'base' => NULL, 'refs' => [ 'AttributeNamesList$member' => NULL, ], ], 'AugmentedManifestsDocumentTypeFormat' => [ 'base' => NULL, 'refs' => [ 'AugmentedManifestsListItem$DocumentType' => 'The type of augmented manifest. PlainTextDocument or SemiStructuredDocument. If you don\'t specify, the default is PlainTextDocument.
PLAIN_TEXT_DOCUMENT - A document type that represents any unicode text that is encoded in UTF-8.
SEMI_STRUCTURED_DOCUMENT - A document type with positional and structural context, like a PDF. For training with Amazon Comprehend, only PDFs are supported. For inference, Amazon Comprehend supports PDFs, DOCX, and TXT.
', 'DatasetAugmentedManifestsListItem$DocumentType' => 'The type of augmented manifest. If you don\'t specify, the default is PlainTextDocument.
PLAIN_TEXT_DOCUMENT - A document type that represents any unicode text that is encoded in UTF-8.
', ], ], 'AugmentedManifestsListItem' => [ 'base' => 'An augmented manifest file that provides training data for your custom model. An augmented manifest file is a labeled dataset that is produced by Amazon SageMaker Ground Truth.
', 'refs' => [ 'DocumentClassifierAugmentedManifestsList$member' => NULL, 'EntityRecognizerAugmentedManifestsList$member' => NULL, ], ], 'BatchDetectDominantLanguageItemResult' => [ 'base' => 'The result of calling the operation. The operation returns one object for each document that is successfully processed by the operation.
', 'refs' => [ 'ListOfDetectDominantLanguageResult$member' => NULL, ], ], 'BatchDetectDominantLanguageRequest' => [ 'base' => NULL, 'refs' => [], ], 'BatchDetectDominantLanguageResponse' => [ 'base' => NULL, 'refs' => [], ], 'BatchDetectEntitiesItemResult' => [ 'base' => 'The result of calling the operation. The operation returns one object for each document that is successfully processed by the operation.
', 'refs' => [ 'ListOfDetectEntitiesResult$member' => NULL, ], ], 'BatchDetectEntitiesRequest' => [ 'base' => NULL, 'refs' => [], ], 'BatchDetectEntitiesResponse' => [ 'base' => NULL, 'refs' => [], ], 'BatchDetectKeyPhrasesItemResult' => [ 'base' => 'The result of calling the operation. The operation returns one object for each document that is successfully processed by the operation.
', 'refs' => [ 'ListOfDetectKeyPhrasesResult$member' => NULL, ], ], 'BatchDetectKeyPhrasesRequest' => [ 'base' => NULL, 'refs' => [], ], 'BatchDetectKeyPhrasesResponse' => [ 'base' => NULL, 'refs' => [], ], 'BatchDetectSentimentItemResult' => [ 'base' => 'The result of calling the operation. The operation returns one object for each document that is successfully processed by the operation.
', 'refs' => [ 'ListOfDetectSentimentResult$member' => NULL, ], ], 'BatchDetectSentimentRequest' => [ 'base' => NULL, 'refs' => [], ], 'BatchDetectSentimentResponse' => [ 'base' => NULL, 'refs' => [], ], 'BatchDetectSyntaxItemResult' => [ 'base' => 'The result of calling the operation. The operation returns one object for each document that is successfully processed by the operation.
', 'refs' => [ 'ListOfDetectSyntaxResult$member' => NULL, ], ], 'BatchDetectSyntaxRequest' => [ 'base' => NULL, 'refs' => [], ], 'BatchDetectSyntaxResponse' => [ 'base' => NULL, 'refs' => [], ], 'BatchDetectTargetedSentimentItemResult' => [ 'base' => 'Analysis results for one of the documents in the batch.
', 'refs' => [ 'ListOfDetectTargetedSentimentResult$member' => NULL, ], ], 'BatchDetectTargetedSentimentRequest' => [ 'base' => NULL, 'refs' => [], ], 'BatchDetectTargetedSentimentResponse' => [ 'base' => NULL, 'refs' => [], ], 'BatchItemError' => [ 'base' => 'Describes an error that occurred while processing a document in a batch. The operation returns one BatchItemError object for each document that contained an error.
', 'refs' => [ 'BatchItemErrorList$member' => NULL, ], ], 'BatchItemErrorList' => [ 'base' => NULL, 'refs' => [ 'BatchDetectDominantLanguageResponse$ErrorList' => 'A list containing one object for each document that contained an error. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If there are no errors in the batch, the ErrorList is empty.
', 'BatchDetectEntitiesResponse$ErrorList' => 'A list containing one object for each document that contained an error. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If there are no errors in the batch, the ErrorList is empty.
', 'BatchDetectKeyPhrasesResponse$ErrorList' => 'A list containing one object for each document that contained an error. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If there are no errors in the batch, the ErrorList is empty.
', 'BatchDetectSentimentResponse$ErrorList' => 'A list containing one object for each document that contained an error. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If there are no errors in the batch, the ErrorList is empty.
', 'BatchDetectSyntaxResponse$ErrorList' => 'A list containing one object for each document that contained an error. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If there are no errors in the batch, the ErrorList is empty.
', 'BatchDetectTargetedSentimentResponse$ErrorList' => 'List of errors that the operation can return.
', ], ], 'BatchSizeLimitExceededException' => [ 'base' => 'The number of documents in the request exceeds the limit of 25. Try your request again with fewer documents.
', 'refs' => [], ], 'Block' => [ 'base' => 'Information about each word or line of text in the input document.
For additional information, see Block in the Amazon Textract API reference.
', 'refs' => [ 'ListOfBlocks$member' => NULL, ], ], 'BlockReference' => [ 'base' => 'A reference to a block.
', 'refs' => [ 'ListOfBlockReferences$member' => NULL, ], ], 'BlockType' => [ 'base' => NULL, 'refs' => [ 'Block$BlockType' => 'The block represents a line of text or one word of text.
WORD - A word that\'s detected on a document page. A word is one or more ISO basic Latin script characters that aren\'t separated by spaces.
LINE - A string of tab-delimited, contiguous words that are detected on a document page
The bounding box around the detected page or around an element on a document page. The left (x-coordinate) and top (y-coordinate) are coordinates that represent the top and left sides of the bounding box. Note that the upper-left corner of the image is the origin (0,0).
For additional information, see BoundingBox in the Amazon Textract API reference.
', 'refs' => [ 'Geometry$BoundingBox' => 'An axis-aligned coarse representation of the location of the recognized item on the document page.
', ], ], 'ChildBlock' => [ 'base' => 'Nested block contained within a block.
', 'refs' => [ 'ListOfChildBlocks$member' => NULL, ], ], 'ClassifierEvaluationMetrics' => [ 'base' => 'Describes the result metrics for the test data associated with an documentation classifier.
', 'refs' => [ 'ClassifierMetadata$EvaluationMetrics' => 'Describes the result metrics for the test data associated with an documentation classifier.
', ], ], 'ClassifierMetadata' => [ 'base' => 'Provides information about a document classifier.
', 'refs' => [ 'DocumentClassifierProperties$ClassifierMetadata' => 'Information about the document classifier, including the number of documents used for training the classifier, the number of documents used for test the classifier, and an accuracy rating.
', ], ], 'ClassifyDocumentRequest' => [ 'base' => NULL, 'refs' => [], ], 'ClassifyDocumentResponse' => [ 'base' => NULL, 'refs' => [], ], 'ClientRequestTokenString' => [ 'base' => NULL, 'refs' => [ 'CreateDatasetRequest$ClientRequestToken' => 'A unique identifier for the request. If you don\'t set the client request token, Amazon Comprehend generates one.
', 'CreateDocumentClassifierRequest$ClientRequestToken' => 'A unique identifier for the request. If you don\'t set the client request token, Amazon Comprehend generates one.
', 'CreateEndpointRequest$ClientRequestToken' => 'An idempotency token provided by the customer. If this token matches a previous endpoint creation request, Amazon Comprehend will not return a ResourceInUseException
.
A unique identifier for the request. If you don\'t set the client request token, Amazon Comprehend generates one.
', 'CreateFlywheelRequest$ClientRequestToken' => 'A unique identifier for the request. If you don\'t set the client request token, Amazon Comprehend generates one.
', 'StartDocumentClassificationJobRequest$ClientRequestToken' => 'A unique identifier for the request. If you do not set the client request token, Amazon Comprehend generates one.
', 'StartDominantLanguageDetectionJobRequest$ClientRequestToken' => 'A unique identifier for the request. If you do not set the client request token, Amazon Comprehend generates one.
', 'StartEntitiesDetectionJobRequest$ClientRequestToken' => 'A unique identifier for the request. If you don\'t set the client request token, Amazon Comprehend generates one.
', 'StartEventsDetectionJobRequest$ClientRequestToken' => 'An unique identifier for the request. If you don\'t set the client request token, Amazon Comprehend generates one.
', 'StartFlywheelIterationRequest$ClientRequestToken' => 'A unique identifier for the request. If you don\'t set the client request token, Amazon Comprehend generates one.
', 'StartKeyPhrasesDetectionJobRequest$ClientRequestToken' => 'A unique identifier for the request. If you don\'t set the client request token, Amazon Comprehend generates one.
', 'StartPiiEntitiesDetectionJobRequest$ClientRequestToken' => 'A unique identifier for the request. If you don\'t set the client request token, Amazon Comprehend generates one.
', 'StartSentimentDetectionJobRequest$ClientRequestToken' => 'A unique identifier for the request. If you don\'t set the client request token, Amazon Comprehend generates one.
', 'StartTargetedSentimentDetectionJobRequest$ClientRequestToken' => 'A unique identifier for the request. If you don\'t set the client request token, Amazon Comprehend generates one.
', 'StartTopicsDetectionJobRequest$ClientRequestToken' => 'A unique identifier for the request. If you do not set the client request token, Amazon Comprehend generates one.
', ], ], 'ComprehendArn' => [ 'base' => NULL, 'refs' => [ 'DocumentClassificationJobProperties$JobArn' => 'The Amazon Resource Name (ARN) of the document classification job. It is a unique, fully qualified identifier for the job. It includes the Amazon Web Services account, Amazon Web Services Region, and the job ID. The format of the ARN is as follows:
arn:<partition>:comprehend:<region>:<account-id>:document-classification-job/<job-id>
The following is an example job ARN:
arn:aws:comprehend:us-west-2:111122223333:document-classification-job/1234abcd12ab34cd56ef1234567890ab
', 'DominantLanguageDetectionJobProperties$JobArn' => 'The Amazon Resource Name (ARN) of the dominant language detection job. It is a unique, fully qualified identifier for the job. It includes the Amazon Web Services account, Amazon Web Services Region, and the job ID. The format of the ARN is as follows:
arn:<partition>:comprehend:<region>:<account-id>:dominant-language-detection-job/<job-id>
The following is an example job ARN:
arn:aws:comprehend:us-west-2:111122223333:dominant-language-detection-job/1234abcd12ab34cd56ef1234567890ab
', 'EntitiesDetectionJobProperties$JobArn' => 'The Amazon Resource Name (ARN) of the entities detection job. It is a unique, fully qualified identifier for the job. It includes the Amazon Web Services account, Amazon Web Services Region, and the job ID. The format of the ARN is as follows:
arn:<partition>:comprehend:<region>:<account-id>:entities-detection-job/<job-id>
The following is an example job ARN:
arn:aws:comprehend:us-west-2:111122223333:entities-detection-job/1234abcd12ab34cd56ef1234567890ab
', 'EventsDetectionJobProperties$JobArn' => 'The Amazon Resource Name (ARN) of the events detection job. It is a unique, fully qualified identifier for the job. It includes the Amazon Web Services account, Amazon Web Services Region, and the job ID. The format of the ARN is as follows:
arn:<partition>:comprehend:<region>:<account-id>:events-detection-job/<job-id>
The following is an example job ARN:
arn:aws:comprehend:us-west-2:111122223333:events-detection-job/1234abcd12ab34cd56ef1234567890ab
', 'KeyPhrasesDetectionJobProperties$JobArn' => 'The Amazon Resource Name (ARN) of the key phrases detection job. It is a unique, fully qualified identifier for the job. It includes the Amazon Web Services account, Amazon Web Services Region, and the job ID. The format of the ARN is as follows:
arn:<partition>:comprehend:<region>:<account-id>:key-phrases-detection-job/<job-id>
The following is an example job ARN:
arn:aws:comprehend:us-west-2:111122223333:key-phrases-detection-job/1234abcd12ab34cd56ef1234567890ab
', 'ListTagsForResourceRequest$ResourceArn' => 'The Amazon Resource Name (ARN) of the given Amazon Comprehend resource you are querying.
', 'ListTagsForResourceResponse$ResourceArn' => 'The Amazon Resource Name (ARN) of the given Amazon Comprehend resource you are querying.
', 'PiiEntitiesDetectionJobProperties$JobArn' => 'The Amazon Resource Name (ARN) of the PII entities detection job. It is a unique, fully qualified identifier for the job. It includes the Amazon Web Services account, Amazon Web Services Region, and the job ID. The format of the ARN is as follows:
arn:<partition>:comprehend:<region>:<account-id>:pii-entities-detection-job/<job-id>
The following is an example job ARN:
arn:aws:comprehend:us-west-2:111122223333:pii-entities-detection-job/1234abcd12ab34cd56ef1234567890ab
', 'SentimentDetectionJobProperties$JobArn' => 'The Amazon Resource Name (ARN) of the sentiment detection job. It is a unique, fully qualified identifier for the job. It includes the Amazon Web Services account, Amazon Web Services Region, and the job ID. The format of the ARN is as follows:
arn:<partition>:comprehend:<region>:<account-id>:sentiment-detection-job/<job-id>
The following is an example job ARN:
arn:aws:comprehend:us-west-2:111122223333:sentiment-detection-job/1234abcd12ab34cd56ef1234567890ab
', 'StartDocumentClassificationJobResponse$JobArn' => 'The Amazon Resource Name (ARN) of the document classification job. It is a unique, fully qualified identifier for the job. It includes the Amazon Web Services account, Amazon Web Services Region, and the job ID. The format of the ARN is as follows:
arn:<partition>:comprehend:<region>:<account-id>:document-classification-job/<job-id>
The following is an example job ARN:
arn:aws:comprehend:us-west-2:111122223333:document-classification-job/1234abcd12ab34cd56ef1234567890ab
', 'StartDominantLanguageDetectionJobResponse$JobArn' => 'The Amazon Resource Name (ARN) of the dominant language detection job. It is a unique, fully qualified identifier for the job. It includes the Amazon Web Services account, Amazon Web Services Region, and the job ID. The format of the ARN is as follows:
arn:<partition>:comprehend:<region>:<account-id>:dominant-language-detection-job/<job-id>
The following is an example job ARN:
arn:aws:comprehend:us-west-2:111122223333:dominant-language-detection-job/1234abcd12ab34cd56ef1234567890ab
', 'StartEntitiesDetectionJobResponse$JobArn' => 'The Amazon Resource Name (ARN) of the entities detection job. It is a unique, fully qualified identifier for the job. It includes the Amazon Web Services account, Amazon Web Services Region, and the job ID. The format of the ARN is as follows:
arn:<partition>:comprehend:<region>:<account-id>:entities-detection-job/<job-id>
The following is an example job ARN:
arn:aws:comprehend:us-west-2:111122223333:entities-detection-job/1234abcd12ab34cd56ef1234567890ab
', 'StartEventsDetectionJobResponse$JobArn' => 'The Amazon Resource Name (ARN) of the events detection job. It is a unique, fully qualified identifier for the job. It includes the Amazon Web Services account, Amazon Web Services Region, and the job ID. The format of the ARN is as follows:
arn:<partition>:comprehend:<region>:<account-id>:events-detection-job/<job-id>
The following is an example job ARN:
arn:aws:comprehend:us-west-2:111122223333:events-detection-job/1234abcd12ab34cd56ef1234567890ab
', 'StartKeyPhrasesDetectionJobResponse$JobArn' => 'The Amazon Resource Name (ARN) of the key phrase detection job. It is a unique, fully qualified identifier for the job. It includes the Amazon Web Services account, Amazon Web Services Region, and the job ID. The format of the ARN is as follows:
arn:<partition>:comprehend:<region>:<account-id>:key-phrases-detection-job/<job-id>
The following is an example job ARN:
arn:aws:comprehend:us-west-2:111122223333:key-phrases-detection-job/1234abcd12ab34cd56ef1234567890ab
', 'StartPiiEntitiesDetectionJobResponse$JobArn' => 'The Amazon Resource Name (ARN) of the PII entity detection job. It is a unique, fully qualified identifier for the job. It includes the Amazon Web Services account, Amazon Web Services Region, and the job ID. The format of the ARN is as follows:
arn:<partition>:comprehend:<region>:<account-id>:pii-entities-detection-job/<job-id>
The following is an example job ARN:
arn:aws:comprehend:us-west-2:111122223333:pii-entities-detection-job/1234abcd12ab34cd56ef1234567890ab
', 'StartSentimentDetectionJobResponse$JobArn' => 'The Amazon Resource Name (ARN) of the sentiment detection job. It is a unique, fully qualified identifier for the job. It includes the Amazon Web Services account, Amazon Web Services Region, and the job ID. The format of the ARN is as follows:
arn:<partition>:comprehend:<region>:<account-id>:sentiment-detection-job/<job-id>
The following is an example job ARN:
arn:aws:comprehend:us-west-2:111122223333:sentiment-detection-job/1234abcd12ab34cd56ef1234567890ab
', 'StartTargetedSentimentDetectionJobResponse$JobArn' => 'The Amazon Resource Name (ARN) of the targeted sentiment detection job. It is a unique, fully qualified identifier for the job. It includes the Amazon Web Services account, Amazon Web Services Region, and the job ID. The format of the ARN is as follows:
arn:<partition>:comprehend:<region>:<account-id>:targeted-sentiment-detection-job/<job-id>
The following is an example job ARN:
arn:aws:comprehend:us-west-2:111122223333:targeted-sentiment-detection-job/1234abcd12ab34cd56ef1234567890ab
', 'StartTopicsDetectionJobResponse$JobArn' => 'The Amazon Resource Name (ARN) of the topics detection job. It is a unique, fully qualified identifier for the job. It includes the Amazon Web Services account, Amazon Web Services Region, and the job ID. The format of the ARN is as follows:
arn:<partition>:comprehend:<region>:<account-id>:topics-detection-job/<job-id>
The following is an example job ARN:
arn:aws:comprehend:us-west-2:111122223333:topics-detection-job/1234abcd12ab34cd56ef1234567890ab
', 'TagResourceRequest$ResourceArn' => 'The Amazon Resource Name (ARN) of the given Amazon Comprehend resource to which you want to associate the tags.
', 'TargetedSentimentDetectionJobProperties$JobArn' => 'The Amazon Resource Name (ARN) of the targeted sentiment detection job. It is a unique, fully qualified identifier for the job. It includes the Amazon Web Services account, Amazon Web Services Region, and the job ID. The format of the ARN is as follows:
arn:<partition>:comprehend:<region>:<account-id>:targeted-sentiment-detection-job/<job-id>
The following is an example job ARN:
arn:aws:comprehend:us-west-2:111122223333:targeted-sentiment-detection-job/1234abcd12ab34cd56ef1234567890ab
', 'TopicsDetectionJobProperties$JobArn' => 'The Amazon Resource Name (ARN) of the topics detection job. It is a unique, fully qualified identifier for the job. It includes the Amazon Web Services account, Amazon Web Services Region, and the job ID. The format of the ARN is as follows:
arn:<partition>:comprehend:<region>:<account-id>:topics-detection-job/<job-id>
The following is an example job ARN:
arn:aws:comprehend:us-west-2:111122223333:topics-detection-job/1234abcd12ab34cd56ef1234567890ab
', 'UntagResourceRequest$ResourceArn' => 'The Amazon Resource Name (ARN) of the given Amazon Comprehend resource from which you want to remove the tags.
', ], ], 'ComprehendArnName' => [ 'base' => NULL, 'refs' => [ 'CreateDatasetRequest$DatasetName' => 'Name of the dataset.
', 'CreateDocumentClassifierRequest$DocumentClassifierName' => 'The name of the document classifier.
', 'CreateEntityRecognizerRequest$RecognizerName' => 'The name given to the newly created recognizer. Recognizer names can be a maximum of 256 characters. Alphanumeric characters, hyphens (-) and underscores (_) are allowed. The name must be unique in the account/Region.
', 'CreateFlywheelRequest$FlywheelName' => 'Name for the flywheel.
', 'DatasetProperties$DatasetName' => 'The name of the dataset.
', 'DocumentClassifierFilter$DocumentClassifierName' => 'The name that you assigned to the document classifier
', 'DocumentClassifierSummary$DocumentClassifierName' => 'The name that you assigned the document classifier.
', 'EntityRecognizerFilter$RecognizerName' => 'The name that you assigned the entity recognizer.
', 'EntityRecognizerSummary$RecognizerName' => 'The name that you assigned the entity recognizer.
', 'ImportModelRequest$ModelName' => 'The name to assign to the custom model that is created in Amazon Comprehend by this import.
', ], ], 'ComprehendDatasetArn' => [ 'base' => NULL, 'refs' => [ 'CreateDatasetResponse$DatasetArn' => 'The ARN of the dataset.
', 'DatasetProperties$DatasetArn' => 'The ARN of the dataset.
', 'DescribeDatasetRequest$DatasetArn' => 'The ARN of the dataset.
', ], ], 'ComprehendEndpointArn' => [ 'base' => NULL, 'refs' => [ 'CreateEndpointResponse$EndpointArn' => 'The Amazon Resource Number (ARN) of the endpoint being created.
', 'DeleteEndpointRequest$EndpointArn' => 'The Amazon Resource Number (ARN) of the endpoint being deleted.
', 'DescribeEndpointRequest$EndpointArn' => 'The Amazon Resource Number (ARN) of the endpoint being described.
', 'EndpointProperties$EndpointArn' => 'The Amazon Resource Number (ARN) of the endpoint.
', 'UpdateEndpointRequest$EndpointArn' => 'The Amazon Resource Number (ARN) of the endpoint being updated.
', ], ], 'ComprehendEndpointName' => [ 'base' => NULL, 'refs' => [ 'CreateEndpointRequest$EndpointName' => 'This is the descriptive suffix that becomes part of the EndpointArn
used for all subsequent requests to this resource.
The Amazon Resource Number (ARN) of the flywheel of the flywheel to receive the data.
', 'CreateEndpointRequest$FlywheelArn' => 'The Amazon Resource Number (ARN) of the flywheel to which the endpoint will be attached.
', 'CreateFlywheelResponse$FlywheelArn' => 'The Amazon Resource Number (ARN) of the flywheel.
', 'DeleteFlywheelRequest$FlywheelArn' => 'The Amazon Resource Number (ARN) of the flywheel to delete.
', 'DescribeFlywheelIterationRequest$FlywheelArn' => '', 'DescribeFlywheelRequest$FlywheelArn' => 'The Amazon Resource Number (ARN) of the flywheel.
', 'DocumentClassificationJobProperties$FlywheelArn' => 'The Amazon Resource Number (ARN) of the flywheel
', 'DocumentClassifierProperties$FlywheelArn' => 'The Amazon Resource Number (ARN) of the flywheel
', 'EndpointProperties$FlywheelArn' => 'The Amazon Resource Number (ARN) of the flywheel
', 'EntitiesDetectionJobProperties$FlywheelArn' => 'The Amazon Resource Name (ARN) of the flywheel associated with this job.
', 'EntityRecognizerProperties$FlywheelArn' => 'The Amazon Resource Number (ARN) of the flywheel
', 'FlywheelIterationProperties$FlywheelArn' => '', 'FlywheelProperties$FlywheelArn' => 'The Amazon Resource Number (ARN) of the flywheel.
', 'FlywheelSummary$FlywheelArn' => 'The Amazon Resource Number (ARN) of the flywheel
', 'ListDatasetsRequest$FlywheelArn' => 'The Amazon Resource Number (ARN) of the flywheel.
', 'ListFlywheelIterationHistoryRequest$FlywheelArn' => 'The ARN of the flywheel.
', 'StartDocumentClassificationJobRequest$FlywheelArn' => 'The Amazon Resource Number (ARN) of the flywheel associated with the model to use.
', 'StartEntitiesDetectionJobRequest$FlywheelArn' => 'The Amazon Resource Number (ARN) of the flywheel associated with the model to use.
', 'StartFlywheelIterationRequest$FlywheelArn' => 'The ARN of the flywheel.
', 'StartFlywheelIterationResponse$FlywheelArn' => '', 'UpdateEndpointRequest$FlywheelArn' => 'The Amazon Resource Number (ARN) of the flywheel
', 'UpdateFlywheelRequest$FlywheelArn' => 'The Amazon Resource Number (ARN) of the flywheel to update.
', ], ], 'ComprehendModelArn' => [ 'base' => NULL, 'refs' => [ 'CreateEndpointRequest$ModelArn' => 'The Amazon Resource Number (ARN) of the model to which the endpoint will be attached.
', 'CreateEndpointResponse$ModelArn' => 'The Amazon Resource Number (ARN) of the model to which the endpoint is attached.
', 'CreateFlywheelRequest$ActiveModelArn' => 'To associate an existing model with the flywheel, specify the Amazon Resource Number (ARN) of the model version.
', 'CreateFlywheelResponse$ActiveModelArn' => 'The Amazon Resource Number (ARN) of the active model version.
', 'DeleteResourcePolicyRequest$ResourceArn' => 'The Amazon Resource Name (ARN) of the custom model version that has the policy to delete.
', 'DescribeResourcePolicyRequest$ResourceArn' => 'The Amazon Resource Name (ARN) of the custom model version that has the resource policy.
', 'EndpointFilter$ModelArn' => 'The Amazon Resource Number (ARN) of the model to which the endpoint is attached.
', 'EndpointProperties$ModelArn' => 'The Amazon Resource Number (ARN) of the model to which the endpoint is attached.
', 'EndpointProperties$DesiredModelArn' => 'ARN of the new model to use for updating an existing endpoint. This ARN is going to be different from the model ARN when the update is in progress
', 'FlywheelIterationProperties$EvaluatedModelArn' => 'The ARN of the evaluated model associated with this flywheel iteration.
', 'FlywheelIterationProperties$TrainedModelArn' => 'The ARN of the trained model associated with this flywheel iteration.
', 'FlywheelProperties$ActiveModelArn' => 'The Amazon Resource Number (ARN) of the active model version.
', 'FlywheelSummary$ActiveModelArn' => 'ARN of the active model version for the flywheel.
', 'ImportModelRequest$SourceModelArn' => 'The Amazon Resource Name (ARN) of the custom model to import.
', 'ImportModelResponse$ModelArn' => 'The Amazon Resource Name (ARN) of the custom model being imported.
', 'PutResourcePolicyRequest$ResourceArn' => 'The Amazon Resource Name (ARN) of the custom model to attach the policy to.
', 'UpdateEndpointRequest$DesiredModelArn' => 'The ARN of the new model to use when updating an existing endpoint.
', 'UpdateEndpointResponse$DesiredModelArn' => 'The Amazon Resource Number (ARN) of the new model.
', 'UpdateFlywheelRequest$ActiveModelArn' => 'The Amazon Resource Number (ARN) of the active model version.
', ], ], 'ConcurrentModificationException' => [ 'base' => 'Concurrent modification of the tags associated with an Amazon Comprehend resource is not supported.
', 'refs' => [], ], 'ContainsPiiEntitiesRequest' => [ 'base' => NULL, 'refs' => [], ], 'ContainsPiiEntitiesResponse' => [ 'base' => NULL, 'refs' => [], ], 'CreateDatasetRequest' => [ 'base' => NULL, 'refs' => [], ], 'CreateDatasetResponse' => [ 'base' => NULL, 'refs' => [], ], 'CreateDocumentClassifierRequest' => [ 'base' => NULL, 'refs' => [], ], 'CreateDocumentClassifierResponse' => [ 'base' => NULL, 'refs' => [], ], 'CreateEndpointRequest' => [ 'base' => NULL, 'refs' => [], ], 'CreateEndpointResponse' => [ 'base' => NULL, 'refs' => [], ], 'CreateEntityRecognizerRequest' => [ 'base' => NULL, 'refs' => [], ], 'CreateEntityRecognizerResponse' => [ 'base' => NULL, 'refs' => [], ], 'CreateFlywheelRequest' => [ 'base' => NULL, 'refs' => [], ], 'CreateFlywheelResponse' => [ 'base' => NULL, 'refs' => [], ], 'CustomerInputString' => [ 'base' => NULL, 'refs' => [ 'ClassifyDocumentRequest$Text' => 'The document text to be analyzed. If you enter text using this parameter, do not use the Bytes
parameter.
A UTF-8 text string. The string must contain at least 20 characters. The maximum string size is 100 KB.
', 'DetectEntitiesRequest$Text' => 'A UTF-8 text string. The maximum string size is 100 KB. If you enter text using this parameter, do not use the Bytes
parameter.
A UTF-8 text string. The string must contain less than 100 KB of UTF-8 encoded characters.
', 'DetectSentimentRequest$Text' => 'A UTF-8 text string. The maximum string size is 5 KB.
', 'DetectSyntaxRequest$Text' => 'A UTF-8 string. The maximum string size is 5 KB.
', 'DetectTargetedSentimentRequest$Text' => 'A UTF-8 text string. The maximum string length is 5 KB.
', ], ], 'CustomerInputStringList' => [ 'base' => NULL, 'refs' => [ 'BatchDetectDominantLanguageRequest$TextList' => 'A list containing the UTF-8 encoded text of the input documents. The list can contain a maximum of 25 documents. Each document should contain at least 20 characters. The maximum size of each document is 5 KB.
', 'BatchDetectEntitiesRequest$TextList' => 'A list containing the UTF-8 encoded text of the input documents. The list can contain a maximum of 25 documents. The maximum size of each document is 5 KB.
', 'BatchDetectKeyPhrasesRequest$TextList' => 'A list containing the UTF-8 encoded text of the input documents. The list can contain a maximum of 25 documents. The maximum size of each document is 5 KB.
', 'BatchDetectSentimentRequest$TextList' => 'A list containing the UTF-8 encoded text of the input documents. The list can contain a maximum of 25 documents. The maximum size of each document is 5 KB.
', 'BatchDetectSyntaxRequest$TextList' => 'A list containing the UTF-8 encoded text of the input documents. The list can contain a maximum of 25 documents. The maximum size for each document is 5 KB.
', 'BatchDetectTargetedSentimentRequest$TextList' => 'A list containing the UTF-8 encoded text of the input documents. The list can contain a maximum of 25 documents. The maximum size of each document is 5 KB.
', ], ], 'DataSecurityConfig' => [ 'base' => 'Data security configuration.
', 'refs' => [ 'CreateFlywheelRequest$DataSecurityConfig' => 'Data security configurations.
', 'FlywheelProperties$DataSecurityConfig' => 'Data security configuration.
', ], ], 'DatasetAugmentedManifestsList' => [ 'base' => NULL, 'refs' => [ 'DatasetInputDataConfig$AugmentedManifests' => 'A list of augmented manifest files that provide training data for your custom model. An augmented manifest file is a labeled dataset that is produced by Amazon SageMaker Ground Truth.
', ], ], 'DatasetAugmentedManifestsListItem' => [ 'base' => 'An augmented manifest file that provides training data for your custom model. An augmented manifest file is a labeled dataset that is produced by Amazon SageMaker Ground Truth.
', 'refs' => [ 'DatasetAugmentedManifestsList$member' => NULL, ], ], 'DatasetDataFormat' => [ 'base' => NULL, 'refs' => [ 'DatasetInputDataConfig$DataFormat' => ' COMPREHEND_CSV
: The data format is a two-column CSV file, where the first column contains labels and the second column contains documents.
AUGMENTED_MANIFEST
: The data format
Describes the dataset input data configuration for a document classifier model.
For more information on how the input file is formatted, see Preparing training data in the Comprehend Developer Guide.
', 'refs' => [ 'DatasetInputDataConfig$DocumentClassifierInputDataConfig' => 'The input properties for training a document classifier model.
For more information on how the input file is formatted, see Preparing training data in the Comprehend Developer Guide.
', ], ], 'DatasetEntityRecognizerAnnotations' => [ 'base' => 'Describes the annotations associated with a entity recognizer.
', 'refs' => [ 'DatasetEntityRecognizerInputDataConfig$Annotations' => 'The S3 location of the annotation documents for your custom entity recognizer.
', ], ], 'DatasetEntityRecognizerDocuments' => [ 'base' => 'Describes the documents submitted with a dataset for an entity recognizer model.
', 'refs' => [ 'DatasetEntityRecognizerInputDataConfig$Documents' => 'The format and location of the training documents for your custom entity recognizer.
', ], ], 'DatasetEntityRecognizerEntityList' => [ 'base' => 'Describes the dataset entity list for an entity recognizer model.
For more information on how the input file is formatted, see Preparing training data in the Comprehend Developer Guide.
', 'refs' => [ 'DatasetEntityRecognizerInputDataConfig$EntityList' => 'The S3 location of the entity list for your custom entity recognizer.
', ], ], 'DatasetEntityRecognizerInputDataConfig' => [ 'base' => 'Specifies the format and location of the input data. You must provide either the Annotations
parameter or the EntityList
parameter.
The input properties for training an entity recognizer model.
', ], ], 'DatasetFilter' => [ 'base' => 'Filter the datasets based on creation time or dataset status.
', 'refs' => [ 'ListDatasetsRequest$Filter' => 'Filters the datasets to be returned in the response.
', ], ], 'DatasetInputDataConfig' => [ 'base' => 'Specifies the format and location of the input data for the dataset.
', 'refs' => [ 'CreateDatasetRequest$InputDataConfig' => 'Information about the input data configuration. The type of input data varies based on the format of the input and whether the data is for a classifier model or an entity recognition model.
', ], ], 'DatasetProperties' => [ 'base' => 'Properties associated with the dataset.
', 'refs' => [ 'DatasetPropertiesList$member' => NULL, 'DescribeDatasetResponse$DatasetProperties' => 'The dataset properties.
', ], ], 'DatasetPropertiesList' => [ 'base' => NULL, 'refs' => [ 'ListDatasetsResponse$DatasetPropertiesList' => 'The dataset properties list.
', ], ], 'DatasetStatus' => [ 'base' => NULL, 'refs' => [ 'DatasetFilter$Status' => 'Filter the datasets based on the dataset status.
', 'DatasetProperties$Status' => 'The dataset status. While the system creates the dataset, the status is CREATING
. When the dataset is ready to use, the status changes to COMPLETED
.
The dataset type. You can specify that the data in a dataset is for training the model or for testing the model.
', 'DatasetFilter$DatasetType' => 'Filter the datasets based on the dataset type.
', 'DatasetProperties$DatasetType' => 'The dataset type (training data or test data).
', ], ], 'DeleteDocumentClassifierRequest' => [ 'base' => NULL, 'refs' => [], ], 'DeleteDocumentClassifierResponse' => [ 'base' => NULL, 'refs' => [], ], 'DeleteEndpointRequest' => [ 'base' => NULL, 'refs' => [], ], 'DeleteEndpointResponse' => [ 'base' => NULL, 'refs' => [], ], 'DeleteEntityRecognizerRequest' => [ 'base' => NULL, 'refs' => [], ], 'DeleteEntityRecognizerResponse' => [ 'base' => NULL, 'refs' => [], ], 'DeleteFlywheelRequest' => [ 'base' => NULL, 'refs' => [], ], 'DeleteFlywheelResponse' => [ 'base' => NULL, 'refs' => [], ], 'DeleteResourcePolicyRequest' => [ 'base' => NULL, 'refs' => [], ], 'DeleteResourcePolicyResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeDatasetRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeDatasetResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeDocumentClassificationJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeDocumentClassificationJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeDocumentClassifierRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeDocumentClassifierResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeDominantLanguageDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeDominantLanguageDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeEndpointRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeEndpointResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeEntitiesDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeEntitiesDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeEntityRecognizerRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeEntityRecognizerResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeEventsDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeEventsDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeFlywheelIterationRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeFlywheelIterationResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeFlywheelRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeFlywheelResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeKeyPhrasesDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeKeyPhrasesDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribePiiEntitiesDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribePiiEntitiesDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeResourcePolicyRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeResourcePolicyResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeSentimentDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeSentimentDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeTargetedSentimentDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeTargetedSentimentDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeTopicsDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeTopicsDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'Description' => [ 'base' => NULL, 'refs' => [ 'CreateDatasetRequest$Description' => 'Description of the dataset.
', 'DatasetProperties$Description' => 'Description of the dataset.
', ], ], 'DetectDominantLanguageRequest' => [ 'base' => NULL, 'refs' => [], ], 'DetectDominantLanguageResponse' => [ 'base' => NULL, 'refs' => [], ], 'DetectEntitiesRequest' => [ 'base' => NULL, 'refs' => [], ], 'DetectEntitiesResponse' => [ 'base' => NULL, 'refs' => [], ], 'DetectKeyPhrasesRequest' => [ 'base' => NULL, 'refs' => [], ], 'DetectKeyPhrasesResponse' => [ 'base' => NULL, 'refs' => [], ], 'DetectPiiEntitiesRequest' => [ 'base' => NULL, 'refs' => [], ], 'DetectPiiEntitiesResponse' => [ 'base' => NULL, 'refs' => [], ], 'DetectSentimentRequest' => [ 'base' => NULL, 'refs' => [], ], 'DetectSentimentResponse' => [ 'base' => NULL, 'refs' => [], ], 'DetectSyntaxRequest' => [ 'base' => NULL, 'refs' => [], ], 'DetectSyntaxResponse' => [ 'base' => NULL, 'refs' => [], ], 'DetectTargetedSentimentRequest' => [ 'base' => NULL, 'refs' => [], ], 'DetectTargetedSentimentResponse' => [ 'base' => NULL, 'refs' => [], ], 'DocumentClass' => [ 'base' => 'Specifies the class that categorizes the document being analyzed
', 'refs' => [ 'ListOfClasses$member' => NULL, ], ], 'DocumentClassificationConfig' => [ 'base' => 'Configuration required for a custom classification model.
', 'refs' => [ 'TaskConfig$DocumentClassificationConfig' => 'Configuration required for a classification model.
', ], ], 'DocumentClassificationJobFilter' => [ 'base' => 'Provides information for filtering a list of document classification jobs. For more information, see the operation. You can provide only one filter parameter in each request.
', 'refs' => [ 'ListDocumentClassificationJobsRequest$Filter' => 'Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
', ], ], 'DocumentClassificationJobProperties' => [ 'base' => 'Provides information about a document classification job.
', 'refs' => [ 'DescribeDocumentClassificationJobResponse$DocumentClassificationJobProperties' => 'An object that describes the properties associated with the document classification job.
', 'DocumentClassificationJobPropertiesList$member' => NULL, ], ], 'DocumentClassificationJobPropertiesList' => [ 'base' => NULL, 'refs' => [ 'ListDocumentClassificationJobsResponse$DocumentClassificationJobPropertiesList' => 'A list containing the properties of each job returned.
', ], ], 'DocumentClassifierArn' => [ 'base' => NULL, 'refs' => [ 'CreateDocumentClassifierResponse$DocumentClassifierArn' => 'The Amazon Resource Name (ARN) that identifies the document classifier.
', 'DeleteDocumentClassifierRequest$DocumentClassifierArn' => 'The Amazon Resource Name (ARN) that identifies the document classifier.
', 'DescribeDocumentClassifierRequest$DocumentClassifierArn' => 'The Amazon Resource Name (ARN) that identifies the document classifier. The CreateDocumentClassifier
operation returns this identifier in its response.
The Amazon Resource Name (ARN) that identifies the document classifier.
', 'DocumentClassifierProperties$DocumentClassifierArn' => 'The Amazon Resource Name (ARN) that identifies the document classifier.
', 'DocumentClassifierProperties$SourceModelArn' => 'The Amazon Resource Name (ARN) of the source model. This model was imported from a different Amazon Web Services account to create the document classifier model in your Amazon Web Services account.
', 'StartDocumentClassificationJobRequest$DocumentClassifierArn' => 'The Amazon Resource Name (ARN) of the document classifier to use to process the job.
', 'StartDocumentClassificationJobResponse$DocumentClassifierArn' => 'The ARN of the custom classification model.
', 'StopTrainingDocumentClassifierRequest$DocumentClassifierArn' => 'The Amazon Resource Name (ARN) that identifies the document classifier currently being trained.
', ], ], 'DocumentClassifierAugmentedManifestsList' => [ 'base' => NULL, 'refs' => [ 'DocumentClassifierInputDataConfig$AugmentedManifests' => 'A list of augmented manifest files that provide training data for your custom model. An augmented manifest file is a labeled dataset that is produced by Amazon SageMaker Ground Truth.
This parameter is required if you set DataFormat
to AUGMENTED_MANIFEST
.
The format of your training data:
COMPREHEND_CSV
: A two-column CSV file, where labels are provided in the first column, and documents are provided in the second. If you use this value, you must provide the S3Uri
parameter in your request.
AUGMENTED_MANIFEST
: A labeled dataset that is produced by Amazon SageMaker Ground Truth. This file is in JSON lines format. Each line is a complete JSON object that contains a training document and its associated labels.
If you use this value, you must provide the AugmentedManifests
parameter in your request.
If you don\'t specify a value, Amazon Comprehend uses COMPREHEND_CSV
as the default.
The type of input documents for training the model. Provide plain-text documents to create a plain-text model, and provide semi-structured documents to create a native model.
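/* Illustrative sketch (not part of the service definition): training a custom
   classifier from a two-column CSV with the AWS SDK for PHP. Because DataFormat
   defaults to COMPREHEND_CSV, only S3Uri is required in InputDataConfig. The
   region, classifier name, bucket, and role ARN are placeholder assumptions.

   use Aws\Comprehend\ComprehendClient;

   $client = new ComprehendClient(['region' => 'us-east-1', 'version' => 'latest']);

   $result = $client->createDocumentClassifier([
       'DocumentClassifierName' => 'example-classifier',                        // placeholder
       'DataAccessRoleArn'      => 'arn:aws:iam::123456789012:role/example-role',
       'LanguageCode'           => 'en',
       'InputDataConfig'        => [
           'S3Uri' => 's3://example-bucket/training/labels.csv',               // placeholder
       ],
   ]);
   echo $result['DocumentClassifierArn'] . PHP_EOL;

   Later sketches in this file reuse this $client instance.
*/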
', ], ], 'DocumentClassifierDocuments' => [ 'base' => 'The location of the training documents. This parameter is required in a request to create a native classifier model.
', 'refs' => [ 'DocumentClassifierInputDataConfig$Documents' => 'The S3 location of the training documents. This parameter is required in a request to create a native classifier model.
', ], ], 'DocumentClassifierEndpointArn' => [ 'base' => NULL, 'refs' => [ 'ClassifyDocumentRequest$EndpointArn' => 'The Amazon Resource Number (ARN) of the endpoint. For information about endpoints, see Managing endpoints.
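/* Illustrative sketch: real-time classification against a previously created
   endpoint, as described above; reuses the $client from the first sketch. The
   endpoint ARN is a placeholder assumption.

   $result = $client->classifyDocument([
       'Text'        => 'Please cancel my subscription effective today.',
       'EndpointArn' => 'arn:aws:comprehend:us-east-1:123456789012:document-classifier-endpoint/example',
   ]);
   foreach ($result['Classes'] as $class) {
       echo $class['Name'] . ': ' . $class['Score'] . PHP_EOL;
   }
*/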
', ], ], 'DocumentClassifierFilter' => [ 'base' => 'Provides information for filtering a list of document classifiers. You can only specify one filtering parameter in a request. For more information, see the ListDocumentClassifiers
operation.
Filters the document classifiers that are returned. You can filter them on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
', ], ], 'DocumentClassifierInputDataConfig' => [ 'base' => 'The input properties for training a document classifier.
For more information on how the input file is formatted, see Preparing training data in the Comprehend Developer Guide.
', 'refs' => [ 'CreateDocumentClassifierRequest$InputDataConfig' => 'Specifies the format and location of the input data for the job.
', 'DocumentClassifierProperties$InputDataConfig' => 'The input data configuration that you supplied when you created the document classifier for training.
', ], ], 'DocumentClassifierMode' => [ 'base' => NULL, 'refs' => [ 'CreateDocumentClassifierRequest$Mode' => 'Indicates the mode in which the classifier will be trained. The classifier can be trained in multi-class mode, which identifies one and only one class for each document, or multi-label mode, which identifies one or more labels for each document. In multi-label mode, multiple labels for an individual document are separated by a delimiter. The default delimiter between labels is a pipe (|).
', 'DocumentClassificationConfig$Mode' => 'Classification mode indicates whether the documents are MULTI_CLASS
or MULTI_LABEL
.
Indicates the mode in which the specific classifier was trained. This also indicates the format of input documents and the format of the confusion matrix. Each classifier can only be trained in one mode and this cannot be changed once the classifier is trained.
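/* Illustrative sketch: requesting multi-label training, where each document can
   carry several labels separated by a delimiter. LabelDelimiter applies only in
   MULTI_LABEL mode; all values shown are placeholder assumptions.

   $client->createDocumentClassifier([
       'DocumentClassifierName' => 'example-multilabel-classifier',
       'DataAccessRoleArn'      => 'arn:aws:iam::123456789012:role/example-role',
       'LanguageCode'           => 'en',
       'Mode'                   => 'MULTI_LABEL',
       'InputDataConfig'        => [
           'S3Uri'          => 's3://example-bucket/training/labels.csv',
           'LabelDelimiter' => '|',   // the default delimiter between labels
       ],
   ]);
*/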
', ], ], 'DocumentClassifierOutputDataConfig' => [ 'base' => 'Provide the location for output data from a custom classifier job. This field is mandatory if you are training a native classifier model.
', 'refs' => [ 'CreateDocumentClassifierRequest$OutputDataConfig' => 'Specifies the location for the output files from a custom classifier job. This parameter is required for a request that creates a native classifier model.
', 'DocumentClassifierProperties$OutputDataConfig' => 'Provides output results configuration parameters for custom classifier jobs.
', ], ], 'DocumentClassifierProperties' => [ 'base' => 'Provides information about a document classifier.
', 'refs' => [ 'DescribeDocumentClassifierResponse$DocumentClassifierProperties' => 'An object that contains the properties associated with a document classifier.
', 'DocumentClassifierPropertiesList$member' => NULL, ], ], 'DocumentClassifierPropertiesList' => [ 'base' => NULL, 'refs' => [ 'ListDocumentClassifiersResponse$DocumentClassifierPropertiesList' => 'A list containing the properties of each job returned.
', ], ], 'DocumentClassifierSummariesList' => [ 'base' => NULL, 'refs' => [ 'ListDocumentClassifierSummariesResponse$DocumentClassifierSummariesList' => 'The list of summaries of document classifiers.
', ], ], 'DocumentClassifierSummary' => [ 'base' => 'Describes information about a document classifier and its versions.
', 'refs' => [ 'DocumentClassifierSummariesList$member' => NULL, ], ], 'DocumentLabel' => [ 'base' => 'Specifies one of the labels that categorize the document being analyzed.
', 'refs' => [ 'ListOfLabels$member' => NULL, ], ], 'DocumentMetadata' => [ 'base' => 'Information about the document, discovered during text extraction.
', 'refs' => [ 'ClassifyDocumentResponse$DocumentMetadata' => 'Extraction information about the document. This field is present in the response only if your request includes the Bytes
parameter.
Information about the document, discovered during text extraction. This field is present in the response only if your request used the Bytes
parameter.
This field defines the Amazon Textract API operation that Amazon Comprehend uses to extract text from PDF files and image files. Enter one of the following values:
TEXTRACT_DETECT_DOCUMENT_TEXT
- The Amazon Comprehend service uses the DetectDocumentText
API operation.
TEXTRACT_ANALYZE_DOCUMENT
- The Amazon Comprehend service uses the AnalyzeDocument
API operation.
Specifies the type of Amazon Textract features to apply. If you chose TEXTRACT_ANALYZE_DOCUMENT
as the read action, you must specify one or both of the following values:
TABLES
- Returns additional information about any tables that are detected in the input document.
FORMS
- Returns additional information about any forms that are detected in the input document.
Determines the text extraction actions for PDF files. Enter one of the following values:
SERVICE_DEFAULT
- Use the Amazon Comprehend service defaults for PDF files.
FORCE_DOCUMENT_READ_ACTION
- Amazon Comprehend uses the Textract API specified by DocumentReadAction for all PDF files, including digital PDF files.
Provides configuration parameters to override the default actions for extracting text from PDF documents and image files.
By default, Amazon Comprehend performs the following actions to extract text from files, based on the input file type:
Word files - Amazon Comprehend parser extracts the text.
Digital PDF files - Amazon Comprehend parser extracts the text.
Image files and scanned PDF files - Amazon Comprehend uses the Amazon Textract DetectDocumentText
API to extract the text.
DocumentReaderConfig
does not apply to plain text files or Word files.
For image files and PDF documents, you can override these default actions using the fields listed below. For more information, see Setting text extraction options in the Comprehend Developer Guide.
', 'refs' => [ 'ClassifyDocumentRequest$DocumentReaderConfig' => 'Provides configuration parameters to override the default actions for extracting text from PDF documents and image files.
', 'DetectEntitiesRequest$DocumentReaderConfig' => 'Provides configuration parameters to override the default actions for extracting text from PDF documents and image files.
', 'DocumentClassifierInputDataConfig$DocumentReaderConfig' => NULL, 'InputDataConfig$DocumentReaderConfig' => 'Provides configuration parameters to override the default actions for extracting text from PDF documents and image files.
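/* Illustrative sketch: overriding the default PDF text extraction when calling
   detectEntities on a scanned file; reuses the $client from the first sketch.
   The file name and the custom-model endpoint ARN are placeholder assumptions
   (semi-structured input requires a custom entity model).

   $result = $client->detectEntities([
       'Bytes'                => file_get_contents('statement.pdf'),   // placeholder file
       'EndpointArn'          => 'arn:aws:comprehend:us-east-1:123456789012:entity-recognizer-endpoint/example',
       'DocumentReaderConfig' => [
           'DocumentReadAction' => 'TEXTRACT_ANALYZE_DOCUMENT',
           'DocumentReadMode'   => 'FORCE_DOCUMENT_READ_ACTION',
           'FeatureTypes'       => ['TABLES', 'FORMS'],
       ],
   ]);
*/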
', ], ], 'DocumentType' => [ 'base' => NULL, 'refs' => [ 'DocumentTypeListItem$Type' => 'Document type.
', ], ], 'DocumentTypeListItem' => [ 'base' => 'Document type for each page in the document.
', 'refs' => [ 'ListOfDocumentType$member' => NULL, ], ], 'DominantLanguage' => [ 'base' => 'Returns the code for the dominant language in the input text and the level of confidence that Amazon Comprehend has in the accuracy of the detection.
', 'refs' => [ 'ListOfDominantLanguages$member' => NULL, ], ], 'DominantLanguageDetectionJobFilter' => [ 'base' => 'Provides information for filtering a list of dominant language detection jobs. For more information, see the operation.
', 'refs' => [ 'ListDominantLanguageDetectionJobsRequest$Filter' => 'Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
', ], ], 'DominantLanguageDetectionJobProperties' => [ 'base' => 'Provides information about a dominant language detection job.
', 'refs' => [ 'DescribeDominantLanguageDetectionJobResponse$DominantLanguageDetectionJobProperties' => 'An object that contains the properties associated with a dominant language detection job.
', 'DominantLanguageDetectionJobPropertiesList$member' => NULL, ], ], 'DominantLanguageDetectionJobPropertiesList' => [ 'base' => NULL, 'refs' => [ 'ListDominantLanguageDetectionJobsResponse$DominantLanguageDetectionJobPropertiesList' => 'A list containing the properties of each job that is returned.
', ], ], 'Double' => [ 'base' => NULL, 'refs' => [ 'ClassifierEvaluationMetrics$Accuracy' => 'The fraction of the labels that were correctly recognized. It is computed by dividing the number of labels in the test documents that were correctly recognized by the total number of labels in the test documents.
', 'ClassifierEvaluationMetrics$Precision' => 'A measure of the usefulness of the classifier results in the test data. High precision means that the classifier returned substantially more relevant results than irrelevant ones.
', 'ClassifierEvaluationMetrics$Recall' => 'A measure of how complete the classifier results are for the test data. High recall means that the classifier returned most of the relevant results.
', 'ClassifierEvaluationMetrics$F1Score' => 'A measure of how accurate the classifier results are for the test data. It is derived from the Precision
and Recall
values. The F1Score
is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.
A measure of the usefulness of the recognizer results in the test data. High precision means that the recognizer returned substantially more relevant results than irrelevant ones. Unlike the Precision metric which comes from averaging the precision of all available labels, this is based on the overall score of all precision scores added together.
', 'ClassifierEvaluationMetrics$MicroRecall' => 'A measure of how complete the classifier results are for the test data. High recall means that the classifier returned most of the relevant results. Specifically, this indicates how many of the correct categories in the text the model can predict. It is a percentage of the correct categories in the text that can be found. Instead of averaging the recall scores of all labels (as with Recall), micro Recall is based on the overall score of all recall scores added together.
', 'ClassifierEvaluationMetrics$MicroF1Score' => 'A measure of how accurate the classifier results are for the test data. It is a combination of the Micro Precision
and Micro Recall
values. The Micro F1Score
is the harmonic mean of the two scores. The highest score is 1, and the worst score is 0.
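/* The F1 scores above are harmonic means of precision and recall. A minimal
   PHP illustration of the formula (not an API call):

   function f1Score(float $precision, float $recall): float {
       // Harmonic mean: 2PR / (P + R); defined as 0.0 when both inputs are 0.
       return ($precision + $recall) > 0.0
           ? 2.0 * $precision * $recall / ($precision + $recall)
           : 0.0;
   }
   echo f1Score(0.8, 0.6);  // 0.6857..., pulled toward the lower of the two scores
*/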
Indicates the fraction of labels that are incorrectly predicted. Also seen as the fraction of wrong labels compared to the total number of labels. Scores closer to zero are better.
', 'EntityRecognizerEvaluationMetrics$Precision' => 'A measure of the usefulness of the recognizer results in the test data. High precision means that the recognizer returned substantially more relevant results than irrelevant ones.
', 'EntityRecognizerEvaluationMetrics$Recall' => 'A measure of how complete the recognizer results are for the test data. High recall means that the recognizer returned most of the relevant results.
', 'EntityRecognizerEvaluationMetrics$F1Score' => 'A measure of how accurate the recognizer results are for the test data. It is derived from the Precision
and Recall
values. The F1Score
is the harmonic average of the two scores. For plain text entity recognizer models, the range is 0 to 100, where 100 is the best score. For PDF/Word entity recognizer models, the range is 0 to 1, where 1 is the best score.
A measure of the usefulness of the recognizer results for a specific entity type in the test data. High precision means that the recognizer returned substantially more relevant results than irrelevant ones.
', 'EntityTypesEvaluationMetrics$Recall' => 'A measure of how complete the recognizer results are for a specific entity type in the test data. High recall means that the recognizer returned most of the relevant results.
', 'EntityTypesEvaluationMetrics$F1Score' => 'A measure of how accurate the recognizer results are for a specific entity type in the test data. It is derived from the Precision
and Recall
values. The F1Score
is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.
The average F1 score from the evaluation metrics.
', 'FlywheelModelEvaluationMetrics$AveragePrecision' => 'Average precision metric for the model.
', 'FlywheelModelEvaluationMetrics$AverageRecall' => 'Average recall metric for the model.
', 'FlywheelModelEvaluationMetrics$AverageAccuracy' => 'Average accuracy metric for the model.
', ], ], 'EndpointFilter' => [ 'base' => 'The filter used to determine which endpoints are returned. You can filter jobs on their name, model, status, or the date and time that they were created. You can only set one filter at a time.
', 'refs' => [ 'ListEndpointsRequest$Filter' => 'Filters the endpoints that are returned. You can filter endpoints on their name, model, status, or the date and time that they were created. You can only set one filter at a time.
', ], ], 'EndpointProperties' => [ 'base' => 'Specifies information about the specified endpoint. For information about endpoints, see Managing endpoints.
', 'refs' => [ 'DescribeEndpointResponse$EndpointProperties' => 'Describes information associated with the specific endpoint.
', 'EndpointPropertiesList$member' => NULL, ], ], 'EndpointPropertiesList' => [ 'base' => NULL, 'refs' => [ 'ListEndpointsResponse$EndpointPropertiesList' => 'Displays a list of endpoint properties being retrieved by the service in response to the request.
', ], ], 'EndpointStatus' => [ 'base' => NULL, 'refs' => [ 'EndpointFilter$Status' => 'Specifies the status of the endpoint being returned. Possible values are: Creating, Ready, Updating, Deleting, Failed.
', 'EndpointProperties$Status' => 'Specifies the status of the endpoint. Because endpoint updates and creation are asynchronous, customers must wait for the endpoint to reach the Ready
status before making inference requests.
Provides information for filtering a list of entities detection jobs. For more information, see the operation.
', 'refs' => [ 'ListEntitiesDetectionJobsRequest$Filter' => 'Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
', ], ], 'EntitiesDetectionJobProperties' => [ 'base' => 'Provides information about an entities detection job.
', 'refs' => [ 'DescribeEntitiesDetectionJobResponse$EntitiesDetectionJobProperties' => 'An object that contains the properties associated with an entities detection job.
', 'EntitiesDetectionJobPropertiesList$member' => NULL, ], ], 'EntitiesDetectionJobPropertiesList' => [ 'base' => NULL, 'refs' => [ 'ListEntitiesDetectionJobsResponse$EntitiesDetectionJobPropertiesList' => 'A list containing the properties of each job that is returned.
', ], ], 'Entity' => [ 'base' => 'Provides information about an entity.
', 'refs' => [ 'ListOfEntities$member' => NULL, ], ], 'EntityLabel' => [ 'base' => 'Specifies one of the labels that categorize the personally identifiable information (PII) entity being analyzed.
', 'refs' => [ 'ListOfEntityLabels$member' => NULL, ], ], 'EntityRecognitionConfig' => [ 'base' => 'Configuration required for an entity recognition model.
', 'refs' => [ 'TaskConfig$EntityRecognitionConfig' => 'Configuration required for an entity recognition model.
', ], ], 'EntityRecognizerAnnotations' => [ 'base' => 'Describes the annotations associated with an entity recognizer.
', 'refs' => [ 'EntityRecognizerInputDataConfig$Annotations' => 'The S3 location of the CSV file that annotates your training documents.
', ], ], 'EntityRecognizerArn' => [ 'base' => NULL, 'refs' => [ 'CreateEntityRecognizerResponse$EntityRecognizerArn' => 'The Amazon Resource Name (ARN) that identifies the entity recognizer.
', 'DeleteEntityRecognizerRequest$EntityRecognizerArn' => 'The Amazon Resource Name (ARN) that identifies the entity recognizer.
', 'DescribeEntityRecognizerRequest$EntityRecognizerArn' => 'The Amazon Resource Name (ARN) that identifies the entity recognizer.
', 'EntitiesDetectionJobProperties$EntityRecognizerArn' => 'The Amazon Resource Name (ARN) that identifies the entity recognizer.
', 'EntityRecognizerProperties$EntityRecognizerArn' => 'The Amazon Resource Name (ARN) that identifies the entity recognizer.
', 'EntityRecognizerProperties$SourceModelArn' => 'The Amazon Resource Name (ARN) of the source model. This model was imported from a different Amazon Web Services account to create the entity recognizer model in your Amazon Web Services account.
', 'StartEntitiesDetectionJobRequest$EntityRecognizerArn' => 'The Amazon Resource Name (ARN) that identifies the specific entity recognizer to be used by the StartEntitiesDetectionJob
. This ARN is optional and is only used for a custom entity recognition job.
The ARN of the custom entity recognition model.
', 'StopTrainingEntityRecognizerRequest$EntityRecognizerArn' => 'The Amazon Resource Name (ARN) that identifies the entity recognizer currently being trained.
', ], ], 'EntityRecognizerAugmentedManifestsList' => [ 'base' => NULL, 'refs' => [ 'EntityRecognizerInputDataConfig$AugmentedManifests' => 'A list of augmented manifest files that provide training data for your custom model. An augmented manifest file is a labeled dataset that is produced by Amazon SageMaker Ground Truth.
This parameter is required if you set DataFormat
to AUGMENTED_MANIFEST
.
The format of your training data:
COMPREHEND_CSV
: A CSV file that supplements your training documents. The CSV file contains information about the custom entities that your trained model will detect. The required format of the file depends on whether you are providing annotations or an entity list.
If you use this value, you must provide your CSV file by using either the Annotations
or EntityList
parameters. You must provide your training documents by using the Documents
parameter.
AUGMENTED_MANIFEST
: A labeled dataset that is produced by Amazon SageMaker Ground Truth. This file is in JSON lines format. Each line is a complete JSON object that contains a training document and its labels. Each label annotates a named entity in the training document.
If you use this value, you must provide the AugmentedManifests
parameter in your request.
If you don\'t specify a value, Amazon Comprehend uses COMPREHEND_CSV
as the default.
Describes the training documents submitted with an entity recognizer.
', 'refs' => [ 'EntityRecognizerInputDataConfig$Documents' => 'The S3 location of the folder that contains the training documents for your custom entity recognizer.
This parameter is required if you set DataFormat
to COMPREHEND_CSV
.
The Amazon Resource Name of an endpoint that is associated with a custom entity recognition model. Provide an endpoint if you want to detect entities by using your own custom model instead of the default model that is used by Amazon Comprehend.
If you specify an endpoint, Amazon Comprehend uses the language of your custom model, and it ignores any language code that you provide in your request.
For information about endpoints, see Managing endpoints.
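/* Illustrative sketch: creating an entity recognizer from CSV annotations plus
   training documents in the same Region; reuses the $client from the first
   sketch. Names, bucket paths, and the entity type are placeholder assumptions.

   $client->createEntityRecognizer([
       'RecognizerName'    => 'example-recognizer',
       'DataAccessRoleArn' => 'arn:aws:iam::123456789012:role/example-role',
       'LanguageCode'      => 'en',
       'InputDataConfig'   => [
           'EntityTypes' => [['Type' => 'DEVICE']],
           'Documents'   => ['S3Uri' => 's3://example-bucket/train/docs/'],
           'Annotations' => ['S3Uri' => 's3://example-bucket/train/annotations.csv'],
       ],
   ]);
*/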
', ], ], 'EntityRecognizerEntityList' => [ 'base' => 'Describes the entity list submitted with an entity recognizer.
', 'refs' => [ 'EntityRecognizerInputDataConfig$EntityList' => 'The S3 location of the CSV file that has the entity list for your custom entity recognizer.
', ], ], 'EntityRecognizerEvaluationMetrics' => [ 'base' => 'Detailed information about the accuracy of an entity recognizer.
', 'refs' => [ 'EntityRecognizerMetadata$EvaluationMetrics' => 'Detailed information about the accuracy of an entity recognizer.
', ], ], 'EntityRecognizerFilter' => [ 'base' => 'Provides information for filtering a list of entity recognizers. You can only specify one filtering parameter in a request. For more information, see the ListEntityRecognizers
operation.
Filters the list of entity recognizers returned. You can filter on Status
, SubmitTimeBefore
, or SubmitTimeAfter
. You can only set one filter at a time.
Specifies the format and location of the input data.
', 'refs' => [ 'CreateEntityRecognizerRequest$InputDataConfig' => 'Specifies the format and location of the input data. The S3 bucket containing the input data must be located in the same Region as the entity recognizer being created.
', 'EntityRecognizerProperties$InputDataConfig' => 'The input data properties of an entity recognizer.
', ], ], 'EntityRecognizerMetadata' => [ 'base' => 'Detailed information about an entity recognizer.
', 'refs' => [ 'EntityRecognizerProperties$RecognizerMetadata' => 'Provides information about an entity recognizer.
', ], ], 'EntityRecognizerMetadataEntityTypesList' => [ 'base' => NULL, 'refs' => [ 'EntityRecognizerMetadata$EntityTypes' => 'Entity types from the metadata of an entity recognizer.
', ], ], 'EntityRecognizerMetadataEntityTypesListItem' => [ 'base' => 'Individual item from the list of entity types in the metadata of an entity recognizer.
', 'refs' => [ 'EntityRecognizerMetadataEntityTypesList$member' => NULL, ], ], 'EntityRecognizerOutputDataConfig' => [ 'base' => 'Output data configuration.
', 'refs' => [ 'EntityRecognizerProperties$OutputDataConfig' => 'Output data configuration.
', ], ], 'EntityRecognizerProperties' => [ 'base' => 'Describes information about an entity recognizer.
', 'refs' => [ 'DescribeEntityRecognizerResponse$EntityRecognizerProperties' => 'Describes information associated with an entity recognizer.
', 'EntityRecognizerPropertiesList$member' => NULL, ], ], 'EntityRecognizerPropertiesList' => [ 'base' => NULL, 'refs' => [ 'ListEntityRecognizersResponse$EntityRecognizerPropertiesList' => 'The list of properties of an entity recognizer.
', ], ], 'EntityRecognizerSummariesList' => [ 'base' => NULL, 'refs' => [ 'ListEntityRecognizerSummariesResponse$EntityRecognizerSummariesList' => 'The list of entity recognizer summaries.
', ], ], 'EntityRecognizerSummary' => [ 'base' => 'Describes the information about an entity recognizer and its versions.
', 'refs' => [ 'EntityRecognizerSummariesList$member' => NULL, ], ], 'EntityType' => [ 'base' => NULL, 'refs' => [ 'Entity$Type' => 'The entity type. For entity detection using the built-in model, this field contains one of the standard entity types listed below.
For custom entity detection, this field contains one of the entity types that you specified when you trained your custom model.
', ], ], 'EntityTypeName' => [ 'base' => NULL, 'refs' => [ 'EntityTypesListItem$Type' => 'An entity type within a labeled training dataset that Amazon Comprehend uses to train a custom entity recognizer.
Entity types must not contain the following invalid characters: \\n (line break), \\\\n (escaped line break), \\r (carriage return), \\\\r (escaped carriage return), \\t (tab), \\\\t (escaped tab), space, and , (comma).
', ], ], 'EntityTypesEvaluationMetrics' => [ 'base' => 'Detailed information about the accuracy of an entity recognizer for a specific entity type.
', 'refs' => [ 'EntityRecognizerMetadataEntityTypesListItem$EvaluationMetrics' => 'Detailed information about the accuracy of the entity recognizer for a specific item on the list of entity types.
', ], ], 'EntityTypesList' => [ 'base' => NULL, 'refs' => [ 'EntityRecognitionConfig$EntityTypes' => 'Up to 25 entity types that the model is trained to recognize.
', 'EntityRecognizerInputDataConfig$EntityTypes' => 'The entity types in the labeled training data that Amazon Comprehend uses to train the custom entity recognizer. Any entity types that you don\'t specify are ignored.
A maximum of 25 entity types can be used at one time to train an entity recognizer. Entity types must not contain the following invalid characters: \\n (line break), \\\\n (escaped line break), \\r (carriage return), \\\\r (escaped carriage return), \\t (tab), \\\\t (escaped tab), space, and , (comma).
', ], ], 'EntityTypesListItem' => [ 'base' => 'An entity type within a labeled training dataset that Amazon Comprehend uses to train a custom entity recognizer.
', 'refs' => [ 'EntityTypesList$member' => NULL, ], ], 'ErrorsListItem' => [ 'base' => 'Text extraction encountered one or more page-level errors in the input document.
The ErrorCode
contains one of the following values:
TEXTRACT_BAD_PAGE - Amazon Textract cannot read the page. For more information about page limits in Amazon Textract, see Page Quotas in Amazon Textract.
TEXTRACT_PROVISIONED_THROUGHPUT_EXCEEDED - The number of requests exceeded your throughput limit. For more information about throughput quotas in Amazon Textract, see Default quotas in Amazon Textract.
PAGE_CHARACTERS_EXCEEDED - Too many text characters on the page (10,000 characters maximum).
PAGE_SIZE_EXCEEDED - The maximum page size is 10 MB.
INTERNAL_SERVER_ERROR - The request encountered a service issue. Try the API request again.
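/* Illustrative sketch: checking a classifyDocument response on a semi-structured
   input for the page-level error codes listed above. Assumes $result came from
   a classifyDocument call such as the earlier sketch.

   foreach ($result['Errors'] ?? [] as $error) {
       printf("page %d: %s - %s\n",
           $error['Page'], $error['ErrorCode'], $error['ErrorMessage']);
   }
*/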
Provides information for filtering a list of event detection jobs.
', 'refs' => [ 'ListEventsDetectionJobsRequest$Filter' => 'Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
', ], ], 'EventsDetectionJobProperties' => [ 'base' => 'Provides information about an events detection job.
', 'refs' => [ 'DescribeEventsDetectionJobResponse$EventsDetectionJobProperties' => 'An object that contains the properties associated with an event detection job.
', 'EventsDetectionJobPropertiesList$member' => NULL, ], ], 'EventsDetectionJobPropertiesList' => [ 'base' => NULL, 'refs' => [ 'ListEventsDetectionJobsResponse$EventsDetectionJobPropertiesList' => 'A list containing the properties of each job that is returned.
', ], ], 'ExtractedCharactersListItem' => [ 'base' => 'Array of the number of characters extracted from each page.
', 'refs' => [ 'ListOfExtractedCharacters$member' => NULL, ], ], 'Float' => [ 'base' => NULL, 'refs' => [ 'BoundingBox$Height' => 'The height of the bounding box as a ratio of the overall document page height.
', 'BoundingBox$Left' => 'The left coordinate of the bounding box as a ratio of overall document page width.
', 'BoundingBox$Top' => 'The top coordinate of the bounding box as a ratio of overall document page height.
', 'BoundingBox$Width' => 'The width of the bounding box as a ratio of the overall document page width.
', 'DocumentClass$Score' => 'The confidence score that Amazon Comprehend has in the accuracy of this class attribution.
', 'DocumentLabel$Score' => 'The confidence score that Amazon Comprehend has in the accuracy of this label attribution.
', 'DominantLanguage$Score' => 'The level of confidence that Amazon Comprehend has in the accuracy of the detection.
', 'Entity$Score' => 'The level of confidence that Amazon Comprehend has in the accuracy of the detection.
', 'EntityLabel$Score' => 'The level of confidence that Amazon Comprehend has in the accuracy of the detection.
', 'KeyPhrase$Score' => 'The level of confidence that Amazon Comprehend has in the accuracy of the detection.
', 'PartOfSpeechTag$Score' => 'The confidence that Amazon Comprehend has that the part of speech was correctly identified.
', 'PiiEntity$Score' => 'The level of confidence that Amazon Comprehend has in the accuracy of the detection.
', 'Point$X' => 'The value of the X coordinate for a point on a polygon.
', 'Point$Y' => 'The value of the Y coordinate for a point on a polygon.
', 'SentimentScore$Positive' => 'The level of confidence that Amazon Comprehend has in the accuracy of its detection of the POSITIVE
sentiment.
The level of confidence that Amazon Comprehend has in the accuracy of its detection of the NEGATIVE
sentiment.
The level of confidence that Amazon Comprehend has in the accuracy of its detection of the NEUTRAL
sentiment.
The level of confidence that Amazon Comprehend has in the accuracy of its detection of the MIXED
sentiment.
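/* Illustrative sketch: reading the four confidence scores returned alongside
   the overall sentiment; reuses the $client from the first sketch.

   $result = $client->detectSentiment([
       'Text'         => 'The checkout flow was quick and painless.',
       'LanguageCode' => 'en',
   ]);
   echo $result['Sentiment'] . PHP_EOL;                   // e.g. POSITIVE
   echo $result['SentimentScore']['Positive'] . PHP_EOL;  // confidence for POSITIVE
*/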
Model confidence that the entity is relevant. Value range is zero to one, where one is highest confidence.
', 'TargetedSentimentMention$GroupScore' => 'The confidence that all the entities mentioned in the group relate to the same entity.
', ], ], 'FlywheelFilter' => [ 'base' => 'Filter the flywheels based on creation time or flywheel status.
', 'refs' => [ 'ListFlywheelsRequest$Filter' => 'Filters the flywheels that are returned. You can filter flywheels on their status, or the date and time that they were submitted. You can only set one filter at a time.
', ], ], 'FlywheelIterationFilter' => [ 'base' => 'Filter the flywheel iterations based on creation time.
', 'refs' => [ 'ListFlywheelIterationHistoryRequest$Filter' => 'Filter the flywheel iteration history based on creation time.
', ], ], 'FlywheelIterationId' => [ 'base' => NULL, 'refs' => [ 'DescribeFlywheelIterationRequest$FlywheelIterationId' => '', 'FlywheelIterationProperties$FlywheelIterationId' => '', 'FlywheelProperties$LatestFlywheelIteration' => 'The most recent flywheel iteration.
', 'FlywheelSummary$LatestFlywheelIteration' => 'The most recent flywheel iteration.
', 'StartFlywheelIterationResponse$FlywheelIterationId' => '', ], ], 'FlywheelIterationProperties' => [ 'base' => 'The configuration properties of a flywheel iteration.
', 'refs' => [ 'DescribeFlywheelIterationResponse$FlywheelIterationProperties' => 'The configuration properties of a flywheel iteration.
', 'FlywheelIterationPropertiesList$member' => NULL, ], ], 'FlywheelIterationPropertiesList' => [ 'base' => NULL, 'refs' => [ 'ListFlywheelIterationHistoryResponse$FlywheelIterationPropertiesList' => 'List of flywheel iteration properties
', ], ], 'FlywheelIterationStatus' => [ 'base' => NULL, 'refs' => [ 'FlywheelIterationProperties$Status' => 'The status of the flywheel iteration.
', ], ], 'FlywheelModelEvaluationMetrics' => [ 'base' => 'The evaluation metrics associated with the evaluated model.
', 'refs' => [ 'FlywheelIterationProperties$EvaluatedModelMetrics' => NULL, 'FlywheelIterationProperties$TrainedModelMetrics' => 'The metrics associated with the trained model.
', ], ], 'FlywheelProperties' => [ 'base' => 'The flywheel properties.
', 'refs' => [ 'DescribeFlywheelResponse$FlywheelProperties' => 'The flywheel properties.
', 'UpdateFlywheelResponse$FlywheelProperties' => 'The flywheel properties.
', ], ], 'FlywheelS3Uri' => [ 'base' => NULL, 'refs' => [ 'CreateFlywheelRequest$DataLakeS3Uri' => 'Enter the S3 location for the data lake. You can specify a new S3 bucket or a new folder of an existing S3 bucket. The flywheel creates the data lake at this location.
', ], ], 'FlywheelStatus' => [ 'base' => NULL, 'refs' => [ 'FlywheelFilter$Status' => 'Filter the flywheels based on the flywheel status.
', 'FlywheelProperties$Status' => 'The status of the flywheel.
', 'FlywheelSummary$Status' => 'The status of the flywheel.
', ], ], 'FlywheelSummary' => [ 'base' => 'Flywheel summary information.
', 'refs' => [ 'FlywheelSummaryList$member' => NULL, ], ], 'FlywheelSummaryList' => [ 'base' => NULL, 'refs' => [ 'ListFlywheelsResponse$FlywheelSummaryList' => 'A list of flywheel properties retrieved by the service in response to the request.
', ], ], 'Geometry' => [ 'base' => 'Information about the location of items on a document page.
For additional information, see Geometry in the Amazon Textract API reference.
', 'refs' => [ 'Block$Geometry' => 'Coordinates of the rectangle or polygon that contains the text.
', ], ], 'IamRoleArn' => [ 'base' => NULL, 'refs' => [ 'CreateDocumentClassifierRequest$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data.
', 'CreateEndpointRequest$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to trained custom models encrypted with a customer managed key (ModelKmsKeyId).
', 'CreateEntityRecognizerRequest$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data.
', 'CreateFlywheelRequest$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend the permissions required to access the flywheel data in the data lake.
', 'DocumentClassificationJobProperties$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data.
', 'DocumentClassifierProperties$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data.
', 'DominantLanguageDetectionJobProperties$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data.
', 'EndpointProperties$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to trained custom models encrypted with a customer managed key (ModelKmsKeyId).
', 'EndpointProperties$DesiredDataAccessRoleArn' => 'Data access role ARN to use in case the new model is encrypted with a customer KMS key.
', 'EntitiesDetectionJobProperties$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data.
', 'EntityRecognizerProperties$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data.
', 'EventsDetectionJobProperties$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data.
', 'FlywheelProperties$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend permission to access the flywheel data.
', 'ImportModelRequest$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend permission to use Amazon Key Management Service (KMS) to encrypt or decrypt the custom model.
', 'KeyPhrasesDetectionJobProperties$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data.
', 'PiiEntitiesDetectionJobProperties$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data.
', 'SentimentDetectionJobProperties$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data.
', 'StartDocumentClassificationJobRequest$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data.
', 'StartDominantLanguageDetectionJobRequest$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data. For more information, see Role-based permissions.
', 'StartEntitiesDetectionJobRequest$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data. For more information, see Role-based permissions.
', 'StartEventsDetectionJobRequest$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data.
', 'StartKeyPhrasesDetectionJobRequest$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data. For more information, see Role-based permissions.
', 'StartPiiEntitiesDetectionJobRequest$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data.
', 'StartSentimentDetectionJobRequest$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data. For more information, see Role-based permissions.
', 'StartTargetedSentimentDetectionJobRequest$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data. For more information, see Role-based permissions.
', 'StartTopicsDetectionJobRequest$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data. For more information, see Role-based permissions.
', 'TargetedSentimentDetectionJobProperties$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your input data.
', 'TopicsDetectionJobProperties$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend read access to your job data.
', 'UpdateEndpointRequest$DesiredDataAccessRoleArn' => 'Data access role ARN to use in case the new model is encrypted with a customer CMK.
', 'UpdateFlywheelRequest$DataAccessRoleArn' => 'The Amazon Resource Name (ARN) of the IAM role that grants Amazon Comprehend permission to access the flywheel data.
', ], ], 'ImportModelRequest' => [ 'base' => NULL, 'refs' => [], ], 'ImportModelResponse' => [ 'base' => NULL, 'refs' => [], ], 'InferenceUnitsInteger' => [ 'base' => NULL, 'refs' => [ 'CreateEndpointRequest$DesiredInferenceUnits' => 'The desired number of inference units to be used by the model using this endpoint. Each inference unit represents a throughput of 100 characters per second.
', 'EndpointProperties$DesiredInferenceUnits' => 'The desired number of inference units to be used by the model using this endpoint. Each inference unit represents a throughput of 100 characters per second.
', 'EndpointProperties$CurrentInferenceUnits' => 'The number of inference units currently used by the model using this endpoint.
', 'UpdateEndpointRequest$DesiredInferenceUnits' => 'The desired number of inference units to be used by the model using this endpoint. Each inference unit represents a throughput of 100 characters per second.
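/* Illustrative sketch: provisioning throughput in inference units, where each
   unit represents 100 characters per second, so two units target roughly 200
   characters per second. Reuses the $client from the first sketch; both ARNs
   are placeholder assumptions.

   $client->createEndpoint([
       'EndpointName'          => 'example-endpoint',
       'ModelArn'              => 'arn:aws:comprehend:us-east-1:123456789012:document-classifier/example',
       'DesiredInferenceUnits' => 2,
   ]);
   // Scale the same endpoint later:
   $client->updateEndpoint([
       'EndpointArn'           => 'arn:aws:comprehend:us-east-1:123456789012:document-classifier-endpoint/example',
       'DesiredInferenceUnits' => 4,
   ]);
*/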
', ], ], 'InputDataConfig' => [ 'base' => 'The input properties for an inference job. The document reader config field applies only to non-text inputs for custom analysis.
', 'refs' => [ 'DocumentClassificationJobProperties$InputDataConfig' => 'The input data configuration that you supplied when you created the document classification job.
', 'DominantLanguageDetectionJobProperties$InputDataConfig' => 'The input data configuration that you supplied when you created the dominant language detection job.
', 'EntitiesDetectionJobProperties$InputDataConfig' => 'The input data configuration that you supplied when you created the entities detection job.
', 'EventsDetectionJobProperties$InputDataConfig' => 'The input data configuration that you supplied when you created the events detection job.
', 'KeyPhrasesDetectionJobProperties$InputDataConfig' => 'The input data configuration that you supplied when you created the key phrases detection job.
', 'PiiEntitiesDetectionJobProperties$InputDataConfig' => 'The input properties for a PII entities detection job.
', 'SentimentDetectionJobProperties$InputDataConfig' => 'The input data configuration that you supplied when you created the sentiment detection job.
', 'StartDocumentClassificationJobRequest$InputDataConfig' => 'Specifies the format and location of the input data for the job.
', 'StartDominantLanguageDetectionJobRequest$InputDataConfig' => 'Specifies the format and location of the input data for the job.
', 'StartEntitiesDetectionJobRequest$InputDataConfig' => 'Specifies the format and location of the input data for the job.
', 'StartEventsDetectionJobRequest$InputDataConfig' => 'Specifies the format and location of the input data for the job.
', 'StartKeyPhrasesDetectionJobRequest$InputDataConfig' => 'Specifies the format and location of the input data for the job.
', 'StartPiiEntitiesDetectionJobRequest$InputDataConfig' => 'The input properties for a PII entities detection job.
', 'StartSentimentDetectionJobRequest$InputDataConfig' => 'Specifies the format and location of the input data for the job.
', 'StartTargetedSentimentDetectionJobRequest$InputDataConfig' => NULL, 'StartTopicsDetectionJobRequest$InputDataConfig' => 'Specifies the format and location of the input data for the job.
', 'TargetedSentimentDetectionJobProperties$InputDataConfig' => NULL, 'TopicsDetectionJobProperties$InputDataConfig' => 'The input data configuration supplied when you created the topic detection job.
', ], ], 'InputFormat' => [ 'base' => NULL, 'refs' => [ 'DatasetEntityRecognizerDocuments$InputFormat' => 'Specifies how the text in an input file should be processed. This is optional, and the default is ONE_DOC_PER_LINE. ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers. ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
', 'EntityRecognizerDocuments$InputFormat' => 'Specifies how the text in an input file should be processed. This is optional, and the default is ONE_DOC_PER_LINE. ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers. ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
', 'InputDataConfig$InputFormat' => 'Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE
- Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE
- Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
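/* Illustrative sketch: choosing ONE_DOC_PER_LINE for many short documents when
   starting an asynchronous job; reuses the $client from the first sketch.
   Bucket and role values are placeholder assumptions.

   $client->startDominantLanguageDetectionJob([
       'JobName'           => 'example-language-job',
       'DataAccessRoleArn' => 'arn:aws:iam::123456789012:role/example-role',
       'InputDataConfig'   => [
           'S3Uri'       => 's3://example-bucket/messages/',
           'InputFormat' => 'ONE_DOC_PER_LINE',
       ],
       'OutputDataConfig'  => ['S3Uri' => 's3://example-bucket/output/'],
   ]);
*/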
The zero-based index of the document in the input list.
', 'BatchDetectEntitiesItemResult$Index' => 'The zero-based index of the document in the input list.
', 'BatchDetectKeyPhrasesItemResult$Index' => 'The zero-based index of the document in the input list.
', 'BatchDetectSentimentItemResult$Index' => 'The zero-based index of the document in the input list.
', 'BatchDetectSyntaxItemResult$Index' => 'The zero-based index of the document in the input list.
', 'BatchDetectTargetedSentimentItemResult$Index' => 'The zero-based index of this result in the input list.
', 'BatchItemError$Index' => 'The zero-based index of the document in the input list.
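/* Illustrative sketch: correlating batch results and errors with the original
   documents through the zero-based Index fields described above; reuses the
   $client from the first sketch.

   $texts  = ['I love it.', 'It broke on day two.'];
   $result = $client->batchDetectSentiment(['TextList' => $texts, 'LanguageCode' => 'en']);
   foreach ($result['ResultList'] as $item) {
       echo $texts[$item['Index']] . ' => ' . $item['Sentiment'] . PHP_EOL;
   }
   foreach ($result['ErrorList'] as $error) {
       echo $texts[$error['Index']] . ' failed: ' . $error['ErrorCode'] . PHP_EOL;
   }
*/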
', 'Block$Page' => 'Page number where the block appears.
', 'BlockReference$BeginOffset' => 'Offset of the start of the block within its parent block.
', 'BlockReference$EndOffset' => 'Offset of the end of the block within its parent block.
', 'ChildBlock$BeginOffset' => 'Offset of the start of the child block within its parent block.
', 'ChildBlock$EndOffset' => 'Offset of the end of the child block within its parent block.
', 'ClassifierMetadata$NumberOfLabels' => 'The number of labels in the input data.
', 'ClassifierMetadata$NumberOfTrainedDocuments' => 'The number of documents in the input data that were used to train the classifier. Typically this is 80 to 90 percent of the input documents.
', 'ClassifierMetadata$NumberOfTestDocuments' => 'The number of documents in the input data that were used to test the classifier. Typically this is 10 to 20 percent of the input documents, up to 10,000 documents.
', 'DocumentClass$Page' => 'Page number in the input document. This field is present in the response only if your request includes the Byte
parameter.
The number of versions you created.
', 'DocumentLabel$Page' => 'Page number where the label occurs. This field is present in the response only if your request includes the Byte
parameter.
Number of pages in the document.
', 'DocumentTypeListItem$Page' => 'Page number.
', 'Entity$BeginOffset' => 'The zero-based offset from the beginning of the source text to the first character in the entity.
This field is empty for non-text input.
', 'Entity$EndOffset' => 'The zero-based offset from the beginning of the source text to the last character in the entity.
This field is empty for non-text input.
', 'EntityRecognizerMetadata$NumberOfTrainedDocuments' => 'The number of documents in the input data that were used to train the entity recognizer. Typically this is 80 to 90 percent of the input documents.
', 'EntityRecognizerMetadata$NumberOfTestDocuments' => 'The number of documents in the input data that were used to test the entity recognizer. Typically this is 10 to 20 percent of the input documents.
', 'EntityRecognizerMetadataEntityTypesListItem$NumberOfTrainMentions' => 'Indicates the number of times the given entity type was seen in the training data.
', 'EntityRecognizerSummary$NumberOfVersions' => 'The number of versions you created.
', 'ErrorsListItem$Page' => 'Page number where the error occurred.
', 'ExtractedCharactersListItem$Page' => 'Page number.
', 'ExtractedCharactersListItem$Count' => 'Number of characters extracted from each page.
', 'KeyPhrase$BeginOffset' => 'The zero-based offset from the beginning of the source text to the first character in the key phrase.
', 'KeyPhrase$EndOffset' => 'The zero-based offset from the beginning of the source text to the last character in the key phrase.
', 'ListOfDescriptiveMentionIndices$member' => NULL, 'PiiEntity$BeginOffset' => 'The zero-based offset from the beginning of the source text to the first character in the entity.
', 'PiiEntity$EndOffset' => 'The zero-based offset from the beginning of the source text to the last character in the entity.
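/* Illustrative sketch: using the begin/end offsets to recover each detected
   PII span from the source text; reuses the $client from the first sketch.
   (substr works here for plain ASCII input; multibyte text may need mb_substr.)

   $text   = 'Contact Jane Doe at 555-0100.';
   $result = $client->detectPiiEntities(['Text' => $text, 'LanguageCode' => 'en']);
   foreach ($result['Entities'] as $entity) {
       $span = substr($text, $entity['BeginOffset'], $entity['EndOffset'] - $entity['BeginOffset']);
       echo $entity['Type'] . ': ' . $span . PHP_EOL;
   }
*/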
', 'SyntaxToken$TokenId' => 'A unique identifier for a token.
', 'SyntaxToken$BeginOffset' => 'The zero-based offset from the beginning of the source text to the first character in the word.
', 'SyntaxToken$EndOffset' => 'The zero-based offset from the beginning of the source text to the last character in the word.
', 'TargetedSentimentMention$BeginOffset' => 'The offset into the document text where the mention begins.
', 'TargetedSentimentMention$EndOffset' => 'The offset into the document text where the mention ends.
', 'TopicsDetectionJobProperties$NumberOfTopics' => 'The number of topics to detect supplied when you created the topic detection job. The default is 10.
', 'WarningsListItem$Page' => 'Page number in the input document.
', ], ], 'InternalServerException' => [ 'base' => 'An internal server error occurred. Retry your request.
', 'refs' => [], ], 'InvalidFilterException' => [ 'base' => 'The filter specified for the operation is invalid. Specify a different filter.
', 'refs' => [], ], 'InvalidRequestDetail' => [ 'base' => 'Provides additional detail about why the request failed:
Document size is too large - Check the size of your file and resubmit the request.
Document type is not supported - Check the file type and resubmit the request.
Too many pages in the document - Check the number of pages in your file and resubmit the request.
Access denied to Amazon Textract - Verify that your account has permission to use Amazon Textract API operations and resubmit the request.
Reason code is INVALID_DOCUMENT
.
The request is invalid.
', 'refs' => [], ], 'InvalidRequestReason' => [ 'base' => NULL, 'refs' => [ 'InvalidRequestException$Reason' => NULL, ], ], 'JobId' => [ 'base' => NULL, 'refs' => [ 'DescribeDocumentClassificationJobRequest$JobId' => 'The identifier that Amazon Comprehend generated for the job. The StartDocumentClassificationJob
operation returns this identifier in its response.
The identifier that Amazon Comprehend generated for the job. The StartDominantLanguageDetectionJob
operation returns this identifier in its response.
The identifier that Amazon Comprehend generated for the job. The StartEntitiesDetectionJob
operation returns this identifier in its response.
The identifier of the events detection job.
', 'DescribeKeyPhrasesDetectionJobRequest$JobId' => 'The identifier that Amazon Comprehend generated for the job. The StartKeyPhrasesDetectionJob
operation returns this identifier in its response.
The identifier that Amazon Comprehend generated for the job. The operation returns this identifier in its response.
', 'DescribeSentimentDetectionJobRequest$JobId' => 'The identifier that Amazon Comprehend generated for the job. The operation returns this identifier in its response.
', 'DescribeTargetedSentimentDetectionJobRequest$JobId' => 'The identifier that Amazon Comprehend generated for the job. The StartTargetedSentimentDetectionJob
operation returns this identifier in its response.
The identifier assigned by the user to the detection job.
', 'DocumentClassificationJobProperties$JobId' => 'The identifier assigned to the document classification job.
', 'DominantLanguageDetectionJobProperties$JobId' => 'The identifier assigned to the dominant language detection job.
', 'EntitiesDetectionJobProperties$JobId' => 'The identifier assigned to the entities detection job.
', 'EventsDetectionJobProperties$JobId' => 'The identifier assigned to the events detection job.
', 'KeyPhrasesDetectionJobProperties$JobId' => 'The identifier assigned to the key phrases detection job.
', 'PiiEntitiesDetectionJobProperties$JobId' => 'The identifier assigned to the PII entities detection job.
', 'SentimentDetectionJobProperties$JobId' => 'The identifier assigned to the sentiment detection job.
', 'StartDocumentClassificationJobResponse$JobId' => 'The identifier generated for the job. To get the status of the job, use this identifier with the DescribeDocumentClassificationJob
operation.
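/* Illustrative sketch: starting an asynchronous classification job and polling
   its status with the returned JobId; reuses the $client from the first sketch.
   All configuration values are placeholder assumptions.

   $start = $client->startDocumentClassificationJob([
       'JobName'               => 'example-classification-job',
       'DocumentClassifierArn' => 'arn:aws:comprehend:us-east-1:123456789012:document-classifier/example',
       'DataAccessRoleArn'     => 'arn:aws:iam::123456789012:role/example-role',
       'InputDataConfig'       => ['S3Uri' => 's3://example-bucket/input/'],
       'OutputDataConfig'      => ['S3Uri' => 's3://example-bucket/output/'],
   ]);
   do {
       sleep(30);
       $job    = $client->describeDocumentClassificationJob(['JobId' => $start['JobId']]);
       $status = $job['DocumentClassificationJobProperties']['JobStatus'];
   } while (in_array($status, ['SUBMITTED', 'IN_PROGRESS'], true));
*/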
The identifier generated for the job. To get the status of a job, use this identifier with the operation.
', 'StartEntitiesDetectionJobResponse$JobId' => 'The identifier generated for the job. To get the status of job, use this identifier with the operation.
', 'StartEventsDetectionJobResponse$JobId' => 'A unique identifier for the request. If you don\'t set the client request token, Amazon Comprehend generates one.
', 'StartKeyPhrasesDetectionJobResponse$JobId' => 'The identifier generated for the job. To get the status of a job, use this identifier with the operation.
', 'StartPiiEntitiesDetectionJobResponse$JobId' => 'The identifier generated for the job.
', 'StartSentimentDetectionJobResponse$JobId' => 'The identifier generated for the job. To get the status of a job, use this identifier with the operation.
', 'StartTargetedSentimentDetectionJobResponse$JobId' => 'The identifier generated for the job. To get the status of a job, use this identifier with the DescribeTargetedSentimentDetectionJob
operation.
The identifier generated for the job. To get the status of the job, use this identifier with the DescribeTopicDetectionJob
operation.
The identifier of the dominant language detection job to stop.
', 'StopDominantLanguageDetectionJobResponse$JobId' => 'The identifier of the dominant language detection job to stop.
', 'StopEntitiesDetectionJobRequest$JobId' => 'The identifier of the entities detection job to stop.
', 'StopEntitiesDetectionJobResponse$JobId' => 'The identifier of the entities detection job to stop.
', 'StopEventsDetectionJobRequest$JobId' => 'The identifier of the events detection job to stop.
', 'StopEventsDetectionJobResponse$JobId' => 'The identifier of the events detection job to stop.
', 'StopKeyPhrasesDetectionJobRequest$JobId' => 'The identifier of the key phrases detection job to stop.
', 'StopKeyPhrasesDetectionJobResponse$JobId' => 'The identifier of the key phrases detection job to stop.
', 'StopPiiEntitiesDetectionJobRequest$JobId' => 'The identifier of the PII entities detection job to stop.
', 'StopPiiEntitiesDetectionJobResponse$JobId' => 'The identifier of the PII entities detection job to stop.
', 'StopSentimentDetectionJobRequest$JobId' => 'The identifier of the sentiment detection job to stop.
', 'StopSentimentDetectionJobResponse$JobId' => 'The identifier of the sentiment detection job to stop.
', 'StopTargetedSentimentDetectionJobRequest$JobId' => 'The identifier of the targeted sentiment detection job to stop.
', 'StopTargetedSentimentDetectionJobResponse$JobId' => 'The identifier of the targeted sentiment detection job to stop.
', 'TargetedSentimentDetectionJobProperties$JobId' => 'The identifier assigned to the targeted sentiment detection job.
', 'TopicsDetectionJobProperties$JobId' => 'The identifier assigned to the topic detection job.
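Taken together, these JobId fields follow one lifecycle: a Start* call returns the identifier, and the matching Describe* (and Stop*) call accepts it. A minimal sketch with the AWS SDK for PHP; the region, job name, role ARN, and bucket below are illustrative placeholders, not values from this reference:

use Aws\Comprehend\ComprehendClient;

$client = new ComprehendClient(['region' => 'us-east-1', 'version' => '2017-11-27']);

// Start an asynchronous entities detection job; the response carries the JobId.
$start = $client->startEntitiesDetectionJob([
    'JobName' => 'example-entities-job',                             // hypothetical name
    'LanguageCode' => 'en',
    'DataAccessRoleArn' => 'arn:aws:iam::111122223333:role/example', // placeholder ARN
    'InputDataConfig' => ['S3Uri' => 's3://example-bucket/input/'],  // placeholder bucket
    'OutputDataConfig' => ['S3Uri' => 's3://example-bucket/output/'],
]);

// The same JobId drives the matching Describe operation.
$job = $client->describeEntitiesDetectionJob(['JobId' => $start['JobId']]);
echo $job['EntitiesDetectionJobProperties']['JobStatus'] . PHP_EOL;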
', ], ], 'JobName' => [ 'base' => NULL, 'refs' => [ 'DocumentClassificationJobFilter$JobName' => 'Filters on the name of the job.
', 'DocumentClassificationJobProperties$JobName' => 'The name that you assigned to the document classification job.
', 'DominantLanguageDetectionJobFilter$JobName' => 'Filters on the name of the job.
', 'DominantLanguageDetectionJobProperties$JobName' => 'The name that you assigned to the dominant language detection job.
', 'EntitiesDetectionJobFilter$JobName' => 'Filters on the name of the job.
', 'EntitiesDetectionJobProperties$JobName' => 'The name that you assigned to the entities detection job.
', 'EventsDetectionJobFilter$JobName' => 'Filters on the name of the events detection job.
', 'EventsDetectionJobProperties$JobName' => 'The name that you assigned to the events detection job.
', 'KeyPhrasesDetectionJobFilter$JobName' => 'Filters on the name of the job.
', 'KeyPhrasesDetectionJobProperties$JobName' => 'The name that you assigned to the key phrases detection job.
', 'PiiEntitiesDetectionJobFilter$JobName' => 'Filters on the name of the job.
', 'PiiEntitiesDetectionJobProperties$JobName' => 'The name that you assigned to the PII entities detection job.
', 'SentimentDetectionJobFilter$JobName' => 'Filters on the name of the job.
', 'SentimentDetectionJobProperties$JobName' => 'The name that you assigned to the sentiment detection job.
', 'StartDocumentClassificationJobRequest$JobName' => 'The identifier of the job.
', 'StartDominantLanguageDetectionJobRequest$JobName' => 'An identifier for the job.
', 'StartEntitiesDetectionJobRequest$JobName' => 'The identifier of the job.
', 'StartEventsDetectionJobRequest$JobName' => 'The identifier of the events detection job.
', 'StartKeyPhrasesDetectionJobRequest$JobName' => 'The identifier of the job.
', 'StartPiiEntitiesDetectionJobRequest$JobName' => 'The identifier of the job.
', 'StartSentimentDetectionJobRequest$JobName' => 'The identifier of the job.
', 'StartTargetedSentimentDetectionJobRequest$JobName' => 'The identifier of the job.
', 'StartTopicsDetectionJobRequest$JobName' => 'The identifier of the job.
', 'TargetedSentimentDetectionJobFilter$JobName' => 'Filters on the name of the job.
', 'TargetedSentimentDetectionJobProperties$JobName' => 'The name that you assigned to the targeted sentiment detection job.
', 'TopicsDetectionJobFilter$JobName' => 'Filters on the name of the job.', 'TopicsDetectionJobProperties$JobName' => 'The name of the topic detection job.
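As a sketch of how a JobName is used downstream, the List* operations accept it in a filter structure (the client setup and job name are assumed placeholders):

use Aws\Comprehend\ComprehendClient;

$client = new ComprehendClient(['region' => 'us-east-1', 'version' => '2017-11-27']);

// Return only sentiment detection jobs whose name matches the filter.
$jobs = $client->listSentimentDetectionJobs([
    'Filter' => ['JobName' => 'example-sentiment-job'], // hypothetical job name
]);
foreach ($jobs['SentimentDetectionJobPropertiesList'] as $props) {
    echo $props['JobName'] . ' => ' . $props['JobStatus'] . PHP_EOL;
}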
', ], ], 'JobNotFoundException' => [ 'base' => 'The specified job was not found. Check the job ID and try again.
', 'refs' => [], ], 'JobStatus' => [ 'base' => NULL, 'refs' => [ 'DocumentClassificationJobFilter$JobStatus' => 'Filters the list based on job status. Returns only jobs with the specified status.
', 'DocumentClassificationJobProperties$JobStatus' => 'The current status of the document classification job. If the status is FAILED, the Message field shows the reason for the failure.
', 'DominantLanguageDetectionJobFilter$JobStatus' => 'Filters the list of jobs based on job status. Returns only jobs with the specified status.
', 'DominantLanguageDetectionJobProperties$JobStatus' => 'The current status of the dominant language detection job. If the status is FAILED, the Message field shows the reason for the failure.
', 'EntitiesDetectionJobFilter$JobStatus' => 'Filters the list of jobs based on job status. Returns only jobs with the specified status.
', 'EntitiesDetectionJobProperties$JobStatus' => 'The current status of the entities detection job. If the status is FAILED, the Message field shows the reason for the failure.
', 'EventsDetectionJobFilter$JobStatus' => 'Filters the list of jobs based on job status. Returns only jobs with the specified status.
', 'EventsDetectionJobProperties$JobStatus' => 'The current status of the events detection job.
', 'KeyPhrasesDetectionJobFilter$JobStatus' => 'Filters the list of jobs based on job status. Returns only jobs with the specified status.
', 'KeyPhrasesDetectionJobProperties$JobStatus' => 'The current status of the key phrases detection job. If the status is FAILED, the Message field shows the reason for the failure.
', 'PiiEntitiesDetectionJobFilter$JobStatus' => 'Filters the list of jobs based on job status. Returns only jobs with the specified status.
', 'PiiEntitiesDetectionJobProperties$JobStatus' => 'The current status of the PII entities detection job. If the status is FAILED, the Message field shows the reason for the failure.
', 'SentimentDetectionJobFilter$JobStatus' => 'Filters the list of jobs based on job status. Returns only jobs with the specified status.
', 'SentimentDetectionJobProperties$JobStatus' => 'The current status of the sentiment detection job. If the status is FAILED, the Messages field shows the reason for the failure.
', 'StartDocumentClassificationJobResponse$JobStatus' => 'The status of the job:
SUBMITTED - The job has been received and queued for processing.
IN_PROGRESS - Amazon Comprehend is processing the job.
COMPLETED - The job was successfully completed and the output is available.
FAILED - The job did not complete. For details, use the DescribeDocumentClassificationJob operation.
STOP_REQUESTED - Amazon Comprehend has received a stop request for the job and is processing the request.
STOPPED - The job was successfully stopped without completing.
', 'StartDominantLanguageDetectionJobResponse$JobStatus' => 'The status of the job.
SUBMITTED - The job has been received and is queued for processing.
IN_PROGRESS - Amazon Comprehend is processing the job.
COMPLETED - The job was successfully completed and the output is available.
FAILED - The job did not complete. To get details, use the operation.
', 'StartEntitiesDetectionJobResponse$JobStatus' => 'The status of the job.
SUBMITTED - The job has been received and is queued for processing.
IN_PROGRESS - Amazon Comprehend is processing the job.
COMPLETED - The job was successfully completed and the output is available.
FAILED - The job did not complete. To get details, use the operation.
STOP_REQUESTED - Amazon Comprehend has received a stop request for the job and is processing the request.
STOPPED - The job was successfully stopped without completing.
', 'StartEventsDetectionJobResponse$JobStatus' => 'The status of the events detection job.
', 'StartKeyPhrasesDetectionJobResponse$JobStatus' => 'The status of the job.
SUBMITTED - The job has been received and is queued for processing.
IN_PROGRESS - Amazon Comprehend is processing the job.
COMPLETED - The job was successfully completed and the output is available.
FAILED - The job did not complete. To get details, use the operation.
', 'StartPiiEntitiesDetectionJobResponse$JobStatus' => 'The status of the job.
', 'StartSentimentDetectionJobResponse$JobStatus' => 'The status of the job.
SUBMITTED - The job has been received and is queued for processing.
IN_PROGRESS - Amazon Comprehend is processing the job.
COMPLETED - The job was successfully completed and the output is available.
FAILED - The job did not complete. To get details, use the operation.
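These values drive the usual poll-until-terminal loop. A hedged sketch (the JobId and polling interval are placeholders):

use Aws\Comprehend\ComprehendClient;

$client = new ComprehendClient(['region' => 'us-east-1', 'version' => '2017-11-27']);
$jobId = 'abc123examplejobid'; // placeholder JobId from a Start* response

// Poll until the job leaves the in-flight statuses.
do {
    sleep(30);
    $props = $client->describeKeyPhrasesDetectionJob(['JobId' => $jobId])
        ['KeyPhrasesDetectionJobProperties'];
} while (in_array($props['JobStatus'], ['SUBMITTED', 'IN_PROGRESS', 'STOP_REQUESTED'], true));

// On FAILED, the Message field carries the reason.
if ($props['JobStatus'] === 'FAILED') {
    echo 'Job failed: ' . ($props['Message'] ?? 'no message') . PHP_EOL;
}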
', 'StartTargetedSentimentDetectionJobResponse$JobStatus' => 'The status of the job.
SUBMITTED - The job has been received and is queued for processing.
IN_PROGRESS - Amazon Comprehend is processing the job.
COMPLETED - The job was successfully completed and the output is available.
FAILED - The job did not complete. To get details, use the DescribeTargetedSentimentDetectionJob operation.
', 'StartTopicsDetectionJobResponse$JobStatus' => 'The status of the job:
SUBMITTED - The job has been received and is queued for processing.
IN_PROGRESS - Amazon Comprehend is processing the job.
COMPLETED - The job was successfully completed and the output is available.
FAILED - The job did not complete. To get details, use the DescribeTopicDetectionJob operation.
', 'StopDominantLanguageDetectionJobResponse$JobStatus' => 'Either STOP_REQUESTED if the job is currently running, or STOPPED if the job was previously stopped with the StopDominantLanguageDetectionJob operation.
', 'StopEntitiesDetectionJobResponse$JobStatus' => 'Either STOP_REQUESTED if the job is currently running, or STOPPED if the job was previously stopped with the StopEntitiesDetectionJob operation.
', 'StopEventsDetectionJobResponse$JobStatus' => 'The status of the events detection job.
', 'StopKeyPhrasesDetectionJobResponse$JobStatus' => 'Either STOP_REQUESTED if the job is currently running, or STOPPED if the job was previously stopped with the StopKeyPhrasesDetectionJob operation.
', 'StopPiiEntitiesDetectionJobResponse$JobStatus' => 'The status of the PII entities detection job.
', 'StopSentimentDetectionJobResponse$JobStatus' => 'Either STOP_REQUESTED if the job is currently running, or STOPPED if the job was previously stopped with the StopSentimentDetectionJob operation.
', 'StopTargetedSentimentDetectionJobResponse$JobStatus' => 'Either STOP_REQUESTED if the job is currently running, or STOPPED if the job was previously stopped with the StopTargetedSentimentDetectionJob operation.
', 'TargetedSentimentDetectionJobFilter$JobStatus' => 'Filters the list of jobs based on job status. Returns only jobs with the specified status.
', 'TargetedSentimentDetectionJobProperties$JobStatus' => 'The current status of the targeted sentiment detection job. If the status is FAILED, the Messages field shows the reason for the failure.
', 'TopicsDetectionJobFilter$JobStatus' => 'Filters the list of topic detection jobs based on job status. Returns only jobs with the specified status.
', 'TopicsDetectionJobProperties$JobStatus' => 'The current status of the topic detection job. If the status is FAILED, the reason for the failure is shown in the Message field.
', ], ], 'KeyPhrase' => [ 'base' => 'Describes a key noun phrase.
', 'refs' => [ 'ListOfKeyPhrases$member' => NULL, ], ], 'KeyPhrasesDetectionJobFilter' => [ 'base' => 'Provides information for filtering a list of key phrases detection jobs. For more information, see the operation.
', 'refs' => [ 'ListKeyPhrasesDetectionJobsRequest$Filter' => 'Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
', ], ], 'KeyPhrasesDetectionJobProperties' => [ 'base' => 'Provides information about a key phrases detection job.
', 'refs' => [ 'DescribeKeyPhrasesDetectionJobResponse$KeyPhrasesDetectionJobProperties' => 'An object that contains the properties associated with a key phrases detection job.
', 'KeyPhrasesDetectionJobPropertiesList$member' => NULL, ], ], 'KeyPhrasesDetectionJobPropertiesList' => [ 'base' => NULL, 'refs' => [ 'ListKeyPhrasesDetectionJobsResponse$KeyPhrasesDetectionJobPropertiesList' => 'A list containing the properties of each job that is returned.
', ], ], 'KmsKeyId' => [ 'base' => NULL, 'refs' => [ 'CreateDocumentClassifierRequest$VolumeKmsKeyId' => 'ID for the Amazon Web Services Key Management Service (KMS) key that Amazon Comprehend uses to encrypt data on the storage volume attached to the ML compute instance(s) that process the analysis job. The VolumeKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'CreateDocumentClassifierRequest$ModelKmsKeyId' => 'ID for the KMS key that Amazon Comprehend uses to encrypt trained custom models. The ModelKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'CreateEntityRecognizerRequest$VolumeKmsKeyId' => 'ID for the Amazon Web Services Key Management Service (KMS) key that Amazon Comprehend uses to encrypt data on the storage volume attached to the ML compute instance(s) that process the analysis job. The VolumeKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'CreateEntityRecognizerRequest$ModelKmsKeyId' => 'ID for the KMS key that Amazon Comprehend uses to encrypt trained custom models. The ModelKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'DataSecurityConfig$ModelKmsKeyId' => 'ID for the KMS key that Amazon Comprehend uses to encrypt trained custom models. The ModelKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'DataSecurityConfig$VolumeKmsKeyId' => 'ID for the KMS key that Amazon Comprehend uses to encrypt the volume.
', 'DataSecurityConfig$DataLakeKmsKeyId' => 'ID for the KMS key that Amazon Comprehend uses to encrypt the data in the data lake.
', 'DocumentClassificationJobProperties$VolumeKmsKeyId' => 'ID for the Amazon Web Services Key Management Service (KMS) key that Amazon Comprehend uses to encrypt data on the storage volume attached to the ML compute instance(s) that process the analysis job. The VolumeKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'DocumentClassifierOutputDataConfig$KmsKeyId' => 'ID for the Amazon Web Services Key Management Service (KMS) key that Amazon Comprehend uses to encrypt the output results from an analysis job. The KmsKeyId can be one of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
KMS Key Alias: "alias/ExampleAlias"
ARN of a KMS Key Alias: "arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias"
', 'DocumentClassifierProperties$VolumeKmsKeyId' => 'ID for the Amazon Web Services Key Management Service (KMS) key that Amazon Comprehend uses to encrypt data on the storage volume attached to the ML compute instance(s) that process the analysis job. The VolumeKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'DocumentClassifierProperties$ModelKmsKeyId' => 'ID for the KMS key that Amazon Comprehend uses to encrypt trained custom models. The ModelKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'DominantLanguageDetectionJobProperties$VolumeKmsKeyId' => 'ID for the Amazon Web Services Key Management Service (KMS) key that Amazon Comprehend uses to encrypt data on the storage volume attached to the ML compute instance(s) that process the analysis job. The VolumeKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'EntitiesDetectionJobProperties$VolumeKmsKeyId' => 'ID for the Amazon Web Services Key Management Service (KMS) key that Amazon Comprehend uses to encrypt data on the storage volume attached to the ML compute instance(s) that process the analysis job. The VolumeKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'EntityRecognizerProperties$VolumeKmsKeyId' => 'ID for the Amazon Web Services Key Management Service (KMS) key that Amazon Comprehend uses to encrypt data on the storage volume attached to the ML compute instance(s) that process the analysis job. The VolumeKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'EntityRecognizerProperties$ModelKmsKeyId' => 'ID for the KMS key that Amazon Comprehend uses to encrypt trained custom models. The ModelKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'ImportModelRequest$ModelKmsKeyId' => 'ID for the KMS key that Amazon Comprehend uses to encrypt trained custom models. The ModelKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'KeyPhrasesDetectionJobProperties$VolumeKmsKeyId' => 'ID for the KMS key that Amazon Comprehend uses to encrypt data on the storage volume attached to the ML compute instance(s) that process the analysis job. The VolumeKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'OutputDataConfig$KmsKeyId' => 'ID for the Amazon Web Services Key Management Service (KMS) key that Amazon Comprehend uses to encrypt the output results from an analysis job. The KmsKeyId can be one of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
KMS Key Alias: "alias/ExampleAlias"
ARN of a KMS Key Alias: "arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias"
', 'PiiOutputDataConfig$KmsKeyId' => 'ID for the Amazon Web Services Key Management Service (KMS) key that Amazon Comprehend uses to encrypt the output results from an analysis job.
', 'SentimentDetectionJobProperties$VolumeKmsKeyId' => 'ID for the Amazon Web Services Key Management Service (KMS) key that Amazon Comprehend uses to encrypt data on the storage volume attached to the ML compute instance(s) that process the analysis job. The VolumeKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'StartDocumentClassificationJobRequest$VolumeKmsKeyId' => 'ID for the Amazon Web Services Key Management Service (KMS) key that Amazon Comprehend uses to encrypt data on the storage volume attached to the ML compute instance(s) that process the analysis job. The VolumeKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'StartDominantLanguageDetectionJobRequest$VolumeKmsKeyId' => 'ID for the Amazon Web Services Key Management Service (KMS) key that Amazon Comprehend uses to encrypt data on the storage volume attached to the ML compute instance(s) that process the analysis job. The VolumeKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'StartEntitiesDetectionJobRequest$VolumeKmsKeyId' => 'ID for the Amazon Web Services Key Management Service (KMS) key that Amazon Comprehend uses to encrypt data on the storage volume attached to the ML compute instance(s) that process the analysis job. The VolumeKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'StartKeyPhrasesDetectionJobRequest$VolumeKmsKeyId' => 'ID for the Amazon Web Services Key Management Service (KMS) key that Amazon Comprehend uses to encrypt data on the storage volume attached to the ML compute instance(s) that process the analysis job. The VolumeKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'StartSentimentDetectionJobRequest$VolumeKmsKeyId' => 'ID for the Amazon Web Services Key Management Service (KMS) key that Amazon Comprehend uses to encrypt data on the storage volume attached to the ML compute instance(s) that process the analysis job. The VolumeKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'StartTargetedSentimentDetectionJobRequest$VolumeKmsKeyId' => 'ID for the KMS key that Amazon Comprehend uses to encrypt data on the storage volume attached to the ML compute instance(s) that process the analysis job. The VolumeKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'StartTopicsDetectionJobRequest$VolumeKmsKeyId' => 'ID for the Amazon Web Services Key Management Service (KMS) key that Amazon Comprehend uses to encrypt data on the storage volume attached to the ML compute instance(s) that process the analysis job. The VolumeKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'TargetedSentimentDetectionJobProperties$VolumeKmsKeyId' => 'ID for the KMS key that Amazon Comprehend uses to encrypt the data on the storage volume attached to the ML compute instance(s) that process the targeted sentiment detection job. The VolumeKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'TopicsDetectionJobProperties$VolumeKmsKeyId' => 'ID for the Amazon Web Services Key Management Service (KMS) key that Amazon Comprehend uses to encrypt data on the storage volume attached to the ML compute instance(s) that process the analysis job. The VolumeKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'UpdateDataSecurityConfig$ModelKmsKeyId' => 'ID for the KMS key that Amazon Comprehend uses to encrypt trained custom models. The ModelKmsKeyId can be either of the following formats:
KMS Key ID: "1234abcd-12ab-34cd-56ef-1234567890ab"
Amazon Resource Name (ARN) of a KMS Key: "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
', 'UpdateDataSecurityConfig$VolumeKmsKeyId' => 'ID for the KMS key that Amazon Comprehend uses to encrypt the volume.
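A sketch of where the two key formats land in a request: the Start* operations take VolumeKmsKeyId at the top level and an output KmsKeyId inside OutputDataConfig. The ARNs, bucket, and classifier below are placeholders:

use Aws\Comprehend\ComprehendClient;

$client = new ComprehendClient(['region' => 'us-west-2', 'version' => '2017-11-27']);

$client->startDocumentClassificationJob([
    'JobName' => 'example-classification-job',
    'DocumentClassifierArn' => 'arn:aws:comprehend:us-west-2:111122223333:document-classifier/example', // placeholder
    'DataAccessRoleArn' => 'arn:aws:iam::111122223333:role/example', // placeholder ARN
    'InputDataConfig' => ['S3Uri' => 's3://example-bucket/input/'],  // placeholder bucket
    'OutputDataConfig' => [
        'S3Uri' => 's3://example-bucket/output/',
        'KmsKeyId' => '1234abcd-12ab-34cd-56ef-1234567890ab', // key ID format; an ARN or alias is also accepted here
    ],
    // Storage-volume encryption accepts a key ID or key ARN only.
    'VolumeKmsKeyId' => 'arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab',
]);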
', ], ], 'KmsKeyValidationException' => [ 'base' => 'The KMS customer managed key (CMK) entered cannot be validated. Verify the key and re-enter it.
', 'refs' => [], ], 'LabelDelimiter' => [ 'base' => NULL, 'refs' => [ 'DatasetDocumentClassifierInputDataConfig$LabelDelimiter' => 'Indicates the delimiter used to separate each label for training a multi-label classifier. The default delimiter between labels is a pipe (|). You can use a different character as a delimiter (if it\'s an allowed character) by specifying it under Delimiter for labels. If the training documents use a delimiter other than the default or the delimiter you specify, the labels on that line will be combined to make a single unique label, such as LABELLABELLABEL.
', 'DocumentClassifierInputDataConfig$LabelDelimiter' => 'Indicates the delimiter used to separate each label for training a multi-label classifier. The default delimiter between labels is a pipe (|). You can use a different character as a delimiter (if it\'s an allowed character) by specifying it under Delimiter for labels. If the training documents use a delimiter other than the default or the delimiter you specify, the labels on that line will be combined to make a single unique label, such as LABELLABELLABEL.
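For illustration only, a two-row multi-label training CSV using the default pipe delimiter might look like this (labels and text are invented):

COMEDY|ROMANCE,"Text of a training document that carries two labels."
ACTION,"Text of a training document that carries a single label."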
', ], ], 'LabelListItem' => [ 'base' => NULL, 'refs' => [ 'LabelsList$member' => NULL, ], ], 'LabelsList' => [ 'base' => NULL, 'refs' => [ 'DocumentClassificationConfig$Labels' => 'One or more labels to associate with the custom classifier.
', ], ], 'LanguageCode' => [ 'base' => NULL, 'refs' => [ 'BatchDetectEntitiesRequest$LanguageCode' => 'The language of the input documents. You can specify any of the primary languages supported by Amazon Comprehend. All documents must be in the same language.
', 'BatchDetectKeyPhrasesRequest$LanguageCode' => 'The language of the input documents. You can specify any of the primary languages supported by Amazon Comprehend. All documents must be in the same language.
', 'BatchDetectSentimentRequest$LanguageCode' => 'The language of the input documents. You can specify any of the primary languages supported by Amazon Comprehend. All documents must be in the same language.
', 'BatchDetectTargetedSentimentRequest$LanguageCode' => 'The language of the input documents. Currently, English is the only supported language.
', 'ContainsPiiEntitiesRequest$LanguageCode' => 'The language of the input documents. Currently, English is the only valid language.
', 'CreateDocumentClassifierRequest$LanguageCode' => 'The language of the input documents. You can specify any of the languages supported by Amazon Comprehend. All documents must be in the same language.
', 'CreateEntityRecognizerRequest$LanguageCode' => 'You can specify any of the following languages: English ("en"), Spanish ("es"), French ("fr"), Italian ("it"), German ("de"), or Portuguese ("pt"). If you plan to use this entity recognizer with PDF, Word, or image input files, you must specify English as the language. All training documents must be in the same language.
', 'DetectEntitiesRequest$LanguageCode' => 'The language of the input documents. You can specify any of the primary languages supported by Amazon Comprehend. If your request includes the endpoint for a custom entity recognition model, Amazon Comprehend uses the language of your custom model, and it ignores any language code that you specify here.
All input documents must be in the same language.
', 'DetectKeyPhrasesRequest$LanguageCode' => 'The language of the input documents. You can specify any of the primary languages supported by Amazon Comprehend. All documents must be in the same language.
', 'DetectPiiEntitiesRequest$LanguageCode' => 'The language of the input documents. Currently, English is the only valid language.
', 'DetectSentimentRequest$LanguageCode' => 'The language of the input documents. You can specify any of the primary languages supported by Amazon Comprehend. All documents must be in the same language.
', 'DetectTargetedSentimentRequest$LanguageCode' => 'The language of the input documents. Currently, English is the only supported language.
', 'DocumentClassifierProperties$LanguageCode' => 'The language code for the language of the documents that the classifier was trained on.
', 'EntitiesDetectionJobProperties$LanguageCode' => 'The language code of the input documents.
', 'EntityRecognizerProperties$LanguageCode' => 'The language of the input documents. All documents must be in the same language. Only English ("en") is currently supported.
', 'EventsDetectionJobProperties$LanguageCode' => 'The language code of the input documents.
', 'KeyPhrasesDetectionJobProperties$LanguageCode' => 'The language code of the input documents.
', 'PiiEntitiesDetectionJobProperties$LanguageCode' => 'The language code of the input documents.
', 'SentimentDetectionJobProperties$LanguageCode' => 'The language code of the input documents.
', 'StartEntitiesDetectionJobRequest$LanguageCode' => 'The language of the input documents. All documents must be in the same language. You can specify any of the languages supported by Amazon Comprehend. If custom entities recognition is used, this parameter is ignored and the language used for training the model is used instead.
', 'StartEventsDetectionJobRequest$LanguageCode' => 'The language code of the input documents.
', 'StartKeyPhrasesDetectionJobRequest$LanguageCode' => 'The language of the input documents. You can specify any of the primary languages supported by Amazon Comprehend. All documents must be in the same language.
', 'StartPiiEntitiesDetectionJobRequest$LanguageCode' => 'The language of the input documents. Currently, English is the only valid language.
', 'StartSentimentDetectionJobRequest$LanguageCode' => 'The language of the input documents. You can specify any of the primary languages supported by Amazon Comprehend. All documents must be in the same language.
', 'StartTargetedSentimentDetectionJobRequest$LanguageCode' => 'The language of the input documents. Currently, English is the only supported language.
', 'TargetedSentimentDetectionJobProperties$LanguageCode' => 'The language code of the input documents.
', 'TaskConfig$LanguageCode' => 'Language code for the language that the model supports.
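A minimal synchronous sketch showing where the language code is supplied (the sample text is invented):

use Aws\Comprehend\ComprehendClient;

$client = new ComprehendClient(['region' => 'us-east-1', 'version' => '2017-11-27']);

// Most primary-language operations take an explicit LanguageCode.
$result = $client->detectSentiment([
    'Text' => 'The meal was excellent.', // invented sample text
    'LanguageCode' => 'en',
]);
echo $result['Sentiment'] . PHP_EOL; // e.g. POSITIVE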
', ], ], 'ListDatasetsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListDatasetsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListDocumentClassificationJobsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListDocumentClassificationJobsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListDocumentClassifierSummariesRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListDocumentClassifierSummariesResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListDocumentClassifiersRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListDocumentClassifiersResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListDominantLanguageDetectionJobsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListDominantLanguageDetectionJobsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListEndpointsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListEndpointsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListEntitiesDetectionJobsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListEntitiesDetectionJobsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListEntityRecognizerSummariesRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListEntityRecognizerSummariesResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListEntityRecognizersRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListEntityRecognizersResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListEventsDetectionJobsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListEventsDetectionJobsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListFlywheelIterationHistoryRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListFlywheelIterationHistoryResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListFlywheelsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListFlywheelsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListKeyPhrasesDetectionJobsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListKeyPhrasesDetectionJobsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListOfBlockReferences' => [ 'base' => NULL, 'refs' => [ 'Entity$BlockReferences' => 'A reference to each block for this entity. This field is empty for plain-text input.
', ], ], 'ListOfBlocks' => [ 'base' => NULL, 'refs' => [ 'DetectEntitiesResponse$Blocks' => 'Information about each block of text in the input document. Blocks are nested. A page block contains a block for each line of text, which contains a block for each word.
The Block content for a Word input document does not include a Geometry field.
The Block field is not present in the response for plain-text inputs.
', ], ], 'ListOfChildBlocks' => [ 'base' => NULL, 'refs' => [ 'BlockReference$ChildBlocks' => 'List of child blocks within this block.
', ], ], 'ListOfClasses' => [ 'base' => NULL, 'refs' => [ 'ClassifyDocumentResponse$Classes' => 'The classes used by the document being analyzed. These are used for multi-class trained models. Individual classes are mutually exclusive and each document is expected to have only a single class assigned to it. For example, an animal can be a dog or a cat, but not both at the same time.
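A sketch of reading the mutually exclusive classes from a real-time call; the endpoint ARN and text are placeholders:

use Aws\Comprehend\ComprehendClient;

$client = new ComprehendClient(['region' => 'us-east-1', 'version' => '2017-11-27']);

$result = $client->classifyDocument([
    'Text' => 'Document text to classify.', // invented sample text
    'EndpointArn' => 'arn:aws:comprehend:us-east-1:111122223333:document-classifier-endpoint/example', // placeholder
]);
foreach ($result['Classes'] as $class) {
    // One score per mutually exclusive class; the highest wins.
    echo $class['Name'] . ': ' . $class['Score'] . PHP_EOL;
}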
', ], ], 'ListOfDescriptiveMentionIndices' => [ 'base' => NULL, 'refs' => [ 'TargetedSentimentEntity$DescriptiveMentionIndex' => 'One or more indices into the Mentions array that provide the best name for the entity group.
', ], ], 'ListOfDetectDominantLanguageResult' => [ 'base' => NULL, 'refs' => [ 'BatchDetectDominantLanguageResponse$ResultList' => 'A list of objects containing the results of the operation. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If all of the documents contain an error, the ResultList is empty.
', ], ], 'ListOfDetectEntitiesResult' => [ 'base' => NULL, 'refs' => [ 'BatchDetectEntitiesResponse$ResultList' => 'A list of objects containing the results of the operation. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If all of the documents contain an error, the ResultList is empty.
', ], ], 'ListOfDetectKeyPhrasesResult' => [ 'base' => NULL, 'refs' => [ 'BatchDetectKeyPhrasesResponse$ResultList' => 'A list of objects containing the results of the operation. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If all of the documents contain an error, the ResultList is empty.
', ], ], 'ListOfDetectSentimentResult' => [ 'base' => NULL, 'refs' => [ 'BatchDetectSentimentResponse$ResultList' => 'A list of objects containing the results of the operation. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If all of the documents contain an error, the ResultList is empty.
', ], ], 'ListOfDetectSyntaxResult' => [ 'base' => NULL, 'refs' => [ 'BatchDetectSyntaxResponse$ResultList' => 'A list of objects containing the results of the operation. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If all of the documents contain an error, the ResultList is empty.
', ], ], 'ListOfDetectTargetedSentimentResult' => [ 'base' => NULL, 'refs' => [ 'BatchDetectTargetedSentimentResponse$ResultList' => 'A list of objects containing the results of the operation. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If all of the documents contain an error, the ResultList is empty.
', ], ], 'ListOfDocumentReadFeatureTypes' => [ 'base' => NULL, 'refs' => [ 'DocumentReaderConfig$FeatureTypes' => 'Specifies the type of Amazon Textract features to apply. If you chose TEXTRACT_ANALYZE_DOCUMENT as the read action, you must specify one or both of the following values:
TABLES - Returns information about any tables that are detected in the input document.
FORMS - Returns information and the data from any forms that are detected in the input document.
', ], ], 'ListOfDocumentType' => [ 'base' => NULL, 'refs' => [ 'ClassifyDocumentResponse$DocumentType' => 'The document type for each page in the input document. This field is present in the response only if your request includes the Byte parameter.
', 'DetectEntitiesResponse$DocumentType' => 'The document type for each page in the input document. This field is present in the response only if your request used the Byte parameter.
', ], ], 'ListOfDominantLanguages' => [ 'base' => NULL, 'refs' => [ 'BatchDetectDominantLanguageItemResult$Languages' => 'One or more DominantLanguage objects describing the dominant languages in the document.
', 'DetectDominantLanguageResponse$Languages' => 'Array of languages that Amazon Comprehend detected in the input text. The array is sorted in descending order of the score (the dominant language is always the first element in the array).
For each language, the response returns the RFC 5646 language code and the level of confidence that Amazon Comprehend has in the accuracy of its inference. For more information about RFC 5646, see Tags for Identifying Languages on the IETF Tools web site.
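Because each ResultList entry carries the Index of its source document and failures are reported separately, a batch call is typically consumed like this sketch (sample texts are invented):

use Aws\Comprehend\ComprehendClient;

$client = new ComprehendClient(['region' => 'us-east-1', 'version' => '2017-11-27']);

$result = $client->batchDetectDominantLanguage([
    'TextList' => ['Hello, how are you?', 'Bonjour tout le monde.'],
]);

// Successful documents, sorted by Index.
foreach ($result['ResultList'] as $item) {
    $top = $item['Languages'][0]; // dominant language is the first element
    echo $item['Index'] . ': ' . $top['LanguageCode'] . PHP_EOL;
}
// Documents that could not be processed appear in ErrorList instead.
foreach ($result['ErrorList'] as $error) {
    echo $error['Index'] . ' failed: ' . $error['ErrorMessage'] . PHP_EOL;
}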
', ], ], 'ListOfEntities' => [ 'base' => NULL, 'refs' => [ 'BatchDetectEntitiesItemResult$Entities' => 'One or more Entity objects, one for each entity detected in the document.
', 'DetectEntitiesResponse$Entities' => 'A collection of entities identified in the input text. For each entity, the response provides the entity text, entity type, where the entity text begins and ends, and the level of confidence that Amazon Comprehend has in the detection.
If your request uses a custom entity recognition model, Amazon Comprehend detects the entities that the model is trained to recognize. Otherwise, it detects the default entity types. For a list of default entity types, see Entities in the Comprehend Developer Guide.
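A sketch of a plain-text entities call; without a custom endpoint, the default entity types are returned (sample text is invented):

use Aws\Comprehend\ComprehendClient;

$client = new ComprehendClient(['region' => 'us-east-1', 'version' => '2017-11-27']);

$result = $client->detectEntities([
    'Text' => 'Jane moved to Seattle in 2019.', // invented sample text
    'LanguageCode' => 'en',
]);
foreach ($result['Entities'] as $entity) {
    // Each entity reports its type, text span, and confidence score.
    echo $entity['Type'] . ': ' . $entity['Text'] . ' (' . $entity['Score'] . ')' . PHP_EOL;
}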
', ], ], 'ListOfEntityLabels' => [ 'base' => NULL, 'refs' => [ 'ContainsPiiEntitiesResponse$Labels' => 'The labels used in the document being analyzed. Individual labels represent personally identifiable information (PII) entity types.
', ], ], 'ListOfErrors' => [ 'base' => NULL, 'refs' => [ 'ClassifyDocumentResponse$Errors' => 'Page-level errors that the system detected while processing the input document. The field is empty if the system encountered no errors.
', 'DetectEntitiesResponse$Errors' => 'Page-level errors that the system detected while processing the input document. The field is empty if the system encountered no errors.
', ], ], 'ListOfExtractedCharacters' => [ 'base' => NULL, 'refs' => [ 'DocumentMetadata$ExtractedCharacters' => 'List of pages in the document, with the number of characters extracted from each page.
', ], ], 'ListOfKeyPhrases' => [ 'base' => NULL, 'refs' => [ 'BatchDetectKeyPhrasesItemResult$KeyPhrases' => 'One or more KeyPhrase objects, one for each key phrase detected in the document.
', 'DetectKeyPhrasesResponse$KeyPhrases' => 'A collection of key phrases that Amazon Comprehend identified in the input text. For each key phrase, the response provides the text of the key phrase, where the key phrase begins and ends, and the level of confidence that Amazon Comprehend has in the accuracy of the detection.
', ], ], 'ListOfLabels' => [ 'base' => NULL, 'refs' => [ 'ClassifyDocumentResponse$Labels' => 'The labels used by the document being analyzed. These are used for multi-label trained models. Individual labels represent different categories that are related in some manner and are not mutually exclusive. For example, a movie can be just an action movie, or it can be an action movie, a science fiction movie, and a comedy, all at the same time.
', ], ], 'ListOfMentions' => [ 'base' => NULL, 'refs' => [ 'TargetedSentimentEntity$Mentions' => 'An array of mentions of the entity in the document. The array represents a co-reference group. See Co-reference group for an example.
', ], ], 'ListOfPiiEntities' => [ 'base' => NULL, 'refs' => [ 'DetectPiiEntitiesResponse$Entities' => 'A collection of PII entities identified in the input text. For each entity, the response provides the entity type, where the entity text begins and ends, and the level of confidence that Amazon Comprehend has in the detection.
', ], ], 'ListOfPiiEntityTypes' => [ 'base' => NULL, 'refs' => [ 'RedactionConfig$PiiEntityTypes' => 'An array of the types of PII entities that Amazon Comprehend detects in the input text for your request.
', ], ], 'ListOfRelationships' => [ 'base' => NULL, 'refs' => [ 'Block$Relationships' => 'A list of child blocks of the current block. For example, a LINE object has child blocks for each WORD block that\'s part of the line of text.
', ], ], 'ListOfSyntaxTokens' => [ 'base' => NULL, 'refs' => [ 'BatchDetectSyntaxItemResult$SyntaxTokens' => 'The syntax tokens for the words in the document, one token for each word.
', 'DetectSyntaxResponse$SyntaxTokens' => 'A collection of syntax tokens describing the text. For each token, the response provides the text, the token type, where the text begins and ends, and the level of confidence that Amazon Comprehend has that the token is correct. For a list of token types, see Syntax in the Comprehend Developer Guide.
', ], ], 'ListOfTargetedSentimentEntities' => [ 'base' => NULL, 'refs' => [ 'BatchDetectTargetedSentimentItemResult$Entities' => 'An array of targeted sentiment entities.
', 'DetectTargetedSentimentResponse$Entities' => 'Targeted sentiment analysis for each of the entities identified in the input text.
', ], ], 'ListOfWarnings' => [ 'base' => NULL, 'refs' => [ 'ClassifyDocumentResponse$Warnings' => 'Warnings detected while processing the input document. The response includes a warning if there is a mismatch between the input document type and the model type associated with the endpoint that you specified. The response can also include warnings for individual pages that have a mismatch.
The field is empty if the system generated no warnings.
', ], ], 'ListPiiEntitiesDetectionJobsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListPiiEntitiesDetectionJobsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListSentimentDetectionJobsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListSentimentDetectionJobsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListTagsForResourceRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListTagsForResourceResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListTargetedSentimentDetectionJobsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListTargetedSentimentDetectionJobsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListTopicsDetectionJobsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListTopicsDetectionJobsResponse' => [ 'base' => NULL, 'refs' => [], ], 'MaskCharacter' => [ 'base' => NULL, 'refs' => [ 'RedactionConfig$MaskCharacter' => 'A character that replaces each character in the redacted PII entity.
', ], ], 'MaxResultsInteger' => [ 'base' => NULL, 'refs' => [ 'ListDatasetsRequest$MaxResults' => 'Maximum number of results to return in a response. The default is 100.
', 'ListDocumentClassificationJobsRequest$MaxResults' => 'The maximum number of results to return in each page. The default is 100.
', 'ListDocumentClassifierSummariesRequest$MaxResults' => 'The maximum number of results to return on each page. The default is 100.
', 'ListDocumentClassifiersRequest$MaxResults' => 'The maximum number of results to return in each page. The default is 100.
', 'ListDominantLanguageDetectionJobsRequest$MaxResults' => 'The maximum number of results to return in each page. The default is 100.
', 'ListEndpointsRequest$MaxResults' => 'The maximum number of results to return in each page. The default is 100.
', 'ListEntitiesDetectionJobsRequest$MaxResults' => 'The maximum number of results to return in each page. The default is 100.
', 'ListEntityRecognizerSummariesRequest$MaxResults' => 'The maximum number of results to return on each page. The default is 100.
', 'ListEntityRecognizersRequest$MaxResults' => 'The maximum number of results to return on each page. The default is 100.
', 'ListEventsDetectionJobsRequest$MaxResults' => 'The maximum number of results to return in each page.
', 'ListFlywheelIterationHistoryRequest$MaxResults' => 'Maximum number of iteration history results to return.
', 'ListFlywheelsRequest$MaxResults' => 'Maximum number of results to return in a response. The default is 100.
', 'ListKeyPhrasesDetectionJobsRequest$MaxResults' => 'The maximum number of results to return in each page. The default is 100.
', 'ListPiiEntitiesDetectionJobsRequest$MaxResults' => 'The maximum number of results to return in each page.
', 'ListSentimentDetectionJobsRequest$MaxResults' => 'The maximum number of results to return in each page. The default is 100.
', 'ListTargetedSentimentDetectionJobsRequest$MaxResults' => 'The maximum number of results to return in each page. The default is 100.
', 'ListTopicsDetectionJobsRequest$MaxResults' => 'The maximum number of results to return in each page. The default is 100.
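MaxResults pairs with NextToken for paging. A sketch that walks every page (the page size is an arbitrary placeholder):

use Aws\Comprehend\ComprehendClient;

$client = new ComprehendClient(['region' => 'us-east-1', 'version' => '2017-11-27']);

// Page through all document classification jobs, 25 at a time.
$params = ['MaxResults' => 25];
do {
    $page = $client->listDocumentClassificationJobs($params);
    foreach ($page['DocumentClassificationJobPropertiesList'] as $props) {
        echo $props['JobName'] . PHP_EOL;
    }
    $params['NextToken'] = $page['NextToken'] ?? null;
} while (!empty($params['NextToken']));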
', ], ], 'MentionSentiment' => [ 'base' => 'Contains the sentiment and sentiment score for one mention of an entity.
For more information about targeted sentiment, see Targeted sentiment.
', 'refs' => [ 'TargetedSentimentMention$MentionSentiment' => 'Contains the sentiment and sentiment score for the mention.
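A sketch of walking the co-reference groups and the per-mention sentiment (sample text is invented):

use Aws\Comprehend\ComprehendClient;

$client = new ComprehendClient(['region' => 'us-east-1', 'version' => '2017-11-27']);

$result = $client->detectTargetedSentiment([
    'Text' => 'The screen is great, but the battery disappoints.', // invented sample text
    'LanguageCode' => 'en',
]);
foreach ($result['Entities'] as $entity) {
    // Each entity group holds one or more mentions, each with its own sentiment.
    foreach ($entity['Mentions'] as $mention) {
        echo $mention['Text'] . ': ' . $mention['MentionSentiment']['Sentiment'] . PHP_EOL;
    }
}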
', ], ], 'ModelStatus' => [ 'base' => NULL, 'refs' => [ 'DocumentClassifierFilter$Status' => 'Filters the list of classifiers based on status.
', 'DocumentClassifierProperties$Status' => 'The status of the document classifier. If the status is TRAINED the classifier is ready to use. If the status is TRAINED_WITH_WARNINGS the classifier training succeeded, but you should review the warnings returned in the CreateDocumentClassifier response.
If the status is FAILED you can see additional information about why the classifier wasn\'t trained in the Message field.
', 'DocumentClassifierSummary$LatestVersionStatus' => 'Provides the status of the latest document classifier version.
', 'EntityRecognizerFilter$Status' => 'The status of an entity recognizer.
', 'EntityRecognizerProperties$Status' => 'Provides the status of the entity recognizer.
', 'EntityRecognizerSummary$LatestVersionStatus' => 'Provides the status of the latest entity recognizer version.
', ], ], 'ModelType' => [ 'base' => NULL, 'refs' => [ 'CreateFlywheelRequest$ModelType' => 'The model type.
', 'FlywheelProperties$ModelType' => 'Model type of the flywheel\'s model.
', 'FlywheelSummary$ModelType' => 'Model type of the flywheel\'s model.
', ], ], 'NumberOfDocuments' => [ 'base' => NULL, 'refs' => [ 'DatasetProperties$NumberOfDocuments' => 'The number of documents in the dataset.
', ], ], 'NumberOfTopicsInteger' => [ 'base' => NULL, 'refs' => [ 'StartTopicsDetectionJobRequest$NumberOfTopics' => 'The number of topics to detect.
', ], ], 'OutputDataConfig' => [ 'base' => 'Provides configuration parameters for the output of inference jobs.
', 'refs' => [ 'DocumentClassificationJobProperties$OutputDataConfig' => 'The output data configuration that you supplied when you created the document classification job.
', 'DominantLanguageDetectionJobProperties$OutputDataConfig' => 'The output data configuration that you supplied when you created the dominant language detection job.
', 'EntitiesDetectionJobProperties$OutputDataConfig' => 'The output data configuration that you supplied when you created the entities detection job.
', 'EventsDetectionJobProperties$OutputDataConfig' => 'The output data configuration that you supplied when you created the events detection job.
', 'KeyPhrasesDetectionJobProperties$OutputDataConfig' => 'The output data configuration that you supplied when you created the key phrases detection job.
', 'SentimentDetectionJobProperties$OutputDataConfig' => 'The output data configuration that you supplied when you created the sentiment detection job.
', 'StartDocumentClassificationJobRequest$OutputDataConfig' => 'Specifies where to send the output files.
', 'StartDominantLanguageDetectionJobRequest$OutputDataConfig' => 'Specifies where to send the output files.
', 'StartEntitiesDetectionJobRequest$OutputDataConfig' => 'Specifies where to send the output files.
', 'StartEventsDetectionJobRequest$OutputDataConfig' => 'Specifies where to send the output files.
', 'StartKeyPhrasesDetectionJobRequest$OutputDataConfig' => 'Specifies where to send the output files.
', 'StartPiiEntitiesDetectionJobRequest$OutputDataConfig' => 'Provides configuration parameters for the output of PII entity detection jobs.
', 'StartSentimentDetectionJobRequest$OutputDataConfig' => 'Specifies where to send the output files.
', 'StartTargetedSentimentDetectionJobRequest$OutputDataConfig' => 'Specifies where to send the output files.
', 'StartTopicsDetectionJobRequest$OutputDataConfig' => 'Specifies where to send the output files. The output is a compressed archive with two files, topic-terms.csv that lists the terms associated with each topic, and doc-topics.csv that lists the documents associated with each topic.
', 'TopicsDetectionJobProperties$OutputDataConfig' => 'The output data configuration supplied when you created the topic detection job.
', ], ], 'PageBasedErrorCode' => [ 'base' => NULL, 'refs' => [ 'ErrorsListItem$ErrorCode' => 'Error code for the cause of the error.
', ], ], 'PageBasedWarningCode' => [ 'base' => NULL, 'refs' => [ 'WarningsListItem$WarnCode' => 'The type of warning.
', ], ], 'PartOfSpeechTag' => [ 'base' => 'Identifies the part of speech represented by the token and gives the confidence that Amazon Comprehend has that the part of speech was correctly identified. For more information about the parts of speech that Amazon Comprehend can identify, see Syntax in the Comprehend Developer Guide.
', 'refs' => [ 'SyntaxToken$PartOfSpeech' => 'Provides the part of speech label and the confidence level that Amazon Comprehend has that the part of speech was correctly identified. For more information, see Syntax in the Comprehend Developer Guide.
', ], ], 'PartOfSpeechTagType' => [ 'base' => NULL, 'refs' => [ 'PartOfSpeechTag$Tag' => 'Identifies the part of speech that the token represents.
', ], ], 'PiiEntitiesDetectionJobFilter' => [ 'base' => 'Provides information for filtering a list of PII entity detection jobs.
', 'refs' => [ 'ListPiiEntitiesDetectionJobsRequest$Filter' => 'Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
', ], ], 'PiiEntitiesDetectionJobProperties' => [ 'base' => 'Provides information about a PII entities detection job.
', 'refs' => [ 'DescribePiiEntitiesDetectionJobResponse$PiiEntitiesDetectionJobProperties' => NULL, 'PiiEntitiesDetectionJobPropertiesList$member' => NULL, ], ], 'PiiEntitiesDetectionJobPropertiesList' => [ 'base' => NULL, 'refs' => [ 'ListPiiEntitiesDetectionJobsResponse$PiiEntitiesDetectionJobPropertiesList' => 'A list containing the properties of each job that is returned.
', ], ], 'PiiEntitiesDetectionMaskMode' => [ 'base' => NULL, 'refs' => [ 'RedactionConfig$MaskMode' => 'Specifies whether the PII entity is redacted with the mask character or the entity type.
', ], ], 'PiiEntitiesDetectionMode' => [ 'base' => NULL, 'refs' => [ 'PiiEntitiesDetectionJobProperties$Mode' => 'Specifies whether the output provides the locations (offsets) of PII entities or a file in which PII entities are redacted.
', 'StartPiiEntitiesDetectionJobRequest$Mode' => 'Specifies whether the output provides the locations (offsets) of PII entities or a file in which PII entities are redacted.
', ], ], 'PiiEntity' => [ 'base' => 'Provides information about a PII entity.
', 'refs' => [ 'ListOfPiiEntities$member' => NULL, ], ], 'PiiEntityType' => [ 'base' => NULL, 'refs' => [ 'EntityLabel$Name' => 'The name of the label.
', 'ListOfPiiEntityTypes$member' => NULL, 'PiiEntity$Type' => 'The entity\'s type.
', ], ], 'PiiOutputDataConfig' => [ 'base' => 'Provides configuration parameters for the output of PII entity detection jobs.
', 'refs' => [ 'PiiEntitiesDetectionJobProperties$OutputDataConfig' => 'The output data configuration that you supplied when you created the PII entities detection job.
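A sketch wiring Mode, RedactionConfig, and the output location together for a redaction-only PII job; the role ARN and bucket are placeholders:

use Aws\Comprehend\ComprehendClient;

$client = new ComprehendClient(['region' => 'us-east-1', 'version' => '2017-11-27']);

$client->startPiiEntitiesDetectionJob([
    'JobName' => 'example-pii-redaction-job',
    'Mode' => 'ONLY_REDACTION',
    // Required in ONLY_REDACTION mode; masks names and addresses with '*'.
    'RedactionConfig' => [
        'PiiEntityTypes' => ['NAME', 'ADDRESS'],
        'MaskMode' => 'MASK',
        'MaskCharacter' => '*',
    ],
    'LanguageCode' => 'en',
    'DataAccessRoleArn' => 'arn:aws:iam::111122223333:role/example', // placeholder ARN
    'InputDataConfig' => ['S3Uri' => 's3://example-bucket/input/'],  // placeholder bucket
    'OutputDataConfig' => ['S3Uri' => 's3://example-bucket/output/'],
]);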
', ], ], 'Point' => [ 'base' => 'The X and Y coordinates of a point on a document page.
For additional information, see Point in the Amazon Textract API reference.
', 'refs' => [ 'Polygon$member' => NULL, ], ], 'Policy' => [ 'base' => NULL, 'refs' => [ 'CreateDocumentClassifierRequest$ModelPolicy' => 'The resource-based policy to attach to your custom document classifier model. You can use this policy to allow another Amazon Web Services account to import your custom model.
Provide your policy as a JSON body that you enter as a UTF-8 encoded string without line breaks. To provide valid JSON, enclose the attribute names and values in double quotes. If the JSON body is also enclosed in double quotes, then you must escape the double quotes that are inside the policy:
"{\\"attribute\\": \\"value\\", \\"attribute\\": [\\"value\\"]}"
To avoid escaping quotes, you can use single quotes to enclose the policy and double quotes to enclose the JSON names and values:
\'{"attribute": "value", "attribute": ["value"]}\'
', 'CreateEntityRecognizerRequest$ModelPolicy' => 'The JSON resource-based policy to attach to your custom entity recognizer model. You can use this policy to allow another Amazon Web Services account to import your custom model.
Provide your JSON as a UTF-8 encoded string without line breaks. To provide valid JSON for your policy, enclose the attribute names and values in double quotes. If the JSON body is also enclosed in double quotes, then you must escape the double quotes that are inside the policy:
"{\\"attribute\\": \\"value\\", \\"attribute\\": [\\"value\\"]}"
To avoid escaping quotes, you can use single quotes to enclose the policy and double quotes to enclose the JSON names and values:
\'{"attribute": "value", "attribute": ["value"]}\'
', 'DescribeResourcePolicyResponse$ResourcePolicy' => 'The JSON body of the resource-based policy.
', 'PutResourcePolicyRequest$ResourcePolicy' => 'The JSON resource-based policy to attach to your custom model. Provide your JSON as a UTF-8 encoded string without line breaks. To provide valid JSON for your policy, enclose the attribute names and values in double quotes. If the JSON body is also enclosed in double quotes, then you must escape the double quotes that are inside the policy:
"{\\"attribute\\": \\"value\\", \\"attribute\\": [\\"value\\"]}"
To avoid escaping quotes, you can use single quotes to enclose the policy and double quotes to enclose the JSON names and values:
\'{"attribute": "value", "attribute": ["value"]}\'
', ], ], 'PolicyRevisionId' => [ 'base' => NULL, 'refs' => [ 'DeleteResourcePolicyRequest$PolicyRevisionId' => 'The revision ID of the policy to delete.
', 'DescribeResourcePolicyResponse$PolicyRevisionId' => 'The revision ID of the policy. Each time you modify a policy, Amazon Comprehend assigns a new revision ID, and it deletes the prior version of the policy.
', 'PutResourcePolicyRequest$PolicyRevisionId' => 'The revision ID that Amazon Comprehend assigned to the policy that you are updating. If you are creating a new policy that has no prior version, don\'t use this parameter. Amazon Comprehend creates the revision ID for you.
', 'PutResourcePolicyResponse$PolicyRevisionId' => 'The revision ID of the policy. Each time you modify a policy, Amazon Comprehend assigns a new revision ID, and it deletes the prior version of the policy.
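In PHP, the quoting rules above are easiest to satisfy by building the policy as an array and letting json_encode produce the string; every ARN and account ID below is a placeholder:

use Aws\Comprehend\ComprehendClient;

$client = new ComprehendClient(['region' => 'us-east-1', 'version' => '2017-11-27']);

// json_encode sidesteps hand-escaping the inner double quotes.
$policy = json_encode([
    'Version' => '2012-10-17',
    'Statement' => [[
        'Effect' => 'Allow',
        'Principal' => ['AWS' => 'arn:aws:iam::444455556666:root'], // placeholder account
        'Action' => 'comprehend:ImportModel',
        'Resource' => 'arn:aws:comprehend:us-east-1:111122223333:document-classifier/example', // placeholder
    ]],
]);

$result = $client->putResourcePolicy([
    'ResourceArn' => 'arn:aws:comprehend:us-east-1:111122223333:document-classifier/example', // placeholder
    'ResourcePolicy' => $policy,
]);
echo $result['PolicyRevisionId'] . PHP_EOL;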
', ], ], 'Polygon' => [ 'base' => NULL, 'refs' => [ 'Geometry$Polygon' => 'Within the bounding box, a fine-grained polygon around the recognized item.
', ], ], 'PutResourcePolicyRequest' => [ 'base' => NULL, 'refs' => [], ], 'PutResourcePolicyResponse' => [ 'base' => NULL, 'refs' => [], ], 'RedactionConfig' => [ 'base' => 'Provides configuration parameters for PII entity redaction.
', 'refs' => [ 'PiiEntitiesDetectionJobProperties$RedactionConfig' => 'Provides configuration parameters for PII entity redaction.
This parameter is required if you set the Mode parameter to ONLY_REDACTION. In that case, you must provide a RedactionConfig definition that includes the PiiEntityTypes parameter.
', 'StartPiiEntitiesDetectionJobRequest$RedactionConfig' => 'Provides configuration parameters for PII entity redaction.
This parameter is required if you set the Mode parameter to ONLY_REDACTION. In that case, you must provide a RedactionConfig definition that includes the PiiEntityTypes parameter.
', ], ], 'RelationshipType' => [ 'base' => NULL, 'refs' => [ 'RelationshipsListItem$Type' => 'Only supported relationship is a child relationship.
', ], ], 'RelationshipsListItem' => [ 'base' => 'List of child blocks for the current block.
', 'refs' => [ 'ListOfRelationships$member' => NULL, ], ], 'ResourceInUseException' => [ 'base' => 'The specified resource name is already in use. Use a different name and try your request again.
', 'refs' => [], ], 'ResourceLimitExceededException' => [ 'base' => 'The maximum number of resources per account has been exceeded. Review the resources, and then try your request again.
', 'refs' => [], ], 'ResourceNotFoundException' => [ 'base' => 'The specified resource ARN was not found. Check the ARN and try your request again.
', 'refs' => [], ], 'ResourceUnavailableException' => [ 'base' => 'The specified resource is not available. Check the resource and try your request again.
', 'refs' => [], ], 'S3Uri' => [ 'base' => NULL, 'refs' => [ 'AugmentedManifestsListItem$S3Uri' => 'The Amazon S3 location of the augmented manifest file.
', 'AugmentedManifestsListItem$AnnotationDataS3Uri' => 'The S3 prefix to the annotation files that are referred to in the augmented manifest file.
', 'AugmentedManifestsListItem$SourceDocumentsS3Uri' => 'The S3 prefix to the source files (PDFs) that are referred to in the augmented manifest file.
', 'DatasetAugmentedManifestsListItem$S3Uri' => 'The Amazon S3 location of the augmented manifest file.
', 'DatasetAugmentedManifestsListItem$AnnotationDataS3Uri' => 'The S3 prefix to the annotation files that are referred to in the augmented manifest file.
', 'DatasetAugmentedManifestsListItem$SourceDocumentsS3Uri' => 'The S3 prefix to the source files (PDFs) that are referred to in the augmented manifest file.
', 'DatasetDocumentClassifierInputDataConfig$S3Uri' => 'The Amazon S3 URI for the input data. The S3 bucket must be in the same Region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of input files.
For example, if you use the URI S3://bucketName/prefix, if the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
This parameter is required if you set DataFormat to COMPREHEND_CSV.
', 'DatasetEntityRecognizerAnnotations$S3Uri' => 'Specifies the Amazon S3 location where the training documents for an entity recognizer are located. The URI must be in the same Region as the API endpoint that you are calling.
', 'DatasetEntityRecognizerDocuments$S3Uri' => 'Specifies the Amazon S3 location where the documents for the dataset are located.
', 'DatasetEntityRecognizerEntityList$S3Uri' => 'Specifies the Amazon S3 location where the entity list is located.
', 'DatasetProperties$DatasetS3Uri' => 'The S3 URI where the dataset is stored.
', 'DocumentClassifierDocuments$S3Uri' => 'The S3 URI location of the training documents specified in the S3Uri CSV file.
', 'DocumentClassifierDocuments$TestS3Uri' => 'The S3 URI location of the test documents included in the TestS3Uri CSV file. This field is not required if you do not specify a test CSV file.
', 'DocumentClassifierInputDataConfig$S3Uri' => 'The Amazon S3 URI for the input data. The S3 bucket must be in the same Region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of input files.
For example, if you use the URI S3://bucketName/prefix, if the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
This parameter is required if you set DataFormat to COMPREHEND_CSV.
', 'DocumentClassifierInputDataConfig$TestS3Uri' => 'This specifies the Amazon S3 location where the test annotations for an entity recognizer are located. The URI must be in the same Amazon Web Services Region as the API endpoint that you are calling.
', 'DocumentClassifierOutputDataConfig$S3Uri' => 'When you use the OutputDataConfig
object while creating a custom classifier, you specify the Amazon S3 location where you want to write the confusion matrix and other output files. The URI must be in the same Region as the API endpoint that you are calling. The location is used as the prefix for the actual location of this output file.
When the custom classifier job is finished, the service creates the output file in a directory specific to the job. The S3Uri
field contains the location of the output file, called output.tar.gz
. It is a compressed archive that contains the confusion matrix.
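// Illustrative sketch (not part of the generated service description): wiring the
// input and output S3 locations together when training a classifier with the AWS
// SDK for PHP. The bucket names, role ARN, and classifier name are hypothetical.
/*
use Aws\Comprehend\ComprehendClient;

$client = new ComprehendClient(['region' => 'us-east-1', 'version' => 'latest']);

$client->createDocumentClassifier([
    'DocumentClassifierName' => 'demo-classifier',                          // hypothetical name
    'DataAccessRoleArn'      => 'arn:aws:iam::111122223333:role/demo-role', // hypothetical role
    'LanguageCode'           => 'en',
    'InputDataConfig'        => [
        'DataFormat' => 'COMPREHEND_CSV',          // S3Uri is required for this format
        'S3Uri'      => 's3://demo-bucket/train/', // a prefix: every matching file is used as input
    ],
    'OutputDataConfig'       => [
        'S3Uri' => 's3://demo-bucket/output/',     // the confusion matrix is written here as output.tar.gz
    ],
]);
*/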
The Amazon S3 prefix for the data lake location of the flywheel statistics.
', 'EntityRecognizerAnnotations$S3Uri' => 'Specifies the Amazon S3 location where the annotations for an entity recognizer are located. The URI must be in the same Region as the API endpoint that you are calling.
', 'EntityRecognizerAnnotations$TestS3Uri' => 'Specifies the Amazon S3 location where the test annotations for an entity recognizer are located. The URI must be in the same Region as the API endpoint that you are calling.
', 'EntityRecognizerDocuments$S3Uri' => 'Specifies the Amazon S3 location where the training documents for an entity recognizer are located. The URI must be in the same Region as the API endpoint that you are calling.
', 'EntityRecognizerDocuments$TestS3Uri' => 'Specifies the Amazon S3 location where the test documents for an entity recognizer are located. The URI must be in the same Amazon Web Services Region as the API endpoint that you are calling.
', 'EntityRecognizerEntityList$S3Uri' => 'Specifies the Amazon S3 location where the entity list is located. The URI must be in the same Region as the API endpoint that you are calling.
', 'EntityRecognizerOutputDataConfig$FlywheelStatsS3Prefix' => 'The Amazon S3 prefix for the data lake location of the flywheel statistics.
', 'FlywheelIterationProperties$EvaluationManifestS3Prefix' => '', 'FlywheelProperties$DataLakeS3Uri' => 'Amazon S3 URI of the data lake location.
', 'FlywheelSummary$DataLakeS3Uri' => 'Amazon S3 URI of the data lake location.
', 'InputDataConfig$S3Uri' => 'The Amazon S3 URI for the input data. The URI must be in same Region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix
and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
When you use the OutputDataConfig
object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same Region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the topic detection job is finished, the service creates an output file in a directory specific to the job. The S3Uri
field contains the location of the output file, called output.tar.gz
. It is a compressed archive that contains the output of the operation.
For a PII entity detection job, the output file is plain text, not a compressed archive. The output file name is the same as the input file, with .out
appended at the end.
When you use the PiiOutputDataConfig
object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data.
For a PII entity detection job, the output file is plain text, not a compressed archive. The output file name is the same as the input file, with .out
appended at the end.
The ID number for a security group on an instance of your private VPC. Security groups on your VPC serve as a virtual firewall to control inbound and outbound traffic and provide security for the resources that you’ll be accessing on the VPC. This ID number is preceded by "sg-", for instance: "sg-03b388029b0a285ea". For more information, see Security Groups for your VPC.
', ], ], 'SemiStructuredDocumentBlob' => [ 'base' => NULL, 'refs' => [ 'ClassifyDocumentRequest$Bytes' => 'Use the Bytes
parameter to input a text, PDF, Word or image file. You can also use the Bytes
parameter to input an Amazon Textract DetectDocumentText
or AnalyzeDocument
output file.
Provide the input document as a sequence of base64-encoded bytes. If your code uses an Amazon Web Services SDK to classify documents, the SDK may encode the document file bytes for you.
The maximum length of this field depends on the input document type. For details, see Inputs for real-time custom analysis in the Comprehend Developer Guide.
If you use the Bytes
parameter, do not use the Text
parameter.
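// Illustrative sketch (the endpoint ARN and file name are hypothetical): passing a
// semi-structured document to ClassifyDocument through the Bytes parameter with the
// AWS SDK for PHP, which base64-encodes blob parameters for you.
/*
use Aws\Comprehend\ComprehendClient;

$client = new ComprehendClient(['region' => 'us-east-1', 'version' => 'latest']);

$result = $client->classifyDocument([
    'EndpointArn' => 'arn:aws:comprehend:us-east-1:111122223333:document-classifier-endpoint/demo', // hypothetical
    'Bytes'       => file_get_contents('letter.pdf'),   // do not set Text when Bytes is used
]);

foreach ($result['Classes'] as $class) {
    echo $class['Name'] . ' => ' . $class['Score'] . PHP_EOL;   // predicted classes with confidence scores
}
*/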
This field applies only when you use a custom entity recognition model that was trained with PDF annotations. For other cases, enter your text input in the Text
field.
Use the Bytes
parameter to input a text, PDF, Word or image file. Using a plain-text file in the Bytes
parameter is equivalent to using the Text
parameter (the Entities
field in the response is identical).
You can also use the Bytes
parameter to input an Amazon Textract DetectDocumentText
or AnalyzeDocument
output file.
Provide the input document as a sequence of base64-encoded bytes. If your code uses an Amazon Web Services SDK to detect entities, the SDK may encode the document file bytes for you.
The maximum length of this field depends on the input document type. For details, see Inputs for real-time custom analysis in the Comprehend Developer Guide.
If you use the Bytes
parameter, do not use the Text
parameter.
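// Illustrative sketch along the same lines for DetectEntities (the endpoint ARN and
// file name are hypothetical): Bytes requires an endpoint for a custom model that
// was trained with PDF annotations.
/*
use Aws\Comprehend\ComprehendClient;

$client = new ComprehendClient(['region' => 'us-east-1', 'version' => 'latest']);

$result = $client->detectEntities([
    'EndpointArn' => 'arn:aws:comprehend:us-east-1:111122223333:entity-recognizer-endpoint/demo', // hypothetical
    'Bytes'       => file_get_contents('scan.pdf'),   // a plain-text file here behaves like the Text parameter
]);

foreach ($result['Entities'] as $entity) {
    echo $entity['Type'] . ': ' . $entity['Text'] . PHP_EOL;
}
*/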
Provides information for filtering a list of sentiment detection jobs. For more information, see the ListSentimentDetectionJobs operation.
', 'refs' => [ 'ListSentimentDetectionJobsRequest$Filter' => 'Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
', ], ], 'SentimentDetectionJobProperties' => [ 'base' => 'Provides information about a sentiment detection job.
', 'refs' => [ 'DescribeSentimentDetectionJobResponse$SentimentDetectionJobProperties' => 'An object that contains the properties associated with a sentiment detection job.
', 'SentimentDetectionJobPropertiesList$member' => NULL, ], ], 'SentimentDetectionJobPropertiesList' => [ 'base' => NULL, 'refs' => [ 'ListSentimentDetectionJobsResponse$SentimentDetectionJobPropertiesList' => 'A list containing the properties of each job that is returned.
', ], ], 'SentimentScore' => [ 'base' => 'Describes the level of confidence that Amazon Comprehend has in the accuracy of its detection of sentiments.
', 'refs' => [ 'BatchDetectSentimentItemResult$SentimentScore' => 'The level of confidence that Amazon Comprehend has in the accuracy of its sentiment detection.
', 'DetectSentimentResponse$SentimentScore' => 'An object that lists the sentiments, and their corresponding confidence levels.
', 'MentionSentiment$SentimentScore' => NULL, ], ], 'SentimentType' => [ 'base' => NULL, 'refs' => [ 'BatchDetectSentimentItemResult$Sentiment' => 'The sentiment detected in the document.
', 'DetectSentimentResponse$Sentiment' => 'The inferred sentiment that Amazon Comprehend has the highest level of confidence in.
', 'MentionSentiment$Sentiment' => 'The sentiment of the mention.
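// Illustrative sketch: reading the inferred sentiment and the per-label confidence
// scores from DetectSentiment with the AWS SDK for PHP (the sample text is made up).
/*
use Aws\Comprehend\ComprehendClient;

$client = new ComprehendClient(['region' => 'us-east-1', 'version' => 'latest']);

$result = $client->detectSentiment([
    'Text'         => 'The service was quick and the staff were friendly.',
    'LanguageCode' => 'en',
]);

echo $result['Sentiment'] . PHP_EOL;                   // e.g. POSITIVE
echo $result['SentimentScore']['Positive'] . PHP_EOL;  // confidence that the POSITIVE label is correct
*/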
', ], ], 'Split' => [ 'base' => NULL, 'refs' => [ 'AugmentedManifestsListItem$Split' => 'The purpose of the data you\'ve provided in the augmented manifest. You can either train or test this data. If you don\'t specify, the default is train.
TRAIN - all of the documents in the manifest will be used for training. If no test documents are provided, Amazon Comprehend will automatically reserve a portion of the training documents for testing.
TEST - all of the documents in the manifest will be used for testing.
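// Illustrative sketch of an augmented manifest entry using Split (the S3 URI and
// attribute name are hypothetical).
/*
$augmentedManifest = [
    'S3Uri'          => 's3://demo-bucket/manifests/output.manifest', // hypothetical manifest location
    'AttributeNames' => ['demo-labeling-job'],                        // hypothetical attribute name
    'Split'          => 'TRAIN',   // the default; set to TEST to reserve these documents for testing
];
*/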
', ], ], 'StartDocumentClassificationJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'StartDocumentClassificationJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'StartDominantLanguageDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'StartDominantLanguageDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'StartEntitiesDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'StartEntitiesDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'StartEventsDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'StartEventsDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'StartFlywheelIterationRequest' => [ 'base' => NULL, 'refs' => [], ], 'StartFlywheelIterationResponse' => [ 'base' => NULL, 'refs' => [], ], 'StartKeyPhrasesDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'StartKeyPhrasesDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'StartPiiEntitiesDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'StartPiiEntitiesDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'StartSentimentDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'StartSentimentDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'StartTargetedSentimentDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'StartTargetedSentimentDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'StartTopicsDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'StartTopicsDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'StopDominantLanguageDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'StopDominantLanguageDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'StopEntitiesDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'StopEntitiesDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'StopEventsDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'StopEventsDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'StopKeyPhrasesDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'StopKeyPhrasesDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'StopPiiEntitiesDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'StopPiiEntitiesDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'StopSentimentDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'StopSentimentDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'StopTargetedSentimentDetectionJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'StopTargetedSentimentDetectionJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'StopTrainingDocumentClassifierRequest' => [ 'base' => NULL, 'refs' => [], ], 'StopTrainingDocumentClassifierResponse' => [ 'base' => NULL, 'refs' => [], ], 'StopTrainingEntityRecognizerRequest' => [ 'base' => NULL, 'refs' => [], ], 'StopTrainingEntityRecognizerResponse' => [ 'base' => NULL, 'refs' => [], ], 'String' => [ 'base' => NULL, 'refs' => [ 'BatchItemError$ErrorCode' => 'The numeric error code of the error.
', 'BatchItemError$ErrorMessage' => 'A text description of the error.
', 'BatchSizeLimitExceededException$Message' => NULL, 'Block$Id' => 'Unique identifier for the block.
', 'Block$Text' => 'The word or line of text extracted from the block.
', 'BlockReference$BlockId' => 'Unique identifier for the block.
', 'ChildBlock$ChildBlockId' => 'Unique identifier for the child block.
', 'ConcurrentModificationException$Message' => NULL, 'ContainsPiiEntitiesRequest$Text' => 'A UTF-8 text string. The maximum string size is 100 KB.
', 'DetectPiiEntitiesRequest$Text' => 'A UTF-8 text string. The maximum string size is 100 KB.
', 'DocumentClass$Name' => 'The name of the class.
', 'DocumentLabel$Name' => 'The name of the label.
', 'DominantLanguage$LanguageCode' => 'The RFC 5646 language code for the dominant language. For more information about RFC 5646, see Tags for Identifying Languages on the IETF Tools web site.
', 'Entity$Text' => 'The text of the entity.
', 'ErrorsListItem$ErrorMessage' => 'Text message explaining the reason for the error.
', 'InternalServerException$Message' => NULL, 'InvalidFilterException$Message' => NULL, 'InvalidRequestException$Message' => NULL, 'JobNotFoundException$Message' => NULL, 'KeyPhrase$Text' => 'The text of a key noun phrase.
', 'KmsKeyValidationException$Message' => NULL, 'ListDatasetsRequest$NextToken' => 'Identifies the next page of results to return.
', 'ListDatasetsResponse$NextToken' => 'Identifies the next page of results to return.
', 'ListDocumentClassificationJobsRequest$NextToken' => 'Identifies the next page of results to return.
', 'ListDocumentClassificationJobsResponse$NextToken' => 'Identifies the next page of results to return.
', 'ListDocumentClassifierSummariesRequest$NextToken' => 'Identifies the next page of results to return.
', 'ListDocumentClassifierSummariesResponse$NextToken' => 'Identifies the next page of results to return.
', 'ListDocumentClassifiersRequest$NextToken' => 'Identifies the next page of results to return.
', 'ListDocumentClassifiersResponse$NextToken' => 'Identifies the next page of results to return.
', 'ListDominantLanguageDetectionJobsRequest$NextToken' => 'Identifies the next page of results to return.
', 'ListDominantLanguageDetectionJobsResponse$NextToken' => 'Identifies the next page of results to return.
', 'ListEndpointsRequest$NextToken' => 'Identifies the next page of results to return.
', 'ListEndpointsResponse$NextToken' => 'Identifies the next page of results to return.
', 'ListEntitiesDetectionJobsRequest$NextToken' => 'Identifies the next page of results to return.
', 'ListEntitiesDetectionJobsResponse$NextToken' => 'Identifies the next page of results to return.
', 'ListEntityRecognizerSummariesRequest$NextToken' => 'Identifies the next page of results to return.
', 'ListEntityRecognizerSummariesResponse$NextToken' => 'Identifies the next page of results to return.
', 'ListEntityRecognizersRequest$NextToken' => 'Identifies the next page of results to return.
', 'ListEntityRecognizersResponse$NextToken' => 'Identifies the next page of results to return.
', 'ListEventsDetectionJobsRequest$NextToken' => 'Identifies the next page of results to return.
', 'ListEventsDetectionJobsResponse$NextToken' => 'Identifies the next page of results to return.
', 'ListFlywheelIterationHistoryRequest$NextToken' => 'Identifies the next page of results to return.
', 'ListFlywheelIterationHistoryResponse$NextToken' => 'Identifies the next page of results to return.
', 'ListFlywheelsRequest$NextToken' => 'Identifies the next page of results to return.
', 'ListFlywheelsResponse$NextToken' => 'Identifies the next page of results to return.
', 'ListKeyPhrasesDetectionJobsRequest$NextToken' => 'Identifies the next page of results to return.
', 'ListKeyPhrasesDetectionJobsResponse$NextToken' => 'Identifies the next page of results to return.
', 'ListPiiEntitiesDetectionJobsRequest$NextToken' => 'Identifies the next page of results to return.
', 'ListPiiEntitiesDetectionJobsResponse$NextToken' => 'Identifies the next page of results to return.
', 'ListSentimentDetectionJobsRequest$NextToken' => 'Identifies the next page of results to return.
', 'ListSentimentDetectionJobsResponse$NextToken' => 'Identifies the next page of results to return.
', 'ListTargetedSentimentDetectionJobsRequest$NextToken' => 'Identifies the next page of results to return.
', 'ListTargetedSentimentDetectionJobsResponse$NextToken' => 'Identifies the next page of results to return.
', 'ListTopicsDetectionJobsRequest$NextToken' => 'Identifies the next page of results to return.
', 'ListTopicsDetectionJobsResponse$NextToken' => 'Identifies the next page of results to return.
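// Illustrative sketch of NextToken-based pagination with the AWS SDK for PHP; the
// same pattern applies to every List* operation above.
/*
use Aws\Comprehend\ComprehendClient;

$client = new ComprehendClient(['region' => 'us-east-1', 'version' => 'latest']);

$nextToken = null;
do {
    $page = $client->listSentimentDetectionJobs($nextToken ? ['NextToken' => $nextToken] : []);
    foreach ($page['SentimentDetectionJobPropertiesList'] as $job) {
        echo $job['JobName'] . PHP_EOL;
    }
    $nextToken = $page['NextToken'] ?? null;   // no token means there are no more pages
} while ($nextToken);
*/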
', 'ResourceInUseException$Message' => NULL, 'ResourceLimitExceededException$Message' => NULL, 'ResourceNotFoundException$Message' => NULL, 'ResourceUnavailableException$Message' => NULL, 'StringList$member' => NULL, 'SyntaxToken$Text' => 'The word that was recognized in the source text.
', 'TargetedSentimentMention$Text' => 'The text in the document that identifies the entity.
', 'TextSizeLimitExceededException$Message' => NULL, 'TooManyRequestsException$Message' => NULL, 'TooManyTagKeysException$Message' => NULL, 'TooManyTagsException$Message' => NULL, 'UnsupportedLanguageException$Message' => NULL, 'WarningsListItem$WarnMessage' => 'Text message associated with the warning.
', ], ], 'StringList' => [ 'base' => NULL, 'refs' => [ 'RelationshipsListItem$Ids' => 'Identifiers of the child blocks.
', ], ], 'SubnetId' => [ 'base' => NULL, 'refs' => [ 'Subnets$member' => NULL, ], ], 'Subnets' => [ 'base' => NULL, 'refs' => [ 'VpcConfig$Subnets' => 'The ID for each subnet being used in your private VPC. This subnet is a subset of the range of IPv4 addresses used by the VPC and is specific to a given availability zone in the VPC’s Region. This ID number is preceded by "subnet-", for instance: "subnet-04ccf456919e69055". For more information, see VPCs and Subnets.
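// Illustrative sketch of a VpcConfig value combining the security group and subnet
// ID formats described above (the IDs are the sample values from this file).
/*
$vpcConfig = [
    'SecurityGroupIds' => ['sg-03b388029b0a285ea'],     // "sg-" prefixed IDs
    'Subnets'          => ['subnet-04ccf456919e69055'], // "subnet-" prefixed IDs
];
*/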
', ], ], 'SyntaxLanguageCode' => [ 'base' => NULL, 'refs' => [ 'BatchDetectSyntaxRequest$LanguageCode' => 'The language of the input documents. You can specify any of the following languages supported by Amazon Comprehend: German ("de"), English ("en"), Spanish ("es"), French ("fr"), Italian ("it"), or Portuguese ("pt"). All documents must be in the same language.
', 'DetectSyntaxRequest$LanguageCode' => 'The language code of the input documents. You can specify any of the following languages supported by Amazon Comprehend: German ("de"), English ("en"), Spanish ("es"), French ("fr"), Italian ("it"), or Portuguese ("pt").
', ], ], 'SyntaxToken' => [ 'base' => 'Represents a word in the input text that was recognized and assigned a part of speech. There is one syntax token record for each word in the source text.
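// Illustrative sketch: one SyntaxToken per word from DetectSyntax with the AWS SDK
// for PHP (the sample text is made up).
/*
use Aws\Comprehend\ComprehendClient;

$client = new ComprehendClient(['region' => 'us-east-1', 'version' => 'latest']);

$result = $client->detectSyntax([
    'Text'         => 'Amazon Comprehend analyzes text.',
    'LanguageCode' => 'en',
]);

foreach ($result['SyntaxTokens'] as $token) {
    echo $token['Text'] . ' (' . $token['PartOfSpeech']['Tag'] . ')' . PHP_EOL;
}
*/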
', 'refs' => [ 'ListOfSyntaxTokens$member' => NULL, ], ], 'Tag' => [ 'base' => 'A key-value pair that adds metadata to a resource used by Amazon Comprehend. For example, a tag with the key-value pair ‘Department’:’Sales’ might be added to a resource to indicate its use by a particular department.
', 'refs' => [ 'TagList$member' => NULL, ], ], 'TagKey' => [ 'base' => NULL, 'refs' => [ 'Tag$Key' => 'The initial part of a key-value pair that forms a tag associated with a given resource. For instance, if you want to show which resources are used by which departments, you might use “Department” as the key portion of the pair, with multiple possible values such as “sales,” “legal,” and “administration.”
', 'TagKeyList$member' => NULL, ], ], 'TagKeyList' => [ 'base' => NULL, 'refs' => [ 'UntagResourceRequest$TagKeys' => 'The initial part of a key-value pair that forms a tag being removed from a given resource. For example, a tag with "Sales" as the key might be added to a resource to indicate its use by the sales department. Keys must be unique and cannot be duplicated for a particular resource.
', ], ], 'TagList' => [ 'base' => NULL, 'refs' => [ 'CreateDatasetRequest$Tags' => 'Tags for the dataset.
', 'CreateDocumentClassifierRequest$Tags' => 'Tags to associate with the document classifier. A tag is a key-value pair that adds metadata to a resource used by Amazon Comprehend. For example, a tag with "Sales" as the key might be added to a resource to indicate its use by the sales department.
', 'CreateEndpointRequest$Tags' => 'Tags to associate with the endpoint. A tag is a key-value pair that adds metadata to the endpoint. For example, a tag with "Sales" as the key might be added to an endpoint to indicate its use by the sales department.
', 'CreateEntityRecognizerRequest$Tags' => 'Tags to associate with the entity recognizer. A tag is a key-value pair that adds metadata to a resource used by Amazon Comprehend. For example, a tag with "Sales" as the key might be added to a resource to indicate its use by the sales department.
', 'CreateFlywheelRequest$Tags' => 'The tags to associate with this flywheel.
', 'ImportModelRequest$Tags' => 'Tags to associate with the custom model that is created by this import. A tag is a key-value pair that adds metadata to a resource used by Amazon Comprehend. For example, a tag with "Sales" as the key might be added to a resource to indicate its use by the sales department.
', 'ListTagsForResourceResponse$Tags' => 'Tags associated with the Amazon Comprehend resource being queried. A tag is a key-value pair that adds metadata to a resource used by Amazon Comprehend. For example, a tag with "Sales" as the key might be added to a resource to indicate its use by the sales department.
', 'StartDocumentClassificationJobRequest$Tags' => 'Tags to associate with the document classification job. A tag is a key-value pair that adds metadata to a resource used by Amazon Comprehend. For example, a tag with "Sales" as the key might be added to a resource to indicate its use by the sales department.
', 'StartDominantLanguageDetectionJobRequest$Tags' => 'Tags to associate with the dominant language detection job. A tag is a key-value pair that adds metadata to a resource used by Amazon Comprehend. For example, a tag with "Sales" as the key might be added to a resource to indicate its use by the sales department.
', 'StartEntitiesDetectionJobRequest$Tags' => 'Tags to associate with the entities detection job. A tag is a key-value pair that adds metadata to a resource used by Amazon Comprehend. For example, a tag with "Sales" as the key might be added to a resource to indicate its use by the sales department.
', 'StartEventsDetectionJobRequest$Tags' => 'Tags to associate with the events detection job. A tag is a key-value pair that adds metadata to a resource used by Amazon Comprehend. For example, a tag with "Sales" as the key might be added to a resource to indicate its use by the sales department.
', 'StartKeyPhrasesDetectionJobRequest$Tags' => 'Tags to associate with the key phrases detection job. A tag is a key-value pair that adds metadata to a resource used by Amazon Comprehend. For example, a tag with "Sales" as the key might be added to a resource to indicate its use by the sales department.
', 'StartPiiEntitiesDetectionJobRequest$Tags' => 'Tags to associate with the PII entities detection job. A tag is a key-value pair that adds metadata to a resource used by Amazon Comprehend. For example, a tag with "Sales" as the key might be added to a resource to indicate its use by the sales department.
', 'StartSentimentDetectionJobRequest$Tags' => 'Tags to associate with the sentiment detection job. A tag is a key-value pair that adds metadata to a resource used by Amazon Comprehend. For example, a tag with "Sales" as the key might be added to a resource to indicate its use by the sales department.
', 'StartTargetedSentimentDetectionJobRequest$Tags' => 'Tags to associate with the targeted sentiment detection job. A tag is a key-value pair that adds metadata to a resource used by Amazon Comprehend. For example, a tag with "Sales" as the key might be added to a resource to indicate its use by the sales department.
', 'StartTopicsDetectionJobRequest$Tags' => 'Tags to associate with the topics detection job. A tag is a key-value pair that adds metadata to a resource used by Amazon Comprehend. For example, a tag with "Sales" as the key might be added to a resource to indicate its use by the sales department.
', 'TagResourceRequest$Tags' => 'Tags being associated with a specific Amazon Comprehend resource. There can be a maximum of 50 tags (both existing and pending) associated with a specific resource.
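// Illustrative sketch of attaching a Department:Sales tag with TagResource (the
// resource ARN is hypothetical).
/*
use Aws\Comprehend\ComprehendClient;

$client = new ComprehendClient(['region' => 'us-east-1', 'version' => 'latest']);

$client->tagResource([
    'ResourceArn' => 'arn:aws:comprehend:us-east-1:111122223333:document-classifier/demo', // hypothetical
    'Tags'        => [
        ['Key' => 'Department', 'Value' => 'Sales'],   // up to 50 tags per resource
    ],
]);
*/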
', ], ], 'TagResourceRequest' => [ 'base' => NULL, 'refs' => [], ], 'TagResourceResponse' => [ 'base' => NULL, 'refs' => [], ], 'TagValue' => [ 'base' => NULL, 'refs' => [ 'Tag$Value' => 'The second part of a key-value pair that forms a tag associated with a given resource. For instance, if you want to show which resources are used by which departments, you might use “Department” as the initial (key) portion of the pair, with a value of “sales” to indicate the sales department.
', ], ], 'TargetEventTypes' => [ 'base' => NULL, 'refs' => [ 'EventsDetectionJobProperties$TargetEventTypes' => 'The types of events that are detected by the job.
', 'StartEventsDetectionJobRequest$TargetEventTypes' => 'The types of events to detect in the input documents.
', ], ], 'TargetedSentimentDetectionJobFilter' => [ 'base' => 'Provides information for filtering a list of targeted sentiment detection jobs. For more information, see the ListTargetedSentimentDetectionJobs
operation.
Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
', ], ], 'TargetedSentimentDetectionJobProperties' => [ 'base' => 'Provides information about a targeted sentiment detection job.
', 'refs' => [ 'DescribeTargetedSentimentDetectionJobResponse$TargetedSentimentDetectionJobProperties' => 'An object that contains the properties associated with a targeted sentiment detection job.
', 'TargetedSentimentDetectionJobPropertiesList$member' => NULL, ], ], 'TargetedSentimentDetectionJobPropertiesList' => [ 'base' => NULL, 'refs' => [ 'ListTargetedSentimentDetectionJobsResponse$TargetedSentimentDetectionJobPropertiesList' => 'A list containing the properties of each job that is returned.
', ], ], 'TargetedSentimentEntity' => [ 'base' => 'Information about one of the entities found by targeted sentiment analysis.
For more information about targeted sentiment, see Targeted sentiment.
', 'refs' => [ 'ListOfTargetedSentimentEntities$member' => NULL, ], ], 'TargetedSentimentEntityType' => [ 'base' => NULL, 'refs' => [ 'TargetedSentimentMention$Type' => 'The type of the entity. Amazon Comprehend supports a variety of entity types.
', ], ], 'TargetedSentimentMention' => [ 'base' => 'Information about one mention of an entity. The mention information includes the location of the mention in the text and the sentiment of the mention.
For more information about targeted sentiment, see Targeted sentiment.
', 'refs' => [ 'ListOfMentions$member' => NULL, ], ], 'TaskConfig' => [ 'base' => 'Configuration about the custom classifier associated with the flywheel.
', 'refs' => [ 'CreateFlywheelRequest$TaskConfig' => 'Configuration about the custom classifier associated with the flywheel.
', 'FlywheelProperties$TaskConfig' => 'Configuration about the custom classifier associated with the flywheel.
', ], ], 'TextSizeLimitExceededException' => [ 'base' => 'The size of the input text exceeds the limit. Use a smaller document.
', 'refs' => [], ], 'Timestamp' => [ 'base' => NULL, 'refs' => [ 'DatasetFilter$CreationTimeAfter' => 'Filter the datasets to include datasets created after the specified time.
', 'DatasetFilter$CreationTimeBefore' => 'Filter the datasets to include datasets created before the specified time.
', 'DatasetProperties$CreationTime' => 'Creation time of the dataset.
', 'DatasetProperties$EndTime' => 'Time when the data from the dataset becomes available in the data lake.
', 'DescribeResourcePolicyResponse$CreationTime' => 'The time at which the policy was created.
', 'DescribeResourcePolicyResponse$LastModifiedTime' => 'The time at which the policy was last modified.
', 'DocumentClassificationJobFilter$SubmitTimeBefore' => 'Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
', 'DocumentClassificationJobFilter$SubmitTimeAfter' => 'Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
', 'DocumentClassificationJobProperties$SubmitTime' => 'The time that the document classification job was submitted for processing.
', 'DocumentClassificationJobProperties$EndTime' => 'The time that the document classification job completed.
', 'DocumentClassifierFilter$SubmitTimeBefore' => 'Filters the list of classifiers based on the time that the classifier was submitted for processing. Returns only classifiers submitted before the specified time. Classifiers are returned in ascending order, oldest to newest.
', 'DocumentClassifierFilter$SubmitTimeAfter' => 'Filters the list of classifiers based on the time that the classifier was submitted for processing. Returns only classifiers submitted after the specified time. Classifiers are returned in descending order, newest to oldest.
', 'DocumentClassifierProperties$SubmitTime' => 'The time that the document classifier was submitted for training.
', 'DocumentClassifierProperties$EndTime' => 'The time that training the document classifier completed.
', 'DocumentClassifierProperties$TrainingStartTime' => 'Indicates the time when training starts on document classifiers. You are billed for the time interval between this time and the value of TrainingEndTime.
', 'DocumentClassifierProperties$TrainingEndTime' => 'The time that training of the document classifier completed. You are billed for the time interval between this time and the value of TrainingStartTime.
', 'DocumentClassifierSummary$LatestVersionCreatedAt' => 'The time that the latest document classifier version was submitted for processing.
', 'DominantLanguageDetectionJobFilter$SubmitTimeBefore' => 'Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
', 'DominantLanguageDetectionJobFilter$SubmitTimeAfter' => 'Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
', 'DominantLanguageDetectionJobProperties$SubmitTime' => 'The time that the dominant language detection job was submitted for processing.
', 'DominantLanguageDetectionJobProperties$EndTime' => 'The time that the dominant language detection job completed.
', 'EndpointFilter$CreationTimeBefore' => 'Specifies a date before which the returned endpoint or endpoints were created.
', 'EndpointFilter$CreationTimeAfter' => 'Specifies a date after which the returned endpoint or endpoints were created.
', 'EndpointProperties$CreationTime' => 'The creation date and time of the endpoint.
', 'EndpointProperties$LastModifiedTime' => 'The date and time that the endpoint was last modified.
', 'EntitiesDetectionJobFilter$SubmitTimeBefore' => 'Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
', 'EntitiesDetectionJobFilter$SubmitTimeAfter' => 'Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
', 'EntitiesDetectionJobProperties$SubmitTime' => 'The time that the entities detection job was submitted for processing.
', 'EntitiesDetectionJobProperties$EndTime' => 'The time that the entities detection job completed.
', 'EntityRecognizerFilter$SubmitTimeBefore' => 'Filters the list of entities based on the time that the list was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in descending order, newest to oldest.
', 'EntityRecognizerFilter$SubmitTimeAfter' => 'Filters the list of entities based on the time that the list was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in ascending order, oldest to newest.
', 'EntityRecognizerProperties$SubmitTime' => 'The time that the recognizer was submitted for processing.
', 'EntityRecognizerProperties$EndTime' => 'The time that the recognizer creation completed.
', 'EntityRecognizerProperties$TrainingStartTime' => 'The time that training of the entity recognizer started.
', 'EntityRecognizerProperties$TrainingEndTime' => 'The time that training of the entity recognizer was completed.
', 'EntityRecognizerSummary$LatestVersionCreatedAt' => 'The time that the latest entity recognizer version was submitted for processing.
', 'EventsDetectionJobFilter$SubmitTimeBefore' => 'Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
', 'EventsDetectionJobFilter$SubmitTimeAfter' => 'Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
', 'EventsDetectionJobProperties$SubmitTime' => 'The time that the events detection job was submitted for processing.
', 'EventsDetectionJobProperties$EndTime' => 'The time that the events detection job completed.
', 'FlywheelFilter$CreationTimeAfter' => 'Filter the flywheels to include flywheels created after the specified time.
', 'FlywheelFilter$CreationTimeBefore' => 'Filter the flywheels to include flywheels created before the specified time.
', 'FlywheelIterationFilter$CreationTimeAfter' => 'Filter the flywheel iterations to include iterations created after the specified time.
', 'FlywheelIterationFilter$CreationTimeBefore' => 'Filter the flywheel iterations to include iterations created before the specified time.
', 'FlywheelIterationProperties$CreationTime' => 'The creation start time of the flywheel iteration.
', 'FlywheelIterationProperties$EndTime' => 'The completion time of this flywheel iteration.
', 'FlywheelProperties$CreationTime' => 'Creation time of the flywheel.
', 'FlywheelProperties$LastModifiedTime' => 'Last modified time for the flywheel.
', 'FlywheelSummary$CreationTime' => 'Creation time of the flywheel.
', 'FlywheelSummary$LastModifiedTime' => 'Last modified time for the flywheel.
', 'KeyPhrasesDetectionJobFilter$SubmitTimeBefore' => 'Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
', 'KeyPhrasesDetectionJobFilter$SubmitTimeAfter' => 'Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
', 'KeyPhrasesDetectionJobProperties$SubmitTime' => 'The time that the key phrases detection job was submitted for processing.
', 'KeyPhrasesDetectionJobProperties$EndTime' => 'The time that the key phrases detection job completed.
', 'PiiEntitiesDetectionJobFilter$SubmitTimeBefore' => 'Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
', 'PiiEntitiesDetectionJobFilter$SubmitTimeAfter' => 'Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
', 'PiiEntitiesDetectionJobProperties$SubmitTime' => 'The time that the PII entities detection job was submitted for processing.
', 'PiiEntitiesDetectionJobProperties$EndTime' => 'The time that the PII entities detection job completed.
', 'SentimentDetectionJobFilter$SubmitTimeBefore' => 'Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
', 'SentimentDetectionJobFilter$SubmitTimeAfter' => 'Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
', 'SentimentDetectionJobProperties$SubmitTime' => 'The time that the sentiment detection job was submitted for processing.
', 'SentimentDetectionJobProperties$EndTime' => 'The time that the sentiment detection job ended.
', 'TargetedSentimentDetectionJobFilter$SubmitTimeBefore' => 'Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
', 'TargetedSentimentDetectionJobFilter$SubmitTimeAfter' => 'Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
', 'TargetedSentimentDetectionJobProperties$SubmitTime' => 'The time that the targeted sentiment detection job was submitted for processing.
', 'TargetedSentimentDetectionJobProperties$EndTime' => 'The time that the targeted sentiment detection job ended.
', 'TopicsDetectionJobFilter$SubmitTimeBefore' => 'Filters the list of jobs based on the time that the job was submitted for processing. Only returns jobs submitted before the specified time. Jobs are returned in descending order, newest to oldest.
', 'TopicsDetectionJobFilter$SubmitTimeAfter' => 'Filters the list of jobs based on the time that the job was submitted for processing. Only returns jobs submitted after the specified time. Jobs are returned in ascending order, oldest to newest.
', 'TopicsDetectionJobProperties$SubmitTime' => 'The time that the topic detection job was submitted for processing.
', 'TopicsDetectionJobProperties$EndTime' => 'The time that the topic detection job was completed.
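// Illustrative sketch of a SubmitTimeAfter filter with the AWS SDK for PHP, which
// accepts DateTime instances for timestamp fields.
/*
use Aws\Comprehend\ComprehendClient;

$client = new ComprehendClient(['region' => 'us-east-1', 'version' => 'latest']);

$result = $client->listDocumentClassificationJobs([
    'Filter' => [
        'SubmitTimeAfter' => new \DateTime('-7 days'),   // only jobs submitted in the last week
    ],
]);
*/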
', ], ], 'TooManyRequestsException' => [ 'base' => 'The number of requests exceeds the limit. Resubmit your request later.
', 'refs' => [], ], 'TooManyTagKeysException' => [ 'base' => 'The request contains more tag keys than can be associated with a resource (50 tag keys per resource).
', 'refs' => [], ], 'TooManyTagsException' => [ 'base' => 'The request contains more tags than can be associated with a resource (50 tags per resource). The maximum number of tags includes both existing tags and those included in your current request.
', 'refs' => [], ], 'TopicsDetectionJobFilter' => [ 'base' => 'Provides information for filtering topic detection jobs. For more information, see the ListTopicsDetectionJobs operation.
', 'refs' => [ 'ListTopicsDetectionJobsRequest$Filter' => 'Filters the jobs that are returned. Jobs can be filtered on their name, status, or the date and time that they were submitted. You can set only one filter at a time.
', ], ], 'TopicsDetectionJobProperties' => [ 'base' => 'Provides information about a topic detection job.
', 'refs' => [ 'DescribeTopicsDetectionJobResponse$TopicsDetectionJobProperties' => 'The list of properties for the requested job.
', 'TopicsDetectionJobPropertiesList$member' => NULL, ], ], 'TopicsDetectionJobPropertiesList' => [ 'base' => NULL, 'refs' => [ 'ListTopicsDetectionJobsResponse$TopicsDetectionJobPropertiesList' => 'A list containing the properties of each job that is returned.
', ], ], 'UnsupportedLanguageException' => [ 'base' => 'Amazon Comprehend can\'t process the language of the input text. For custom entity recognition APIs, only English, Spanish, French, Italian, German, or Portuguese are accepted. For a list of supported languages, see Supported languages in the Comprehend Developer Guide.
', 'refs' => [], ], 'UntagResourceRequest' => [ 'base' => NULL, 'refs' => [], ], 'UntagResourceResponse' => [ 'base' => NULL, 'refs' => [], ], 'UpdateDataSecurityConfig' => [ 'base' => 'Data security configuration.
', 'refs' => [ 'UpdateFlywheelRequest$DataSecurityConfig' => 'Flywheel data security configuration.
', ], ], 'UpdateEndpointRequest' => [ 'base' => NULL, 'refs' => [], ], 'UpdateEndpointResponse' => [ 'base' => NULL, 'refs' => [], ], 'UpdateFlywheelRequest' => [ 'base' => NULL, 'refs' => [], ], 'UpdateFlywheelResponse' => [ 'base' => NULL, 'refs' => [], ], 'VersionName' => [ 'base' => NULL, 'refs' => [ 'CreateDocumentClassifierRequest$VersionName' => 'The version name given to the newly created classifier. Version names can have a maximum of 256 characters. Alphanumeric characters, hyphens (-) and underscores (_) are allowed. The version name must be unique among all models with the same classifier name in the Amazon Web Services account/Amazon Web Services Region.
', 'CreateEntityRecognizerRequest$VersionName' => 'The version name given to the newly created recognizer. Version names can be a maximum of 256 characters. Alphanumeric characters, hyphens (-) and underscores (_) are allowed. The version name must be unique among all models with the same recognizer name in the account/Region.
', 'DocumentClassifierProperties$VersionName' => 'The version name that you assigned to the document classifier.
', 'DocumentClassifierSummary$LatestVersionName' => 'The version name you assigned to the latest document classifier version.
', 'EntityRecognizerProperties$VersionName' => 'The version name you assigned to the entity recognizer.
', 'EntityRecognizerSummary$LatestVersionName' => 'The version name you assigned to the latest entity recognizer version.
', 'ImportModelRequest$VersionName' => 'The version name given to the custom model that is created by this import. Version names can have a maximum of 256 characters. Alphanumeric characters, hyphens (-) and underscores (_) are allowed. The version name must be unique among all models with the same classifier name in the account/Region.
', ], ], 'VpcConfig' => [ 'base' => 'Configuration parameters for an optional private Virtual Private Cloud (VPC) containing the resources you are using for the job. For more information, see Amazon VPC.
', 'refs' => [ 'CreateDocumentClassifierRequest$VpcConfig' => 'Configuration parameters for an optional private Virtual Private Cloud (VPC) containing the resources you are using for your custom classifier. For more information, see Amazon VPC.
', 'CreateEntityRecognizerRequest$VpcConfig' => 'Configuration parameters for an optional private Virtual Private Cloud (VPC) containing the resources you are using for your custom entity recognizer. For more information, see Amazon VPC.
', 'DataSecurityConfig$VpcConfig' => NULL, 'DocumentClassificationJobProperties$VpcConfig' => 'Configuration parameters for a private Virtual Private Cloud (VPC) containing the resources you are using for your document classification job. For more information, see Amazon VPC.
', 'DocumentClassifierProperties$VpcConfig' => 'Configuration parameters for a private Virtual Private Cloud (VPC) containing the resources you are using for your custom classifier. For more information, see Amazon VPC.
', 'DominantLanguageDetectionJobProperties$VpcConfig' => 'Configuration parameters for a private Virtual Private Cloud (VPC) containing the resources you are using for your dominant language detection job. For more information, see Amazon VPC.
', 'EntitiesDetectionJobProperties$VpcConfig' => 'Configuration parameters for a private Virtual Private Cloud (VPC) containing the resources you are using for your entity detection job. For more information, see Amazon VPC.
', 'EntityRecognizerProperties$VpcConfig' => 'Configuration parameters for a private Virtual Private Cloud (VPC) containing the resources you are using for your custom entity recognizer. For more information, see Amazon VPC.
', 'KeyPhrasesDetectionJobProperties$VpcConfig' => 'Configuration parameters for a private Virtual Private Cloud (VPC) containing the resources you are using for your key phrases detection job. For more information, see Amazon VPC.
', 'SentimentDetectionJobProperties$VpcConfig' => 'Configuration parameters for a private Virtual Private Cloud (VPC) containing the resources you are using for your sentiment detection job. For more information, see Amazon VPC.
', 'StartDocumentClassificationJobRequest$VpcConfig' => 'Configuration parameters for an optional private Virtual Private Cloud (VPC) containing the resources you are using for your document classification job. For more information, see Amazon VPC.
', 'StartDominantLanguageDetectionJobRequest$VpcConfig' => 'Configuration parameters for an optional private Virtual Private Cloud (VPC) containing the resources you are using for your dominant language detection job. For more information, see Amazon VPC.
', 'StartEntitiesDetectionJobRequest$VpcConfig' => 'Configuration parameters for an optional private Virtual Private Cloud (VPC) containing the resources you are using for your entity detection job. For more information, see Amazon VPC.
', 'StartKeyPhrasesDetectionJobRequest$VpcConfig' => 'Configuration parameters for an optional private Virtual Private Cloud (VPC) containing the resources you are using for your key phrases detection job. For more information, see Amazon VPC.
', 'StartSentimentDetectionJobRequest$VpcConfig' => 'Configuration parameters for an optional private Virtual Private Cloud (VPC) containing the resources you are using for your sentiment detection job. For more information, see Amazon VPC.
', 'StartTargetedSentimentDetectionJobRequest$VpcConfig' => NULL, 'StartTopicsDetectionJobRequest$VpcConfig' => 'Configuration parameters for an optional private Virtual Private Cloud (VPC) containing the resources you are using for your topic detection job. For more information, see Amazon VPC.
', 'TargetedSentimentDetectionJobProperties$VpcConfig' => NULL, 'TopicsDetectionJobProperties$VpcConfig' => 'Configuration parameters for a private Virtual Private Cloud (VPC) containing the resources you are using for your topic detection job. For more information, see Amazon VPC.
', 'UpdateDataSecurityConfig$VpcConfig' => NULL, ], ], 'WarningsListItem' => [ 'base' => 'The system identified one of the following warnings while processing the input document:
The document to classify is plain text, but the classifier is a native model.
The document to classify is semi-structured, but the classifier is a plain-text model.