/*
 * Copyright 2010-2023 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License").
 * You may not use this file except in compliance with the License.
 * A copy of the License is located at
 *
 *  http://aws.amazon.com/apache2.0
 *
 * or in the "license" file accompanying this file. This file is distributed
 * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
 * express or implied. See the License for the specific language governing
 * permissions and limitations under the License.
 */

package com.amazonaws.services.transcribe.model;

import java.io.Serializable;

import com.amazonaws.AmazonWebServiceRequest;

/**
 * <p>
 * Transcribes the audio from a media file and applies any additional Request
 * Parameters you choose to include in your request.
 * </p>
 * <p>
 * To make a <code>StartTranscriptionJob</code> request, you must first upload
 * your media file into an Amazon S3 bucket; you can then specify the Amazon S3
 * location of the file using the <code>Media</code> parameter.
 * </p>
 * <p>
 * You must include the following parameters in your
 * <code>StartTranscriptionJob</code> request:
 * </p>
 * <ul>
 * <li>
 * <p>
 * <code>region</code>: The Amazon Web Services Region where you are making
 * your request. For a list of Amazon Web Services Regions supported with
 * Amazon Transcribe, refer to Amazon Transcribe endpoints and quotas.
 * </p>
 * </li>
 * <li>
 * <p>
 * <code>TranscriptionJobName</code>: A custom name you create for your
 * transcription job that is unique within your Amazon Web Services account.
 * </p>
 * </li>
 * <li>
 * <p>
 * <code>Media</code> (<code>MediaFileUri</code>): The Amazon S3 location of
 * your media file.
 * </p>
 * </li>
 * <li>
 * <p>
 * One of <code>LanguageCode</code>, <code>IdentifyLanguage</code>, or
 * <code>IdentifyMultipleLanguages</code>: If you know the language of your
 * media file, specify it using the <code>LanguageCode</code> parameter; you
 * can find all valid language codes in the Supported languages table. If you
 * don't know the languages spoken in your media, use either
 * <code>IdentifyLanguage</code> or <code>IdentifyMultipleLanguages</code> and
 * let Amazon Transcribe identify the languages for you.
 * </p>
 * </li>
 * </ul>
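 * <p>
 * For illustration only, a minimal sketch of building and submitting this
 * request with the builder-style methods on this class. It assumes a
 * configured <code>AmazonTranscribeClient</code> named
 * <code>transcribeClient</code>; the job name, bucket name, and media URI are
 * placeholders.
 * </p>
 * <pre>{@code
 * StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
 *         .withTranscriptionJobName("my-first-transcription") // must be unique in the account
 *         .withLanguageCode("en-US")                          // or use withIdentifyLanguage(true)
 *         .withMedia(new Media().withMediaFileUri(
 *                 "s3://DOC-EXAMPLE-BUCKET/my-input-files/my-media-file.flac"))
 *         .withOutputBucketName("DOC-EXAMPLE-BUCKET");
 * transcribeClient.startTranscriptionJob(request);
 * }</pre>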
 */
public class StartTranscriptionJobRequest extends AmazonWebServiceRequest implements Serializable {

/**
 * <p>
 * A unique name, chosen by you, for your transcription job. The name that
 * you specify is also used as the default name of your transcription output
 * file. If you want to specify a different name for your transcription
 * output, use the <code>OutputKey</code> parameter.
 * </p>
 * <p>
 * This name is case sensitive, cannot contain spaces, and must be unique
 * within an Amazon Web Services account. If you try to create a new job
 * with the same name as an existing job, you get a
 * <code>ConflictException</code> error.
 * </p>
 *
 * Constraints:
 * Length: 1 - 200
 * Pattern: ^[0-9a-zA-Z._-]+
 */
private String transcriptionJobName;
/**
 * <p>
 * The language code that represents the language spoken in the input media
 * file.
 * </p>
 * <p>
 * If you're unsure of the language spoken in your media file, consider
 * using <code>IdentifyLanguage</code> or
 * <code>IdentifyMultipleLanguages</code> to enable automatic language
 * identification.
 * </p>
 * <p>
 * Note that you must include one of <code>LanguageCode</code>,
 * <code>IdentifyLanguage</code>, or <code>IdentifyMultipleLanguages</code>
 * in your request. If you include more than one of these parameters, your
 * transcription job fails.
 * </p>
 * <p>
 * For a list of supported languages and their associated language codes,
 * refer to the Supported languages table.
 * </p>
 * <p>
 * To transcribe speech in Modern Standard Arabic (<code>ar-SA</code>), your
 * media file must be encoded at a sample rate of 16,000 Hz or higher.
 * </p>
 *
 * Constraints:
 * Allowed Values: af-ZA, ar-AE, ar-SA, da-DK, de-CH, de-DE, en-AB,
 * en-AU, en-GB, en-IE, en-IN, en-US, en-WL, es-ES, es-US, fa-IR, fr-CA,
 * fr-FR, he-IL, hi-IN, id-ID, it-IT, ja-JP, ko-KR, ms-MY, nl-NL, pt-BR,
 * pt-PT, ru-RU, ta-IN, te-IN, tr-TR, zh-CN, zh-TW, th-TH, en-ZA, en-NZ,
 * vi-VN, sv-SE
 */
private String languageCode;
/**
 * <p>
 * The sample rate, in hertz, of the audio track in your input media file.
 * </p>
 * <p>
 * If you don't specify the media sample rate, Amazon Transcribe determines
 * it for you. If you specify the sample rate, it must match the rate
 * detected by Amazon Transcribe. If there's a mismatch between the value
 * that you specify and the value detected, your job fails. In most cases,
 * you can omit <code>MediaSampleRateHertz</code> and let Amazon Transcribe
 * determine the sample rate.
 * </p>
 *
 * Constraints:
 * Range: 8000 - 48000
 */
private Integer mediaSampleRateHertz;
/**
 * <p>
 * Specify the format of your input media file.
 * </p>
 *
 * Constraints:
 * Allowed Values: mp3, mp4, wav, flac, ogg, amr, webm
 */
private String mediaFormat;
/**
 * <p>
 * Describes the Amazon S3 location of the media file you want to use in
 * your request.
 * </p>
 */
private Media media;

/**
* The name of the Amazon S3 bucket where you want your transcription output
* stored. Do not include the S3://
prefix of the specified
* bucket.
*
* If you want your output to go to a sub-folder of this bucket, specify it
* using the OutputKey
parameter; OutputBucketName
* only accepts the name of a bucket.
*
* For example, if you want your output stored in
* S3://DOC-EXAMPLE-BUCKET
, set OutputBucketName
* to DOC-EXAMPLE-BUCKET
. However, if you want your output
* stored in S3://DOC-EXAMPLE-BUCKET/test-files/
, set
* OutputBucketName
to DOC-EXAMPLE-BUCKET
and
* OutputKey
to test-files/
.
*
* Note that Amazon Transcribe must have permission to use the specified * location. You can change Amazon S3 permissions using the Amazon Web Services Management * Console. See also Permissions Required for IAM User Roles. *
*
* If you don't specify OutputBucketName
, your transcript is
* placed in a service-managed Amazon S3 bucket and you are provided with a
* URI to access your transcript.
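 * <p>
 * For illustration, a minimal sketch of the bucket/key combination described
 * above; the bucket and folder names are placeholders, and the other
 * required request parameters are omitted.
 * </p>
 * <pre>{@code
 * // Transcript is written to s3://DOC-EXAMPLE-BUCKET/test-files/my-job-name.json
 * StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
 *         .withTranscriptionJobName("my-job-name")
 *         .withOutputBucketName("DOC-EXAMPLE-BUCKET") // bucket name only, no S3:// prefix
 *         .withOutputKey("test-files/");              // optional sub-folder
 * }</pre>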
*
* Constraints:
* Length: - 64
* Pattern: [a-z0-9][\.\-a-z0-9]{1,61}[a-z0-9]
*/
private String outputBucketName;
/**
*
* Use in combination with OutputBucketName
to specify the
* output location of your transcript and, optionally, a unique name for
* your output file. The default name for your transcription output is the
* same as the name you specified for your transcription job (
* TranscriptionJobName
).
*
* Here are some examples of how you can use OutputKey
:
*
* If you specify 'DOC-EXAMPLE-BUCKET' as the OutputBucketName
* and 'my-transcript.json' as the OutputKey
, your
* transcription output path is
* s3://DOC-EXAMPLE-BUCKET/my-transcript.json
.
*
* If you specify 'my-first-transcription' as the
* TranscriptionJobName
, 'DOC-EXAMPLE-BUCKET' as the
* OutputBucketName
, and 'my-transcript' as the
* OutputKey
, your transcription output path is
* s3://DOC-EXAMPLE-BUCKET/my-transcript/my-first-transcription.json
* .
*
* If you specify 'DOC-EXAMPLE-BUCKET' as the OutputBucketName
* and 'test-files/my-transcript.json' as the OutputKey
, your
* transcription output path is
* s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript.json
.
*
* If you specify 'my-first-transcription' as the
* TranscriptionJobName
, 'DOC-EXAMPLE-BUCKET' as the
* OutputBucketName
, and 'test-files/my-transcript' as the
* OutputKey
, your transcription output path is
* s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript/my-first-transcription.json
* .
*
* If you specify the name of an Amazon S3 bucket sub-folder that doesn't * exist, one is created for you. *
*
* Constraints:
* Length: 1 - 1024
* Pattern: [a-zA-Z0-9-_.!*'()/]{1,1024}$
*/
private String outputKey;
/**
*
* The KMS key you want to use to encrypt your transcription output. *
** If using a key located in the current Amazon Web Services account, * you can specify your KMS key in one of four ways: *
*
* Use the KMS key ID itself. For example,
* 1234abcd-12ab-34cd-56ef-1234567890ab
.
*
* Use an alias for the KMS key ID. For example,
* alias/ExampleAlias
.
*
* Use the Amazon Resource Name (ARN) for the KMS key ID. For example,
* arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab
* .
*
* Use the ARN for the KMS key alias. For example,
* arn:aws:kms:region:account-ID:alias/ExampleAlias
.
*
* If using a key located in a different Amazon Web Services account * than the current Amazon Web Services account, you can specify your KMS * key in one of two ways: *
*
* Use the ARN for the KMS key ID. For example,
* arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab
* .
*
* Use the ARN for the KMS key alias. For example,
* arn:aws:kms:region:account-ID:alias/ExampleAlias
.
*
* If you don't specify an encryption key, your output is encrypted with the * default Amazon S3 key (SSE-S3). *
*
* If you specify a KMS key to encrypt your output, you must also specify an
* output location using the OutputLocation
parameter.
*
* Note that the role making the request must have permission to use the * specified KMS key. *
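 * <p>
 * For illustration, a minimal sketch of specifying an output encryption key;
 * the key ARN and bucket name are placeholders, and the other required
 * request parameters are omitted.
 * </p>
 * <pre>{@code
 * StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
 *         .withOutputEncryptionKMSKeyId(
 *                 "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab")
 *         .withOutputBucketName("DOC-EXAMPLE-BUCKET"); // an output location is required with a KMS key
 * }</pre>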
*
* Constraints:
* Length: 1 - 2048
* Pattern: ^[A-Za-z0-9][A-Za-z0-9:_/+=,@.-]{0,2048}$
*/
private String outputEncryptionKMSKeyId;
/**
*
* A map of plain text, non-secret key:value pairs, known as encryption * context pairs, that provide an added layer of security for your data. For * more information, see KMS encryption context and Asymmetric keys in KMS. *
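 * <p>
 * For illustration, a minimal sketch of attaching encryption context pairs;
 * the key:value pairs shown are placeholders.
 * </p>
 * <pre>{@code
 * java.util.Map<String, String> encryptionContext = new java.util.HashMap<String, String>();
 * encryptionContext.put("Department", "Finance");     // non-secret context only
 * encryptionContext.put("Project", "call-analytics");
 * StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
 *         .withKMSEncryptionContext(encryptionContext);
 * }</pre>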
 */
private java.util.Map<String, String> kMSEncryptionContext;

/**
 * <p>
 * Specify additional optional settings in your request, including channel
 * identification, alternative transcriptions, and speaker partitioning. You
 * can use that to apply custom vocabularies and vocabulary filters.
 * </p>
*
* If you want to include a custom vocabulary or a custom vocabulary filter
* (or both) with your request but do not want to use automatic
* language identification, use Settings
with the
* VocabularyName
or VocabularyFilterName
(or
* both) sub-parameter.
*
* If you're using automatic language identification with your request and
* want to include a custom language model, a custom vocabulary, or a custom
 * vocabulary filter, use instead the <code>LanguageIdSettings</code>
 * parameter with the
LanguageModelName
,
* VocabularyName
or VocabularyFilterName
* sub-parameters.
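 * <p>
 * For illustration, a minimal sketch of attaching a custom vocabulary and a
 * custom vocabulary filter without automatic language identification; the
 * vocabulary and filter names are placeholders and must already exist.
 * </p>
 * <pre>{@code
 * StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
 *         .withLanguageCode("en-US")
 *         .withSettings(new Settings()
 *                 .withVocabularyName("my-custom-vocabulary")
 *                 .withVocabularyFilterName("my-vocabulary-filter"));
 * }</pre>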
 */
private Settings settings;

/**
* Specify the custom language model you want to include with your
* transcription job. If you include ModelSettings
in your
* request, you must include the LanguageModelName
* sub-parameter.
*
* For more information, see Custom language models. *
 */
private ModelSettings modelSettings;

/**
* Makes it possible to control how your transcription job is processed.
* Currently, the only JobExecutionSettings
modification you
* can choose is enabling job queueing using the
* AllowDeferredExecution
sub-parameter.
*
* If you include JobExecutionSettings
in your request, you
* must also include the sub-parameters: AllowDeferredExecution
* and DataAccessRoleArn
.
 */
private JobExecutionSettings jobExecutionSettings;

/**
* Makes it possible to redact or flag specified personally identifiable
* information (PII) in your transcript. If you use
* ContentRedaction
, you must also include the sub-parameters:
* PiiEntityTypes
, RedactionOutput
, and
* RedactionType
.
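 * <p>
 * For illustration, a minimal sketch of enabling PII redaction; the
 * sub-parameter values follow the set named above and are examples only.
 * </p>
 * <pre>{@code
 * StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
 *         .withContentRedaction(new ContentRedaction()
 *                 .withRedactionType("PII")
 *                 .withRedactionOutput("redacted")
 *                 .withPiiEntityTypes("NAME", "SSN"));
 * }</pre>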
 */
private ContentRedaction contentRedaction;

/**
* Enables automatic language identification in your transcription job
* request. Use this parameter if your media file contains only one
* language. If your media contains multiple languages, use
* IdentifyMultipleLanguages
instead.
*
* If you include IdentifyLanguage
, you can optionally include
* a list of language codes, using LanguageOptions
, that you
* think may be present in your media file. Including
* LanguageOptions
restricts IdentifyLanguage
to
* only the language options that you specify, which can improve
* transcription accuracy.
*
* If you want to apply a custom language model, a custom vocabulary, or a
* custom vocabulary filter to your automatic language identification
* request, include LanguageIdSettings
with the relevant
* sub-parameters (VocabularyName
,
* LanguageModelName
, and VocabularyFilterName
).
* If you include LanguageIdSettings
, also include
* LanguageOptions
.
*
* Note that you must include one of LanguageCode
,
* IdentifyLanguage
, or IdentifyMultipleLanguages
* in your request. If you include more than one of these parameters, your
* transcription job fails.
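 * <p>
 * For illustration, a minimal sketch of single-language identification
 * restricted to a few expected languages; the language codes are examples.
 * </p>
 * <pre>{@code
 * StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
 *         .withIdentifyLanguage(true)
 *         .withLanguageOptions("en-US", "es-US", "fr-CA"); // optional, narrows identification
 * }</pre>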
 */
private Boolean identifyLanguage;

/**
* Enables automatic multi-language identification in your transcription job
* request. Use this parameter if your media file contains more than one
* language. If your media contains only one language, use
* IdentifyLanguage
instead.
*
* If you include IdentifyMultipleLanguages
, you can optionally
* include a list of language codes, using LanguageOptions
,
* that you think may be present in your media file. Including
* LanguageOptions
restricts IdentifyLanguage
to
* only the language options that you specify, which can improve
* transcription accuracy.
*
* If you want to apply a custom vocabulary or a custom vocabulary filter to
* your automatic language identification request, include
* LanguageIdSettings
with the relevant sub-parameters (
* VocabularyName
and VocabularyFilterName
). If
* you include LanguageIdSettings
, also include
* LanguageOptions
.
*
* Note that you must include one of LanguageCode
,
* IdentifyLanguage
, or IdentifyMultipleLanguages
* in your request. If you include more than one of these parameters, your
* transcription job fails.
 */
private Boolean identifyMultipleLanguages;

/**
* You can specify two or more language codes that represent the languages * you think may be present in your media. Including more than five is not * recommended. If you're unsure what languages are present, do not include * this parameter. *
*
* If you include LanguageOptions
in your request, you must
* also include IdentifyLanguage
.
*
* For more information, refer to Supported languages. *
*
* To transcribe speech in Modern Standard Arabic (ar-SA
), your
* media file must be encoded at a sample rate of 16,000 Hz or higher.
 */
private java.util.List<String> languageOptions;

/**
* Produces subtitle files for your input media. You can specify WebVTT * (*.vtt) and SubRip (*.srt) formats. *
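 * <p>
 * For illustration, a minimal sketch of requesting both subtitle formats,
 * assuming the companion <code>Subtitles</code> class's
 * <code>withFormats</code> convenience method.
 * </p>
 * <pre>{@code
 * StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
 *         .withSubtitles(new Subtitles().withFormats("vtt", "srt"));
 * }</pre>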
 */
private Subtitles subtitles;

/**
 * <p>
 * Adds one or more custom tags, each in the form of a key:value pair, to a
 * new transcription job at the time you start this new job.
 * </p>
** To learn more about using tags with Amazon Transcribe, refer to Tagging resources. *
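 * <p>
 * For illustration, a minimal sketch of tagging a new job; the tag key and
 * value are placeholders.
 * </p>
 * <pre>{@code
 * StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
 *         .withTags(new Tag().withKey("Department").withValue("Finance"));
 * }</pre>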
 */
private java.util.List<Tag> tags;

/**
* If using automatic language identification in your request and you want
* to apply a custom language model, a custom vocabulary, or a custom
* vocabulary filter, include LanguageIdSettings
with the
* relevant sub-parameters (VocabularyName
,
* LanguageModelName
, and VocabularyFilterName
).
* Note that multi-language identification (
* IdentifyMultipleLanguages
) doesn't support custom language
* models.
*
* LanguageIdSettings
supports two to five language codes. Each
* language code you include can have an associated custom language model,
* custom vocabulary, and custom vocabulary filter. The language codes that
* you specify must match the languages of the associated custom language
* models, custom vocabularies, and custom vocabulary filters.
*
* It's recommended that you include LanguageOptions
when using
* LanguageIdSettings
to ensure that the correct language
* dialect is identified. For example, if you specify a custom vocabulary
* that is in en-US
but Amazon Transcribe determines that the
* language spoken in your media is en-AU
, your custom
* vocabulary is not applied to your transcription. If you include
* LanguageOptions
and include en-US
as the only
* English language dialect, your custom vocabulary is applied to
* your transcription.
*
* If you want to include a custom language model with your request but
 * do not want to use automatic language identification, use instead
 * the <code>ModelSettings</code> parameter with the
LanguageModelName
* sub-parameter. If you want to include a custom vocabulary or a custom
* vocabulary filter (or both) with your request but do not want to
 * use automatic language identification, use instead the
 * <code>Settings</code> parameter with the
VocabularyName
or
* VocabularyFilterName
(or both) sub-parameter.
 */
private java.util.Map<String, LanguageIdSettings> languageIdSettings;

/**
* A unique name, chosen by you, for your transcription job. The name that
* you specify is also used as the default name of your transcription output
* file. If you want to specify a different name for your transcription
* output, use the OutputKey
parameter.
*
* This name is case sensitive, cannot contain spaces, and must be unique
* within an Amazon Web Services account. If you try to create a new job
* with the same name as an existing job, you get a
* ConflictException
error.
*
* Constraints:
* Length: 1 - 200
* Pattern: ^[0-9a-zA-Z._-]+
*
* @return
* A unique name, chosen by you, for your transcription job. The
* name that you specify is also used as the default name of your
* transcription output file. If you want to specify a different
* name for your transcription output, use the
* OutputKey
parameter.
*
* This name is case sensitive, cannot contain spaces, and must be
* unique within an Amazon Web Services account. If you try to
* create a new job with the same name as an existing job, you get a
* ConflictException
error.
 */
public String getTranscriptionJobName() {
    return transcriptionJobName;
}

/**
* A unique name, chosen by you, for your transcription job. The name that
* you specify is also used as the default name of your transcription output
* file. If you want to specify a different name for your transcription
* output, use the OutputKey
parameter.
*
* This name is case sensitive, cannot contain spaces, and must be unique
* within an Amazon Web Services account. If you try to create a new job
* with the same name as an existing job, you get a
* ConflictException
error.
*
* Constraints:
* Length: 1 - 200
* Pattern: ^[0-9a-zA-Z._-]+
*
* @param transcriptionJobName
* A unique name, chosen by you, for your transcription job. The
* name that you specify is also used as the default name of your
* transcription output file. If you want to specify a different
* name for your transcription output, use the
* OutputKey
parameter.
*
* This name is case sensitive, cannot contain spaces, and must
* be unique within an Amazon Web Services account. If you try to
* create a new job with the same name as an existing job, you
* get a ConflictException
error.
 */
public void setTranscriptionJobName(String transcriptionJobName) {
    this.transcriptionJobName = transcriptionJobName;
}

/**
* A unique name, chosen by you, for your transcription job. The name that
* you specify is also used as the default name of your transcription output
* file. If you want to specify a different name for your transcription
* output, use the OutputKey
parameter.
*
* This name is case sensitive, cannot contain spaces, and must be unique
* within an Amazon Web Services account. If you try to create a new job
* with the same name as an existing job, you get a
* ConflictException
error.
*
* Returns a reference to this object so that method calls can be chained * together. *
* Constraints:
* Length: 1 - 200
* Pattern: ^[0-9a-zA-Z._-]+
*
* @param transcriptionJobName
* A unique name, chosen by you, for your transcription job. The
* name that you specify is also used as the default name of your
* transcription output file. If you want to specify a different
* name for your transcription output, use the
* OutputKey
parameter.
*
* This name is case sensitive, cannot contain spaces, and must
* be unique within an Amazon Web Services account. If you try to
* create a new job with the same name as an existing job, you
* get a ConflictException
error.
 * @return A reference to this updated object so that method calls can be
 *         chained together.
 */
public StartTranscriptionJobRequest withTranscriptionJobName(String transcriptionJobName) {
    this.transcriptionJobName = transcriptionJobName;
    return this;
}

/**
* The language code that represents the language spoken in the input media * file. *
*
* If you're unsure of the language spoken in your media file, consider
* using IdentifyLanguage
or
* IdentifyMultipleLanguages
to enable automatic language
* identification.
*
* Note that you must include one of LanguageCode
,
* IdentifyLanguage
, or IdentifyMultipleLanguages
* in your request. If you include more than one of these parameters, your
* transcription job fails.
*
* For a list of supported languages and their associated language codes, * refer to the Supported languages table. *
*
* To transcribe speech in Modern Standard Arabic (ar-SA
), your
* media file must be encoded at a sample rate of 16,000 Hz or higher.
*
* Constraints:
* Allowed Values: af-ZA, ar-AE, ar-SA, da-DK, de-CH, de-DE, en-AB,
* en-AU, en-GB, en-IE, en-IN, en-US, en-WL, es-ES, es-US, fa-IR, fr-CA,
* fr-FR, he-IL, hi-IN, id-ID, it-IT, ja-JP, ko-KR, ms-MY, nl-NL, pt-BR,
* pt-PT, ru-RU, ta-IN, te-IN, tr-TR, zh-CN, zh-TW, th-TH, en-ZA, en-NZ,
* vi-VN, sv-SE
*
* @return
* The language code that represents the language spoken in the * input media file. *
*
* If you're unsure of the language spoken in your media file,
* consider using IdentifyLanguage
or
* IdentifyMultipleLanguages
to enable automatic
* language identification.
*
* Note that you must include one of LanguageCode
,
* IdentifyLanguage
, or
* IdentifyMultipleLanguages
in your request. If you
* include more than one of these parameters, your transcription job
* fails.
*
* For a list of supported languages and their associated language * codes, refer to the Supported languages table. *
*
* To transcribe speech in Modern Standard Arabic (
* ar-SA
), your media file must be encoded at a sample
* rate of 16,000 Hz or higher.
 * @see LanguageCode
 */
public String getLanguageCode() {
    return languageCode;
}

/**
* The language code that represents the language spoken in the input media * file. *
*
* If you're unsure of the language spoken in your media file, consider
* using IdentifyLanguage
or
* IdentifyMultipleLanguages
to enable automatic language
* identification.
*
* Note that you must include one of LanguageCode
,
* IdentifyLanguage
, or IdentifyMultipleLanguages
* in your request. If you include more than one of these parameters, your
* transcription job fails.
*
* For a list of supported languages and their associated language codes, * refer to the Supported languages table. *
*
* To transcribe speech in Modern Standard Arabic (ar-SA
), your
* media file must be encoded at a sample rate of 16,000 Hz or higher.
*
* Constraints:
* Allowed Values: af-ZA, ar-AE, ar-SA, da-DK, de-CH, de-DE, en-AB,
* en-AU, en-GB, en-IE, en-IN, en-US, en-WL, es-ES, es-US, fa-IR, fr-CA,
* fr-FR, he-IL, hi-IN, id-ID, it-IT, ja-JP, ko-KR, ms-MY, nl-NL, pt-BR,
* pt-PT, ru-RU, ta-IN, te-IN, tr-TR, zh-CN, zh-TW, th-TH, en-ZA, en-NZ,
* vi-VN, sv-SE
*
* @param languageCode
* The language code that represents the language spoken in the * input media file. *
*
* If you're unsure of the language spoken in your media file,
* consider using IdentifyLanguage
or
* IdentifyMultipleLanguages
to enable automatic
* language identification.
*
* Note that you must include one of LanguageCode
,
* IdentifyLanguage
, or
* IdentifyMultipleLanguages
in your request. If you
* include more than one of these parameters, your transcription
* job fails.
*
* For a list of supported languages and their associated * language codes, refer to the Supported languages table. *
*
* To transcribe speech in Modern Standard Arabic (
* ar-SA
), your media file must be encoded at a
* sample rate of 16,000 Hz or higher.
 * @see LanguageCode
 */
public void setLanguageCode(String languageCode) {
    this.languageCode = languageCode;
}

/**
* The language code that represents the language spoken in the input media * file. *
*
* If you're unsure of the language spoken in your media file, consider
* using IdentifyLanguage
or
* IdentifyMultipleLanguages
to enable automatic language
* identification.
*
* Note that you must include one of LanguageCode
,
* IdentifyLanguage
, or IdentifyMultipleLanguages
* in your request. If you include more than one of these parameters, your
* transcription job fails.
*
* For a list of supported languages and their associated language codes, * refer to the Supported languages table. *
*
* To transcribe speech in Modern Standard Arabic (ar-SA
), your
* media file must be encoded at a sample rate of 16,000 Hz or higher.
*
* Returns a reference to this object so that method calls can be chained * together. *
* Constraints:
* Allowed Values: af-ZA, ar-AE, ar-SA, da-DK, de-CH, de-DE, en-AB,
* en-AU, en-GB, en-IE, en-IN, en-US, en-WL, es-ES, es-US, fa-IR, fr-CA,
* fr-FR, he-IL, hi-IN, id-ID, it-IT, ja-JP, ko-KR, ms-MY, nl-NL, pt-BR,
* pt-PT, ru-RU, ta-IN, te-IN, tr-TR, zh-CN, zh-TW, th-TH, en-ZA, en-NZ,
* vi-VN, sv-SE
*
* @param languageCode
* The language code that represents the language spoken in the * input media file. *
*
* If you're unsure of the language spoken in your media file,
* consider using IdentifyLanguage
or
* IdentifyMultipleLanguages
to enable automatic
* language identification.
*
* Note that you must include one of LanguageCode
,
* IdentifyLanguage
, or
* IdentifyMultipleLanguages
in your request. If you
* include more than one of these parameters, your transcription
* job fails.
*
* For a list of supported languages and their associated * language codes, refer to the Supported languages table. *
*
* To transcribe speech in Modern Standard Arabic (
* ar-SA
), your media file must be encoded at a
* sample rate of 16,000 Hz or higher.
 * @return A reference to this updated object so that method calls can be
 *         chained together.
 * @see LanguageCode
 */
public StartTranscriptionJobRequest withLanguageCode(String languageCode) {
    this.languageCode = languageCode;
    return this;
}

/**
* The language code that represents the language spoken in the input media * file. *
*
* If you're unsure of the language spoken in your media file, consider
* using IdentifyLanguage
or
* IdentifyMultipleLanguages
to enable automatic language
* identification.
*
* Note that you must include one of LanguageCode
,
* IdentifyLanguage
, or IdentifyMultipleLanguages
* in your request. If you include more than one of these parameters, your
* transcription job fails.
*
* For a list of supported languages and their associated language codes, * refer to the Supported languages table. *
*
* To transcribe speech in Modern Standard Arabic (ar-SA
), your
* media file must be encoded at a sample rate of 16,000 Hz or higher.
*
* Constraints:
* Allowed Values: af-ZA, ar-AE, ar-SA, da-DK, de-CH, de-DE, en-AB,
* en-AU, en-GB, en-IE, en-IN, en-US, en-WL, es-ES, es-US, fa-IR, fr-CA,
* fr-FR, he-IL, hi-IN, id-ID, it-IT, ja-JP, ko-KR, ms-MY, nl-NL, pt-BR,
* pt-PT, ru-RU, ta-IN, te-IN, tr-TR, zh-CN, zh-TW, th-TH, en-ZA, en-NZ,
* vi-VN, sv-SE
*
* @param languageCode
* The language code that represents the language spoken in the * input media file. *
*
* If you're unsure of the language spoken in your media file,
* consider using IdentifyLanguage
or
* IdentifyMultipleLanguages
to enable automatic
* language identification.
*
* Note that you must include one of LanguageCode
,
* IdentifyLanguage
, or
* IdentifyMultipleLanguages
in your request. If you
* include more than one of these parameters, your transcription
* job fails.
*
* For a list of supported languages and their associated * language codes, refer to the Supported languages table. *
*
* To transcribe speech in Modern Standard Arabic (
* ar-SA
), your media file must be encoded at a
* sample rate of 16,000 Hz or higher.
 * @see LanguageCode
 */
public void setLanguageCode(LanguageCode languageCode) {
    this.languageCode = languageCode.toString();
}

/**
* The language code that represents the language spoken in the input media * file. *
*
* If you're unsure of the language spoken in your media file, consider
* using IdentifyLanguage
or
* IdentifyMultipleLanguages
to enable automatic language
* identification.
*
* Note that you must include one of LanguageCode
,
* IdentifyLanguage
, or IdentifyMultipleLanguages
* in your request. If you include more than one of these parameters, your
* transcription job fails.
*
* For a list of supported languages and their associated language codes, * refer to the Supported languages table. *
*
* To transcribe speech in Modern Standard Arabic (ar-SA
), your
* media file must be encoded at a sample rate of 16,000 Hz or higher.
*
* Returns a reference to this object so that method calls can be chained * together. *
* Constraints:
* Allowed Values: af-ZA, ar-AE, ar-SA, da-DK, de-CH, de-DE, en-AB,
* en-AU, en-GB, en-IE, en-IN, en-US, en-WL, es-ES, es-US, fa-IR, fr-CA,
* fr-FR, he-IL, hi-IN, id-ID, it-IT, ja-JP, ko-KR, ms-MY, nl-NL, pt-BR,
* pt-PT, ru-RU, ta-IN, te-IN, tr-TR, zh-CN, zh-TW, th-TH, en-ZA, en-NZ,
* vi-VN, sv-SE
*
* @param languageCode
* The language code that represents the language spoken in the * input media file. *
*
* If you're unsure of the language spoken in your media file,
* consider using IdentifyLanguage
or
* IdentifyMultipleLanguages
to enable automatic
* language identification.
*
* Note that you must include one of LanguageCode
,
* IdentifyLanguage
, or
* IdentifyMultipleLanguages
in your request. If you
* include more than one of these parameters, your transcription
* job fails.
*
* For a list of supported languages and their associated * language codes, refer to the Supported languages table. *
*
* To transcribe speech in Modern Standard Arabic (
* ar-SA
), your media file must be encoded at a
* sample rate of 16,000 Hz or higher.
 * @return A reference to this updated object so that method calls can be
 *         chained together.
 * @see LanguageCode
 */
public StartTranscriptionJobRequest withLanguageCode(LanguageCode languageCode) {
    this.languageCode = languageCode.toString();
    return this;
}

/**
* The sample rate, in hertz, of the audio track in your input media file. *
*
* If you don't specify the media sample rate, Amazon Transcribe determines
* it for you. If you specify the sample rate, it must match the rate
* detected by Amazon Transcribe. If there's a mismatch between the value
* that you specify and the value detected, your job fails. In most cases,
* you can omit MediaSampleRateHertz
and let Amazon Transcribe
* determine the sample rate.
*
* Constraints:
* Range: 8000 - 48000
*
* @return
* The sample rate, in hertz, of the audio track in your input media * file. *
*
* If you don't specify the media sample rate, Amazon Transcribe
* determines it for you. If you specify the sample rate, it must
* match the rate detected by Amazon Transcribe. If there's a
* mismatch between the value that you specify and the value
* detected, your job fails. In most cases, you can omit
* MediaSampleRateHertz
and let Amazon Transcribe
* determine the sample rate.
 */
public Integer getMediaSampleRateHertz() {
    return mediaSampleRateHertz;
}

/**
* The sample rate, in hertz, of the audio track in your input media file. *
*
* If you don't specify the media sample rate, Amazon Transcribe determines
* it for you. If you specify the sample rate, it must match the rate
* detected by Amazon Transcribe. If there's a mismatch between the value
* that you specify and the value detected, your job fails. In most cases,
* you can omit MediaSampleRateHertz
and let Amazon Transcribe
* determine the sample rate.
*
* Constraints:
* Range: 8000 - 48000
*
* @param mediaSampleRateHertz
* The sample rate, in hertz, of the audio track in your input * media file. *
*
* If you don't specify the media sample rate, Amazon Transcribe
* determines it for you. If you specify the sample rate, it must
* match the rate detected by Amazon Transcribe. If there's a
* mismatch between the value that you specify and the value
* detected, your job fails. In most cases, you can omit
* MediaSampleRateHertz
and let Amazon Transcribe
* determine the sample rate.
 */
public void setMediaSampleRateHertz(Integer mediaSampleRateHertz) {
    this.mediaSampleRateHertz = mediaSampleRateHertz;
}

/**
* The sample rate, in hertz, of the audio track in your input media file. *
*
* If you don't specify the media sample rate, Amazon Transcribe determines
* it for you. If you specify the sample rate, it must match the rate
* detected by Amazon Transcribe. If there's a mismatch between the value
* that you specify and the value detected, your job fails. In most cases,
* you can omit MediaSampleRateHertz
and let Amazon Transcribe
* determine the sample rate.
*
* Returns a reference to this object so that method calls can be chained * together. *
* Constraints:
* Range: 8000 - 48000
*
* @param mediaSampleRateHertz
* The sample rate, in hertz, of the audio track in your input * media file. *
*
* If you don't specify the media sample rate, Amazon Transcribe
* determines it for you. If you specify the sample rate, it must
* match the rate detected by Amazon Transcribe. If there's a
* mismatch between the value that you specify and the value
* detected, your job fails. In most cases, you can omit
* MediaSampleRateHertz
and let Amazon Transcribe
* determine the sample rate.
 * @return A reference to this updated object so that method calls can be
 *         chained together.
 */
public StartTranscriptionJobRequest withMediaSampleRateHertz(Integer mediaSampleRateHertz) {
    this.mediaSampleRateHertz = mediaSampleRateHertz;
    return this;
}

/**
* Specify the format of your input media file. *
*
* Constraints:
* Allowed Values: mp3, mp4, wav, flac, ogg, amr, webm
*
* @return
* Specify the format of your input media file. *
 * @see MediaFormat
 */
public String getMediaFormat() {
    return mediaFormat;
}

/**
 * <p>
 * Specify the format of your input media file.
 * </p>
*
* Constraints:
* Allowed Values: mp3, mp4, wav, flac, ogg, amr, webm
*
* @param mediaFormat
* Specify the format of your input media file. *
 * @see MediaFormat
 */
public void setMediaFormat(String mediaFormat) {
    this.mediaFormat = mediaFormat;
}

/**
 * <p>
 * Specify the format of your input media file.
 * </p>
** Returns a reference to this object so that method calls can be chained * together. *
* Constraints:
* Allowed Values: mp3, mp4, wav, flac, ogg, amr, webm
*
* @param mediaFormat
* Specify the format of your input media file. *
 * @return A reference to this updated object so that method calls can be
 *         chained together.
 * @see MediaFormat
 */
public StartTranscriptionJobRequest withMediaFormat(String mediaFormat) {
    this.mediaFormat = mediaFormat;
    return this;
}

/**
 * <p>
 * Specify the format of your input media file.
 * </p>
*
* Constraints:
* Allowed Values: mp3, mp4, wav, flac, ogg, amr, webm
*
* @param mediaFormat
* Specify the format of your input media file. *
 * @see MediaFormat
 */
public void setMediaFormat(MediaFormat mediaFormat) {
    this.mediaFormat = mediaFormat.toString();
}

/**
 * <p>
 * Specify the format of your input media file.
 * </p>
** Returns a reference to this object so that method calls can be chained * together. *
* Constraints:
* Allowed Values: mp3, mp4, wav, flac, ogg, amr, webm
*
* @param mediaFormat
* Specify the format of your input media file. *
 * @return A reference to this updated object so that method calls can be
 *         chained together.
 * @see MediaFormat
 */
public StartTranscriptionJobRequest withMediaFormat(MediaFormat mediaFormat) {
    this.mediaFormat = mediaFormat.toString();
    return this;
}

/**
 * <p>
 * Describes the Amazon S3 location of the media file you want to use in
 * your request.
 * </p>
* * @return* Describes the Amazon S3 location of the media file you want to * use in your request. *
*/ public Media getMedia() { return media; } /** ** Describes the Amazon S3 location of the media file you want to use in * your request. *
* * @param media* Describes the Amazon S3 location of the media file you want to * use in your request. *
*/ public void setMedia(Media media) { this.media = media; } /** ** Describes the Amazon S3 location of the media file you want to use in * your request. *
** Returns a reference to this object so that method calls can be chained * together. * * @param media
* Describes the Amazon S3 location of the media file you want to * use in your request. *
* @return A reference to this updated object so that method calls can be * chained together. */ public StartTranscriptionJobRequest withMedia(Media media) { this.media = media; return this; } /** *
* The name of the Amazon S3 bucket where you want your transcription output
* stored. Do not include the S3://
prefix of the specified
* bucket.
*
* If you want your output to go to a sub-folder of this bucket, specify it
* using the OutputKey
parameter; OutputBucketName
* only accepts the name of a bucket.
*
* For example, if you want your output stored in
* S3://DOC-EXAMPLE-BUCKET
, set OutputBucketName
* to DOC-EXAMPLE-BUCKET
. However, if you want your output
* stored in S3://DOC-EXAMPLE-BUCKET/test-files/
, set
* OutputBucketName
to DOC-EXAMPLE-BUCKET
and
* OutputKey
to test-files/
.
*
* Note that Amazon Transcribe must have permission to use the specified * location. You can change Amazon S3 permissions using the Amazon Web Services Management * Console. See also Permissions Required for IAM User Roles. *
*
* If you don't specify OutputBucketName
, your transcript is
* placed in a service-managed Amazon S3 bucket and you are provided with a
* URI to access your transcript.
*
* Constraints:
* Length: - 64
* Pattern: [a-z0-9][\.\-a-z0-9]{1,61}[a-z0-9]
*
* @return
* The name of the Amazon S3 bucket where you want your
* transcription output stored. Do not include the
* S3://
prefix of the specified bucket.
*
* If you want your output to go to a sub-folder of this bucket,
* specify it using the OutputKey
parameter;
* OutputBucketName
only accepts the name of a bucket.
*
* For example, if you want your output stored in
* S3://DOC-EXAMPLE-BUCKET
, set
* OutputBucketName
to DOC-EXAMPLE-BUCKET
.
* However, if you want your output stored in
* S3://DOC-EXAMPLE-BUCKET/test-files/
, set
* OutputBucketName
to DOC-EXAMPLE-BUCKET
* and OutputKey
to test-files/
.
*
* Note that Amazon Transcribe must have permission to use the * specified location. You can change Amazon S3 permissions using * the Amazon Web * Services Management Console. See also Permissions Required for IAM User Roles. *
*
* If you don't specify OutputBucketName
, your
* transcript is placed in a service-managed Amazon S3 bucket and
* you are provided with a URI to access your transcript.
 */
public String getOutputBucketName() {
    return outputBucketName;
}

/**
* The name of the Amazon S3 bucket where you want your transcription output
* stored. Do not include the S3://
prefix of the specified
* bucket.
*
* If you want your output to go to a sub-folder of this bucket, specify it
* using the OutputKey
parameter; OutputBucketName
* only accepts the name of a bucket.
*
* For example, if you want your output stored in
* S3://DOC-EXAMPLE-BUCKET
, set OutputBucketName
* to DOC-EXAMPLE-BUCKET
. However, if you want your output
* stored in S3://DOC-EXAMPLE-BUCKET/test-files/
, set
* OutputBucketName
to DOC-EXAMPLE-BUCKET
and
* OutputKey
to test-files/
.
*
* Note that Amazon Transcribe must have permission to use the specified * location. You can change Amazon S3 permissions using the Amazon Web Services Management * Console. See also Permissions Required for IAM User Roles. *
*
* If you don't specify OutputBucketName
, your transcript is
* placed in a service-managed Amazon S3 bucket and you are provided with a
* URI to access your transcript.
*
* Constraints:
* Length: - 64
* Pattern: [a-z0-9][\.\-a-z0-9]{1,61}[a-z0-9]
*
* @param outputBucketName
* The name of the Amazon S3 bucket where you want your
* transcription output stored. Do not include the
* S3://
prefix of the specified bucket.
*
* If you want your output to go to a sub-folder of this bucket,
* specify it using the OutputKey
parameter;
* OutputBucketName
only accepts the name of a
* bucket.
*
* For example, if you want your output stored in
* S3://DOC-EXAMPLE-BUCKET
, set
* OutputBucketName
to
* DOC-EXAMPLE-BUCKET
. However, if you want your
* output stored in
* S3://DOC-EXAMPLE-BUCKET/test-files/
, set
* OutputBucketName
to
* DOC-EXAMPLE-BUCKET
and OutputKey
to
* test-files/
.
*
* Note that Amazon Transcribe must have permission to use the * specified location. You can change Amazon S3 permissions using * the Amazon Web * Services Management Console. See also Permissions Required for IAM User Roles. *
*
* If you don't specify OutputBucketName
, your
* transcript is placed in a service-managed Amazon S3 bucket and
* you are provided with a URI to access your transcript.
 */
public void setOutputBucketName(String outputBucketName) {
    this.outputBucketName = outputBucketName;
}

/**
* The name of the Amazon S3 bucket where you want your transcription output
* stored. Do not include the S3://
prefix of the specified
* bucket.
*
* If you want your output to go to a sub-folder of this bucket, specify it
* using the OutputKey
parameter; OutputBucketName
* only accepts the name of a bucket.
*
* For example, if you want your output stored in
* S3://DOC-EXAMPLE-BUCKET
, set OutputBucketName
* to DOC-EXAMPLE-BUCKET
. However, if you want your output
* stored in S3://DOC-EXAMPLE-BUCKET/test-files/
, set
* OutputBucketName
to DOC-EXAMPLE-BUCKET
and
* OutputKey
to test-files/
.
*
* Note that Amazon Transcribe must have permission to use the specified * location. You can change Amazon S3 permissions using the Amazon Web Services Management * Console. See also Permissions Required for IAM User Roles. *
*
* If you don't specify OutputBucketName
, your transcript is
* placed in a service-managed Amazon S3 bucket and you are provided with a
* URI to access your transcript.
*
* Returns a reference to this object so that method calls can be chained * together. *
* Constraints:
* Length: - 64
* Pattern: [a-z0-9][\.\-a-z0-9]{1,61}[a-z0-9]
*
* @param outputBucketName
* The name of the Amazon S3 bucket where you want your
* transcription output stored. Do not include the
* S3://
prefix of the specified bucket.
*
* If you want your output to go to a sub-folder of this bucket,
* specify it using the OutputKey
parameter;
* OutputBucketName
only accepts the name of a
* bucket.
*
* For example, if you want your output stored in
* S3://DOC-EXAMPLE-BUCKET
, set
* OutputBucketName
to
* DOC-EXAMPLE-BUCKET
. However, if you want your
* output stored in
* S3://DOC-EXAMPLE-BUCKET/test-files/
, set
* OutputBucketName
to
* DOC-EXAMPLE-BUCKET
and OutputKey
to
* test-files/
.
*
* Note that Amazon Transcribe must have permission to use the * specified location. You can change Amazon S3 permissions using * the Amazon Web * Services Management Console. See also Permissions Required for IAM User Roles. *
*
* If you don't specify OutputBucketName
, your
* transcript is placed in a service-managed Amazon S3 bucket and
* you are provided with a URI to access your transcript.
 * @return A reference to this updated object so that method calls can be
 *         chained together.
 */
public StartTranscriptionJobRequest withOutputBucketName(String outputBucketName) {
    this.outputBucketName = outputBucketName;
    return this;
}

/**
* Use in combination with OutputBucketName
to specify the
* output location of your transcript and, optionally, a unique name for
* your output file. The default name for your transcription output is the
* same as the name you specified for your transcription job (
* TranscriptionJobName
).
*
* Here are some examples of how you can use OutputKey
:
*
* If you specify 'DOC-EXAMPLE-BUCKET' as the OutputBucketName
* and 'my-transcript.json' as the OutputKey
, your
* transcription output path is
* s3://DOC-EXAMPLE-BUCKET/my-transcript.json
.
*
* If you specify 'my-first-transcription' as the
* TranscriptionJobName
, 'DOC-EXAMPLE-BUCKET' as the
* OutputBucketName
, and 'my-transcript' as the
* OutputKey
, your transcription output path is
* s3://DOC-EXAMPLE-BUCKET/my-transcript/my-first-transcription.json
* .
*
* If you specify 'DOC-EXAMPLE-BUCKET' as the OutputBucketName
* and 'test-files/my-transcript.json' as the OutputKey
, your
* transcription output path is
* s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript.json
.
*
* If you specify 'my-first-transcription' as the
* TranscriptionJobName
, 'DOC-EXAMPLE-BUCKET' as the
* OutputBucketName
, and 'test-files/my-transcript' as the
* OutputKey
, your transcription output path is
* s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript/my-first-transcription.json
* .
*
* If you specify the name of an Amazon S3 bucket sub-folder that doesn't * exist, one is created for you. *
*
* Constraints:
* Length: 1 - 1024
* Pattern: [a-zA-Z0-9-_.!*'()/]{1,1024}$
*
* @return
* Use in combination with OutputBucketName
to specify
* the output location of your transcript and, optionally, a unique
* name for your output file. The default name for your
* transcription output is the same as the name you specified for
* your transcription job (TranscriptionJobName
).
*
* Here are some examples of how you can use OutputKey
:
*
* If you specify 'DOC-EXAMPLE-BUCKET' as the
* OutputBucketName
and 'my-transcript.json' as the
* OutputKey
, your transcription output path is
* s3://DOC-EXAMPLE-BUCKET/my-transcript.json
.
*
* If you specify 'my-first-transcription' as the
* TranscriptionJobName
, 'DOC-EXAMPLE-BUCKET' as the
* OutputBucketName
, and 'my-transcript' as the
* OutputKey
, your transcription output path is
* s3://DOC-EXAMPLE-BUCKET/my-transcript/my-first-transcription.json
* .
*
* If you specify 'DOC-EXAMPLE-BUCKET' as the
* OutputBucketName
and 'test-files/my-transcript.json'
* as the OutputKey
, your transcription output path is
* s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript.json
* .
*
* If you specify 'my-first-transcription' as the
* TranscriptionJobName
, 'DOC-EXAMPLE-BUCKET' as the
* OutputBucketName
, and 'test-files/my-transcript' as
* the OutputKey
, your transcription output path is
* s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript/my-first-transcription.json
* .
*
* If you specify the name of an Amazon S3 bucket sub-folder that * doesn't exist, one is created for you. *
*/ public String getOutputKey() { return outputKey; } /** *
* Use in combination with OutputBucketName
to specify the
* output location of your transcript and, optionally, a unique name for
* your output file. The default name for your transcription output is the
* same as the name you specified for your transcription job (
* TranscriptionJobName
).
*
* Here are some examples of how you can use OutputKey
:
*
* If you specify 'DOC-EXAMPLE-BUCKET' as the OutputBucketName
* and 'my-transcript.json' as the OutputKey
, your
* transcription output path is
* s3://DOC-EXAMPLE-BUCKET/my-transcript.json
.
*
* If you specify 'my-first-transcription' as the
* TranscriptionJobName
, 'DOC-EXAMPLE-BUCKET' as the
* OutputBucketName
, and 'my-transcript' as the
* OutputKey
, your transcription output path is
* s3://DOC-EXAMPLE-BUCKET/my-transcript/my-first-transcription.json
* .
*
* If you specify 'DOC-EXAMPLE-BUCKET' as the OutputBucketName
* and 'test-files/my-transcript.json' as the OutputKey
, your
* transcription output path is
* s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript.json
.
*
* If you specify 'my-first-transcription' as the
* TranscriptionJobName
, 'DOC-EXAMPLE-BUCKET' as the
* OutputBucketName
, and 'test-files/my-transcript' as the
* OutputKey
, your transcription output path is
* s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript/my-first-transcription.json
* .
*
* If you specify the name of an Amazon S3 bucket sub-folder that doesn't * exist, one is created for you. *
*
* Constraints:
* Length: 1 - 1024
* Pattern: [a-zA-Z0-9-_.!*'()/]{1,1024}$
*
* @param outputKey
* Use in combination with OutputBucketName
to
* specify the output location of your transcript and,
* optionally, a unique name for your output file. The default
* name for your transcription output is the same as the name you
* specified for your transcription job (
* TranscriptionJobName
).
*
* Here are some examples of how you can use
* OutputKey
:
*
* If you specify 'DOC-EXAMPLE-BUCKET' as the
* OutputBucketName
and 'my-transcript.json' as the
* OutputKey
, your transcription output path is
* s3://DOC-EXAMPLE-BUCKET/my-transcript.json
.
*
* If you specify 'my-first-transcription' as the
* TranscriptionJobName
, 'DOC-EXAMPLE-BUCKET' as the
* OutputBucketName
, and 'my-transcript' as the
* OutputKey
, your transcription output path is
* s3://DOC-EXAMPLE-BUCKET/my-transcript/my-first-transcription.json
* .
*
* If you specify 'DOC-EXAMPLE-BUCKET' as the
* OutputBucketName
and
* 'test-files/my-transcript.json' as the OutputKey
,
* your transcription output path is
* s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript.json
* .
*
* If you specify 'my-first-transcription' as the
* TranscriptionJobName
, 'DOC-EXAMPLE-BUCKET' as the
* OutputBucketName
, and 'test-files/my-transcript'
* as the OutputKey
, your transcription output path
* is
* s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript/my-first-transcription.json
* .
*
* If you specify the name of an Amazon S3 bucket sub-folder that * doesn't exist, one is created for you. *
*/ public void setOutputKey(String outputKey) { this.outputKey = outputKey; } /** *
* Use in combination with OutputBucketName
to specify the
* output location of your transcript and, optionally, a unique name for
* your output file. The default name for your transcription output is the
* same as the name you specified for your transcription job (
* TranscriptionJobName
).
*
* Here are some examples of how you can use OutputKey
:
*
* If you specify 'DOC-EXAMPLE-BUCKET' as the OutputBucketName
* and 'my-transcript.json' as the OutputKey
, your
* transcription output path is
* s3://DOC-EXAMPLE-BUCKET/my-transcript.json
.
*
* If you specify 'my-first-transcription' as the
* TranscriptionJobName
, 'DOC-EXAMPLE-BUCKET' as the
* OutputBucketName
, and 'my-transcript' as the
* OutputKey
, your transcription output path is
* s3://DOC-EXAMPLE-BUCKET/my-transcript/my-first-transcription.json
* .
*
* If you specify 'DOC-EXAMPLE-BUCKET' as the OutputBucketName
* and 'test-files/my-transcript.json' as the OutputKey
, your
* transcription output path is
* s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript.json
.
*
* If you specify 'my-first-transcription' as the
* TranscriptionJobName
, 'DOC-EXAMPLE-BUCKET' as the
* OutputBucketName
, and 'test-files/my-transcript' as the
* OutputKey
, your transcription output path is
* s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript/my-first-transcription.json
* .
*
* If you specify the name of an Amazon S3 bucket sub-folder that doesn't * exist, one is created for you. *
** Returns a reference to this object so that method calls can be chained * together. *
* Constraints:
* Length: 1 - 1024
* Pattern: [a-zA-Z0-9-_.!*'()/]{1,1024}$
*
* @param outputKey
* Use in combination with OutputBucketName
to
* specify the output location of your transcript and,
* optionally, a unique name for your output file. The default
* name for your transcription output is the same as the name you
* specified for your transcription job (
* TranscriptionJobName
).
*
* Here are some examples of how you can use
* OutputKey
:
*
* If you specify 'DOC-EXAMPLE-BUCKET' as the
* OutputBucketName
and 'my-transcript.json' as the
* OutputKey
, your transcription output path is
* s3://DOC-EXAMPLE-BUCKET/my-transcript.json
.
*
* If you specify 'my-first-transcription' as the
* TranscriptionJobName
, 'DOC-EXAMPLE-BUCKET' as the
* OutputBucketName
, and 'my-transcript' as the
* OutputKey
, your transcription output path is
* s3://DOC-EXAMPLE-BUCKET/my-transcript/my-first-transcription.json
* .
*
* If you specify 'DOC-EXAMPLE-BUCKET' as the
* OutputBucketName
and
* 'test-files/my-transcript.json' as the OutputKey
,
* your transcription output path is
* s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript.json
* .
*
* If you specify 'my-first-transcription' as the
* TranscriptionJobName
, 'DOC-EXAMPLE-BUCKET' as the
* OutputBucketName
, and 'test-files/my-transcript'
* as the OutputKey
, your transcription output path
* is
* s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript/my-first-transcription.json
* .
*
* If you specify the name of an Amazon S3 bucket sub-folder that * doesn't exist, one is created for you. *
* @return A reference to this updated object so that method calls can be * chained together. */ public StartTranscriptionJobRequest withOutputKey(String outputKey) { this.outputKey = outputKey; return this; } /** ** The KMS key you want to use to encrypt your transcription output. *
** If using a key located in the current Amazon Web Services account, * you can specify your KMS key in one of four ways: *
*
* Use the KMS key ID itself. For example,
* 1234abcd-12ab-34cd-56ef-1234567890ab
.
*
* Use an alias for the KMS key ID. For example,
* alias/ExampleAlias
.
*
* Use the Amazon Resource Name (ARN) for the KMS key ID. For example,
* arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab
* .
*
* Use the ARN for the KMS key alias. For example,
* arn:aws:kms:region:account-ID:alias/ExampleAlias
.
*
* If using a key located in a different Amazon Web Services account * than the current Amazon Web Services account, you can specify your KMS * key in one of two ways: *
*
* Use the ARN for the KMS key ID. For example,
* arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab
* .
*
* Use the ARN for the KMS key alias. For example,
* arn:aws:kms:region:account-ID:alias/ExampleAlias
.
*
* If you don't specify an encryption key, your output is encrypted with the * default Amazon S3 key (SSE-S3). *
*
* If you specify a KMS key to encrypt your output, you must also specify an
* output location using the OutputLocation
parameter.
*
* Note that the role making the request must have permission to use the * specified KMS key. *
*
* Constraints:
* Length: 1 - 2048
* Pattern: ^[A-Za-z0-9][A-Za-z0-9:_/+=,@.-]{0,2048}$
*
* @return
* The KMS key you want to use to encrypt your transcription output. *
** If using a key located in the current Amazon Web Services * account, you can specify your KMS key in one of four ways: *
*
* Use the KMS key ID itself. For example,
* 1234abcd-12ab-34cd-56ef-1234567890ab
.
*
* Use an alias for the KMS key ID. For example,
* alias/ExampleAlias
.
*
* Use the Amazon Resource Name (ARN) for the KMS key ID. For
* example,
* arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab
* .
*
* Use the ARN for the KMS key alias. For example,
* arn:aws:kms:region:account-ID:alias/ExampleAlias
.
*
* If using a key located in a different Amazon Web Services * account than the current Amazon Web Services account, you can * specify your KMS key in one of two ways: *
*
* Use the ARN for the KMS key ID. For example,
* arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab
* .
*
* Use the ARN for the KMS key alias. For example,
* arn:aws:kms:region:account-ID:alias/ExampleAlias
.
*
* If you don't specify an encryption key, your output is encrypted * with the default Amazon S3 key (SSE-S3). *
*
* If you specify a KMS key to encrypt your output, you must also
* specify an output location using the OutputLocation
* parameter.
*
* Note that the role making the request must have permission to use * the specified KMS key. *
*/ public String getOutputEncryptionKMSKeyId() { return outputEncryptionKMSKeyId; } /** ** The KMS key you want to use to encrypt your transcription output. *
** If using a key located in the current Amazon Web Services account, * you can specify your KMS key in one of four ways: *
*
* Use the KMS key ID itself. For example,
* 1234abcd-12ab-34cd-56ef-1234567890ab
.
*
* Use an alias for the KMS key ID. For example,
* alias/ExampleAlias
.
*
* Use the Amazon Resource Name (ARN) for the KMS key ID. For example,
* arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab
* .
*
* Use the ARN for the KMS key alias. For example,
* arn:aws:kms:region:account-ID:alias/ExampleAlias
.
*
* If using a key located in a different Amazon Web Services account * than the current Amazon Web Services account, you can specify your KMS * key in one of two ways: *
*
* Use the ARN for the KMS key ID. For example,
* arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab
* .
*
* Use the ARN for the KMS key alias. For example,
* arn:aws:kms:region:account-ID:alias/ExampleAlias
.
*
* If you don't specify an encryption key, your output is encrypted with the * default Amazon S3 key (SSE-S3). *
*
* If you specify a KMS key to encrypt your output, you must also specify an
* output location using the OutputLocation
parameter.
*
* Note that the role making the request must have permission to use the * specified KMS key. *
*
* Constraints:
* Length: 1 - 2048
* Pattern: ^[A-Za-z0-9][A-Za-z0-9:_/+=,@.-]{0,2048}$
*
* @param outputEncryptionKMSKeyId
* The KMS key you want to use to encrypt your transcription * output. *
** If using a key located in the current Amazon Web * Services account, you can specify your KMS key in one of four * ways: *
*
* Use the KMS key ID itself. For example,
* 1234abcd-12ab-34cd-56ef-1234567890ab
.
*
* Use an alias for the KMS key ID. For example,
* alias/ExampleAlias
.
*
* Use the Amazon Resource Name (ARN) for the KMS key ID. For
* example,
* arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab
* .
*
* Use the ARN for the KMS key alias. For example,
* arn:aws:kms:region:account-ID:alias/ExampleAlias
.
*
* If using a key located in a different Amazon Web * Services account than the current Amazon Web Services account, * you can specify your KMS key in one of two ways: *
*
* Use the ARN for the KMS key ID. For example,
* arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab
* .
*
* Use the ARN for the KMS key alias. For example,
* arn:aws:kms:region:account-ID:alias/ExampleAlias
.
*
* If you don't specify an encryption key, your output is * encrypted with the default Amazon S3 key (SSE-S3). *
*
* If you specify a KMS key to encrypt your output, you must also
* specify an output location using the
* OutputLocation
parameter.
*
* Note that the role making the request must have permission to * use the specified KMS key. *
*/ public void setOutputEncryptionKMSKeyId(String outputEncryptionKMSKeyId) { this.outputEncryptionKMSKeyId = outputEncryptionKMSKeyId; } /** ** The KMS key you want to use to encrypt your transcription output. *
    /**
     * The KMS key you want to use to encrypt your transcription output.
     *
     * If using a key located in the current Amazon Web Services account, you
     * can specify your KMS key in one of four ways:
     *
     * Use the KMS key ID itself. For example,
     * 1234abcd-12ab-34cd-56ef-1234567890ab.
     *
     * Use an alias for the KMS key ID. For example, alias/ExampleAlias.
     *
     * Use the Amazon Resource Name (ARN) for the KMS key ID. For example,
     * arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab.
     *
     * Use the ARN for the KMS key alias. For example,
     * arn:aws:kms:region:account-ID:alias/ExampleAlias.
     *
     * If using a key located in a different Amazon Web Services account than
     * the current Amazon Web Services account, you can specify your KMS key in
     * one of two ways:
     *
     * Use the ARN for the KMS key ID. For example,
     * arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab.
     *
     * Use the ARN for the KMS key alias. For example,
     * arn:aws:kms:region:account-ID:alias/ExampleAlias.
     *
     * If you don't specify an encryption key, your output is encrypted with
     * the default Amazon S3 key (SSE-S3).
     *
     * If you specify a KMS key to encrypt your output, you must also specify
     * an output location using the OutputLocation parameter.
     *
     * Note that the role making the request must have permission to use the
     * specified KMS key.
     *
     * Returns a reference to this object so that method calls can be chained
     * together.
     *
     * Constraints:
     * Length: 1 - 2048
     * Pattern: ^[A-Za-z0-9][A-Za-z0-9:_/+=,@.-]{0,2048}$
     *
     * @param outputEncryptionKMSKeyId
     *            The KMS key you want to use to encrypt your transcription
     *            output.
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public StartTranscriptionJobRequest withOutputEncryptionKMSKeyId(
            String outputEncryptionKMSKeyId) {
        this.outputEncryptionKMSKeyId = outputEncryptionKMSKeyId;
        return this;
    }
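    /*
     * Illustrative usage sketch (not part of the generated model): one way a
     * caller might supply an output-encryption key, assuming a hypothetical
     * output bucket named "DOC-EXAMPLE-BUCKET" and a placeholder alias ARN.
     *
     *     StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
     *             .withOutputBucketName("DOC-EXAMPLE-BUCKET")
     *             .withOutputEncryptionKMSKeyId(
     *                     "arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias");
     */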
    /**
     * A map of plain text, non-secret key:value pairs, known as encryption
     * context pairs, that provide an added layer of security for your data.
     * For more information, see KMS encryption context and Asymmetric keys in
     * KMS.
     *
     * @return A map of plain text, non-secret key:value pairs, known as
     *         encryption context pairs.
     */
    public java.util.Map<String, String> getKMSEncryptionContext() {
        return kMSEncryptionContext;
    }

    /**
     * A map of plain text, non-secret key:value pairs, known as encryption
     * context pairs, that provide an added layer of security for your data.
     * For more information, see KMS encryption context and Asymmetric keys in
     * KMS.
     *
     * @param kMSEncryptionContext
     *            A map of plain text, non-secret key:value pairs, known as
     *            encryption context pairs.
     */
    public void setKMSEncryptionContext(java.util.Map<String, String> kMSEncryptionContext) {
        this.kMSEncryptionContext = kMSEncryptionContext;
    }

    /**
     * A map of plain text, non-secret key:value pairs, known as encryption
     * context pairs, that provide an added layer of security for your data.
     * For more information, see KMS encryption context and Asymmetric keys in
     * KMS.
     *
     * Returns a reference to this object so that method calls can be chained
     * together.
     *
     * @param kMSEncryptionContext
     *            A map of plain text, non-secret key:value pairs, known as
     *            encryption context pairs.
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public StartTranscriptionJobRequest withKMSEncryptionContext(
            java.util.Map<String, String> kMSEncryptionContext) {
        this.kMSEncryptionContext = kMSEncryptionContext;
        return this;
    }

    /**
     * A map of plain text, non-secret key:value pairs, known as encryption
     * context pairs, that provide an added layer of security for your data.
     * For more information, see KMS encryption context and Asymmetric keys in
     * KMS.
*
* The method adds a new key-value pair into KMSEncryptionContext parameter,
* and returns a reference to this object so that method calls can be
* chained together.
*
* @param key The key of the entry to be added into KMSEncryptionContext.
* @param value The corresponding value of the entry to be added into
* KMSEncryptionContext.
* @return A reference to this updated object so that method calls can be
* chained together.
*/
public StartTranscriptionJobRequest addKMSEncryptionContextEntry(String key, String value) {
if (null == this.kMSEncryptionContext) {
            this.kMSEncryptionContext = new java.util.HashMap<String, String>();
        }
        if (this.kMSEncryptionContext.containsKey(key))
            throw new IllegalArgumentException("Duplicated keys (" + key.toString()
                    + ") are provided.");
        this.kMSEncryptionContext.put(key, value);
        return this;
    }

    /**
     * Removes all the entries added into KMSEncryptionContext.
     *
* Returns a reference to this object so that method calls can be chained
* together.
*/
public StartTranscriptionJobRequest clearKMSEncryptionContextEntries() {
this.kMSEncryptionContext = null;
return this;
}
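    /*
     * Illustrative usage sketch (not part of the generated model): encryption
     * context entries can be supplied either as a prebuilt map via
     * withKMSEncryptionContext or one pair at a time via
     * addKMSEncryptionContextEntry; the key names below are arbitrary examples.
     *
     *     StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
     *             .addKMSEncryptionContextEntry("Department", "Finance")
     *             .addKMSEncryptionContextEntry("Project", "Transcripts");
     */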
    /**
     * Specify additional optional settings in your request, including channel
     * identification, alternative transcriptions, and speaker partitioning.
     * You can use these settings to apply custom vocabularies and vocabulary
     * filters.
     *
     * If you want to include a custom vocabulary or a custom vocabulary filter
     * (or both) with your request but do not want to use automatic language
     * identification, use Settings with the VocabularyName or
     * VocabularyFilterName (or both) sub-parameter.
     *
     * If you're using automatic language identification with your request and
     * want to include a custom language model, a custom vocabulary, or a
     * custom vocabulary filter, use instead the LanguageIdSettings parameter
     * with the LanguageModelName, VocabularyName, or VocabularyFilterName
     * sub-parameters.
     *
     * @return The additional optional settings included with your request.
     */
    public Settings getSettings() {
        return settings;
    }

    /**
     * Specify additional optional settings in your request, including channel
     * identification, alternative transcriptions, and speaker partitioning.
     * You can use these settings to apply custom vocabularies and vocabulary
     * filters.
     *
     * @param settings
     *            The additional optional settings to include with your
     *            request. Use Settings with the VocabularyName or
     *            VocabularyFilterName (or both) sub-parameter if you are not
     *            using automatic language identification; otherwise use the
     *            LanguageIdSettings parameter instead.
     */
    public void setSettings(Settings settings) {
        this.settings = settings;
    }

    /**
     * Specify additional optional settings in your request, including channel
     * identification, alternative transcriptions, and speaker partitioning.
     * You can use these settings to apply custom vocabularies and vocabulary
     * filters.
     *
     * Returns a reference to this object so that method calls can be chained
     * together.
     *
     * @param settings
     *            The additional optional settings to include with your
     *            request.
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public StartTranscriptionJobRequest withSettings(Settings settings) {
        this.settings = settings;
        return this;
    }
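    /*
     * Illustrative usage sketch (not part of the generated model): applying a
     * custom vocabulary and vocabulary filter through Settings when automatic
     * language identification is not used. The vocabulary and filter names are
     * hypothetical and must already exist in your account.
     *
     *     StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
     *             .withLanguageCode("en-US")
     *             .withSettings(new Settings()
     *                     .withVocabularyName("my-custom-vocabulary")
     *                     .withVocabularyFilterName("my-vocabulary-filter"));
     */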
    /**
     * Specify the custom language model you want to include with your
     * transcription job. If you include ModelSettings in your request, you
     * must include the LanguageModelName sub-parameter.
     *
     * For more information, see Custom language models.
     *
     * @return The custom language model settings included with your request.
     */
    public ModelSettings getModelSettings() {
        return modelSettings;
    }

    /**
     * Specify the custom language model you want to include with your
     * transcription job. If you include ModelSettings in your request, you
     * must include the LanguageModelName sub-parameter.
     *
     * For more information, see Custom language models.
     *
     * @param modelSettings
     *            The custom language model settings to include with your
     *            request.
     */
    public void setModelSettings(ModelSettings modelSettings) {
        this.modelSettings = modelSettings;
    }

    /**
     * Specify the custom language model you want to include with your
     * transcription job. If you include ModelSettings in your request, you
     * must include the LanguageModelName sub-parameter.
     *
     * Returns a reference to this object so that method calls can be chained
     * together.
     *
     * @param modelSettings
     *            The custom language model settings to include with your
     *            request.
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public StartTranscriptionJobRequest withModelSettings(ModelSettings modelSettings) {
        this.modelSettings = modelSettings;
        return this;
    }
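    /*
     * Illustrative usage sketch (not part of the generated model): attaching a
     * custom language model by name; "my-custom-language-model" is a
     * hypothetical model that must already exist in your account.
     *
     *     StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
     *             .withLanguageCode("en-US")
     *             .withModelSettings(new ModelSettings()
     *                     .withLanguageModelName("my-custom-language-model"));
     */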
    /**
     * Makes it possible to control how your transcription job is processed.
     * Currently, the only JobExecutionSettings modification you can choose is
     * enabling job queueing using the AllowDeferredExecution sub-parameter.
     *
     * If you include JobExecutionSettings in your request, you must also
     * include the sub-parameters: AllowDeferredExecution and
     * DataAccessRoleArn.
     *
     * @return The job execution settings included with your request.
     */
    public JobExecutionSettings getJobExecutionSettings() {
        return jobExecutionSettings;
    }

    /**
     * Makes it possible to control how your transcription job is processed.
     * If you include JobExecutionSettings in your request, you must also
     * include the sub-parameters: AllowDeferredExecution and
     * DataAccessRoleArn.
     *
     * @param jobExecutionSettings
     *            The job execution settings to include with your request.
     */
    public void setJobExecutionSettings(JobExecutionSettings jobExecutionSettings) {
        this.jobExecutionSettings = jobExecutionSettings;
    }

    /**
     * Makes it possible to control how your transcription job is processed.
     * If you include JobExecutionSettings in your request, you must also
     * include the sub-parameters: AllowDeferredExecution and
     * DataAccessRoleArn.
     *
     * Returns a reference to this object so that method calls can be chained
     * together.
     *
     * @param jobExecutionSettings
     *            The job execution settings to include with your request.
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public StartTranscriptionJobRequest withJobExecutionSettings(
            JobExecutionSettings jobExecutionSettings) {
        this.jobExecutionSettings = jobExecutionSettings;
        return this;
    }
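    /*
     * Illustrative usage sketch (not part of the generated model): enabling
     * job queueing, which requires both AllowDeferredExecution and
     * DataAccessRoleArn as described above. The role ARN is a placeholder.
     *
     *     StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
     *             .withJobExecutionSettings(new JobExecutionSettings()
     *                     .withAllowDeferredExecution(true)
     *                     .withDataAccessRoleArn(
     *                             "arn:aws:iam::111122223333:role/ExampleTranscribeRole"));
     */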
    /**
     * Makes it possible to redact or flag specified personally identifiable
     * information (PII) in your transcript. If you use ContentRedaction, you
     * must also include the sub-parameters: PiiEntityTypes, RedactionOutput,
     * and RedactionType.
     *
     * @return The content redaction settings included with your request.
     */
    public ContentRedaction getContentRedaction() {
        return contentRedaction;
    }

    /**
     * Makes it possible to redact or flag specified personally identifiable
     * information (PII) in your transcript. If you use ContentRedaction, you
     * must also include the sub-parameters: PiiEntityTypes, RedactionOutput,
     * and RedactionType.
     *
     * @param contentRedaction
     *            The content redaction settings to include with your request.
     */
    public void setContentRedaction(ContentRedaction contentRedaction) {
        this.contentRedaction = contentRedaction;
    }

    /**
     * Makes it possible to redact or flag specified personally identifiable
     * information (PII) in your transcript. If you use ContentRedaction, you
     * must also include the sub-parameters: PiiEntityTypes, RedactionOutput,
     * and RedactionType.
     *
     * Returns a reference to this object so that method calls can be chained
     * together.
     *
     * @param contentRedaction
     *            The content redaction settings to include with your request.
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public StartTranscriptionJobRequest withContentRedaction(ContentRedaction contentRedaction) {
        this.contentRedaction = contentRedaction;
        return this;
    }
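    /*
     * Illustrative usage sketch (not part of the generated model): redacting
     * PII in the transcript. As noted above, PiiEntityTypes, RedactionOutput,
     * and RedactionType must all be provided when ContentRedaction is used;
     * the values shown are examples only.
     *
     *     StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
     *             .withContentRedaction(new ContentRedaction()
     *                     .withRedactionType("PII")
     *                     .withRedactionOutput("redacted")
     *                     .withPiiEntityTypes("ALL"));
     */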
    /**
     * Enables automatic language identification in your transcription job
     * request. Use this parameter if your media file contains only one
     * language. If your media contains multiple languages, use
     * IdentifyMultipleLanguages instead.
     *
     * If you include IdentifyLanguage, you can optionally include a list of
     * language codes, using LanguageOptions, that you think may be present in
     * your media file. Including LanguageOptions restricts IdentifyLanguage to
     * only the language options that you specify, which can improve
     * transcription accuracy.
     *
     * If you want to apply a custom language model, a custom vocabulary, or a
     * custom vocabulary filter to your automatic language identification
     * request, include LanguageIdSettings with the relevant sub-parameters
     * (VocabularyName, LanguageModelName, and VocabularyFilterName). If you
     * include LanguageIdSettings, also include LanguageOptions.
     *
     * Note that you must include one of LanguageCode, IdentifyLanguage, or
     * IdentifyMultipleLanguages in your request. If you include more than one
     * of these parameters, your transcription job fails.
     *
     * @return true if automatic language identification is enabled for this
     *         request, false otherwise.
     */
    public Boolean isIdentifyLanguage() {
        return identifyLanguage;
    }

    /**
     * Enables automatic language identification in your transcription job
     * request. Use this parameter if your media file contains only one
     * language. If your media contains multiple languages, use
     * IdentifyMultipleLanguages instead.
     *
     * @return true if automatic language identification is enabled for this
     *         request, false otherwise.
     */
    public Boolean getIdentifyLanguage() {
        return identifyLanguage;
    }

    /**
     * Enables automatic language identification in your transcription job
     * request. Use this parameter if your media file contains only one
     * language. If your media contains multiple languages, use
     * IdentifyMultipleLanguages instead.
     *
     * @param identifyLanguage
     *            Set to true to enable automatic language identification. Note
     *            that you must include one of LanguageCode, IdentifyLanguage,
     *            or IdentifyMultipleLanguages in your request; including more
     *            than one causes your transcription job to fail.
     */
    public void setIdentifyLanguage(Boolean identifyLanguage) {
        this.identifyLanguage = identifyLanguage;
    }

    /**
     * Enables automatic language identification in your transcription job
     * request. Use this parameter if your media file contains only one
     * language. If your media contains multiple languages, use
     * IdentifyMultipleLanguages instead.
     *
     * Returns a reference to this object so that method calls can be chained
     * together.
     *
     * @param identifyLanguage
     *            Set to true to enable automatic language identification.
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public StartTranscriptionJobRequest withIdentifyLanguage(Boolean identifyLanguage) {
        this.identifyLanguage = identifyLanguage;
        return this;
    }
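    /*
     * Illustrative usage sketch (not part of the generated model): enabling
     * single-language identification and narrowing it to a few expected
     * languages with LanguageOptions, which can improve accuracy.
     *
     *     StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
     *             .withIdentifyLanguage(true)
     *             .withLanguageOptions("en-US", "es-US", "fr-CA");
     */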
    /**
     * Enables automatic multi-language identification in your transcription
     * job request. Use this parameter if your media file contains more than
     * one language. If your media contains only one language, use
     * IdentifyLanguage instead.
     *
     * If you include IdentifyMultipleLanguages, you can optionally include a
     * list of language codes, using LanguageOptions, that you think may be
     * present in your media file. Including LanguageOptions restricts
     * IdentifyLanguage to only the language options that you specify, which
     * can improve transcription accuracy.
     *
     * If you want to apply a custom vocabulary or a custom vocabulary filter
     * to your automatic language identification request, include
     * LanguageIdSettings with the relevant sub-parameters (VocabularyName and
     * VocabularyFilterName). If you include LanguageIdSettings, also include
     * LanguageOptions.
     *
     * Note that you must include one of LanguageCode, IdentifyLanguage, or
     * IdentifyMultipleLanguages in your request. If you include more than one
     * of these parameters, your transcription job fails.
     *
     * @return true if automatic multi-language identification is enabled for
     *         this request, false otherwise.
     */
    public Boolean isIdentifyMultipleLanguages() {
        return identifyMultipleLanguages;
    }

    /**
     * Enables automatic multi-language identification in your transcription
     * job request. Use this parameter if your media file contains more than
     * one language. If your media contains only one language, use
     * IdentifyLanguage instead.
     *
     * @return true if automatic multi-language identification is enabled for
     *         this request, false otherwise.
     */
    public Boolean getIdentifyMultipleLanguages() {
        return identifyMultipleLanguages;
    }

    /**
     * Enables automatic multi-language identification in your transcription
     * job request. Use this parameter if your media file contains more than
     * one language. If your media contains only one language, use
     * IdentifyLanguage instead.
     *
     * @param identifyMultipleLanguages
     *            Set to true to enable automatic multi-language
     *            identification. Note that you must include one of
     *            LanguageCode, IdentifyLanguage, or IdentifyMultipleLanguages
     *            in your request; including more than one causes your
     *            transcription job to fail.
     */
    public void setIdentifyMultipleLanguages(Boolean identifyMultipleLanguages) {
        this.identifyMultipleLanguages = identifyMultipleLanguages;
    }

    /**
     * Enables automatic multi-language identification in your transcription
     * job request. Use this parameter if your media file contains more than
     * one language. If your media contains only one language, use
     * IdentifyLanguage instead.
     *
     * Returns a reference to this object so that method calls can be chained
     * together.
     *
     * @param identifyMultipleLanguages
     *            Set to true to enable automatic multi-language
     *            identification.
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public StartTranscriptionJobRequest withIdentifyMultipleLanguages(
            Boolean identifyMultipleLanguages) {
        this.identifyMultipleLanguages = identifyMultipleLanguages;
        return this;
    }
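    /*
     * Illustrative usage sketch (not part of the generated model): enabling
     * multi-language identification for media that mixes languages, again
     * narrowed with LanguageOptions.
     *
     *     StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
     *             .withIdentifyMultipleLanguages(true)
     *             .withLanguageOptions("en-US", "es-US");
     */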
    /**
     * You can specify two or more language codes that represent the languages
     * you think may be present in your media. Including more than five is not
     * recommended. If you're unsure what languages are present, do not include
     * this parameter.
     *
     * If you include LanguageOptions in your request, you must also include
     * IdentifyLanguage.
     *
     * For more information, refer to Supported languages.
     *
     * To transcribe speech in Modern Standard Arabic (ar-SA), your media file
     * must be encoded at a sample rate of 16,000 Hz or higher.
     *
     * @return The list of language codes you think may be present in your
     *         media.
     */
    public java.util.List<String> getLanguageOptions() {
        return languageOptions;
    }

    /**
     * You can specify two or more language codes that represent the languages
     * you think may be present in your media. Including more than five is not
     * recommended. If you include LanguageOptions in your request, you must
     * also include IdentifyLanguage.
     *
     * @param languageOptions
     *            The language codes you think may be present in your media.
     */
    public void setLanguageOptions(java.util.Collection<String> languageOptions) {
        if (languageOptions == null) {
            this.languageOptions = null;
            return;
        }
        this.languageOptions = new java.util.ArrayList<String>(languageOptions);
    }

    /**
     * You can specify two or more language codes that represent the languages
     * you think may be present in your media. Including more than five is not
     * recommended. If you include LanguageOptions in your request, you must
     * also include IdentifyLanguage.
     *
     * Returns a reference to this object so that method calls can be chained
     * together.
     *
     * @param languageOptions
     *            The language codes you think may be present in your media.
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public StartTranscriptionJobRequest withLanguageOptions(String... languageOptions) {
        if (getLanguageOptions() == null) {
            this.languageOptions = new java.util.ArrayList<String>(languageOptions.length);
        }
        for (String value : languageOptions) {
            this.languageOptions.add(value);
        }
        return this;
    }

    /**
     * You can specify two or more language codes that represent the languages
     * you think may be present in your media. Including more than five is not
     * recommended. If you include LanguageOptions in your request, you must
     * also include IdentifyLanguage.
     *
     * Returns a reference to this object so that method calls can be chained
     * together.
     *
     * @param languageOptions
     *            The language codes you think may be present in your media.
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public StartTranscriptionJobRequest withLanguageOptions(
            java.util.Collection<String> languageOptions) {
        setLanguageOptions(languageOptions);
        return this;
    }
    /**
     * Produces subtitle files for your input media. You can specify WebVTT
     * (*.vtt) and SubRip (*.srt) formats.
     *
     * @return The subtitle settings included with your request.
     */
    public Subtitles getSubtitles() {
        return subtitles;
    }

    /**
     * Produces subtitle files for your input media. You can specify WebVTT
     * (*.vtt) and SubRip (*.srt) formats.
     *
     * @param subtitles
     *            The subtitle settings to include with your request.
     */
    public void setSubtitles(Subtitles subtitles) {
        this.subtitles = subtitles;
    }

    /**
     * Produces subtitle files for your input media. You can specify WebVTT
     * (*.vtt) and SubRip (*.srt) formats.
     *
     * Returns a reference to this object so that method calls can be chained
     * together.
     *
     * @param subtitles
     *            The subtitle settings to include with your request.
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public StartTranscriptionJobRequest withSubtitles(Subtitles subtitles) {
        this.subtitles = subtitles;
        return this;
    }
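    /*
     * Illustrative usage sketch (not part of the generated model): requesting
     * both WebVTT and SubRip subtitle output for the job.
     *
     *     StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
     *             .withSubtitles(new Subtitles().withFormats("vtt", "srt"));
     */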
    /**
     * Adds one or more custom tags, each in the form of a key:value pair, to a
     * new transcription job at the time you start this new job.
     *
     * To learn more about using tags with Amazon Transcribe, refer to Tagging
     * resources.
     *
     * @return The tags to add to the new transcription job.
     */
    public java.util.List<Tag> getTags() {
        return tags;
    }

    /**
     * Adds one or more custom tags, each in the form of a key:value pair, to a
     * new transcription job at the time you start this new job.
     *
     * @param tags
     *            The tags to add to the new transcription job.
     */
    public void setTags(java.util.Collection<Tag> tags) {
        if (tags == null) {
            this.tags = null;
            return;
        }
        this.tags = new java.util.ArrayList<Tag>(tags);
    }

    /**
     * Adds one or more custom tags, each in the form of a key:value pair, to a
     * new transcription job at the time you start this new job.
     *
     * Returns a reference to this object so that method calls can be chained
     * together.
     *
     * @param tags
     *            The tags to add to the new transcription job.
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public StartTranscriptionJobRequest withTags(Tag... tags) {
        if (getTags() == null) {
            this.tags = new java.util.ArrayList<Tag>(tags.length);
        }
        for (Tag value : tags) {
            this.tags.add(value);
        }
        return this;
    }

    /**
     * Adds one or more custom tags, each in the form of a key:value pair, to a
     * new transcription job at the time you start this new job.
     *
     * Returns a reference to this object so that method calls can be chained
     * together.
     *
     * @param tags
     *            The tags to add to the new transcription job.
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public StartTranscriptionJobRequest withTags(java.util.Collection<Tag> tags) {
        setTags(tags);
        return this;
    }
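    /*
     * Illustrative usage sketch (not part of the generated model): tagging the
     * new job at creation time; the key and value shown are arbitrary.
     *
     *     StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
     *             .withTags(new Tag().withKey("Department").withValue("Finance"));
     */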
    /**
     * If using automatic language identification in your request and you want
     * to apply a custom language model, a custom vocabulary, or a custom
     * vocabulary filter, include LanguageIdSettings with the relevant
     * sub-parameters (VocabularyName, LanguageModelName, and
     * VocabularyFilterName). Note that multi-language identification
     * (IdentifyMultipleLanguages) doesn't support custom language models.
     *
     * LanguageIdSettings supports two to five language codes. Each language
     * code you include can have an associated custom language model, custom
     * vocabulary, and custom vocabulary filter. The language codes that you
     * specify must match the languages of the associated custom language
     * models, custom vocabularies, and custom vocabulary filters.
     *
     * It's recommended that you include LanguageOptions when using
     * LanguageIdSettings to ensure that the correct language dialect is
     * identified. For example, if you specify a custom vocabulary that is in
     * en-US but Amazon Transcribe determines that the language spoken in your
     * media is en-AU, your custom vocabulary is not applied to your
     * transcription. If you include LanguageOptions and include en-US as the
     * only English language dialect, your custom vocabulary is applied to your
     * transcription.
     *
     * If you want to include a custom language model with your request but do
     * not want to use automatic language identification, use instead the
     * ModelSettings parameter with the LanguageModelName sub-parameter. If you
     * want to include a custom vocabulary or a custom vocabulary filter (or
     * both) with your request but do not want to use automatic language
     * identification, use instead the Settings parameter with the
     * VocabularyName or VocabularyFilterName (or both) sub-parameter.
     *
     * @return A map of language codes to the language identification settings
     *         to apply for each language.
     */
    public java.util.Map<String, LanguageIdSettings> getLanguageIdSettings() {
        return languageIdSettings;
    }

    /**
     * If using automatic language identification in your request and you want
     * to apply a custom language model, a custom vocabulary, or a custom
     * vocabulary filter, include LanguageIdSettings with the relevant
     * sub-parameters (VocabularyName, LanguageModelName, and
     * VocabularyFilterName). If you include LanguageIdSettings, also include
     * LanguageOptions.
     *
     * @param languageIdSettings
     *            A map of language codes to the language identification
     *            settings to apply for each language.
     */
    public void setLanguageIdSettings(
            java.util.Map<String, LanguageIdSettings> languageIdSettings) {
        this.languageIdSettings = languageIdSettings;
    }

    /**
     * If using automatic language identification in your request and you want
     * to apply a custom language model, a custom vocabulary, or a custom
     * vocabulary filter, include LanguageIdSettings with the relevant
     * sub-parameters (VocabularyName, LanguageModelName, and
     * VocabularyFilterName).
     *
     * Returns a reference to this object so that method calls can be chained
     * together.
     *
     * @param languageIdSettings
     *            A map of language codes to the language identification
     *            settings to apply for each language.
     * @return A reference to this updated object so that method calls can be
     *         chained together.
     */
    public StartTranscriptionJobRequest withLanguageIdSettings(
            java.util.Map<String, LanguageIdSettings> languageIdSettings) {
        this.languageIdSettings = languageIdSettings;
        return this;
    }

    /**
     * If using automatic language identification in your request and you want
     * to apply a custom language model, a custom vocabulary, or a custom
     * vocabulary filter, include LanguageIdSettings with the relevant
     * sub-parameters (VocabularyName, LanguageModelName, and
     * VocabularyFilterName).
     *
* The method adds a new key-value pair into LanguageIdSettings parameter,
* and returns a reference to this object so that method calls can be
* chained together.
*
* @param key The key of the entry to be added into LanguageIdSettings.
* @param value The corresponding value of the entry to be added into
* LanguageIdSettings.
* @return A reference to this updated object so that method calls can be
* chained together.
*/
public StartTranscriptionJobRequest addLanguageIdSettingsEntry(String key,
LanguageIdSettings value) {
if (null == this.languageIdSettings) {
            this.languageIdSettings = new java.util.HashMap<String, LanguageIdSettings>();
        }
        if (this.languageIdSettings.containsKey(key))
            throw new IllegalArgumentException("Duplicated keys (" + key.toString()
                    + ") are provided.");
        this.languageIdSettings.put(key, value);
        return this;
    }

    /**
     * Removes all the entries added into LanguageIdSettings.
     *
* Returns a reference to this object so that method calls can be chained
* together.
*/
public StartTranscriptionJobRequest clearLanguageIdSettingsEntries() {
this.languageIdSettings = null;
return this;
}
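    /*
     * Illustrative usage sketch (not part of the generated model): pairing
     * automatic language identification with per-language resources. The
     * vocabulary names are hypothetical, and LanguageOptions is included as
     * recommended above so the correct dialect is matched.
     *
     *     StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
     *             .withIdentifyLanguage(true)
     *             .withLanguageOptions("en-US", "es-US")
     *             .addLanguageIdSettingsEntry("en-US", new LanguageIdSettings()
     *                     .withVocabularyName("my-en-us-vocabulary"))
     *             .addLanguageIdSettingsEntry("es-US", new LanguageIdSettings()
     *                     .withVocabularyName("my-es-us-vocabulary"));
     */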
/**
* Returns a string representation of this object; useful for testing and
* debugging.
*
* @return A string representation of this object.
* @see java.lang.Object#toString()
*/
@Override
public String toString() {
StringBuilder sb = new StringBuilder();
sb.append("{");
if (getTranscriptionJobName() != null)
sb.append("TranscriptionJobName: " + getTranscriptionJobName() + ",");
if (getLanguageCode() != null)
sb.append("LanguageCode: " + getLanguageCode() + ",");
if (getMediaSampleRateHertz() != null)
sb.append("MediaSampleRateHertz: " + getMediaSampleRateHertz() + ",");
if (getMediaFormat() != null)
sb.append("MediaFormat: " + getMediaFormat() + ",");
if (getMedia() != null)
sb.append("Media: " + getMedia() + ",");
if (getOutputBucketName() != null)
sb.append("OutputBucketName: " + getOutputBucketName() + ",");
if (getOutputKey() != null)
sb.append("OutputKey: " + getOutputKey() + ",");
if (getOutputEncryptionKMSKeyId() != null)
sb.append("OutputEncryptionKMSKeyId: " + getOutputEncryptionKMSKeyId() + ",");
if (getKMSEncryptionContext() != null)
sb.append("KMSEncryptionContext: " + getKMSEncryptionContext() + ",");
if (getSettings() != null)
sb.append("Settings: " + getSettings() + ",");
if (getModelSettings() != null)
sb.append("ModelSettings: " + getModelSettings() + ",");
if (getJobExecutionSettings() != null)
sb.append("JobExecutionSettings: " + getJobExecutionSettings() + ",");
if (getContentRedaction() != null)
sb.append("ContentRedaction: " + getContentRedaction() + ",");
if (getIdentifyLanguage() != null)
sb.append("IdentifyLanguage: " + getIdentifyLanguage() + ",");
if (getIdentifyMultipleLanguages() != null)
sb.append("IdentifyMultipleLanguages: " + getIdentifyMultipleLanguages() + ",");
if (getLanguageOptions() != null)
sb.append("LanguageOptions: " + getLanguageOptions() + ",");
if (getSubtitles() != null)
sb.append("Subtitles: " + getSubtitles() + ",");
if (getTags() != null)
sb.append("Tags: " + getTags() + ",");
if (getLanguageIdSettings() != null)
sb.append("LanguageIdSettings: " + getLanguageIdSettings());
sb.append("}");
return sb.toString();
}
@Override
public int hashCode() {
final int prime = 31;
int hashCode = 1;
hashCode = prime * hashCode
+ ((getTranscriptionJobName() == null) ? 0 : getTranscriptionJobName().hashCode());
hashCode = prime * hashCode
+ ((getLanguageCode() == null) ? 0 : getLanguageCode().hashCode());
hashCode = prime * hashCode
+ ((getMediaSampleRateHertz() == null) ? 0 : getMediaSampleRateHertz().hashCode());
hashCode = prime * hashCode
+ ((getMediaFormat() == null) ? 0 : getMediaFormat().hashCode());
hashCode = prime * hashCode + ((getMedia() == null) ? 0 : getMedia().hashCode());
hashCode = prime * hashCode
+ ((getOutputBucketName() == null) ? 0 : getOutputBucketName().hashCode());
hashCode = prime * hashCode + ((getOutputKey() == null) ? 0 : getOutputKey().hashCode());
hashCode = prime
* hashCode
+ ((getOutputEncryptionKMSKeyId() == null) ? 0 : getOutputEncryptionKMSKeyId()
.hashCode());
hashCode = prime * hashCode
+ ((getKMSEncryptionContext() == null) ? 0 : getKMSEncryptionContext().hashCode());
hashCode = prime * hashCode + ((getSettings() == null) ? 0 : getSettings().hashCode());
hashCode = prime * hashCode
+ ((getModelSettings() == null) ? 0 : getModelSettings().hashCode());
hashCode = prime * hashCode
+ ((getJobExecutionSettings() == null) ? 0 : getJobExecutionSettings().hashCode());
hashCode = prime * hashCode
+ ((getContentRedaction() == null) ? 0 : getContentRedaction().hashCode());
hashCode = prime * hashCode
+ ((getIdentifyLanguage() == null) ? 0 : getIdentifyLanguage().hashCode());
hashCode = prime
* hashCode
+ ((getIdentifyMultipleLanguages() == null) ? 0 : getIdentifyMultipleLanguages()
.hashCode());
hashCode = prime * hashCode
+ ((getLanguageOptions() == null) ? 0 : getLanguageOptions().hashCode());
hashCode = prime * hashCode + ((getSubtitles() == null) ? 0 : getSubtitles().hashCode());
hashCode = prime * hashCode + ((getTags() == null) ? 0 : getTags().hashCode());
hashCode = prime * hashCode
+ ((getLanguageIdSettings() == null) ? 0 : getLanguageIdSettings().hashCode());
return hashCode;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (obj instanceof StartTranscriptionJobRequest == false)
return false;
StartTranscriptionJobRequest other = (StartTranscriptionJobRequest) obj;
if (other.getTranscriptionJobName() == null ^ this.getTranscriptionJobName() == null)
return false;
if (other.getTranscriptionJobName() != null
&& other.getTranscriptionJobName().equals(this.getTranscriptionJobName()) == false)
return false;
if (other.getLanguageCode() == null ^ this.getLanguageCode() == null)
return false;
if (other.getLanguageCode() != null
&& other.getLanguageCode().equals(this.getLanguageCode()) == false)
return false;
if (other.getMediaSampleRateHertz() == null ^ this.getMediaSampleRateHertz() == null)
return false;
if (other.getMediaSampleRateHertz() != null
&& other.getMediaSampleRateHertz().equals(this.getMediaSampleRateHertz()) == false)
return false;
if (other.getMediaFormat() == null ^ this.getMediaFormat() == null)
return false;
if (other.getMediaFormat() != null
&& other.getMediaFormat().equals(this.getMediaFormat()) == false)
return false;
if (other.getMedia() == null ^ this.getMedia() == null)
return false;
if (other.getMedia() != null && other.getMedia().equals(this.getMedia()) == false)
return false;
if (other.getOutputBucketName() == null ^ this.getOutputBucketName() == null)
return false;
if (other.getOutputBucketName() != null
&& other.getOutputBucketName().equals(this.getOutputBucketName()) == false)
return false;
if (other.getOutputKey() == null ^ this.getOutputKey() == null)
return false;
if (other.getOutputKey() != null
&& other.getOutputKey().equals(this.getOutputKey()) == false)
return false;
if (other.getOutputEncryptionKMSKeyId() == null
^ this.getOutputEncryptionKMSKeyId() == null)
return false;
if (other.getOutputEncryptionKMSKeyId() != null
&& other.getOutputEncryptionKMSKeyId().equals(this.getOutputEncryptionKMSKeyId()) == false)
return false;
if (other.getKMSEncryptionContext() == null ^ this.getKMSEncryptionContext() == null)
return false;
if (other.getKMSEncryptionContext() != null
&& other.getKMSEncryptionContext().equals(this.getKMSEncryptionContext()) == false)
return false;
if (other.getSettings() == null ^ this.getSettings() == null)
return false;
if (other.getSettings() != null && other.getSettings().equals(this.getSettings()) == false)
return false;
if (other.getModelSettings() == null ^ this.getModelSettings() == null)
return false;
if (other.getModelSettings() != null
&& other.getModelSettings().equals(this.getModelSettings()) == false)
return false;
if (other.getJobExecutionSettings() == null ^ this.getJobExecutionSettings() == null)
return false;
if (other.getJobExecutionSettings() != null
&& other.getJobExecutionSettings().equals(this.getJobExecutionSettings()) == false)
return false;
if (other.getContentRedaction() == null ^ this.getContentRedaction() == null)
return false;
if (other.getContentRedaction() != null
&& other.getContentRedaction().equals(this.getContentRedaction()) == false)
return false;
if (other.getIdentifyLanguage() == null ^ this.getIdentifyLanguage() == null)
return false;
if (other.getIdentifyLanguage() != null
&& other.getIdentifyLanguage().equals(this.getIdentifyLanguage()) == false)
return false;
if (other.getIdentifyMultipleLanguages() == null
^ this.getIdentifyMultipleLanguages() == null)
return false;
if (other.getIdentifyMultipleLanguages() != null
&& other.getIdentifyMultipleLanguages().equals(this.getIdentifyMultipleLanguages()) == false)
return false;
if (other.getLanguageOptions() == null ^ this.getLanguageOptions() == null)
return false;
if (other.getLanguageOptions() != null
&& other.getLanguageOptions().equals(this.getLanguageOptions()) == false)
return false;
if (other.getSubtitles() == null ^ this.getSubtitles() == null)
return false;
if (other.getSubtitles() != null
&& other.getSubtitles().equals(this.getSubtitles()) == false)
return false;
if (other.getTags() == null ^ this.getTags() == null)
return false;
if (other.getTags() != null && other.getTags().equals(this.getTags()) == false)
return false;
if (other.getLanguageIdSettings() == null ^ this.getLanguageIdSettings() == null)
return false;
if (other.getLanguageIdSettings() != null
&& other.getLanguageIdSettings().equals(this.getLanguageIdSettings()) == false)
return false;
return true;
}
}
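/*
 * End-to-end usage sketch (illustrative only, not part of the generated
 * model): building a minimal request and starting the job, assuming a
 * configured AmazonTranscribeClient named "transcribeClient", an S3 media file
 * at the placeholder URI shown, and a job name that is unique in your account.
 *
 *     StartTranscriptionJobRequest request = new StartTranscriptionJobRequest()
 *             .withTranscriptionJobName("my-first-transcription-job")
 *             .withMedia(new Media().withMediaFileUri(
 *                     "s3://DOC-EXAMPLE-BUCKET/my-input-files/my-media-file.flac"))
 *             .withLanguageCode("en-US");
 *
 *     StartTranscriptionJobResult result = transcribeClient.startTranscriptionJob(request);
 */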
Settings
with the
* VocabularyName
or VocabularyFilterName
(or
* both) sub-parameter.
* parameter with the
LanguageModelName
,
* VocabularyName
or VocabularyFilterName
* sub-parameters.
* Settings
with
* the VocabularyName
or
* VocabularyFilterName
(or both) sub-parameter.
* parameter with the
LanguageModelName
,
* VocabularyName
or VocabularyFilterName
* sub-parameters.
* Settings
with the
* VocabularyName
or VocabularyFilterName
(or
* both) sub-parameter.
* parameter with the
LanguageModelName
,
* VocabularyName
or VocabularyFilterName
* sub-parameters.
* Settings
with the VocabularyName
or
* VocabularyFilterName
(or both) sub-parameter.
* parameter with the
LanguageModelName
,
* VocabularyName
or
* VocabularyFilterName
sub-parameters.
* Settings
with the
* VocabularyName
or VocabularyFilterName
(or
* both) sub-parameter.
* parameter with the
LanguageModelName
,
* VocabularyName
or VocabularyFilterName
* sub-parameters.
* Settings
with the VocabularyName
or
* VocabularyFilterName
(or both) sub-parameter.
* parameter with the
LanguageModelName
,
* VocabularyName
or
* VocabularyFilterName
sub-parameters.
* ModelSettings
in your
* request, you must include the LanguageModelName
* sub-parameter.
* ModelSettings
in
* your request, you must include the LanguageModelName
* sub-parameter.
* ModelSettings
in your
* request, you must include the LanguageModelName
* sub-parameter.
* ModelSettings
in your request, you must include
* the LanguageModelName
sub-parameter.
* ModelSettings
in your
* request, you must include the LanguageModelName
* sub-parameter.
* ModelSettings
in your request, you must include
* the LanguageModelName
sub-parameter.
* JobExecutionSettings
modification you
* can choose is enabling job queueing using the
* AllowDeferredExecution
sub-parameter.
* JobExecutionSettings
in your request, you
* must also include the sub-parameters: AllowDeferredExecution
* and DataAccessRoleArn
.
* JobExecutionSettings
* modification you can choose is enabling job queueing using the
* AllowDeferredExecution
sub-parameter.
* JobExecutionSettings
in your request,
* you must also include the sub-parameters:
* AllowDeferredExecution
and
* DataAccessRoleArn
.
* JobExecutionSettings
modification you
* can choose is enabling job queueing using the
* AllowDeferredExecution
sub-parameter.
* JobExecutionSettings
in your request, you
* must also include the sub-parameters: AllowDeferredExecution
* and DataAccessRoleArn
.
* JobExecutionSettings
modification you can choose
* is enabling job queueing using the
* AllowDeferredExecution
sub-parameter.
* JobExecutionSettings
in your
* request, you must also include the sub-parameters:
* AllowDeferredExecution
and
* DataAccessRoleArn
.
* JobExecutionSettings
modification you
* can choose is enabling job queueing using the
* AllowDeferredExecution
sub-parameter.
* JobExecutionSettings
in your request, you
* must also include the sub-parameters: AllowDeferredExecution
* and DataAccessRoleArn
.
* JobExecutionSettings
modification you can choose
* is enabling job queueing using the
* AllowDeferredExecution
sub-parameter.
* JobExecutionSettings
in your
* request, you must also include the sub-parameters:
* AllowDeferredExecution
and
* DataAccessRoleArn
.
* ContentRedaction
, you must also include the sub-parameters:
* PiiEntityTypes
, RedactionOutput
, and
* RedactionType
.
* ContentRedaction
, you must also include the
* sub-parameters: PiiEntityTypes
,
* RedactionOutput
, and RedactionType
.
* ContentRedaction
, you must also include the sub-parameters:
* PiiEntityTypes
, RedactionOutput
, and
* RedactionType
.
* ContentRedaction
, you must also include the
* sub-parameters: PiiEntityTypes
,
* RedactionOutput
, and RedactionType
.
* ContentRedaction
, you must also include the sub-parameters:
* PiiEntityTypes
, RedactionOutput
, and
* RedactionType
.
* ContentRedaction
, you must also include the
* sub-parameters: PiiEntityTypes
,
* RedactionOutput
, and RedactionType
.
* IdentifyMultipleLanguages
instead.
* IdentifyLanguage
, you can optionally include
* a list of language codes, using LanguageOptions
, that you
* think may be present in your media file. Including
* LanguageOptions
restricts IdentifyLanguage
to
* only the language options that you specify, which can improve
* transcription accuracy.
* LanguageIdSettings
with the relevant
* sub-parameters (VocabularyName
,
* LanguageModelName
, and VocabularyFilterName
).
* If you include LanguageIdSettings
, also include
* LanguageOptions
.
* LanguageCode
,
* IdentifyLanguage
, or IdentifyMultipleLanguages
* in your request. If you include more than one of these parameters, your
* transcription job fails.
* IdentifyMultipleLanguages
instead.
* IdentifyLanguage
, you can optionally
* include a list of language codes, using
* LanguageOptions
, that you think may be present in
* your media file. Including LanguageOptions
restricts
* IdentifyLanguage
to only the language options that
* you specify, which can improve transcription accuracy.
* LanguageIdSettings
with the relevant sub-parameters
* (VocabularyName
, LanguageModelName
, and
* VocabularyFilterName
). If you include
* LanguageIdSettings
, also include
* LanguageOptions
.
* LanguageCode
,
* IdentifyLanguage
, or
* IdentifyMultipleLanguages
in your request. If you
* include more than one of these parameters, your transcription job
* fails.
* IdentifyMultipleLanguages
instead.
* IdentifyLanguage
, you can optionally include
* a list of language codes, using LanguageOptions
, that you
* think may be present in your media file. Including
* LanguageOptions
restricts IdentifyLanguage
to
* only the language options that you specify, which can improve
* transcription accuracy.
* LanguageIdSettings
with the relevant
* sub-parameters (VocabularyName
,
* LanguageModelName
, and VocabularyFilterName
).
* If you include LanguageIdSettings
, also include
* LanguageOptions
.
* LanguageCode
,
* IdentifyLanguage
, or IdentifyMultipleLanguages
* in your request. If you include more than one of these parameters, your
* transcription job fails.
* IdentifyMultipleLanguages
instead.
* IdentifyLanguage
, you can optionally
* include a list of language codes, using
* LanguageOptions
, that you think may be present in
* your media file. Including LanguageOptions
restricts
* IdentifyLanguage
to only the language options that
* you specify, which can improve transcription accuracy.
* LanguageIdSettings
with the relevant sub-parameters
* (VocabularyName
, LanguageModelName
, and
* VocabularyFilterName
). If you include
* LanguageIdSettings
, also include
* LanguageOptions
.
* LanguageCode
,
* IdentifyLanguage
, or
* IdentifyMultipleLanguages
in your request. If you
* include more than one of these parameters, your transcription job
* fails.
* IdentifyMultipleLanguages
instead.
* IdentifyLanguage
, you can optionally include
* a list of language codes, using LanguageOptions
, that you
* think may be present in your media file. Including
* LanguageOptions
restricts IdentifyLanguage
to
* only the language options that you specify, which can improve
* transcription accuracy.
* LanguageIdSettings
with the relevant
* sub-parameters (VocabularyName
,
* LanguageModelName
, and VocabularyFilterName
).
* If you include LanguageIdSettings
, also include
* LanguageOptions
.
* LanguageCode
,
* IdentifyLanguage
, or IdentifyMultipleLanguages
* in your request. If you include more than one of these parameters, your
* transcription job fails.
* IdentifyMultipleLanguages
* instead.
* IdentifyLanguage
, you can
* optionally include a list of language codes, using
* LanguageOptions
, that you think may be present in
* your media file. Including LanguageOptions
* restricts IdentifyLanguage
to only the language
* options that you specify, which can improve transcription
* accuracy.
* LanguageIdSettings
with the relevant
* sub-parameters (VocabularyName
,
* LanguageModelName
, and
* VocabularyFilterName
). If you include
* LanguageIdSettings
, also include
* LanguageOptions
.
* LanguageCode
,
* IdentifyLanguage
, or
* IdentifyMultipleLanguages
in your request. If you
* include more than one of these parameters, your transcription
* job fails.
* IdentifyMultipleLanguages
instead.
* IdentifyLanguage
, you can optionally include
* a list of language codes, using LanguageOptions
, that you
* think may be present in your media file. Including
* LanguageOptions
restricts IdentifyLanguage
to
* only the language options that you specify, which can improve
* transcription accuracy.
* LanguageIdSettings
with the relevant
* sub-parameters (VocabularyName
,
* LanguageModelName
, and VocabularyFilterName
).
* If you include LanguageIdSettings
, also include
* LanguageOptions
.
* LanguageCode
,
* IdentifyLanguage
, or IdentifyMultipleLanguages
* in your request. If you include more than one of these parameters, your
* transcription job fails.
* IdentifyMultipleLanguages
* instead.
* IdentifyLanguage
, you can
* optionally include a list of language codes, using
* LanguageOptions
, that you think may be present in
* your media file. Including LanguageOptions
* restricts IdentifyLanguage
to only the language
* options that you specify, which can improve transcription
* accuracy.
* LanguageIdSettings
with the relevant
* sub-parameters (VocabularyName
,
* LanguageModelName
, and
* VocabularyFilterName
). If you include
* LanguageIdSettings
, also include
* LanguageOptions
.
* LanguageCode
,
* IdentifyLanguage
, or
* IdentifyMultipleLanguages
in your request. If you
* include more than one of these parameters, your transcription
* job fails.
* IdentifyLanguage
instead.
* IdentifyMultipleLanguages
, you can optionally
* include a list of language codes, using LanguageOptions
,
* that you think may be present in your media file. Including
* LanguageOptions
restricts IdentifyLanguage
to
* only the language options that you specify, which can improve
* transcription accuracy.
* LanguageIdSettings
with the relevant sub-parameters (
* VocabularyName
and VocabularyFilterName
). If
* you include LanguageIdSettings
, also include
* LanguageOptions
.
* LanguageCode
,
* IdentifyLanguage
, or IdentifyMultipleLanguages
* in your request. If you include more than one of these parameters, your
* transcription job fails.
* IdentifyLanguage
instead.
* IdentifyMultipleLanguages
, you can
* optionally include a list of language codes, using
* LanguageOptions
, that you think may be present in
* your media file. Including LanguageOptions
restricts
* IdentifyLanguage
to only the language options that
* you specify, which can improve transcription accuracy.
* LanguageIdSettings
with the relevant sub-parameters
* (VocabularyName
and
* VocabularyFilterName
). If you include
* LanguageIdSettings
, also include
* LanguageOptions
.
* LanguageCode
,
* IdentifyLanguage
, or
* IdentifyMultipleLanguages
in your request. If you
* include more than one of these parameters, your transcription job
* fails.
* IdentifyLanguage
instead.
* IdentifyMultipleLanguages
, you can optionally
* include a list of language codes, using LanguageOptions
,
* that you think may be present in your media file. Including
* LanguageOptions
restricts IdentifyLanguage
to
* only the language options that you specify, which can improve
* transcription accuracy.
* LanguageIdSettings
with the relevant sub-parameters (
* VocabularyName
and VocabularyFilterName
). If
* you include LanguageIdSettings
, also include
* LanguageOptions
.
* LanguageCode
,
* IdentifyLanguage
, or IdentifyMultipleLanguages
* in your request. If you include more than one of these parameters, your
* transcription job fails.
* IdentifyLanguage
instead.
* IdentifyMultipleLanguages
, you can
* optionally include a list of language codes, using
* LanguageOptions
, that you think may be present in
* your media file. Including LanguageOptions
restricts
* IdentifyLanguage
to only the language options that
* you specify, which can improve transcription accuracy.
* LanguageIdSettings
with the relevant sub-parameters
* (VocabularyName
and
* VocabularyFilterName
). If you include
* LanguageIdSettings
, also include
* LanguageOptions
.
* LanguageCode
,
* IdentifyLanguage
, or
* IdentifyMultipleLanguages
in your request. If you
* include more than one of these parameters, your transcription job
* fails.
* IdentifyLanguage
instead.
* IdentifyMultipleLanguages
, you can optionally
* include a list of language codes, using LanguageOptions
,
* that you think may be present in your media file. Including
* LanguageOptions
restricts IdentifyLanguage
to
* only the language options that you specify, which can improve
* transcription accuracy.
* LanguageIdSettings
with the relevant sub-parameters (
* VocabularyName
and VocabularyFilterName
). If
* you include LanguageIdSettings
, also include
* LanguageOptions
.
* LanguageCode
,
* IdentifyLanguage
, or IdentifyMultipleLanguages
* in your request. If you include more than one of these parameters, your
* transcription job fails.
* IdentifyLanguage
instead.
* IdentifyMultipleLanguages
, you can
* optionally include a list of language codes, using
* LanguageOptions
, that you think may be present in
* your media file. Including LanguageOptions
* restricts IdentifyLanguage
to only the language
* options that you specify, which can improve transcription
* accuracy.
* LanguageIdSettings
with the
* relevant sub-parameters (VocabularyName
and
* VocabularyFilterName
). If you include
* LanguageIdSettings
, also include
* LanguageOptions
.
* LanguageCode
,
* IdentifyLanguage
, or
* IdentifyMultipleLanguages
in your request. If you
* include more than one of these parameters, your transcription
* job fails.
* IdentifyLanguage
instead.
* IdentifyMultipleLanguages
, you can optionally
* include a list of language codes, using LanguageOptions
,
* that you think may be present in your media file. Including
* LanguageOptions
restricts IdentifyLanguage
to
* only the language options that you specify, which can improve
* transcription accuracy.
* LanguageIdSettings
with the relevant sub-parameters (
* VocabularyName
and VocabularyFilterName
). If
* you include LanguageIdSettings
, also include
* LanguageOptions
.
* LanguageCode
,
* IdentifyLanguage
, or IdentifyMultipleLanguages
* in your request. If you include more than one of these parameters, your
* transcription job fails.
* IdentifyLanguage
instead.
* IdentifyMultipleLanguages
, you can
* optionally include a list of language codes, using
* LanguageOptions
, that you think may be present in
* your media file. Including LanguageOptions
* restricts IdentifyLanguage
to only the language
* options that you specify, which can improve transcription
* accuracy.
* LanguageIdSettings
with the
* relevant sub-parameters (VocabularyName
and
* VocabularyFilterName
). If you include
* LanguageIdSettings
, also include
* LanguageOptions
.
* LanguageCode
,
* IdentifyLanguage
, or
* IdentifyMultipleLanguages
in your request. If you
* include more than one of these parameters, your transcription
* job fails.
* LanguageOptions
in your request, you must
* also include IdentifyLanguage
.
* ar-SA
), your
* media file must be encoded at a sample rate of 16,000 Hz or higher.
* LanguageOptions
in your request, you
* must also include IdentifyLanguage
.
* ar-SA
), your media file must be encoded at a sample
* rate of 16,000 Hz or higher.
* LanguageOptions
in your request, you must
* also include IdentifyLanguage
.
* ar-SA
), your
* media file must be encoded at a sample rate of 16,000 Hz or higher.
* LanguageOptions
in your request,
* you must also include IdentifyLanguage
.
* ar-SA
), your media file must be encoded at a
* sample rate of 16,000 Hz or higher.
* LanguageOptions
in your request, you must
* also include IdentifyLanguage
.
* ar-SA
), your
* media file must be encoded at a sample rate of 16,000 Hz or higher.
* LanguageOptions
in your request,
* you must also include IdentifyLanguage
.
* ar-SA
), your media file must be encoded at a
* sample rate of 16,000 Hz or higher.
* LanguageOptions
in your request, you must
* also include IdentifyLanguage
.
* ar-SA
), your
* media file must be encoded at a sample rate of 16,000 Hz or higher.
* LanguageOptions
in your request,
* you must also include IdentifyLanguage
.
* ar-SA
), your media file must be encoded at a
* sample rate of 16,000 Hz or higher.
* LanguageIdSettings
with the
* relevant sub-parameters (VocabularyName
,
* LanguageModelName
, and VocabularyFilterName
).
* Note that multi-language identification (
* IdentifyMultipleLanguages
) doesn't support custom language
* models.
* LanguageIdSettings
supports two to five language codes. Each
* language code you include can have an associated custom language model,
* custom vocabulary, and custom vocabulary filter. The language codes that
* you specify must match the languages of the associated custom language
* models, custom vocabularies, and custom vocabulary filters.
* LanguageOptions
when using
* LanguageIdSettings
to ensure that the correct language
* dialect is identified. For example, if you specify a custom vocabulary
* that is in en-US
but Amazon Transcribe determines that the
* language spoken in your media is en-AU
, your custom
* vocabulary is not applied to your transcription. If you include
* LanguageOptions
and include en-US
as the only
* English language dialect, your custom vocabulary is applied to
* your transcription.
* parameter with the
LanguageModelName
* sub-parameter. If you want to include a custom vocabulary or a custom
* vocabulary filter (or both) with your request but do not want to
* use automatic language identification, use instead the
* parameter with the
VocabularyName
or
* VocabularyFilterName
(or both) sub-parameter.
* LanguageIdSettings
with the relevant sub-parameters
* (VocabularyName
, LanguageModelName
, and
* VocabularyFilterName
). Note that multi-language
* identification (IdentifyMultipleLanguages
) doesn't
* support custom language models.
* LanguageIdSettings
supports two to five language
* codes. Each language code you include can have an associated
* custom language model, custom vocabulary, and custom vocabulary
* filter. The language codes that you specify must match the
* languages of the associated custom language models, custom
* vocabularies, and custom vocabulary filters.
* LanguageOptions
* when using LanguageIdSettings
to ensure that the
* correct language dialect is identified. For example, if you
* specify a custom vocabulary that is in en-US
but
* Amazon Transcribe determines that the language spoken in your
* media is en-AU
, your custom vocabulary is not
* applied to your transcription. If you include
* LanguageOptions
and include en-US
as
* the only English language dialect, your custom vocabulary
* is applied to your transcription.
* parameter with the
LanguageModelName
* sub-parameter. If you want to include a custom vocabulary or a
* custom vocabulary filter (or both) with your request but do
* not want to use automatic language identification, use
* instead the
* parameter with the
VocabularyName
or
* VocabularyFilterName
(or both) sub-parameter.
* LanguageIdSettings
with the
* relevant sub-parameters (VocabularyName
,
* LanguageModelName
, and VocabularyFilterName
).
* Note that multi-language identification (
* IdentifyMultipleLanguages
) doesn't support custom language
* models.
* LanguageIdSettings
supports two to five language codes. Each
* language code you include can have an associated custom language model,
* custom vocabulary, and custom vocabulary filter. The language codes that
* you specify must match the languages of the associated custom language
* models, custom vocabularies, and custom vocabulary filters.
* LanguageOptions
when using
* LanguageIdSettings
to ensure that the correct language
* dialect is identified. For example, if you specify a custom vocabulary
* that is in en-US
but Amazon Transcribe determines that the
* language spoken in your media is en-AU
, your custom
* vocabulary is not applied to your transcription. If you include
* LanguageOptions
and include en-US
as the only
* English language dialect, your custom vocabulary is applied to
* your transcription.
* parameter with the
LanguageModelName
* sub-parameter. If you want to include a custom vocabulary or a custom
* vocabulary filter (or both) with your request but do not want to
* use automatic language identification, use instead the
* parameter with the
VocabularyName
or
* VocabularyFilterName
(or both) sub-parameter.
* LanguageIdSettings
with the relevant
* sub-parameters (VocabularyName
,
* LanguageModelName
, and
* VocabularyFilterName
). Note that multi-language
* identification (IdentifyMultipleLanguages
)
* doesn't support custom language models.
*