/* * Copyright 2018-2023 Amazon.com, Inc. or its affiliates. All Rights Reserved. * * Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with * the License. A copy of the License is located at * * http://aws.amazon.com/apache2.0 * * or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR * CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions * and limitations under the License. */ package com.amazonaws.services.dynamodbv2; import javax.annotation.Generated; import com.amazonaws.*; import com.amazonaws.regions.*; import com.amazonaws.services.dynamodbv2.model.*; import com.amazonaws.services.dynamodbv2.waiters.AmazonDynamoDBWaiters; /** * Interface for accessing DynamoDB. *
 * Note: Do not implement this interface directly; new methods are added to it regularly. Extend from * {@link com.amazonaws.services.dynamodbv2.AbstractAmazonDynamoDB} instead. *
*
*
* Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with * seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed * database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software * patching, or cluster scaling. *
** With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of * request traffic. You can scale up or scale down your tables' throughput capacity without downtime or performance * degradation, and use the Amazon Web Services Management Console to monitor resource utilization and performance * metrics. *
** DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle * your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is * stored on solid state disks (SSDs) and automatically replicated across multiple Availability Zones in an Amazon Web * Services Region, providing built-in high availability and data durability. *
*/ @Generated("com.amazonaws:aws-java-sdk-code-generator") public interface AmazonDynamoDB { /** * The region metadata service name for computing region endpoints. You can use this value to retrieve metadata * (such as supported regions) of the service. * * @see RegionUtils#getRegionsForService(String) */ String ENDPOINT_PREFIX = "dynamodb"; /** * Overrides the default endpoint for this client ("https://dynamodb.us-east-1.amazonaws.com"). Callers can use this * method to control which AWS region they want to work with. ** Callers can pass in just the endpoint (ex: "dynamodb.us-east-1.amazonaws.com") or a full URL, including the * protocol (ex: "https://dynamodb.us-east-1.amazonaws.com"). If the protocol is not specified here, the default * protocol from this client's {@link ClientConfiguration} will be used, which by default is HTTPS. *
 * For more information on using AWS regions with the AWS SDK for Java, and a complete list of all available * endpoints for all AWS services, see: https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-region-selection.html#region-selection-choose-endpoint *
* This method is not threadsafe. An endpoint should be configured when the client is created and before any * service requests are made. Changing it afterwards creates inevitable race conditions for any service requests in * transit or retrying. * * @param endpoint * The endpoint (ex: "dynamodb.us-east-1.amazonaws.com") or a full URL, including the protocol (ex: * "https://dynamodb.us-east-1.amazonaws.com") of the region specific AWS endpoint this client will * communicate with. * @deprecated use {@link AwsClientBuilder#setEndpointConfiguration(AwsClientBuilder.EndpointConfiguration)} for * example: * {@code builder.setEndpointConfiguration(new EndpointConfiguration(endpoint, signingRegion));} */ @Deprecated void setEndpoint(String endpoint); /** * An alternative to {@link AmazonDynamoDB#setEndpoint(String)}, sets the regional endpoint for this client's * service calls. Callers can use this method to control which AWS region they want to work with. *
* By default, all service endpoints in all regions use the https protocol. To use http instead, specify it in the * {@link ClientConfiguration} supplied at construction. *
* This method is not threadsafe. A region should be configured when the client is created and before any service * requests are made. Changing it afterwards creates inevitable race conditions for any service requests in transit * or retrying. * * @param region * The region this client will communicate with. See {@link Region#getRegion(com.amazonaws.regions.Regions)} * for accessing a given region. Must not be null and must be a region where the service is available. * * @see Region#getRegion(com.amazonaws.regions.Regions) * @see Region#createClient(Class, com.amazonaws.auth.AWSCredentialsProvider, ClientConfiguration) * @see Region#isServiceSupported(String) * @deprecated use {@link AwsClientBuilder#setRegion(String)} */ @Deprecated void setRegion(Region region); /** *
 * This operation allows you to perform batch reads or writes on data stored in DynamoDB, using PartiQL. Each read
 * statement in a BatchExecuteStatement must specify an equality condition on all key attributes. This enforces
 * that each SELECT statement in a batch returns at most a single item.
*
 * The entire batch must consist of either read statements or write statements; you cannot mix both in one batch. *
*
 * An HTTP 200 response does not mean that all statements in the BatchExecuteStatement succeeded. Error details
 * for individual statements can be found under the Error field of the BatchStatementResponse for each statement.
*
 * The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify
 * requested items by primary key.
*
 * A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem
 * returns a partial result if the response size limit is exceeded, the table's provisioned throughput is
 * exceeded, more than 1 MB per partition is requested, or an internal processing failure occurs. If a partial
 * result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the
 * operation starting with the next item to get.
*
 * If you request more than 100 items, BatchGetItem returns a ValidationException with the message "Too many
 * items requested for the BatchGetItem call."
*
 * For example, if you ask to retrieve 100 items, but each individual item is 300 KB in size, the system returns
 * 52 items (so as not to exceed the 16 MB limit). It also returns an appropriate UnprocessedKeys value so you can
 * get the next page of results. If desired, your application can include its own logic to assemble the pages of
 * results into one dataset.
*
 * If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the
 * request, then BatchGetItem returns a ProvisionedThroughputExceededException. If at least one of the items is
 * successfully processed, then BatchGetItem completes successfully, while returning the keys of the unread items
 * in UnprocessedKeys.
*
* If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we * strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation * immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If * you delay the batch operation using exponential backoff, the individual requests in the batch are much more * likely to succeed. *
 * For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide. *
*
 * By default, BatchGetItem performs eventually consistent reads on every table in the request. If you want
 * strongly consistent reads instead, you can set ConsistentRead to true for any or all tables.
*
 * In order to minimize response latency, BatchGetItem may retrieve items in parallel.
*
 * When designing your application, keep in mind that DynamoDB does not return items in any particular order. To
 * help parse the response by item, include the primary key values for the items in your request in the
 * ProjectionExpression parameter.
*
* If a requested item does not exist, it is not returned in the result. Requests for nonexistent items consume the * minimum read capacity units according to the type of read. For more information, see Working with Tables in the Amazon DynamoDB Developer Guide. *
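 *
 * As an illustrative sketch only (the "Music" table, its "Artist" key, the {@code process} callback, and the
 * {@code client} AmazonDynamoDB instance are hypothetical; production code should also apply the exponential
 * backoff described above), a caller might drain UnprocessedKeys like this:
 *
 * <pre>{@code
 * Map<String, KeysAndAttributes> requestItems = new HashMap<>();
 * requestItems.put("Music", new KeysAndAttributes().withKeys(
 *         Collections.singletonMap("Artist", new AttributeValue("No One You Know"))));
 * do {
 *     BatchGetItemResult result = client.batchGetItem(new BatchGetItemRequest().withRequestItems(requestItems));
 *     result.getResponses().forEach((table, items) -> process(table, items)); // process(...) is hypothetical
 *     requestItems = result.getUnprocessedKeys(); // retry whatever was not processed
 * } while (requestItems != null && !requestItems.isEmpty());
 * }</pre>
 *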
 *
 * @param batchGetItemRequest
 *        Represents the input of a BatchGetItem operation.
* @return Result of the BatchGetItem operation returned by the service.
* @throws ProvisionedThroughputExceededException
* Your request rate is too high. The Amazon Web Services SDKs for DynamoDB automatically retry requests
* that receive this exception. Your request is eventually successful, unless your retry queue is too large
* to finish. Reduce the frequency of requests and use exponential backoff. For more information, go to Error Retries and Exponential Backoff in the Amazon DynamoDB Developer Guide.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
 *         correctly, or its status might not be ACTIVE.
* @throws RequestLimitExceededException
* Throughput exceeds the current throughput quota for your account. Please contact Amazon Web Services Support to request a quota increase.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.BatchGetItem
* @see AWS API
* Documentation
*/
BatchGetItemResult batchGetItem(BatchGetItemRequest batchGetItemRequest);
/**
* Simplified method form for invoking the BatchGetItem operation.
*
* @see #batchGetItem(BatchGetItemRequest)
*/
BatchGetItemResult batchGetItem(java.util.Map<String, KeysAndAttributes> requestItems);

/**
 * The BatchWriteItem operation puts or deletes multiple items in one or more tables. A single call to
 * BatchWriteItem can transmit up to 16 MB of data over the network, consisting of up to 25 item put or delete
 * operations. While individual items can be up to 400 KB once stored, it's important to note that an item's
 * representation might be greater than 400 KB while being sent in DynamoDB's JSON format for the API call. For
 * more details on this distinction, see Naming Rules and Data Types.
*
 * BatchWriteItem cannot update items. If you perform a BatchWriteItem operation on an existing item, that item's
 * values will be overwritten by the operation and it will appear like it was updated. To update items, we
 * recommend you use the UpdateItem action.
*
 * The individual PutItem and DeleteItem operations specified in BatchWriteItem are atomic; however,
 * BatchWriteItem as a whole is not. If any requested operations fail because the table's provisioned throughput
 * is exceeded or an internal processing failure occurs, the failed operations are returned in the
 * UnprocessedItems response parameter. You can investigate and optionally resend the requests. Typically, you
 * would call BatchWriteItem in a loop. Each iteration would check for unprocessed items and submit a new
 * BatchWriteItem request with those unprocessed items until all items have been processed.
*
 * If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the
 * request, then BatchWriteItem returns a ProvisionedThroughputExceededException.
*
* If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we * strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation * immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If * you delay the batch operation using exponential backoff, the individual requests in the batch are much more * likely to succeed. *
** For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide. *
*
 * With BatchWriteItem, you can efficiently write or delete large amounts of data, such as from Amazon EMR, or
 * copy data from another database into DynamoDB. In order to improve performance with these large-scale
 * operations, BatchWriteItem does not behave in the same way as individual PutItem and DeleteItem calls would.
 * For example, you cannot specify conditions on individual put and delete requests, and BatchWriteItem does not
 * return deleted items in the response.
*
* If you use a programming language that supports concurrency, you can use threads to write items in parallel. Your
* application must include the necessary logic to manage the threads. With languages that don't support threading,
* you must update or delete the specified items one at a time. In both situations, BatchWriteItem
* performs the specified put and delete operations in parallel, giving you the power of the thread pool approach
* without having to introduce complexity into your application.
*
* Parallel processing reduces latency, but each specified put and delete request consumes the same number of write * capacity units whether it is processed in parallel or not. Delete operations on nonexistent items consume one * write capacity unit. *
** If one or more of the following is true, DynamoDB rejects the entire batch write operation: *
*
 * One or more tables specified in the BatchWriteItem request do not exist.
*
* Primary key attributes specified on an item in the request do not match those in the corresponding table's * primary key schema. *
*
 * You try to perform multiple operations on the same item in the same BatchWriteItem request. For example, you
 * cannot put and delete the same item in the same BatchWriteItem request.
*
* Your request contains at least two items with identical hash and range keys (which essentially is two put * operations). *
** There are more than 25 requests in the batch. *
** Any individual item in a batch exceeds 400 KB. *
** The total request size exceeds 16 MB. *
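 *
 * A minimal usage sketch (the "Thread" table, its "ForumName" key, and the {@code client} instance are
 * hypothetical; add the exponential backoff recommended above between iterations):
 *
 * <pre>{@code
 * Map<String, List<WriteRequest>> requestItems = new HashMap<>();
 * requestItems.put("Thread", Arrays.asList(
 *         new WriteRequest().withPutRequest(new PutRequest()
 *                 .withItem(Collections.singletonMap("ForumName", new AttributeValue("Amazon S3")))),
 *         new WriteRequest().withDeleteRequest(new DeleteRequest()
 *                 .withKey(Collections.singletonMap("ForumName", new AttributeValue("Amazon EC2"))))));
 * do {
 *     BatchWriteItemResult result = client.batchWriteItem(new BatchWriteItemRequest().withRequestItems(requestItems));
 *     requestItems = result.getUnprocessedItems(); // empty when everything has been written
 * } while (!requestItems.isEmpty());
 * }</pre>
 *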
 *
 * @param batchWriteItemRequest
 *        Represents the input of a BatchWriteItem operation.
* @return Result of the BatchWriteItem operation returned by the service.
* @throws ProvisionedThroughputExceededException
* Your request rate is too high. The Amazon Web Services SDKs for DynamoDB automatically retry requests
* that receive this exception. Your request is eventually successful, unless your retry queue is too large
* to finish. Reduce the frequency of requests and use exponential backoff. For more information, go to Error Retries and Exponential Backoff in the Amazon DynamoDB Developer Guide.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
 *         correctly, or its status might not be ACTIVE.
* @throws ItemCollectionSizeLimitExceededException
* An item collection is too large. This exception is only returned for tables that have one or more local
* secondary indexes.
* @throws RequestLimitExceededException
* Throughput exceeds the current throughput quota for your account. Please contact Amazon Web Services Support to request a quota increase.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.BatchWriteItem
* @see AWS API
* Documentation
*/
BatchWriteItemResult batchWriteItem(BatchWriteItemRequest batchWriteItemRequest);
/**
* Simplified method form for invoking the BatchWriteItem operation.
*
* @see #batchWriteItem(BatchWriteItemRequest)
*/
BatchWriteItemResult batchWriteItem(java.util.Map<String, java.util.List<WriteRequest>> requestItems);

/**
 * Creates a backup for an existing table.
 *
** Each time you create an on-demand backup, the entire table data is backed up. There is no limit to the number of * on-demand backups that can be taken. *
** When you create an on-demand backup, a time marker of the request is cataloged, and the backup is created * asynchronously, by applying all changes until the time of the request to the last full table snapshot. Backup * requests are processed instantaneously and become available for restore within minutes. *
*
 * You can call CreateBackup at a maximum rate of 50 times per second.
*
* All backups in DynamoDB work without consuming any provisioned throughput on the table. *
** If you submit a backup request on 2018-12-14 at 14:25:00, the backup is guaranteed to contain all data committed * to the table up to 14:24:00, and data committed after 14:26:00 will not be. The backup might contain data * modifications made between 14:24:00 and 14:26:00. On-demand backup does not support causal consistency. *
** Along with data, the following are also included on the backups: *
** Global secondary indexes (GSIs) *
** Local secondary indexes (LSIs) *
** Streams *
** Provisioned read and write capacity *
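 *
 * A brief sketch of taking an on-demand backup (the table and backup names, and {@code client}, are hypothetical):
 *
 * <pre>{@code
 * CreateBackupResult result = client.createBackup(new CreateBackupRequest()
 *         .withTableName("Music")
 *         .withBackupName("Music-backup-2018-12-14"));
 * String backupArn = result.getBackupDetails().getBackupArn();
 * }</pre>
 *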
 *
 * @param createBackupRequest
 * @return Result of the CreateBackup operation returned by the service.
 * @throws TableNotFoundException
 *         A source table with the name TableName does not currently exist within the subscriber's
 *         account or the subscriber is operating in the wrong Amazon Web Services Region.
* @throws TableInUseException
* A target table with the specified name is either being created or deleted.
* @throws ContinuousBackupsUnavailableException
* Backups have not yet been enabled for this table.
* @throws BackupInUseException
* There is another ongoing conflicting backup control plane operation on the table. The backup is either
* being created, deleted or restored to a table.
* @throws LimitExceededException
* There is no limit to the number of daily on-demand backups that can be taken.
*
 * For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include
 * CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
*
* When you are creating a table with one or more secondary indexes, you can have up to 250 such requests * running at a time. However, if the table or index specifications are complex, then DynamoDB might * temporarily reduce the number of concurrent operations. *
** When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. *
** There is a soft account quota of 2,500 tables. *
** GetRecords was called with a value of more than 1000 for the limit request parameter. *
** More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may * result in request throttling. * @throws InternalServerErrorException * An error occurred on the server side. * @sample AmazonDynamoDB.CreateBackup * @see AWS API * Documentation */ CreateBackupResult createBackup(CreateBackupRequest createBackupRequest); /** *
* Creates a global table from an existing table. A global table creates a replication relationship between two or * more DynamoDB tables with the same table name in the provided Regions. *
** This operation only applies to Version 2017.11.29 * (Legacy) of global tables. We recommend using Version 2019.11.21 * (Current) when creating new global tables, as it provides greater flexibility, higher efficiency and consumes * less write capacity than 2017.11.29 (Legacy). To determine which version you are using, see Determining the version. To update existing global tables from version 2017.11.29 (Legacy) to version * 2019.11.21 (Current), see Updating * global tables. *
** If you want to add a new replica table to a global table, each of the following conditions must be true: *
** The table must have the same primary key as all of the other replicas. *
** The table must have the same name as all of the other replicas. *
** The table must have DynamoDB Streams enabled, with the stream containing both the new and the old images of the * item. *
** None of the replica tables in the global table can contain any data. *
** If global secondary indexes are specified, then the following conditions must also be met: *
** The global secondary indexes must have the same name. *
** The global secondary indexes must have the same hash key and sort key (if present). *
** If local secondary indexes are specified, then the following conditions must also be met: *
** The local secondary indexes must have the same name. *
** The local secondary indexes must have the same hash key and sort key (if present). *
** Write capacity settings should be set consistently across your replica tables and secondary indexes. DynamoDB * strongly recommends enabling auto scaling to manage the write capacity settings for all of your global tables * replicas and indexes. *
** If you prefer to manage write capacity settings manually, you should provision equal replicated write capacity * units to your replica tables. You should also provision equal replicated write capacity units to matching * secondary indexes across your global table. *
*
 * For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include
 * CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
*
* When you are creating a table with one or more secondary indexes, you can have up to 250 such requests * running at a time. However, if the table or index specifications are complex, then DynamoDB might * temporarily reduce the number of concurrent operations. *
** When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. *
** There is a soft account quota of 2,500 tables. *
** GetRecords was called with a value of more than 1000 for the limit request parameter. *
*
* More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may
* result in request throttling.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @throws GlobalTableAlreadyExistsException
* The specified global table already exists.
* @throws TableNotFoundException
 *         A source table with the name TableName does not currently exist within the subscriber's
* account or the subscriber is operating in the wrong Amazon Web Services Region.
* @sample AmazonDynamoDB.CreateGlobalTable
* @see AWS API
* Documentation
*/
CreateGlobalTableResult createGlobalTable(CreateGlobalTableRequest createGlobalTableRequest);
/**
*
 * The CreateTable operation adds a new table to your account. In an Amazon Web Services account, table names must
 * be unique within each Region. That is, you can have two tables with the same name if you create the tables in
 * different Regions.
*
 * CreateTable is an asynchronous operation. Upon receiving a CreateTable request, DynamoDB immediately returns a
 * response with a TableStatus of CREATING. After the table is created, DynamoDB sets the TableStatus to ACTIVE.
 * You can perform read and write operations only on an ACTIVE table.
*
 * You can optionally define secondary indexes on the new table, as part of the CreateTable operation. If you want
 * to create multiple tables with secondary indexes on them, you must create the tables sequentially. Only one
 * table with secondary indexes can be in the CREATING state at any given time.
*
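 * As a sketch, creating a simple table with a partition key and waiting for it to become ACTIVE (the names and
 * throughput values are illustrative assumptions, not prescriptive):
 *
 * <pre>{@code
 * client.createTable(new CreateTableRequest()
 *         .withTableName("Music")
 *         .withAttributeDefinitions(new AttributeDefinition("Artist", ScalarAttributeType.S))
 *         .withKeySchema(new KeySchemaElement("Artist", KeyType.HASH))
 *         .withProvisionedThroughput(new ProvisionedThroughput(5L, 5L)));
 * client.waiters().tableExists().run(
 *         new WaiterParameters<>(new DescribeTableRequest("Music"))); // blocks until the table is ACTIVE
 * }</pre>
 *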
 * You can use the DescribeTable action to check the table status.
 *
 * @param createTableRequest
 *        Represents the input of a CreateTable operation.
* @return Result of the CreateTable operation returned by the service.
* @throws ResourceInUseException
* The operation conflicts with the resource's availability. For example, you attempted to recreate an
 *         existing table, or tried to delete a table currently in the CREATING state.
* @throws LimitExceededException
* There is no limit to the number of daily on-demand backups that can be taken.
*
 * For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include
 * CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
*
* When you are creating a table with one or more secondary indexes, you can have up to 250 such requests * running at a time. However, if the table or index specifications are complex, then DynamoDB might * temporarily reduce the number of concurrent operations. *
** When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. *
** There is a soft account quota of 2,500 tables. *
** GetRecords was called with a value of more than 1000 for the limit request parameter. *
*
* More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may
* result in request throttling.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.CreateTable
* @see AWS API
* Documentation
*/
CreateTableResult createTable(CreateTableRequest createTableRequest);
/**
* Simplified method form for invoking the CreateTable operation.
*
* @see #createTable(CreateTableRequest)
*/
CreateTableResult createTable(java.util.List<AttributeDefinition> attributeDefinitions, String tableName,
        java.util.List<KeySchemaElement> keySchema, ProvisionedThroughput provisionedThroughput);

/**
* Deletes an existing backup of a table.
*
 * You can call DeleteBackup at a maximum rate of 10 times per second.
*
 * For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include
 * CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
*
* When you are creating a table with one or more secondary indexes, you can have up to 250 such requests * running at a time. However, if the table or index specifications are complex, then DynamoDB might * temporarily reduce the number of concurrent operations. *
** When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. *
** There is a soft account quota of 2,500 tables. *
** GetRecords was called with a value of more than 1000 for the limit request parameter. *
** More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may * result in request throttling. * @throws InternalServerErrorException * An error occurred on the server side. * @sample AmazonDynamoDB.DeleteBackup * @see AWS API * Documentation */ DeleteBackupResult deleteBackup(DeleteBackupRequest deleteBackupRequest); /** *
* Deletes a single item in a table by primary key. You can perform a conditional delete operation that deletes the * item if it exists, or if it has an expected attribute value. *
*
 * In addition to deleting an item, you can also return the item's attribute values in the same operation, using
 * the ReturnValues parameter.
*
 * Unless you specify conditions, the DeleteItem operation is idempotent; running it multiple times on the same
 * item or attribute does not result in an error response.
*
* Conditional deletes are useful for deleting items only if specific conditions are met. If those conditions are * met, DynamoDB performs the delete. Otherwise, the item is not deleted. *
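 *
 * For example (a sketch; the table, key, and condition are hypothetical), a conditional delete that also returns
 * the deleted attributes:
 *
 * <pre>{@code
 * DeleteItemResult result = client.deleteItem(new DeleteItemRequest()
 *         .withTableName("Music")
 *         .withKey(Collections.singletonMap("Artist", new AttributeValue("No One You Know")))
 *         .withConditionExpression("attribute_exists(AlbumTitle)")
 *         .withReturnValues(ReturnValue.ALL_OLD));
 * Map<String, AttributeValue> deleted = result.getAttributes(); // the item as it was before deletion
 * }</pre>
 *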
 *
 * @param deleteItemRequest
 *        Represents the input of a DeleteItem operation.
* @return Result of the DeleteItem operation returned by the service.
* @throws ConditionalCheckFailedException
* A condition specified in the operation could not be evaluated.
* @throws ProvisionedThroughputExceededException
* Your request rate is too high. The Amazon Web Services SDKs for DynamoDB automatically retry requests
* that receive this exception. Your request is eventually successful, unless your retry queue is too large
* to finish. Reduce the frequency of requests and use exponential backoff. For more information, go to Error Retries and Exponential Backoff in the Amazon DynamoDB Developer Guide.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
 *         correctly, or its status might not be ACTIVE.
* @throws ItemCollectionSizeLimitExceededException
* An item collection is too large. This exception is only returned for tables that have one or more local
* secondary indexes.
* @throws TransactionConflictException
* Operation was rejected because there is an ongoing transaction for the item.
* @throws RequestLimitExceededException
* Throughput exceeds the current throughput quota for your account. Please contact Amazon Web Services Support to request a quota increase.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.DeleteItem
* @see AWS API
* Documentation
*/
DeleteItemResult deleteItem(DeleteItemRequest deleteItemRequest);
/**
* Simplified method form for invoking the DeleteItem operation.
*
* @see #deleteItem(DeleteItemRequest)
*/
DeleteItemResult deleteItem(String tableName, java.util.Map<String, AttributeValue> key);

/**
 * The DeleteTable operation deletes a table and all of its items. After a DeleteTable request, the specified
 * table is in the DELETING state until DynamoDB completes the deletion. If the table is in the ACTIVE state, you
 * can delete it. If a table is in CREATING or UPDATING states, then DynamoDB returns a ResourceInUseException. If
 * the specified table does not exist, DynamoDB returns a ResourceNotFoundException. If the table is already in
 * the DELETING state, no error is returned.
*
* This operation only applies to Version 2019.11.21 * (Current) of global tables. *
*
 * DynamoDB might continue to accept data read and write operations, such as GetItem and PutItem, on a table in
 * the DELETING state until the table deletion is complete.
*
* When you delete a table, any indexes on that table are also deleted. *
*
 * If you have DynamoDB Streams enabled on the table, then the corresponding stream on that table goes into the
 * DISABLED state, and the stream is automatically deleted after 24 hours.
*
 * Use the DescribeTable action to check the status of the table.
 *
 * @param deleteTableRequest
 *        Represents the input of a DeleteTable operation.
* @return Result of the DeleteTable operation returned by the service.
* @throws ResourceInUseException
* The operation conflicts with the resource's availability. For example, you attempted to recreate an
 *         existing table, or tried to delete a table currently in the CREATING state.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
 *         correctly, or its status might not be ACTIVE.
* @throws LimitExceededException
* There is no limit to the number of daily on-demand backups that can be taken.
*
 * For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include
 * CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
*
* When you are creating a table with one or more secondary indexes, you can have up to 250 such requests * running at a time. However, if the table or index specifications are complex, then DynamoDB might * temporarily reduce the number of concurrent operations. *
** When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. *
** There is a soft account quota of 2,500 tables. *
** GetRecords was called with a value of more than 1000 for the limit request parameter. *
** More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may * result in request throttling. * @throws InternalServerErrorException * An error occurred on the server side. * @sample AmazonDynamoDB.DeleteTable * @see AWS API * Documentation */ DeleteTableResult deleteTable(DeleteTableRequest deleteTableRequest); /** * Simplified method form for invoking the DeleteTable operation. * * @see #deleteTable(DeleteTableRequest) */ DeleteTableResult deleteTable(String tableName); /** *
* Describes an existing backup of a table. *
*
 * You can call DescribeBackup at a maximum rate of 10 times per second.
 *
 * @param describeBackupRequest
 * @return Result of the DescribeBackup operation returned by the service.
 * @throws BackupNotFoundException
 *         Backup not found for the given BackupARN.
 * @throws InternalServerErrorException
 *         An error occurred on the server side.
 * @sample AmazonDynamoDB.DescribeBackup
 */
DescribeBackupResult describeBackup(DescribeBackupRequest describeBackupRequest);

/**
 * Checks the status of continuous backups and point in time recovery on the specified table. Continuous backups
 * are ENABLED on all tables at table creation. If point in time recovery is enabled, PointInTimeRecoveryStatus
 * will be set to ENABLED.
*
 * After continuous backups and point in time recovery are enabled, you can restore to any point in time within
 * EarliestRestorableDateTime and LatestRestorableDateTime.
*
 * LatestRestorableDateTime is typically 5 minutes before the current time. You can restore your table to any
 * point in time during the last 35 days.
*
 * You can call DescribeContinuousBackups at a maximum rate of 10 times per second.
 *
 * @param describeContinuousBackupsRequest
 * @return Result of the DescribeContinuousBackups operation returned by the service.
 * @throws TableNotFoundException
 *         A source table with the name TableName does not currently exist within the subscriber's
 *         account or the subscriber is operating in the wrong Amazon Web Services Region.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.DescribeContinuousBackups
* @see AWS API Documentation
*/
DescribeContinuousBackupsResult describeContinuousBackups(DescribeContinuousBackupsRequest describeContinuousBackupsRequest);
/**
* * Returns information about contributor insights for a given table or global secondary index. *
 *
 * @param describeContributorInsightsRequest
 * @return Result of the DescribeContributorInsights operation returned by the service.
 * @throws ResourceNotFoundException
 *         The operation tried to access a nonexistent table or index. The resource might not be specified
 *         correctly, or its status might not be ACTIVE.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.DescribeContributorInsights
* @see AWS API Documentation
*/
DescribeContributorInsightsResult describeContributorInsights(DescribeContributorInsightsRequest describeContributorInsightsRequest);
/**
* * Returns the regional endpoint information. For more information on policy permissions, please see Internetwork traffic privacy. *
* * @param describeEndpointsRequest * @return Result of the DescribeEndpoints operation returned by the service. * @sample AmazonDynamoDB.DescribeEndpoints * @see AWS API * Documentation */ DescribeEndpointsResult describeEndpoints(DescribeEndpointsRequest describeEndpointsRequest); /** ** Describes an existing table export. *
* * @param describeExportRequest * @return Result of the DescribeExport operation returned by the service. * @throws ExportNotFoundException * The specified export was not found. * @throws LimitExceededException * There is no limit to the number of daily on-demand backups that can be taken. *
 * For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include
 * CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
*
* When you are creating a table with one or more secondary indexes, you can have up to 250 such requests * running at a time. However, if the table or index specifications are complex, then DynamoDB might * temporarily reduce the number of concurrent operations. *
** When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. *
** There is a soft account quota of 2,500 tables. *
** GetRecords was called with a value of more than 1000 for the limit request parameter. *
** More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may * result in request throttling. * @throws InternalServerErrorException * An error occurred on the server side. * @sample AmazonDynamoDB.DescribeExport * @see AWS API * Documentation */ DescribeExportResult describeExport(DescribeExportRequest describeExportRequest); /** *
* Returns information about the specified global table. *
** This operation only applies to Version 2017.11.29 * (Legacy) of global tables. We recommend using Version 2019.11.21 * (Current) when creating new global tables, as it provides greater flexibility, higher efficiency and consumes * less write capacity than 2017.11.29 (Legacy). To determine which version you are using, see Determining the version. To update existing global tables from version 2017.11.29 (Legacy) to version * 2019.11.21 (Current), see Updating * global tables. *
** Describes Region-specific settings for a global table. *
** This operation only applies to Version 2017.11.29 * (Legacy) of global tables. We recommend using Version 2019.11.21 * (Current) when creating new global tables, as it provides greater flexibility, higher efficiency and consumes * less write capacity than 2017.11.29 (Legacy). To determine which version you are using, see Determining the version. To update existing global tables from version 2017.11.29 (Legacy) to version * 2019.11.21 (Current), see Updating * global tables. *
** Represents the properties of the import. *
* * @param describeImportRequest * @return Result of the DescribeImport operation returned by the service. * @throws ImportNotFoundException * The specified import was not found. * @sample AmazonDynamoDB.DescribeImport * @see AWS API * Documentation */ DescribeImportResult describeImport(DescribeImportRequest describeImportRequest); /** ** Returns information about the status of Kinesis streaming. *
 *
 * @param describeKinesisStreamingDestinationRequest
 * @return Result of the DescribeKinesisStreamingDestination operation returned by the service.
 * @throws ResourceNotFoundException
 *         The operation tried to access a nonexistent table or index. The resource might not be specified
 *         correctly, or its status might not be ACTIVE.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.DescribeKinesisStreamingDestination
* @see AWS API Documentation
*/
DescribeKinesisStreamingDestinationResult describeKinesisStreamingDestination(
DescribeKinesisStreamingDestinationRequest describeKinesisStreamingDestinationRequest);
/**
* * Returns the current provisioned-capacity quotas for your Amazon Web Services account in a Region, both for the * Region as a whole and for any one DynamoDB table that you create there. *
** When you establish an Amazon Web Services account, the account has initial quotas on the maximum read capacity * units and write capacity units that you can provision across all of your DynamoDB tables in a given Region. Also, * there are per-table quotas that apply when you create a table there. For more information, see Service, Account, and Table * Quotas page in the Amazon DynamoDB Developer Guide. *
*
* Although you can increase these quotas by filing a case at Amazon Web Services Support Center, obtaining the
 * increase is not instantaneous. The DescribeLimits action lets you write code to compare the capacity
* you are currently using to those quotas imposed by your account so that you have enough time to apply for an
* increase before you hit a quota.
*
* For example, you could use one of the Amazon Web Services SDKs to do the following: *
*
 * Call DescribeLimits for a particular Region to obtain your current account quotas on provisioned capacity
 * there.
*
* Create a variable to hold the aggregate read capacity units provisioned for all your tables in that Region, and * one to hold the aggregate write capacity units. Zero them both. *
*
 * Call ListTables to obtain a list of all your DynamoDB tables.
*
 * For each table name listed by ListTables, do the following:
*
 * Call DescribeTable with the table name.
*
 * Use the data returned by DescribeTable to add the read capacity units and write capacity units provisioned for
 * the table itself to your variables.
*
* If the table has one or more global secondary indexes (GSIs), loop over these GSIs and add their provisioned * capacity values to your variables as well. *
*
 * Report the account quotas for that Region returned by DescribeLimits, along with the total current provisioned
 * capacity levels you have calculated.
*
* This will let you see whether you are getting close to your account-level quotas. *
** The per-table quotas apply only when you are creating a new table. They restrict the sum of the provisioned * capacity of the new table itself and all its global secondary indexes. *
** For existing tables and their GSIs, DynamoDB doesn't let you increase provisioned capacity extremely rapidly, but * the only quota that applies is that the aggregate provisioned capacity over all your tables and GSIs cannot * exceed either of the per-account quotas. *
*
 * DescribeLimits should only be called periodically. You can expect throttling errors if you call it more than
 * once in a minute.
*
 * The DescribeLimits Request element has no content.
*
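 * A compact sketch of the procedure above (assumes a {@code client} instance; ignores ListTables pagination and
 * GSI capacity for brevity):
 *
 * <pre>{@code
 * DescribeLimitsResult limits = client.describeLimits(new DescribeLimitsRequest());
 * long readProvisioned = 0, writeProvisioned = 0;
 * for (String name : client.listTables().getTableNames()) {
 *     TableDescription table = client.describeTable(name).getTable();
 *     if (table.getProvisionedThroughput() != null) {
 *         readProvisioned += table.getProvisionedThroughput().getReadCapacityUnits();
 *         writeProvisioned += table.getProvisionedThroughput().getWriteCapacityUnits();
 *     }
 * }
 * System.out.printf("reads: %d of %d, writes: %d of %d%n", readProvisioned,
 *         limits.getAccountMaxReadCapacityUnits(), writeProvisioned, limits.getAccountMaxWriteCapacityUnits());
 * }</pre>
 *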
 * @param describeLimitsRequest
 *        Represents the input of a DescribeLimits operation. Has no content.
* @return Result of the DescribeLimits operation returned by the service.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.DescribeLimits
* @see AWS API
* Documentation
*/
DescribeLimitsResult describeLimits(DescribeLimitsRequest describeLimitsRequest);
/**
* * Returns information about the table, including the current status of the table, when it was created, the primary * key schema, and any indexes on the table. *
** This operation only applies to Version 2019.11.21 * (Current) of global tables. *
*
 * If you issue a DescribeTable request immediately after a CreateTable request, DynamoDB might return a
 * ResourceNotFoundException. This is because DescribeTable uses an eventually consistent query, and the metadata
 * for your table might not be available at that moment. Wait for a few seconds, and then try the DescribeTable
 * request again.
*
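 * For example (the table name and {@code client} are hypothetical):
 *
 * <pre>{@code
 * TableDescription table = client.describeTable("Music").getTable();
 * String status = table.getTableStatus(); // e.g. "CREATING" or "ACTIVE"
 * }</pre>
 *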
 * @param describeTableRequest
 *        Represents the input of a DescribeTable operation.
* @return Result of the DescribeTable operation returned by the service.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
 *         correctly, or its status might not be ACTIVE.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.DescribeTable
* @see AWS API
* Documentation
*/
DescribeTableResult describeTable(DescribeTableRequest describeTableRequest);
/**
* Simplified method form for invoking the DescribeTable operation.
*
* @see #describeTable(DescribeTableRequest)
*/
DescribeTableResult describeTable(String tableName);
/**
* * Describes auto scaling settings across replicas of the global table at once. *
** This operation only applies to Version 2019.11.21 * (Current) of global tables. *
 *
 * @param describeTableReplicaAutoScalingRequest
 * @return Result of the DescribeTableReplicaAutoScaling operation returned by the service.
 * @throws ResourceNotFoundException
 *         The operation tried to access a nonexistent table or index. The resource might not be specified
 *         correctly, or its status might not be ACTIVE.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.DescribeTableReplicaAutoScaling
* @see AWS API Documentation
*/
DescribeTableReplicaAutoScalingResult describeTableReplicaAutoScaling(DescribeTableReplicaAutoScalingRequest describeTableReplicaAutoScalingRequest);
/**
* * Gives a description of the Time to Live (TTL) status on the specified table. *
 *
 * @param describeTimeToLiveRequest
 * @return Result of the DescribeTimeToLive operation returned by the service.
 * @throws ResourceNotFoundException
 *         The operation tried to access a nonexistent table or index. The resource might not be specified
 *         correctly, or its status might not be ACTIVE.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.DescribeTimeToLive
* @see AWS
* API Documentation
*/
DescribeTimeToLiveResult describeTimeToLive(DescribeTimeToLiveRequest describeTimeToLiveRequest);
/**
* * Stops replication from the DynamoDB table to the Kinesis data stream. This is done without deleting either of the * resources. *
* * @param disableKinesisStreamingDestinationRequest * @return Result of the DisableKinesisStreamingDestination operation returned by the service. * @throws InternalServerErrorException * An error occurred on the server side. * @throws LimitExceededException * There is no limit to the number of daily on-demand backups that can be taken. *
 * For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include
 * CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
*
* When you are creating a table with one or more secondary indexes, you can have up to 250 such requests * running at a time. However, if the table or index specifications are complex, then DynamoDB might * temporarily reduce the number of concurrent operations. *
** When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. *
** There is a soft account quota of 2,500 tables. *
** GetRecords was called with a value of more than 1000 for the limit request parameter. *
*
* More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may
* result in request throttling.
* @throws ResourceInUseException
* The operation conflicts with the resource's availability. For example, you attempted to recreate an
 *         existing table, or tried to delete a table currently in the CREATING state.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
 *         correctly, or its status might not be ACTIVE.
* @sample AmazonDynamoDB.DisableKinesisStreamingDestination
* @see AWS API Documentation
*/
DisableKinesisStreamingDestinationResult disableKinesisStreamingDestination(
DisableKinesisStreamingDestinationRequest disableKinesisStreamingDestinationRequest);
/**
*
* Starts table data replication to the specified Kinesis data stream at a timestamp chosen during the enable * workflow. If this operation doesn't return results immediately, use DescribeKinesisStreamingDestination to check * if streaming to the Kinesis data stream is ACTIVE. *
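 *
 * A usage sketch (the table name, stream ARN, and {@code client} are placeholders):
 *
 * <pre>{@code
 * client.enableKinesisStreamingDestination(new EnableKinesisStreamingDestinationRequest()
 *         .withTableName("Music")
 *         .withStreamArn("arn:aws:kinesis:us-east-1:123456789012:stream/music-stream"));
 * // poll until streaming is ACTIVE
 * client.describeKinesisStreamingDestination(
 *         new DescribeKinesisStreamingDestinationRequest().withTableName("Music"));
 * }</pre>
 *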
* * @param enableKinesisStreamingDestinationRequest * @return Result of the EnableKinesisStreamingDestination operation returned by the service. * @throws InternalServerErrorException * An error occurred on the server side. * @throws LimitExceededException * There is no limit to the number of daily on-demand backups that can be taken. *
 * For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include
 * CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
*
* When you are creating a table with one or more secondary indexes, you can have up to 250 such requests * running at a time. However, if the table or index specifications are complex, then DynamoDB might * temporarily reduce the number of concurrent operations. *
** When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. *
** There is a soft account quota of 2,500 tables. *
** GetRecords was called with a value of more than 1000 for the limit request parameter. *
*
* More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may
* result in request throttling.
* @throws ResourceInUseException
* The operation conflicts with the resource's availability. For example, you attempted to recreate an
 *         existing table, or tried to delete a table currently in the CREATING state.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
 *         correctly, or its status might not be ACTIVE.
* @sample AmazonDynamoDB.EnableKinesisStreamingDestination
* @see AWS API Documentation
*/
EnableKinesisStreamingDestinationResult enableKinesisStreamingDestination(EnableKinesisStreamingDestinationRequest enableKinesisStreamingDestinationRequest);
/**
*
* This operation allows you to perform reads and singleton writes on data stored in DynamoDB, using PartiQL. *
*
 * For PartiQL reads (SELECT statement), if the total number of processed items exceeds the maximum dataset size
 * limit of 1 MB, the read stops and results are returned to the user as a LastEvaluatedKey value to continue the
 * read in a subsequent operation. If the filter criteria in the WHERE clause does not match any data, the read
 * will return an empty result set.
*
 * A single SELECT statement response can return up to the maximum number of items (if using the Limit parameter)
 * or a maximum of 1 MB of data (and then apply any filtering to the results using a WHERE clause). If
 * LastEvaluatedKey is present in the response, you need to paginate the result set. If NextToken is present, you
 * need to paginate the result set and include NextToken.
*
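 * An illustrative PartiQL read (the table, attribute, and {@code client} are hypothetical):
 *
 * <pre>{@code
 * ExecuteStatementResult result = client.executeStatement(new ExecuteStatementRequest()
 *         .withStatement("SELECT * FROM Music WHERE Artist = ?")
 *         .withParameters(new AttributeValue("No One You Know")));
 * List<Map<String, AttributeValue>> items = result.getItems();
 * }</pre>
 *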
 * @param executeStatementRequest
 * @return Result of the ExecuteStatement operation returned by the service.
 * @throws ResourceNotFoundException
 *         The operation tried to access a nonexistent table or index. The resource might not be specified
 *         correctly, or its status might not be ACTIVE.
* @throws ItemCollectionSizeLimitExceededException
* An item collection is too large. This exception is only returned for tables that have one or more local
* secondary indexes.
* @throws TransactionConflictException
* Operation was rejected because there is an ongoing transaction for the item.
* @throws RequestLimitExceededException
* Throughput exceeds the current throughput quota for your account. Please contact Amazon Web Services Support to request a quota increase.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @throws DuplicateItemException
* There was an attempt to insert an item with the same primary key as an item that already exists in the
* DynamoDB table.
* @sample AmazonDynamoDB.ExecuteStatement
* @see AWS API
* Documentation
*/
ExecuteStatementResult executeStatement(ExecuteStatementRequest executeStatementRequest);
/**
* * This operation allows you to perform transactional reads or writes on data stored in DynamoDB, using PartiQL. *
*
 * The entire transaction must consist of either read statements or write statements; you cannot mix both in one
 * transaction. The EXISTS function is an exception and can be used to check the condition of specific attributes
 * of the item in a similar manner to ConditionCheck in the TransactWriteItems API.
*
 * @param executeTransactionRequest
 * @return Result of the ExecuteTransaction operation returned by the service.
 * @throws ResourceNotFoundException
 *         The operation tried to access a nonexistent table or index. The resource might not be specified
 *         correctly, or its status might not be ACTIVE.
* @throws TransactionCanceledException
* The entire transaction request was canceled.
*
 * DynamoDB cancels a TransactWriteItems request under the following circumstances:
*
* A condition in one of the condition expressions is not met. *
*
 * A table in the TransactWriteItems request is in a different account or region.
*
 * More than one action in the TransactWriteItems operation targets the same item.
*
* There is insufficient provisioned capacity for the transaction to be completed. *
** An item size becomes too large (larger than 400 KB), or a local secondary index (LSI) becomes too large, * or a similar validation error occurs because of changes made by the transaction. *
** There is a user error, such as an invalid data format. *
*
 * DynamoDB cancels a TransactGetItems request under the following circumstances:
*
 * There is an ongoing TransactGetItems operation that conflicts with a concurrent PutItem, UpdateItem,
 * DeleteItem or TransactWriteItems request. In this case, the TransactGetItems operation fails with a
 * TransactionCanceledException.
*
 * A table in the TransactGetItems request is in a different account or region.
*
* There is insufficient provisioned capacity for the transaction to be completed. *
** There is a user error, such as an invalid data format. *
*
 *         If using Java, DynamoDB lists the cancellation reasons on the CancellationReasons property. This
 *         property is not set for other languages. Transaction cancellation reasons are ordered in the order of
 *         requested items; if an item has no error, it will have None code and Null message.
*
* Cancellation reason codes and possible error messages: *
** No Errors: *
*
* Code: None
*
* Message: null
*
* Conditional Check Failed: *
*
* Code: ConditionalCheckFailed
*
* Message: The conditional request failed. *
** Item Collection Size Limit Exceeded: *
*
* Code: ItemCollectionSizeLimitExceeded
*
* Message: Collection size exceeded. *
** Transaction Conflict: *
*
* Code: TransactionConflict
*
* Message: Transaction is ongoing for the item. *
** Provisioned Throughput Exceeded: *
*
* Code: ProvisionedThroughputExceeded
*
* Messages: *
** The level of configured provisioned throughput for the table was exceeded. Consider increasing your * provisioning level with the UpdateTable API. *
** This Message is received when provisioned throughput is exceeded is on a provisioned DynamoDB table. *
** The level of configured provisioned throughput for one or more global secondary indexes of the table was * exceeded. Consider increasing your provisioning level for the under-provisioned global secondary indexes * with the UpdateTable API. *
** This message is returned when provisioned throughput is exceeded is on a provisioned GSI. *
** Throttling Error: *
*
* Code: ThrottlingError
*
* Messages: *
** Throughput exceeds the current capacity of your table or index. DynamoDB is automatically scaling your * table or index so please try again shortly. If exceptions persist, check if you have a hot key: * https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-design.html. *
** This message is returned when writes get throttled on an On-Demand table as DynamoDB is automatically * scaling the table. *
** Throughput exceeds the current capacity for one or more global secondary indexes. DynamoDB is * automatically scaling your index so please try again shortly. *
** This message is returned when writes get throttled on an On-Demand GSI as DynamoDB is automatically * scaling the GSI. *
** Validation Error: *
*
* Code: ValidationError
*
* Messages: *
** One or more parameter values were invalid. *
** The update expression attempted to update the secondary index key beyond allowed size limits. *
** The update expression attempted to update the secondary index key to unsupported type. *
** An operand in the update expression has an incorrect data type. *
** Item size to update has exceeded the maximum allowed size. *
** Number overflow. Attempting to store a number with magnitude larger than supported range. *
** Type mismatch for attribute to update. *
** Nesting Levels have exceeded supported limits. *
** The document path provided in the update expression is invalid for update. *
** The provided expression refers to an attribute that does not exist in the item. *
** Recommended Settings *
*
 *         This is a general recommendation for handling the TransactionInProgressException. These settings help
 *         ensure that the client retries will trigger completion of the ongoing TransactWriteItems request.
*
 *         Set clientExecutionTimeout to a value that allows at least one retry to be processed after 5 seconds
 *         have elapsed since the first attempt for the TransactWriteItems operation.
*
 *         Set socketTimeout to a value a little lower than the requestTimeout setting.
*
 *         requestTimeout should be set based on the time taken for the individual retries of a single HTTP
 *         request for your use case, but setting it to 1 second or higher should work well to reduce chances of
 *         retries and TransactionInProgressException errors.
*
* Use exponential backoff when retrying and tune backoff if needed. *
** Assuming default retry policy, example timeout settings based on the guidelines above are as follows: *
** Example timeline: *
** 0-1000 first attempt *
** 1000-1500 first sleep/delay (default retry policy uses 500 ms as base delay for 4xx errors) *
** 1500-2500 second attempt *
** 2500-3500 second sleep/delay (500 * 2, exponential backoff) *
** 3500-4500 third attempt *
** 4500-6500 third sleep/delay (500 * 2^2) *
** 6500-7500 fourth attempt (this can trigger inline recovery since 5 seconds have elapsed since the first * attempt reached TC) *
** Exports table data to an S3 bucket. The table must have point in time recovery enabled, and you can export data * from any time within the point in time recovery window. *
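 *
 * A sketch of requesting an export (the ARNs, bucket name, and {@code client} are placeholders):
 *
 * <pre>{@code
 * ExportTableToPointInTimeResult result = client.exportTableToPointInTime(new ExportTableToPointInTimeRequest()
 *         .withTableArn("arn:aws:dynamodb:us-east-1:123456789012:table/Music")
 *         .withS3Bucket("my-export-bucket")
 *         .withExportFormat(ExportFormat.DYNAMODB_JSON));
 * String exportArn = result.getExportDescription().getExportArn();
 * }</pre>
 *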
 *
 * @param exportTableToPointInTimeRequest
 * @return Result of the ExportTableToPointInTime operation returned by the service.
 * @throws TableNotFoundException
 *         A source table with the name TableName does not currently exist within the subscriber's
 *         account or the subscriber is operating in the wrong Amazon Web Services Region.
* @throws PointInTimeRecoveryUnavailableException
* Point in time recovery has not yet been enabled for this source table.
* @throws LimitExceededException
* There is no limit to the number of daily on-demand backups that can be taken.
*
 * For most purposes, up to 500 simultaneous table operations are allowed per account. These operations include
 * CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
*
* When you are creating a table with one or more secondary indexes, you can have up to 250 such requests * running at a time. However, if the table or index specifications are complex, then DynamoDB might * temporarily reduce the number of concurrent operations. *
** When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. *
** There is a soft account quota of 2,500 tables. *
** GetRecords was called with a value of more than 1000 for the limit request parameter. *
*
* More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may
* result in request throttling.
* @throws InvalidExportTimeException
 *         The specified ExportTime is outside of the point in time recovery window.
* @throws ExportConflictException
* There was a conflict when writing to the specified S3 bucket.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.ExportTableToPointInTime
* @see AWS API Documentation
*/
ExportTableToPointInTimeResult exportTableToPointInTime(ExportTableToPointInTimeRequest exportTableToPointInTimeRequest);
/**
*
* The GetItem operation returns a set of attributes for the item with the given primary key. If there
* is no matching item, GetItem does not return any data and there will be no Item element
* in the response.
*
* GetItem provides an eventually consistent read by default. If your application requires a strongly
* consistent read, set ConsistentRead to true. Although a strongly consistent read might
* take more time than an eventually consistent read, it always returns the last updated value.
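*
* A minimal usage sketch (the table "Music" and its key attributes are illustrative assumptions; imports elided):
*
* <pre>{@code
* AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
*
* Map<String, AttributeValue> key = new HashMap<>();
* key.put("Artist", new AttributeValue("No One You Know"));
* key.put("SongTitle", new AttributeValue("Call Me Today"));
*
* GetItemResult result = client.getItem(new GetItemRequest()
*         .withTableName("Music")
*         .withKey(key)
*         .withConsistentRead(true)); // request a strongly consistent read
*
* Map<String, AttributeValue> item = result.getItem(); // null when no matching item exists
* }</pre>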
*
* @param getItemRequest
* Represents the input of a GetItem operation.
* @return Result of the GetItem operation returned by the service.
* @throws ProvisionedThroughputExceededException
* Your request rate is too high. The Amazon Web Services SDKs for DynamoDB automatically retry requests
* that receive this exception. Your request is eventually successful, unless your retry queue is too large
* to finish. Reduce the frequency of requests and use exponential backoff. For more information, go to Error Retries and Exponential Backoff in the Amazon DynamoDB Developer Guide.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
* correctly, or its status might not be ACTIVE.
* @throws RequestLimitExceededException
* Throughput exceeds the current throughput quota for your account. Please contact Amazon Web Services Support to request a quota increase.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.GetItem
* @see AWS API Documentation
*/
GetItemResult getItem(GetItemRequest getItemRequest);
/**
* Simplified method form for invoking the GetItem operation.
*
* @see #getItem(GetItemRequest)
*/
GetItemResult getItem(String tableName, java.util.Map<String, AttributeValue> key);
/**
* Imports table data from an S3 bucket.
* @param importTableRequest
* @return Result of the ImportTable operation returned by the service.
* @throws ResourceInUseException
* The operation conflicts with the resource's availability. For example, you attempted to recreate an
* existing table, or tried to delete a table currently in the CREATING state.
* @throws LimitExceededException
* There is no limit to the number of daily on-demand backups that can be taken.
*
* For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
* include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and
* RestoreTableToPointInTime.
*
* When you are creating a table with one or more secondary indexes, you can have up to 250 such requests * running at a time. However, if the table or index specifications are complex, then DynamoDB might * temporarily reduce the number of concurrent operations. *
** When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. *
** There is a soft account quota of 2,500 tables. *
** GetRecords was called with a value of more than 1000 for the limit request parameter. *
** More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may
* result in request throttling.
* @throws ImportConflictException
* There was a conflict when importing from the specified S3 source. This can occur when the current import
* conflicts with a previous import request that had the same client token.
* @sample AmazonDynamoDB.ImportTable
* @see AWS API Documentation
*/
ImportTableResult importTable(ImportTableRequest importTableRequest);
/**
* List backups associated with an Amazon Web Services account. To list backups for a given table, specify
* TableName. ListBackups returns a paginated list of results with at most 1 MB worth of
* items in a page. You can also specify a maximum number of entries to be returned in a page.
*
* In the request, start time is inclusive, but end time is exclusive. Note that these boundaries are for the time * at which the original backup was requested. *
*
* You can call ListBackups a maximum of five times per second.
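*
* A sketch of a bounded listing (the table name and time range are illustrative; "client" is assumed built elsewhere):
*
* <pre>{@code
* ListBackupsResult backups = client.listBackups(new ListBackupsRequest()
*         .withTableName("Music")
*         .withTimeRangeLowerBound(new Date(System.currentTimeMillis() - 86_400_000L)) // inclusive
*         .withTimeRangeUpperBound(new Date())                                          // exclusive
*         .withLimit(10));
* }</pre>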
*
* Returns a list of ContributorInsightsSummary for a table and all its global secondary indexes. *
* @param listContributorInsightsRequest
* @return Result of the ListContributorInsights operation returned by the service.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
* correctly, or its status might not be ACTIVE.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.ListContributorInsights
* @see AWS API Documentation
*/
ListContributorInsightsResult listContributorInsights(ListContributorInsightsRequest listContributorInsightsRequest);
/**
* * Lists completed exports within the past 90 days. *
* @param listExportsRequest
* @return Result of the ListExports operation returned by the service.
* @throws LimitExceededException
* There is no limit to the number of daily on-demand backups that can be taken.
*
* For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
* include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and
* RestoreTableToPointInTime.
*
* When you are creating a table with one or more secondary indexes, you can have up to 250 such requests * running at a time. However, if the table or index specifications are complex, then DynamoDB might * temporarily reduce the number of concurrent operations. *
** When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. *
** There is a soft account quota of 2,500 tables. *
** GetRecords was called with a value of more than 1000 for the limit request parameter. *
** More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may
* result in request throttling.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.ListExports
* @see AWS API Documentation
*/
ListExportsResult listExports(ListExportsRequest listExportsRequest);
/**
* Lists all global tables that have a replica in the specified Region. *
** This operation only applies to Version 2017.11.29 * (Legacy) of global tables. We recommend using Version 2019.11.21 * (Current) when creating new global tables, as it provides greater flexibility, higher efficiency and consumes * less write capacity than 2017.11.29 (Legacy). To determine which version you are using, see Determining the version. To update existing global tables from version 2017.11.29 (Legacy) to version * 2019.11.21 (Current), see Updating * global tables. *
** Lists completed imports within the past 90 days. *
* @param listImportsRequest
* @return Result of the ListImports operation returned by the service.
* @throws LimitExceededException
* There is no limit to the number of daily on-demand backups that can be taken.
*
* For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
* include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and
* RestoreTableToPointInTime.
*
* When you are creating a table with one or more secondary indexes, you can have up to 250 such requests * running at a time. However, if the table or index specifications are complex, then DynamoDB might * temporarily reduce the number of concurrent operations. *
** When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. *
** There is a soft account quota of 2,500 tables. *
** GetRecords was called with a value of more than 1000 for the limit request parameter. *
** More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may
* result in request throttling.
* @sample AmazonDynamoDB.ListImports
* @see AWS API Documentation
*/
ListImportsResult listImports(ListImportsRequest listImportsRequest);
/**
* Returns an array of table names associated with the current account and endpoint. The output from
* ListTables is paginated, with each page returning a maximum of 100 table names.
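*
* A pagination sketch using the simplified forms declared below (the printing is illustrative):
*
* <pre>{@code
* String startTable = null;
* do {
*     ListTablesResult page = (startTable == null) ? client.listTables() : client.listTables(startTable);
*     page.getTableNames().forEach(System.out::println);
*     startTable = page.getLastEvaluatedTableName(); // null once the last page has been read
* } while (startTable != null);
* }</pre>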
*
* @param listTablesRequest
* Represents the input of a ListTables operation.
* @return Result of the ListTables operation returned by the service.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.ListTables
* @see AWS API Documentation
*/
ListTablesResult listTables(ListTablesRequest listTablesRequest);
/**
* Simplified method form for invoking the ListTables operation.
*
* @see #listTables(ListTablesRequest)
*/
ListTablesResult listTables();
/**
* Simplified method form for invoking the ListTables operation.
*
* @see #listTables(ListTablesRequest)
*/
ListTablesResult listTables(String exclusiveStartTableName);
/**
* Simplified method form for invoking the ListTables operation.
*
* @see #listTables(ListTablesRequest)
*/
ListTablesResult listTables(String exclusiveStartTableName, Integer limit);
/**
* Simplified method form for invoking the ListTables operation.
*
* @see #listTables(ListTablesRequest)
*/
ListTablesResult listTables(Integer limit);
/**
* * List all tags on an Amazon DynamoDB resource. You can call ListTagsOfResource up to 10 times per second, per * account. *
** For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in * the Amazon DynamoDB Developer Guide. *
* @param listTagsOfResourceRequest
* @return Result of the ListTagsOfResource operation returned by the service.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
* correctly, or its status might not be ACTIVE.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.ListTagsOfResource
* @see AWS API Documentation
*/
ListTagsOfResourceResult listTagsOfResource(ListTagsOfResourceRequest listTagsOfResourceRequest);
/**
*
* Creates a new item, or replaces an old item with a new item. If an item that has the same primary key as the new
* item already exists in the specified table, the new item completely replaces the existing item. You can perform a
* conditional put operation (add a new item if one with the specified primary key doesn't exist), or replace an
* existing item if it has certain attribute values. You can return the item's attribute values in the same
* operation, using the ReturnValues parameter.
*
* When you add an item, the primary key attributes are the only required attributes. *
** Empty String and Binary attribute values are allowed. Attribute values of type String and Binary must have a * length greater than zero if the attribute is used as a key attribute for a table or index. Set type attributes * cannot be empty. *
*
* Invalid Requests with empty values will be rejected with a ValidationException exception.
*
* To prevent a new item from replacing an existing item, use a conditional expression that contains the
* attribute_not_exists function with the name of the attribute being used as the partition key for the
* table. Since every record must contain that attribute, the attribute_not_exists function will only
* succeed if no matching item exists.
*
* For more information about PutItem, see Working with
* Items in the Amazon DynamoDB Developer Guide.
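*
* A conditional-put sketch (the table and attribute names are illustrative; "client" is assumed built elsewhere):
*
* <pre>{@code
* Map<String, AttributeValue> item = new HashMap<>();
* item.put("Artist", new AttributeValue("No One You Know"));
* item.put("SongTitle", new AttributeValue("Call Me Today"));
*
* // attribute_not_exists on the partition key makes this an insert-only write:
* // the call throws ConditionalCheckFailedException if the item already exists.
* client.putItem(new PutItemRequest()
*         .withTableName("Music")
*         .withItem(item)
*         .withConditionExpression("attribute_not_exists(Artist)"));
* }</pre>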
*
* @param putItemRequest
* Represents the input of a PutItem operation.
* @return Result of the PutItem operation returned by the service.
* @throws ConditionalCheckFailedException
* A condition specified in the operation could not be evaluated.
* @throws ProvisionedThroughputExceededException
* Your request rate is too high. The Amazon Web Services SDKs for DynamoDB automatically retry requests
* that receive this exception. Your request is eventually successful, unless your retry queue is too large
* to finish. Reduce the frequency of requests and use exponential backoff. For more information, go to Error Retries and Exponential Backoff in the Amazon DynamoDB Developer Guide.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
* correctly, or its status might not be ACTIVE.
* @throws ItemCollectionSizeLimitExceededException
* An item collection is too large. This exception is only returned for tables that have one or more local
* secondary indexes.
* @throws TransactionConflictException
* Operation was rejected because there is an ongoing transaction for the item.
* @throws RequestLimitExceededException
* Throughput exceeds the current throughput quota for your account. Please contact Amazon Web Services Support to request a quota increase.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.PutItem
* @see AWS API Documentation
*/
PutItemResult putItem(PutItemRequest putItemRequest);
/**
* Simplified method form for invoking the PutItem operation.
*
* @see #putItem(PutItemRequest)
*/
PutItemResult putItem(String tableName, java.util.Map<String, AttributeValue> item);
/**
* You must provide the name of the partition key attribute and a single value for that attribute.
* Query returns all items with that partition key value. Optionally, you can provide a sort key
* attribute and use a comparison operator to refine the search results.
*
* Use the KeyConditionExpression parameter to provide a specific value for the partition key. The
* Query operation will return all of the items from the table or index with that partition key value.
* You can optionally narrow the scope of the Query operation by specifying a sort key value and a
* comparison operator in KeyConditionExpression. To further refine the Query results, you
* can optionally provide a FilterExpression. A FilterExpression determines which items
* within the results should be returned to you. All of the other results are discarded.
*
* A Query operation always returns a result set. If no matching items are found, the result set will
* be empty. Queries that do not return results consume the minimum number of read capacity units for that type of
* read operation.
*
* DynamoDB calculates the number of read capacity units consumed based on item size, not on the amount of data that
* is returned to an application. The number of capacity units consumed will be the same whether you request all of
* the attributes (the default behavior) or just some of them (using a projection expression). The number will also
* be the same whether or not you use a FilterExpression.
*
* Query results are always sorted by the sort key value. If the data type of the sort key is Number,
* the results are returned in numeric order; otherwise, the results are returned in order of UTF-8 bytes. By
* default, the sort order is ascending. To reverse the order, set the ScanIndexForward parameter to
* false.
*
* A single Query operation will read up to the maximum number of items set (if using the
* Limit parameter) or a maximum of 1 MB of data and then apply any filtering to the results using
* FilterExpression. If LastEvaluatedKey is present in the response, you will need to
* paginate the result set. For more information, see Paginating
* the Results in the Amazon DynamoDB Developer Guide.
*
* FilterExpression is applied after a Query finishes, but before the results are
* returned. A FilterExpression cannot contain partition key or sort key attributes. You need to
* specify those attributes in the KeyConditionExpression.
*
* A Query operation can return an empty result set and a LastEvaluatedKey if all the
* items read for the page of results are filtered out.
*
* You can query a table, a local secondary index, or a global secondary index. For a query on a table or on a local
* secondary index, you can set the ConsistentRead parameter to true and obtain a strongly
* consistent result. Global secondary indexes support eventually consistent reads only, so do not specify
* ConsistentRead when querying a global secondary index.
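*
* A key-condition query sketch (the table, attribute names, and values are illustrative):
*
* <pre>{@code
* Map<String, AttributeValue> values = new HashMap<>();
* values.put(":artist", new AttributeValue("No One You Know"));
* values.put(":prefix", new AttributeValue("Call"));
*
* QueryResult result = client.query(new QueryRequest()
*         .withTableName("Music")
*         .withKeyConditionExpression("Artist = :artist AND begins_with(SongTitle, :prefix)")
*         .withExpressionAttributeValues(values)
*         .withScanIndexForward(false)); // return results in descending sort-key order
* }</pre>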
*
* @param queryRequest
* Represents the input of a Query operation.
* @return Result of the Query operation returned by the service.
* @throws ProvisionedThroughputExceededException
* Your request rate is too high. The Amazon Web Services SDKs for DynamoDB automatically retry requests
* that receive this exception. Your request is eventually successful, unless your retry queue is too large
* to finish. Reduce the frequency of requests and use exponential backoff. For more information, go to Error Retries and Exponential Backoff in the Amazon DynamoDB Developer Guide.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
* correctly, or its status might not be ACTIVE.
* @throws RequestLimitExceededException
* Throughput exceeds the current throughput quota for your account. Please contact Amazon Web Services Support to request a quota increase.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.Query
* @see AWS API Documentation
*/
QueryResult query(QueryRequest queryRequest);
/**
* * Creates a new table from an existing backup. Any number of users can execute up to 50 concurrent restores (any * type of restore) in a given account. *
*
* You can call RestoreTableFromBackup at a maximum rate of 10 times per second.
*
* You must manually set up the following on the restored table: *
** Auto scaling policies *
** IAM policies *
** Amazon CloudWatch metrics and alarms *
** Tags *
** Stream settings *
** Time to Live (TTL) settings *
*
* For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
* include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and
* RestoreTableToPointInTime.
*
* When you are creating a table with one or more secondary indexes, you can have up to 250 such requests * running at a time. However, if the table or index specifications are complex, then DynamoDB might * temporarily reduce the number of concurrent operations. *
** When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. *
** There is a soft account quota of 2,500 tables. *
** GetRecords was called with a value of more than 1000 for the limit request parameter. *
** More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may
* result in request throttling.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.RestoreTableFromBackup
* @see AWS API Documentation
*/
RestoreTableFromBackupResult restoreTableFromBackup(RestoreTableFromBackupRequest restoreTableFromBackupRequest);
/**
* Restores the specified table to the specified point in time within EarliestRestorableDateTime and
* LatestRestorableDateTime. You can restore your table to any point in time during the last 35 days.
* Any number of users can execute up to 4 concurrent restores (any type of restore) in a given account.
*
* When you restore using point in time recovery, DynamoDB restores your table data to the state based on the * selected date and time (day:hour:minute:second) to a new table. *
** Along with data, the following are also included on the new restored table using point in time recovery: *
** Global secondary indexes (GSIs) *
** Local secondary indexes (LSIs) *
** Provisioned read and write capacity *
** Encryption settings *
** All these settings come from the current settings of the source table at the time of restore. *
** You must manually set up the following on the restored table: *
** Auto scaling policies *
** IAM policies *
** Amazon CloudWatch metrics and alarms *
** Tags *
** Stream settings *
** Time to Live (TTL) settings *
** Point in time recovery settings *
* @param restoreTableToPointInTimeRequest
* @return Result of the RestoreTableToPointInTime operation returned by the service.
* @throws TableNotFoundException
* A source table with the name TableName does not currently exist within the subscriber's
* account or the subscriber is operating in the wrong Amazon Web Services Region.
* @throws TableInUseException
* A target table with the specified name is either being created or deleted.
* @throws LimitExceededException
* There is no limit to the number of daily on-demand backups that can be taken.
*
* For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
* include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and
* RestoreTableToPointInTime.
*
* When you are creating a table with one or more secondary indexes, you can have up to 250 such requests * running at a time. However, if the table or index specifications are complex, then DynamoDB might * temporarily reduce the number of concurrent operations. *
** When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. *
** There is a soft account quota of 2,500 tables. *
** GetRecords was called with a value of more than 1000 for the limit request parameter. *
** More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may
* result in request throttling.
* @throws InvalidRestoreTimeException
* An invalid restore time was specified. RestoreDateTime must be between EarliestRestorableDateTime and
* LatestRestorableDateTime.
* @throws PointInTimeRecoveryUnavailableException
* Point in time recovery has not yet been enabled for this source table.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.RestoreTableToPointInTime
* @see AWS API Documentation
*/
RestoreTableToPointInTimeResult restoreTableToPointInTime(RestoreTableToPointInTimeRequest restoreTableToPointInTimeRequest);
/**
* The Scan operation returns one or more items and item attributes by accessing every item in a table
* or a secondary index. To have DynamoDB return fewer items, you can provide a FilterExpression
* operation.
*
* If the total size of scanned items exceeds the maximum dataset size limit of 1 MB, the scan completes and results
* are returned to the user. The LastEvaluatedKey value is also returned and the requestor can use the
* LastEvaluatedKey to continue the scan in a subsequent operation. Each scan response also includes the
* number of items that were scanned (ScannedCount) as part of the request. If using a FilterExpression,
* a scan result can result in no items meeting the criteria and the Count will result in zero. If
* you did not use a FilterExpression in the scan request, then Count is the same as
* ScannedCount.
*
* Count and ScannedCount only return the count of items specific to a single scan request
* and, unless the table is less than 1 MB, do not represent the total number of items in the table.
*
* A single Scan operation first reads up to the maximum number of items set (if using the
* Limit parameter) or a maximum of 1 MB of data and then applies any filtering to the results if a
* FilterExpression is provided. If LastEvaluatedKey is present in the response,
* pagination is required to complete the full table scan. For more information, see Paginating the
* Results in the Amazon DynamoDB Developer Guide.
*
* Scan operations proceed sequentially; however, for faster performance on a large table or secondary
* index, applications can request a parallel Scan operation by providing the Segment and
* TotalSegments parameters. For more information, see Parallel
* Scan in the Amazon DynamoDB Developer Guide.
*
* By default, a Scan uses eventually consistent reads when accessing the items in a table. Therefore,
* the results from an eventually consistent Scan may not include the latest item changes at the time
* the scan iterates through each item in the table. If you require a strongly consistent read of each item as the
* scan iterates through the items in the table, you can set the ConsistentRead parameter to true.
* Strong consistency only relates to the consistency of the read at the item level.
*
* DynamoDB does not provide snapshot isolation for a scan operation when the ConsistentRead parameter
* is set to true. Thus, a DynamoDB scan operation does not guarantee that all reads in a scan see a consistent
* snapshot of the table when the scan operation was requested.
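*
* A full-scan pagination sketch (the table name is illustrative; "client" is assumed built elsewhere):
*
* <pre>{@code
* Map<String, AttributeValue> lastKey = null;
* do {
*     ScanResult page = client.scan(new ScanRequest()
*             .withTableName("Music")
*             .withExclusiveStartKey(lastKey)); // null on the first request
*     page.getItems().forEach(System.out::println);
*     lastKey = page.getLastEvaluatedKey(); // null when the scan is complete
* } while (lastKey != null);
* }</pre>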
*
* @param scanRequest
* Represents the input of a Scan operation.
* @return Result of the Scan operation returned by the service.
* @throws ProvisionedThroughputExceededException
* Your request rate is too high. The Amazon Web Services SDKs for DynamoDB automatically retry requests
* that receive this exception. Your request is eventually successful, unless your retry queue is too large
* to finish. Reduce the frequency of requests and use exponential backoff. For more information, go to Error Retries and Exponential Backoff in the Amazon DynamoDB Developer Guide.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
* correctly, or its status might not be ACTIVE.
* @throws RequestLimitExceededException
* Throughput exceeds the current throughput quota for your account. Please contact Amazon Web Services Support to request a quota increase.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.Scan
* @see AWS API Documentation
*/
ScanResult scan(ScanRequest scanRequest);
/**
* Simplified method form for invoking the Scan operation.
*
* @see #scan(ScanRequest)
*/
ScanResult scan(String tableName, java.util.List<String> attributesToGet);
/**
* Associate a set of tags with an Amazon DynamoDB resource. You can then activate these user-defined tags so that
* they appear on the Billing and Cost Management console for cost allocation tracking. You can call TagResource up
* to five times per second, per account.
** For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in * the Amazon DynamoDB Developer Guide. *
* @param tagResourceRequest
* @return Result of the TagResource operation returned by the service.
* @throws LimitExceededException
* There is no limit to the number of daily on-demand backups that can be taken.
*
* For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
* include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and
* RestoreTableToPointInTime.
*
* When you are creating a table with one or more secondary indexes, you can have up to 250 such requests * running at a time. However, if the table or index specifications are complex, then DynamoDB might * temporarily reduce the number of concurrent operations. *
** When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. *
** There is a soft account quota of 2,500 tables. *
** GetRecords was called with a value of more than 1000 for the limit request parameter. *
*
* More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may
* result in request throttling.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
* correctly, or its status might not be ACTIVE.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @throws ResourceInUseException
* The operation conflicts with the resource's availability. For example, you attempted to recreate an
* existing table, or tried to delete a table currently in the CREATING state.
* @sample AmazonDynamoDB.TagResource
* @see AWS API Documentation
*/
TagResourceResult tagResource(TagResourceRequest tagResourceRequest);
/**
*
* TransactGetItems is a synchronous operation that atomically retrieves multiple items from one or
* more tables (but not from indexes) in a single account and Region. A TransactGetItems call can
* contain up to 100 TransactGetItem objects, each of which contains a Get structure that
* specifies an item to retrieve from a table in the account and Region. A call to TransactGetItems
* cannot retrieve items from tables in more than one Amazon Web Services account or Region. The aggregate size of
* the items in the transaction cannot exceed 4 MB.
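*
* A sketch of a two-item transactional read (the tables and keys are illustrative; imports elided):
*
* <pre>{@code
* TransactGetItemsResult result = client.transactGetItems(new TransactGetItemsRequest().withTransactItems(
*         new TransactGetItem().withGet(new Get()
*                 .withTableName("Orders")
*                 .withKey(Collections.singletonMap("OrderId", new AttributeValue("O-1")))),
*         new TransactGetItem().withGet(new Get()
*                 .withTableName("Customers")
*                 .withKey(Collections.singletonMap("CustomerId", new AttributeValue("C-1"))))));
* List<ItemResponse> responses = result.getResponses(); // same order as the requested items
* }</pre>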
*
* DynamoDB rejects the entire TransactGetItems request if any of the following is true:
*
* A conflicting operation is in the process of updating an item to be read. *
** There is insufficient provisioned capacity for the transaction to be completed. *
** There is a user error, such as an invalid data format. *
** The aggregate size of the items in the transaction exceeded 4 MB. *
* @param transactGetItemsRequest
* @return Result of the TransactGetItems operation returned by the service.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
* correctly, or its status might not be ACTIVE.
* @throws TransactionCanceledException
* The entire transaction request was canceled.
*
* DynamoDB cancels a TransactWriteItems request under the following circumstances:
*
* A condition in one of the condition expressions is not met. *
*
* A table in the TransactWriteItems request is in a different account or region.
*
* More than one action in the TransactWriteItems operation targets the same item.
*
* There is insufficient provisioned capacity for the transaction to be completed. *
** An item size becomes too large (larger than 400 KB), or a local secondary index (LSI) becomes too large, * or a similar validation error occurs because of changes made by the transaction. *
** There is a user error, such as an invalid data format. *
*
* DynamoDB cancels a TransactGetItems request under the following circumstances:
*
* There is an ongoing TransactGetItems operation that conflicts with a concurrent
* PutItem, UpdateItem, DeleteItem or TransactWriteItems
* request. In this case the TransactGetItems operation fails with a
* TransactionCanceledException.
*
* A table in the TransactGetItems request is in a different account or region.
*
* There is insufficient provisioned capacity for the transaction to be completed. *
** There is a user error, such as an invalid data format. *
*
* If using Java, DynamoDB lists the cancellation reasons on the CancellationReasons property.
* This property is not set for other languages. Transaction cancellation reasons are ordered in the order
* of requested items; if an item has no error, it will have a None code and Null
* message.
*
* Cancellation reason codes and possible error messages: *
** No Errors: *
*
* Code: None
*
* Message: null
*
* Conditional Check Failed: *
*
* Code: ConditionalCheckFailed
*
* Message: The conditional request failed. *
** Item Collection Size Limit Exceeded: *
*
* Code: ItemCollectionSizeLimitExceeded
*
* Message: Collection size exceeded. *
** Transaction Conflict: *
*
* Code: TransactionConflict
*
* Message: Transaction is ongoing for the item. *
** Provisioned Throughput Exceeded: *
*
* Code: ProvisionedThroughputExceeded
*
* Messages: *
** The level of configured provisioned throughput for the table was exceeded. Consider increasing your * provisioning level with the UpdateTable API. *
** This message is returned when provisioned throughput is exceeded on a provisioned DynamoDB table. *
** The level of configured provisioned throughput for one or more global secondary indexes of the table was * exceeded. Consider increasing your provisioning level for the under-provisioned global secondary indexes * with the UpdateTable API. *
** This message is returned when provisioned throughput is exceeded on a provisioned GSI. *
** Throttling Error: *
*
* Code: ThrottlingError
*
* Messages: *
** Throughput exceeds the current capacity of your table or index. DynamoDB is automatically scaling your * table or index so please try again shortly. If exceptions persist, check if you have a hot key: * https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-design.html. *
** This message is returned when writes get throttled on an On-Demand table as DynamoDB is automatically * scaling the table. *
** Throughput exceeds the current capacity for one or more global secondary indexes. DynamoDB is * automatically scaling your index so please try again shortly. *
** This message is returned when writes get throttled on an On-Demand GSI as DynamoDB is automatically * scaling the GSI. *
** Validation Error: *
*
* Code: ValidationError
*
* Messages: *
** One or more parameter values were invalid. *
** The update expression attempted to update the secondary index key beyond allowed size limits. *
** The update expression attempted to update the secondary index key to unsupported type. *
** An operand in the update expression has an incorrect data type. *
** Item size to update has exceeded the maximum allowed size. *
** Number overflow. Attempting to store a number with magnitude larger than supported range. *
** Type mismatch for attribute to update. *
** Nesting Levels have exceeded supported limits. *
** The document path provided in the update expression is invalid for update. *
** The provided expression refers to an attribute that does not exist in the item. *
*
* TransactWriteItems is a synchronous write operation that groups up to 100 action requests. These
* actions can target items in different tables, but not in different Amazon Web Services accounts or Regions, and
* no two actions can target the same item. For example, you cannot both ConditionCheck and
* Update the same item. The aggregate size of the items in the transaction cannot exceed 4 MB.
*
* The actions are completed atomically so that either all of them succeed, or all of them fail. They are defined by * the following objects: *
*
* Put — Initiates a PutItem operation to write a new item. This structure specifies
* the primary key of the item to be written, the name of the table to write it in, an optional condition expression
* that must be satisfied for the write to succeed, a list of the item's attributes, and a field indicating whether
* to retrieve the item's attributes if the condition is not met.
*
* Update — Initiates an UpdateItem operation to update an existing item. This
* structure specifies the primary key of the item to be updated, the name of the table where it resides, an
* optional condition expression that must be satisfied for the update to succeed, an expression that defines one or
* more attributes to be updated, and a field indicating whether to retrieve the item's attributes if the condition
* is not met.
*
* Delete — Initiates a DeleteItem operation to delete an existing item. This structure
* specifies the primary key of the item to be deleted, the name of the table where it resides, an optional
* condition expression that must be satisfied for the deletion to succeed, and a field indicating whether to
* retrieve the item's attributes if the condition is not met.
*
* ConditionCheck — Applies a condition to an item that is not being modified by the transaction.
* This structure specifies the primary key of the item to be checked, the name of the table where it resides, a
* condition expression that must be satisfied for the transaction to succeed, and a field indicating whether to
* retrieve the item's attributes if the condition is not met.
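*
* A sketch combining a ConditionCheck with a Put (the tables, keys, and idempotency token are illustrative;
* imports elided):
*
* <pre>{@code
* client.transactWriteItems(new TransactWriteItemsRequest()
*         .withClientRequestToken("order-O-1") // idempotency token; reuse it on retries
*         .withTransactItems(
*                 new TransactWriteItem().withConditionCheck(new ConditionCheck()
*                         .withTableName("Customers")
*                         .withKey(Collections.singletonMap("CustomerId", new AttributeValue("C-1")))
*                         .withConditionExpression("attribute_exists(CustomerId)")),
*                 new TransactWriteItem().withPut(new Put()
*                         .withTableName("Orders")
*                         .withItem(Collections.singletonMap("OrderId", new AttributeValue("O-1"))))));
* }</pre>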
*
* DynamoDB rejects the entire TransactWriteItems request if any of the following is true:
*
* A condition in one of the condition expressions is not met. *
** An ongoing operation is in the process of updating the same item. *
** There is insufficient provisioned capacity for the transaction to be completed. *
** An item size becomes too large (bigger than 400 KB), a local secondary index (LSI) becomes too large, or a * similar validation error occurs because of changes made by the transaction. *
** The aggregate size of the items in the transaction exceeds 4 MB. *
** There is a user error, such as an invalid data format. *
* @param transactWriteItemsRequest
* @return Result of the TransactWriteItems operation returned by the service.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
* correctly, or its status might not be ACTIVE.
* @throws TransactionCanceledException
* The entire transaction request was canceled.
*
* DynamoDB cancels a TransactWriteItems request under the following circumstances:
*
* A condition in one of the condition expressions is not met. *
*
* A table in the TransactWriteItems request is in a different account or region.
*
* More than one action in the TransactWriteItems operation targets the same item.
*
* There is insufficient provisioned capacity for the transaction to be completed. *
** An item size becomes too large (larger than 400 KB), or a local secondary index (LSI) becomes too large, * or a similar validation error occurs because of changes made by the transaction. *
** There is a user error, such as an invalid data format. *
*
* DynamoDB cancels a TransactGetItems request under the following circumstances:
*
* There is an ongoing TransactGetItems operation that conflicts with a concurrent
* PutItem, UpdateItem, DeleteItem or TransactWriteItems
* request. In this case the TransactGetItems operation fails with a
* TransactionCanceledException.
*
* A table in the TransactGetItems request is in a different account or region.
*
* There is insufficient provisioned capacity for the transaction to be completed. *
** There is a user error, such as an invalid data format. *
*
* If using Java, DynamoDB lists the cancellation reasons on the CancellationReasons property.
* This property is not set for other languages. Transaction cancellation reasons are ordered in the order
* of requested items; if an item has no error, it will have a None code and Null
* message.
*
* Cancellation reason codes and possible error messages: *
** No Errors: *
*
* Code: None
*
* Message: null
*
* Conditional Check Failed: *
*
* Code: ConditionalCheckFailed
*
* Message: The conditional request failed. *
** Item Collection Size Limit Exceeded: *
*
* Code: ItemCollectionSizeLimitExceeded
*
* Message: Collection size exceeded. *
** Transaction Conflict: *
*
* Code: TransactionConflict
*
* Message: Transaction is ongoing for the item. *
** Provisioned Throughput Exceeded: *
*
* Code: ProvisionedThroughputExceeded
*
* Messages: *
** The level of configured provisioned throughput for the table was exceeded. Consider increasing your * provisioning level with the UpdateTable API. *
** This message is returned when provisioned throughput is exceeded on a provisioned DynamoDB table. *
** The level of configured provisioned throughput for one or more global secondary indexes of the table was * exceeded. Consider increasing your provisioning level for the under-provisioned global secondary indexes * with the UpdateTable API. *
** This message is returned when provisioned throughput is exceeded on a provisioned GSI. *
** Throttling Error: *
*
* Code: ThrottlingError
*
* Messages: *
** Throughput exceeds the current capacity of your table or index. DynamoDB is automatically scaling your * table or index so please try again shortly. If exceptions persist, check if you have a hot key: * https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-design.html. *
** This message is returned when writes get throttled on an On-Demand table as DynamoDB is automatically * scaling the table. *
** Throughput exceeds the current capacity for one or more global secondary indexes. DynamoDB is * automatically scaling your index so please try again shortly. *
** This message is returned when writes get throttled on an On-Demand GSI as DynamoDB is automatically * scaling the GSI. *
** Validation Error: *
*
* Code: ValidationError
*
* Messages: *
** One or more parameter values were invalid. *
** The update expression attempted to update the secondary index key beyond allowed size limits. *
** The update expression attempted to update the secondary index key to unsupported type. *
** An operand in the update expression has an incorrect data type. *
** Item size to update has exceeded the maximum allowed size. *
** Number overflow. Attempting to store a number with magnitude larger than supported range. *
** Type mismatch for attribute to update. *
** Nesting Levels have exceeded supported limits. *
** The document path provided in the update expression is invalid for update. *
** The provided expression refers to an attribute that does not exist in the item. *
** Recommended Settings *
*
* This is a general recommendation for handling the TransactionInProgressException. These
* settings help ensure that the client retries will trigger completion of the ongoing
* TransactWriteItems request.
*
* Set clientExecutionTimeout to a value that allows at least one retry to be processed after 5
* seconds have elapsed since the first attempt for the TransactWriteItems operation.
*
* Set socketTimeout to a value a little lower than the requestTimeout setting.
*
* requestTimeout should be set based on the time taken for the individual retries of a single
* HTTP request for your use case, but setting it to 1 second or higher should work well to reduce chances
* of retries and TransactionInProgressException errors.
*
* Use exponential backoff when retrying and tune backoff if needed. *
** Assuming default retry policy, example timeout settings based on the guidelines above are as follows: *
** Example timeline: *
** 0-1000 first attempt *
** 1000-1500 first sleep/delay (default retry policy uses 500 ms as base delay for 4xx errors) *
** 1500-2500 second attempt *
** 2500-3500 second sleep/delay (500 * 2, exponential backoff) *
** 3500-4500 third attempt *
** 4500-6500 third sleep/delay (500 * 2^2) *
** 6500-7500 fourth attempt (this can trigger inline recovery since 5 seconds have elapsed since the first * attempt reached TC) *
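*
* A client-configuration sketch matching the guidelines above (the exact timeout values are illustrative):
*
* <pre>{@code
* ClientConfiguration config = new ClientConfiguration()
*         .withRequestTimeout(1000)          // per-HTTP-attempt timeout, per the guideline above
*         .withSocketTimeout(900)            // a little lower than the request timeout
*         .withClientExecutionTimeout(8000); // leaves room for a retry after 5 seconds have elapsed
*
* AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
*         .withClientConfiguration(config)
*         .build();
* }</pre>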
*
* Removes the association of tags from an Amazon DynamoDB resource. You can call UntagResource up to
* five times per second, per account.
*
* For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in * the Amazon DynamoDB Developer Guide. *
* @param untagResourceRequest
* @return Result of the UntagResource operation returned by the service.
* @throws LimitExceededException
* There is no limit to the number of daily on-demand backups that can be taken.
*
* For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
* include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and
* RestoreTableToPointInTime.
*
* When you are creating a table with one or more secondary indexes, you can have up to 250 such requests * running at a time. However, if the table or index specifications are complex, then DynamoDB might * temporarily reduce the number of concurrent operations. *
** When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. *
** There is a soft account quota of 2,500 tables. *
** GetRecords was called with a value of more than 1000 for the limit request parameter. *
*
* More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may
* result in request throttling.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
* correctly, or its status might not be ACTIVE.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @throws ResourceInUseException
* The operation conflicts with the resource's availability. For example, you attempted to recreate an
* existing table, or tried to delete a table currently in the CREATING state.
* @sample AmazonDynamoDB.UntagResource
* @see AWS API Documentation
*/
UntagResourceResult untagResource(UntagResourceRequest untagResourceRequest);
/**
*
* UpdateContinuousBackups enables or disables point in time recovery for the specified table. A
* successful UpdateContinuousBackups call returns the current
* ContinuousBackupsDescription. Continuous backups are ENABLED on all tables at table
* creation. If point in time recovery is enabled, PointInTimeRecoveryStatus will be set to ENABLED.
*
* Once continuous backups and point in time recovery are enabled, you can restore to any point in time within
* EarliestRestorableDateTime and LatestRestorableDateTime.
*
* LatestRestorableDateTime is typically 5 minutes before the current time. You can restore your table
* to any point in time during the last 35 days.
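*
* A sketch enabling point in time recovery (the table name is illustrative; "client" is assumed built elsewhere):
*
* <pre>{@code
* client.updateContinuousBackups(new UpdateContinuousBackupsRequest()
*         .withTableName("Music")
*         .withPointInTimeRecoverySpecification(
*                 new PointInTimeRecoverySpecification().withPointInTimeRecoveryEnabled(true)));
* }</pre>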
*
* @param updateContinuousBackupsRequest
* @return Result of the UpdateContinuousBackups operation returned by the service.
* @throws TableNotFoundException
* A source table with the name TableName does not currently exist within the subscriber's
* account or the subscriber is operating in the wrong Amazon Web Services Region.
* @throws ContinuousBackupsUnavailableException
* Backups have not yet been enabled for this table.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.UpdateContinuousBackups
* @see AWS API Documentation
*/
UpdateContinuousBackupsResult updateContinuousBackups(UpdateContinuousBackupsRequest updateContinuousBackupsRequest);
/**
* * Updates the status for contributor insights for a specific table or index. CloudWatch Contributor Insights for * DynamoDB graphs display the partition key and (if applicable) sort key of frequently accessed items and * frequently throttled items in plaintext. If you require the use of Amazon Web Services Key Management Service * (KMS) to encrypt this table’s partition key and sort key data with an Amazon Web Services managed key or customer * managed key, you should not enable CloudWatch Contributor Insights for DynamoDB for this table. *
* @param updateContributorInsightsRequest
* @return Result of the UpdateContributorInsights operation returned by the service.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
* correctly, or its status might not be ACTIVE.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.UpdateContributorInsights
* @see AWS API Documentation
*/
UpdateContributorInsightsResult updateContributorInsights(UpdateContributorInsightsRequest updateContributorInsightsRequest);
/**
* * Adds or removes replicas in the specified global table. The global table must already exist to be able to use * this operation. Any replica to be added must be empty, have the same name as the global table, have the same key * schema, have DynamoDB Streams enabled, and have the same provisioned and maximum write capacity units. *
** This operation only applies to Version 2017.11.29 * (Legacy) of global tables. We recommend using Version 2019.11.21 * (Current) when creating new global tables, as it provides greater flexibility, higher efficiency and consumes * less write capacity than 2017.11.29 (Legacy). To determine which version you are using, see Determining the version. To update existing global tables from version 2017.11.29 (Legacy) to version * 2019.11.21 (Current), see Updating * global tables. *
** This operation only applies to Version * 2017.11.29 of global tables. If you are using global tables Version * 2019.11.21 you can use DescribeTable * instead. *
*
* Although you can use UpdateGlobalTable to add replicas and remove replicas in a single request, for
* simplicity we recommend that you issue separate requests for adding or removing replicas.
*
* If global secondary indexes are specified, then the following conditions must also be met: *
** The global secondary indexes must have the same name. *
** The global secondary indexes must have the same hash key and sort key (if present). *
** The global secondary indexes must have the same provisioned and maximum write capacity units. *
* @param updateGlobalTableRequest
* @return Result of the UpdateGlobalTable operation returned by the service.
* @throws TableNotFoundException
* A source table with the name TableName does not currently exist within the subscriber's
* account or the subscriber is operating in the wrong Amazon Web Services Region.
* @sample AmazonDynamoDB.UpdateGlobalTable
* @see AWS API Documentation
*/
UpdateGlobalTableResult updateGlobalTable(UpdateGlobalTableRequest updateGlobalTableRequest);
/**
* * Updates settings for a global table. *
** This operation only applies to Version 2017.11.29 * (Legacy) of global tables. We recommend using Version 2019.11.21 * (Current) when creating new global tables, as it provides greater flexibility, higher efficiency and consumes * less write capacity than 2017.11.29 (Legacy). To determine which version you are using, see Determining the version. To update existing global tables from version 2017.11.29 (Legacy) to version * 2019.11.21 (Current), see Updating * global tables. *
*
* For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
* include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and
* RestoreTableToPointInTime.
*
* When you are creating a table with one or more secondary indexes, you can have up to 250 such requests * running at a time. However, if the table or index specifications are complex, then DynamoDB might * temporarily reduce the number of concurrent operations. *
** When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. *
** There is a soft account quota of 2,500 tables. *
** GetRecords was called with a value of more than 1000 for the limit request parameter. *
*
* More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may
* result in request throttling.
* @throws ResourceInUseException
* The operation conflicts with the resource's availability. For example, you attempted to recreate an
* existing table, or tried to delete a table currently in the CREATING state.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.UpdateGlobalTableSettings
* @see AWS API Documentation
*/
UpdateGlobalTableSettingsResult updateGlobalTableSettings(UpdateGlobalTableSettingsRequest updateGlobalTableSettingsRequest);
/**
*
* Edits an existing item's attributes, or adds a new item to the table if it does not already exist. You can put, * delete, or add attribute values. You can also perform a conditional update on an existing item (insert a new * attribute name-value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected * attribute values). *
*
* You can also return the item's attribute values in the same UpdateItem operation using the
* ReturnValues parameter.
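*
* An atomic-counter sketch (the table, key, and attribute names are illustrative; imports elided):
*
* <pre>{@code
* Map<String, AttributeValue> key = new HashMap<>();
* key.put("Artist", new AttributeValue("No One You Know"));
* key.put("SongTitle", new AttributeValue("Call Me Today"));
*
* Map<String, AttributeValue> values = new HashMap<>();
* values.put(":zero", new AttributeValue().withN("0"));
* values.put(":inc", new AttributeValue().withN("1"));
*
* UpdateItemResult result = client.updateItem(new UpdateItemRequest()
*         .withTableName("Music")
*         .withKey(key)
*         .withUpdateExpression("SET Plays = if_not_exists(Plays, :zero) + :inc")
*         .withExpressionAttributeValues(values)
*         .withReturnValues(ReturnValue.UPDATED_NEW)); // return only the updated attributes
* }</pre>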
*
* @param updateItemRequest
* Represents the input of an UpdateItem operation.
* @return Result of the UpdateItem operation returned by the service.
* @throws ConditionalCheckFailedException
* A condition specified in the operation could not be evaluated.
* @throws ProvisionedThroughputExceededException
* Your request rate is too high. The Amazon Web Services SDKs for DynamoDB automatically retry requests
* that receive this exception. Your request is eventually successful, unless your retry queue is too large
* to finish. Reduce the frequency of requests and use exponential backoff. For more information, go to Error Retries and Exponential Backoff in the Amazon DynamoDB Developer Guide.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
* correctly, or its status might not be ACTIVE.
* @throws ItemCollectionSizeLimitExceededException
* An item collection is too large. This exception is only returned for tables that have one or more local
* secondary indexes.
* @throws TransactionConflictException
* Operation was rejected because there is an ongoing transaction for the item.
* @throws RequestLimitExceededException
* Throughput exceeds the current throughput quota for your account. Please contact Amazon Web Services Support to request a quota increase.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.UpdateItem
* @see AWS API Documentation
*/
UpdateItemResult updateItem(UpdateItemRequest updateItemRequest);
/**
* Simplified method form for invoking the UpdateItem operation.
*
* @see #updateItem(UpdateItemRequest)
*/
UpdateItemResult updateItem(String tableName, java.util.Map<String, AttributeValue> key, java.util.Map<String, AttributeValueUpdate> attributeUpdates);
/**
* Modifies the provisioned throughput settings, global secondary indexes, or DynamoDB Streams settings for a given
* table.
** This operation only applies to Version 2019.11.21 * (Current) of global tables. *
** You can only perform one of the following operations at once: *
** Modify the provisioned throughput settings of the table. *
** Remove a global secondary index from the table. *
*
* Create a new global secondary index on the table. After the index begins backfilling, you can use
* UpdateTable to perform other operations.
*
* UpdateTable is an asynchronous operation; while it is executing, the table status changes from
* ACTIVE to UPDATING. While it is UPDATING, you cannot issue another
* UpdateTable request. When the table returns to the ACTIVE state, the
* UpdateTable operation is complete.
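*
* A sketch changing provisioned throughput (the table name and capacity values are illustrative):
*
* <pre>{@code
* client.updateTable(new UpdateTableRequest()
*         .withTableName("Music")
*         .withProvisionedThroughput(new ProvisionedThroughput(10L, 5L))); // read, write capacity units
* }</pre>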
*
* @param updateTableRequest
* Represents the input of an UpdateTable operation.
* @return Result of the UpdateTable operation returned by the service.
* @throws ResourceInUseException
* The operation conflicts with the resource's availability. For example, you attempted to recreate an
* existing table, or tried to delete a table currently in the CREATING state.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
* correctly, or its status might not be ACTIVE.
* @throws LimitExceededException
* There is no limit to the number of daily on-demand backups that can be taken.
*
* For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
* include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and
* RestoreTableToPointInTime.
*
* When you are creating a table with one or more secondary indexes, you can have up to 250 such requests * running at a time. However, if the table or index specifications are complex, then DynamoDB might * temporarily reduce the number of concurrent operations. *
** When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. *
** There is a soft account quota of 2,500 tables. *
** GetRecords was called with a value of more than 1000 for the limit request parameter. *
** More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may
* result in request throttling.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.UpdateTable
* @see AWS API Documentation
*/
UpdateTableResult updateTable(UpdateTableRequest updateTableRequest);
/**
* Simplified method form for invoking the UpdateTable operation.
*
* @see #updateTable(UpdateTableRequest)
*/
UpdateTableResult updateTable(String tableName, ProvisionedThroughput provisionedThroughput);
/**
* Updates auto scaling settings on your global tables at once. *
** This operation only applies to Version 2019.11.21 * (Current) of global tables. *
* @param updateTableReplicaAutoScalingRequest
* @return Result of the UpdateTableReplicaAutoScaling operation returned by the service.
* @throws ResourceNotFoundException
* The operation tried to access a nonexistent table or index. The resource might not be specified
* correctly, or its status might not be ACTIVE.
* @throws ResourceInUseException
* The operation conflicts with the resource's availability. For example, you attempted to recreate an
* existing table, or tried to delete a table currently in the CREATING state.
* @throws LimitExceededException
* There is no limit to the number of daily on-demand backups that can be taken.
*
* For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
* include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and
* RestoreTableToPointInTime.
*
* When you are creating a table with one or more secondary indexes, you can have up to 250 such requests * running at a time. However, if the table or index specifications are complex, then DynamoDB might * temporarily reduce the number of concurrent operations. *
** When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account. *
** There is a soft account quota of 2,500 tables. *
** GetRecords was called with a value of more than 1000 for the limit request parameter. *
** More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit may
* result in request throttling.
* @throws InternalServerErrorException
* An error occurred on the server side.
* @sample AmazonDynamoDB.UpdateTableReplicaAutoScaling
* @see AWS API Documentation
*/
UpdateTableReplicaAutoScalingResult updateTableReplicaAutoScaling(UpdateTableReplicaAutoScalingRequest updateTableReplicaAutoScalingRequest);
/**
* The UpdateTimeToLive method enables or disables Time to Live (TTL) for the specified table. A
* successful UpdateTimeToLive call returns the current TimeToLiveSpecification. It can
* take up to one hour for the change to fully process. Any additional UpdateTimeToLive calls for the
* same table during this one hour duration result in a ValidationException.
*
* <p>
* TTL compares the current time in epoch time format to the time stored in the TTL attribute of an item. If the
* epoch time value stored in the attribute is less than the current time, the item is marked as expired and
* subsequently deleted.
* </p>
* <p>
* The epoch time format is the number of seconds elapsed since 12:00:00 AM January 1, 1970 UTC.
* </p>
* <p>
* DynamoDB deletes expired items on a best-effort basis to ensure availability of throughput for other data
* operations.
* </p>
* <p>
* DynamoDB typically deletes expired items within two days of expiration. The exact duration within which an item
* gets deleted after expiration is specific to the nature of the workload. Items that have expired and not been
* deleted will still show up in reads, queries, and scans.
* </p>
* <p>
* As items are deleted, they are removed from any local secondary index and global secondary index immediately in
* the same eventually consistent way as a standard delete operation.
* </p>
* <p>
* For more information, see Time To Live in the Amazon DynamoDB Developer Guide.
* </p>
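* <p>
* A minimal sketch of enabling TTL, assuming a table named "SessionData" whose items carry their expiry time
* as epoch seconds in an attribute named "ttl" (both names are illustrative, not defaults):
* </p>
* <pre>{@code
* // Hypothetical usage: enable TTL on the attribute holding each item's expiry timestamp.
* AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
* UpdateTimeToLiveResult result = client.updateTimeToLive(new UpdateTimeToLiveRequest()
*         .withTableName("SessionData")
*         .withTimeToLiveSpecification(new TimeToLiveSpecification()
*                 .withEnabled(true)
*                 .withAttributeName("ttl")));
*
* // Items would then be written with "ttl" set to an epoch-seconds value,
* // e.g. an expiry 24 hours from now:
* long expireAt = System.currentTimeMillis() / 1000L + 24L * 60L * 60L;
* }</pre>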
*
* @param updateTimeToLiveRequest
*        Represents the input of an <code>UpdateTimeToLive</code> operation.
* @return Result of the UpdateTimeToLive operation returned by the service.
* @throws ResourceInUseException
*         The operation conflicts with the resource's availability. For example, you attempted to recreate an
*         existing table, or tried to delete a table currently in the <code>CREATING</code> state.
* @throws ResourceNotFoundException
*         The operation tried to access a nonexistent table or index. The resource might not be specified
*         correctly, or its status might not be <code>ACTIVE</code>.
* @throws LimitExceededException
*         There is no limit to the number of daily on-demand backups that can be taken.
*         <p>
*         For most purposes, up to 500 simultaneous table operations are allowed per account. These operations
*         include <code>CreateTable</code>, <code>UpdateTable</code>, <code>DeleteTable</code>,
*         <code>UpdateTimeToLive</code>, <code>RestoreTableFromBackup</code>, and
*         <code>RestoreTableToPointInTime</code>.
*         <p>
*         When you are creating a table with one or more secondary indexes, you can have up to 250 such requests
*         running at a time. However, if the table or index specifications are complex, then DynamoDB might
*         temporarily reduce the number of concurrent operations.
*         <p>
*         When importing into DynamoDB, up to 50 simultaneous import table operations are allowed per account.
*         <p>
*         There is a soft account quota of 2,500 tables.
*         <p>
*         GetRecords was called with a value of more than 1000 for the <code>limit</code> request parameter.
*         <p>
*         More than 2 processes are reading from the same streams shard at the same time. Exceeding this limit
*         may result in request throttling.
* @throws InternalServerErrorException
*         An error occurred on the server side.
* @sample AmazonDynamoDB.UpdateTimeToLive
* @see AWS API Documentation
*/
UpdateTimeToLiveResult updateTimeToLive(UpdateTimeToLiveRequest updateTimeToLiveRequest);

/**
* Shuts down this client object, releasing any resources that might be held open. This is an optional method, and
* callers are not expected to call it, but can if they want to explicitly release any open resources. Once a
* client has been shut down, it should not be used to make any more requests.
*/
void shutdown();

/**
* Returns additional metadata for a previously executed successful request, typically used for debugging issues
* where a service isn't acting as expected. This data isn't considered part of the result data returned by an
* operation, so it's available through this separate, diagnostic interface.
* <p>
* Response metadata is only cached for a limited period of time, so if you need to access this extra diagnostic
* information for an executed request, you should use this method to retrieve it as soon as possible after
* executing a request.
* </p>
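* <p>
* A minimal sketch (the use of <code>ListTablesRequest</code> here is an arbitrary illustrative choice; any
* previously executed request object could be passed):
* </p>
* <pre>{@code
* // Hypothetical usage: capture diagnostic metadata immediately after a call.
* AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
* ListTablesRequest listTablesRequest = new ListTablesRequest();
* client.listTables(listTablesRequest);
* ResponseMetadata metadata = client.getCachedResponseMetadata(listTablesRequest);
* if (metadata != null) {
*     System.out.println("Request ID: " + metadata.getRequestId());
* }
* }</pre>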
*
* @param request
*        The originally executed request.
*
* @return The response metadata for the specified request, or null if none is available.
*/
ResponseMetadata getCachedResponseMetadata(AmazonWebServiceRequest request);

AmazonDynamoDBWaiters waiters();

}