import logging

logger = logging.getLogger(__name__)


def register_table_methods(base_classes, **kwargs):
    base_classes.insert(0, TableResource)


# This class can be used to add any additional methods we want
# onto a table resource.  To avoid creating a new base class for
# every method, we can just update this class instead.  Just be
# sure to move the bulk of the actual method implementation to
# another class.
class TableResource(object):
    def __init__(self, *args, **kwargs):
        super(TableResource, self).__init__(*args, **kwargs)

    def batch_writer(self, overwrite_by_pkeys=None):
        """Create a batch writer object.

        This method creates a context manager for writing
        objects to Amazon DynamoDB in batch.

        The batch writer will automatically handle buffering and sending items
        in batches.  In addition, the batch writer will also automatically
        handle any unprocessed items and resend them as needed.  All you need
        to do is call ``put_item`` for any items you want to add, and
        ``delete_item`` for any items you want to delete.

        Example usage::

            with table.batch_writer() as batch:
                for _ in range(1000000):
                    batch.put_item(Item={'HashKey': '...',
                                         'Otherstuff': '...'})
                # You can also delete_items in a batch.
                batch.delete_item(Key={'HashKey': 'SomeHashKey'})

        :type overwrite_by_pkeys: list(string)
        :param overwrite_by_pkeys: De-duplicate request items in the buffer
            if they match a new request item on the specified primary keys,
            i.e. ``["partition_key1", "sort_key2", "sort_key3"]``

        """
        return BatchWriter(self.name, self.meta.client,
                           overwrite_by_pkeys=overwrite_by_pkeys)


class BatchWriter(object):
    """Automatically handle batch writes to DynamoDB for a single table."""

    def __init__(self, table_name, client, flush_amount=25,
                 overwrite_by_pkeys=None):
        """

        :type table_name: str
        :param table_name: The name of the table.  The class handles
            batch writes to a single table.

        :type client: ``botocore.client.Client``
        :param client: A botocore client.  Note this client
            **must** have the dynamodb customizations applied
            to it for transforming AttributeValues into the wire protocol.
            What this means in practice is that you need to use a client
            that comes from a DynamoDB resource if you're going to instantiate
            this class directly, i.e.
            ``boto3.resource('dynamodb').Table('foo').meta.client``.

        :type flush_amount: int
        :param flush_amount: The number of items to keep in
            a local buffer before sending a batch_write_item
            request to DynamoDB.

        :type overwrite_by_pkeys: list(string)
        :param overwrite_by_pkeys: De-duplicate request items in the buffer
            if they match a new request item on the specified primary keys,
            i.e. ``["partition_key1", "sort_key2", "sort_key3"]``

        """
        self._table_name = table_name
        self._client = client
        self._items_buffer = []
        self._flush_amount = flush_amount
        self._overwrite_by_pkeys = overwrite_by_pkeys

    def put_item(self, Item):
        self._add_request_and_process({'PutRequest': {'Item': Item}})

    def delete_item(self, Key):
        self._add_request_and_process({'DeleteRequest': {'Key': Key}})

    def _add_request_and_process(self, request):
        if self._overwrite_by_pkeys:
            self._remove_dup_pkeys_request_if_any(request)
        self._items_buffer.append(request)
        self._flush_if_needed()

    def _remove_dup_pkeys_request_if_any(self, request):
        pkey_values_new = self._extract_pkey_values(request)
        for item in self._items_buffer:
            if self._extract_pkey_values(item) == pkey_values_new:
                self._items_buffer.remove(item)
                logger.debug("With overwrite_by_pkeys enabled, skipping "
                             "request:%s", item)

    def _extract_pkey_values(self, request):
        if request.get('PutRequest'):
            return [request['PutRequest']['Item'][key]
                    for key in self._overwrite_by_pkeys]
        elif request.get('DeleteRequest'):
            return [request['DeleteRequest']['Key'][key]
                    for key in self._overwrite_by_pkeys]
        return None

    def _flush_if_needed(self):
        if len(self._items_buffer) >= self._flush_amount:
            self._flush()

    def _flush(self):
        items_to_send = self._items_buffer[:self._flush_amount]
        self._items_buffer = self._items_buffer[self._flush_amount:]
        response = self._client.batch_write_item(
            RequestItems={self._table_name: items_to_send})
        unprocessed_items = response['UnprocessedItems']

        if unprocessed_items and unprocessed_items[self._table_name]:
            # Any unprocessed_items are immediately added to the
            # next batch we send.
            self._items_buffer.extend(unprocessed_items[self._table_name])
        else:
            self._items_buffer = []
        logger.debug("Batch write sent %s, unprocessed: %s",
                     len(items_to_send), len(self._items_buffer))

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, tb):
        # When leaving the ``with`` block, keep flushing until the
        # buffer (including any resent unprocessed items) is empty.
        while self._items_buffer:
            self._flush()
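The buffering rule above (send a ``batch_write_item`` request once ``flush_amount`` items have accumulated, then carry the remainder forward) can be sketched in isolation. The snippet below is an illustrative re-implementation, not part of boto3: ``FakeDynamoClient`` and ``buffered_put`` are hypothetical names, and the fake client always reports no unprocessed items so only the flush arithmetic is exercised.

```python
# Illustrative sketch (not boto3 code): how BatchWriter-style buffering
# batches puts. The fake client records each batch_write_item call and
# pretends DynamoDB processed everything.

class FakeDynamoClient:
    def __init__(self):
        self.calls = []

    def batch_write_item(self, RequestItems):
        # Mimic the real response shape with nothing left unprocessed.
        self.calls.append(RequestItems)
        return {'UnprocessedItems': {}}


def buffered_put(client, table, items, flush_amount=25):
    """Buffer PutRequest entries and flush in flush_amount-sized batches."""
    buffer = []
    for item in items:
        buffer.append({'PutRequest': {'Item': item}})
        if len(buffer) >= flush_amount:
            client.batch_write_item(RequestItems={table: buffer[:flush_amount]})
            buffer = buffer[flush_amount:]
    if buffer:  # final partial batch, like BatchWriter.__exit__
        client.batch_write_item(RequestItems={table: buffer})


client = FakeDynamoClient()
buffered_put(client, 'mytable', [{'pk': {'S': str(i)}} for i in range(60)])
print([len(call['mytable']) for call in client.calls])  # -> [25, 25, 10]
```

With 60 items and the default ``flush_amount`` of 25, two full batches go out during buffering and the trailing 10 items are flushed at the end, which is exactly the pattern ``BatchWriter`` produces when used as a context manager.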