/**
* Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
* SPDX-License-Identifier: Apache-2.0.
*/
#pragma once
#include <aws/core/utils/memory/stl/AWSString.h>
#include <aws/core/utils/memory/stl/AWSVector.h>
#include <aws/core/utils/memory/stl/AWSMap.h>
#include <utility>

namespace Aws
{
namespace Glue
{
namespace Model
{

  /**
   * Additional connection options for the connector.
   *
   * See Also: AWS API Reference
   */
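  // The surrounding class declaration was lost in extraction. The shell below
  // is a reconstruction sketch inferred from the surviving member functions
  // (their With* overloads return JDBCConnectorOptions&); the export macro,
  // constructors, and any JSON serialization helpers of the generated class
  // are omitted rather than guessed.
  class JDBCConnectorOptions
  {
  public: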
    /**
     * Extra condition clause to filter data from source. For example:
     *
     *   BillingCity='Mountain View'
     *
     * When using a query instead of a table name, you should validate that the
     * query works with the specified filterPredicate.
     */
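    // The accessor declarations for this option were lost in extraction. The
    // block below is a reconstruction sketch that follows the generated
    // getter/setter pattern still visible further down for jobBookmarkKeys;
    // treat the exact signatures as assumptions rather than the verbatim
    // generated code.
    inline const Aws::String& GetFilterPredicate() const{ return m_filterPredicate; }
    inline bool FilterPredicateHasBeenSet() const { return m_filterPredicateHasBeenSet; }
    inline void SetFilterPredicate(const Aws::String& value) { m_filterPredicateHasBeenSet = true; m_filterPredicate = value; }
    inline void SetFilterPredicate(Aws::String&& value) { m_filterPredicateHasBeenSet = true; m_filterPredicate = std::move(value); }
    inline void SetFilterPredicate(const char* value) { m_filterPredicateHasBeenSet = true; m_filterPredicate.assign(value); }
    inline JDBCConnectorOptions& WithFilterPredicate(const Aws::String& value) { SetFilterPredicate(value); return *this;}
    inline JDBCConnectorOptions& WithFilterPredicate(Aws::String&& value) { SetFilterPredicate(std::move(value)); return *this;}
    inline JDBCConnectorOptions& WithFilterPredicate(const char* value) { SetFilterPredicate(value); return *this;}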
    /**
     * The name of an integer column that is used for partitioning. This option
     * works only when it's included with lowerBound, upperBound, and
     * numPartitions. This option works the same way as in the Spark SQL JDBC
     * reader.
     */
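    // Reconstruction sketch for the partitionColumn accessors, following the
    // same generated string-property pattern; signatures are assumed.
    inline const Aws::String& GetPartitionColumn() const{ return m_partitionColumn; }
    inline bool PartitionColumnHasBeenSet() const { return m_partitionColumnHasBeenSet; }
    inline void SetPartitionColumn(const Aws::String& value) { m_partitionColumnHasBeenSet = true; m_partitionColumn = value; }
    inline void SetPartitionColumn(Aws::String&& value) { m_partitionColumnHasBeenSet = true; m_partitionColumn = std::move(value); }
    inline void SetPartitionColumn(const char* value) { m_partitionColumnHasBeenSet = true; m_partitionColumn.assign(value); }
    inline JDBCConnectorOptions& WithPartitionColumn(const Aws::String& value) { SetPartitionColumn(value); return *this;}
    inline JDBCConnectorOptions& WithPartitionColumn(Aws::String&& value) { SetPartitionColumn(std::move(value)); return *this;}
    inline JDBCConnectorOptions& WithPartitionColumn(const char* value) { SetPartitionColumn(value); return *this;}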
    /**
     * The minimum value of partitionColumn that is used to decide partition
     * stride.
     */
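    // Reconstruction sketch for the lowerBound accessors; the bound is
    // modeled as a 64-bit integer (long long), which is an assumption.
    inline long long GetLowerBound() const{ return m_lowerBound; }
    inline bool LowerBoundHasBeenSet() const { return m_lowerBoundHasBeenSet; }
    inline void SetLowerBound(long long value) { m_lowerBoundHasBeenSet = true; m_lowerBound = value; }
    inline JDBCConnectorOptions& WithLowerBound(long long value) { SetLowerBound(value); return *this;}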
    /**
     * The maximum value of partitionColumn that is used to decide partition
     * stride.
     */
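    // Reconstruction sketch for the upperBound accessors (same assumed
    // integer pattern as lowerBound).
    inline long long GetUpperBound() const{ return m_upperBound; }
    inline bool UpperBoundHasBeenSet() const { return m_upperBoundHasBeenSet; }
    inline void SetUpperBound(long long value) { m_upperBoundHasBeenSet = true; m_upperBound = value; }
    inline JDBCConnectorOptions& WithUpperBound(long long value) { SetUpperBound(value); return *this;}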
    /**
     * The number of partitions. This value, along with lowerBound (inclusive)
     * and upperBound (exclusive), forms partition strides for generated WHERE
     * clause expressions that are used to split the partitionColumn.
     */
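    // Reconstruction sketch for the numPartitions accessors (same assumed
    // integer pattern). As an illustration of the stride rule described above:
    // with lowerBound=0, upperBound=100, and numPartitions=4, the partition
    // stride would be (upperBound - lowerBound) / numPartitions = 25.
    inline long long GetNumPartitions() const{ return m_numPartitions; }
    inline bool NumPartitionsHasBeenSet() const { return m_numPartitionsHasBeenSet; }
    inline void SetNumPartitions(long long value) { m_numPartitionsHasBeenSet = true; m_numPartitions = value; }
    inline JDBCConnectorOptions& WithNumPartitions(long long value) { SetNumPartitions(value); return *this;}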
    /**
     * The name of the job bookmark keys on which to sort.
     */
    inline const Aws::Vector<Aws::String>& GetJobBookmarkKeys() const{ return m_jobBookmarkKeys; }
    inline bool JobBookmarkKeysHasBeenSet() const { return m_jobBookmarkKeysHasBeenSet; }
    inline void SetJobBookmarkKeys(const Aws::Vector<Aws::String>& value) { m_jobBookmarkKeysHasBeenSet = true; m_jobBookmarkKeys = value; }
    inline void SetJobBookmarkKeys(Aws::Vector<Aws::String>&& value) { m_jobBookmarkKeysHasBeenSet = true; m_jobBookmarkKeys = std::move(value); }
    inline JDBCConnectorOptions& WithJobBookmarkKeys(const Aws::Vector<Aws::String>& value) { SetJobBookmarkKeys(value); return *this;}
    inline JDBCConnectorOptions& WithJobBookmarkKeys(Aws::Vector<Aws::String>&& value) { SetJobBookmarkKeys(std::move(value)); return *this;}
    inline JDBCConnectorOptions& AddJobBookmarkKeys(const Aws::String& value) { m_jobBookmarkKeysHasBeenSet = true; m_jobBookmarkKeys.push_back(value); return *this; }
    inline JDBCConnectorOptions& AddJobBookmarkKeys(Aws::String&& value) { m_jobBookmarkKeysHasBeenSet = true; m_jobBookmarkKeys.push_back(std::move(value)); return *this; }
    inline JDBCConnectorOptions& AddJobBookmarkKeys(const char* value) { m_jobBookmarkKeysHasBeenSet = true; m_jobBookmarkKeys.push_back(value); return *this; }

    /**
     * Specifies an ascending or descending sort order.
     */
    inline const Aws::String& GetJobBookmarkKeysSortOrder() const{ return m_jobBookmarkKeysSortOrder; }
    inline bool JobBookmarkKeysSortOrderHasBeenSet() const { return m_jobBookmarkKeysSortOrderHasBeenSet; }
    inline void SetJobBookmarkKeysSortOrder(const Aws::String& value) { m_jobBookmarkKeysSortOrderHasBeenSet = true; m_jobBookmarkKeysSortOrder = value; }
    inline void SetJobBookmarkKeysSortOrder(Aws::String&& value) { m_jobBookmarkKeysSortOrderHasBeenSet = true; m_jobBookmarkKeysSortOrder = std::move(value); }
    inline void SetJobBookmarkKeysSortOrder(const char* value) { m_jobBookmarkKeysSortOrderHasBeenSet = true; m_jobBookmarkKeysSortOrder.assign(value); }
    inline JDBCConnectorOptions& WithJobBookmarkKeysSortOrder(const Aws::String& value) { SetJobBookmarkKeysSortOrder(value); return *this;}
    inline JDBCConnectorOptions& WithJobBookmarkKeysSortOrder(Aws::String&& value) { SetJobBookmarkKeysSortOrder(std::move(value)); return *this;}
    inline JDBCConnectorOptions& WithJobBookmarkKeysSortOrder(const char* value) { SetJobBookmarkKeysSortOrder(value); return *this;}

    /**
     * Custom data type mapping that builds a mapping from a JDBC data type to a
     * Glue data type. For example, the option
     * "dataTypeMapping":{"FLOAT":"STRING"} maps data fields of JDBC type FLOAT
     * into the Java String type by calling the ResultSet.getString() method of
     * the driver, and uses it to build the Glue record. The ResultSet object is
     * implemented by each driver, so the behavior is specific to the driver you
     * use. Refer to the documentation for your JDBC driver to understand how the
     * driver performs the conversions.
     */
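    // Reconstruction sketch for the dataTypeMapping accessors. The mapping is
    // modeled here as a string-to-string map purely for illustration; the
    // generated class may use dedicated enum key/value types instead.
    inline const Aws::Map<Aws::String, Aws::String>& GetDataTypeMapping() const{ return m_dataTypeMapping; }
    inline bool DataTypeMappingHasBeenSet() const { return m_dataTypeMappingHasBeenSet; }
    inline void SetDataTypeMapping(const Aws::Map<Aws::String, Aws::String>& value) { m_dataTypeMappingHasBeenSet = true; m_dataTypeMapping = value; }
    inline void SetDataTypeMapping(Aws::Map<Aws::String, Aws::String>&& value) { m_dataTypeMappingHasBeenSet = true; m_dataTypeMapping = std::move(value); }
    inline JDBCConnectorOptions& WithDataTypeMapping(const Aws::Map<Aws::String, Aws::String>& value) { SetDataTypeMapping(value); return *this;}
    inline JDBCConnectorOptions& WithDataTypeMapping(Aws::Map<Aws::String, Aws::String>&& value) { SetDataTypeMapping(std::move(value)); return *this;}
    inline JDBCConnectorOptions& AddDataTypeMapping(const Aws::String& key, const Aws::String& value) { m_dataTypeMappingHasBeenSet = true; m_dataTypeMapping.emplace(key, value); return *this; }

  private:

    // Member declarations reconstructed to back the accessors above. Only the
    // job bookmark member names appear in the surviving code; the remaining
    // names and types are assumptions that mirror the accessor names.
    Aws::String m_filterPredicate;
    bool m_filterPredicateHasBeenSet = false;

    Aws::String m_partitionColumn;
    bool m_partitionColumnHasBeenSet = false;

    long long m_lowerBound = 0;
    bool m_lowerBoundHasBeenSet = false;

    long long m_upperBound = 0;
    bool m_upperBoundHasBeenSet = false;

    long long m_numPartitions = 0;
    bool m_numPartitionsHasBeenSet = false;

    Aws::Vector<Aws::String> m_jobBookmarkKeys;
    bool m_jobBookmarkKeysHasBeenSet = false;

    Aws::String m_jobBookmarkKeysSortOrder;
    bool m_jobBookmarkKeysSortOrderHasBeenSet = false;

    Aws::Map<Aws::String, Aws::String> m_dataTypeMapping;
    bool m_dataTypeMappingHasBeenSet = false;
  };

} // namespace Model
} // namespace Glue
} // namespace Aws

// Hypothetical usage sketch (not part of the generated header): a partitioned
// JDBC read could be configured with the builder-style With* methods, e.g.
//
//   JDBCConnectorOptions options = JDBCConnectorOptions()
//       .WithFilterPredicate("BillingCity='Mountain View'")
//       .WithPartitionColumn("id")
//       .WithLowerBound(0)
//       .WithUpperBound(1000000)
//       .WithNumPartitions(10);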