/**
 * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
 * SPDX-License-Identifier: Apache-2.0.
 */
#pragma once
#include <aws/glue/Glue_EXPORTS.h>
#include <aws/core/utils/memory/stl/AWSString.h>
#include <aws/core/utils/DateTime.h>
#include <utility>

namespace Aws
{
namespace Utils
{
namespace Json
{
  class JsonValue;
  class JsonView;
} // namespace Json
} // namespace Utils
namespace Glue
{
namespace Model
{

  /**
   * <p>Additional options for streaming.</p><p><h3>See Also:</h3>   <a
   * href="http://docs.aws.amazon.com/goto/WebAPI/glue-2017-03-31/KafkaStreamingSourceOptions">AWS
   * API Reference</a></p>
   */
  class KafkaStreamingSourceOptions
  {
  public:
    AWS_GLUE_API KafkaStreamingSourceOptions();
    AWS_GLUE_API KafkaStreamingSourceOptions(Aws::Utils::Json::JsonView jsonValue);
    AWS_GLUE_API KafkaStreamingSourceOptions& operator=(Aws::Utils::Json::JsonView jsonValue);
    AWS_GLUE_API Aws::Utils::Json::JsonValue Jsonize() const;
    ///@{
    /**
     * <p>A list of bootstrap server URLs, for example, as
     * <code>b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094</code>. This
     * option must be specified in the API call or defined in the table metadata in
     * the Data Catalog.</p>
     */
    inline const Aws::String& GetBootstrapServers() const{ return m_bootstrapServers; }
    inline bool BootstrapServersHasBeenSet() const { return m_bootstrapServersHasBeenSet; }
    inline void SetBootstrapServers(const Aws::String& value) { m_bootstrapServersHasBeenSet = true; m_bootstrapServers = value; }
    inline void SetBootstrapServers(Aws::String&& value) { m_bootstrapServersHasBeenSet = true; m_bootstrapServers = std::move(value); }
    inline void SetBootstrapServers(const char* value) { m_bootstrapServersHasBeenSet = true; m_bootstrapServers.assign(value); }
    inline KafkaStreamingSourceOptions& WithBootstrapServers(const Aws::String& value) { SetBootstrapServers(value); return *this;}
    inline KafkaStreamingSourceOptions& WithBootstrapServers(Aws::String&& value) { SetBootstrapServers(std::move(value)); return *this;}
    inline KafkaStreamingSourceOptions& WithBootstrapServers(const char* value) { SetBootstrapServers(value); return *this;}
    ///@}
    ///@{
    /**
     * <p>The protocol used to communicate with brokers. The possible values are
     * <code>"SSL"</code> or <code>"PLAINTEXT"</code>.</p>
     */
    inline const Aws::String& GetSecurityProtocol() const{ return m_securityProtocol; }
    inline bool SecurityProtocolHasBeenSet() const { return m_securityProtocolHasBeenSet; }
    inline void SetSecurityProtocol(const Aws::String& value) { m_securityProtocolHasBeenSet = true; m_securityProtocol = value; }
    inline void SetSecurityProtocol(Aws::String&& value) { m_securityProtocolHasBeenSet = true; m_securityProtocol = std::move(value); }
    inline void SetSecurityProtocol(const char* value) { m_securityProtocolHasBeenSet = true; m_securityProtocol.assign(value); }
    inline KafkaStreamingSourceOptions& WithSecurityProtocol(const Aws::String& value) { SetSecurityProtocol(value); return *this;}
    inline KafkaStreamingSourceOptions& WithSecurityProtocol(Aws::String&& value) { SetSecurityProtocol(std::move(value)); return *this;}
    inline KafkaStreamingSourceOptions& WithSecurityProtocol(const char* value) { SetSecurityProtocol(value); return *this;}
    ///@}
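    /*
     * Illustrative sketch (not part of the generated API): wiring up the broker
     * connection options above. The endpoint string is the placeholder from the
     * documentation, not a real broker. The With* accessors return *this, so the
     * calls chain:
     *
     *   KafkaStreamingSourceOptions options;
     *   options.WithBootstrapServers(
     *              "b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094")
     *          .WithSecurityProtocol("SSL");
     */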
    ///@{
    /**
     * <p>The name of the connection.</p>
     */
    inline const Aws::String& GetConnectionName() const{ return m_connectionName; }
    inline bool ConnectionNameHasBeenSet() const { return m_connectionNameHasBeenSet; }
    inline void SetConnectionName(const Aws::String& value) { m_connectionNameHasBeenSet = true; m_connectionName = value; }
    inline void SetConnectionName(Aws::String&& value) { m_connectionNameHasBeenSet = true; m_connectionName = std::move(value); }
    inline void SetConnectionName(const char* value) { m_connectionNameHasBeenSet = true; m_connectionName.assign(value); }
    inline KafkaStreamingSourceOptions& WithConnectionName(const Aws::String& value) { SetConnectionName(value); return *this;}
    inline KafkaStreamingSourceOptions& WithConnectionName(Aws::String&& value) { SetConnectionName(std::move(value)); return *this;}
    inline KafkaStreamingSourceOptions& WithConnectionName(const char* value) { SetConnectionName(value); return *this;}
    ///@}
    ///@{
    /**
     * <p>The topic name as specified in Apache Kafka. You must specify at least one
     * of <code>"topicName"</code>, <code>"assign"</code> or
     * <code>"subscribePattern"</code>.</p>
     */
    inline const Aws::String& GetTopicName() const{ return m_topicName; }
    inline bool TopicNameHasBeenSet() const { return m_topicNameHasBeenSet; }
    inline void SetTopicName(const Aws::String& value) { m_topicNameHasBeenSet = true; m_topicName = value; }
    inline void SetTopicName(Aws::String&& value) { m_topicNameHasBeenSet = true; m_topicName = std::move(value); }
    inline void SetTopicName(const char* value) { m_topicNameHasBeenSet = true; m_topicName.assign(value); }
    inline KafkaStreamingSourceOptions& WithTopicName(const Aws::String& value) { SetTopicName(value); return *this;}
    inline KafkaStreamingSourceOptions& WithTopicName(Aws::String&& value) { SetTopicName(std::move(value)); return *this;}
    inline KafkaStreamingSourceOptions& WithTopicName(const char* value) { SetTopicName(value); return *this;}
    ///@}
    ///@{
    /**
     * <p>The specific <code>TopicPartitions</code> to consume. You must specify at
     * least one of <code>"topicName"</code>, <code>"assign"</code> or
     * <code>"subscribePattern"</code>.</p>
     */
    inline const Aws::String& GetAssign() const{ return m_assign; }
    inline bool AssignHasBeenSet() const { return m_assignHasBeenSet; }
    inline void SetAssign(const Aws::String& value) { m_assignHasBeenSet = true; m_assign = value; }
    inline void SetAssign(Aws::String&& value) { m_assignHasBeenSet = true; m_assign = std::move(value); }
    inline void SetAssign(const char* value) { m_assignHasBeenSet = true; m_assign.assign(value); }
    inline KafkaStreamingSourceOptions& WithAssign(const Aws::String& value) { SetAssign(value); return *this;}
    inline KafkaStreamingSourceOptions& WithAssign(Aws::String&& value) { SetAssign(std::move(value)); return *this;}
    inline KafkaStreamingSourceOptions& WithAssign(const char* value) { SetAssign(value); return *this;}
    ///@}
    ///@{
    /**
     * <p>A Java regex string that identifies the topic list to subscribe to. You
     * must specify at least one of <code>"topicName"</code>, <code>"assign"</code>
     * or <code>"subscribePattern"</code>.</p>
     */
    inline const Aws::String& GetSubscribePattern() const{ return m_subscribePattern; }
    inline bool SubscribePatternHasBeenSet() const { return m_subscribePatternHasBeenSet; }
    inline void SetSubscribePattern(const Aws::String& value) { m_subscribePatternHasBeenSet = true; m_subscribePattern = value; }
    inline void SetSubscribePattern(Aws::String&& value) { m_subscribePatternHasBeenSet = true; m_subscribePattern = std::move(value); }
    inline void SetSubscribePattern(const char* value) { m_subscribePatternHasBeenSet = true; m_subscribePattern.assign(value); }
    inline KafkaStreamingSourceOptions& WithSubscribePattern(const Aws::String& value) { SetSubscribePattern(value); return *this;}
    inline KafkaStreamingSourceOptions& WithSubscribePattern(Aws::String&& value) { SetSubscribePattern(std::move(value)); return *this;}
    inline KafkaStreamingSourceOptions& WithSubscribePattern(const char* value) { SetSubscribePattern(value); return *this;}
    ///@}
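    /*
     * Illustrative sketch: the three subscription options above are alternatives,
     * so set the one that fits your use case. The topic names and the JSON shape
     * for "assign" (topic -> partition list, as in the Spark Kafka source) are
     * hypothetical values, not defaults.
     *
     *   // By topic name:
     *   options.SetTopicName("my-topic");
     *   // ...or by explicit partition assignment (JSON string):
     *   options.SetAssign("{\"my-topic\": [0, 1, 2]}");
     *   // ...or by Java regex over topic names:
     *   options.SetSubscribePattern("my-topic-.*");
     */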
    ///@{
    /**
     * <p>An optional classification.</p>
     */
    inline const Aws::String& GetClassification() const{ return m_classification; }
    inline bool ClassificationHasBeenSet() const { return m_classificationHasBeenSet; }
    inline void SetClassification(const Aws::String& value) { m_classificationHasBeenSet = true; m_classification = value; }
    inline void SetClassification(Aws::String&& value) { m_classificationHasBeenSet = true; m_classification = std::move(value); }
    inline void SetClassification(const char* value) { m_classificationHasBeenSet = true; m_classification.assign(value); }
    inline KafkaStreamingSourceOptions& WithClassification(const Aws::String& value) { SetClassification(value); return *this;}
    inline KafkaStreamingSourceOptions& WithClassification(Aws::String&& value) { SetClassification(std::move(value)); return *this;}
    inline KafkaStreamingSourceOptions& WithClassification(const char* value) { SetClassification(value); return *this;}
    ///@}

    ///@{
    /**
     * <p>Specifies the delimiter character.</p>
     */
    inline const Aws::String& GetDelimiter() const{ return m_delimiter; }
    inline bool DelimiterHasBeenSet() const { return m_delimiterHasBeenSet; }
    inline void SetDelimiter(const Aws::String& value) { m_delimiterHasBeenSet = true; m_delimiter = value; }
    inline void SetDelimiter(Aws::String&& value) { m_delimiterHasBeenSet = true; m_delimiter = std::move(value); }
    inline void SetDelimiter(const char* value) { m_delimiterHasBeenSet = true; m_delimiter.assign(value); }
    inline KafkaStreamingSourceOptions& WithDelimiter(const Aws::String& value) { SetDelimiter(value); return *this;}
    inline KafkaStreamingSourceOptions& WithDelimiter(Aws::String&& value) { SetDelimiter(std::move(value)); return *this;}
    inline KafkaStreamingSourceOptions& WithDelimiter(const char* value) { SetDelimiter(value); return *this;}
    ///@}
    ///@{
    /**
     * <p>The starting position in the Kafka topic to read data from. The possible
     * values are <code>"earliest"</code> or <code>"latest"</code>. The default value
     * is <code>"latest"</code>.</p>
     */
    inline const Aws::String& GetStartingOffsets() const{ return m_startingOffsets; }
    inline bool StartingOffsetsHasBeenSet() const { return m_startingOffsetsHasBeenSet; }
    inline void SetStartingOffsets(const Aws::String& value) { m_startingOffsetsHasBeenSet = true; m_startingOffsets = value; }
    inline void SetStartingOffsets(Aws::String&& value) { m_startingOffsetsHasBeenSet = true; m_startingOffsets = std::move(value); }
    inline void SetStartingOffsets(const char* value) { m_startingOffsetsHasBeenSet = true; m_startingOffsets.assign(value); }
    inline KafkaStreamingSourceOptions& WithStartingOffsets(const Aws::String& value) { SetStartingOffsets(value); return *this;}
    inline KafkaStreamingSourceOptions& WithStartingOffsets(Aws::String&& value) { SetStartingOffsets(std::move(value)); return *this;}
    inline KafkaStreamingSourceOptions& WithStartingOffsets(const char* value) { SetStartingOffsets(value); return *this;}
    ///@}
    ///@{
    /**
     * <p>The end point when a batch query is ended. Possible values are either
     * <code>"latest"</code> or a JSON string that specifies an ending offset for
     * each <code>TopicPartition</code>.</p>
     */
    inline const Aws::String& GetEndingOffsets() const{ return m_endingOffsets; }
    inline bool EndingOffsetsHasBeenSet() const { return m_endingOffsetsHasBeenSet; }
    inline void SetEndingOffsets(const Aws::String& value) { m_endingOffsetsHasBeenSet = true; m_endingOffsets = value; }
    inline void SetEndingOffsets(Aws::String&& value) { m_endingOffsetsHasBeenSet = true; m_endingOffsets = std::move(value); }
    inline void SetEndingOffsets(const char* value) { m_endingOffsetsHasBeenSet = true; m_endingOffsets.assign(value); }
    inline KafkaStreamingSourceOptions& WithEndingOffsets(const Aws::String& value) { SetEndingOffsets(value); return *this;}
    inline KafkaStreamingSourceOptions& WithEndingOffsets(Aws::String&& value) { SetEndingOffsets(std::move(value)); return *this;}
    inline KafkaStreamingSourceOptions& WithEndingOffsets(const char* value) { SetEndingOffsets(value); return *this;}
    ///@}
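    /*
     * Illustrative sketch: bounding a batch read with the offset options above.
     * The per-partition JSON shape (topic -> {partition: offset}) follows the
     * Spark Kafka source convention; the topic and offsets are hypothetical.
     *
     *   options.SetStartingOffsets("earliest");
     *   options.SetEndingOffsets("{\"my-topic\": {\"0\": 500, \"1\": 500}}");
     */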
    ///@{
    /**
     * <p>The timeout in milliseconds to poll data from Kafka in Spark job executors.
     * The default value is <code>512</code>.</p>
     */
    inline long long GetPollTimeoutMs() const{ return m_pollTimeoutMs; }
    inline bool PollTimeoutMsHasBeenSet() const { return m_pollTimeoutMsHasBeenSet; }
    inline void SetPollTimeoutMs(long long value) { m_pollTimeoutMsHasBeenSet = true; m_pollTimeoutMs = value; }
    inline KafkaStreamingSourceOptions& WithPollTimeoutMs(long long value) { SetPollTimeoutMs(value); return *this;}
    ///@}
    ///@{
    /**
     * <p>The number of times to retry before failing to fetch Kafka offsets. The
     * default value is <code>3</code>.</p>
     */
    inline int GetNumRetries() const{ return m_numRetries; }
    inline bool NumRetriesHasBeenSet() const { return m_numRetriesHasBeenSet; }
    inline void SetNumRetries(int value) { m_numRetriesHasBeenSet = true; m_numRetries = value; }
    inline KafkaStreamingSourceOptions& WithNumRetries(int value) { SetNumRetries(value); return *this;}
    ///@}
    ///@{
    /**
     * <p>The time in milliseconds to wait before retrying to fetch Kafka offsets.
     * The default value is <code>10</code>.</p>
     */
    inline long long GetRetryIntervalMs() const{ return m_retryIntervalMs; }
    inline bool RetryIntervalMsHasBeenSet() const { return m_retryIntervalMsHasBeenSet; }
    inline void SetRetryIntervalMs(long long value) { m_retryIntervalMsHasBeenSet = true; m_retryIntervalMs = value; }
    inline KafkaStreamingSourceOptions& WithRetryIntervalMs(long long value) { SetRetryIntervalMs(value); return *this;}
    ///@}
    ///@{
    /**
     * <p>The rate limit on the maximum number of offsets that are processed per
     * trigger interval. The specified total number of offsets is proportionally
     * split across <code>topicPartitions</code> of different volumes. The default
     * value is null, which means that the consumer reads all offsets until the known
     * latest offset.</p>
     */
    inline long long GetMaxOffsetsPerTrigger() const{ return m_maxOffsetsPerTrigger; }
    inline bool MaxOffsetsPerTriggerHasBeenSet() const { return m_maxOffsetsPerTriggerHasBeenSet; }
    inline void SetMaxOffsetsPerTrigger(long long value) { m_maxOffsetsPerTriggerHasBeenSet = true; m_maxOffsetsPerTrigger = value; }
    inline KafkaStreamingSourceOptions& WithMaxOffsetsPerTrigger(long long value) { SetMaxOffsetsPerTrigger(value); return *this;}
    ///@}
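    /*
     * Illustrative sketch: throughput and retry tuning with the numeric options
     * above. The values shown are arbitrary examples, not recommendations.
     *
     *   options.WithPollTimeoutMs(1000)
     *          .WithNumRetries(5)
     *          .WithRetryIntervalMs(100)
     *          .WithMaxOffsetsPerTrigger(100000);
     */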
    ///@{
    /**
     * <p>The desired minimum number of partitions to read from Kafka. The default
     * value is null, which means that the number of Spark partitions is equal to the
     * number of Kafka partitions.</p>
     */
    inline int GetMinPartitions() const{ return m_minPartitions; }
    inline bool MinPartitionsHasBeenSet() const { return m_minPartitionsHasBeenSet; }
    inline void SetMinPartitions(int value) { m_minPartitionsHasBeenSet = true; m_minPartitions = value; }
    inline KafkaStreamingSourceOptions& WithMinPartitions(int value) { SetMinPartitions(value); return *this;}
    ///@}
Whether to include the Kafka headers. When the option is set to "true", the
* data output will contain an additional column named
* "glue_streaming_kafka_headers" with type Array[Struct(key: String, value:
* String)]
. The default value is "false". This option is available in Glue
* version 3.0 or later only.
Whether to include the Kafka headers. When the option is set to "true", the
* data output will contain an additional column named
* "glue_streaming_kafka_headers" with type Array[Struct(key: String, value:
* String)]
. The default value is "false". This option is available in Glue
* version 3.0 or later only.
Whether to include the Kafka headers. When the option is set to "true", the
* data output will contain an additional column named
* "glue_streaming_kafka_headers" with type Array[Struct(key: String, value:
* String)]
. The default value is "false". This option is available in Glue
* version 3.0 or later only.
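    /*
     * Illustrative sketch: parallelism and header capture. Note the differing
     * types: MinPartitions is a plain int, IncludeHeaders a bool. The partition
     * count is an arbitrary example.
     *
     *   options.WithMinPartitions(12)
     *          .WithIncludeHeaders(true);
     */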
    ///@{
    /**
     * <p>When this option is set to 'true', the data output will contain an
     * additional column named "__src_timestamp" that indicates the time when the
     * corresponding record was received by the topic. The default value is 'false'.
     * This option is supported in Glue version 4.0 or later.</p>
     */
    inline const Aws::String& GetAddRecordTimestamp() const{ return m_addRecordTimestamp; }
    inline bool AddRecordTimestampHasBeenSet() const { return m_addRecordTimestampHasBeenSet; }
    inline void SetAddRecordTimestamp(const Aws::String& value) { m_addRecordTimestampHasBeenSet = true; m_addRecordTimestamp = value; }
    inline void SetAddRecordTimestamp(Aws::String&& value) { m_addRecordTimestampHasBeenSet = true; m_addRecordTimestamp = std::move(value); }
    inline void SetAddRecordTimestamp(const char* value) { m_addRecordTimestampHasBeenSet = true; m_addRecordTimestamp.assign(value); }
    inline KafkaStreamingSourceOptions& WithAddRecordTimestamp(const Aws::String& value) { SetAddRecordTimestamp(value); return *this;}
    inline KafkaStreamingSourceOptions& WithAddRecordTimestamp(Aws::String&& value) { SetAddRecordTimestamp(std::move(value)); return *this;}
    inline KafkaStreamingSourceOptions& WithAddRecordTimestamp(const char* value) { SetAddRecordTimestamp(value); return *this;}
    ///@}

    ///@{
    /**
     * <p>When this option is set to 'true', for each batch, it will emit to
     * CloudWatch the metrics for the duration between the oldest record received by
     * the topic and the time it arrives in Glue. The metric's name is
     * "glue.driver.streaming.maxConsumerLagInMs". The default value is 'false'. This
     * option is supported in Glue version 4.0 or later.</p>
     */
    inline const Aws::String& GetEmitConsumerLagMetrics() const{ return m_emitConsumerLagMetrics; }
    inline bool EmitConsumerLagMetricsHasBeenSet() const { return m_emitConsumerLagMetricsHasBeenSet; }
    inline void SetEmitConsumerLagMetrics(const Aws::String& value) { m_emitConsumerLagMetricsHasBeenSet = true; m_emitConsumerLagMetrics = value; }
    inline void SetEmitConsumerLagMetrics(Aws::String&& value) { m_emitConsumerLagMetricsHasBeenSet = true; m_emitConsumerLagMetrics = std::move(value); }
    inline void SetEmitConsumerLagMetrics(const char* value) { m_emitConsumerLagMetricsHasBeenSet = true; m_emitConsumerLagMetrics.assign(value); }
    inline KafkaStreamingSourceOptions& WithEmitConsumerLagMetrics(const Aws::String& value) { SetEmitConsumerLagMetrics(value); return *this;}
    inline KafkaStreamingSourceOptions& WithEmitConsumerLagMetrics(Aws::String&& value) { SetEmitConsumerLagMetrics(std::move(value)); return *this;}
    inline KafkaStreamingSourceOptions& WithEmitConsumerLagMetrics(const char* value) { SetEmitConsumerLagMetrics(value); return *this;}
    ///@}
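    /*
     * Illustrative sketch: the Glue 4.0+ diagnostics options above take the
     * string values 'true'/'false' rather than booleans, matching their
     * Aws::String accessors:
     *
     *   options.WithAddRecordTimestamp("true")
     *          .WithEmitConsumerLagMetrics("true");
     */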
    ///@{
    /**
     * <p>The timestamp of the record in the Kafka topic to start reading data from.
     * The possible values are a timestamp string in UTC format of the pattern
     * <code>yyyy-mm-ddTHH:MM:SSZ</code> (where Z represents a UTC timezone offset
     * with a +/-. For example: "2023-04-04T08:00:00+08:00").</p> <p>Only one of
     * <code>StartingTimestamp</code> or <code>StartingOffsets</code> must be
     * set.</p>
     */
    inline const Aws::Utils::DateTime& GetStartingTimestamp() const{ return m_startingTimestamp; }
    inline bool StartingTimestampHasBeenSet() const { return m_startingTimestampHasBeenSet; }
    inline void SetStartingTimestamp(const Aws::Utils::DateTime& value) { m_startingTimestampHasBeenSet = true; m_startingTimestamp = value; }
    inline void SetStartingTimestamp(Aws::Utils::DateTime&& value) { m_startingTimestampHasBeenSet = true; m_startingTimestamp = std::move(value); }
    inline KafkaStreamingSourceOptions& WithStartingTimestamp(const Aws::Utils::DateTime& value) { SetStartingTimestamp(value); return *this;}
    inline KafkaStreamingSourceOptions& WithStartingTimestamp(Aws::Utils::DateTime&& value) { SetStartingTimestamp(std::move(value)); return *this;}
    ///@}

  private:

    Aws::String m_bootstrapServers;
    bool m_bootstrapServersHasBeenSet = false;

    Aws::String m_securityProtocol;
    bool m_securityProtocolHasBeenSet = false;

    Aws::String m_connectionName;
    bool m_connectionNameHasBeenSet = false;

    Aws::String m_topicName;
    bool m_topicNameHasBeenSet = false;

    Aws::String m_assign;
    bool m_assignHasBeenSet = false;

    Aws::String m_subscribePattern;
    bool m_subscribePatternHasBeenSet = false;

    Aws::String m_classification;
    bool m_classificationHasBeenSet = false;

    Aws::String m_delimiter;
    bool m_delimiterHasBeenSet = false;

    Aws::String m_startingOffsets;
    bool m_startingOffsetsHasBeenSet = false;

    Aws::String m_endingOffsets;
    bool m_endingOffsetsHasBeenSet = false;

    long long m_pollTimeoutMs;
    bool m_pollTimeoutMsHasBeenSet = false;

    int m_numRetries;
    bool m_numRetriesHasBeenSet = false;

    long long m_retryIntervalMs;
    bool m_retryIntervalMsHasBeenSet = false;

    long long m_maxOffsetsPerTrigger;
    bool m_maxOffsetsPerTriggerHasBeenSet = false;

    int m_minPartitions;
    bool m_minPartitionsHasBeenSet = false;

    bool m_includeHeaders;
    bool m_includeHeadersHasBeenSet = false;

    Aws::String m_addRecordTimestamp;
    bool m_addRecordTimestampHasBeenSet = false;

    Aws::String m_emitConsumerLagMetrics;
    bool m_emitConsumerLagMetricsHasBeenSet = false;

    Aws::Utils::DateTime m_startingTimestamp;
    bool m_startingTimestampHasBeenSet = false;
  };

} // namespace Model
} // namespace Glue
} // namespace Aws
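/*
 * Illustrative end-to-end sketch (not part of the generated file): building the
 * options and inspecting the serialized JSON. Jsonize() is declared above;
 * View() and WriteReadable() are the standard accessors on
 * Aws::Utils::Json::JsonValue / JsonView. The connection and topic names are
 * hypothetical, and a real application would typically also call Aws::InitAPI
 * before using SDK types.
 *
 *   #include <aws/glue/model/KafkaStreamingSourceOptions.h>
 *   #include <iostream>
 *
 *   int main()
 *   {
 *     Aws::Glue::Model::KafkaStreamingSourceOptions options;
 *     options.WithConnectionName("my-kafka-connection")
 *            .WithTopicName("my-topic")
 *            .WithSecurityProtocol("SSL")
 *            .WithStartingOffsets("earliest");
 *     std::cout << options.Jsonize().View().WriteReadable() << std::endl;
 *     return 0;
 *   }
 */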