& context = nullptr) const
{
return SubmitAsync(&BatchClient::CancelJob, request, handler, context);
}
/**
 * Creates a Batch compute environment. You can create MANAGED or UNMANAGED
 * compute environments. MANAGED compute environments can use Amazon EC2 or
 * Fargate resources. UNMANAGED compute environments can only use EC2 resources.
 *
 * In a managed compute environment, Batch manages the capacity and instance
 * types of the compute resources within the environment, based on the compute
 * resource specification that you define or the launch template that you
 * specify when you create the compute environment. You can choose either to use
 * EC2 On-Demand Instances and EC2 Spot Instances, or to use Fargate and Fargate
 * Spot capacity in your managed compute environment. You can optionally set a
 * maximum price so that Spot Instances only launch when the Spot Instance price
 * is less than a specified percentage of the On-Demand price.
 *
 * Multi-node parallel jobs aren't supported on Spot Instances.
 *
 * In an unmanaged compute environment, you can manage your own EC2 compute
 * resources and have flexibility with how you configure your compute resources.
 * For example, you can use custom AMIs. However, you must verify that each of
 * your AMIs meets the Amazon ECS container instance AMI specification. For more
 * information, see "Container instance AMIs" in the Amazon Elastic Container
 * Service Developer Guide. After you create your unmanaged compute environment,
 * you can use the DescribeComputeEnvironments operation to find the Amazon ECS
 * cluster that's associated with it. Then, launch your container instances into
 * that Amazon ECS cluster. For more information, see "Launching an Amazon ECS
 * container instance" in the Amazon Elastic Container Service Developer Guide.
 *
 * To create a compute environment that uses EKS resources, the caller must have
 * permissions to call eks:DescribeCluster.
 *
 * Batch doesn't automatically upgrade the AMIs in a compute environment after
 * it's created. For example, it doesn't update the AMIs in your compute
 * environment when a newer version of the Amazon ECS optimized AMI is
 * available. You're responsible for the management of the guest operating
 * system. This includes any updates and security patches. You're also
 * responsible for any additional application software or utilities that you
 * install on the compute resources. There are two ways to use a new AMI for
 * your Batch jobs. The original method is to complete these steps:
 *
 *   - Create a new compute environment with the new AMI.
 *   - Add the new compute environment to an existing job queue.
 *   - Remove the earlier compute environment from your job queue.
 *   - Delete the earlier compute environment.
 *
 * In April 2022, Batch added enhanced support for updating compute
 * environments. For more information, see "Updating compute environments". To
 * use the enhanced updating of compute environments to update AMIs, follow
 * these rules:
 *
 *   - Either don't set the service role (serviceRole) parameter or set it to
 *     the AWSBatchServiceRole service-linked role.
 *   - Set the allocation strategy (allocationStrategy) parameter to
 *     BEST_FIT_PROGRESSIVE or SPOT_CAPACITY_OPTIMIZED.
 *   - Set the update to latest image version (updateToLatestImageVersion)
 *     parameter to true. The updateToLatestImageVersion parameter is used when
 *     you update a compute environment. This parameter is ignored when you
 *     create a compute environment.
 *   - Don't specify an AMI ID in imageId, imageIdOverride (in
 *     ec2Configuration), or in the launch template (launchTemplate). In that
 *     case, Batch selects the latest Amazon ECS optimized AMI that's supported
 *     by Batch at the time the infrastructure update is initiated.
 *     Alternatively, you can specify the AMI ID in the imageId or
 *     imageIdOverride parameters, or the launch template identified by the
 *     LaunchTemplate properties. Changing any of these properties starts an
 *     infrastructure update. If the AMI ID is specified in the launch template,
 *     it can't be replaced by specifying an AMI ID in either the imageId or
 *     imageIdOverride parameters. It can only be replaced by specifying a
 *     different launch template, or, if the launch template version is set to
 *     $Default or $Latest, by setting either a new default version for the
 *     launch template (if $Default) or by adding a new version to the launch
 *     template (if $Latest).
 *
 * If these rules are followed, any update that starts an infrastructure update
 * causes the AMI ID to be re-selected. If the version setting in the launch
 * template (launchTemplate) is set to $Latest or $Default, the latest or
 * default version of the launch template is evaluated at the time of the
 * infrastructure update, even if the launchTemplate wasn't updated.
 *
 * See Also: AWS API Reference
 */
virtual Model::CreateComputeEnvironmentOutcome CreateComputeEnvironment(const Model::CreateComputeEnvironmentRequest& request) const;
/**
* A Callable wrapper for CreateComputeEnvironment that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateComputeEnvironmentRequestT = Model::CreateComputeEnvironmentRequest>
Model::CreateComputeEnvironmentOutcomeCallable CreateComputeEnvironmentCallable(const CreateComputeEnvironmentRequestT& request) const
{
return SubmitCallable(&BatchClient::CreateComputeEnvironment, request);
}
/**
* An Async wrapper for CreateComputeEnvironment that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateComputeEnvironmentRequestT = Model::CreateComputeEnvironmentRequest>
void CreateComputeEnvironmentAsync(const CreateComputeEnvironmentRequestT& request, const CreateComputeEnvironmentResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::CreateComputeEnvironment, request, handler, context);
}
/**
 * Creates a Batch job queue. When you create a job queue, you associate one or
 * more compute environments to the queue and assign an order of preference for
 * the compute environments.
 *
 * You also set a priority to the job queue that determines the order that the
 * Batch scheduler places jobs onto its associated compute environments. For
 * example, if a compute environment is associated with more than one job
 * queue, the job queue with a higher priority is given preference for
 * scheduling jobs to that compute environment.
 *
 * See Also: AWS API Reference
 */
virtual Model::CreateJobQueueOutcome CreateJobQueue(const Model::CreateJobQueueRequest& request) const;
/**
* A Callable wrapper for CreateJobQueue that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateJobQueueRequestT = Model::CreateJobQueueRequest>
Model::CreateJobQueueOutcomeCallable CreateJobQueueCallable(const CreateJobQueueRequestT& request) const
{
return SubmitCallable(&BatchClient::CreateJobQueue, request);
}
/**
* An Async wrapper for CreateJobQueue that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateJobQueueRequestT = Model::CreateJobQueueRequest>
void CreateJobQueueAsync(const CreateJobQueueRequestT& request, const CreateJobQueueResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::CreateJobQueue, request, handler, context);
}
/**
 * Creates a Batch scheduling policy.
 *
 * See Also: AWS API Reference
 */
virtual Model::CreateSchedulingPolicyOutcome CreateSchedulingPolicy(const Model::CreateSchedulingPolicyRequest& request) const;
/**
* A Callable wrapper for CreateSchedulingPolicy that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateSchedulingPolicyRequestT = Model::CreateSchedulingPolicyRequest>
Model::CreateSchedulingPolicyOutcomeCallable CreateSchedulingPolicyCallable(const CreateSchedulingPolicyRequestT& request) const
{
return SubmitCallable(&BatchClient::CreateSchedulingPolicy, request);
}
/**
* An Async wrapper for CreateSchedulingPolicy that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateSchedulingPolicyRequestT = Model::CreateSchedulingPolicyRequest>
void CreateSchedulingPolicyAsync(const CreateSchedulingPolicyRequestT& request, const CreateSchedulingPolicyResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::CreateSchedulingPolicy, request, handler, context);
}
/**
 * Deletes a Batch compute environment.
 *
 * Before you can delete a compute environment, you must set its state to
 * DISABLED with the UpdateComputeEnvironment API operation and disassociate it
 * from any job queues with the UpdateJobQueue API operation. Compute
 * environments that use Fargate resources must terminate all active jobs on
 * that compute environment before deleting the compute environment. If this
 * isn't done, the compute environment enters an invalid state.
 *
 * See Also: AWS API Reference
 */
virtual Model::DeleteComputeEnvironmentOutcome DeleteComputeEnvironment(const Model::DeleteComputeEnvironmentRequest& request) const;
/**
* A Callable wrapper for DeleteComputeEnvironment that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteComputeEnvironmentRequestT = Model::DeleteComputeEnvironmentRequest>
Model::DeleteComputeEnvironmentOutcomeCallable DeleteComputeEnvironmentCallable(const DeleteComputeEnvironmentRequestT& request) const
{
return SubmitCallable(&BatchClient::DeleteComputeEnvironment, request);
}
/**
* An Async wrapper for DeleteComputeEnvironment that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteComputeEnvironmentRequestT = Model::DeleteComputeEnvironmentRequest>
void DeleteComputeEnvironmentAsync(const DeleteComputeEnvironmentRequestT& request, const DeleteComputeEnvironmentResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::DeleteComputeEnvironment, request, handler, context);
}
/**
 * Deletes the specified job queue. You must first disable submissions for a
 * queue with the UpdateJobQueue operation. All jobs in the queue are
 * eventually terminated when you delete a job queue. The jobs are terminated
 * at a rate of about 16 jobs each second.
 *
 * It's not necessary to disassociate compute environments from a queue before
 * submitting a DeleteJobQueue request.
 *
 * See Also: AWS API Reference
 */
virtual Model::DeleteJobQueueOutcome DeleteJobQueue(const Model::DeleteJobQueueRequest& request) const;
/**
* A Callable wrapper for DeleteJobQueue that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteJobQueueRequestT = Model::DeleteJobQueueRequest>
Model::DeleteJobQueueOutcomeCallable DeleteJobQueueCallable(const DeleteJobQueueRequestT& request) const
{
return SubmitCallable(&BatchClient::DeleteJobQueue, request);
}
/**
* An Async wrapper for DeleteJobQueue that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteJobQueueRequestT = Model::DeleteJobQueueRequest>
void DeleteJobQueueAsync(const DeleteJobQueueRequestT& request, const DeleteJobQueueResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::DeleteJobQueue, request, handler, context);
}
/**
 * Deletes the specified scheduling policy.
 *
 * You can't delete a scheduling policy that's used in any job queues.
 *
 * See Also: AWS API Reference
 */
virtual Model::DeleteSchedulingPolicyOutcome DeleteSchedulingPolicy(const Model::DeleteSchedulingPolicyRequest& request) const;
/**
* A Callable wrapper for DeleteSchedulingPolicy that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteSchedulingPolicyRequestT = Model::DeleteSchedulingPolicyRequest>
Model::DeleteSchedulingPolicyOutcomeCallable DeleteSchedulingPolicyCallable(const DeleteSchedulingPolicyRequestT& request) const
{
return SubmitCallable(&BatchClient::DeleteSchedulingPolicy, request);
}
/**
* An Async wrapper for DeleteSchedulingPolicy that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteSchedulingPolicyRequestT = Model::DeleteSchedulingPolicyRequest>
void DeleteSchedulingPolicyAsync(const DeleteSchedulingPolicyRequestT& request, const DeleteSchedulingPolicyResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::DeleteSchedulingPolicy, request, handler, context);
}
/**
 * Deregisters a Batch job definition. Job definitions are permanently deleted
 * after 180 days.
 *
 * See Also: AWS API Reference
 */
virtual Model::DeregisterJobDefinitionOutcome DeregisterJobDefinition(const Model::DeregisterJobDefinitionRequest& request) const;
/**
* A Callable wrapper for DeregisterJobDefinition that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeregisterJobDefinitionRequestT = Model::DeregisterJobDefinitionRequest>
Model::DeregisterJobDefinitionOutcomeCallable DeregisterJobDefinitionCallable(const DeregisterJobDefinitionRequestT& request) const
{
return SubmitCallable(&BatchClient::DeregisterJobDefinition, request);
}
/**
* An Async wrapper for DeregisterJobDefinition that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeregisterJobDefinitionRequestT = Model::DeregisterJobDefinitionRequest>
void DeregisterJobDefinitionAsync(const DeregisterJobDefinitionRequestT& request, const DeregisterJobDefinitionResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::DeregisterJobDefinition, request, handler, context);
}
/**
 * Describes one or more of your compute environments.
 *
 * If you're using an unmanaged compute environment, you can use the
 * DescribeComputeEnvironments operation to determine the ecsClusterArn that
 * you launch your Amazon ECS container instances into.
 *
 * See Also: AWS API Reference
 */
virtual Model::DescribeComputeEnvironmentsOutcome DescribeComputeEnvironments(const Model::DescribeComputeEnvironmentsRequest& request) const;
/**
* A Callable wrapper for DescribeComputeEnvironments that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DescribeComputeEnvironmentsRequestT = Model::DescribeComputeEnvironmentsRequest>
Model::DescribeComputeEnvironmentsOutcomeCallable DescribeComputeEnvironmentsCallable(const DescribeComputeEnvironmentsRequestT& request) const
{
return SubmitCallable(&BatchClient::DescribeComputeEnvironments, request);
}
/**
* An Async wrapper for DescribeComputeEnvironments that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DescribeComputeEnvironmentsRequestT = Model::DescribeComputeEnvironmentsRequest>
void DescribeComputeEnvironmentsAsync(const DescribeComputeEnvironmentsRequestT& request, const DescribeComputeEnvironmentsResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::DescribeComputeEnvironments, request, handler, context);
}
/**
 * Describes a list of job definitions. You can specify a status (such as
 * ACTIVE) to only return job definitions that match that status.
 *
 * See Also: AWS API Reference
 */
*/
virtual Model::DescribeJobDefinitionsOutcome DescribeJobDefinitions(const Model::DescribeJobDefinitionsRequest& request) const;
/**
* A Callable wrapper for DescribeJobDefinitions that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DescribeJobDefinitionsRequestT = Model::DescribeJobDefinitionsRequest>
Model::DescribeJobDefinitionsOutcomeCallable DescribeJobDefinitionsCallable(const DescribeJobDefinitionsRequestT& request) const
{
return SubmitCallable(&BatchClient::DescribeJobDefinitions, request);
}
/**
* An Async wrapper for DescribeJobDefinitions that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DescribeJobDefinitionsRequestT = Model::DescribeJobDefinitionsRequest>
void DescribeJobDefinitionsAsync(const DescribeJobDefinitionsRequestT& request, const DescribeJobDefinitionsResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::DescribeJobDefinitions, request, handler, context);
}
/**
 * Describes one or more of your job queues.
 *
 * See Also: AWS API Reference
 */
virtual Model::DescribeJobQueuesOutcome DescribeJobQueues(const Model::DescribeJobQueuesRequest& request) const;
/**
* A Callable wrapper for DescribeJobQueues that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DescribeJobQueuesRequestT = Model::DescribeJobQueuesRequest>
Model::DescribeJobQueuesOutcomeCallable DescribeJobQueuesCallable(const DescribeJobQueuesRequestT& request) const
{
return SubmitCallable(&BatchClient::DescribeJobQueues, request);
}
/**
* An Async wrapper for DescribeJobQueues that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DescribeJobQueuesRequestT = Model::DescribeJobQueuesRequest>
void DescribeJobQueuesAsync(const DescribeJobQueuesRequestT& request, const DescribeJobQueuesResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::DescribeJobQueues, request, handler, context);
}
/**
 * Describes a list of Batch jobs.
 *
 * See Also: AWS API Reference
 */
virtual Model::DescribeJobsOutcome DescribeJobs(const Model::DescribeJobsRequest& request) const;
/**
* A Callable wrapper for DescribeJobs that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DescribeJobsRequestT = Model::DescribeJobsRequest>
Model::DescribeJobsOutcomeCallable DescribeJobsCallable(const DescribeJobsRequestT& request) const
{
return SubmitCallable(&BatchClient::DescribeJobs, request);
}
/**
* An Async wrapper for DescribeJobs that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DescribeJobsRequestT = Model::DescribeJobsRequest>
void DescribeJobsAsync(const DescribeJobsRequestT& request, const DescribeJobsResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::DescribeJobs, request, handler, context);
}
/**
 * Describes one or more of your scheduling policies.
 *
 * See Also: AWS API Reference
 */
virtual Model::DescribeSchedulingPoliciesOutcome DescribeSchedulingPolicies(const Model::DescribeSchedulingPoliciesRequest& request) const;
/**
* A Callable wrapper for DescribeSchedulingPolicies that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DescribeSchedulingPoliciesRequestT = Model::DescribeSchedulingPoliciesRequest>
Model::DescribeSchedulingPoliciesOutcomeCallable DescribeSchedulingPoliciesCallable(const DescribeSchedulingPoliciesRequestT& request) const
{
return SubmitCallable(&BatchClient::DescribeSchedulingPolicies, request);
}
/**
* An Async wrapper for DescribeSchedulingPolicies that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DescribeSchedulingPoliciesRequestT = Model::DescribeSchedulingPoliciesRequest>
void DescribeSchedulingPoliciesAsync(const DescribeSchedulingPoliciesRequestT& request, const DescribeSchedulingPoliciesResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::DescribeSchedulingPolicies, request, handler, context);
}
/**
 * Returns a list of Batch jobs.
 *
 * You must specify only one of the following items:
 *
 *   - A job queue ID to return a list of jobs in that job queue
 *   - A multi-node parallel job ID to return a list of nodes for that job
 *   - An array job ID to return a list of the children for that job
 *
 * You can filter the results by job status with the jobStatus parameter. If
 * you don't specify a status, only RUNNING jobs are returned.
 *
 * See Also: AWS API Reference
 */
virtual Model::ListJobsOutcome ListJobs(const Model::ListJobsRequest& request) const;
/**
* A Callable wrapper for ListJobs that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename ListJobsRequestT = Model::ListJobsRequest>
Model::ListJobsOutcomeCallable ListJobsCallable(const ListJobsRequestT& request) const
{
return SubmitCallable(&BatchClient::ListJobs, request);
}
/**
* An Async wrapper for ListJobs that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename ListJobsRequestT = Model::ListJobsRequest>
void ListJobsAsync(const ListJobsRequestT& request, const ListJobsResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::ListJobs, request, handler, context);
}
/**
 * Returns a list of Batch scheduling policies.
 *
 * See Also: AWS API Reference
 */
virtual Model::ListSchedulingPoliciesOutcome ListSchedulingPolicies(const Model::ListSchedulingPoliciesRequest& request) const;
/**
* A Callable wrapper for ListSchedulingPolicies that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename ListSchedulingPoliciesRequestT = Model::ListSchedulingPoliciesRequest>
Model::ListSchedulingPoliciesOutcomeCallable ListSchedulingPoliciesCallable(const ListSchedulingPoliciesRequestT& request) const
{
return SubmitCallable(&BatchClient::ListSchedulingPolicies, request);
}
/**
* An Async wrapper for ListSchedulingPolicies that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename ListSchedulingPoliciesRequestT = Model::ListSchedulingPoliciesRequest>
void ListSchedulingPoliciesAsync(const ListSchedulingPoliciesRequestT& request, const ListSchedulingPoliciesResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::ListSchedulingPolicies, request, handler, context);
}
/**
 * Lists the tags for a Batch resource. Batch resources that support tags are
 * compute environments, jobs, job definitions, job queues, and scheduling
 * policies. ARNs for child jobs of array and multi-node parallel (MNP) jobs
 * aren't supported.
 *
 * See Also: AWS API Reference
 */
virtual Model::ListTagsForResourceOutcome ListTagsForResource(const Model::ListTagsForResourceRequest& request) const;
/**
* A Callable wrapper for ListTagsForResource that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename ListTagsForResourceRequestT = Model::ListTagsForResourceRequest>
Model::ListTagsForResourceOutcomeCallable ListTagsForResourceCallable(const ListTagsForResourceRequestT& request) const
{
return SubmitCallable(&BatchClient::ListTagsForResource, request);
}
/**
* An Async wrapper for ListTagsForResource that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename ListTagsForResourceRequestT = Model::ListTagsForResourceRequest>
void ListTagsForResourceAsync(const ListTagsForResourceRequestT& request, const ListTagsForResourceResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::ListTagsForResource, request, handler, context);
}
/**
 * Registers a Batch job definition.
 *
 * See Also: AWS API Reference
 */
virtual Model::RegisterJobDefinitionOutcome RegisterJobDefinition(const Model::RegisterJobDefinitionRequest& request) const;
/**
* A Callable wrapper for RegisterJobDefinition that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename RegisterJobDefinitionRequestT = Model::RegisterJobDefinitionRequest>
Model::RegisterJobDefinitionOutcomeCallable RegisterJobDefinitionCallable(const RegisterJobDefinitionRequestT& request) const
{
return SubmitCallable(&BatchClient::RegisterJobDefinition, request);
}
/**
* An Async wrapper for RegisterJobDefinition that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename RegisterJobDefinitionRequestT = Model::RegisterJobDefinitionRequest>
void RegisterJobDefinitionAsync(const RegisterJobDefinitionRequestT& request, const RegisterJobDefinitionResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::RegisterJobDefinition, request, handler, context);
}
/**
 * Submits a Batch job from a job definition. Parameters that are specified
 * during SubmitJob override parameters defined in the job definition. vCPU and
 * memory requirements that are specified in the resourceRequirements objects
 * in the job definition are the exception: they can't be overridden with the
 * memory and vcpus parameters. Rather, you must specify updates to job
 * definition parameters in a resourceRequirements object that's included in
 * the containerOverrides parameter.
 *
 * Job queues with a scheduling policy are limited to 500 active fair share
 * identifiers at a time.
 *
 * Jobs that run on Fargate resources can't be guaranteed to run for more than
 * 14 days. This is because, after 14 days, Fargate resources might become
 * unavailable and the job might be terminated.
 *
 * See Also: AWS API Reference
 */
virtual Model::SubmitJobOutcome SubmitJob(const Model::SubmitJobRequest& request) const;
/**
* A Callable wrapper for SubmitJob that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename SubmitJobRequestT = Model::SubmitJobRequest>
Model::SubmitJobOutcomeCallable SubmitJobCallable(const SubmitJobRequestT& request) const
{
return SubmitCallable(&BatchClient::SubmitJob, request);
}
/**
* An Async wrapper for SubmitJob that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename SubmitJobRequestT = Model::SubmitJobRequest>
void SubmitJobAsync(const SubmitJobRequestT& request, const SubmitJobResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::SubmitJob, request, handler, context);
}
/**
 * Associates the specified tags with a resource with the specified
 * resourceArn. If existing tags on a resource aren't specified in the request
 * parameters, they aren't changed. When a resource is deleted, the tags that
 * are associated with that resource are deleted as well. Batch resources that
 * support tags are compute environments, jobs, job definitions, job queues,
 * and scheduling policies. ARNs for child jobs of array and multi-node
 * parallel (MNP) jobs aren't supported.
 *
 * See Also: AWS API Reference
 */
virtual Model::TagResourceOutcome TagResource(const Model::TagResourceRequest& request) const;
/**
* A Callable wrapper for TagResource that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename TagResourceRequestT = Model::TagResourceRequest>
Model::TagResourceOutcomeCallable TagResourceCallable(const TagResourceRequestT& request) const
{
return SubmitCallable(&BatchClient::TagResource, request);
}
/**
* An Async wrapper for TagResource that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename TagResourceRequestT = Model::TagResourceRequest>
void TagResourceAsync(const TagResourceRequestT& request, const TagResourceResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::TagResource, request, handler, context);
}
/**
 * Terminates a job in a job queue. Jobs that are in the STARTING or RUNNING
 * state are terminated, which causes them to transition to FAILED. Jobs that
 * haven't progressed to the STARTING state are cancelled.
 *
 * See Also: AWS API Reference
 */
virtual Model::TerminateJobOutcome TerminateJob(const Model::TerminateJobRequest& request) const;
/**
* A Callable wrapper for TerminateJob that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename TerminateJobRequestT = Model::TerminateJobRequest>
Model::TerminateJobOutcomeCallable TerminateJobCallable(const TerminateJobRequestT& request) const
{
return SubmitCallable(&BatchClient::TerminateJob, request);
}
/**
* An Async wrapper for TerminateJob that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename TerminateJobRequestT = Model::TerminateJobRequest>
void TerminateJobAsync(const TerminateJobRequestT& request, const TerminateJobResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::TerminateJob, request, handler, context);
}
/**
 * Deletes specified tags from a Batch resource.
 *
 * See Also: AWS API Reference
 */
virtual Model::UntagResourceOutcome UntagResource(const Model::UntagResourceRequest& request) const;
/**
* A Callable wrapper for UntagResource that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename UntagResourceRequestT = Model::UntagResourceRequest>
Model::UntagResourceOutcomeCallable UntagResourceCallable(const UntagResourceRequestT& request) const
{
return SubmitCallable(&BatchClient::UntagResource, request);
}
/**
* An Async wrapper for UntagResource that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename UntagResourceRequestT = Model::UntagResourceRequest>
void UntagResourceAsync(const UntagResourceRequestT& request, const UntagResourceResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::UntagResource, request, handler, context);
}
/**
 * Updates a Batch compute environment.
 *
 * See Also: AWS API Reference
 */
virtual Model::UpdateComputeEnvironmentOutcome UpdateComputeEnvironment(const Model::UpdateComputeEnvironmentRequest& request) const;
/**
* A Callable wrapper for UpdateComputeEnvironment that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename UpdateComputeEnvironmentRequestT = Model::UpdateComputeEnvironmentRequest>
Model::UpdateComputeEnvironmentOutcomeCallable UpdateComputeEnvironmentCallable(const UpdateComputeEnvironmentRequestT& request) const
{
return SubmitCallable(&BatchClient::UpdateComputeEnvironment, request);
}
/**
* An Async wrapper for UpdateComputeEnvironment that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename UpdateComputeEnvironmentRequestT = Model::UpdateComputeEnvironmentRequest>
void UpdateComputeEnvironmentAsync(const UpdateComputeEnvironmentRequestT& request, const UpdateComputeEnvironmentResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::UpdateComputeEnvironment, request, handler, context);
}
/**
 * Updates a job queue.
 *
 * See Also: AWS API Reference
 */
virtual Model::UpdateJobQueueOutcome UpdateJobQueue(const Model::UpdateJobQueueRequest& request) const;
/**
* A Callable wrapper for UpdateJobQueue that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename UpdateJobQueueRequestT = Model::UpdateJobQueueRequest>
Model::UpdateJobQueueOutcomeCallable UpdateJobQueueCallable(const UpdateJobQueueRequestT& request) const
{
return SubmitCallable(&BatchClient::UpdateJobQueue, request);
}
/**
* An Async wrapper for UpdateJobQueue that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename UpdateJobQueueRequestT = Model::UpdateJobQueueRequest>
void UpdateJobQueueAsync(const UpdateJobQueueRequestT& request, const UpdateJobQueueResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::UpdateJobQueue, request, handler, context);
}
/**
 * Updates a scheduling policy.
 *
 * See Also: AWS API Reference
 */
virtual Model::UpdateSchedulingPolicyOutcome UpdateSchedulingPolicy(const Model::UpdateSchedulingPolicyRequest& request) const;
/**
* A Callable wrapper for UpdateSchedulingPolicy that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename UpdateSchedulingPolicyRequestT = Model::UpdateSchedulingPolicyRequest>
Model::UpdateSchedulingPolicyOutcomeCallable UpdateSchedulingPolicyCallable(const UpdateSchedulingPolicyRequestT& request) const
{
return SubmitCallable(&BatchClient::UpdateSchedulingPolicy, request);
}
/**
* An Async wrapper for UpdateSchedulingPolicy that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename UpdateSchedulingPolicyRequestT = Model::UpdateSchedulingPolicyRequest>
void UpdateSchedulingPolicyAsync(const UpdateSchedulingPolicyRequestT& request, const UpdateSchedulingPolicyResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&BatchClient::UpdateSchedulingPolicy, request, handler, context);
}
void OverrideEndpoint(const Aws::String& endpoint);
std::shared_ptr<BatchEndpointProviderBase>& accessEndpointProvider();
private:
friend class Aws::Client::ClientWithAsyncTemplateMethods<BatchClient>;
void init(const BatchClientConfiguration& clientConfiguration);
BatchClientConfiguration m_clientConfiguration;
std::shared_ptr<Aws::Utils::Threading::Executor> m_executor;
std::shared_ptr<BatchEndpointProviderBase> m_endpointProvider;
};
} // namespace Batch
} // namespace Aws