& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateEdgePackagingJob, request, handler, context);
}
/**
* Creates an endpoint using the endpoint configuration specified in the request.
* SageMaker uses the endpoint to provision resources and deploy models. You create
* the endpoint configuration with the CreateEndpointConfig API.
*
* Use this API to deploy models using SageMaker hosting services. For an example
* that calls this method when deploying a model to SageMaker hosting services, see
* the Create Endpoint example notebook.
*
* You must not delete an EndpointConfig that is in use by an endpoint that is live
* or while the UpdateEndpoint or CreateEndpoint operations are being performed on
* the endpoint. To update an endpoint, you must create a new EndpointConfig.
*
* The endpoint name must be unique within an Amazon Web Services Region in your
* Amazon Web Services account.
*
* When it receives the request, SageMaker creates the endpoint, launches the
* resources (ML compute instances), and deploys the model(s) on them.
*
* When you call CreateEndpoint, a load call is made to DynamoDB to verify that
* your endpoint configuration exists. When you read data from a DynamoDB table
* supporting Eventually Consistent Reads, the response might not reflect the
* results of a recently completed write operation. The response might include some
* stale data. If the dependent entities are not yet in DynamoDB, this causes a
* validation error. If you repeat your read request after a short time, the
* response should return the latest data. So retry logic is recommended to handle
* these possible issues. We also recommend that customers call
* DescribeEndpointConfig before calling CreateEndpoint to minimize the potential
* impact of a DynamoDB eventually consistent read.
*
* When SageMaker receives the request, it sets the endpoint status to Creating.
* After it creates the endpoint, it sets the status to InService. SageMaker can
* then process incoming requests for inferences. To check the status of an
* endpoint, use the DescribeEndpoint API.
*
* If any of the models hosted at this endpoint get model data from an Amazon S3
* location, SageMaker uses Amazon Web Services Security Token Service to download
* model artifacts from the S3 path you provided. Amazon Web Services STS is
* activated in your Amazon Web Services account by default. If you previously
* deactivated Amazon Web Services STS for a Region, you need to reactivate Amazon
* Web Services STS for that Region. For more information, see Activating and
* Deactivating Amazon Web Services STS in an Amazon Web Services Region in the
* Amazon Web Services Identity and Access Management User Guide.
*
* To add the IAM role policies for using this API operation, go to the IAM
* console and choose Roles in the left navigation pane. Search for the IAM role
* that you want to grant access to use the CreateEndpoint and CreateEndpointConfig
* API operations, and add the following policies to the role:
*
* - Option 1: For full SageMaker access, search for and attach the
*   AmazonSageMakerFullAccess policy.
*
* - Option 2: For granting limited access to an IAM role, paste the following
*   Action elements manually into the JSON file of the IAM role:
*
*   "Action": ["sagemaker:CreateEndpoint", "sagemaker:CreateEndpointConfig"]
*   "Resource": [
*     "arn:aws:sagemaker:region:account-id:endpoint/endpointName"
*     "arn:aws:sagemaker:region:account-id:endpoint-config/endpointConfigName"
*   ]
*
*   For more information, see SageMaker API Permissions: Actions, Permissions, and
*   Resources Reference.
*
* See Also: AWS API Reference
*/
virtual Model::CreateEndpointOutcome CreateEndpoint(const Model::CreateEndpointRequest& request) const;
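/*
* Usage sketch (not part of the generated documentation): a minimal synchronous
* CreateEndpoint call, plus the asynchronous wrapper declared below. The endpoint
* and endpoint-config names are placeholders; the Set* accessors are assumed to
* follow the SDK's generated Set<Member> naming, and the handler lambda matches
* the ResponseReceivedHandler std::function signature used by this SDK.
*
*   Aws::SageMaker::SageMakerClient client;
*   Aws::SageMaker::Model::CreateEndpointRequest request;
*   request.SetEndpointName("my-endpoint");              // placeholder name
*   request.SetEndpointConfigName("my-endpoint-config"); // created via CreateEndpointConfig
*
*   auto outcome = client.CreateEndpoint(request);
*   if (!outcome.IsSuccess())
*   {
*       // A validation error here may only mean the endpoint config has not yet
*       // propagated (DynamoDB eventually consistent read); waiting briefly and
*       // retrying is the guidance given in the documentation above.
*   }
*
*   // Asynchronous variant: the handler runs on the SDK's executor thread.
*   client.CreateEndpointAsync(request,
*       [](const Aws::SageMaker::SageMakerClient*,
*          const Aws::SageMaker::Model::CreateEndpointRequest&,
*          const Aws::SageMaker::Model::CreateEndpointOutcome& asyncOutcome,
*          const std::shared_ptr<const Aws::Client::AsyncCallerContext>&)
*       {
*           // React to asyncOutcome.IsSuccess() here.
*       });
*/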
/**
* A Callable wrapper for CreateEndpoint that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateEndpointRequestT = Model::CreateEndpointRequest>
Model::CreateEndpointOutcomeCallable CreateEndpointCallable(const CreateEndpointRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateEndpoint, request);
}
/**
* An Async wrapper for CreateEndpoint that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateEndpointRequestT = Model::CreateEndpointRequest>
void CreateEndpointAsync(const CreateEndpointRequestT& request, const CreateEndpointResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateEndpoint, request, handler, context);
}
/**
* Creates an endpoint configuration that SageMaker hosting services uses to
* deploy models. In the configuration, you identify one or more models, created
* using the CreateModel API, to deploy and the resources that you want SageMaker
* to provision. Then you call the CreateEndpoint API.
*
* Use this API if you want to use SageMaker hosting services to deploy models
* into production.
*
* In the request, you define a ProductionVariant for each model that you want to
* deploy. Each ProductionVariant parameter also describes the resources that you
* want SageMaker to provision. This includes the number and type of ML compute
* instances to deploy.
*
* If you are hosting multiple models, you also assign a VariantWeight to specify
* how much traffic you want to allocate to each model. For example, suppose that
* you want to host two models, A and B, and you assign traffic weight 2 for model
* A and 1 for model B. SageMaker distributes two-thirds of the traffic to model A
* and one-third to model B.
*
* When you call CreateEndpoint, a load call is made to DynamoDB to verify that
* your endpoint configuration exists. When you read data from a DynamoDB table
* supporting Eventually Consistent Reads, the response might not reflect the
* results of a recently completed write operation. The response might include
* some stale data. If the dependent entities are not yet in DynamoDB, this causes
* a validation error. If you repeat your read request after a short time, the
* response should return the latest data. So retry logic is recommended to handle
* these possible issues. We also recommend that customers call
* DescribeEndpointConfig before calling CreateEndpoint to minimize the potential
* impact of a DynamoDB eventually consistent read.
*
* See Also: AWS API Reference
*/
virtual Model::CreateEndpointConfigOutcome CreateEndpointConfig(const Model::CreateEndpointConfigRequest& request) const;
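/*
* Usage sketch (illustrative, not from the service docs): building a single
* ProductionVariant and creating an endpoint configuration for it. The variant,
* model, and config names are placeholders, and the instance type enum value is
* an assumption about the generated ProductionVariantInstanceType enum.
*
*   Aws::SageMaker::Model::ProductionVariant variant;
*   variant.SetVariantName("AllTraffic");
*   variant.SetModelName("my-model");                 // created via CreateModel
*   variant.SetInitialInstanceCount(1);
*   variant.SetInstanceType(Aws::SageMaker::Model::ProductionVariantInstanceType::ml_m5_large);
*   variant.SetInitialVariantWeight(1.0);             // relative traffic weight
*
*   Aws::SageMaker::Model::CreateEndpointConfigRequest request;
*   request.SetEndpointConfigName("my-endpoint-config");
*   request.AddProductionVariants(variant);
*
*   auto outcome = client.CreateEndpointConfig(request);
*/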
/**
* A Callable wrapper for CreateEndpointConfig that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateEndpointConfigRequestT = Model::CreateEndpointConfigRequest>
Model::CreateEndpointConfigOutcomeCallable CreateEndpointConfigCallable(const CreateEndpointConfigRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateEndpointConfig, request);
}
/**
* An Async wrapper for CreateEndpointConfig that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateEndpointConfigRequestT = Model::CreateEndpointConfigRequest>
void CreateEndpointConfigAsync(const CreateEndpointConfigRequestT& request, const CreateEndpointConfigResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateEndpointConfig, request, handler, context);
}
/**
* Creates a SageMaker experiment. An experiment is a collection of trials that
* are observed, compared, and evaluated as a group. A trial is a set of steps,
* called trial components, that produce a machine learning model.
*
* In the Studio UI, trials are referred to as run groups and trial components are
* referred to as runs.
*
* The goal of an experiment is to determine the components that produce the best
* model. Multiple trials are performed, each one isolating and measuring the
* impact of a change to one or more inputs, while keeping the remaining inputs
* constant.
*
* When you use SageMaker Studio or the SageMaker Python SDK, all experiments,
* trials, and trial components are automatically tracked, logged, and indexed.
* When you use the Amazon Web Services SDK for Python (Boto), you must use the
* logging APIs provided by the SDK.
*
* You can add tags to experiments, trials, and trial components and then use the
* Search API to search for the tags.
*
* To add a description to an experiment, specify the optional Description
* parameter. To add a description later, or to change the description, call the
* UpdateExperiment API.
*
* To get a list of all your experiments, call the ListExperiments API. To view an
* experiment's properties, call the DescribeExperiment API. To get a list of all
* the trials associated with an experiment, call the ListTrials API. To create a
* trial, call the CreateTrial API.
*
* See Also: AWS API Reference
*/
virtual Model::CreateExperimentOutcome CreateExperiment(const Model::CreateExperimentRequest& request) const;
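/*
* Usage sketch (illustrative): creating an experiment and then a trial inside it,
* following the workflow described above. The experiment and trial names are
* placeholders; the setters are assumed to follow the generated Set<Member>
* naming.
*
*   Aws::SageMaker::Model::CreateExperimentRequest experiment;
*   experiment.SetExperimentName("churn-prediction");
*   experiment.SetDescription("Compare feature sets for the churn model");
*
*   if (client.CreateExperiment(experiment).IsSuccess())
*   {
*       Aws::SageMaker::Model::CreateTrialRequest trial;
*       trial.SetExperimentName("churn-prediction");
*       trial.SetTrialName("churn-prediction-run-1");
*       client.CreateTrial(trial);
*   }
*/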
/**
* A Callable wrapper for CreateExperiment that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateExperimentRequestT = Model::CreateExperimentRequest>
Model::CreateExperimentOutcomeCallable CreateExperimentCallable(const CreateExperimentRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateExperiment, request);
}
/**
* An Async wrapper for CreateExperiment that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateExperimentRequestT = Model::CreateExperimentRequest>
void CreateExperimentAsync(const CreateExperimentRequestT& request, const CreateExperimentResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateExperiment, request, handler, context);
}
/**
* Create a new FeatureGroup. A FeatureGroup is a group of Features defined in the
* FeatureStore to describe a Record.
*
* The FeatureGroup defines the schema and features contained in the FeatureGroup.
* A FeatureGroup definition is composed of a list of Features, a
* RecordIdentifierFeatureName, an EventTimeFeatureName, and configurations for
* its OnlineStore and OfflineStore. Check Amazon Web Services service quotas to
* see the FeatureGroups quota for your Amazon Web Services account.
*
* You must include at least one of OnlineStoreConfig and OfflineStoreConfig to
* create a FeatureGroup.
*
* See Also: AWS API Reference
*/
virtual Model::CreateFeatureGroupOutcome CreateFeatureGroup(const Model::CreateFeatureGroupRequest& request) const;
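/*
* Usage sketch (illustrative): the smallest request shape implied by the text
* above - a record identifier feature, an event time feature, and at least one of
* OnlineStoreConfig/OfflineStoreConfig. Feature and group names are placeholders;
* enum and member names are assumptions about the generated model classes.
*
*   Aws::SageMaker::Model::FeatureDefinition id;
*   id.SetFeatureName("record_id");
*   id.SetFeatureType(Aws::SageMaker::Model::FeatureType::String);
*
*   Aws::SageMaker::Model::FeatureDefinition eventTime;
*   eventTime.SetFeatureName("event_time");
*   eventTime.SetFeatureType(Aws::SageMaker::Model::FeatureType::String);
*
*   Aws::SageMaker::Model::OnlineStoreConfig onlineStore;
*   onlineStore.SetEnableOnlineStore(true);
*
*   Aws::SageMaker::Model::CreateFeatureGroupRequest request;
*   request.SetFeatureGroupName("my-feature-group");
*   request.SetRecordIdentifierFeatureName("record_id");
*   request.SetEventTimeFeatureName("event_time");
*   request.AddFeatureDefinitions(id);
*   request.AddFeatureDefinitions(eventTime);
*   request.SetOnlineStoreConfig(onlineStore);
*
*   auto outcome = client.CreateFeatureGroup(request);
*/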
/**
* A Callable wrapper for CreateFeatureGroup that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateFeatureGroupRequestT = Model::CreateFeatureGroupRequest>
Model::CreateFeatureGroupOutcomeCallable CreateFeatureGroupCallable(const CreateFeatureGroupRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateFeatureGroup, request);
}
/**
* An Async wrapper for CreateFeatureGroup that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateFeatureGroupRequestT = Model::CreateFeatureGroupRequest>
void CreateFeatureGroupAsync(const CreateFeatureGroupRequestT& request, const CreateFeatureGroupResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateFeatureGroup, request, handler, context);
}
/**
* Creates a flow definition.
*
* See Also: AWS API Reference
*/
virtual Model::CreateFlowDefinitionOutcome CreateFlowDefinition(const Model::CreateFlowDefinitionRequest& request) const;
/**
* A Callable wrapper for CreateFlowDefinition that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateFlowDefinitionRequestT = Model::CreateFlowDefinitionRequest>
Model::CreateFlowDefinitionOutcomeCallable CreateFlowDefinitionCallable(const CreateFlowDefinitionRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateFlowDefinition, request);
}
/**
* An Async wrapper for CreateFlowDefinition that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateFlowDefinitionRequestT = Model::CreateFlowDefinitionRequest>
void CreateFlowDefinitionAsync(const CreateFlowDefinitionRequestT& request, const CreateFlowDefinitionResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateFlowDefinition, request, handler, context);
}
/**
* Create a hub.
*
* Hub APIs are only callable through SageMaker Studio.
*
* See Also: AWS API Reference
*/
virtual Model::CreateHubOutcome CreateHub(const Model::CreateHubRequest& request) const;
/**
* A Callable wrapper for CreateHub that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateHubRequestT = Model::CreateHubRequest>
Model::CreateHubOutcomeCallable CreateHubCallable(const CreateHubRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateHub, request);
}
/**
* An Async wrapper for CreateHub that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateHubRequestT = Model::CreateHubRequest>
void CreateHubAsync(const CreateHubRequestT& request, const CreateHubResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateHub, request, handler, context);
}
/**
* Defines the settings you will use for the human review workflow user
* interface. Reviewers will see a three-panel interface with an instruction
* area, the item to review, and an input area.
*
* See Also: AWS API Reference
*/
virtual Model::CreateHumanTaskUiOutcome CreateHumanTaskUi(const Model::CreateHumanTaskUiRequest& request) const;
/**
* A Callable wrapper for CreateHumanTaskUi that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateHumanTaskUiRequestT = Model::CreateHumanTaskUiRequest>
Model::CreateHumanTaskUiOutcomeCallable CreateHumanTaskUiCallable(const CreateHumanTaskUiRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateHumanTaskUi, request);
}
/**
* An Async wrapper for CreateHumanTaskUi that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateHumanTaskUiRequestT = Model::CreateHumanTaskUiRequest>
void CreateHumanTaskUiAsync(const CreateHumanTaskUiRequestT& request, const CreateHumanTaskUiResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateHumanTaskUi, request, handler, context);
}
/**
* Starts a hyperparameter tuning job. A hyperparameter tuning job finds the best
* version of a model by running many training jobs on your dataset using the
* algorithm you choose and values for hyperparameters within ranges that you
* specify. It then chooses the hyperparameter values that result in a model that
* performs the best, as measured by an objective metric that you choose.
*
* A hyperparameter tuning job automatically creates Amazon SageMaker experiments,
* trials, and trial components for each training job that it runs. You can view
* these entities in Amazon SageMaker Studio. For more information, see View
* Experiments, Trials, and Trial Components.
*
* Do not include any security-sensitive information, including account access
* IDs, secrets, or tokens, in any hyperparameter field. If the use of
* security-sensitive credentials is detected, SageMaker will reject your training
* job request and return an exception error.
*
* See Also: AWS API Reference
*/
virtual Model::CreateHyperParameterTuningJobOutcome CreateHyperParameterTuningJob(const Model::CreateHyperParameterTuningJobRequest& request) const;
/**
* A Callable wrapper for CreateHyperParameterTuningJob that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateHyperParameterTuningJobRequestT = Model::CreateHyperParameterTuningJobRequest>
Model::CreateHyperParameterTuningJobOutcomeCallable CreateHyperParameterTuningJobCallable(const CreateHyperParameterTuningJobRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateHyperParameterTuningJob, request);
}
/**
* An Async wrapper for CreateHyperParameterTuningJob that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateHyperParameterTuningJobRequestT = Model::CreateHyperParameterTuningJobRequest>
void CreateHyperParameterTuningJobAsync(const CreateHyperParameterTuningJobRequestT& request, const CreateHyperParameterTuningJobResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateHyperParameterTuningJob, request, handler, context);
}
/**
* Creates a custom SageMaker image. A SageMaker image is a set of image
* versions. Each image version represents a container image stored in Amazon
* Elastic Container Registry (ECR). For more information, see Bring your own
* SageMaker image.
*
* See Also: AWS API Reference
*/
virtual Model::CreateImageOutcome CreateImage(const Model::CreateImageRequest& request) const;
/**
* A Callable wrapper for CreateImage that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateImageRequestT = Model::CreateImageRequest>
Model::CreateImageOutcomeCallable CreateImageCallable(const CreateImageRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateImage, request);
}
/**
* An Async wrapper for CreateImage that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateImageRequestT = Model::CreateImageRequest>
void CreateImageAsync(const CreateImageRequestT& request, const CreateImageResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateImage, request, handler, context);
}
/**
* Creates a version of the SageMaker image specified by ImageName. The version
* represents the Amazon Elastic Container Registry (ECR) container image
* specified by BaseImage.
*
* See Also: AWS API Reference
*/
virtual Model::CreateImageVersionOutcome CreateImageVersion(const Model::CreateImageVersionRequest& request) const;
/**
* A Callable wrapper for CreateImageVersion that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateImageVersionRequestT = Model::CreateImageVersionRequest>
Model::CreateImageVersionOutcomeCallable CreateImageVersionCallable(const CreateImageVersionRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateImageVersion, request);
}
/**
* An Async wrapper for CreateImageVersion that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateImageVersionRequestT = Model::CreateImageVersionRequest>
void CreateImageVersionAsync(const CreateImageVersionRequestT& request, const CreateImageVersionResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateImageVersion, request, handler, context);
}
/**
* Creates an inference experiment using the configurations specified in the
* request.
*
* Use this API to set up and schedule an experiment to compare model variants on
* an Amazon SageMaker inference endpoint. For more information about inference
* experiments, see Shadow tests.
*
* Amazon SageMaker begins your experiment at the scheduled time and routes
* traffic to your endpoint's model variants based on your specified
* configuration.
*
* While the experiment is in progress or after it has concluded, you can view
* metrics that compare your model variants. For more information, see View,
* monitor, and edit shadow tests.
*
* See Also: AWS API Reference
*/
virtual Model::CreateInferenceExperimentOutcome CreateInferenceExperiment(const Model::CreateInferenceExperimentRequest& request) const;
/**
* A Callable wrapper for CreateInferenceExperiment that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateInferenceExperimentRequestT = Model::CreateInferenceExperimentRequest>
Model::CreateInferenceExperimentOutcomeCallable CreateInferenceExperimentCallable(const CreateInferenceExperimentRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateInferenceExperiment, request);
}
/**
* An Async wrapper for CreateInferenceExperiment that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateInferenceExperimentRequestT = Model::CreateInferenceExperimentRequest>
void CreateInferenceExperimentAsync(const CreateInferenceExperimentRequestT& request, const CreateInferenceExperimentResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateInferenceExperiment, request, handler, context);
}
/**
* Starts a recommendation job. You can create either an instance recommendation
* or load test job.
*
* See Also: AWS API Reference
*/
virtual Model::CreateInferenceRecommendationsJobOutcome CreateInferenceRecommendationsJob(const Model::CreateInferenceRecommendationsJobRequest& request) const;
/**
* A Callable wrapper for CreateInferenceRecommendationsJob that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateInferenceRecommendationsJobRequestT = Model::CreateInferenceRecommendationsJobRequest>
Model::CreateInferenceRecommendationsJobOutcomeCallable CreateInferenceRecommendationsJobCallable(const CreateInferenceRecommendationsJobRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateInferenceRecommendationsJob, request);
}
/**
* An Async wrapper for CreateInferenceRecommendationsJob that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateInferenceRecommendationsJobRequestT = Model::CreateInferenceRecommendationsJobRequest>
void CreateInferenceRecommendationsJobAsync(const CreateInferenceRecommendationsJobRequestT& request, const CreateInferenceRecommendationsJobResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateInferenceRecommendationsJob, request, handler, context);
}
/**
* Creates a job that uses workers to label the data objects in your input
* dataset. You can use the labeled data to train machine learning models.
*
* You can select your workforce from one of three providers:
*
* - A private workforce that you create. It can include employees, contractors,
*   and outside experts. Use a private workforce when you want the data to stay
*   within your organization or when a specific set of skills is required.
*
* - One or more vendors that you select from the Amazon Web Services Marketplace.
*   Vendors provide expertise in specific areas.
*
* - The Amazon Mechanical Turk workforce. This is the largest workforce, but it
*   should only be used for public data or data that has been stripped of any
*   personally identifiable information.
*
* You can also use automated data labeling to reduce the number of data objects
* that need to be labeled by a human. Automated data labeling uses active
* learning to determine if a data object can be labeled by machine or if it needs
* to be sent to a human worker. For more information, see Using Automated Data
* Labeling.
*
* The data objects to be labeled are contained in an Amazon S3 bucket. You create
* a manifest file that describes the location of each object. For more
* information, see Using Input and Output Data.
*
* The output can be used as the manifest file for another labeling job or as
* training data for your machine learning models.
*
* You can use this operation to create a static labeling job or a streaming
* labeling job. A static labeling job stops if all data objects in the input
* manifest file identified in ManifestS3Uri have been labeled. A streaming
* labeling job runs perpetually until it is manually stopped, or remains idle for
* 10 days. You can send new data objects to an active (InProgress) streaming
* labeling job in real time. To learn how to create a static labeling job, see
* Create a Labeling Job (API) in the Amazon SageMaker Developer Guide. To learn
* how to create a streaming labeling job, see Create a Streaming Labeling Job.
*
* See Also: AWS API Reference
*/
virtual Model::CreateLabelingJobOutcome CreateLabelingJob(const Model::CreateLabelingJobRequest& request) const;
/**
* A Callable wrapper for CreateLabelingJob that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateLabelingJobRequestT = Model::CreateLabelingJobRequest>
Model::CreateLabelingJobOutcomeCallable CreateLabelingJobCallable(const CreateLabelingJobRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateLabelingJob, request);
}
/**
* An Async wrapper for CreateLabelingJob that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateLabelingJobRequestT = Model::CreateLabelingJobRequest>
void CreateLabelingJobAsync(const CreateLabelingJobRequestT& request, const CreateLabelingJobResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateLabelingJob, request, handler, context);
}
/**
* Creates a model in SageMaker. In the request, you name the model and describe a
* primary container. For the primary container, you specify the Docker image that
* contains inference code, artifacts (from prior training), and a custom
* environment map that the inference code uses when you deploy the model for
* predictions.
*
* Use this API to create a model if you want to use SageMaker hosting services or
* run a batch transform job.
*
* To host your model, you create an endpoint configuration with the
* CreateEndpointConfig API, and then create an endpoint with the CreateEndpoint
* API. SageMaker then deploys all of the containers that you defined for the
* model in the hosting environment.
*
* For an example that calls this method when deploying a model to SageMaker
* hosting services, see Create a Model (Amazon Web Services SDK for Python
* (Boto 3)).
*
* To run a batch transform using your model, you start a job with the
* CreateTransformJob API. SageMaker uses your model and your dataset to get
* inferences, which are then saved to a specified S3 location.
*
* In the request, you also provide an IAM role that SageMaker can assume to
* access model artifacts and the Docker image for deployment on ML compute
* hosting instances or for batch transform jobs. In addition, you also use the
* IAM role to manage permissions the inference code needs. For example, if the
* inference code accesses any other Amazon Web Services resources, you grant the
* necessary permissions via this role.
*
* See Also: AWS API Reference
*/
virtual Model::CreateModelOutcome CreateModel(const Model::CreateModelRequest& request) const;
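/*
* Usage sketch (illustrative): a model with a single primary container, as
* described above. The image URI, artifact location, model name, and role ARN are
* placeholders; member names are assumptions about the generated model classes.
*
*   Aws::SageMaker::Model::ContainerDefinition primary;
*   primary.SetImage("123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference:latest");
*   primary.SetModelDataUrl("s3://my-bucket/model/model.tar.gz");
*
*   Aws::SageMaker::Model::CreateModelRequest request;
*   request.SetModelName("my-model");
*   request.SetPrimaryContainer(primary);
*   request.SetExecutionRoleArn("arn:aws:iam::123456789012:role/SageMakerExecutionRole");
*
*   auto outcome = client.CreateModel(request);
*/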
/**
* A Callable wrapper for CreateModel that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateModelRequestT = Model::CreateModelRequest>
Model::CreateModelOutcomeCallable CreateModelCallable(const CreateModelRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateModel, request);
}
/**
* An Async wrapper for CreateModel that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateModelRequestT = Model::CreateModelRequest>
void CreateModelAsync(const CreateModelRequestT& request, const CreateModelResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateModel, request, handler, context);
}
/**
* Creates the definition for a model bias job.
*
* See Also: AWS API Reference
*/
virtual Model::CreateModelBiasJobDefinitionOutcome CreateModelBiasJobDefinition(const Model::CreateModelBiasJobDefinitionRequest& request) const;
/**
* A Callable wrapper for CreateModelBiasJobDefinition that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateModelBiasJobDefinitionRequestT = Model::CreateModelBiasJobDefinitionRequest>
Model::CreateModelBiasJobDefinitionOutcomeCallable CreateModelBiasJobDefinitionCallable(const CreateModelBiasJobDefinitionRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateModelBiasJobDefinition, request);
}
/**
* An Async wrapper for CreateModelBiasJobDefinition that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateModelBiasJobDefinitionRequestT = Model::CreateModelBiasJobDefinitionRequest>
void CreateModelBiasJobDefinitionAsync(const CreateModelBiasJobDefinitionRequestT& request, const CreateModelBiasJobDefinitionResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateModelBiasJobDefinition, request, handler, context);
}
/**
* Creates an Amazon SageMaker Model Card.
*
* For information about how to use model cards, see Amazon SageMaker Model Card.
*
* See Also: AWS API Reference
*/
virtual Model::CreateModelCardOutcome CreateModelCard(const Model::CreateModelCardRequest& request) const;
/**
* A Callable wrapper for CreateModelCard that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateModelCardRequestT = Model::CreateModelCardRequest>
Model::CreateModelCardOutcomeCallable CreateModelCardCallable(const CreateModelCardRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateModelCard, request);
}
/**
* An Async wrapper for CreateModelCard that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateModelCardRequestT = Model::CreateModelCardRequest>
void CreateModelCardAsync(const CreateModelCardRequestT& request, const CreateModelCardResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateModelCard, request, handler, context);
}
/**
* Creates an Amazon SageMaker Model Card export job.
*
* See Also: AWS API Reference
*/
virtual Model::CreateModelCardExportJobOutcome CreateModelCardExportJob(const Model::CreateModelCardExportJobRequest& request) const;
/**
* A Callable wrapper for CreateModelCardExportJob that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateModelCardExportJobRequestT = Model::CreateModelCardExportJobRequest>
Model::CreateModelCardExportJobOutcomeCallable CreateModelCardExportJobCallable(const CreateModelCardExportJobRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateModelCardExportJob, request);
}
/**
* An Async wrapper for CreateModelCardExportJob that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateModelCardExportJobRequestT = Model::CreateModelCardExportJobRequest>
void CreateModelCardExportJobAsync(const CreateModelCardExportJobRequestT& request, const CreateModelCardExportJobResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateModelCardExportJob, request, handler, context);
}
/**
* Creates the definition for a model explainability job.
*
* See Also: AWS API Reference
*/
virtual Model::CreateModelExplainabilityJobDefinitionOutcome CreateModelExplainabilityJobDefinition(const Model::CreateModelExplainabilityJobDefinitionRequest& request) const;
/**
* A Callable wrapper for CreateModelExplainabilityJobDefinition that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateModelExplainabilityJobDefinitionRequestT = Model::CreateModelExplainabilityJobDefinitionRequest>
Model::CreateModelExplainabilityJobDefinitionOutcomeCallable CreateModelExplainabilityJobDefinitionCallable(const CreateModelExplainabilityJobDefinitionRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateModelExplainabilityJobDefinition, request);
}
/**
* An Async wrapper for CreateModelExplainabilityJobDefinition that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateModelExplainabilityJobDefinitionRequestT = Model::CreateModelExplainabilityJobDefinitionRequest>
void CreateModelExplainabilityJobDefinitionAsync(const CreateModelExplainabilityJobDefinitionRequestT& request, const CreateModelExplainabilityJobDefinitionResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateModelExplainabilityJobDefinition, request, handler, context);
}
/**
* Creates a model package that you can use to create SageMaker models or list on
* Amazon Web Services Marketplace, or a versioned model that is part of a model
* group. Buyers can subscribe to model packages listed on Amazon Web Services
* Marketplace to create models in SageMaker.
*
* To create a model package by specifying a Docker container that contains your
* inference code and the Amazon S3 location of your model artifacts, provide
* values for InferenceSpecification. To create a model from an algorithm resource
* that you created or subscribed to in Amazon Web Services Marketplace, provide a
* value for SourceAlgorithmSpecification.
*
* There are two types of model packages:
*
* - Versioned - a model that is part of a model group in the model registry.
*
* - Unversioned - a model package that is not part of a model group.
*
* See Also: AWS API Reference
*/
virtual Model::CreateModelPackageOutcome CreateModelPackage(const Model::CreateModelPackageRequest& request) const;
/**
* A Callable wrapper for CreateModelPackage that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateModelPackageRequestT = Model::CreateModelPackageRequest>
Model::CreateModelPackageOutcomeCallable CreateModelPackageCallable(const CreateModelPackageRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateModelPackage, request);
}
/**
* An Async wrapper for CreateModelPackage that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateModelPackageRequestT = Model::CreateModelPackageRequest>
void CreateModelPackageAsync(const CreateModelPackageRequestT& request, const CreateModelPackageResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateModelPackage, request, handler, context);
}
/**
* Creates a model group. A model group contains a group of model versions.
*
* See Also: AWS API Reference
*/
virtual Model::CreateModelPackageGroupOutcome CreateModelPackageGroup(const Model::CreateModelPackageGroupRequest& request) const;
/**
* A Callable wrapper for CreateModelPackageGroup that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateModelPackageGroupRequestT = Model::CreateModelPackageGroupRequest>
Model::CreateModelPackageGroupOutcomeCallable CreateModelPackageGroupCallable(const CreateModelPackageGroupRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateModelPackageGroup, request);
}
/**
* An Async wrapper for CreateModelPackageGroup that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateModelPackageGroupRequestT = Model::CreateModelPackageGroupRequest>
void CreateModelPackageGroupAsync(const CreateModelPackageGroupRequestT& request, const CreateModelPackageGroupResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateModelPackageGroup, request, handler, context);
}
/**
* Creates a definition for a job that monitors model quality and drift. For
* information about model monitor, see Amazon SageMaker Model Monitor.
*
* See Also: AWS API Reference
*/
virtual Model::CreateModelQualityJobDefinitionOutcome CreateModelQualityJobDefinition(const Model::CreateModelQualityJobDefinitionRequest& request) const;
/**
* A Callable wrapper for CreateModelQualityJobDefinition that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateModelQualityJobDefinitionRequestT = Model::CreateModelQualityJobDefinitionRequest>
Model::CreateModelQualityJobDefinitionOutcomeCallable CreateModelQualityJobDefinitionCallable(const CreateModelQualityJobDefinitionRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateModelQualityJobDefinition, request);
}
/**
* An Async wrapper for CreateModelQualityJobDefinition that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateModelQualityJobDefinitionRequestT = Model::CreateModelQualityJobDefinitionRequest>
void CreateModelQualityJobDefinitionAsync(const CreateModelQualityJobDefinitionRequestT& request, const CreateModelQualityJobDefinitionResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateModelQualityJobDefinition, request, handler, context);
}
/**
* Creates a schedule that regularly starts Amazon SageMaker Processing Jobs to
* monitor the data captured for an Amazon SageMaker Endpoint.
*
* See Also: AWS API Reference
*/
virtual Model::CreateMonitoringScheduleOutcome CreateMonitoringSchedule(const Model::CreateMonitoringScheduleRequest& request) const;
/**
* A Callable wrapper for CreateMonitoringSchedule that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateMonitoringScheduleRequestT = Model::CreateMonitoringScheduleRequest>
Model::CreateMonitoringScheduleOutcomeCallable CreateMonitoringScheduleCallable(const CreateMonitoringScheduleRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateMonitoringSchedule, request);
}
/**
* An Async wrapper for CreateMonitoringSchedule that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateMonitoringScheduleRequestT = Model::CreateMonitoringScheduleRequest>
void CreateMonitoringScheduleAsync(const CreateMonitoringScheduleRequestT& request, const CreateMonitoringScheduleResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateMonitoringSchedule, request, handler, context);
}
/**
* Creates a SageMaker notebook instance. A notebook instance is a machine
* learning (ML) compute instance running on a Jupyter notebook.
*
* In a CreateNotebookInstance request, specify the type of ML compute instance
* that you want to run. SageMaker launches the instance, installs common
* libraries that you can use to explore datasets for model training, and attaches
* an ML storage volume to the notebook instance.
*
* SageMaker also provides a set of example notebooks. Each notebook demonstrates
* how to use SageMaker with a specific algorithm or with a machine learning
* framework.
*
* After receiving the request, SageMaker does the following:
*
* - Creates a network interface in the SageMaker VPC.
*
* - (Option) If you specified SubnetId, SageMaker creates a network interface in
*   your own VPC, which is inferred from the subnet ID that you provide in the
*   input. When creating this network interface, SageMaker attaches the security
*   group that you specified in the request to the network interface that it
*   creates in your VPC.
*
* - Launches an EC2 instance of the type specified in the request in the
*   SageMaker VPC. If you specified SubnetId of your VPC, SageMaker specifies
*   both network interfaces when launching this instance. This enables inbound
*   traffic from your own VPC to the notebook instance, assuming that the
*   security groups allow it.
*
* After creating the notebook instance, SageMaker returns its Amazon Resource
* Name (ARN). You can't change the name of a notebook instance after you create
* it.
*
* After SageMaker creates the notebook instance, you can connect to the Jupyter
* server and work in Jupyter notebooks. For example, you can write code to
* explore a dataset that you can use for model training, train a model, host
* models by creating SageMaker endpoints, and validate hosted models.
*
* For more information, see How It Works.
*
* See Also: AWS API Reference
*/
virtual Model::CreateNotebookInstanceOutcome CreateNotebookInstance(const Model::CreateNotebookInstanceRequest& request) const;
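/*
* Usage sketch (illustrative): the minimal fields for a notebook instance. The
* instance name and role ARN are placeholders, and the InstanceType enum value is
* an assumption about the generated enum.
*
*   Aws::SageMaker::Model::CreateNotebookInstanceRequest request;
*   request.SetNotebookInstanceName("my-notebook");
*   request.SetInstanceType(Aws::SageMaker::Model::InstanceType::ml_t3_medium);
*   request.SetRoleArn("arn:aws:iam::123456789012:role/SageMakerExecutionRole");
*
*   auto outcome = client.CreateNotebookInstance(request);
*   if (outcome.IsSuccess())
*   {
*       // The response carries the new instance's ARN.
*       const Aws::String& arn = outcome.GetResult().GetNotebookInstanceArn();
*   }
*/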
/**
* A Callable wrapper for CreateNotebookInstance that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateNotebookInstanceRequestT = Model::CreateNotebookInstanceRequest>
Model::CreateNotebookInstanceOutcomeCallable CreateNotebookInstanceCallable(const CreateNotebookInstanceRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateNotebookInstance, request);
}
/**
* An Async wrapper for CreateNotebookInstance that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateNotebookInstanceRequestT = Model::CreateNotebookInstanceRequest>
void CreateNotebookInstanceAsync(const CreateNotebookInstanceRequestT& request, const CreateNotebookInstanceResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateNotebookInstance, request, handler, context);
}
/**
* Creates a lifecycle configuration that you can associate with a notebook
* instance. A lifecycle configuration is a collection of shell scripts that run
* when you create or start a notebook instance.
*
* Each lifecycle configuration script has a limit of 16384 characters.
*
* The value of the $PATH environment variable that is available to both scripts
* is /sbin:bin:/usr/sbin:/usr/bin.
*
* View CloudWatch Logs for notebook instance lifecycle configurations in log
* group /aws/sagemaker/NotebookInstances in log stream
* [notebook-instance-name]/[LifecycleConfigHook].
*
* Lifecycle configuration scripts cannot run for longer than 5 minutes. If a
* script runs for longer than 5 minutes, it fails and the notebook instance is
* not created or started.
*
* For information about notebook instance lifecycle configurations, see Step
* 2.1: (Optional) Customize a Notebook Instance.
*
* See Also: AWS API Reference
*/
virtual Model::CreateNotebookInstanceLifecycleConfigOutcome CreateNotebookInstanceLifecycleConfig(const Model::CreateNotebookInstanceLifecycleConfigRequest& request) const;
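/*
* Usage sketch (illustrative): registering an on-start script. The configuration
* name and script variable are placeholders, the script content must be supplied
* base64-encoded per the service contract, and the hook/member names are
* assumptions about the generated model classes.
*
*   Aws::SageMaker::Model::NotebookInstanceLifecycleHook onStart;
*   onStart.SetContent(base64EncodedScript);   // e.g. "echo hello" encoded with Aws::Utils::Base64
*
*   Aws::SageMaker::Model::CreateNotebookInstanceLifecycleConfigRequest request;
*   request.SetNotebookInstanceLifecycleConfigName("install-extensions");
*   request.AddOnStart(onStart);
*
*   auto outcome = client.CreateNotebookInstanceLifecycleConfig(request);
*/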
/**
* A Callable wrapper for CreateNotebookInstanceLifecycleConfig that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateNotebookInstanceLifecycleConfigRequestT = Model::CreateNotebookInstanceLifecycleConfigRequest>
Model::CreateNotebookInstanceLifecycleConfigOutcomeCallable CreateNotebookInstanceLifecycleConfigCallable(const CreateNotebookInstanceLifecycleConfigRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateNotebookInstanceLifecycleConfig, request);
}
/**
* An Async wrapper for CreateNotebookInstanceLifecycleConfig that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateNotebookInstanceLifecycleConfigRequestT = Model::CreateNotebookInstanceLifecycleConfigRequest>
void CreateNotebookInstanceLifecycleConfigAsync(const CreateNotebookInstanceLifecycleConfigRequestT& request, const CreateNotebookInstanceLifecycleConfigResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateNotebookInstanceLifecycleConfig, request, handler, context);
}
/**
* Creates a pipeline using a JSON pipeline definition.
*
* See Also: AWS API Reference
*/
virtual Model::CreatePipelineOutcome CreatePipeline(const Model::CreatePipelineRequest& request) const;
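/*
* Usage sketch (illustrative): creating a pipeline from an inline JSON definition
* string. The pipeline name, definition variable, and role ARN are placeholders;
* member names are assumptions about the generated request class.
*
*   Aws::SageMaker::Model::CreatePipelineRequest request;
*   request.SetPipelineName("my-pipeline");
*   request.SetPipelineDefinition(pipelineDefinitionJson);  // JSON pipeline definition
*   request.SetRoleArn("arn:aws:iam::123456789012:role/SageMakerPipelineRole");
*
*   auto outcome = client.CreatePipeline(request);
*/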
/**
* A Callable wrapper for CreatePipeline that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreatePipelineRequestT = Model::CreatePipelineRequest>
Model::CreatePipelineOutcomeCallable CreatePipelineCallable(const CreatePipelineRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreatePipeline, request);
}
/**
* An Async wrapper for CreatePipeline that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreatePipelineRequestT = Model::CreatePipelineRequest>
void CreatePipelineAsync(const CreatePipelineRequestT& request, const CreatePipelineResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreatePipeline, request, handler, context);
}
/**
* Creates a URL for a specified UserProfile in a Domain. When accessed in a web
* browser, the user will be automatically signed in to Amazon SageMaker Studio,
* and granted access to all of the Apps and files associated with the Domain's
* Amazon Elastic File System (EFS) volume. This operation can only be called when
* the authentication mode equals IAM.
*
* The IAM role or user passed to this API defines the permissions to access the
* app. Once the presigned URL is created, no additional permission is required to
* access this URL. IAM authorization policies for this API are also enforced for
* every HTTP request and WebSocket frame that attempts to connect to the app.
*
* You can restrict access to this API and to the URL that it returns to a list of
* IP addresses, Amazon VPCs, or Amazon VPC Endpoints that you specify. For more
* information, see Connect to SageMaker Studio Through an Interface VPC Endpoint.
*
* The URL that you get from a call to CreatePresignedDomainUrl has a default
* timeout of 5 minutes. You can configure this value using ExpiresInSeconds. If
* you try to use the URL after the timeout limit expires, you are directed to the
* Amazon Web Services console sign-in page.
*
* See Also: AWS API Reference
*/
virtual Model::CreatePresignedDomainUrlOutcome CreatePresignedDomainUrl(const Model::CreatePresignedDomainUrlRequest& request) const;
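/*
* Usage sketch (illustrative): requesting a presigned Studio URL with an explicit
* expiry, which the text above notes defaults to 5 minutes. The domain ID and
* user profile name are placeholders; member names are assumptions about the
* generated classes.
*
*   Aws::SageMaker::Model::CreatePresignedDomainUrlRequest request;
*   request.SetDomainId("d-xxxxxxxxxxxx");          // placeholder domain ID
*   request.SetUserProfileName("data-scientist-1");
*   request.SetExpiresInSeconds(300);
*
*   auto outcome = client.CreatePresignedDomainUrl(request);
*   if (outcome.IsSuccess())
*   {
*       // Hand the URL to the user; it is only valid for the configured window.
*       const Aws::String& url = outcome.GetResult().GetAuthorizedUrl();
*   }
*/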
/**
* A Callable wrapper for CreatePresignedDomainUrl that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreatePresignedDomainUrlRequestT = Model::CreatePresignedDomainUrlRequest>
Model::CreatePresignedDomainUrlOutcomeCallable CreatePresignedDomainUrlCallable(const CreatePresignedDomainUrlRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreatePresignedDomainUrl, request);
}
/**
* An Async wrapper for CreatePresignedDomainUrl that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreatePresignedDomainUrlRequestT = Model::CreatePresignedDomainUrlRequest>
void CreatePresignedDomainUrlAsync(const CreatePresignedDomainUrlRequestT& request, const CreatePresignedDomainUrlResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreatePresignedDomainUrl, request, handler, context);
}
/**
* Returns a URL that you can use to connect to the Jupyter server from a notebook
* instance. In the SageMaker console, when you choose Open next to a notebook
* instance, SageMaker opens a new tab showing the Jupyter server home page from
* the notebook instance. The console uses this API to get the URL and show the
* page.
*
* The IAM role or user used to call this API defines the permissions to access
* the notebook instance. Once the presigned URL is created, no additional
* permission is required to access this URL. IAM authorization policies for this
* API are also enforced for every HTTP request and WebSocket frame that attempts
* to connect to the notebook instance.
*
* You can restrict access to this API and to the URL that it returns to a list of
* IP addresses that you specify. Use the NotIpAddress condition operator and the
* aws:SourceIP condition context key to specify the list of IP addresses that you
* want to have access to the notebook instance. For more information, see Limit
* Access to a Notebook Instance by IP Address.
*
* The URL that you get from a call to CreatePresignedNotebookInstanceUrl is valid
* only for 5 minutes. If you try to use the URL after the 5-minute limit expires,
* you are directed to the Amazon Web Services console sign-in page.
*
* See Also: AWS API Reference
*/
virtual Model::CreatePresignedNotebookInstanceUrlOutcome CreatePresignedNotebookInstanceUrl(const Model::CreatePresignedNotebookInstanceUrlRequest& request) const;
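/*
* Usage sketch (illustrative): fetching the Jupyter URL for a notebook instance.
* The instance name is a placeholder; member names are assumptions about the
* generated classes.
*
*   Aws::SageMaker::Model::CreatePresignedNotebookInstanceUrlRequest request;
*   request.SetNotebookInstanceName("my-notebook");
*
*   auto outcome = client.CreatePresignedNotebookInstanceUrl(request);
*   if (outcome.IsSuccess())
*   {
*       // The returned URL is valid for 5 minutes, as noted above.
*       const Aws::String& url = outcome.GetResult().GetAuthorizedUrl();
*   }
*/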
/**
* A Callable wrapper for CreatePresignedNotebookInstanceUrl that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreatePresignedNotebookInstanceUrlRequestT = Model::CreatePresignedNotebookInstanceUrlRequest>
Model::CreatePresignedNotebookInstanceUrlOutcomeCallable CreatePresignedNotebookInstanceUrlCallable(const CreatePresignedNotebookInstanceUrlRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreatePresignedNotebookInstanceUrl, request);
}
/**
* An Async wrapper for CreatePresignedNotebookInstanceUrl that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreatePresignedNotebookInstanceUrlRequestT = Model::CreatePresignedNotebookInstanceUrlRequest>
void CreatePresignedNotebookInstanceUrlAsync(const CreatePresignedNotebookInstanceUrlRequestT& request, const CreatePresignedNotebookInstanceUrlResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreatePresignedNotebookInstanceUrl, request, handler, context);
}
/**
* Creates a processing job.
*
* See Also: AWS API Reference
*/
virtual Model::CreateProcessingJobOutcome CreateProcessingJob(const Model::CreateProcessingJobRequest& request) const;
/**
* A Callable wrapper for CreateProcessingJob that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateProcessingJobRequestT = Model::CreateProcessingJobRequest>
Model::CreateProcessingJobOutcomeCallable CreateProcessingJobCallable(const CreateProcessingJobRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateProcessingJob, request);
}
/**
* An Async wrapper for CreateProcessingJob that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateProcessingJobRequestT = Model::CreateProcessingJobRequest>
void CreateProcessingJobAsync(const CreateProcessingJobRequestT& request, const CreateProcessingJobResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateProcessingJob, request, handler, context);
}
/**
* Creates a machine learning (ML) project that can contain one or more templates
* that set up an ML pipeline from training to deploying an approved model.
*
* See Also: AWS API Reference
*/
virtual Model::CreateProjectOutcome CreateProject(const Model::CreateProjectRequest& request) const;
/**
* A Callable wrapper for CreateProject that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateProjectRequestT = Model::CreateProjectRequest>
Model::CreateProjectOutcomeCallable CreateProjectCallable(const CreateProjectRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateProject, request);
}
/**
* An Async wrapper for CreateProject that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateProjectRequestT = Model::CreateProjectRequest>
void CreateProjectAsync(const CreateProjectRequestT& request, const CreateProjectResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateProject, request, handler, context);
}
/**
* Creates a space used for real time collaboration in a Domain.
*
* See Also: AWS API Reference
*/
virtual Model::CreateSpaceOutcome CreateSpace(const Model::CreateSpaceRequest& request) const;
/**
* A Callable wrapper for CreateSpace that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateSpaceRequestT = Model::CreateSpaceRequest>
Model::CreateSpaceOutcomeCallable CreateSpaceCallable(const CreateSpaceRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateSpace, request);
}
/**
* An Async wrapper for CreateSpace that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateSpaceRequestT = Model::CreateSpaceRequest>
void CreateSpaceAsync(const CreateSpaceRequestT& request, const CreateSpaceResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateSpace, request, handler, context);
}
/**
* Creates a new Studio Lifecycle Configuration.
*
* See Also: AWS API Reference
*/
virtual Model::CreateStudioLifecycleConfigOutcome CreateStudioLifecycleConfig(const Model::CreateStudioLifecycleConfigRequest& request) const;
/**
* A Callable wrapper for CreateStudioLifecycleConfig that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateStudioLifecycleConfigRequestT = Model::CreateStudioLifecycleConfigRequest>
Model::CreateStudioLifecycleConfigOutcomeCallable CreateStudioLifecycleConfigCallable(const CreateStudioLifecycleConfigRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateStudioLifecycleConfig, request);
}
/**
* An Async wrapper for CreateStudioLifecycleConfig that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateStudioLifecycleConfigRequestT = Model::CreateStudioLifecycleConfigRequest>
void CreateStudioLifecycleConfigAsync(const CreateStudioLifecycleConfigRequestT& request, const CreateStudioLifecycleConfigResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateStudioLifecycleConfig, request, handler, context);
}
/**
* Starts a model training job. After training completes, SageMaker saves the
* resulting model artifacts to an Amazon S3 location that you specify.
*
* If you choose to host your model using SageMaker hosting services, you can use
* the resulting model artifacts as part of the model. You can also use the
* artifacts in a machine learning service other than SageMaker, provided that you
* know how to use them for inference.
*
* In the request body, you provide the following:
*
* - AlgorithmSpecification - Identifies the training algorithm to use.
*
* - HyperParameters - Specify these algorithm-specific parameters to enable the
*   estimation of model parameters during training. Hyperparameters can be tuned
*   to optimize this learning process. For a list of hyperparameters for each
*   training algorithm provided by SageMaker, see Algorithms.
*
*   Do not include any security-sensitive information, including account access
*   IDs, secrets, or tokens, in any hyperparameter field. If the use of
*   security-sensitive credentials is detected, SageMaker will reject your
*   training job request and return an exception error.
*
* - InputDataConfig - Describes the input required by the training job and the
*   Amazon S3, EFS, or FSx location where it is stored.
*
* - OutputDataConfig - Identifies the Amazon S3 bucket where you want SageMaker
*   to save the results of model training.
*
* - ResourceConfig - Identifies the resources, ML compute instances, and ML
*   storage volumes to deploy for model training. In distributed training, you
*   specify more than one instance.
*
* - EnableManagedSpotTraining - Optimize the cost of training machine learning
*   models by up to 80% by using Amazon EC2 Spot instances. For more information,
*   see Managed Spot Training.
*
* - RoleArn - The Amazon Resource Name (ARN) that SageMaker assumes to perform
*   tasks on your behalf during model training. You must grant this role the
*   necessary permissions so that SageMaker can successfully complete model
*   training.
*
* - StoppingCondition - To help cap training costs, use MaxRuntimeInSeconds to
*   set a time limit for training. Use MaxWaitTimeInSeconds to specify how long a
*   managed spot training job has to complete.
*
* - Environment - The environment variables to set in the Docker container.
*
* - RetryStrategy - The number of times to retry the job when the job fails due
*   to an InternalServerError.
*
* For more information about SageMaker, see How It Works.
*
* See Also: AWS API Reference
*/
virtual Model::CreateTrainingJobOutcome CreateTrainingJob(const Model::CreateTrainingJobRequest& request) const;
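/*
* Usage sketch (illustrative): the core request members listed above (input
* channels omitted for brevity). All URIs, names, and enum values are
* placeholders/assumptions about the generated model classes.
*
*   Aws::SageMaker::Model::AlgorithmSpecification algo;
*   algo.SetTrainingImage("123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training:latest");
*   algo.SetTrainingInputMode(Aws::SageMaker::Model::TrainingInputMode::File);
*
*   Aws::SageMaker::Model::ResourceConfig resources;
*   resources.SetInstanceType(Aws::SageMaker::Model::TrainingInstanceType::ml_m5_xlarge);
*   resources.SetInstanceCount(1);
*   resources.SetVolumeSizeInGB(50);
*
*   Aws::SageMaker::Model::OutputDataConfig output;
*   output.SetS3OutputPath("s3://my-bucket/training-output/");
*
*   Aws::SageMaker::Model::StoppingCondition stop;
*   stop.SetMaxRuntimeInSeconds(3600);
*
*   Aws::SageMaker::Model::CreateTrainingJobRequest request;
*   request.SetTrainingJobName("my-training-job");
*   request.SetAlgorithmSpecification(algo);
*   request.SetResourceConfig(resources);
*   request.SetOutputDataConfig(output);
*   request.SetStoppingCondition(stop);
*   request.SetRoleArn("arn:aws:iam::123456789012:role/SageMakerExecutionRole");
*
*   auto outcome = client.CreateTrainingJob(request);
*/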
/**
* A Callable wrapper for CreateTrainingJob that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateTrainingJobRequestT = Model::CreateTrainingJobRequest>
Model::CreateTrainingJobOutcomeCallable CreateTrainingJobCallable(const CreateTrainingJobRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateTrainingJob, request);
}
/**
* An Async wrapper for CreateTrainingJob that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateTrainingJobRequestT = Model::CreateTrainingJobRequest>
void CreateTrainingJobAsync(const CreateTrainingJobRequestT& request, const CreateTrainingJobResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateTrainingJob, request, handler, context);
}
/**
* Starts a transform job. A transform job uses a trained model to get inferences
* on a dataset and saves these results to an Amazon S3 location that you specify.
*
* To perform batch transformations, you create a transform job and use the data
* that you have readily available.
*
* In the request body, you provide the following:
*
* - TransformJobName - Identifies the transform job. The name must be unique
*   within an Amazon Web Services Region in an Amazon Web Services account.
*
* - ModelName - Identifies the model to use. ModelName must be the name of an
*   existing Amazon SageMaker model in the same Amazon Web Services Region and
*   Amazon Web Services account. For information on creating a model, see
*   CreateModel.
*
* - TransformInput - Describes the dataset to be transformed and the Amazon S3
*   location where it is stored.
*
* - TransformOutput - Identifies the Amazon S3 location where you want Amazon
*   SageMaker to save the results from the transform job.
*
* - TransformResources - Identifies the ML compute instances for the transform
*   job.
*
* For more information about how batch transformation works, see Batch
* Transform.
*
* See Also: AWS API Reference
*/
virtual Model::CreateTransformJobOutcome CreateTransformJob(const Model::CreateTransformJobRequest& request) const;
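/*
* Usage sketch (illustrative): the request members listed above. The S3 URIs and
* the job and model names are placeholders; enum and member names are assumptions
* about the generated model classes.
*
*   Aws::SageMaker::Model::TransformS3DataSource s3Source;
*   s3Source.SetS3DataType(Aws::SageMaker::Model::S3DataType::S3Prefix);
*   s3Source.SetS3Uri("s3://my-bucket/batch-input/");
*
*   Aws::SageMaker::Model::TransformDataSource dataSource;
*   dataSource.SetS3DataSource(s3Source);
*
*   Aws::SageMaker::Model::TransformInput input;
*   input.SetDataSource(dataSource);
*
*   Aws::SageMaker::Model::TransformOutput output;
*   output.SetS3OutputPath("s3://my-bucket/batch-output/");
*
*   Aws::SageMaker::Model::TransformResources resources;
*   resources.SetInstanceType(Aws::SageMaker::Model::TransformInstanceType::ml_m5_large);
*   resources.SetInstanceCount(1);
*
*   Aws::SageMaker::Model::CreateTransformJobRequest request;
*   request.SetTransformJobName("my-transform-job");
*   request.SetModelName("my-model");
*   request.SetTransformInput(input);
*   request.SetTransformOutput(output);
*   request.SetTransformResources(resources);
*
*   auto outcome = client.CreateTransformJob(request);
*/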
/**
* A Callable wrapper for CreateTransformJob that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateTransformJobRequestT = Model::CreateTransformJobRequest>
Model::CreateTransformJobOutcomeCallable CreateTransformJobCallable(const CreateTransformJobRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateTransformJob, request);
}
/**
* An Async wrapper for CreateTransformJob that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateTransformJobRequestT = Model::CreateTransformJobRequest>
void CreateTransformJobAsync(const CreateTransformJobRequestT& request, const CreateTransformJobResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateTransformJob, request, handler, context);
}
/**
* Creates a SageMaker trial. A trial is a set of steps called trial components
* that produce a machine learning model. A trial is part of a single SageMaker
* experiment.
*
* When you use SageMaker Studio or the SageMaker Python SDK, all experiments,
* trials, and trial components are automatically tracked, logged, and indexed.
* When you use the Amazon Web Services SDK for Python (Boto), you must use the
* logging APIs provided by the SDK.
*
* You can add tags to a trial and then use the Search API to search for the
* tags.
*
* To get a list of all your trials, call the ListTrials API. To view a trial's
* properties, call the DescribeTrial API. To create a trial component, call the
* CreateTrialComponent API.
*
* See Also: AWS API Reference
*/
virtual Model::CreateTrialOutcome CreateTrial(const Model::CreateTrialRequest& request) const;
/**
* A Callable wrapper for CreateTrial that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateTrialRequestT = Model::CreateTrialRequest>
Model::CreateTrialOutcomeCallable CreateTrialCallable(const CreateTrialRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateTrial, request);
}
/**
* An Async wrapper for CreateTrial that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateTrialRequestT = Model::CreateTrialRequest>
void CreateTrialAsync(const CreateTrialRequestT& request, const CreateTrialResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateTrial, request, handler, context);
}
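/**
* Illustrative usage sketch (not generated documentation): creating a trial inside an
* existing experiment. Both names are hypothetical placeholders.
* @code
* SageMakerClient client;
* Model::CreateTrialRequest request;
* request.SetTrialName("my-trial");                 // placeholder trial name
* request.SetExperimentName("my-experiment");       // experiment the trial belongs to
* auto outcome = client.CreateTrial(request);
* if (outcome.IsSuccess())
* {
*     // outcome.GetResult().GetTrialArn() identifies the new trial.
* }
* @endcode
*/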
/**
* Creates a trial component, which is a stage of a machine learning
* trial. A trial is composed of one or more trial components. A trial
* component can be used in multiple trials.
* Trial components include pre-processing jobs, training jobs, and batch
* transform jobs.
*
* When you use SageMaker Studio or the SageMaker Python SDK, all experiments,
* trials, and trial components are automatically tracked, logged, and indexed.
* When you use the Amazon Web Services SDK for Python (Boto), you must use the
* logging APIs provided by the SDK.
*
* You can add tags to a trial component and then use the Search API to search
* for the tags.
*
* See Also: AWS API Reference
*/
virtual Model::CreateTrialComponentOutcome CreateTrialComponent(const Model::CreateTrialComponentRequest& request) const;
/**
* A Callable wrapper for CreateTrialComponent that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateTrialComponentRequestT = Model::CreateTrialComponentRequest>
Model::CreateTrialComponentOutcomeCallable CreateTrialComponentCallable(const CreateTrialComponentRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateTrialComponent, request);
}
/**
* An Async wrapper for CreateTrialComponent that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateTrialComponentRequestT = Model::CreateTrialComponentRequest>
void CreateTrialComponentAsync(const CreateTrialComponentRequestT& request, const CreateTrialComponentResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateTrialComponent, request, handler, context);
}
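/**
* Illustrative usage sketch (not generated documentation): creating a standalone trial
* component that can later be associated with one or more trials. Names are hypothetical.
* @code
* SageMakerClient client;
* Model::CreateTrialComponentRequest request;
* request.SetTrialComponentName("my-preprocessing-step");  // placeholder component name
* request.SetDisplayName("Preprocessing");                 // optional friendly name
* auto outcome = client.CreateTrialComponent(request);
* @endcode
*/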
/**
* Creates a user profile. A user profile represents a single user within a
* domain, and is the main way to reference a "person" for the purposes of sharing,
* reporting, and other user-oriented features. This entity is created when a user
* onboards to Amazon SageMaker Studio. If an administrator invites a person by
* email or imports them from IAM Identity Center, a user profile is automatically
* created. A user profile is the primary holder of settings for an individual user
* and has a reference to the user's private Amazon Elastic File System (EFS) home
* directory.
*
* See Also: AWS API Reference
*/
virtual Model::CreateUserProfileOutcome CreateUserProfile(const Model::CreateUserProfileRequest& request) const;
/**
* A Callable wrapper for CreateUserProfile that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateUserProfileRequestT = Model::CreateUserProfileRequest>
Model::CreateUserProfileOutcomeCallable CreateUserProfileCallable(const CreateUserProfileRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateUserProfile, request);
}
/**
* An Async wrapper for CreateUserProfile that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateUserProfileRequestT = Model::CreateUserProfileRequest>
void CreateUserProfileAsync(const CreateUserProfileRequestT& request, const CreateUserProfileResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateUserProfile, request, handler, context);
}
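/**
* Illustrative usage sketch (not generated documentation): adding a user profile to an
* existing domain. The domain ID and profile name are hypothetical placeholders.
* @code
* SageMakerClient client;
* Model::CreateUserProfileRequest request;
* request.SetDomainId("d-xxxxxxxxxxxx");            // existing domain ID (placeholder)
* request.SetUserProfileName("data-scientist-1");   // placeholder profile name
* auto outcome = client.CreateUserProfile(request);
* @endcode
*/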
/**
* Use this operation to create a workforce. This operation will return an error
* if a workforce already exists in the Amazon Web Services Region that you
* specify. You can only create one workforce in each Amazon Web Services Region
* per Amazon Web Services account.
* If you want to create a new workforce in an Amazon Web Services Region where
* a workforce already exists, use the DeleteWorkforce API operation to delete
* the existing workforce and then use CreateWorkforce to create a new
* workforce.
*
* To create a private workforce using Amazon Cognito, you must specify a
* Cognito user pool in CognitoConfig. You can also create an Amazon Cognito
* workforce using the Amazon SageMaker console. For more information, see
* Create a Private Workforce (Amazon Cognito).
*
* To create a private workforce using your own OIDC Identity Provider (IdP),
* specify your IdP configuration in OidcConfig. Your OIDC IdP must support
* groups because groups are used by Ground Truth and Amazon A2I to create work
* teams. For more information, see Create a Private Workforce (OIDC IdP).
*
* See Also: AWS API Reference
*/
virtual Model::CreateWorkforceOutcome CreateWorkforce(const Model::CreateWorkforceRequest& request) const;
/**
* A Callable wrapper for CreateWorkforce that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateWorkforceRequestT = Model::CreateWorkforceRequest>
Model::CreateWorkforceOutcomeCallable CreateWorkforceCallable(const CreateWorkforceRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateWorkforce, request);
}
/**
* An Async wrapper for CreateWorkforce that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateWorkforceRequestT = Model::CreateWorkforceRequest>
void CreateWorkforceAsync(const CreateWorkforceRequestT& request, const CreateWorkforceResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateWorkforce, request, handler, context);
}
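/**
* Illustrative usage sketch (not generated documentation): creating a private workforce
* backed by an existing Amazon Cognito user pool. The pool ID, app client ID, and workforce
* name are hypothetical placeholders; an OIDC IdP would use SetOidcConfig instead.
* @code
* SageMakerClient client;
* Model::CognitoConfig cognito;
* cognito.SetUserPool("us-west-2_EXAMPLE");         // placeholder user pool ID
* cognito.SetClientId("exampleclientid123456");     // placeholder app client ID
*
* Model::CreateWorkforceRequest request;
* request.SetWorkforceName("my-private-workforce");
* request.SetCognitoConfig(cognito);
* auto outcome = client.CreateWorkforce(request);
* @endcode
*/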
/**
* Creates a new work team for labeling your data. A work team is defined by one
* or more Amazon Cognito user pools. You must first create the user pools before
* you can create a work team.
* You cannot create more than 25 work teams in an account and region.
*
* See Also: AWS API Reference
*/
virtual Model::CreateWorkteamOutcome CreateWorkteam(const Model::CreateWorkteamRequest& request) const;
/**
* A Callable wrapper for CreateWorkteam that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename CreateWorkteamRequestT = Model::CreateWorkteamRequest>
Model::CreateWorkteamOutcomeCallable CreateWorkteamCallable(const CreateWorkteamRequestT& request) const
{
return SubmitCallable(&SageMakerClient::CreateWorkteam, request);
}
/**
* An Async wrapper for CreateWorkteam that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename CreateWorkteamRequestT = Model::CreateWorkteamRequest>
void CreateWorkteamAsync(const CreateWorkteamRequestT& request, const CreateWorkteamResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::CreateWorkteam, request, handler, context);
}
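/**
* Illustrative usage sketch (not generated documentation): creating a work team whose
* members come from a group in an existing Cognito user pool. The pool, group, client ID,
* and team names are hypothetical placeholders, and the accessors used are the assumed
* generated setters for these shapes.
* @code
* SageMakerClient client;
* Model::CognitoMemberDefinition cognitoMembers;
* cognitoMembers.SetUserPool("us-west-2_EXAMPLE");      // placeholder user pool ID
* cognitoMembers.SetUserGroup("labelers");              // placeholder user group
* cognitoMembers.SetClientId("exampleclientid123456");  // placeholder app client ID
*
* Model::MemberDefinition member;
* member.SetCognitoMemberDefinition(cognitoMembers);
*
* Model::CreateWorkteamRequest request;
* request.SetWorkteamName("my-work-team");
* request.SetDescription("Private labeling team");
* request.AddMemberDefinitions(member);
* auto outcome = client.CreateWorkteam(request);
* @endcode
*/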
/**
* Deletes an action.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteActionOutcome DeleteAction(const Model::DeleteActionRequest& request) const;
/**
* A Callable wrapper for DeleteAction that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteActionRequestT = Model::DeleteActionRequest>
Model::DeleteActionOutcomeCallable DeleteActionCallable(const DeleteActionRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteAction, request);
}
/**
* An Async wrapper for DeleteAction that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteActionRequestT = Model::DeleteActionRequest>
void DeleteActionAsync(const DeleteActionRequestT& request, const DeleteActionResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteAction, request, handler, context);
}
/**
* Removes the specified algorithm from your account.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteAlgorithmOutcome DeleteAlgorithm(const Model::DeleteAlgorithmRequest& request) const;
/**
* A Callable wrapper for DeleteAlgorithm that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteAlgorithmRequestT = Model::DeleteAlgorithmRequest>
Model::DeleteAlgorithmOutcomeCallable DeleteAlgorithmCallable(const DeleteAlgorithmRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteAlgorithm, request);
}
/**
* An Async wrapper for DeleteAlgorithm that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteAlgorithmRequestT = Model::DeleteAlgorithmRequest>
void DeleteAlgorithmAsync(const DeleteAlgorithmRequestT& request, const DeleteAlgorithmResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteAlgorithm, request, handler, context);
}
/**
* Used to stop and delete an app.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteAppOutcome DeleteApp(const Model::DeleteAppRequest& request) const;
/**
* A Callable wrapper for DeleteApp that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteAppRequestT = Model::DeleteAppRequest>
Model::DeleteAppOutcomeCallable DeleteAppCallable(const DeleteAppRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteApp, request);
}
/**
* An Async wrapper for DeleteApp that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteAppRequestT = Model::DeleteAppRequest>
void DeleteAppAsync(const DeleteAppRequestT& request, const DeleteAppResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteApp, request, handler, context);
}
/**
* Deletes an AppImageConfig.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteAppImageConfigOutcome DeleteAppImageConfig(const Model::DeleteAppImageConfigRequest& request) const;
/**
* A Callable wrapper for DeleteAppImageConfig that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteAppImageConfigRequestT = Model::DeleteAppImageConfigRequest>
Model::DeleteAppImageConfigOutcomeCallable DeleteAppImageConfigCallable(const DeleteAppImageConfigRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteAppImageConfig, request);
}
/**
* An Async wrapper for DeleteAppImageConfig that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteAppImageConfigRequestT = Model::DeleteAppImageConfigRequest>
void DeleteAppImageConfigAsync(const DeleteAppImageConfigRequestT& request, const DeleteAppImageConfigResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteAppImageConfig, request, handler, context);
}
/**
* Deletes an artifact. Either ArtifactArn or Source must be specified.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteArtifactOutcome DeleteArtifact(const Model::DeleteArtifactRequest& request) const;
/**
* A Callable wrapper for DeleteArtifact that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteArtifactRequestT = Model::DeleteArtifactRequest>
Model::DeleteArtifactOutcomeCallable DeleteArtifactCallable(const DeleteArtifactRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteArtifact, request);
}
/**
* An Async wrapper for DeleteArtifact that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteArtifactRequestT = Model::DeleteArtifactRequest>
void DeleteArtifactAsync(const DeleteArtifactRequestT& request, const DeleteArtifactResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteArtifact, request, handler, context);
}
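/**
* Illustrative usage sketch (not generated documentation): deleting an artifact by ARN.
* The ARN is a hypothetical placeholder; alternatively, identify the artifact by calling
* SetSource on the request instead, since exactly one of the two must be provided.
* @code
* SageMakerClient client;
* Model::DeleteArtifactRequest request;
* request.SetArtifactArn("arn:aws:sagemaker:us-west-2:111122223333:artifact/example");
* auto outcome = client.DeleteArtifact(request);
* @endcode
*/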
/**
* Deletes an association.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteAssociationOutcome DeleteAssociation(const Model::DeleteAssociationRequest& request) const;
/**
* A Callable wrapper for DeleteAssociation that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteAssociationRequestT = Model::DeleteAssociationRequest>
Model::DeleteAssociationOutcomeCallable DeleteAssociationCallable(const DeleteAssociationRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteAssociation, request);
}
/**
* An Async wrapper for DeleteAssociation that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteAssociationRequestT = Model::DeleteAssociationRequest>
void DeleteAssociationAsync(const DeleteAssociationRequestT& request, const DeleteAssociationResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteAssociation, request, handler, context);
}
/**
* Deletes the specified Git repository from your account.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteCodeRepositoryOutcome DeleteCodeRepository(const Model::DeleteCodeRepositoryRequest& request) const;
/**
* A Callable wrapper for DeleteCodeRepository that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteCodeRepositoryRequestT = Model::DeleteCodeRepositoryRequest>
Model::DeleteCodeRepositoryOutcomeCallable DeleteCodeRepositoryCallable(const DeleteCodeRepositoryRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteCodeRepository, request);
}
/**
* An Async wrapper for DeleteCodeRepository that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteCodeRepositoryRequestT = Model::DeleteCodeRepositoryRequest>
void DeleteCodeRepositoryAsync(const DeleteCodeRepositoryRequestT& request, const DeleteCodeRepositoryResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteCodeRepository, request, handler, context);
}
/**
* Deletes a context.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteContextOutcome DeleteContext(const Model::DeleteContextRequest& request) const;
/**
* A Callable wrapper for DeleteContext that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteContextRequestT = Model::DeleteContextRequest>
Model::DeleteContextOutcomeCallable DeleteContextCallable(const DeleteContextRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteContext, request);
}
/**
* An Async wrapper for DeleteContext that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteContextRequestT = Model::DeleteContextRequest>
void DeleteContextAsync(const DeleteContextRequestT& request, const DeleteContextResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteContext, request, handler, context);
}
/**
* Deletes a data quality monitoring job definition.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteDataQualityJobDefinitionOutcome DeleteDataQualityJobDefinition(const Model::DeleteDataQualityJobDefinitionRequest& request) const;
/**
* A Callable wrapper for DeleteDataQualityJobDefinition that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteDataQualityJobDefinitionRequestT = Model::DeleteDataQualityJobDefinitionRequest>
Model::DeleteDataQualityJobDefinitionOutcomeCallable DeleteDataQualityJobDefinitionCallable(const DeleteDataQualityJobDefinitionRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteDataQualityJobDefinition, request);
}
/**
* An Async wrapper for DeleteDataQualityJobDefinition that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteDataQualityJobDefinitionRequestT = Model::DeleteDataQualityJobDefinitionRequest>
void DeleteDataQualityJobDefinitionAsync(const DeleteDataQualityJobDefinitionRequestT& request, const DeleteDataQualityJobDefinitionResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteDataQualityJobDefinition, request, handler, context);
}
/**
* Deletes a fleet.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteDeviceFleetOutcome DeleteDeviceFleet(const Model::DeleteDeviceFleetRequest& request) const;
/**
* A Callable wrapper for DeleteDeviceFleet that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteDeviceFleetRequestT = Model::DeleteDeviceFleetRequest>
Model::DeleteDeviceFleetOutcomeCallable DeleteDeviceFleetCallable(const DeleteDeviceFleetRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteDeviceFleet, request);
}
/**
* An Async wrapper for DeleteDeviceFleet that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteDeviceFleetRequestT = Model::DeleteDeviceFleetRequest>
void DeleteDeviceFleetAsync(const DeleteDeviceFleetRequestT& request, const DeleteDeviceFleetResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteDeviceFleet, request, handler, context);
}
/**
* Used to delete a domain. If you onboarded with IAM mode, you will need to
* delete your domain to onboard again using IAM Identity Center. Use with caution.
* All of the members of the domain will lose access to their EFS volume, including
* data, notebooks, and other artifacts.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteDomainOutcome DeleteDomain(const Model::DeleteDomainRequest& request) const;
/**
* A Callable wrapper for DeleteDomain that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteDomainRequestT = Model::DeleteDomainRequest>
Model::DeleteDomainOutcomeCallable DeleteDomainCallable(const DeleteDomainRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteDomain, request);
}
/**
* An Async wrapper for DeleteDomain that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteDomainRequestT = Model::DeleteDomainRequest>
void DeleteDomainAsync(const DeleteDomainRequestT& request, const DeleteDomainResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteDomain, request, handler, context);
}
/**
* Deletes an edge deployment plan if (and only if) all the stages in the plan
* are inactive or there are no stages in the plan.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteEdgeDeploymentPlanOutcome DeleteEdgeDeploymentPlan(const Model::DeleteEdgeDeploymentPlanRequest& request) const;
/**
* A Callable wrapper for DeleteEdgeDeploymentPlan that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteEdgeDeploymentPlanRequestT = Model::DeleteEdgeDeploymentPlanRequest>
Model::DeleteEdgeDeploymentPlanOutcomeCallable DeleteEdgeDeploymentPlanCallable(const DeleteEdgeDeploymentPlanRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteEdgeDeploymentPlan, request);
}
/**
* An Async wrapper for DeleteEdgeDeploymentPlan that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteEdgeDeploymentPlanRequestT = Model::DeleteEdgeDeploymentPlanRequest>
void DeleteEdgeDeploymentPlanAsync(const DeleteEdgeDeploymentPlanRequestT& request, const DeleteEdgeDeploymentPlanResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteEdgeDeploymentPlan, request, handler, context);
}
/**
* Delete a stage in an edge deployment plan if (and only if) the stage is
* inactive.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteEdgeDeploymentStageOutcome DeleteEdgeDeploymentStage(const Model::DeleteEdgeDeploymentStageRequest& request) const;
/**
* A Callable wrapper for DeleteEdgeDeploymentStage that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteEdgeDeploymentStageRequestT = Model::DeleteEdgeDeploymentStageRequest>
Model::DeleteEdgeDeploymentStageOutcomeCallable DeleteEdgeDeploymentStageCallable(const DeleteEdgeDeploymentStageRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteEdgeDeploymentStage, request);
}
/**
* An Async wrapper for DeleteEdgeDeploymentStage that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteEdgeDeploymentStageRequestT = Model::DeleteEdgeDeploymentStageRequest>
void DeleteEdgeDeploymentStageAsync(const DeleteEdgeDeploymentStageRequestT& request, const DeleteEdgeDeploymentStageResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteEdgeDeploymentStage, request, handler, context);
}
/**
* Deletes an endpoint. SageMaker frees up all of the resources that were
* deployed when the endpoint was created.
* SageMaker retires any custom KMS key grants associated with the endpoint,
* meaning you don't need to use the RevokeGrant API call.
*
* When you delete your endpoint, SageMaker asynchronously deletes associated
* endpoint resources such as KMS key grants. You might still see these
* resources in your account for a few minutes after deleting your endpoint. Do
* not delete or revoke the permissions for your ExecutionRoleArn, otherwise
* SageMaker cannot delete these resources.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteEndpointOutcome DeleteEndpoint(const Model::DeleteEndpointRequest& request) const;
/**
* A Callable wrapper for DeleteEndpoint that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteEndpointRequestT = Model::DeleteEndpointRequest>
Model::DeleteEndpointOutcomeCallable DeleteEndpointCallable(const DeleteEndpointRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteEndpoint, request);
}
/**
* An Async wrapper for DeleteEndpoint that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteEndpointRequestT = Model::DeleteEndpointRequest>
void DeleteEndpointAsync(const DeleteEndpointRequestT& request, const DeleteEndpointResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteEndpoint, request, handler, context);
}
/**
* Deletes an endpoint configuration. The DeleteEndpointConfig API deletes only
* the specified configuration. It does not delete endpoints created using the
* configuration.
*
* You must not delete an EndpointConfig in use by an endpoint that is live or
* while the UpdateEndpoint or CreateEndpoint operations are being performed on
* the endpoint. If you delete the EndpointConfig of an endpoint that is active
* or being created or updated, you may lose visibility into the instance type
* the endpoint is using. The endpoint must be deleted in order to stop
* incurring charges.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteEndpointConfigOutcome DeleteEndpointConfig(const Model::DeleteEndpointConfigRequest& request) const;
/**
* A Callable wrapper for DeleteEndpointConfig that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteEndpointConfigRequestT = Model::DeleteEndpointConfigRequest>
Model::DeleteEndpointConfigOutcomeCallable DeleteEndpointConfigCallable(const DeleteEndpointConfigRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteEndpointConfig, request);
}
/**
* An Async wrapper for DeleteEndpointConfig that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteEndpointConfigRequestT = Model::DeleteEndpointConfigRequest>
void DeleteEndpointConfigAsync(const DeleteEndpointConfigRequestT& request, const DeleteEndpointConfigResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteEndpointConfig, request, handler, context);
}
/**
* Deletes a SageMaker experiment. All trials associated with the experiment
* must be deleted first. Use the ListTrials API to get a list of the trials
* associated with the experiment.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteExperimentOutcome DeleteExperiment(const Model::DeleteExperimentRequest& request) const;
/**
* A Callable wrapper for DeleteExperiment that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteExperimentRequestT = Model::DeleteExperimentRequest>
Model::DeleteExperimentOutcomeCallable DeleteExperimentCallable(const DeleteExperimentRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteExperiment, request);
}
/**
* An Async wrapper for DeleteExperiment that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteExperimentRequestT = Model::DeleteExperimentRequest>
void DeleteExperimentAsync(const DeleteExperimentRequestT& request, const DeleteExperimentResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteExperiment, request, handler, context);
}
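/**
* Illustrative usage sketch (not generated documentation): the clean-up order described
* above, listing an experiment's trials and deleting them before the experiment itself.
* The experiment name is a hypothetical placeholder; ListTrials pagination and the
* deletion or disassociation of each trial's components are omitted for brevity.
* @code
* SageMakerClient client;
* Model::ListTrialsRequest listTrials;
* listTrials.SetExperimentName("my-experiment");    // placeholder experiment name
* auto listOutcome = client.ListTrials(listTrials);
* if (listOutcome.IsSuccess())
* {
*     for (const auto& summary : listOutcome.GetResult().GetTrialSummaries())
*     {
*         Model::DeleteTrialRequest deleteTrial;
*         deleteTrial.SetTrialName(summary.GetTrialName());
*         client.DeleteTrial(deleteTrial);
*     }
*     Model::DeleteExperimentRequest deleteExperiment;
*     deleteExperiment.SetExperimentName("my-experiment");
*     client.DeleteExperiment(deleteExperiment);
* }
* @endcode
*/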
/**
* Delete the FeatureGroup and any data that was written to the OnlineStore of
* the FeatureGroup. Data cannot be accessed from the OnlineStore immediately
* after DeleteFeatureGroup is called.
*
* Data written into the OfflineStore will not be deleted. The Amazon Web
* Services Glue database and tables that are automatically created for your
* OfflineStore are not deleted.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteFeatureGroupOutcome DeleteFeatureGroup(const Model::DeleteFeatureGroupRequest& request) const;
/**
* A Callable wrapper for DeleteFeatureGroup that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteFeatureGroupRequestT = Model::DeleteFeatureGroupRequest>
Model::DeleteFeatureGroupOutcomeCallable DeleteFeatureGroupCallable(const DeleteFeatureGroupRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteFeatureGroup, request);
}
/**
* An Async wrapper for DeleteFeatureGroup that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteFeatureGroupRequestT = Model::DeleteFeatureGroupRequest>
void DeleteFeatureGroupAsync(const DeleteFeatureGroupRequestT& request, const DeleteFeatureGroupResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteFeatureGroup, request, handler, context);
}
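/**
* Illustrative usage sketch (not generated documentation): deleting a feature group by
* name (placeholder shown). As noted above, OfflineStore data and the automatically
* created Glue database and tables are left in place and must be cleaned up separately.
* @code
* SageMakerClient client;
* Model::DeleteFeatureGroupRequest request;
* request.SetFeatureGroupName("my-feature-group");  // placeholder name
* auto outcome = client.DeleteFeatureGroup(request);
* @endcode
*/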
/**
* Deletes the specified flow definition.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteFlowDefinitionOutcome DeleteFlowDefinition(const Model::DeleteFlowDefinitionRequest& request) const;
/**
* A Callable wrapper for DeleteFlowDefinition that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteFlowDefinitionRequestT = Model::DeleteFlowDefinitionRequest>
Model::DeleteFlowDefinitionOutcomeCallable DeleteFlowDefinitionCallable(const DeleteFlowDefinitionRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteFlowDefinition, request);
}
/**
* An Async wrapper for DeleteFlowDefinition that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteFlowDefinitionRequestT = Model::DeleteFlowDefinitionRequest>
void DeleteFlowDefinitionAsync(const DeleteFlowDefinitionRequestT& request, const DeleteFlowDefinitionResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteFlowDefinition, request, handler, context);
}
/**
* Delete a hub.
*
* Hub APIs are only callable through SageMaker Studio.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteHubOutcome DeleteHub(const Model::DeleteHubRequest& request) const;
/**
* A Callable wrapper for DeleteHub that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteHubRequestT = Model::DeleteHubRequest>
Model::DeleteHubOutcomeCallable DeleteHubCallable(const DeleteHubRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteHub, request);
}
/**
* An Async wrapper for DeleteHub that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteHubRequestT = Model::DeleteHubRequest>
void DeleteHubAsync(const DeleteHubRequestT& request, const DeleteHubResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteHub, request, handler, context);
}
/**
* Delete the contents of a hub.
*
* Hub APIs are only callable through SageMaker Studio.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteHubContentOutcome DeleteHubContent(const Model::DeleteHubContentRequest& request) const;
/**
* A Callable wrapper for DeleteHubContent that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteHubContentRequestT = Model::DeleteHubContentRequest>
Model::DeleteHubContentOutcomeCallable DeleteHubContentCallable(const DeleteHubContentRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteHubContent, request);
}
/**
* An Async wrapper for DeleteHubContent that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteHubContentRequestT = Model::DeleteHubContentRequest>
void DeleteHubContentAsync(const DeleteHubContentRequestT& request, const DeleteHubContentResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteHubContent, request, handler, context);
}
/**
* Use this operation to delete a human task user interface (worker task
* template).
* To see a list of human task user interfaces (worker task templates) in your
* account, use ListHumanTaskUis. When you delete a worker task template, it no
* longer appears when you call ListHumanTaskUis.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteHumanTaskUiOutcome DeleteHumanTaskUi(const Model::DeleteHumanTaskUiRequest& request) const;
/**
* A Callable wrapper for DeleteHumanTaskUi that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteHumanTaskUiRequestT = Model::DeleteHumanTaskUiRequest>
Model::DeleteHumanTaskUiOutcomeCallable DeleteHumanTaskUiCallable(const DeleteHumanTaskUiRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteHumanTaskUi, request);
}
/**
* An Async wrapper for DeleteHumanTaskUi that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteHumanTaskUiRequestT = Model::DeleteHumanTaskUiRequest>
void DeleteHumanTaskUiAsync(const DeleteHumanTaskUiRequestT& request, const DeleteHumanTaskUiResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteHumanTaskUi, request, handler, context);
}
/**
* Deletes a SageMaker image and all versions of the image. The container images
* aren't deleted.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteImageOutcome DeleteImage(const Model::DeleteImageRequest& request) const;
/**
* A Callable wrapper for DeleteImage that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteImageRequestT = Model::DeleteImageRequest>
Model::DeleteImageOutcomeCallable DeleteImageCallable(const DeleteImageRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteImage, request);
}
/**
* An Async wrapper for DeleteImage that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteImageRequestT = Model::DeleteImageRequest>
void DeleteImageAsync(const DeleteImageRequestT& request, const DeleteImageResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteImage, request, handler, context);
}
/**
* Deletes a version of a SageMaker image. The container image the version
* represents isn't deleted.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteImageVersionOutcome DeleteImageVersion(const Model::DeleteImageVersionRequest& request) const;
/**
* A Callable wrapper for DeleteImageVersion that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteImageVersionRequestT = Model::DeleteImageVersionRequest>
Model::DeleteImageVersionOutcomeCallable DeleteImageVersionCallable(const DeleteImageVersionRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteImageVersion, request);
}
/**
* An Async wrapper for DeleteImageVersion that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteImageVersionRequestT = Model::DeleteImageVersionRequest>
void DeleteImageVersionAsync(const DeleteImageVersionRequestT& request, const DeleteImageVersionResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteImageVersion, request, handler, context);
}
/**
* Deletes an inference experiment.
*
* This operation does not delete your endpoint, variants, or any underlying
* resources. This operation only deletes the metadata of your experiment.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteInferenceExperimentOutcome DeleteInferenceExperiment(const Model::DeleteInferenceExperimentRequest& request) const;
/**
* A Callable wrapper for DeleteInferenceExperiment that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteInferenceExperimentRequestT = Model::DeleteInferenceExperimentRequest>
Model::DeleteInferenceExperimentOutcomeCallable DeleteInferenceExperimentCallable(const DeleteInferenceExperimentRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteInferenceExperiment, request);
}
/**
* An Async wrapper for DeleteInferenceExperiment that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteInferenceExperimentRequestT = Model::DeleteInferenceExperimentRequest>
void DeleteInferenceExperimentAsync(const DeleteInferenceExperimentRequestT& request, const DeleteInferenceExperimentResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteInferenceExperiment, request, handler, context);
}
/**
* Deletes a model. The DeleteModel API deletes only the model entry that was
* created in SageMaker when you called the CreateModel API. It does not delete
* model artifacts, inference code, or the IAM role that you specified when
* creating the model.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteModelOutcome DeleteModel(const Model::DeleteModelRequest& request) const;
/**
* A Callable wrapper for DeleteModel that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteModelRequestT = Model::DeleteModelRequest>
Model::DeleteModelOutcomeCallable DeleteModelCallable(const DeleteModelRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteModel, request);
}
/**
* An Async wrapper for DeleteModel that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteModelRequestT = Model::DeleteModelRequest>
void DeleteModelAsync(const DeleteModelRequestT& request, const DeleteModelResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteModel, request, handler, context);
}
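/**
* Illustrative usage sketch (not generated documentation): deleting a model entry by name
* (placeholder shown). As described above, the model artifacts in S3, the inference code,
* and the IAM role are not removed by this call.
* @code
* SageMakerClient client;
* Model::DeleteModelRequest request;
* request.SetModelName("my-model");                 // placeholder model name
* auto outcome = client.DeleteModel(request);
* @endcode
*/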
/**
* Deletes an Amazon SageMaker model bias job definition.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteModelBiasJobDefinitionOutcome DeleteModelBiasJobDefinition(const Model::DeleteModelBiasJobDefinitionRequest& request) const;
/**
* A Callable wrapper for DeleteModelBiasJobDefinition that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteModelBiasJobDefinitionRequestT = Model::DeleteModelBiasJobDefinitionRequest>
Model::DeleteModelBiasJobDefinitionOutcomeCallable DeleteModelBiasJobDefinitionCallable(const DeleteModelBiasJobDefinitionRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteModelBiasJobDefinition, request);
}
/**
* An Async wrapper for DeleteModelBiasJobDefinition that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteModelBiasJobDefinitionRequestT = Model::DeleteModelBiasJobDefinitionRequest>
void DeleteModelBiasJobDefinitionAsync(const DeleteModelBiasJobDefinitionRequestT& request, const DeleteModelBiasJobDefinitionResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteModelBiasJobDefinition, request, handler, context);
}
/**
* Deletes an Amazon SageMaker Model Card.
*
* See Also: AWS API Reference
*/
virtual Model::DeleteModelCardOutcome DeleteModelCard(const Model::DeleteModelCardRequest& request) const;
/**
* A Callable wrapper for DeleteModelCard that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template<typename DeleteModelCardRequestT = Model::DeleteModelCardRequest>
Model::DeleteModelCardOutcomeCallable DeleteModelCardCallable(const DeleteModelCardRequestT& request) const
{
return SubmitCallable(&SageMakerClient::DeleteModelCard, request);
}
/**
* An Async wrapper for DeleteModelCard that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template<typename DeleteModelCardRequestT = Model::DeleteModelCardRequest>
void DeleteModelCardAsync(const DeleteModelCardRequestT& request, const DeleteModelCardResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&SageMakerClient::DeleteModelCard, request, handler, context);
}
/**
*