& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::CreateDataSourceFromRDS, request, handler, context);
}
/**
 * <p>Creates a <code>DataSource</code> from a database hosted on an Amazon
 * Redshift cluster. A <code>DataSource</code> references data that can be used to
 * perform either <code>CreateMLModel</code>, <code>CreateEvaluation</code>, or
 * <code>CreateBatchPrediction</code> operations.</p>
 * <p><code>CreateDataSourceFromRedshift</code> is an asynchronous operation. In
 * response to <code>CreateDataSourceFromRedshift</code>, Amazon Machine Learning
 * (Amazon ML) immediately returns and sets the <code>DataSource</code> status to
 * <code>PENDING</code>. After the <code>DataSource</code> is created and ready for
 * use, Amazon ML sets the <code>Status</code> parameter to <code>COMPLETED</code>.
 * <code>DataSource</code> in <code>COMPLETED</code> or <code>PENDING</code> states
 * can be used to perform only <code>CreateMLModel</code>,
 * <code>CreateEvaluation</code>, or <code>CreateBatchPrediction</code>
 * operations.</p>
 * <p>If Amazon ML can't accept the input source, it sets the <code>Status</code>
 * parameter to <code>FAILED</code> and includes an error message in the
 * <code>Message</code> attribute of the <code>GetDataSource</code> operation
 * response.</p>
 * <p>The observations should be contained in the database hosted on an Amazon
 * Redshift cluster and should be specified by a <code>SelectSqlQuery</code>
 * query. Amazon ML executes an <code>Unload</code> command in Amazon Redshift to
 * transfer the result set of the <code>SelectSqlQuery</code> query to
 * <code>S3StagingLocation</code>.</p>
 * <p>After the <code>DataSource</code> has been created, it's ready for use in
 * evaluations and batch predictions. If you plan to use the
 * <code>DataSource</code> to train an <code>MLModel</code>, the
 * <code>DataSource</code> also requires a recipe. A recipe describes how each
 * input variable will be used in training an <code>MLModel</code>. Will the
 * variable be included or excluded from training? Will the variable be
 * manipulated; for example, will it be combined with another variable, or will it
 * be split apart into word combinations? The recipe provides answers to these
 * questions.</p>
 * <p>You can't change an existing datasource, but you can copy and modify the
 * settings from an existing Amazon Redshift datasource to create a new
 * datasource. To do so, call <code>GetDataSource</code> for an existing
 * datasource and copy the values to a <code>CreateDataSource</code> call. Change
 * the settings that you want to change and make sure that all required fields
 * have the appropriate values.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::CreateDataSourceFromRedshiftOutcome CreateDataSourceFromRedshift(const Model::CreateDataSourceFromRedshiftRequest& request) const;
/**
* A Callable wrapper for CreateDataSourceFromRedshift that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename CreateDataSourceFromRedshiftRequestT = Model::CreateDataSourceFromRedshiftRequest>
Model::CreateDataSourceFromRedshiftOutcomeCallable CreateDataSourceFromRedshiftCallable(const CreateDataSourceFromRedshiftRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::CreateDataSourceFromRedshift, request);
}
/**
* An Async wrapper for CreateDataSourceFromRedshift that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename CreateDataSourceFromRedshiftRequestT = Model::CreateDataSourceFromRedshiftRequest>
void CreateDataSourceFromRedshiftAsync(const CreateDataSourceFromRedshiftRequestT& request, const CreateDataSourceFromRedshiftResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::CreateDataSourceFromRedshift, request, handler, context);
}
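The Callable wrappers above hand back a future so the blocking operation can run in parallel with other requests. The sketch below shows the underlying idea with plain standard-library facilities; the `FakeClient`, `FakeRequest`, and `FakeOutcome` types are illustrative stand-ins, not SDK types, and the real `SubmitCallable` dispatches to the client's configured executor rather than `std::async`.

```cpp
#include <cassert>
#include <future>
#include <string>
#include <utility>

// Hypothetical stand-ins for Model::CreateDataSourceFromRedshiftRequest/Outcome.
struct FakeRequest { std::string dataSourceId; };
struct FakeOutcome { bool success; std::string dataSourceId; };

// Minimal sketch of a Callable wrapper: invoke a const member-function
// operation on another thread and return a std::future for its outcome.
template <typename Client, typename Request, typename Outcome>
std::future<Outcome> SubmitCallableSketch(Client& client,
                                          Outcome (Client::*op)(const Request&) const,
                                          Request request)
{
    return std::async(std::launch::async,
                      [&client, op, request = std::move(request)]() {
                          return (client.*op)(request);  // blocking call off-thread
                      });
}

struct FakeClient {
    FakeOutcome CreateDataSourceFromRedshift(const FakeRequest& r) const {
        return FakeOutcome{true, r.dataSourceId};  // pretend the call succeeded
    }
};
```

A caller would start the operation, continue with other work, then call `get()` on the future when the outcome is needed.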
/**
 * <p>Creates a <code>DataSource</code> object. A <code>DataSource</code>
 * references data that can be used to perform <code>CreateMLModel</code>,
 * <code>CreateEvaluation</code>, or <code>CreateBatchPrediction</code>
 * operations.</p>
 * <p><code>CreateDataSourceFromS3</code> is an asynchronous operation. In
 * response to <code>CreateDataSourceFromS3</code>, Amazon Machine Learning
 * (Amazon ML) immediately returns and sets the <code>DataSource</code> status to
 * <code>PENDING</code>. After the <code>DataSource</code> has been created and is
 * ready for use, Amazon ML sets the <code>Status</code> parameter to
 * <code>COMPLETED</code>. <code>DataSource</code> in the <code>COMPLETED</code>
 * or <code>PENDING</code> state can be used to perform only
 * <code>CreateMLModel</code>, <code>CreateEvaluation</code>, or
 * <code>CreateBatchPrediction</code> operations.</p>
 * <p>If Amazon ML can't accept the input source, it sets the <code>Status</code>
 * parameter to <code>FAILED</code> and includes an error message in the
 * <code>Message</code> attribute of the <code>GetDataSource</code> operation
 * response.</p>
 * <p>The observation data used in a <code>DataSource</code> should be ready to
 * use; that is, it should have a consistent structure, and missing data values
 * should be kept to a minimum. The observation data must reside in one or more
 * .csv files in an Amazon Simple Storage Service (Amazon S3) location, along with
 * a schema that describes the data items by name and type. The same schema must
 * be used for all of the data files referenced by the <code>DataSource</code>.</p>
 * <p>After the <code>DataSource</code> has been created, it's ready to use in
 * evaluations and batch predictions. If you plan to use the
 * <code>DataSource</code> to train an <code>MLModel</code>, the
 * <code>DataSource</code> also needs a recipe. A recipe describes how each input
 * variable will be used in training an <code>MLModel</code>. Will the variable be
 * included or excluded from training? Will the variable be manipulated; for
 * example, will it be combined with another variable, or will it be split apart
 * into word combinations? The recipe provides answers to these questions.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::CreateDataSourceFromS3Outcome CreateDataSourceFromS3(const Model::CreateDataSourceFromS3Request& request) const;
/**
* A Callable wrapper for CreateDataSourceFromS3 that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename CreateDataSourceFromS3RequestT = Model::CreateDataSourceFromS3Request>
Model::CreateDataSourceFromS3OutcomeCallable CreateDataSourceFromS3Callable(const CreateDataSourceFromS3RequestT& request) const
{
return SubmitCallable(&MachineLearningClient::CreateDataSourceFromS3, request);
}
/**
* An Async wrapper for CreateDataSourceFromS3 that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename CreateDataSourceFromS3RequestT = Model::CreateDataSourceFromS3Request>
void CreateDataSourceFromS3Async(const CreateDataSourceFromS3RequestT& request, const CreateDataSourceFromS3ResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::CreateDataSourceFromS3, request, handler, context);
}
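The Async wrappers take a completion handler instead of returning a future: the request runs on a background thread and the handler is invoked when the operation finishes. Below is a minimal, self-contained sketch of that callback shape; `S3Outcome` and `SubmitAsyncSketch` are illustrative names, and unlike the SDK (which queues work on a shared thread executor) this version spawns and joins one thread per call for determinism.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <thread>
#include <utility>

// Hypothetical stand-ins for the SDK's outcome and response-handler types.
struct S3Outcome { bool success; };
using ResponseHandler = std::function<void(const S3Outcome&)>;

// Minimal sketch of an Async wrapper: run the operation on another thread
// and fire the handler with its outcome when it completes.
template <typename Fn>
void SubmitAsyncSketch(Fn operation, ResponseHandler handler)
{
    std::thread([op = std::move(operation), handler = std::move(handler)]() {
        handler(op());  // handler runs on the worker thread, as in the SDK
    }).join();  // joined here so the sketch is deterministic
}
```

Because the handler runs on a worker thread, any state it touches must be synchronized if the caller also reads it concurrently.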
/**
 * <p>Creates a new <code>Evaluation</code> of an <code>MLModel</code>. An
 * <code>MLModel</code> is evaluated on a set of observations associated to a
 * <code>DataSource</code>. Like a <code>DataSource</code> for an
 * <code>MLModel</code>, the <code>DataSource</code> for an
 * <code>Evaluation</code> contains values for the <code>Target Variable</code>.
 * The <code>Evaluation</code> compares the predicted result for each observation
 * to the actual outcome and provides a summary so that you know how effective the
 * <code>MLModel</code> functions on the test data. Evaluation generates a
 * relevant performance metric, such as BinaryAUC, RegressionRMSE, or
 * MulticlassAvgFScore, based on the corresponding <code>MLModelType</code>:
 * <code>BINARY</code>, <code>REGRESSION</code>, or <code>MULTICLASS</code>.</p>
 * <p><code>CreateEvaluation</code> is an asynchronous operation. In response to
 * <code>CreateEvaluation</code>, Amazon Machine Learning (Amazon ML) immediately
 * returns and sets the evaluation status to <code>PENDING</code>. After the
 * <code>Evaluation</code> is created and ready for use, Amazon ML sets the status
 * to <code>COMPLETED</code>.</p>
 * <p>You can use the <code>GetEvaluation</code> operation to check progress of
 * the evaluation during the creation operation.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::CreateEvaluationOutcome CreateEvaluation(const Model::CreateEvaluationRequest& request) const;
/**
* A Callable wrapper for CreateEvaluation that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename CreateEvaluationRequestT = Model::CreateEvaluationRequest>
Model::CreateEvaluationOutcomeCallable CreateEvaluationCallable(const CreateEvaluationRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::CreateEvaluation, request);
}
/**
* An Async wrapper for CreateEvaluation that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename CreateEvaluationRequestT = Model::CreateEvaluationRequest>
void CreateEvaluationAsync(const CreateEvaluationRequestT& request, const CreateEvaluationResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::CreateEvaluation, request, handler, context);
}
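As the documentation above notes, the Create* operations return immediately with a PENDING status, and the corresponding Get* operation is polled until the resource reaches COMPLETED or FAILED. A minimal sketch of that lifecycle, with an illustrative fake service (not an SDK type):

```cpp
#include <cassert>

// Lifecycle states as described in the operation documentation above.
enum class Status { PENDING, INPROGRESS, COMPLETED, FAILED };

// Hypothetical service that reports PENDING a couple of times before COMPLETED.
struct FakeEvaluationService {
    int pollsUntilDone = 2;
    Status GetEvaluationStatus() {
        if (pollsUntilDone > 0) { --pollsUntilDone; return Status::PENDING; }
        return Status::COMPLETED;
    }
};

// Poll the Get* operation until a terminal state is reached (or give up).
Status WaitForEvaluation(FakeEvaluationService& svc, int maxPolls = 10)
{
    for (int i = 0; i < maxPolls; ++i) {
        Status s = svc.GetEvaluationStatus();
        if (s == Status::COMPLETED || s == Status::FAILED) return s;
        // A real client would sleep with backoff between polls here.
    }
    return Status::PENDING;  // still not terminal after maxPolls attempts
}
```

The same polling shape applies to the DataSource, MLModel, and BatchPrediction creation operations in this header.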
/**
 * <p>Creates a new <code>MLModel</code> using the <code>DataSource</code> and the
 * recipe as information sources.</p>
 * <p>An <code>MLModel</code> is nearly immutable. Users can update only the
 * <code>MLModelName</code> and the <code>ScoreThreshold</code> in an
 * <code>MLModel</code> without creating a new <code>MLModel</code>.</p>
 * <p><code>CreateMLModel</code> is an asynchronous operation. In response to
 * <code>CreateMLModel</code>, Amazon Machine Learning (Amazon ML) immediately
 * returns and sets the <code>MLModel</code> status to <code>PENDING</code>. After
 * the <code>MLModel</code> has been created and is ready for use, Amazon ML sets
 * the status to <code>COMPLETED</code>.</p>
 * <p>You can use the <code>GetMLModel</code> operation to check the progress of
 * the <code>MLModel</code> during the creation operation.</p>
 * <p><code>CreateMLModel</code> requires a <code>DataSource</code> with computed
 * statistics, which can be created by setting <code>ComputeStatistics</code> to
 * <code>true</code> in <code>CreateDataSourceFromRDS</code>,
 * <code>CreateDataSourceFromS3</code>, or
 * <code>CreateDataSourceFromRedshift</code> operations.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::CreateMLModelOutcome CreateMLModel(const Model::CreateMLModelRequest& request) const;
/**
* A Callable wrapper for CreateMLModel that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename CreateMLModelRequestT = Model::CreateMLModelRequest>
Model::CreateMLModelOutcomeCallable CreateMLModelCallable(const CreateMLModelRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::CreateMLModel, request);
}
/**
* An Async wrapper for CreateMLModel that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename CreateMLModelRequestT = Model::CreateMLModelRequest>
void CreateMLModelAsync(const CreateMLModelRequestT& request, const CreateMLModelResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::CreateMLModel, request, handler, context);
}
/**
 * <p>Creates a real-time endpoint for the <code>MLModel</code>. The endpoint
 * contains the URI of the <code>MLModel</code>; that is, the location to send
 * real-time prediction requests for the specified <code>MLModel</code>.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::CreateRealtimeEndpointOutcome CreateRealtimeEndpoint(const Model::CreateRealtimeEndpointRequest& request) const;
/**
* A Callable wrapper for CreateRealtimeEndpoint that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename CreateRealtimeEndpointRequestT = Model::CreateRealtimeEndpointRequest>
Model::CreateRealtimeEndpointOutcomeCallable CreateRealtimeEndpointCallable(const CreateRealtimeEndpointRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::CreateRealtimeEndpoint, request);
}
/**
* An Async wrapper for CreateRealtimeEndpoint that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename CreateRealtimeEndpointRequestT = Model::CreateRealtimeEndpointRequest>
void CreateRealtimeEndpointAsync(const CreateRealtimeEndpointRequestT& request, const CreateRealtimeEndpointResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::CreateRealtimeEndpoint, request, handler, context);
}
/**
 * <p>Assigns the DELETED status to a <code>BatchPrediction</code>, rendering it
 * unusable.</p>
 * <p>After using the <code>DeleteBatchPrediction</code> operation, you can use
 * the <code>GetBatchPrediction</code> operation to verify that the status of the
 * <code>BatchPrediction</code> changed to DELETED.</p>
 * <p><b>Caution:</b> The result of the <code>DeleteBatchPrediction</code>
 * operation is irreversible.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::DeleteBatchPredictionOutcome DeleteBatchPrediction(const Model::DeleteBatchPredictionRequest& request) const;
/**
* A Callable wrapper for DeleteBatchPrediction that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename DeleteBatchPredictionRequestT = Model::DeleteBatchPredictionRequest>
Model::DeleteBatchPredictionOutcomeCallable DeleteBatchPredictionCallable(const DeleteBatchPredictionRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::DeleteBatchPrediction, request);
}
/**
* An Async wrapper for DeleteBatchPrediction that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename DeleteBatchPredictionRequestT = Model::DeleteBatchPredictionRequest>
void DeleteBatchPredictionAsync(const DeleteBatchPredictionRequestT& request, const DeleteBatchPredictionResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::DeleteBatchPrediction, request, handler, context);
}
/**
 * <p>Assigns the DELETED status to a <code>DataSource</code>, rendering it
 * unusable.</p>
 * <p>After using the <code>DeleteDataSource</code> operation, you can use the
 * <code>GetDataSource</code> operation to verify that the status of the
 * <code>DataSource</code> changed to DELETED.</p>
 * <p><b>Caution:</b> The results of the <code>DeleteDataSource</code> operation
 * are irreversible.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::DeleteDataSourceOutcome DeleteDataSource(const Model::DeleteDataSourceRequest& request) const;
/**
* A Callable wrapper for DeleteDataSource that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename DeleteDataSourceRequestT = Model::DeleteDataSourceRequest>
Model::DeleteDataSourceOutcomeCallable DeleteDataSourceCallable(const DeleteDataSourceRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::DeleteDataSource, request);
}
/**
* An Async wrapper for DeleteDataSource that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename DeleteDataSourceRequestT = Model::DeleteDataSourceRequest>
void DeleteDataSourceAsync(const DeleteDataSourceRequestT& request, const DeleteDataSourceResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::DeleteDataSource, request, handler, context);
}
/**
 * <p>Assigns the <code>DELETED</code> status to an <code>Evaluation</code>,
 * rendering it unusable.</p>
 * <p>After invoking the <code>DeleteEvaluation</code> operation, you can use the
 * <code>GetEvaluation</code> operation to verify that the status of the
 * <code>Evaluation</code> changed to <code>DELETED</code>.</p>
 * <p><b>Caution:</b> The results of the <code>DeleteEvaluation</code> operation
 * are irreversible.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::DeleteEvaluationOutcome DeleteEvaluation(const Model::DeleteEvaluationRequest& request) const;
/**
* A Callable wrapper for DeleteEvaluation that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename DeleteEvaluationRequestT = Model::DeleteEvaluationRequest>
Model::DeleteEvaluationOutcomeCallable DeleteEvaluationCallable(const DeleteEvaluationRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::DeleteEvaluation, request);
}
/**
* An Async wrapper for DeleteEvaluation that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename DeleteEvaluationRequestT = Model::DeleteEvaluationRequest>
void DeleteEvaluationAsync(const DeleteEvaluationRequestT& request, const DeleteEvaluationResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::DeleteEvaluation, request, handler, context);
}
/**
 * <p>Assigns the <code>DELETED</code> status to an <code>MLModel</code>,
 * rendering it unusable.</p>
 * <p>After using the <code>DeleteMLModel</code> operation, you can use the
 * <code>GetMLModel</code> operation to verify that the status of the
 * <code>MLModel</code> changed to DELETED.</p>
 * <p><b>Caution:</b> The result of the <code>DeleteMLModel</code> operation is
 * irreversible.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::DeleteMLModelOutcome DeleteMLModel(const Model::DeleteMLModelRequest& request) const;
/**
* A Callable wrapper for DeleteMLModel that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename DeleteMLModelRequestT = Model::DeleteMLModelRequest>
Model::DeleteMLModelOutcomeCallable DeleteMLModelCallable(const DeleteMLModelRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::DeleteMLModel, request);
}
/**
* An Async wrapper for DeleteMLModel that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename DeleteMLModelRequestT = Model::DeleteMLModelRequest>
void DeleteMLModelAsync(const DeleteMLModelRequestT& request, const DeleteMLModelResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::DeleteMLModel, request, handler, context);
}
/**
 * <p>Deletes a real-time endpoint of an <code>MLModel</code>.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::DeleteRealtimeEndpointOutcome DeleteRealtimeEndpoint(const Model::DeleteRealtimeEndpointRequest& request) const;
/**
* A Callable wrapper for DeleteRealtimeEndpoint that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename DeleteRealtimeEndpointRequestT = Model::DeleteRealtimeEndpointRequest>
Model::DeleteRealtimeEndpointOutcomeCallable DeleteRealtimeEndpointCallable(const DeleteRealtimeEndpointRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::DeleteRealtimeEndpoint, request);
}
/**
* An Async wrapper for DeleteRealtimeEndpoint that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename DeleteRealtimeEndpointRequestT = Model::DeleteRealtimeEndpointRequest>
void DeleteRealtimeEndpointAsync(const DeleteRealtimeEndpointRequestT& request, const DeleteRealtimeEndpointResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::DeleteRealtimeEndpoint, request, handler, context);
}
/**
 * <p>Deletes the specified tags associated with an ML object. After this
 * operation is complete, you can't recover deleted tags.</p>
 * <p>If you specify a tag that doesn't exist, Amazon ML ignores it.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::DeleteTagsOutcome DeleteTags(const Model::DeleteTagsRequest& request) const;
/**
* A Callable wrapper for DeleteTags that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename DeleteTagsRequestT = Model::DeleteTagsRequest>
Model::DeleteTagsOutcomeCallable DeleteTagsCallable(const DeleteTagsRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::DeleteTags, request);
}
/**
* An Async wrapper for DeleteTags that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename DeleteTagsRequestT = Model::DeleteTagsRequest>
void DeleteTagsAsync(const DeleteTagsRequestT& request, const DeleteTagsResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::DeleteTags, request, handler, context);
}
/**
 * <p>Returns a list of <code>BatchPrediction</code> operations that match the
 * search criteria in the request.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::DescribeBatchPredictionsOutcome DescribeBatchPredictions(const Model::DescribeBatchPredictionsRequest& request) const;
/**
* A Callable wrapper for DescribeBatchPredictions that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename DescribeBatchPredictionsRequestT = Model::DescribeBatchPredictionsRequest>
Model::DescribeBatchPredictionsOutcomeCallable DescribeBatchPredictionsCallable(const DescribeBatchPredictionsRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::DescribeBatchPredictions, request);
}
/**
* An Async wrapper for DescribeBatchPredictions that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename DescribeBatchPredictionsRequestT = Model::DescribeBatchPredictionsRequest>
void DescribeBatchPredictionsAsync(const DescribeBatchPredictionsRequestT& request, const DescribeBatchPredictionsResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::DescribeBatchPredictions, request, handler, context);
}
/**
 * <p>Returns a list of <code>DataSource</code> that match the search criteria in
 * the request.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::DescribeDataSourcesOutcome DescribeDataSources(const Model::DescribeDataSourcesRequest& request) const;
/**
* A Callable wrapper for DescribeDataSources that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename DescribeDataSourcesRequestT = Model::DescribeDataSourcesRequest>
Model::DescribeDataSourcesOutcomeCallable DescribeDataSourcesCallable(const DescribeDataSourcesRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::DescribeDataSources, request);
}
/**
* An Async wrapper for DescribeDataSources that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename DescribeDataSourcesRequestT = Model::DescribeDataSourcesRequest>
void DescribeDataSourcesAsync(const DescribeDataSourcesRequestT& request, const DescribeDataSourcesResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::DescribeDataSources, request, handler, context);
}
/**
 * <p>Returns a list of <code>DescribeEvaluations</code> that match the search
 * criteria in the request.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::DescribeEvaluationsOutcome DescribeEvaluations(const Model::DescribeEvaluationsRequest& request) const;
/**
* A Callable wrapper for DescribeEvaluations that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename DescribeEvaluationsRequestT = Model::DescribeEvaluationsRequest>
Model::DescribeEvaluationsOutcomeCallable DescribeEvaluationsCallable(const DescribeEvaluationsRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::DescribeEvaluations, request);
}
/**
* An Async wrapper for DescribeEvaluations that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename DescribeEvaluationsRequestT = Model::DescribeEvaluationsRequest>
void DescribeEvaluationsAsync(const DescribeEvaluationsRequestT& request, const DescribeEvaluationsResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::DescribeEvaluations, request, handler, context);
}
/**
 * <p>Returns a list of <code>MLModel</code> that match the search criteria in
 * the request.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::DescribeMLModelsOutcome DescribeMLModels(const Model::DescribeMLModelsRequest& request) const;
/**
* A Callable wrapper for DescribeMLModels that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename DescribeMLModelsRequestT = Model::DescribeMLModelsRequest>
Model::DescribeMLModelsOutcomeCallable DescribeMLModelsCallable(const DescribeMLModelsRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::DescribeMLModels, request);
}
/**
* An Async wrapper for DescribeMLModels that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename DescribeMLModelsRequestT = Model::DescribeMLModelsRequest>
void DescribeMLModelsAsync(const DescribeMLModelsRequestT& request, const DescribeMLModelsResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::DescribeMLModels, request, handler, context);
}
/**
 * <p>Describes one or more of the tags for your Amazon ML object.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::DescribeTagsOutcome DescribeTags(const Model::DescribeTagsRequest& request) const;
/**
* A Callable wrapper for DescribeTags that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename DescribeTagsRequestT = Model::DescribeTagsRequest>
Model::DescribeTagsOutcomeCallable DescribeTagsCallable(const DescribeTagsRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::DescribeTags, request);
}
/**
* An Async wrapper for DescribeTags that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename DescribeTagsRequestT = Model::DescribeTagsRequest>
void DescribeTagsAsync(const DescribeTagsRequestT& request, const DescribeTagsResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::DescribeTags, request, handler, context);
}
/**
 * <p>Returns a <code>BatchPrediction</code> that includes detailed metadata,
 * status, and data file information for a <code>Batch Prediction</code>
 * request.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::GetBatchPredictionOutcome GetBatchPrediction(const Model::GetBatchPredictionRequest& request) const;
/**
* A Callable wrapper for GetBatchPrediction that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename GetBatchPredictionRequestT = Model::GetBatchPredictionRequest>
Model::GetBatchPredictionOutcomeCallable GetBatchPredictionCallable(const GetBatchPredictionRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::GetBatchPrediction, request);
}
/**
* An Async wrapper for GetBatchPrediction that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename GetBatchPredictionRequestT = Model::GetBatchPredictionRequest>
void GetBatchPredictionAsync(const GetBatchPredictionRequestT& request, const GetBatchPredictionResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::GetBatchPrediction, request, handler, context);
}
/**
 * <p>Returns a <code>DataSource</code> that includes metadata and data file
 * information, as well as the current status of the <code>DataSource</code>.</p>
 * <p><code>GetDataSource</code> provides results in normal or verbose format. The
 * verbose format adds the schema description and the list of files pointed to by
 * the <code>DataSource</code> to the normal format.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::GetDataSourceOutcome GetDataSource(const Model::GetDataSourceRequest& request) const;
/**
* A Callable wrapper for GetDataSource that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename GetDataSourceRequestT = Model::GetDataSourceRequest>
Model::GetDataSourceOutcomeCallable GetDataSourceCallable(const GetDataSourceRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::GetDataSource, request);
}
/**
* An Async wrapper for GetDataSource that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename GetDataSourceRequestT = Model::GetDataSourceRequest>
void GetDataSourceAsync(const GetDataSourceRequestT& request, const GetDataSourceResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::GetDataSource, request, handler, context);
}
/**
 * <p>Returns an <code>Evaluation</code> that includes metadata as well as the
 * current status of the <code>Evaluation</code>.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::GetEvaluationOutcome GetEvaluation(const Model::GetEvaluationRequest& request) const;
/**
* A Callable wrapper for GetEvaluation that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename GetEvaluationRequestT = Model::GetEvaluationRequest>
Model::GetEvaluationOutcomeCallable GetEvaluationCallable(const GetEvaluationRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::GetEvaluation, request);
}
/**
* An Async wrapper for GetEvaluation that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename GetEvaluationRequestT = Model::GetEvaluationRequest>
void GetEvaluationAsync(const GetEvaluationRequestT& request, const GetEvaluationResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::GetEvaluation, request, handler, context);
}
/**
 * <p>Returns an <code>MLModel</code> that includes detailed metadata, data source
 * information, and the current status of the <code>MLModel</code>.</p>
 * <p><code>GetMLModel</code> provides results in normal or verbose format.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::GetMLModelOutcome GetMLModel(const Model::GetMLModelRequest& request) const;
/**
* A Callable wrapper for GetMLModel that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename GetMLModelRequestT = Model::GetMLModelRequest>
Model::GetMLModelOutcomeCallable GetMLModelCallable(const GetMLModelRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::GetMLModel, request);
}
/**
* An Async wrapper for GetMLModel that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename GetMLModelRequestT = Model::GetMLModelRequest>
void GetMLModelAsync(const GetMLModelRequestT& request, const GetMLModelResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::GetMLModel, request, handler, context);
}
/**
 * <p>Generates a prediction for the observation using the specified
 * <code>MLModel</code>.</p>
 * <p><b>Note:</b> Not all response parameters will be populated. Whether a
 * response parameter is populated depends on the type of model requested.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::PredictOutcome Predict(const Model::PredictRequest& request) const;
/**
* A Callable wrapper for Predict that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename PredictRequestT = Model::PredictRequest>
Model::PredictOutcomeCallable PredictCallable(const PredictRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::Predict, request);
}
/**
* An Async wrapper for Predict that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename PredictRequestT = Model::PredictRequest>
void PredictAsync(const PredictRequestT& request, const PredictResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::Predict, request, handler, context);
}
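Every synchronous client method in this header returns an Outcome object that holds either a result or an error, and callers branch on a success check before reading either side. The sketch below shows that pattern with a minimal, illustrative `OutcomeSketch` class; the real `Aws::Utils::Outcome` has a richer interface, and `PredictResultSketch`/`ErrorSketch` are hypothetical stand-ins.

```cpp
#include <cassert>
#include <string>
#include <utility>

// Minimal stand-in for the SDK's Outcome: holds either a result or an error.
template <typename R, typename E>
class OutcomeSketch {
public:
    explicit OutcomeSketch(R result) : m_success(true), m_result(std::move(result)) {}
    explicit OutcomeSketch(E error) : m_success(false), m_error(std::move(error)) {}
    bool IsSuccess() const { return m_success; }
    const R& GetResult() const { return m_result; }  // valid only when IsSuccess()
    const E& GetError() const { return m_error; }    // valid only when !IsSuccess()
private:
    bool m_success;
    R m_result{};
    E m_error{};
};

// Hypothetical result/error payloads for a Predict-style call.
struct PredictResultSketch { std::string label; };
struct ErrorSketch { std::string message; };
```

Typical use: call the operation, check `IsSuccess()`, then read `GetResult()` on the success path or `GetError()` on the failure path.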
/**
 * <p>Updates the <code>BatchPredictionName</code> of a
 * <code>BatchPrediction</code>.</p>
 * <p>You can use the <code>GetBatchPrediction</code> operation to view the
 * contents of the updated data element.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::UpdateBatchPredictionOutcome UpdateBatchPrediction(const Model::UpdateBatchPredictionRequest& request) const;
/**
* A Callable wrapper for UpdateBatchPrediction that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename UpdateBatchPredictionRequestT = Model::UpdateBatchPredictionRequest>
Model::UpdateBatchPredictionOutcomeCallable UpdateBatchPredictionCallable(const UpdateBatchPredictionRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::UpdateBatchPrediction, request);
}
/**
* An Async wrapper for UpdateBatchPrediction that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename UpdateBatchPredictionRequestT = Model::UpdateBatchPredictionRequest>
void UpdateBatchPredictionAsync(const UpdateBatchPredictionRequestT& request, const UpdateBatchPredictionResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::UpdateBatchPrediction, request, handler, context);
}
/**
 * <p>Updates the <code>DataSourceName</code> of a <code>DataSource</code>.</p>
 * <p>You can use the <code>GetDataSource</code> operation to view the contents of
 * the updated data element.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::UpdateDataSourceOutcome UpdateDataSource(const Model::UpdateDataSourceRequest& request) const;
/**
* A Callable wrapper for UpdateDataSource that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename UpdateDataSourceRequestT = Model::UpdateDataSourceRequest>
Model::UpdateDataSourceOutcomeCallable UpdateDataSourceCallable(const UpdateDataSourceRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::UpdateDataSource, request);
}
/**
* An Async wrapper for UpdateDataSource that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename UpdateDataSourceRequestT = Model::UpdateDataSourceRequest>
void UpdateDataSourceAsync(const UpdateDataSourceRequestT& request, const UpdateDataSourceResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::UpdateDataSource, request, handler, context);
}
/**
 * <p>Updates the <code>EvaluationName</code> of an <code>Evaluation</code>.</p>
 * <p>You can use the <code>GetEvaluation</code> operation to view the contents of
 * the updated data element.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::UpdateEvaluationOutcome UpdateEvaluation(const Model::UpdateEvaluationRequest& request) const;
/**
* A Callable wrapper for UpdateEvaluation that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename UpdateEvaluationRequestT = Model::UpdateEvaluationRequest>
Model::UpdateEvaluationOutcomeCallable UpdateEvaluationCallable(const UpdateEvaluationRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::UpdateEvaluation, request);
}
/**
* An Async wrapper for UpdateEvaluation that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename UpdateEvaluationRequestT = Model::UpdateEvaluationRequest>
void UpdateEvaluationAsync(const UpdateEvaluationRequestT& request, const UpdateEvaluationResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::UpdateEvaluation, request, handler, context);
}
/**
 * <p>Updates the <code>MLModelName</code> and the <code>ScoreThreshold</code> of
 * an <code>MLModel</code>.</p>
 * <p>You can use the <code>GetMLModel</code> operation to view the contents of
 * the updated data element.</p>
 * <p><b>See Also:</b> AWS API Reference</p>
 */
virtual Model::UpdateMLModelOutcome UpdateMLModel(const Model::UpdateMLModelRequest& request) const;
/**
* A Callable wrapper for UpdateMLModel that returns a future to the operation so that it can be executed in parallel to other requests.
*/
template <typename UpdateMLModelRequestT = Model::UpdateMLModelRequest>
Model::UpdateMLModelOutcomeCallable UpdateMLModelCallable(const UpdateMLModelRequestT& request) const
{
return SubmitCallable(&MachineLearningClient::UpdateMLModel, request);
}
/**
* An Async wrapper for UpdateMLModel that queues the request into a thread executor and triggers associated callback when operation has finished.
*/
template <typename UpdateMLModelRequestT = Model::UpdateMLModelRequest>
void UpdateMLModelAsync(const UpdateMLModelRequestT& request, const UpdateMLModelResponseReceivedHandler& handler, const std::shared_ptr<const Aws::Client::AsyncCallerContext>& context = nullptr) const
{
return SubmitAsync(&MachineLearningClient::UpdateMLModel, request, handler, context);
}
void OverrideEndpoint(const Aws::String& endpoint);
std::shared_ptr<MachineLearningEndpointProviderBase>& accessEndpointProvider();
private:
friend class Aws::Client::ClientWithAsyncTemplateMethods<MachineLearningClient>;
void init(const MachineLearningClientConfiguration& clientConfiguration);
MachineLearningClientConfiguration m_clientConfiguration;
std::shared_ptr<Aws::Utils::Threading::Executor> m_executor;
std::shared_ptr<MachineLearningEndpointProviderBase> m_endpointProvider;
};
} // namespace MachineLearning
} // namespace Aws