/*
 * Copyright 2018-2023 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with
 * the License. A copy of the License is located at
 *
 * http://aws.amazon.com/apache2.0
 *
 * or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
 * CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions
 * and limitations under the License.
 */
package com.amazonaws.services.machinelearning;

import javax.annotation.Generated;

import com.amazonaws.*;
import com.amazonaws.regions.*;
import com.amazonaws.services.machinelearning.model.*;
import com.amazonaws.services.machinelearning.waiters.AmazonMachineLearningWaiters;

/**
* Interface for accessing Amazon Machine Learning.
*
* Note: Do not directly implement this interface; new methods are added to it regularly. Extend from
* {@link com.amazonaws.services.machinelearning.AbstractAmazonMachineLearning} instead.
*
* Definition of the public APIs exposed by Amazon Machine Learning.
*/
@Generated("com.amazonaws:aws-java-sdk-code-generator")
public interface AmazonMachineLearning {

/**
* The region metadata service name for computing region endpoints. You can use this value to retrieve metadata
* (such as supported regions) of the service.
*
* @see RegionUtils#getRegionsForService(String)
*/
String ENDPOINT_PREFIX = "machinelearning";

/**
* Overrides the default endpoint for this client ("https://machinelearning.us-east-1.amazonaws.com"). Callers can
* use this method to control which AWS region they want to work with.
*
* Callers can pass in just the endpoint (ex: "machinelearning.us-east-1.amazonaws.com") or a full URL, including
* the protocol (ex: "https://machinelearning.us-east-1.amazonaws.com"). If the protocol is not specified here, the
* default protocol from this client's {@link ClientConfiguration} will be used, which by default is HTTPS.
*
* For more information on using AWS regions with the AWS SDK for Java, and a complete list of all available
* endpoints for all AWS services, see:
* https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-region-selection.html#region-selection-choose-endpoint
*
* This method is not threadsafe. An endpoint should be configured when the client is created and before any
* service requests are made. Changing it afterwards creates inevitable race conditions for any service requests in
* transit or retrying.
*
* @param endpoint
*        The endpoint (ex: "machinelearning.us-east-1.amazonaws.com") or a full URL, including the protocol (ex:
*        "https://machinelearning.us-east-1.amazonaws.com") of the region specific AWS endpoint this client will
*        communicate with.
* @deprecated use {@link AwsClientBuilder#setEndpointConfiguration(AwsClientBuilder.EndpointConfiguration)} for
*             example:
*             {@code builder.setEndpointConfiguration(new EndpointConfiguration(endpoint, signingRegion));}
*/
@Deprecated
void setEndpoint(String endpoint);

/**
* An alternative to {@link AmazonMachineLearning#setEndpoint(String)}, sets the regional endpoint for this
* client's service calls. Callers can use this method to control which AWS region they want to work with.
*
* By default, all service endpoints in all regions use the https protocol. To use http instead, specify it in the
* {@link ClientConfiguration} supplied at construction.
*
* This method is not threadsafe. A region should be configured when the client is created and before any service
* requests are made. Changing it afterwards creates inevitable race conditions for any service requests in transit
* or retrying.
*
* @param region
*        The region this client will communicate with. See {@link Region#getRegion(com.amazonaws.regions.Regions)}
*        for accessing a given region. Must not be null and must be a region where the service is available.
*
* @see Region#getRegion(com.amazonaws.regions.Regions)
* @see Region#createClient(Class, com.amazonaws.auth.AWSCredentialsProvider, ClientConfiguration)
* @see Region#isServiceSupported(String)
* @deprecated use {@link AwsClientBuilder#setRegion(String)}
*/
@Deprecated
void setRegion(Region region);

/**
*
* Adds one or more tags to an object, up to a limit of 10. Each tag consists of a key and an optional value. If you
* add a tag using a key that is already associated with the ML object, AddTags updates the tag's
* value.
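*
* Below is a minimal, illustrative sketch of tagging an ML object through this interface. The resource ID and tag
* values are placeholders, and the {@code client} (reused in the later sketches) is assumed to come from
* {@code AmazonMachineLearningClientBuilder}:
*
* <pre>{@code
* AmazonMachineLearning client = AmazonMachineLearningClientBuilder.defaultClient();
*
* // Attach a key/value tag and a key-only tag to an existing MLModel (at most 10 tags per object).
* client.addTags(new AddTagsRequest()
*         .withResourceId("ml-example")              // placeholder MLModel ID
*         .withResourceType("MLModel")
*         .withTags(new Tag().withKey("project").withValue("demo"),
*                   new Tag().withKey("owner")));
* }</pre>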
*
* Generates predictions for a group of observations. The observations to process exist in one or more data files
* referenced by a DataSource. This operation creates a new BatchPrediction, and uses an
* MLModel and the data files referenced by the DataSource as information sources.
*
* CreateBatchPrediction is an asynchronous operation. In response to
* CreateBatchPrediction, Amazon Machine Learning (Amazon ML) immediately returns and sets the
* BatchPrediction status to PENDING. After the BatchPrediction completes,
* Amazon ML sets the status to COMPLETED.
*
* You can poll for status updates by using the GetBatchPrediction operation and checking the
* Status parameter of the result. After the COMPLETED status appears, the results are
* available in the location specified by the OutputUri parameter.
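*
* A minimal sketch of this flow; all IDs and the S3 output location below are placeholders:
*
* <pre>{@code
* // The call returns immediately; the BatchPrediction starts out in the PENDING state.
* client.createBatchPrediction(new CreateBatchPredictionRequest()
*         .withBatchPredictionId("bp-example")
*         .withBatchPredictionName("example batch prediction")
*         .withMLModelId("ml-example")
*         .withBatchPredictionDataSourceId("ds-example")
*         .withOutputUri("s3://example-bucket/batch-output/"));
*
* // Poll GetBatchPrediction (for example on a schedule) until the status is COMPLETED or FAILED.
* String status = client.getBatchPrediction(
*         new GetBatchPredictionRequest().withBatchPredictionId("bp-example")).getStatus();
* if ("COMPLETED".equals(status)) {
*     // The results are now available under the OutputUri supplied above.
* }
* }</pre>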
*
* Creates a DataSource object from an Amazon Relational Database
* Service (Amazon RDS). A DataSource references data that can be used to perform
* CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.
*
* CreateDataSourceFromRDS is an asynchronous operation. In response to
* CreateDataSourceFromRDS, Amazon Machine Learning (Amazon ML) immediately returns and sets the
* DataSource status to PENDING. After the DataSource is created and ready
* for use, Amazon ML sets the Status parameter to COMPLETED. DataSource in
* the COMPLETED or PENDING state can be used only to perform
* CreateMLModel, CreateEvaluation, or CreateBatchPrediction
* operations.
*
* If Amazon ML cannot accept the input source, it sets the Status parameter to FAILED and
* includes an error message in the Message attribute of the GetDataSource operation
* response.
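*
* A minimal sketch of checking the outcome of an RDS-backed DataSource. The ID is a placeholder, and the
* CreateDataSourceFromRDS request itself (which also needs an RDSDataSpec and IAM roles) is omitted here:
*
* <pre>{@code
* GetDataSourceResult ds = client.getDataSource(
*         new GetDataSourceRequest().withDataSourceId("ds-rds-example"));
*
* if ("FAILED".equals(ds.getStatus())) {
*     // When Amazon ML cannot accept the input source, the failure reason is reported in the Message attribute.
*     System.err.println("DataSource creation failed: " + ds.getMessage());
* }
* }</pre>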
*
* Creates a DataSource from a database hosted on an Amazon Redshift cluster. A DataSource
* references data that can be used to perform either CreateMLModel, CreateEvaluation, or
* CreateBatchPrediction operations.
*
* CreateDataSourceFromRedshift is an asynchronous operation. In response to
* CreateDataSourceFromRedshift, Amazon Machine Learning (Amazon ML) immediately returns and sets the
* DataSource status to PENDING. After the DataSource is created and ready
* for use, Amazon ML sets the Status parameter to COMPLETED. DataSource in
* COMPLETED or PENDING states can be used to perform only CreateMLModel,
* CreateEvaluation, or CreateBatchPrediction operations.
*
* If Amazon ML can't accept the input source, it sets the Status parameter to FAILED and
* includes an error message in the Message attribute of the GetDataSource operation
* response.
*
* The observations should be contained in the database hosted on an Amazon Redshift cluster and should be specified
* by a SelectSqlQuery query. Amazon ML executes an Unload command in Amazon Redshift to
* transfer the result set of the SelectSqlQuery query to S3StagingLocation.
*
* After the DataSource has been created, it's ready for use in evaluations and batch predictions. If
* you plan to use the DataSource to train an MLModel, the DataSource also
* requires a recipe. A recipe describes how each input variable will be used in training an MLModel.
* Will the variable be included or excluded from training? Will the variable be manipulated; for example, will it
* be combined with another variable or will it be split apart into word combinations? The recipe provides answers
* to these questions.
*
* You can't change an existing datasource, but you can copy and modify the settings from an existing Amazon
* Redshift datasource to create a new datasource. To do so, call GetDataSource for an existing
* datasource and copy the values to a CreateDataSource call. Change the settings that you want to
* change and make sure that all required fields have the appropriate values.
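*
* A rough sketch of the copy-and-modify pattern described above. The IDs, cluster, query, credentials, and S3
* staging location are placeholders, and error handling is omitted:
*
* <pre>{@code
* // Read the settings of an existing Redshift-backed DataSource (verbose adds the schema to the response).
* GetDataSourceResult existing = client.getDataSource(
*         new GetDataSourceRequest().withDataSourceId("ds-redshift-old").withVerbose(true));
*
* // Re-create it under a new ID, changing only the SQL query that selects the observations.
* client.createDataSourceFromRedshift(new CreateDataSourceFromRedshiftRequest()
*         .withDataSourceId("ds-redshift-new")
*         .withDataSourceName(existing.getName() + " (copy)")
*         .withRoleARN(existing.getRoleARN())
*         .withComputeStatistics(true)
*         .withDataSpec(new RedshiftDataSpec()
*                 .withDatabaseInformation(new RedshiftDatabase()
*                         .withClusterIdentifier("example-cluster")
*                         .withDatabaseName("dev"))
*                 .withDatabaseCredentials(new RedshiftDatabaseCredentials()
*                         .withUsername("example_user")
*                         .withPassword("example_password"))
*                 .withSelectSqlQuery("SELECT * FROM observations WHERE day_of_year > 100")
*                 .withS3StagingLocation("s3://example-bucket/staging/")
*                 .withDataSchema(existing.getDataSourceSchema())));
* }</pre>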
*
* Creates a DataSource object. A DataSource references data that can be used to perform
* CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.
*
* CreateDataSourceFromS3 is an asynchronous operation. In response to
* CreateDataSourceFromS3, Amazon Machine Learning (Amazon ML) immediately returns and sets the
* DataSource status to PENDING. After the DataSource has been created and is
* ready for use, Amazon ML sets the Status parameter to COMPLETED.
* DataSource in the COMPLETED or PENDING state can be used to perform only
* CreateMLModel, CreateEvaluation or CreateBatchPrediction operations.
*
* If Amazon ML can't accept the input source, it sets the Status parameter to FAILED and
* includes an error message in the Message attribute of the GetDataSource operation
* response.
*
* The observation data used in a DataSource should be ready to use; that is, it should have a
* consistent structure, and missing data values should be kept to a minimum. The observation data must reside in
* one or more .csv files in an Amazon Simple Storage Service (Amazon S3) location, along with a schema that
* describes the data items by name and type. The same schema must be used for all of the data files referenced by
* the DataSource.
*
* After the DataSource has been created, it's ready to use in evaluations and batch predictions. If
* you plan to use the DataSource to train an MLModel, the DataSource also
* needs a recipe. A recipe describes how each input variable will be used in training an MLModel. Will
* the variable be included or excluded from training? Will the variable be manipulated; for example, will it be
* combined with another variable or will it be split apart into word combinations? The recipe provides answers to
* these questions.
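*
* A minimal sketch of creating an S3-backed DataSource; the bucket, file, schema location, and ID are
* placeholders:
*
* <pre>{@code
* client.createDataSourceFromS3(new CreateDataSourceFromS3Request()
*         .withDataSourceId("ds-s3-example")
*         .withDataSourceName("example S3 data source")
*         .withComputeStatistics(true)   // required if the DataSource will later train an MLModel
*         .withDataSpec(new S3DataSpec()
*                 .withDataLocationS3("s3://example-bucket/observations.csv")
*                 .withDataSchemaLocationS3("s3://example-bucket/observations.csv.schema")));
* }</pre>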
*
* Creates a new Evaluation of an MLModel. An MLModel is evaluated on a set
* of observations associated to a DataSource. Like a DataSource for an
* MLModel, the DataSource for an Evaluation contains values for the
* Target Variable. The Evaluation compares the predicted result for each observation to
* the actual outcome and provides a summary so that you know how well the MLModel performs on
* the test data. Evaluation generates a relevant performance metric, such as BinaryAUC, RegressionRMSE, or
* MulticlassAvgFScore based on the corresponding MLModelType: BINARY,
* REGRESSION or MULTICLASS.
*
* CreateEvaluation is an asynchronous operation. In response to CreateEvaluation, Amazon
* Machine Learning (Amazon ML) immediately returns and sets the evaluation status to PENDING. After
* the Evaluation is created and ready for use, Amazon ML sets the status to COMPLETED.
*
* You can use the GetEvaluation operation to check progress of the evaluation during the creation
* operation.
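*
* A minimal sketch of starting an evaluation and checking its progress; the IDs are placeholders:
*
* <pre>{@code
* client.createEvaluation(new CreateEvaluationRequest()
*         .withEvaluationId("ev-example")
*         .withEvaluationName("example evaluation")
*         .withMLModelId("ml-example")
*         .withEvaluationDataSourceId("ds-test-example"));
*
* // Check progress; the status moves from PENDING to COMPLETED when the evaluation is ready.
* String status = client.getEvaluation(
*         new GetEvaluationRequest().withEvaluationId("ev-example")).getStatus();
* }</pre>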
*
* Creates a new MLModel using the DataSource and the recipe as information sources.
*
* An MLModel is nearly immutable. Users can update only the MLModelName and the
* ScoreThreshold in an MLModel without creating a new MLModel.
*
* CreateMLModel is an asynchronous operation. In response to CreateMLModel, Amazon
* Machine Learning (Amazon ML) immediately returns and sets the MLModel status to PENDING.
* After the MLModel has been created and is ready for use, Amazon ML sets the status to
* COMPLETED.
*
* You can use the GetMLModel operation to check the progress of the MLModel during the
* creation operation.
*
* CreateMLModel requires a DataSource with computed statistics, which can be created by
* setting ComputeStatistics to true in CreateDataSourceFromRDS,
* CreateDataSourceFromS3, or CreateDataSourceFromRedshift operations.
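*
* A minimal sketch of training a binary model from a DataSource that was created with ComputeStatistics set to
* true; the IDs and the recipe location are placeholders:
*
* <pre>{@code
* client.createMLModel(new CreateMLModelRequest()
*         .withMLModelId("ml-example")
*         .withMLModelName("example binary model")
*         .withMLModelType(MLModelType.BINARY)
*         .withTrainingDataSourceId("ds-train-example")       // DataSource created with ComputeStatistics=true
*         .withRecipeUri("s3://example-bucket/recipe.json"));  // omit to let Amazon ML use a default recipe
*
* // Check progress; the status moves from PENDING to COMPLETED once training finishes.
* String status = client.getMLModel(new GetMLModelRequest().withMLModelId("ml-example")).getStatus();
* }</pre>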
*
* Creates a real-time endpoint for the MLModel. The endpoint contains the URI of the
* MLModel; that is, the location to send real-time prediction requests for the specified
* MLModel.
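*
* A minimal sketch of creating the endpoint and reading back its URI; the model ID is a placeholder:
*
* <pre>{@code
* CreateRealtimeEndpointResult endpoint = client.createRealtimeEndpoint(
*         new CreateRealtimeEndpointRequest().withMLModelId("ml-example"));
*
* // The endpoint URL is where real-time Predict requests for this MLModel are sent.
* String endpointUrl = endpoint.getRealtimeEndpointInfo().getEndpointUrl();
* }</pre>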
*
* Assigns the DELETED status to a BatchPrediction, rendering it unusable.
*
* After using the DeleteBatchPrediction operation, you can use the GetBatchPrediction operation
* to verify that the status of the BatchPrediction changed to DELETED.
*
* Caution: The result of the DeleteBatchPrediction operation is irreversible.
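*
* A minimal sketch of the delete-then-verify pattern; the ID is a placeholder, and the same pattern applies to
* the other Delete operations below:
*
* <pre>{@code
* client.deleteBatchPrediction(new DeleteBatchPredictionRequest().withBatchPredictionId("bp-example"));
*
* // The record is not removed; its status becomes DELETED and it can no longer be used.
* String status = client.getBatchPrediction(
*         new GetBatchPredictionRequest().withBatchPredictionId("bp-example")).getStatus();   // "DELETED"
* }</pre>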
*
* Assigns the DELETED status to a DataSource, rendering it unusable.
*
* After using the DeleteDataSource operation, you can use the GetDataSource operation to verify
* that the status of the DataSource changed to DELETED.
*
* Caution: The results of the DeleteDataSource operation are irreversible.
*
* Assigns the DELETED status to an Evaluation, rendering it unusable.
*
* After invoking the DeleteEvaluation operation, you can use the GetEvaluation operation
* to verify that the status of the Evaluation changed to DELETED.
*
* Caution: The results of the DeleteEvaluation operation are irreversible.
*
* Assigns the DELETED status to an MLModel, rendering it unusable.
*
* After using the DeleteMLModel operation, you can use the GetMLModel operation to verify
* that the status of the MLModel changed to DELETED.
*
* Caution: The result of the DeleteMLModel operation is irreversible.
*
* Deletes a real-time endpoint of an MLModel.
*
* Deletes the specified tags associated with an ML object. After this operation is complete, you can't recover
* deleted tags.
*
* If you specify a tag that doesn't exist, Amazon ML ignores it.
*
*
* @param deleteTagsRequest
* @return Result of the DeleteTags operation returned by the service.
* @throws InvalidInputException
*         An error on the client occurred. Typically, the cause is an invalid input value.
* @throws InvalidTagException
* @throws ResourceNotFoundException
*         A specified resource cannot be located.
* @throws InternalServerException
*         An error on the server occurred when trying to process a request.
* @sample AmazonMachineLearning.DeleteTags
*/
DeleteTagsResult deleteTags(DeleteTagsRequest deleteTagsRequest);

/**
*
* Returns a list of BatchPrediction operations that match the search criteria in the request.
*
* Returns a list of DataSource that match the search criteria in the request.
*
* Returns a list of Evaluation that match the search criteria in the request.
*
* Returns a list of MLModel that match the search criteria in the request.
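*
* A minimal sketch of a filtered, paginated listing; the filter value and limit are placeholders, and the same
* FilterVariable/EQ/Limit/NextToken pattern applies to the other Describe operations above:
*
* <pre>{@code
* DescribeMLModelsResult page = client.describeMLModels(new DescribeMLModelsRequest()
*         .withFilterVariable("Status")
*         .withEQ("COMPLETED")
*         .withLimit(25));
*
* for (MLModel model : page.getResults()) {
*     System.out.println(model.getMLModelId() + " " + model.getName());
* }
* // A non-null page.getNextToken() can be passed to a follow-up request to continue the listing.
* }</pre>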
*
* Describes one or more of the tags for your Amazon ML object.
*
* @param describeTagsRequest
* @return Result of the DescribeTags operation returned by the service.
* @throws InvalidInputException
*         An error on the client occurred. Typically, the cause is an invalid input value.
* @throws ResourceNotFoundException
*         A specified resource cannot be located.
* @throws InternalServerException
*         An error on the server occurred when trying to process a request.
* @sample AmazonMachineLearning.DescribeTags
*/
DescribeTagsResult describeTags(DescribeTagsRequest describeTagsRequest);

/**
*
* Returns a BatchPrediction that includes detailed metadata, status, and data file information for a
* Batch Prediction request.
*
* Returns a DataSource that includes metadata and data file information, as well as the current status
* of the DataSource.
*
* GetDataSource provides results in normal or verbose format. The verbose format adds the schema
* description and the list of files pointed to by the DataSource to the normal format.
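*
* A minimal, illustrative example of requesting the verbose form; the ID is a placeholder:
*
* <pre>{@code
* GetDataSourceResult ds = client.getDataSource(new GetDataSourceRequest()
*         .withDataSourceId("ds-example")
*         .withVerbose(true));   // verbose adds the schema to the response
*
* String schemaJson = ds.getDataSourceSchema();
* }</pre>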
*
* Returns an Evaluation that includes metadata as well as the current status of the
* Evaluation.
*
* Returns an MLModel that includes detailed metadata, data source information, and the current status
* of the MLModel.
*
* GetMLModel provides results in normal or verbose format.
*
* Generates a prediction for the observation using the specified ML Model.
*
* Note: Not all response parameters will be populated. Whether a response parameter is populated depends on
* the type of model requested.
*
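* A minimal sketch of a real-time prediction request. The model ID, endpoint URL, and record values are
* placeholders (the endpoint URL would normally come from CreateRealtimeEndpoint or GetMLModel), and the
* java.util.Map/HashMap imports are assumed:
*
* <pre>{@code
* Map<String, String> record = new HashMap<>();
* record.put("feature1", "42");
* record.put("feature2", "red");
*
* PredictResult result = client.predict(new PredictRequest()
*         .withMLModelId("ml-example")
*         .withPredictEndpoint("https://realtime.machinelearning.us-east-1.amazonaws.com")
*         .withRecord(record));
*
* // For a BINARY model, the predicted label and score are the fields of interest.
* Prediction prediction = result.getPrediction();
* System.out.println(prediction.getPredictedLabel() + " " + prediction.getPredictedScores());
* }</pre>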
*
* @param predictRequest
* @return Result of the Predict operation returned by the service.
* @throws InvalidInputException
*         An error on the client occurred. Typically, the cause is an invalid input value.
* @throws ResourceNotFoundException
*         A specified resource cannot be located.
* @throws LimitExceededException
*         The subscriber exceeded the maximum number of operations. This exception can occur when listing objects
*         such as DataSource.
* @throws InternalServerException
* An error on the server occurred when trying to process a request.
* @throws PredictorNotMountedException
* The exception is thrown when a predict request is made to an unmounted MLModel.
* @sample AmazonMachineLearning.Predict
*/
PredictResult predict(PredictRequest predictRequest);
/**
*
* Updates the BatchPredictionName of a BatchPrediction.
*
* You can use the GetBatchPrediction operation to view the contents of the updated data element.
*
* Updates the DataSourceName of a DataSource.
*
* You can use the GetDataSource operation to view the contents of the updated data element.
*
* Updates the EvaluationName of an Evaluation.
*
* You can use the GetEvaluation operation to view the contents of the updated data element.
*
* Updates the MLModelName and the ScoreThreshold of an MLModel.
*
* You can use the GetMLModel operation to view the contents of the updated data element.
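*
* A minimal sketch of the rename/re-threshold update described above; the ID and threshold are placeholders, and
* the other Update operations follow the same shape with their respective fields:
*
* <pre>{@code
* client.updateMLModel(new UpdateMLModelRequest()
*         .withMLModelId("ml-example")
*         .withMLModelName("renamed model")
*         .withScoreThreshold(0.75f));
*
* // Read the element back to confirm the change.
* GetMLModelResult updated = client.getMLModel(new GetMLModelRequest().withMLModelId("ml-example"));
* }</pre>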
*
* Response metadata is only cached for a limited period of time, so if you need to access this extra diagnostic
* information for an executed request, you should use this method to retrieve it as soon as possible after
* executing a request.
*
* @param request
*        The originally executed request.
*
* @return The response metadata for the specified request, or null if none is available.
*/
ResponseMetadata getCachedResponseMetadata(AmazonWebServiceRequest request);

AmazonMachineLearningWaiters waiters();

}