'2.0', 'service' => '
Amazon Personalize is a machine learning service that makes it easy to add individualized recommendations for customers.
', 'operations' => [ 'CreateBatchInferenceJob' => 'Creates a batch inference job. The operation can handle up to 50 million records and the input file must be in JSON format. For more information, see Creating a batch inference job.
', 'CreateBatchSegmentJob' => 'Creates a batch segment job. The operation can handle up to 50 million records and the input file must be in JSON format. For more information, see Getting batch recommendations and user segments.
', 'CreateCampaign' => 'Creates a campaign that deploys a solution version. When a client calls the GetRecommendations and GetPersonalizedRanking APIs, a campaign is specified in the request.
Minimum Provisioned TPS and Auto-Scaling
A high minProvisionedTPS
will increase your bill. We recommend starting with 1 for minProvisionedTPS
(the default). Track your usage using Amazon CloudWatch metrics, and increase the minProvisionedTPS
as necessary.
A transaction is a single GetRecommendations
or GetPersonalizedRanking
call. Transactions per second (TPS) is the throughput and unit of billing for Amazon Personalize. The minimum provisioned TPS (minProvisionedTPS
) specifies the baseline throughput provisioned by Amazon Personalize, and thus, the minimum billing charge.
If your TPS increases beyond minProvisionedTPS
, Amazon Personalize auto-scales the provisioned capacity up and down, but never below minProvisionedTPS
. There\'s a short time delay while the capacity is increased that might cause loss of transactions.
The actual TPS used is calculated as the average requests/second within a 5-minute window. You pay for the maximum of either the minimum provisioned TPS or the actual TPS. We recommend starting with a low minProvisionedTPS
, tracking your usage with Amazon CloudWatch metrics, and then increasing the minProvisionedTPS
as necessary.
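As an illustration of the billing rule above, a minimal sketch (the function name and inputs are hypothetical, not part of the Amazon Personalize API):

```python
def billed_tps(min_provisioned_tps, requests_in_window):
    """Billed TPS is the greater of the provisioned floor and the
    actual average requests/second over a 5-minute (300 s) window."""
    actual_tps = requests_in_window / 300.0
    return max(min_provisioned_tps, actual_tps)

# With minProvisionedTPS = 1 and 600 requests in 5 minutes (2 req/s),
# you are billed for 2 TPS; with only 60 requests (0.2 req/s), the
# 1 TPS floor applies.
```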
Status
A campaign can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
To get the campaign status, call DescribeCampaign.
Wait until the status
of the campaign is ACTIVE
before asking the campaign for recommendations.
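The wait-for-ACTIVE pattern above can be sketched as a generic polling helper; the `describe` callable and the status strings are assumptions based on the states listed above, not an official SDK utility:

```python
import time

def wait_until_active(describe, poll_seconds=30, max_attempts=40):
    """Poll a Describe* call until the resource reaches ACTIVE.
    `describe` is any callable returning a dict with a "status" key,
    e.g. a thin wrapper around the DescribeCampaign operation."""
    for _ in range(max_attempts):
        status = describe()["status"]
        if status == "ACTIVE":
            return status
        if status == "CREATE FAILED":
            raise RuntimeError("creation failed; check failureReason")
        time.sleep(poll_seconds)
    raise TimeoutError("resource did not become ACTIVE in time")
```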
Related APIs
', 'CreateDataset' => 'Creates an empty dataset and adds it to the specified dataset group. Use CreateDatasetImportJob to import your training data to a dataset.
There are three types of datasets:
Interactions
Items
Users
Each dataset type has an associated schema with required field types. Only the Interactions
dataset is required in order to train a model (also referred to as creating a solution).
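As a sketch of what such a schema looks like, here is a minimal Avro JSON schema for an Interactions dataset, assuming the standard required fields (USER_ID, ITEM_ID, TIMESTAMP) described in the Amazon Personalize developer guide:

```python
import json

# Minimal Avro schema for an Interactions dataset. USER_ID, ITEM_ID,
# and TIMESTAMP are the required fields for this dataset type.
interactions_schema = {
    "type": "record",
    "name": "Interactions",
    "namespace": "com.amazonaws.personalize.schema",
    "fields": [
        {"name": "USER_ID", "type": "string"},
        {"name": "ITEM_ID", "type": "string"},
        {"name": "TIMESTAMP", "type": "long"},
    ],
    "version": "1.0",
}

# The schema is passed to CreateSchema as a JSON string.
schema_json = json.dumps(interactions_schema)
```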
A dataset can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
To get the status of the dataset, call DescribeDataset.
Related APIs
', 'CreateDatasetExportJob' => ' Creates a job that exports data from your dataset to an Amazon S3 bucket. To allow Amazon Personalize to export the training data, you must specify a service-linked IAM role that gives Amazon Personalize PutObject
permissions for your Amazon S3 bucket. For information, see Exporting a dataset in the Amazon Personalize developer guide.
Status
A dataset export job can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
To get the status of the export job, call DescribeDatasetExportJob, and specify the Amazon Resource Name (ARN) of the dataset export job. The dataset export is complete when the status shows as ACTIVE. If the status shows as CREATE FAILED, the response includes a failureReason
key, which describes why the job failed.
', 'CreateDatasetGroup' => 'Creates an empty dataset group. A dataset group is a container for Amazon Personalize resources. A dataset group can contain at most three datasets, one for each type of dataset:
Interactions
Items
Users
A dataset group can be a Domain dataset group, where you specify a domain and use pre-configured resources like recommenders, or a Custom dataset group, where you use custom resources, such as a solution with a solution version, that you deploy with a campaign. If you start with a Domain dataset group, you can still add custom resources such as solutions and solution versions trained with recipes for custom use cases and deployed with campaigns.
A dataset group can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING
To get the status of the dataset group, call DescribeDatasetGroup. If the status shows as CREATE FAILED, the response includes a failureReason
key, which describes why the creation failed.
You must wait until the status
of the dataset group is ACTIVE
before adding a dataset to the group.
You can specify a Key Management Service (KMS) key to encrypt the datasets in the group. If you specify a KMS key, you must also include an Identity and Access Management (IAM) role that has permission to access the key.
APIs that require a dataset group ARN in the request
Related APIs
', 'CreateDatasetImportJob' => 'Creates a job that imports training data from your data source (an Amazon S3 bucket) to an Amazon Personalize dataset. To allow Amazon Personalize to import the training data, you must specify an IAM service role that has permission to read from the data source, as Amazon Personalize makes a copy of your data and processes it internally. For information on granting access to your Amazon S3 bucket, see Giving Amazon Personalize Access to Amazon S3 Resources.
By default, a dataset import job replaces any existing data in the dataset that you imported in bulk. To add new records without replacing existing data, specify INCREMENTAL for the import mode in the CreateDatasetImportJob operation.
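A hedged sketch of the request parameters involved; all ARNs, the job name, and the bucket path below are placeholders, and only the importMode value is the point of the example:

```python
# Hypothetical parameter set for CreateDatasetImportJob. importMode
# defaults to FULL (replace existing bulk data); INCREMENTAL appends
# new records instead of replacing them.
import_job_params = {
    "jobName": "interactions-incremental-import",
    "datasetArn": "arn:aws:personalize:us-east-1:123456789012:dataset/example/INTERACTIONS",
    "dataSource": {"dataLocation": "s3://example-bucket/interactions.csv"},
    "roleArn": "arn:aws:iam::123456789012:role/PersonalizeS3Access",
    "importMode": "INCREMENTAL",
}
```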
Status
A dataset import job can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
To get the status of the import job, call DescribeDatasetImportJob, providing the Amazon Resource Name (ARN) of the dataset import job. The dataset import is complete when the status shows as ACTIVE. If the status shows as CREATE FAILED, the response includes a failureReason
key, which describes why the job failed.
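The status check described above might be handled as follows; the helper name is hypothetical, and the response shape is assumed to follow the DescribeDatasetImportJob response described here:

```python
def import_job_outcome(response):
    """Summarize a DescribeDatasetImportJob response: return the
    failureReason when the job failed, otherwise the current status."""
    job = response["datasetImportJob"]
    if job["status"] == "CREATE FAILED":
        return job.get("failureReason", "unknown failure")
    return job["status"]
```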
Importing takes time. You must wait until the status shows as ACTIVE before training a model using the dataset.
Related APIs
', 'CreateEventTracker' => 'Creates an event tracker that you use when adding event data to a specified dataset group using the PutEvents API.
Only one event tracker can be associated with a dataset group. You will get an error if you call CreateEventTracker
using the same dataset group as an existing event tracker.
When you create an event tracker, the response includes a tracking ID, which you pass as a parameter when you use the PutEvents operation. Amazon Personalize then appends the event data to the Interactions dataset of the dataset group you specify in your event tracker.
The event tracker can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
To get the status of the event tracker, call DescribeEventTracker.
The event tracker must be in the ACTIVE state before using the tracking ID.
Related APIs
', 'CreateFilter' => 'Creates a recommendation filter. For more information, see Filtering recommendations and user segments.
', 'CreateMetricAttribution' => 'Creates a metric attribution. A metric attribution creates reports on the data that you import into Amazon Personalize. Depending on how you imported the data, you can view reports in Amazon CloudWatch or Amazon S3. For more information, see Measuring impact of recommendations.
', 'CreateRecommender' => 'Creates a recommender with the recipe (a Domain dataset group use case) you specify. You create recommenders for a Domain dataset group and specify the recommender\'s Amazon Resource Name (ARN) when you make a GetRecommendations request.
Minimum recommendation requests per second
A high minRecommendationRequestsPerSecond
will increase your bill. We recommend starting with 1 for minRecommendationRequestsPerSecond
(the default). Track your usage using Amazon CloudWatch metrics, and increase the minRecommendationRequestsPerSecond
as necessary.
When you create a recommender, you can configure the recommender\'s minimum recommendation requests per second. The minimum recommendation requests per second (minRecommendationRequestsPerSecond
) specifies the baseline recommendation request throughput provisioned by Amazon Personalize. The default minRecommendationRequestsPerSecond is 1
. A recommendation request is a single GetRecommendations
operation. Request throughput is measured in requests per second and Amazon Personalize uses your requests per second to derive your requests per hour and the price of your recommender usage.
If your requests per second increases beyond minRecommendationRequestsPerSecond
, Amazon Personalize auto-scales the provisioned capacity up and down, but never below minRecommendationRequestsPerSecond
. There\'s a short time delay while the capacity is increased that might cause loss of requests.
Your bill is the greater of either the minimum requests per hour (based on minRecommendationRequestsPerSecond) or the actual number of requests. The actual request throughput used is calculated as the average requests/second within a one-hour window. We recommend starting with the default minRecommendationRequestsPerSecond
, tracking your usage with Amazon CloudWatch metrics, and then increasing the minRecommendationRequestsPerSecond
as necessary.
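One plausible reading of the billing rule above, as a sketch (the function and its inputs are illustrative, not an official formula from the service):

```python
def billed_requests_per_hour(min_rps, actual_requests_in_hour):
    """Billing floor: the per-second minimum scaled to an hourly
    minimum (3600 seconds per hour); you pay for the greater of that
    floor and the actual request count for the hour."""
    return max(min_rps * 3600, actual_requests_in_hour)

# With the default minimum of 1 request/second, the hourly floor is
# 3600 requests even if you serve fewer.
```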
Status
A recommender can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
STOP PENDING > STOP IN_PROGRESS > INACTIVE > START PENDING > START IN_PROGRESS > ACTIVE
DELETE PENDING > DELETE IN_PROGRESS
To get the recommender status, call DescribeRecommender.
Wait until the status
of the recommender is ACTIVE
before asking the recommender for recommendations.
Related APIs
', 'CreateSchema' => 'Creates an Amazon Personalize schema from the specified schema string. The schema you create must be in Avro JSON format.
Amazon Personalize recognizes three schema variants. Each schema is associated with a dataset type and has a set of required fields and keywords. If you are creating a schema for a dataset in a Domain dataset group, you provide the domain of the Domain dataset group. You specify a schema when you call CreateDataset.
Related APIs
', 'CreateSolution' => 'Creates the configuration for training a model. A trained model is known as a solution version. After the configuration is created, you train the model (create a solution version) by calling the CreateSolutionVersion operation. Every time you call CreateSolutionVersion
, a new version of the solution is created.
After creating a solution version, you check its accuracy by calling GetSolutionMetrics. When you are satisfied with the version, you deploy it using CreateCampaign. The campaign provides recommendations to a client through the GetRecommendations API.
To train a model, Amazon Personalize requires training data and a recipe. The training data comes from the dataset group that you provide in the request. A recipe specifies the training algorithm and a feature transformation. You can specify one of the predefined recipes provided by Amazon Personalize.
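As an illustrative sketch, a CreateSolution parameter set using one of the predefined recipes; the account-specific ARNs and the name below are placeholders:

```python
# Hypothetical CreateSolution parameters. The recipe shown
# (aws-user-personalization) is one of the predefined recipes;
# everything else is a placeholder.
solution_params = {
    "name": "example-solution",
    "datasetGroupArn": "arn:aws:personalize:us-east-1:123456789012:dataset-group/example",
    "recipeArn": "arn:aws:personalize:::recipe/aws-user-personalization",
}
```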
Amazon Personalize doesn\'t support configuring the hpoObjective
for solution hyperparameter optimization at this time.
Status
A solution can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
To get the status of the solution, call DescribeSolution. Wait until the status shows as ACTIVE before calling CreateSolutionVersion
.
Related APIs
', 'CreateSolutionVersion' => 'Trains or retrains an active solution in a Custom dataset group. A solution is created using the CreateSolution operation and must be in the ACTIVE state before calling CreateSolutionVersion
. A new version of the solution is created every time you call this operation.
Status
A solution version can be in one of the following states:
CREATE PENDING
CREATE IN_PROGRESS
ACTIVE
CREATE FAILED
CREATE STOPPING
CREATE STOPPED
To get the status of the version, call DescribeSolutionVersion. Wait until the status shows as ACTIVE before calling CreateCampaign
.
If the status shows as CREATE FAILED, the response includes a failureReason
key, which describes why the job failed.
Related APIs
', 'DeleteCampaign' => 'Removes a campaign by deleting the solution deployment. The solution that the campaign is based on is not deleted and can be redeployed when needed. A deleted campaign can no longer be specified in a GetRecommendations request. For information on creating campaigns, see CreateCampaign.
', 'DeleteDataset' => 'Deletes a dataset. You can\'t delete a dataset if an associated DatasetImportJob
or SolutionVersion
is in the CREATE PENDING or IN PROGRESS state. For more information on datasets, see CreateDataset.
', 'DeleteDatasetGroup' => 'Deletes a dataset group. Before you delete a dataset group, you must delete the following:
All associated event trackers.
All associated solutions.
All datasets in the dataset group.
', 'DeleteEventTracker' => 'Deletes the event tracker. Does not delete the event-interactions dataset from the associated dataset group. For more information on event trackers, see CreateEventTracker.
', 'DeleteFilter' => 'Deletes a filter.
', 'DeleteMetricAttribution' => 'Deletes a metric attribution.
', 'DeleteRecommender' => 'Deactivates and removes a recommender. A deleted recommender can no longer be specified in a GetRecommendations request.
', 'DeleteSchema' => 'Deletes a schema. Before deleting a schema, you must delete all datasets referencing the schema. For more information on schemas, see CreateSchema.
', 'DeleteSolution' => 'Deletes all versions of a solution and the Solution
object itself. Before deleting a solution, you must delete all campaigns based on the solution. To determine what campaigns are using the solution, call ListCampaigns and supply the Amazon Resource Name (ARN) of the solution. You can\'t delete a solution if an associated SolutionVersion
is in the CREATE PENDING or IN PROGRESS state. For more information on solutions, see CreateSolution.
', 'DescribeAlgorithm' => 'Describes the given algorithm.
', 'DescribeBatchInferenceJob' => 'Gets the properties of a batch inference job including name, Amazon Resource Name (ARN), status, input and output configurations, and the ARN of the solution version used to generate the recommendations.
', 'DescribeBatchSegmentJob' => 'Gets the properties of a batch segment job including name, Amazon Resource Name (ARN), status, input and output configurations, and the ARN of the solution version used to generate segments.
', 'DescribeCampaign' => 'Describes the given campaign, including its status.
A campaign can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
When the status
is CREATE FAILED
, the response includes the failureReason
key, which describes why.
For more information on campaigns, see CreateCampaign.
', 'DescribeDataset' => 'Describes the given dataset. For more information on datasets, see CreateDataset.
', 'DescribeDatasetExportJob' => 'Describes the dataset export job created by CreateDatasetExportJob, including the export job status.
', 'DescribeDatasetGroup' => 'Describes the given dataset group. For more information on dataset groups, see CreateDatasetGroup.
', 'DescribeDatasetImportJob' => 'Describes the dataset import job created by CreateDatasetImportJob, including the import job status.
', 'DescribeEventTracker' => 'Describes an event tracker. The response includes the trackingId
and status
of the event tracker. For more information on event trackers, see CreateEventTracker.
', 'DescribeFeatureTransformation' => 'Describes the given feature transformation.
', 'DescribeFilter' => 'Describes a filter\'s properties.
', 'DescribeMetricAttribution' => 'Describes a metric attribution.
', 'DescribeRecipe' => 'Describes a recipe.
A recipe contains three items:
An algorithm that trains a model.
Hyperparameters that govern the training.
Feature transformation information for modifying the input data before training.
Amazon Personalize provides a set of predefined recipes. You specify a recipe when you create a solution with the CreateSolution API. CreateSolution
trains a model by using the algorithm in the specified recipe and a training dataset. The solution, when deployed as a campaign, can provide recommendations using the GetRecommendations API.
', 'DescribeRecommender' => 'Describes the given recommender, including its status.
A recommender can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
STOP PENDING > STOP IN_PROGRESS > INACTIVE > START PENDING > START IN_PROGRESS > ACTIVE
DELETE PENDING > DELETE IN_PROGRESS
When the status
is CREATE FAILED
, the response includes the failureReason
key, which describes why.
The modelMetrics
key is null when the recommender is being created or deleted.
For more information on recommenders, see CreateRecommender.
', 'DescribeSchema' => 'Describes a schema. For more information on schemas, see CreateSchema.
', 'DescribeSolution' => 'Describes a solution. For more information on solutions, see CreateSolution.
', 'DescribeSolutionVersion' => 'Describes a specific version of a solution. For more information on solutions, see CreateSolution.
', 'GetSolutionMetrics' => 'Gets the metrics for the specified solution version.
', 'ListBatchInferenceJobs' => 'Gets a list of the batch inference jobs that have been performed using a solution version.
', 'ListBatchSegmentJobs' => 'Gets a list of the batch segment jobs that have been performed using a solution version that you specify.
', 'ListCampaigns' => 'Returns a list of campaigns that use the given solution. When a solution is not specified, all the campaigns associated with the account are listed. The response provides the properties for each campaign, including the Amazon Resource Name (ARN). For more information on campaigns, see CreateCampaign.
', 'ListDatasetExportJobs' => 'Returns a list of dataset export jobs that use the given dataset. When a dataset is not specified, all the dataset export jobs associated with the account are listed. The response provides the properties for each dataset export job, including the Amazon Resource Name (ARN). For more information on dataset export jobs, see CreateDatasetExportJob. For more information on datasets, see CreateDataset.
', 'ListDatasetGroups' => 'Returns a list of dataset groups. The response provides the properties for each dataset group, including the Amazon Resource Name (ARN). For more information on dataset groups, see CreateDatasetGroup.
', 'ListDatasetImportJobs' => 'Returns a list of dataset import jobs that use the given dataset. When a dataset is not specified, all the dataset import jobs associated with the account are listed. The response provides the properties for each dataset import job, including the Amazon Resource Name (ARN). For more information on dataset import jobs, see CreateDatasetImportJob. For more information on datasets, see CreateDataset.
', 'ListDatasets' => 'Returns the list of datasets contained in the given dataset group. The response provides the properties for each dataset, including the Amazon Resource Name (ARN). For more information on datasets, see CreateDataset.
', 'ListEventTrackers' => 'Returns the list of event trackers associated with the account. The response provides the properties for each event tracker, including the Amazon Resource Name (ARN) and tracking ID. For more information on event trackers, see CreateEventTracker.
', 'ListFilters' => 'Lists all filters that belong to a given dataset group.
', 'ListMetricAttributionMetrics' => 'Lists the metrics for the metric attribution.
', 'ListMetricAttributions' => 'Lists metric attributions.
', 'ListRecipes' => 'Returns a list of available recipes. The response provides the properties for each recipe, including the recipe\'s Amazon Resource Name (ARN).
', 'ListRecommenders' => 'Returns a list of recommenders in a given Domain dataset group. When a Domain dataset group is not specified, all the recommenders associated with the account are listed. The response provides the properties for each recommender, including the Amazon Resource Name (ARN). For more information on recommenders, see CreateRecommender.
', 'ListSchemas' => 'Returns the list of schemas associated with the account. The response provides the properties for each schema, including the Amazon Resource Name (ARN). For more information on schemas, see CreateSchema.
', 'ListSolutionVersions' => 'Returns a list of solution versions for the given solution. When a solution is not specified, all the solution versions associated with the account are listed. The response provides the properties for each solution version, including the Amazon Resource Name (ARN).
', 'ListSolutions' => 'Returns a list of solutions that use the given dataset group. When a dataset group is not specified, all the solutions associated with the account are listed. The response provides the properties for each solution, including the Amazon Resource Name (ARN). For more information on solutions, see CreateSolution.
', 'ListTagsForResource' => 'Gets a list of tags attached to a resource.
', 'StartRecommender' => 'Starts a recommender that is INACTIVE. Starting a recommender does not create any new models, but resumes billing and automatic retraining for the recommender.
', 'StopRecommender' => 'Stops a recommender that is ACTIVE. Stopping a recommender halts billing and automatic retraining for the recommender.
', 'StopSolutionVersionCreation' => 'Stops creating a solution version that is in a state of CREATE_PENDING or CREATE_IN_PROGRESS.
Depending on the current state of the solution version, the solution version state changes as follows:
CREATE_PENDING > CREATE_STOPPED
or
CREATE_IN_PROGRESS > CREATE_STOPPING > CREATE_STOPPED
You are billed for all of the training completed up until you stop the solution version creation. You cannot resume creating a solution version once it has been stopped.
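The two stop paths above can be written down as a small transition table (illustrative only):

```python
# Transition table for StopSolutionVersionCreation, taken from the
# two paths described above.
STOP_TRANSITIONS = {
    "CREATE_PENDING": ["CREATE_STOPPED"],
    "CREATE_IN_PROGRESS": ["CREATE_STOPPING"],
    "CREATE_STOPPING": ["CREATE_STOPPED"],
}

def can_stop(state):
    """The operation only applies to these two starting states."""
    return state in ("CREATE_PENDING", "CREATE_IN_PROGRESS")
```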
', 'TagResource' => 'Adds a list of tags to a resource.
', 'UntagResource' => 'Removes tags that are attached to a resource.
', 'UpdateCampaign' => 'Updates a campaign by either deploying a new solution or changing the value of the campaign\'s minProvisionedTPS
parameter.
To update a campaign, the campaign status must be ACTIVE or CREATE FAILED. Check the campaign status using the DescribeCampaign operation.
You can still get recommendations from a campaign while an update is in progress. The campaign will use the previous solution version and campaign configuration to generate recommendations until the latest campaign update status is ACTIVE
.
For more information on campaigns, see CreateCampaign.
', 'UpdateDataset' => 'Update a dataset to replace its schema with a new or existing one. For more information, see Replacing a dataset\'s schema.
', 'UpdateMetricAttribution' => 'Updates a metric attribution.
', 'UpdateRecommender' => 'Updates the recommender to modify the recommender configuration. If you update the recommender to modify the columns used in training, Amazon Personalize automatically starts a full retraining of the models backing your recommender. While the update completes, you can still get recommendations from the recommender. The recommender uses the previous configuration until the update completes. To track the status of this update, use the latestRecommenderUpdate
returned in the DescribeRecommender operation.
', ], 'shapes' => [ 'AccountId' => [ 'base' => NULL, 'refs' => [ 'EventTracker$accountId' => 'The Amazon Web Services account that owns the event tracker.
', ], ], 'Algorithm' => [ 'base' => 'Describes a custom algorithm.
', 'refs' => [ 'DescribeAlgorithmResponse$algorithm' => 'A listing of the properties of the algorithm.
', ], ], 'AlgorithmImage' => [ 'base' => 'Describes an algorithm image.
', 'refs' => [ 'Algorithm$algorithmImage' => 'The URI of the Docker container for the algorithm image.
', ], ], 'Arn' => [ 'base' => NULL, 'refs' => [ 'Algorithm$algorithmArn' => 'The Amazon Resource Name (ARN) of the algorithm.
', 'Algorithm$roleArn' => 'The Amazon Resource Name (ARN) of the role.
', 'ArnList$member' => NULL, 'AutoMLResult$bestRecipeArn' => 'The Amazon Resource Name (ARN) of the best recipe.
', 'BatchInferenceJob$batchInferenceJobArn' => 'The Amazon Resource Name (ARN) of the batch inference job.
', 'BatchInferenceJob$filterArn' => 'The ARN of the filter used on the batch inference job.
', 'BatchInferenceJob$solutionVersionArn' => 'The Amazon Resource Name (ARN) of the solution version from which the batch inference job was created.
', 'BatchInferenceJobSummary$batchInferenceJobArn' => 'The Amazon Resource Name (ARN) of the batch inference job.
', 'BatchInferenceJobSummary$solutionVersionArn' => 'The ARN of the solution version used by the batch inference job.
', 'BatchSegmentJob$batchSegmentJobArn' => 'The Amazon Resource Name (ARN) of the batch segment job.
', 'BatchSegmentJob$filterArn' => 'The ARN of the filter used on the batch segment job.
', 'BatchSegmentJob$solutionVersionArn' => 'The Amazon Resource Name (ARN) of the solution version used by the batch segment job to generate batch segments.
', 'BatchSegmentJobSummary$batchSegmentJobArn' => 'The Amazon Resource Name (ARN) of the batch segment job.
', 'BatchSegmentJobSummary$solutionVersionArn' => 'The Amazon Resource Name (ARN) of the solution version used by the batch segment job to generate batch segments.
', 'Campaign$campaignArn' => 'The Amazon Resource Name (ARN) of the campaign.
', 'Campaign$solutionVersionArn' => 'The Amazon Resource Name (ARN) of a specific version of the solution.
', 'CampaignSummary$campaignArn' => 'The Amazon Resource Name (ARN) of the campaign.
', 'CampaignUpdateSummary$solutionVersionArn' => 'The Amazon Resource Name (ARN) of the deployed solution version.
', 'CreateBatchInferenceJobRequest$solutionVersionArn' => 'The Amazon Resource Name (ARN) of the solution version that will be used to generate the batch inference recommendations.
', 'CreateBatchInferenceJobRequest$filterArn' => 'The ARN of the filter to apply to the batch inference job. For more information on using filters, see Filtering batch recommendations.
', 'CreateBatchInferenceJobResponse$batchInferenceJobArn' => 'The ARN of the batch inference job.
', 'CreateBatchSegmentJobRequest$solutionVersionArn' => 'The Amazon Resource Name (ARN) of the solution version you want the batch segment job to use to generate batch segments.
', 'CreateBatchSegmentJobRequest$filterArn' => 'The ARN of the filter to apply to the batch segment job. For more information on using filters, see Filtering batch recommendations.
', 'CreateBatchSegmentJobResponse$batchSegmentJobArn' => 'The ARN of the batch segment job.
', 'CreateCampaignRequest$solutionVersionArn' => 'The Amazon Resource Name (ARN) of the solution version to deploy.
', 'CreateCampaignResponse$campaignArn' => 'The Amazon Resource Name (ARN) of the campaign.
', 'CreateDatasetExportJobRequest$datasetArn' => 'The Amazon Resource Name (ARN) of the dataset that contains the data to export.
', 'CreateDatasetExportJobResponse$datasetExportJobArn' => 'The Amazon Resource Name (ARN) of the dataset export job.
', 'CreateDatasetGroupResponse$datasetGroupArn' => 'The Amazon Resource Name (ARN) of the new dataset group.
', 'CreateDatasetImportJobRequest$datasetArn' => 'The ARN of the dataset that receives the imported data.
', 'CreateDatasetImportJobResponse$datasetImportJobArn' => 'The ARN of the dataset import job.
', 'CreateDatasetRequest$schemaArn' => 'The ARN of the schema to associate with the dataset. The schema defines the dataset fields.
', 'CreateDatasetRequest$datasetGroupArn' => 'The Amazon Resource Name (ARN) of the dataset group to add the dataset to.
', 'CreateDatasetResponse$datasetArn' => 'The ARN of the dataset.
', 'CreateEventTrackerRequest$datasetGroupArn' => 'The Amazon Resource Name (ARN) of the dataset group that receives the event data.
', 'CreateEventTrackerResponse$eventTrackerArn' => 'The ARN of the event tracker.
', 'CreateFilterRequest$datasetGroupArn' => 'The ARN of the dataset group that the filter will belong to.
', 'CreateFilterResponse$filterArn' => 'The ARN of the new filter.
', 'CreateMetricAttributionRequest$datasetGroupArn' => 'The Amazon Resource Name (ARN) of the destination dataset group for the metric attribution.
', 'CreateMetricAttributionResponse$metricAttributionArn' => 'The Amazon Resource Name (ARN) for the new metric attribution.
', 'CreateRecommenderRequest$datasetGroupArn' => 'The Amazon Resource Name (ARN) of the destination domain dataset group for the recommender.
', 'CreateRecommenderRequest$recipeArn' => 'The Amazon Resource Name (ARN) of the recipe that the recommender will use. For a recommender, a recipe is a Domain dataset group use case. Only Domain dataset group use cases can be used to create a recommender. For information about use cases see Choosing recommender use cases.
', 'CreateRecommenderResponse$recommenderArn' => 'The Amazon Resource Name (ARN) of the recommender.
', 'CreateSchemaResponse$schemaArn' => 'The Amazon Resource Name (ARN) of the created schema.
', 'CreateSolutionRequest$recipeArn' => 'The ARN of the recipe to use for model training. This is required when performAutoML
is false.
', 'CreateSolutionRequest$datasetGroupArn' => 'The Amazon Resource Name (ARN) of the dataset group that provides the training data.
', 'CreateSolutionResponse$solutionArn' => 'The ARN of the solution.
', 'CreateSolutionVersionRequest$solutionArn' => 'The Amazon Resource Name (ARN) of the solution containing the training configuration information.
', 'CreateSolutionVersionResponse$solutionVersionArn' => 'The ARN of the new solution version.
', 'Dataset$datasetArn' => 'The Amazon Resource Name (ARN) of the dataset that you want metadata for.
', 'Dataset$datasetGroupArn' => 'The Amazon Resource Name (ARN) of the dataset group.
', 'Dataset$schemaArn' => 'The ARN of the associated schema.
', 'DatasetExportJob$datasetExportJobArn' => 'The Amazon Resource Name (ARN) of the dataset export job.
', 'DatasetExportJob$datasetArn' => 'The Amazon Resource Name (ARN) of the dataset to export.
', 'DatasetExportJob$roleArn' => 'The Amazon Resource Name (ARN) of the IAM service role that has permissions to add data to your output Amazon S3 bucket.
', 'DatasetExportJobSummary$datasetExportJobArn' => 'The Amazon Resource Name (ARN) of the dataset export job.
', 'DatasetGroup$datasetGroupArn' => 'The Amazon Resource Name (ARN) of the dataset group.
', 'DatasetGroupSummary$datasetGroupArn' => 'The Amazon Resource Name (ARN) of the dataset group.
', 'DatasetImportJob$datasetImportJobArn' => 'The ARN of the dataset import job.
', 'DatasetImportJob$datasetArn' => 'The Amazon Resource Name (ARN) of the dataset that receives the imported data.
', 'DatasetImportJob$roleArn' => 'The ARN of the IAM role that has permissions to read from the Amazon S3 data source.
', 'DatasetImportJobSummary$datasetImportJobArn' => 'The Amazon Resource Name (ARN) of the dataset import job.
', 'DatasetSchema$schemaArn' => 'The Amazon Resource Name (ARN) of the schema.
', 'DatasetSchemaSummary$schemaArn' => 'The Amazon Resource Name (ARN) of the schema.
', 'DatasetSummary$datasetArn' => 'The Amazon Resource Name (ARN) of the dataset.
', 'DatasetUpdateSummary$schemaArn' => 'The Amazon Resource Name (ARN) of the schema that replaced the previous schema of the dataset.
', 'DeleteCampaignRequest$campaignArn' => 'The Amazon Resource Name (ARN) of the campaign to delete.
', 'DeleteDatasetGroupRequest$datasetGroupArn' => 'The ARN of the dataset group to delete.
', 'DeleteDatasetRequest$datasetArn' => 'The Amazon Resource Name (ARN) of the dataset to delete.
', 'DeleteEventTrackerRequest$eventTrackerArn' => 'The Amazon Resource Name (ARN) of the event tracker to delete.
', 'DeleteFilterRequest$filterArn' => 'The ARN of the filter to delete.
', 'DeleteMetricAttributionRequest$metricAttributionArn' => 'The metric attribution\'s Amazon Resource Name (ARN).
', 'DeleteRecommenderRequest$recommenderArn' => 'The Amazon Resource Name (ARN) of the recommender to delete.
', 'DeleteSchemaRequest$schemaArn' => 'The Amazon Resource Name (ARN) of the schema to delete.
', 'DeleteSolutionRequest$solutionArn' => 'The ARN of the solution to delete.
', 'DescribeAlgorithmRequest$algorithmArn' => 'The Amazon Resource Name (ARN) of the algorithm to describe.
', 'DescribeBatchInferenceJobRequest$batchInferenceJobArn' => 'The ARN of the batch inference job to describe.
', 'DescribeBatchSegmentJobRequest$batchSegmentJobArn' => 'The ARN of the batch segment job to describe.
', 'DescribeCampaignRequest$campaignArn' => 'The Amazon Resource Name (ARN) of the campaign.
', 'DescribeDatasetExportJobRequest$datasetExportJobArn' => 'The Amazon Resource Name (ARN) of the dataset export job to describe.
', 'DescribeDatasetGroupRequest$datasetGroupArn' => 'The Amazon Resource Name (ARN) of the dataset group to describe.
', 'DescribeDatasetImportJobRequest$datasetImportJobArn' => 'The Amazon Resource Name (ARN) of the dataset import job to describe.
', 'DescribeDatasetRequest$datasetArn' => 'The Amazon Resource Name (ARN) of the dataset to describe.
', 'DescribeEventTrackerRequest$eventTrackerArn' => 'The Amazon Resource Name (ARN) of the event tracker to describe.
', 'DescribeFeatureTransformationRequest$featureTransformationArn' => 'The Amazon Resource Name (ARN) of the feature transformation to describe.
', 'DescribeFilterRequest$filterArn' => 'The ARN of the filter to describe.
', 'DescribeMetricAttributionRequest$metricAttributionArn' => 'The metric attribution\'s Amazon Resource Name (ARN).
', 'DescribeRecipeRequest$recipeArn' => 'The Amazon Resource Name (ARN) of the recipe to describe.
', 'DescribeRecommenderRequest$recommenderArn' => 'The Amazon Resource Name (ARN) of the recommender to describe.
', 'DescribeSchemaRequest$schemaArn' => 'The Amazon Resource Name (ARN) of the schema to retrieve.
', 'DescribeSolutionRequest$solutionArn' => 'The Amazon Resource Name (ARN) of the solution to describe.
', 'DescribeSolutionVersionRequest$solutionVersionArn' => 'The Amazon Resource Name (ARN) of the solution version.
', 'EventTracker$eventTrackerArn' => 'The ARN of the event tracker.
', 'EventTracker$datasetGroupArn' => 'The Amazon Resource Name (ARN) of the dataset group that receives the event data.
', 'EventTrackerSummary$eventTrackerArn' => 'The Amazon Resource Name (ARN) of the event tracker.
', 'FeatureTransformation$featureTransformationArn' => 'The Amazon Resource Name (ARN) of the FeatureTransformation object.
', 'Filter$filterArn' => 'The ARN of the filter.
', 'Filter$datasetGroupArn' => 'The ARN of the dataset group to which the filter belongs.
', 'FilterSummary$filterArn' => 'The ARN of the filter.
', 'FilterSummary$datasetGroupArn' => 'The ARN of the dataset group to which the filter belongs.
', 'GetSolutionMetricsRequest$solutionVersionArn' => 'The Amazon Resource Name (ARN) of the solution version for which to get metrics.
', 'GetSolutionMetricsResponse$solutionVersionArn' => 'The same solution version ARN as specified in the request.
', 'ListBatchInferenceJobsRequest$solutionVersionArn' => 'The Amazon Resource Name (ARN) of the solution version from which the batch inference jobs were created.
', 'ListBatchSegmentJobsRequest$solutionVersionArn' => 'The Amazon Resource Name (ARN) of the solution version that the batch segment jobs used to generate batch segments.
', 'ListCampaignsRequest$solutionArn' => 'The Amazon Resource Name (ARN) of the solution to list the campaigns for. When a solution is not specified, all the campaigns associated with the account are listed.
', 'ListDatasetExportJobsRequest$datasetArn' => 'The Amazon Resource Name (ARN) of the dataset to list the dataset export jobs for.
', 'ListDatasetImportJobsRequest$datasetArn' => 'The Amazon Resource Name (ARN) of the dataset to list the dataset import jobs for.
', 'ListDatasetsRequest$datasetGroupArn' => 'The Amazon Resource Name (ARN) of the dataset group that contains the datasets to list.
', 'ListEventTrackersRequest$datasetGroupArn' => 'The ARN of a dataset group used to filter the response.
', 'ListFiltersRequest$datasetGroupArn' => 'The ARN of the dataset group that contains the filters.
', 'ListMetricAttributionMetricsRequest$metricAttributionArn' => 'The Amazon Resource Name (ARN) of the metric attribution to retrieve attributes for.
', 'ListMetricAttributionsRequest$datasetGroupArn' => 'The metric attributions\' dataset group Amazon Resource Name (ARN).
', 'ListRecommendersRequest$datasetGroupArn' => 'The Amazon Resource Name (ARN) of the Domain dataset group to list the recommenders for. When a Domain dataset group is not specified, all the recommenders associated with the account are listed.
', 'ListSolutionVersionsRequest$solutionArn' => 'The Amazon Resource Name (ARN) of the solution.
', 'ListSolutionsRequest$datasetGroupArn' => 'The Amazon Resource Name (ARN) of the dataset group.
', 'ListTagsForResourceRequest$resourceArn' => 'The resource\'s Amazon Resource Name.
', 'MetricAttribution$metricAttributionArn' => 'The metric attribution\'s Amazon Resource Name (ARN).
', 'MetricAttribution$datasetGroupArn' => 'The metric attribution\'s dataset group Amazon Resource Name (ARN).
', 'MetricAttributionSummary$metricAttributionArn' => 'The metric attribution\'s Amazon Resource Name (ARN).
', 'Recipe$recipeArn' => 'The Amazon Resource Name (ARN) of the recipe.
', 'Recipe$algorithmArn' => 'The Amazon Resource Name (ARN) of the algorithm that Amazon Personalize uses to train the model.
', 'Recipe$featureTransformationArn' => 'The ARN of the FeatureTransformation object.
', 'RecipeSummary$recipeArn' => 'The Amazon Resource Name (ARN) of the recipe.
', 'Recommender$recommenderArn' => 'The Amazon Resource Name (ARN) of the recommender.
', 'Recommender$datasetGroupArn' => 'The Amazon Resource Name (ARN) of the Domain dataset group that contains the recommender.
', 'Recommender$recipeArn' => 'The Amazon Resource Name (ARN) of the recipe (Domain dataset group use case) that the recommender was created for.
', 'RecommenderSummary$recommenderArn' => 'The Amazon Resource Name (ARN) of the recommender.
', 'RecommenderSummary$datasetGroupArn' => 'The Amazon Resource Name (ARN) of the Domain dataset group that contains the recommender.
', 'RecommenderSummary$recipeArn' => 'The Amazon Resource Name (ARN) of the recipe (Domain dataset group use case) that the recommender was created for.
', 'Solution$solutionArn' => 'The ARN of the solution.
', 'Solution$recipeArn' => 'The ARN of the recipe used to create the solution. This is required when performAutoML
is false.
', 'Solution$datasetGroupArn' => 'The Amazon Resource Name (ARN) of the dataset group that provides the training data.
', 'SolutionSummary$solutionArn' => 'The Amazon Resource Name (ARN) of the solution.
', 'SolutionSummary$recipeArn' => 'The Amazon Resource Name (ARN) of the recipe used by the solution.
', 'SolutionVersion$solutionVersionArn' => 'The ARN of the solution version.
', 'SolutionVersion$solutionArn' => 'The ARN of the solution.
', 'SolutionVersion$recipeArn' => 'The ARN of the recipe used in the solution.
', 'SolutionVersion$datasetGroupArn' => 'The Amazon Resource Name (ARN) of the dataset group providing the training data.
', 'SolutionVersionSummary$solutionVersionArn' => 'The Amazon Resource Name (ARN) of the solution version.
', 'StartRecommenderRequest$recommenderArn' => 'The Amazon Resource Name (ARN) of the recommender to start.
', 'StartRecommenderResponse$recommenderArn' => 'The Amazon Resource Name (ARN) of the recommender you started.
', 'StopRecommenderRequest$recommenderArn' => 'The Amazon Resource Name (ARN) of the recommender to stop.
', 'StopRecommenderResponse$recommenderArn' => 'The Amazon Resource Name (ARN) of the recommender you stopped.
', 'StopSolutionVersionCreationRequest$solutionVersionArn' => 'The Amazon Resource Name (ARN) of the solution version you want to stop creating.
', 'TagResourceRequest$resourceArn' => 'The resource\'s Amazon Resource Name (ARN).
', 'UntagResourceRequest$resourceArn' => 'The resource\'s Amazon Resource Name (ARN).
', 'UpdateCampaignRequest$campaignArn' => 'The Amazon Resource Name (ARN) of the campaign.
', 'UpdateCampaignRequest$solutionVersionArn' => 'The ARN of a new solution version to deploy.
', 'UpdateCampaignResponse$campaignArn' => 'The same campaign ARN as given in the request.
', 'UpdateDatasetRequest$datasetArn' => 'The Amazon Resource Name (ARN) of the dataset that you want to update.
', 'UpdateDatasetRequest$schemaArn' => 'The Amazon Resource Name (ARN) of the new schema you want to use.
', 'UpdateDatasetResponse$datasetArn' => 'The Amazon Resource Name (ARN) of the dataset you updated.
', 'UpdateMetricAttributionRequest$metricAttributionArn' => 'The Amazon Resource Name (ARN) for the metric attribution to update.
', 'UpdateMetricAttributionResponse$metricAttributionArn' => 'The Amazon Resource Name (ARN) for the metric attribution that you updated.
', 'UpdateRecommenderRequest$recommenderArn' => 'The Amazon Resource Name (ARN) of the recommender to modify.
', 'UpdateRecommenderResponse$recommenderArn' => 'The same recommender Amazon Resource Name (ARN) as given in the request.
', ], ], 'ArnList' => [ 'base' => NULL, 'refs' => [ 'AutoMLConfig$recipeList' => 'The list of candidate recipes.
', ], ], 'AutoMLConfig' => [ 'base' => 'When the solution performs AutoML (performAutoML
is true in CreateSolution), Amazon Personalize determines which recipe, from the specified list, optimizes the given metric. Amazon Personalize then uses that recipe for the solution.
', 'refs' => [ 'SolutionConfig$autoMLConfig' => 'The AutoMLConfig object containing a list of recipes to search when AutoML is performed.
', ], ], 'AutoMLResult' => [ 'base' => 'When the solution performs AutoML (performAutoML
is true in CreateSolution), specifies the recipe that best optimized the specified metric.
', 'refs' => [ 'Solution$autoMLResult' => 'When performAutoML
is true, specifies the best recipe found.
', ], ], 'AvroSchema' => [ 'base' => NULL, 'refs' => [ 'CreateSchemaRequest$schema' => 'A schema in Avro JSON format.
', 'DatasetSchema$schema' => 'The schema.
', ], ], 'BatchInferenceJob' => [ 'base' => 'Contains information on a batch inference job.
', 'refs' => [ 'DescribeBatchInferenceJobResponse$batchInferenceJob' => 'Information on the specified batch inference job.
', ], ], 'BatchInferenceJobConfig' => [ 'base' => 'The configuration details of a batch inference job.
', 'refs' => [ 'BatchInferenceJob$batchInferenceJobConfig' => 'A string to string map of the configuration details of a batch inference job.
', 'CreateBatchInferenceJobRequest$batchInferenceJobConfig' => 'The configuration details of a batch inference job.
', ], ], 'BatchInferenceJobInput' => [ 'base' => 'The input configuration of a batch inference job.
', 'refs' => [ 'BatchInferenceJob$jobInput' => 'The Amazon S3 path that leads to the input data used to generate the batch inference job.
', 'CreateBatchInferenceJobRequest$jobInput' => 'The Amazon S3 path that leads to the input file to base your recommendations on. The input material must be in JSON format.
', ], ], 'BatchInferenceJobOutput' => [ 'base' => 'The output configuration parameters of a batch inference job.
', 'refs' => [ 'BatchInferenceJob$jobOutput' => 'The Amazon S3 bucket that contains the output data generated by the batch inference job.
', 'CreateBatchInferenceJobRequest$jobOutput' => 'The path to the Amazon S3 bucket where the job\'s output will be stored.
', ], ], 'BatchInferenceJobSummary' => [ 'base' => 'A truncated version of the BatchInferenceJob. The ListBatchInferenceJobs operation returns a list of batch inference job summaries.
', 'refs' => [ 'BatchInferenceJobs$member' => NULL, ], ], 'BatchInferenceJobs' => [ 'base' => NULL, 'refs' => [ 'ListBatchInferenceJobsResponse$batchInferenceJobs' => 'A list containing information on each job that is returned.
', ], ], 'BatchSegmentJob' => [ 'base' => 'Contains information on a batch segment job.
', 'refs' => [ 'DescribeBatchSegmentJobResponse$batchSegmentJob' => 'Information on the specified batch segment job.
', ], ], 'BatchSegmentJobInput' => [ 'base' => 'The input configuration of a batch segment job.
', 'refs' => [ 'BatchSegmentJob$jobInput' => 'The Amazon S3 path that leads to the input data used to generate the batch segment job.
', 'CreateBatchSegmentJobRequest$jobInput' => 'The Amazon S3 path for the input data used to generate the batch segment job.
', ], ], 'BatchSegmentJobOutput' => [ 'base' => 'The output configuration parameters of a batch segment job.
', 'refs' => [ 'BatchSegmentJob$jobOutput' => 'The Amazon S3 bucket that contains the output data generated by the batch segment job.
', 'CreateBatchSegmentJobRequest$jobOutput' => 'The Amazon S3 path for the bucket where the job\'s output will be stored.
', ], ], 'BatchSegmentJobSummary' => [ 'base' => 'A truncated version of the BatchSegmentJob datatype. The ListBatchSegmentJobs operation returns a list of batch segment job summaries.
', 'refs' => [ 'BatchSegmentJobs$member' => NULL, ], ], 'BatchSegmentJobs' => [ 'base' => NULL, 'refs' => [ 'ListBatchSegmentJobsResponse$batchSegmentJobs' => 'A list containing information on each job that is returned.
', ], ], 'Boolean' => [ 'base' => NULL, 'refs' => [ 'CreateDatasetImportJobRequest$publishAttributionMetricsToS3' => 'If you created a metric attribution, specify whether to publish metrics for this import job to Amazon S3.
', 'CreateSolutionRequest$performHPO' => 'Whether to perform hyperparameter optimization (HPO) on the specified or selected recipe. The default is false
.
When performing AutoML, this parameter is always true
and you should not set it to false
.
', 'DatasetImportJob$publishAttributionMetricsToS3' => 'Whether the job publishes metrics to Amazon S3 for a metric attribution.
', ], ], 'Campaign' => [ 'base' => 'An object that describes the deployment of a solution version. For more information on campaigns, see CreateCampaign.
', 'refs' => [ 'DescribeCampaignResponse$campaign' => 'The properties of the campaign.
', ], ], 'CampaignConfig' => [ 'base' => 'The configuration details of a campaign.
', 'refs' => [ 'Campaign$campaignConfig' => 'The configuration details of a campaign.
', 'CampaignUpdateSummary$campaignConfig' => NULL, 'CreateCampaignRequest$campaignConfig' => 'The configuration details of a campaign.
', 'UpdateCampaignRequest$campaignConfig' => 'The configuration details of a campaign.
', ], ], 'CampaignSummary' => [ 'base' => 'Provides a summary of the properties of a campaign. For a complete listing, call the DescribeCampaign API.
', 'refs' => [ 'Campaigns$member' => NULL, ], ], 'CampaignUpdateSummary' => [ 'base' => 'Provides a summary of the properties of a campaign update. For a complete listing, call the DescribeCampaign API.
', 'refs' => [ 'Campaign$latestCampaignUpdate' => NULL, ], ], 'Campaigns' => [ 'base' => NULL, 'refs' => [ 'ListCampaignsResponse$campaigns' => 'A list of the campaigns.
', ], ], 'CategoricalHyperParameterRange' => [ 'base' => 'Provides the name and range of a categorical hyperparameter.
', 'refs' => [ 'CategoricalHyperParameterRanges$member' => NULL, ], ], 'CategoricalHyperParameterRanges' => [ 'base' => NULL, 'refs' => [ 'HyperParameterRanges$categoricalHyperParameterRanges' => 'The categorical hyperparameters and their ranges.
', ], ], 'CategoricalValue' => [ 'base' => NULL, 'refs' => [ 'CategoricalValues$member' => NULL, ], ], 'CategoricalValues' => [ 'base' => NULL, 'refs' => [ 'CategoricalHyperParameterRange$values' => 'A list of the categories for the hyperparameter.
', 'DefaultCategoricalHyperParameterRange$values' => 'A list of the categories for the hyperparameter.
', ], ], 'ColumnName' => [ 'base' => NULL, 'refs' => [ 'ColumnNamesList$member' => NULL, ], ], 'ColumnNamesList' => [ 'base' => NULL, 'refs' => [ 'ExcludedDatasetColumns$value' => NULL, ], ], 'ContinuousHyperParameterRange' => [ 'base' => 'Provides the name and range of a continuous hyperparameter.
', 'refs' => [ 'ContinuousHyperParameterRanges$member' => NULL, ], ], 'ContinuousHyperParameterRanges' => [ 'base' => NULL, 'refs' => [ 'HyperParameterRanges$continuousHyperParameterRanges' => 'The continuous hyperparameters and their ranges.
', ], ], 'ContinuousMaxValue' => [ 'base' => NULL, 'refs' => [ 'ContinuousHyperParameterRange$maxValue' => 'The maximum allowable value for the hyperparameter.
', 'DefaultContinuousHyperParameterRange$maxValue' => 'The maximum allowable value for the hyperparameter.
', ], ], 'ContinuousMinValue' => [ 'base' => NULL, 'refs' => [ 'ContinuousHyperParameterRange$minValue' => 'The minimum allowable value for the hyperparameter.
', 'DefaultContinuousHyperParameterRange$minValue' => 'The minimum allowable value for the hyperparameter.
', ], ], 'CreateBatchInferenceJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'CreateBatchInferenceJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'CreateBatchSegmentJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'CreateBatchSegmentJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'CreateCampaignRequest' => [ 'base' => NULL, 'refs' => [], ], 'CreateCampaignResponse' => [ 'base' => NULL, 'refs' => [], ], 'CreateDatasetExportJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'CreateDatasetExportJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'CreateDatasetGroupRequest' => [ 'base' => NULL, 'refs' => [], ], 'CreateDatasetGroupResponse' => [ 'base' => NULL, 'refs' => [], ], 'CreateDatasetImportJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'CreateDatasetImportJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'CreateDatasetRequest' => [ 'base' => NULL, 'refs' => [], ], 'CreateDatasetResponse' => [ 'base' => NULL, 'refs' => [], ], 'CreateEventTrackerRequest' => [ 'base' => NULL, 'refs' => [], ], 'CreateEventTrackerResponse' => [ 'base' => NULL, 'refs' => [], ], 'CreateFilterRequest' => [ 'base' => NULL, 'refs' => [], ], 'CreateFilterResponse' => [ 'base' => NULL, 'refs' => [], ], 'CreateMetricAttributionRequest' => [ 'base' => NULL, 'refs' => [], ], 'CreateMetricAttributionResponse' => [ 'base' => NULL, 'refs' => [], ], 'CreateRecommenderRequest' => [ 'base' => NULL, 'refs' => [], ], 'CreateRecommenderResponse' => [ 'base' => NULL, 'refs' => [], ], 'CreateSchemaRequest' => [ 'base' => NULL, 'refs' => [], ], 'CreateSchemaResponse' => [ 'base' => NULL, 'refs' => [], ], 'CreateSolutionRequest' => [ 'base' => NULL, 'refs' => [], ], 'CreateSolutionResponse' => [ 'base' => NULL, 'refs' => [], ], 'CreateSolutionVersionRequest' => [ 'base' => NULL, 'refs' => [], ], 'CreateSolutionVersionResponse' => [ 'base' => NULL, 'refs' => [], ], 'DataSource' => [ 'base' => 'Describes the data source that contains the data to upload to a dataset.
', 'refs' => [ 'CreateDatasetImportJobRequest$dataSource' => 'The Amazon S3 bucket that contains the training data to import.
', 'DatasetImportJob$dataSource' => 'The Amazon S3 bucket that contains the training data to import.
', ], ], 'Dataset' => [ 'base' => 'Provides metadata for a dataset.
', 'refs' => [ 'DescribeDatasetResponse$dataset' => 'A listing of the dataset\'s properties.
', ], ], 'DatasetExportJob' => [ 'base' => 'Describes a job that exports a dataset to an Amazon S3 bucket. For more information, see CreateDatasetExportJob.
A dataset export job can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
', 'refs' => [ 'DescribeDatasetExportJobResponse$datasetExportJob' => 'Information about the dataset export job, including the status.
The status is one of the following values:
CREATE PENDING
CREATE IN_PROGRESS
ACTIVE
CREATE FAILED
', ], ], 'DatasetExportJobOutput' => [ 'base' => 'The output configuration parameters of a dataset export job.
', 'refs' => [ 'CreateDatasetExportJobRequest$jobOutput' => 'The path to the Amazon S3 bucket where the job\'s output is stored.
', 'DatasetExportJob$jobOutput' => 'The path to the Amazon S3 bucket where the job\'s output is stored. For example:
s3://bucket-name/folder-name/
', ], ], 'DatasetExportJobSummary' => [ 'base' => 'Provides a summary of the properties of a dataset export job. For a complete listing, call the DescribeDatasetExportJob API.
', 'refs' => [ 'DatasetExportJobs$member' => NULL, ], ], 'DatasetExportJobs' => [ 'base' => NULL, 'refs' => [ 'ListDatasetExportJobsResponse$datasetExportJobs' => 'The list of dataset export jobs.
', ], ], 'DatasetGroup' => [ 'base' => 'A dataset group is a collection of related datasets (Interactions, User, and Item). You create a dataset group by calling CreateDatasetGroup. You then create a dataset and add it to a dataset group by calling CreateDataset. The dataset group is used to create and train a solution by calling CreateSolution. A dataset group can contain only one of each type of dataset.
You can specify a Key Management Service (KMS) key to encrypt the datasets in the group.
', 'refs' => [ 'DescribeDatasetGroupResponse$datasetGroup' => 'A listing of the dataset group\'s properties.
', ], ], 'DatasetGroupSummary' => [ 'base' => 'Provides a summary of the properties of a dataset group. For a complete listing, call the DescribeDatasetGroup API.
', 'refs' => [ 'DatasetGroups$member' => NULL, ], ], 'DatasetGroups' => [ 'base' => NULL, 'refs' => [ 'ListDatasetGroupsResponse$datasetGroups' => 'The list of your dataset groups.
', ], ], 'DatasetImportJob' => [ 'base' => 'Describes a job that imports training data from a data source (Amazon S3 bucket) to an Amazon Personalize dataset. For more information, see CreateDatasetImportJob.
A dataset import job can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
', 'refs' => [ 'DescribeDatasetImportJobResponse$datasetImportJob' => 'Information about the dataset import job, including the status.
The status is one of the following values:
CREATE PENDING
CREATE IN_PROGRESS
ACTIVE
CREATE FAILED
', ], ], 'DatasetImportJobSummary' => [ 'base' => 'Provides a summary of the properties of a dataset import job. For a complete listing, call the DescribeDatasetImportJob API.
', 'refs' => [ 'DatasetImportJobs$member' => NULL, ], ], 'DatasetImportJobs' => [ 'base' => NULL, 'refs' => [ 'ListDatasetImportJobsResponse$datasetImportJobs' => 'The list of dataset import jobs.
', ], ], 'DatasetSchema' => [ 'base' => 'Describes the schema for a dataset. For more information on schemas, see CreateSchema.
', 'refs' => [ 'DescribeSchemaResponse$schema' => 'The requested schema.
', ], ], 'DatasetSchemaSummary' => [ 'base' => 'Provides a summary of the properties of a dataset schema. For a complete listing, call the DescribeSchema API.
', 'refs' => [ 'Schemas$member' => NULL, ], ], 'DatasetSummary' => [ 'base' => 'Provides a summary of the properties of a dataset. For a complete listing, call the DescribeDataset API.
', 'refs' => [ 'Datasets$member' => NULL, ], ], 'DatasetType' => [ 'base' => NULL, 'refs' => [ 'CreateDatasetRequest$datasetType' => 'The type of dataset.
One of the following (case insensitive) values:
Interactions
Items
Users
', 'Dataset$datasetType' => 'One of the following values:
Interactions
Items
Users
', 'DatasetSummary$datasetType' => 'The dataset type. One of the following values:
Interactions
Items
Users
Event-Interactions
', ], ], 'DatasetUpdateSummary' => [ 'base' => 'Describes an update to a dataset.
', 'refs' => [ 'Dataset$latestDatasetUpdate' => 'Describes the latest update to the dataset.
', ], ], 'Datasets' => [ 'base' => NULL, 'refs' => [ 'ListDatasetsResponse$datasets' => 'An array of Dataset
objects. Each object provides metadata information.
', ], ], 'Date' => [ 'base' => NULL, 'refs' => [ 'Algorithm$creationDateTime' => 'The date and time (in Unix time) that the algorithm was created.
', 'Algorithm$lastUpdatedDateTime' => 'The date and time (in Unix time) that the algorithm was last updated.
', 'BatchInferenceJob$creationDateTime' => 'The time at which the batch inference job was created.
', 'BatchInferenceJob$lastUpdatedDateTime' => 'The time at which the batch inference job was last updated.
', 'BatchInferenceJobSummary$creationDateTime' => 'The time at which the batch inference job was created.
', 'BatchInferenceJobSummary$lastUpdatedDateTime' => 'The time at which the batch inference job was last updated.
', 'BatchSegmentJob$creationDateTime' => 'The time at which the batch segment job was created.
', 'BatchSegmentJob$lastUpdatedDateTime' => 'The time at which the batch segment job was last updated.
', 'BatchSegmentJobSummary$creationDateTime' => 'The time at which the batch segment job was created.
', 'BatchSegmentJobSummary$lastUpdatedDateTime' => 'The time at which the batch segment job was last updated.
', 'Campaign$creationDateTime' => 'The date and time (in Unix format) that the campaign was created.
', 'Campaign$lastUpdatedDateTime' => 'The date and time (in Unix format) that the campaign was last updated.
', 'CampaignSummary$creationDateTime' => 'The date and time (in Unix time) that the campaign was created.
', 'CampaignSummary$lastUpdatedDateTime' => 'The date and time (in Unix time) that the campaign was last updated.
', 'CampaignUpdateSummary$creationDateTime' => 'The date and time (in Unix time) that the campaign update was created.
', 'CampaignUpdateSummary$lastUpdatedDateTime' => 'The date and time (in Unix time) that the campaign update was last updated.
', 'Dataset$creationDateTime' => 'The creation date and time (in Unix time) of the dataset.
', 'Dataset$lastUpdatedDateTime' => 'A time stamp that shows when the dataset was updated.
', 'DatasetExportJob$creationDateTime' => 'The creation date and time (in Unix time) of the dataset export job.
', 'DatasetExportJob$lastUpdatedDateTime' => 'The date and time (in Unix time) the status of the dataset export job was last updated.
', 'DatasetExportJobSummary$creationDateTime' => 'The date and time (in Unix time) that the dataset export job was created.
', 'DatasetExportJobSummary$lastUpdatedDateTime' => 'The date and time (in Unix time) that the dataset export job status was last updated.
', 'DatasetGroup$creationDateTime' => 'The creation date and time (in Unix time) of the dataset group.
', 'DatasetGroup$lastUpdatedDateTime' => 'The last update date and time (in Unix time) of the dataset group.
', 'DatasetGroupSummary$creationDateTime' => 'The date and time (in Unix time) that the dataset group was created.
', 'DatasetGroupSummary$lastUpdatedDateTime' => 'The date and time (in Unix time) that the dataset group was last updated.
', 'DatasetImportJob$creationDateTime' => 'The creation date and time (in Unix time) of the dataset import job.
', 'DatasetImportJob$lastUpdatedDateTime' => 'The date and time (in Unix time) the dataset was last updated.
', 'DatasetImportJobSummary$creationDateTime' => 'The date and time (in Unix time) that the dataset import job was created.
', 'DatasetImportJobSummary$lastUpdatedDateTime' => 'The date and time (in Unix time) that the dataset import job status was last updated.
', 'DatasetSchema$creationDateTime' => 'The date and time (in Unix time) that the schema was created.
', 'DatasetSchema$lastUpdatedDateTime' => 'The date and time (in Unix time) that the schema was last updated.
', 'DatasetSchemaSummary$creationDateTime' => 'The date and time (in Unix time) that the schema was created.
', 'DatasetSchemaSummary$lastUpdatedDateTime' => 'The date and time (in Unix time) that the schema was last updated.
', 'DatasetSummary$creationDateTime' => 'The date and time (in Unix time) that the dataset was created.
', 'DatasetSummary$lastUpdatedDateTime' => 'The date and time (in Unix time) that the dataset was last updated.
', 'DatasetUpdateSummary$creationDateTime' => 'The creation date and time (in Unix time) of the dataset update.
', 'DatasetUpdateSummary$lastUpdatedDateTime' => 'The last update date and time (in Unix time) of the dataset.
', 'EventTracker$creationDateTime' => 'The date and time (in Unix format) that the event tracker was created.
', 'EventTracker$lastUpdatedDateTime' => 'The date and time (in Unix time) that the event tracker was last updated.
', 'EventTrackerSummary$creationDateTime' => 'The date and time (in Unix time) that the event tracker was created.
', 'EventTrackerSummary$lastUpdatedDateTime' => 'The date and time (in Unix time) that the event tracker was last updated.
', 'FeatureTransformation$creationDateTime' => 'The creation date and time (in Unix time) of the feature transformation.
', 'FeatureTransformation$lastUpdatedDateTime' => 'The last update date and time (in Unix time) of the feature transformation.
', 'Filter$creationDateTime' => 'The time at which the filter was created.
', 'Filter$lastUpdatedDateTime' => 'The time at which the filter was last updated.
', 'FilterSummary$creationDateTime' => 'The time at which the filter was created.
', 'FilterSummary$lastUpdatedDateTime' => 'The time at which the filter was last updated.
', 'MetricAttribution$creationDateTime' => 'The metric attribution\'s creation date time.
', 'MetricAttribution$lastUpdatedDateTime' => 'The metric attribution\'s last updated date time.
', 'MetricAttributionSummary$creationDateTime' => 'The metric attribution\'s creation date time.
', 'MetricAttributionSummary$lastUpdatedDateTime' => 'The metric attribution\'s last updated date time.
', 'Recipe$creationDateTime' => 'The date and time (in Unix format) that the recipe was created.
', 'Recipe$lastUpdatedDateTime' => 'The date and time (in Unix format) that the recipe was last updated.
', 'RecipeSummary$creationDateTime' => 'The date and time (in Unix time) that the recipe was created.
', 'RecipeSummary$lastUpdatedDateTime' => 'The date and time (in Unix time) that the recipe was last updated.
', 'Recommender$creationDateTime' => 'The date and time (in Unix format) that the recommender was created.
', 'Recommender$lastUpdatedDateTime' => 'The date and time (in Unix format) that the recommender was last updated.
', 'RecommenderSummary$creationDateTime' => 'The date and time (in Unix format) that the recommender was created.
', 'RecommenderSummary$lastUpdatedDateTime' => 'The date and time (in Unix format) that the recommender was last updated.
', 'RecommenderUpdateSummary$creationDateTime' => 'The date and time (in Unix format) that the recommender update was created.
', 'RecommenderUpdateSummary$lastUpdatedDateTime' => 'The date and time (in Unix time) that the recommender update was last updated.
', 'Solution$creationDateTime' => 'The creation date and time (in Unix time) of the solution.
', 'Solution$lastUpdatedDateTime' => 'The date and time (in Unix time) that the solution was last updated.
', 'SolutionSummary$creationDateTime' => 'The date and time (in Unix time) that the solution was created.
', 'SolutionSummary$lastUpdatedDateTime' => 'The date and time (in Unix time) that the solution was last updated.
', 'SolutionVersion$creationDateTime' => 'The date and time (in Unix time) that this version of the solution was created.
', 'SolutionVersion$lastUpdatedDateTime' => 'The date and time (in Unix time) that the solution was last updated.
', 'SolutionVersionSummary$creationDateTime' => 'The date and time (in Unix time) that this version of a solution was created.
', 'SolutionVersionSummary$lastUpdatedDateTime' => 'The date and time (in Unix time) that the solution version was last updated.
', ], ], 'DefaultCategoricalHyperParameterRange' => [ 'base' => 'Provides the name and default range of a categorical hyperparameter and whether the hyperparameter is tunable. A tunable hyperparameter can have its value determined during hyperparameter optimization (HPO).
', 'refs' => [ 'DefaultCategoricalHyperParameterRanges$member' => NULL, ], ], 'DefaultCategoricalHyperParameterRanges' => [ 'base' => NULL, 'refs' => [ 'DefaultHyperParameterRanges$categoricalHyperParameterRanges' => 'The categorical hyperparameters and their default ranges.
', ], ], 'DefaultContinuousHyperParameterRange' => [ 'base' => 'Provides the name and default range of a continuous hyperparameter and whether the hyperparameter is tunable. A tunable hyperparameter can have its value determined during hyperparameter optimization (HPO).
', 'refs' => [ 'DefaultContinuousHyperParameterRanges$member' => NULL, ], ], 'DefaultContinuousHyperParameterRanges' => [ 'base' => NULL, 'refs' => [ 'DefaultHyperParameterRanges$continuousHyperParameterRanges' => 'The continuous hyperparameters and their default ranges.
', ], ], 'DefaultHyperParameterRanges' => [ 'base' => 'Specifies the hyperparameters and their default ranges. Hyperparameters can be categorical, continuous, or integer-valued.
', 'refs' => [ 'Algorithm$defaultHyperParameterRanges' => 'Specifies the default hyperparameters, their ranges, and whether they are tunable. A tunable hyperparameter can have its value determined during hyperparameter optimization (HPO).
', ], ], 'DefaultIntegerHyperParameterRange' => [ 'base' => 'Provides the name and default range of an integer-valued hyperparameter and whether the hyperparameter is tunable. A tunable hyperparameter can have its value determined during hyperparameter optimization (HPO).
', 'refs' => [ 'DefaultIntegerHyperParameterRanges$member' => NULL, ], ], 'DefaultIntegerHyperParameterRanges' => [ 'base' => NULL, 'refs' => [ 'DefaultHyperParameterRanges$integerHyperParameterRanges' => 'The integer-valued hyperparameters and their default ranges.
', ], ], 'DeleteCampaignRequest' => [ 'base' => NULL, 'refs' => [], ], 'DeleteDatasetGroupRequest' => [ 'base' => NULL, 'refs' => [], ], 'DeleteDatasetRequest' => [ 'base' => NULL, 'refs' => [], ], 'DeleteEventTrackerRequest' => [ 'base' => NULL, 'refs' => [], ], 'DeleteFilterRequest' => [ 'base' => NULL, 'refs' => [], ], 'DeleteMetricAttributionRequest' => [ 'base' => NULL, 'refs' => [], ], 'DeleteRecommenderRequest' => [ 'base' => NULL, 'refs' => [], ], 'DeleteSchemaRequest' => [ 'base' => NULL, 'refs' => [], ], 'DeleteSolutionRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeAlgorithmRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeAlgorithmResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeBatchInferenceJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeBatchInferenceJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeBatchSegmentJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeBatchSegmentJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeCampaignRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeCampaignResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeDatasetExportJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeDatasetExportJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeDatasetGroupRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeDatasetGroupResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeDatasetImportJobRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeDatasetImportJobResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeDatasetRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeDatasetResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeEventTrackerRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeEventTrackerResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeFeatureTransformationRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeFeatureTransformationResponse' => [ 'base' => NULL, 'refs' => [], ], 
'DescribeFilterRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeFilterResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeMetricAttributionRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeMetricAttributionResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeRecipeRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeRecipeResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeRecommenderRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeRecommenderResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeSchemaRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeSchemaResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeSolutionRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeSolutionResponse' => [ 'base' => NULL, 'refs' => [], ], 'DescribeSolutionVersionRequest' => [ 'base' => NULL, 'refs' => [], ], 'DescribeSolutionVersionResponse' => [ 'base' => NULL, 'refs' => [], ], 'Description' => [ 'base' => NULL, 'refs' => [ 'Recipe$description' => 'The description of the recipe.
', ], ], 'DockerURI' => [ 'base' => NULL, 'refs' => [ 'AlgorithmImage$dockerURI' => 'The URI of the Docker container for the algorithm image.
', ], ], 'Domain' => [ 'base' => NULL, 'refs' => [ 'CreateDatasetGroupRequest$domain' => 'The domain of the dataset group. Specify a domain to create a Domain dataset group. The domain you specify determines the default schemas for datasets and the use cases available for recommenders. If you don\'t specify a domain, you create a Custom dataset group with solution versions that you deploy with a campaign.
', 'CreateDatasetGroupResponse$domain' => 'The domain for the new Domain dataset group.
', 'CreateSchemaRequest$domain' => 'The domain for the schema. If you are creating a schema for a dataset in a Domain dataset group, specify the domain you chose when you created the Domain dataset group.
', 'DatasetGroup$domain' => 'The domain of a Domain dataset group.
', 'DatasetGroupSummary$domain' => 'The domain of a Domain dataset group.
', 'DatasetSchema$domain' => 'The domain of a schema that you created for a dataset in a Domain dataset group.
', 'DatasetSchemaSummary$domain' => 'The domain of a schema that you created for a dataset in a Domain dataset group.
', 'ListRecipesRequest$domain' => 'Filters returned recipes by domain for a Domain dataset group. Only recipes (Domain dataset group use cases) for this domain are included in the response. If you don\'t specify a domain, all recipes are returned.
', 'RecipeSummary$domain' => 'The domain of the recipe (if the recipe is a Domain dataset group use case).
', ], ], 'ErrorMessage' => [ 'base' => NULL, 'refs' => [ 'InvalidInputException$message' => NULL, 'InvalidNextTokenException$message' => NULL, 'LimitExceededException$message' => NULL, 'ResourceAlreadyExistsException$message' => NULL, 'ResourceInUseException$message' => NULL, 'ResourceNotFoundException$message' => NULL, 'TooManyTagKeysException$message' => NULL, 'TooManyTagsException$message' => NULL, ], ], 'EventTracker' => [ 'base' => 'Provides information about an event tracker.
', 'refs' => [ 'DescribeEventTrackerResponse$eventTracker' => 'An object that describes the event tracker.
', ], ], 'EventTrackerSummary' => [ 'base' => 'Provides a summary of the properties of an event tracker. For a complete listing, call the DescribeEventTracker API.
', 'refs' => [ 'EventTrackers$member' => NULL, ], ], 'EventTrackers' => [ 'base' => NULL, 'refs' => [ 'ListEventTrackersResponse$eventTrackers' => 'A list of event trackers.
', ], ], 'EventType' => [ 'base' => NULL, 'refs' => [ 'CreateSolutionRequest$eventType' => 'When you have multiple event types (using an EVENT_TYPE
schema field), this parameter specifies which event type (for example, \'click\' or \'like\') is used for training the model.
If you do not provide an eventType
, Amazon Personalize will use all interactions for training with equal weight regardless of type.
The metric\'s event type.
', 'Solution$eventType' => 'The event type (for example, \'click\' or \'like\') that is used for training the model. If no eventType
is provided, Amazon Personalize uses all interactions for training with equal weight regardless of type.
The event type (for example, \'click\' or \'like\') that is used for training the model.
', ], ], 'EventValueThreshold' => [ 'base' => NULL, 'refs' => [ 'SolutionConfig$eventValueThreshold' => 'Only events with a value greater than or equal to this threshold are used for training a model.
', ], ], 'ExcludedDatasetColumns' => [ 'base' => NULL, 'refs' => [ 'TrainingDataConfig$excludedDatasetColumns' => 'Specifies the columns to exclude from training. Each key is a dataset type, and each value is a list of columns. Exclude columns to control what data Amazon Personalize uses to generate recommendations. For example, you might have a column that you want to use only to filter recommendations. You can exclude this column from training and Amazon Personalize considers it only when filtering.
', ], ], 'FailureReason' => [ 'base' => NULL, 'refs' => [ 'BatchInferenceJob$failureReason' => 'If the batch inference job failed, the reason for the failure.
', 'BatchInferenceJobSummary$failureReason' => 'If the batch inference job failed, the reason for the failure.
', 'BatchSegmentJob$failureReason' => 'If the batch segment job failed, the reason for the failure.
', 'BatchSegmentJobSummary$failureReason' => 'If the batch segment job failed, the reason for the failure.
', 'Campaign$failureReason' => 'If a campaign fails, the reason behind the failure.
', 'CampaignSummary$failureReason' => 'If a campaign fails, the reason behind the failure.
', 'CampaignUpdateSummary$failureReason' => 'If a campaign update fails, the reason behind the failure.
', 'DatasetExportJob$failureReason' => 'If a dataset export job fails, provides the reason why.
', 'DatasetExportJobSummary$failureReason' => 'If a dataset export job fails, the reason behind the failure.
', 'DatasetGroup$failureReason' => 'If creating a dataset group fails, provides the reason why.
', 'DatasetGroupSummary$failureReason' => 'If creating a dataset group fails, the reason behind the failure.
', 'DatasetImportJob$failureReason' => 'If a dataset import job fails, provides the reason why.
', 'DatasetImportJobSummary$failureReason' => 'If a dataset import job fails, the reason behind the failure.
', 'DatasetUpdateSummary$failureReason' => 'If updating a dataset fails, provides the reason why.
', 'Filter$failureReason' => 'If the filter failed, the reason for its failure.
', 'FilterSummary$failureReason' => 'If the filter failed, the reason for the failure.
', 'MetricAttribution$failureReason' => 'The metric attribution\'s failure reason.
', 'MetricAttributionSummary$failureReason' => 'The metric attribution\'s failure reason.
', 'Recommender$failureReason' => 'If a recommender fails, the reason behind the failure.
', 'RecommenderUpdateSummary$failureReason' => 'If a recommender update fails, the reason behind the failure.
', 'SolutionVersion$failureReason' => 'If training a solution version fails, the reason for the failure.
', 'SolutionVersionSummary$failureReason' => 'If a solution version fails, the reason behind the failure.
', ], ], 'FeatureTransformation' => [ 'base' => 'Provides feature transformation information. Feature transformation is the process of modifying raw input data into a form more suitable for model training.
', 'refs' => [ 'DescribeFeatureTransformationResponse$featureTransformation' => 'A listing of the FeatureTransformation properties.
', ], ], 'FeatureTransformationParameters' => [ 'base' => NULL, 'refs' => [ 'SolutionConfig$featureTransformationParameters' => 'Lists the feature transformation parameters.
', ], ], 'FeaturizationParameters' => [ 'base' => NULL, 'refs' => [ 'FeatureTransformation$defaultParameters' => 'Provides the default parameters for feature transformation.
', ], ], 'Filter' => [ 'base' => 'Contains information on a recommendation filter, including its ARN, status, and filter expression.
', 'refs' => [ 'DescribeFilterResponse$filter' => 'The filter\'s details.
', ], ], 'FilterExpression' => [ 'base' => NULL, 'refs' => [ 'CreateFilterRequest$filterExpression' => 'The filter expression defines which items are included or excluded from recommendations. The filter expression must follow specific format rules. For information about filter expression structure and syntax, see Filter expressions.
', 'Filter$filterExpression' => 'Specifies the type of item interactions to filter out of recommendation results. The filter expression must follow specific format rules. For information about filter expression structure and syntax, see Filter expressions.
', ], ], 'FilterSummary' => [ 'base' => 'A short summary of a filter\'s attributes.
', 'refs' => [ 'Filters$member' => NULL, ], ], 'Filters' => [ 'base' => NULL, 'refs' => [ 'ListFiltersResponse$Filters' => 'A list of returned filters.
', ], ], 'GetSolutionMetricsRequest' => [ 'base' => NULL, 'refs' => [], ], 'GetSolutionMetricsResponse' => [ 'base' => NULL, 'refs' => [], ], 'HPOConfig' => [ 'base' => 'Describes the properties for hyperparameter optimization (HPO).
', 'refs' => [ 'SolutionConfig$hpoConfig' => 'Describes the properties for hyperparameter optimization (HPO).
', ], ], 'HPOObjective' => [ 'base' => 'The metric to optimize during hyperparameter optimization (HPO).
Amazon Personalize doesn\'t support configuring the hpoObjective
at this time.
The metric to optimize during HPO.
Amazon Personalize doesn\'t support configuring the hpoObjective
at this time.
The type of the metric. Valid values are Maximize
and Minimize
.
The maximum number of training jobs when you create a solution version. The maximum value for maxNumberOfTrainingJobs
is 40
.
The maximum number of parallel training jobs when you create a solution version. The maximum value for maxParallelTrainingJobs
is 10
.
Describes the resource configuration for hyperparameter optimization (HPO).
', 'refs' => [ 'HPOConfig$hpoResourceConfig' => 'Describes the resource configuration for HPO.
', ], ], 'HyperParameterRanges' => [ 'base' => 'Specifies the hyperparameters and their ranges. Hyperparameters can be categorical, continuous, or integer-valued.
', 'refs' => [ 'HPOConfig$algorithmHyperParameterRanges' => 'The hyperparameters and their allowable ranges.
', ], ], 'HyperParameters' => [ 'base' => NULL, 'refs' => [ 'Algorithm$defaultHyperParameters' => 'Specifies the default hyperparameters.
', 'BatchInferenceJobConfig$itemExplorationConfig' => 'A string to string map specifying the exploration configuration hyperparameters, including explorationWeight
and explorationItemAgeCutOff
, you want to use to configure the amount of item exploration Amazon Personalize uses when recommending items. See User-Personalization.
Specifies the exploration configuration hyperparameters, including explorationWeight
and explorationItemAgeCutOff
, you want to use to configure the amount of item exploration Amazon Personalize uses when recommending items. Provide itemExplorationConfig
data only if your solution uses the User-Personalization recipe.
Specifies the exploration configuration hyperparameters, including explorationWeight
and explorationItemAgeCutOff
, you want to use to configure the amount of item exploration Amazon Personalize uses when recommending items. Provide itemExplorationConfig
data only if your recommenders generate personalized recommendations for a user (not popular items or similar items).
Lists the hyperparameter names and ranges.
', 'TunedHPOParams$algorithmHyperParameters' => 'A list of the hyperparameter values of the best performing model.
', ], ], 'ImportMode' => [ 'base' => NULL, 'refs' => [ 'CreateDatasetImportJobRequest$importMode' => 'Specify how to add the new records to an existing dataset. The default import mode is FULL
. If you haven\'t imported bulk records into the dataset previously, you can only specify FULL
.
Specify FULL
to overwrite all existing bulk data in your dataset. Data you imported individually is not replaced.
Specify INCREMENTAL
to append the new records to the existing data in your dataset. Amazon Personalize replaces any record with the same ID with the new one.
The import mode used by the dataset import job to import new records.
', 'DatasetImportJobSummary$importMode' => 'The import mode the dataset import job used to update the data in the dataset. For more information, see Updating existing bulk data.
', ], ], 'IngestionMode' => [ 'base' => NULL, 'refs' => [ 'CreateDatasetExportJobRequest$ingestionMode' => 'The data to export, based on how you imported the data. You can choose to export only BULK
data that you imported using a dataset import job, only PUT
data that you imported incrementally (using the console, PutEvents, PutUsers and PutItems operations), or ALL
for both types. The default value is PUT
.
The data to export, based on how you imported the data. You can choose to export BULK
data that you imported using a dataset import job, PUT
data that you imported incrementally (using the console, PutEvents, PutUsers and PutItems operations), or ALL
for both types. The default value is PUT
.
Provides the name and range of an integer-valued hyperparameter.
', 'refs' => [ 'IntegerHyperParameterRanges$member' => NULL, ], ], 'IntegerHyperParameterRanges' => [ 'base' => NULL, 'refs' => [ 'HyperParameterRanges$integerHyperParameterRanges' => 'The integer-valued hyperparameters and their ranges.
', ], ], 'IntegerMaxValue' => [ 'base' => NULL, 'refs' => [ 'DefaultIntegerHyperParameterRange$maxValue' => 'The maximum allowable value for the hyperparameter.
', 'IntegerHyperParameterRange$maxValue' => 'The maximum allowable value for the hyperparameter.
', ], ], 'IntegerMinValue' => [ 'base' => NULL, 'refs' => [ 'DefaultIntegerHyperParameterRange$minValue' => 'The minimum allowable value for the hyperparameter.
', 'IntegerHyperParameterRange$minValue' => 'The minimum allowable value for the hyperparameter.
', ], ], 'InvalidInputException' => [ 'base' => 'Provide a valid value for the field or parameter.
', 'refs' => [], ], 'InvalidNextTokenException' => [ 'base' => 'The token is not valid.
', 'refs' => [], ], 'ItemAttribute' => [ 'base' => NULL, 'refs' => [ 'OptimizationObjective$itemAttribute' => 'The numerical metadata column in an Items dataset related to the optimization objective. For example, VIDEO_LENGTH (to maximize streaming minutes), or PRICE (to maximize revenue).
', ], ], 'KmsKeyArn' => [ 'base' => NULL, 'refs' => [ 'CreateDatasetGroupRequest$kmsKeyArn' => 'The Amazon Resource Name (ARN) of a Key Management Service (KMS) key used to encrypt the datasets.
', 'DatasetGroup$kmsKeyArn' => 'The Amazon Resource Name (ARN) of the Key Management Service (KMS) key used to encrypt the datasets.
', 'S3DataConfig$kmsKeyArn' => 'The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files.
', ], ], 'LimitExceededException' => [ 'base' => 'The limit on the number of requests per second has been exceeded.
', 'refs' => [], ], 'ListBatchInferenceJobsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListBatchInferenceJobsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListBatchSegmentJobsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListBatchSegmentJobsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListCampaignsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListCampaignsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListDatasetExportJobsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListDatasetExportJobsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListDatasetGroupsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListDatasetGroupsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListDatasetImportJobsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListDatasetImportJobsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListDatasetsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListDatasetsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListEventTrackersRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListEventTrackersResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListFiltersRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListFiltersResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListMetricAttributionMetricsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListMetricAttributionMetricsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListMetricAttributionsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListMetricAttributionsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListRecipesRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListRecipesResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListRecommendersRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListRecommendersResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListSchemasRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListSchemasResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListSolutionVersionsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListSolutionVersionsResponse' => [ 'base' => NULL, 'refs' => [], ], 
'ListSolutionsRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListSolutionsResponse' => [ 'base' => NULL, 'refs' => [], ], 'ListTagsForResourceRequest' => [ 'base' => NULL, 'refs' => [], ], 'ListTagsForResourceResponse' => [ 'base' => NULL, 'refs' => [], ], 'MaxResults' => [ 'base' => NULL, 'refs' => [ 'ListBatchInferenceJobsRequest$maxResults' => 'The maximum number of batch inference job results to return in each page. The default value is 100.
', 'ListBatchSegmentJobsRequest$maxResults' => 'The maximum number of batch segment job results to return in each page. The default value is 100.
', 'ListCampaignsRequest$maxResults' => 'The maximum number of campaigns to return.
', 'ListDatasetExportJobsRequest$maxResults' => 'The maximum number of dataset export jobs to return.
', 'ListDatasetGroupsRequest$maxResults' => 'The maximum number of dataset groups to return.
', 'ListDatasetImportJobsRequest$maxResults' => 'The maximum number of dataset import jobs to return.
', 'ListDatasetsRequest$maxResults' => 'The maximum number of datasets to return.
', 'ListEventTrackersRequest$maxResults' => 'The maximum number of event trackers to return.
', 'ListFiltersRequest$maxResults' => 'The maximum number of filters to return.
', 'ListMetricAttributionMetricsRequest$maxResults' => 'The maximum number of metrics to return in one page of results.
', 'ListMetricAttributionsRequest$maxResults' => 'The maximum number of metric attributions to return in one page of results.
', 'ListRecipesRequest$maxResults' => 'The maximum number of recipes to return.
', 'ListRecommendersRequest$maxResults' => 'The maximum number of recommenders to return.
', 'ListSchemasRequest$maxResults' => 'The maximum number of schemas to return.
', 'ListSolutionVersionsRequest$maxResults' => 'The maximum number of solution versions to return.
', 'ListSolutionsRequest$maxResults' => 'The maximum number of solutions to return.
', ], ], 'MetricAttribute' => [ 'base' => 'Contains information on a metric that a metric attribution reports on. For more information, see Measuring impact of recommendations.
', 'refs' => [ 'MetricAttributes$member' => NULL, ], ], 'MetricAttributes' => [ 'base' => NULL, 'refs' => [ 'CreateMetricAttributionRequest$metrics' => 'A list of metric attributes for the metric attribution. Each metric attribute specifies an event type to track and a function. Available functions are SUM()
or SAMPLECOUNT()
. For SUM() functions, provide the dataset type (either Interactions or Items) and column to sum as a parameter. For example, SUM(Items.PRICE).
The metrics for the specified metric attribution.
', 'UpdateMetricAttributionRequest$addMetrics' => 'Add new metric attributes to the metric attribution.
', ], ], 'MetricAttributesNamesList' => [ 'base' => NULL, 'refs' => [ 'UpdateMetricAttributionRequest$removeMetrics' => 'Remove metric attributes from the metric attribution.
', ], ], 'MetricAttribution' => [ 'base' => 'Contains information on a metric attribution. A metric attribution creates reports on the data that you import into Amazon Personalize. Depending on how you import the data, you can view reports in Amazon CloudWatch or Amazon S3. For more information, see Measuring impact of recommendations.
', 'refs' => [ 'DescribeMetricAttributionResponse$metricAttribution' => 'The details of the metric attribution.
', ], ], 'MetricAttributionOutput' => [ 'base' => 'The output configuration details for a metric attribution.
', 'refs' => [ 'CreateMetricAttributionRequest$metricsOutputConfig' => 'The output configuration details for the metric attribution.
', 'MetricAttribution$metricsOutputConfig' => 'The metric attribution\'s output configuration.
', 'UpdateMetricAttributionRequest$metricsOutputConfig' => 'An output config for the metric attribution.
', ], ], 'MetricAttributionSummary' => [ 'base' => 'Provides a summary of the properties of a metric attribution. For a complete listing, call the DescribeMetricAttribution API.
', 'refs' => [ 'MetricAttributions$member' => NULL, ], ], 'MetricAttributions' => [ 'base' => NULL, 'refs' => [ 'ListMetricAttributionsResponse$metricAttributions' => 'The list of metric attributions.
', ], ], 'MetricExpression' => [ 'base' => NULL, 'refs' => [ 'MetricAttribute$expression' => 'The attribute\'s expression. Available functions are SUM()
or SAMPLECOUNT()
. For SUM() functions, provide the dataset type (either Interactions or Items) and column to sum as a parameter. For example, SUM(Items.PRICE).
The metric to optimize.
', 'HPOObjective$metricName' => 'The name of the metric.
', 'MetricAttribute$metricName' => 'The metric\'s name. The name helps you identify the metric in Amazon CloudWatch or Amazon S3.
', 'MetricAttributesNamesList$member' => NULL, 'Metrics$key' => NULL, ], ], 'MetricRegex' => [ 'base' => NULL, 'refs' => [ 'HPOObjective$metricRegex' => 'A regular expression for finding the metric in the training job logs.
', ], ], 'MetricValue' => [ 'base' => NULL, 'refs' => [ 'Metrics$value' => NULL, ], ], 'Metrics' => [ 'base' => NULL, 'refs' => [ 'GetSolutionMetricsResponse$metrics' => 'The metrics for the solution version. For more information, see Evaluating a solution version with metrics.
', 'Recommender$modelMetrics' => 'Provides evaluation metrics that help you determine the performance of a recommender. For more information, see Evaluating a recommender.
', ], ], 'Name' => [ 'base' => NULL, 'refs' => [ 'Algorithm$name' => 'The name of the algorithm.
', 'AlgorithmImage$name' => 'The name of the algorithm image.
', 'BatchInferenceJob$jobName' => 'The name of the batch inference job.
', 'BatchInferenceJobSummary$jobName' => 'The name of the batch inference job.
', 'BatchSegmentJob$jobName' => 'The name of the batch segment job.
', 'BatchSegmentJobSummary$jobName' => 'The name of the batch segment job.
', 'Campaign$name' => 'The name of the campaign.
', 'CampaignSummary$name' => 'The name of the campaign.
', 'CreateBatchInferenceJobRequest$jobName' => 'The name of the batch inference job to create.
', 'CreateBatchSegmentJobRequest$jobName' => 'The name of the batch segment job to create.
', 'CreateCampaignRequest$name' => 'A name for the new campaign. The campaign name must be unique within your account.
', 'CreateDatasetExportJobRequest$jobName' => 'The name for the dataset export job.
', 'CreateDatasetGroupRequest$name' => 'The name for the new dataset group.
', 'CreateDatasetImportJobRequest$jobName' => 'The name for the dataset import job.
', 'CreateDatasetRequest$name' => 'The name for the dataset.
', 'CreateEventTrackerRequest$name' => 'The name for the event tracker.
', 'CreateFilterRequest$name' => 'The name of the filter to create.
', 'CreateMetricAttributionRequest$name' => 'A name for the metric attribution.
', 'CreateRecommenderRequest$name' => 'The name of the recommender.
', 'CreateSchemaRequest$name' => 'The name for the schema.
', 'CreateSolutionRequest$name' => 'The name for the solution.
', 'CreateSolutionVersionRequest$name' => 'The name of the solution version.
', 'Dataset$name' => 'The name of the dataset.
', 'DatasetExportJob$jobName' => 'The name of the export job.
', 'DatasetExportJobSummary$jobName' => 'The name of the dataset export job.
', 'DatasetGroup$name' => 'The name of the dataset group.
', 'DatasetGroupSummary$name' => 'The name of the dataset group.
', 'DatasetImportJob$jobName' => 'The name of the import job.
', 'DatasetImportJobSummary$jobName' => 'The name of the dataset import job.
', 'DatasetSchema$name' => 'The name of the schema.
', 'DatasetSchemaSummary$name' => 'The name of the schema.
', 'DatasetSummary$name' => 'The name of the dataset.
', 'EventTracker$name' => 'The name of the event tracker.
', 'EventTrackerSummary$name' => 'The name of the event tracker.
', 'FeatureTransformation$name' => 'The name of the feature transformation.
', 'Filter$name' => 'The name of the filter.
', 'FilterSummary$name' => 'The name of the filter.
', 'MetricAttribution$name' => 'The metric attribution\'s name.
', 'MetricAttributionSummary$name' => 'The name of the metric attribution.
', 'Recipe$name' => 'The name of the recipe.
', 'RecipeSummary$name' => 'The name of the recipe.
', 'Recommender$name' => 'The name of the recommender.
', 'RecommenderSummary$name' => 'The name of the recommender.
', 'Solution$name' => 'The name of the solution.
', 'SolutionSummary$name' => 'The name of the solution.
', 'SolutionVersion$name' => 'The name of the solution version.
', ], ], 'NextToken' => [ 'base' => NULL, 'refs' => [ 'ListBatchInferenceJobsRequest$nextToken' => 'The token to request the next page of results.
', 'ListBatchInferenceJobsResponse$nextToken' => 'The token to use to retrieve the next page of results. The value is null
when there are no more results to return.
The token to request the next page of results.
', 'ListBatchSegmentJobsResponse$nextToken' => 'The token to use to retrieve the next page of results. The value is null
when there are no more results to return.
A token returned from the previous call to ListCampaigns for getting the next set of campaigns (if they exist).
', 'ListCampaignsResponse$nextToken' => 'A token for getting the next set of campaigns (if they exist).
', 'ListDatasetExportJobsRequest$nextToken' => 'A token returned from the previous call to ListDatasetExportJobs
for getting the next set of dataset export jobs (if they exist).
A token for getting the next set of dataset export jobs (if they exist).
', 'ListDatasetGroupsRequest$nextToken' => 'A token returned from the previous call to ListDatasetGroups
for getting the next set of dataset groups (if they exist).
A token for getting the next set of dataset groups (if they exist).
', 'ListDatasetImportJobsRequest$nextToken' => 'A token returned from the previous call to ListDatasetImportJobs
for getting the next set of dataset import jobs (if they exist).
A token for getting the next set of dataset import jobs (if they exist).
', 'ListDatasetsRequest$nextToken' => 'A token returned from the previous call to ListDatasets
for getting the next set of datasets (if they exist).
A token for getting the next set of datasets (if they exist).
', 'ListEventTrackersRequest$nextToken' => 'A token returned from the previous call to ListEventTrackers
for getting the next set of event trackers (if they exist).
A token for getting the next set of event trackers (if they exist).
', 'ListFiltersRequest$nextToken' => 'A token returned from the previous call to ListFilters
for getting the next set of filters (if they exist).
A token for getting the next set of filters (if they exist).
', 'ListMetricAttributionMetricsRequest$nextToken' => 'Specify the pagination token from a previous request to retrieve the next page of results.
', 'ListMetricAttributionMetricsResponse$nextToken' => 'Specify the pagination token from a previous ListMetricAttributionMetrics
request to retrieve the next page of results.
Specify the pagination token from a previous request to retrieve the next page of results.
', 'ListMetricAttributionsResponse$nextToken' => 'Specify the pagination token from a previous request to retrieve the next page of results.
', 'ListRecipesRequest$nextToken' => 'A token returned from the previous call to ListRecipes
for getting the next set of recipes (if they exist).
A token for getting the next set of recipes.
', 'ListRecommendersRequest$nextToken' => 'A token returned from the previous call to ListRecommenders
for getting the next set of recommenders (if they exist).
A token for getting the next set of recommenders (if they exist).
', 'ListSchemasRequest$nextToken' => 'A token returned from the previous call to ListSchemas
for getting the next set of schemas (if they exist).
A token used to get the next set of schemas (if they exist).
', 'ListSolutionVersionsRequest$nextToken' => 'A token returned from the previous call to ListSolutionVersions
for getting the next set of solution versions (if they exist).
A token for getting the next set of solution versions (if they exist).
', 'ListSolutionsRequest$nextToken' => 'A token returned from the previous call to ListSolutions
for getting the next set of solutions (if they exist).
', 'ListSolutionsResponse$nextToken' => 'A token for getting the next set of solutions (if they exist).
', ], ], 'NumBatchResults' => [ 'base' => NULL, 'refs' => [ 'BatchInferenceJob$numResults' => 'The number of recommendations generated by the batch inference job. This number includes the error messages generated for failed input records.
', 'BatchSegmentJob$numResults' => 'The number of predicted users generated by the batch segment job for each line of input data. The maximum number of users per segment is 5 million.
', 'CreateBatchInferenceJobRequest$numResults' => 'The number of recommendations to retrieve.
', 'CreateBatchSegmentJobRequest$numResults' => 'The number of predicted users generated by the batch segment job for each line of input data. The maximum number of users per segment is 5 million.
', ], ], 'ObjectiveSensitivity' => [ 'base' => NULL, 'refs' => [ 'OptimizationObjective$objectiveSensitivity' => 'Specifies how Amazon Personalize balances the importance of your optimization objective versus relevance.
', ], ], 'OptimizationObjective' => [ 'base' => 'Describes the additional objective for the solution, such as maximizing streaming minutes or increasing revenue. For more information see Optimizing a solution.
', 'refs' => [ 'SolutionConfig$optimizationObjective' => 'Describes the additional objective for the solution, such as maximizing streaming minutes or increasing revenue. For more information see Optimizing a solution.
', ], ], 'ParameterName' => [ 'base' => NULL, 'refs' => [ 'CategoricalHyperParameterRange$name' => 'The name of the hyperparameter.
', 'ContinuousHyperParameterRange$name' => 'The name of the hyperparameter.
', 'DefaultCategoricalHyperParameterRange$name' => 'The name of the hyperparameter.
', 'DefaultContinuousHyperParameterRange$name' => 'The name of the hyperparameter.
', 'DefaultIntegerHyperParameterRange$name' => 'The name of the hyperparameter.
', 'FeatureTransformationParameters$key' => NULL, 'FeaturizationParameters$key' => NULL, 'HyperParameters$key' => NULL, 'IntegerHyperParameterRange$name' => 'The name of the hyperparameter.
', 'ResourceConfig$key' => NULL, ], ], 'ParameterValue' => [ 'base' => NULL, 'refs' => [ 'FeatureTransformationParameters$value' => NULL, 'FeaturizationParameters$value' => NULL, 'HyperParameters$value' => NULL, 'ResourceConfig$value' => NULL, ], ], 'PerformAutoML' => [ 'base' => NULL, 'refs' => [ 'CreateSolutionRequest$performAutoML' => 'We don\'t recommend enabling automated machine learning. Instead, match your use case to the available Amazon Personalize recipes. For more information, see Determining your use case.
Whether to perform automated machine learning (AutoML). The default is false
. For this case, you must specify recipeArn
.
When set to true
, Amazon Personalize analyzes your training data and selects the optimal USER_PERSONALIZATION recipe and hyperparameters. In this case, you must omit recipeArn
. Amazon Personalize determines the optimal recipe by running tests with different values for the hyperparameters. AutoML lengthens the training process as compared to selecting a specific recipe.
', 'Solution$performAutoML' => 'We don\'t recommend enabling automated machine learning. Instead, match your use case to the available Amazon Personalize recipes. For more information, see Determining your use case.
When true, Amazon Personalize performs a search for the best USER_PERSONALIZATION recipe from the list specified in the solution configuration (recipeArn
must not be specified). When false (the default), Amazon Personalize uses recipeArn
for training.
', 'SolutionVersion$performAutoML' => 'When true, Amazon Personalize searches for the most optimal recipe according to the solution configuration. When false (the default), Amazon Personalize uses recipeArn
.
', ], ], 'PerformHPO' => [ 'base' => NULL, 'refs' => [ 'Solution$performHPO' => 'Whether to perform hyperparameter optimization (HPO) on the chosen recipe. The default is false
.
', 'SolutionVersion$performHPO' => 'Whether to perform hyperparameter optimization (HPO) on the chosen recipe. The default is false
.
', ], ], 'Recipe' => [ 'base' => 'Provides information about a recipe. Each recipe provides an algorithm that Amazon Personalize uses in model training when you use the CreateSolution operation.
', 'refs' => [ 'DescribeRecipeResponse$recipe' => 'An object that describes the recipe.
', ], ], 'RecipeProvider' => [ 'base' => NULL, 'refs' => [ 'ListRecipesRequest$recipeProvider' => 'The default is SERVICE
.
', ], ], 'RecipeSummary' => [ 'base' => 'Provides a summary of the properties of a recipe. For a complete listing, call the DescribeRecipe API.
', 'refs' => [ 'Recipes$member' => NULL, ], ], 'RecipeType' => [ 'base' => NULL, 'refs' => [ 'Recipe$recipeType' => 'One of the following values:
PERSONALIZED_RANKING
RELATED_ITEMS
USER_PERSONALIZATION
', ], ], 'Recipes' => [ 'base' => NULL, 'refs' => [ 'ListRecipesResponse$recipes' => 'The list of available recipes.
', ], ], 'Recommender' => [ 'base' => 'Describes a recommendation generator for a Domain dataset group. You create a recommender in a Domain dataset group for a specific domain use case (domain recipe), and specify the recommender in a GetRecommendations request.
', 'refs' => [ 'DescribeRecommenderResponse$recommender' => 'The properties of the recommender.
', ], ], 'RecommenderConfig' => [ 'base' => 'The configuration details of the recommender.
', 'refs' => [ 'CreateRecommenderRequest$recommenderConfig' => 'The configuration details of the recommender.
', 'Recommender$recommenderConfig' => 'The configuration details of the recommender.
', 'RecommenderSummary$recommenderConfig' => 'The configuration details of the recommender.
', 'RecommenderUpdateSummary$recommenderConfig' => 'The configuration details of the recommender update.
', 'UpdateRecommenderRequest$recommenderConfig' => 'The configuration details of the recommender.
', ], ], 'RecommenderSummary' => [ 'base' => 'Provides a summary of the properties of the recommender.
', 'refs' => [ 'Recommenders$member' => NULL, ], ], 'RecommenderUpdateSummary' => [ 'base' => 'Provides a summary of the properties of a recommender update. For a complete listing, call the DescribeRecommender API.
', 'refs' => [ 'Recommender$latestRecommenderUpdate' => 'Provides a summary of the latest updates to the recommender.
', ], ], 'Recommenders' => [ 'base' => NULL, 'refs' => [ 'ListRecommendersResponse$recommenders' => 'A list of the recommenders.
', ], ], 'ResourceAlreadyExistsException' => [ 'base' => 'The specified resource already exists.
', 'refs' => [], ], 'ResourceConfig' => [ 'base' => NULL, 'refs' => [ 'Algorithm$defaultResourceConfig' => 'Specifies the default maximum number of training jobs and parallel training jobs.
', ], ], 'ResourceInUseException' => [ 'base' => 'The specified resource is in use.
', 'refs' => [], ], 'ResourceNotFoundException' => [ 'base' => 'Could not find the specified resource.
', 'refs' => [], ], 'RoleArn' => [ 'base' => NULL, 'refs' => [ 'BatchInferenceJob$roleArn' => 'The ARN of the Amazon Identity and Access Management (IAM) role that requested the batch inference job.
', 'BatchSegmentJob$roleArn' => 'The ARN of the Amazon Identity and Access Management (IAM) role that requested the batch segment job.
', 'CreateBatchInferenceJobRequest$roleArn' => 'The ARN of the Amazon Identity and Access Management role that has permissions to read and write to your input and output Amazon S3 buckets respectively.
', 'CreateBatchSegmentJobRequest$roleArn' => 'The ARN of the Amazon Identity and Access Management role that has permissions to read and write to your input and output Amazon S3 buckets respectively.
', 'CreateDatasetExportJobRequest$roleArn' => 'The Amazon Resource Name (ARN) of the IAM service role that has permissions to add data to your output Amazon S3 bucket.
', 'CreateDatasetGroupRequest$roleArn' => 'The ARN of the Identity and Access Management (IAM) role that has permissions to access the Key Management Service (KMS) key. Supplying an IAM role is only valid when also specifying a KMS key.
', 'CreateDatasetImportJobRequest$roleArn' => 'The ARN of the IAM role that has permissions to read from the Amazon S3 data source.
', 'DatasetGroup$roleArn' => 'The ARN of the IAM role that has permissions to create the dataset group.
', 'MetricAttributionOutput$roleArn' => 'The Amazon Resource Name (ARN) of the IAM service role that has permissions to add data to your output Amazon S3 bucket and add metrics to Amazon CloudWatch. For more information, see Measuring impact of recommendations.
', ], ], 'S3DataConfig' => [ 'base' => 'The configuration details of an Amazon S3 input or output bucket.
', 'refs' => [ 'BatchInferenceJobInput$s3DataSource' => 'The URI of the Amazon S3 location that contains your input data. The Amazon S3 bucket must be in the same region as the API endpoint you are calling.
', 'BatchInferenceJobOutput$s3DataDestination' => 'Information on the Amazon S3 bucket in which the batch inference job\'s output is stored.
', 'BatchSegmentJobInput$s3DataSource' => NULL, 'BatchSegmentJobOutput$s3DataDestination' => NULL, 'DatasetExportJobOutput$s3DataDestination' => NULL, 'MetricAttributionOutput$s3DataDestination' => NULL, ], ], 'S3Location' => [ 'base' => NULL, 'refs' => [ 'DataSource$dataLocation' => 'The path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. For example:
s3://bucket-name/folder-name/
', 'S3DataConfig$path' => 'The file path of the Amazon S3 bucket.
', ], ], 'Schemas' => [ 'base' => NULL, 'refs' => [ 'ListSchemasResponse$schemas' => 'A list of schemas.
', ], ], 'Solution' => [ 'base' => 'An object that provides information about a solution. A solution is a trained model that can be deployed as a campaign.
', 'refs' => [ 'DescribeSolutionResponse$solution' => 'An object that describes the solution.
', ], ], 'SolutionConfig' => [ 'base' => 'Describes the configuration properties for the solution.
', 'refs' => [ 'CreateSolutionRequest$solutionConfig' => 'The configuration to use with the solution. When performAutoML
is set to true, Amazon Personalize only evaluates the autoMLConfig
section of the solution configuration.
Amazon Personalize doesn\'t support configuring the hpoObjective
at this time.
', 'Solution$solutionConfig' => 'Describes the configuration properties for the solution.
', 'SolutionVersion$solutionConfig' => 'Describes the configuration properties for the solution.
', ], ], 'SolutionSummary' => [ 'base' => 'Provides a summary of the properties of a solution. For a complete listing, call the DescribeSolution API.
', 'refs' => [ 'Solutions$member' => NULL, ], ], 'SolutionVersion' => [ 'base' => 'An object that provides information about a specific version of a Solution in a Custom dataset group.
', 'refs' => [ 'DescribeSolutionVersionResponse$solutionVersion' => 'The solution version.
', ], ], 'SolutionVersionSummary' => [ 'base' => 'Provides a summary of the properties of a solution version. For a complete listing, call the DescribeSolutionVersion API.
', 'refs' => [ 'Solution$latestSolutionVersion' => 'Describes the latest version of the solution, including the status and the ARN.
', 'SolutionVersions$member' => NULL, ], ], 'SolutionVersions' => [ 'base' => NULL, 'refs' => [ 'ListSolutionVersionsResponse$solutionVersions' => 'A list of solution versions describing the version properties.
', ], ], 'Solutions' => [ 'base' => NULL, 'refs' => [ 'ListSolutionsResponse$solutions' => 'A list of the current solutions.
', ], ], 'StartRecommenderRequest' => [ 'base' => NULL, 'refs' => [], ], 'StartRecommenderResponse' => [ 'base' => NULL, 'refs' => [], ], 'Status' => [ 'base' => NULL, 'refs' => [ 'BatchInferenceJob$status' => 'The status of the batch inference job. The status is one of the following values:
PENDING
IN PROGRESS
ACTIVE
CREATE FAILED
', 'BatchInferenceJobSummary$status' => 'The status of the batch inference job. The status is one of the following values:
PENDING
IN PROGRESS
ACTIVE
CREATE FAILED
', 'BatchSegmentJob$status' => 'The status of the batch segment job. The status is one of the following values:
PENDING
IN PROGRESS
ACTIVE
CREATE FAILED
', 'BatchSegmentJobSummary$status' => 'The status of the batch segment job. The status is one of the following values:
PENDING
IN PROGRESS
ACTIVE
CREATE FAILED
', 'Campaign$status' => 'The status of the campaign.
A campaign can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
', 'CampaignSummary$status' => 'The status of the campaign.
A campaign can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
', 'CampaignUpdateSummary$status' => 'The status of the campaign update.
A campaign update can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
', 'Dataset$status' => 'The status of the dataset.
A dataset can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
', 'DatasetExportJob$status' => 'The status of the dataset export job.
A dataset export job can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
', 'DatasetExportJobSummary$status' => 'The status of the dataset export job.
A dataset export job can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
', 'DatasetGroup$status' => 'The current status of the dataset group.
A dataset group can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING
', 'DatasetGroupSummary$status' => 'The status of the dataset group.
A dataset group can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING
', 'DatasetImportJob$status' => 'The status of the dataset import job.
A dataset import job can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
', 'DatasetImportJobSummary$status' => 'The status of the dataset import job.
A dataset import job can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
', 'DatasetSummary$status' => 'The status of the dataset.
A dataset can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
', 'DatasetUpdateSummary$status' => 'The status of the dataset update.
', 'EventTracker$status' => 'The status of the event tracker.
An event tracker can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
', 'EventTrackerSummary$status' => 'The status of the event tracker.
An event tracker can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
', 'FeatureTransformation$status' => 'The status of the feature transformation.
A feature transformation can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
', 'Filter$status' => 'The status of the filter.
', 'FilterSummary$status' => 'The status of the filter.
', 'MetricAttribution$status' => 'The metric attribution\'s status.
', 'MetricAttributionSummary$status' => 'The metric attribution\'s status.
', 'Recipe$status' => 'The status of the recipe.
', 'RecipeSummary$status' => 'The status of the recipe.
', 'Recommender$status' => 'The status of the recommender.
A recommender can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
STOP PENDING > STOP IN_PROGRESS > INACTIVE > START PENDING > START IN_PROGRESS > ACTIVE
DELETE PENDING > DELETE IN_PROGRESS
', 'RecommenderSummary$status' => 'The status of the recommender. A recommender can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
STOP PENDING > STOP IN_PROGRESS > INACTIVE > START PENDING > START IN_PROGRESS > ACTIVE
DELETE PENDING > DELETE IN_PROGRESS
', 'RecommenderUpdateSummary$status' => 'The status of the recommender update.
A recommender can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
STOP PENDING > STOP IN_PROGRESS > INACTIVE > START PENDING > START IN_PROGRESS > ACTIVE
DELETE PENDING > DELETE IN_PROGRESS
', 'Solution$status' => 'The status of the solution.
A solution can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
', 'SolutionSummary$status' => 'The status of the solution.
A solution can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
', 'SolutionVersion$status' => 'The status of the solution version.
A solution version can be in one of the following states:
CREATE PENDING
CREATE IN_PROGRESS
ACTIVE
CREATE FAILED
CREATE STOPPING
CREATE STOPPED
', 'SolutionVersionSummary$status' => 'The status of the solution version.
A solution version can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
', ], ], 'Tag' => [ 'base' => 'The optional metadata that you apply to resources to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. For more information, see Tagging Amazon Personalize resources.
', 'refs' => [ 'Tags$member' => NULL, ], ], 'TagKey' => [ 'base' => NULL, 'refs' => [ 'Tag$tagKey' => 'One part of a key-value pair that makes up a tag. A key is a general label that acts like a category for more specific tag values.
', 'TagKeys$member' => NULL, ], ], 'TagKeys' => [ 'base' => NULL, 'refs' => [ 'UntagResourceRequest$tagKeys' => 'Keys to remove from the resource\'s tags.
', ], ], 'TagResourceRequest' => [ 'base' => NULL, 'refs' => [], ], 'TagResourceResponse' => [ 'base' => NULL, 'refs' => [], ], 'TagValue' => [ 'base' => NULL, 'refs' => [ 'Tag$tagValue' => 'The optional part of a key-value pair that makes up a tag. A value acts as a descriptor within a tag category (key).
', ], ], 'Tags' => [ 'base' => NULL, 'refs' => [ 'CreateBatchInferenceJobRequest$tags' => 'A list of tags to apply to the batch inference job.
', 'CreateBatchSegmentJobRequest$tags' => 'A list of tags to apply to the batch segment job.
', 'CreateCampaignRequest$tags' => 'A list of tags to apply to the campaign.
', 'CreateDatasetExportJobRequest$tags' => 'A list of tags to apply to the dataset export job.
', 'CreateDatasetGroupRequest$tags' => 'A list of tags to apply to the dataset group.
', 'CreateDatasetImportJobRequest$tags' => 'A list of tags to apply to the dataset import job.
', 'CreateDatasetRequest$tags' => 'A list of tags to apply to the dataset.
', 'CreateEventTrackerRequest$tags' => 'A list of tags to apply to the event tracker.
', 'CreateFilterRequest$tags' => 'A list of tags to apply to the filter.
', 'CreateRecommenderRequest$tags' => 'A list of tags to apply to the recommender.
', 'CreateSolutionRequest$tags' => 'A list of tags to apply to the solution.
', 'CreateSolutionVersionRequest$tags' => 'A list of tags to apply to the solution version.
', 'ListTagsForResourceResponse$tags' => 'The resource\'s tags.
', 'TagResourceRequest$tags' => 'Tags to apply to the resource. For more information, see Tagging Amazon Personalize resources.
', ], ], 'TooManyTagKeysException' => [ 'base' => 'The request contains more tag keys than can be associated with a resource (50 tag keys per resource).
', 'refs' => [], ], 'TooManyTagsException' => [ 'base' => 'You have exceeded the maximum number of tags you can apply to this resource.
', 'refs' => [], ], 'TrackingId' => [ 'base' => NULL, 'refs' => [ 'CreateEventTrackerResponse$trackingId' => 'The ID of the event tracker. Include this ID in requests to the PutEvents API.
', 'EventTracker$trackingId' => 'The ID of the event tracker. Include this ID in requests to the PutEvents API.
', ], ], 'TrainingDataConfig' => [ 'base' => 'The training data configuration to use when creating a domain recommender or custom solution version (trained model).
', 'refs' => [ 'RecommenderConfig$trainingDataConfig' => 'Specifies the training data configuration to use when creating a domain recommender.
', 'SolutionConfig$trainingDataConfig' => 'Specifies the training data configuration to use when creating a custom solution version (trained model).
', ], ], 'TrainingHours' => [ 'base' => NULL, 'refs' => [ 'SolutionVersion$trainingHours' => 'The time used to train the model. You are billed for the time it takes to train a model. This field is visible only after Amazon Personalize successfully trains a model.
', ], ], 'TrainingInputMode' => [ 'base' => NULL, 'refs' => [ 'Algorithm$trainingInputMode' => 'The training input mode.
', ], ], 'TrainingMode' => [ 'base' => NULL, 'refs' => [ 'CreateSolutionVersionRequest$trainingMode' => 'The scope of training to be performed when creating the solution version. The FULL
option trains the solution version based on the entirety of the input solution\'s training data, while the UPDATE
option processes only the data that has changed in comparison to the input solution. Choose UPDATE
when you want to incrementally update your solution version instead of creating an entirely new one.
The UPDATE
option can only be used when you already have an active solution version created from the input solution using the FULL
option and the input solution was trained with the User-Personalization recipe or the HRNN-Coldstart recipe.
', 'SolutionVersion$trainingMode' => 'The scope of training to be performed when creating the solution version. The FULL
option trains the solution version based on the entirety of the input solution\'s training data, while the UPDATE
option processes only the data that has changed in comparison to the input solution. Choose UPDATE
when you want to incrementally update your solution version instead of creating an entirely new one.
The UPDATE
option can only be used when you already have an active solution version created from the input solution using the FULL
option and the input solution was trained with the User-Personalization recipe or the HRNN-Coldstart recipe.
', ], ], 'TransactionsPerSecond' => [ 'base' => NULL, 'refs' => [ 'Campaign$minProvisionedTPS' => 'Specifies the requested minimum provisioned transactions (recommendations) per second. A high minProvisionedTPS
will increase your bill. We recommend starting with 1 for minProvisionedTPS
(the default). Track your usage using Amazon CloudWatch metrics, and increase the minProvisionedTPS
as necessary.
', 'CampaignUpdateSummary$minProvisionedTPS' => 'Specifies the requested minimum provisioned transactions (recommendations) per second that Amazon Personalize will support.
', 'CreateCampaignRequest$minProvisionedTPS' => 'Specifies the requested minimum provisioned transactions (recommendations) per second that Amazon Personalize will support. A high minProvisionedTPS
will increase your bill. We recommend starting with 1 for minProvisionedTPS
(the default). Track your usage using Amazon CloudWatch metrics, and increase the minProvisionedTPS
as necessary.
', 'RecommenderConfig$minRecommendationRequestsPerSecond' => 'Specifies the requested minimum provisioned recommendation requests per second that Amazon Personalize will support. A high minRecommendationRequestsPerSecond
will increase your bill. We recommend starting with 1 for minRecommendationRequestsPerSecond
(the default). Track your usage using Amazon CloudWatch metrics, and increase the minRecommendationRequestsPerSecond
as necessary.
', 'UpdateCampaignRequest$minProvisionedTPS' => 'Specifies the requested minimum provisioned transactions (recommendations) per second that Amazon Personalize will support. A high minProvisionedTPS
will increase your bill. We recommend starting with 1 for minProvisionedTPS
(the default). Track your usage using Amazon CloudWatch metrics, and increase the minProvisionedTPS
as necessary.
', ], ], 'Tunable' => [ 'base' => NULL, 'refs' => [ 'DefaultCategoricalHyperParameterRange$isTunable' => 'Whether the hyperparameter is tunable.
', 'DefaultContinuousHyperParameterRange$isTunable' => 'Whether the hyperparameter is tunable.
', 'DefaultIntegerHyperParameterRange$isTunable' => 'Indicates whether the hyperparameter is tunable.
', ], ], 'TunedHPOParams' => [ 'base' => 'If hyperparameter optimization (HPO) was performed, contains the hyperparameter values of the best performing model.
', 'refs' => [ 'SolutionVersion$tunedHPOParams' => 'If hyperparameter optimization was performed, contains the hyperparameter values of the best performing model.
', ], ], 'UntagResourceRequest' => [ 'base' => NULL, 'refs' => [], ], 'UntagResourceResponse' => [ 'base' => NULL, 'refs' => [], ], 'UpdateCampaignRequest' => [ 'base' => NULL, 'refs' => [], ], 'UpdateCampaignResponse' => [ 'base' => NULL, 'refs' => [], ], 'UpdateDatasetRequest' => [ 'base' => NULL, 'refs' => [], ], 'UpdateDatasetResponse' => [ 'base' => NULL, 'refs' => [], ], 'UpdateMetricAttributionRequest' => [ 'base' => NULL, 'refs' => [], ], 'UpdateMetricAttributionResponse' => [ 'base' => NULL, 'refs' => [], ], 'UpdateRecommenderRequest' => [ 'base' => NULL, 'refs' => [], ], 'UpdateRecommenderResponse' => [ 'base' => NULL, 'refs' => [], ], ],];
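The List* entries above all share the same nextToken pagination contract: pass the token from one response back into the next request until a response arrives with no token. A minimal sketch of that loop (the helper, callable, and fake pages below are hypothetical stand-ins; real calls would go through the SDK's PersonalizeClient, e.g. `$client->listSolutions(['nextToken' => $token])`):

```php
// Hypothetical pagination helper: drains a paginated List* operation by
// feeding each response's nextToken back into the next page request.
function listAll(callable $listPage): array
{
    $items = [];
    $token = null;
    do {
        $page  = $listPage($token);                 // one List* call per iteration
        $items = array_merge($items, $page['items']);
        $token = $page['nextToken'] ?? null;        // absent token => last page
    } while ($token !== null);
    return $items;
}

// Two fake pages standing in for a paginated ListSolutions response.
$fakePages = [
    ''      => ['items' => ['solution-a', 'solution-b'], 'nextToken' => 'page2'],
    'page2' => ['items' => ['solution-c']],
];
$all = listAll(fn ($token) => $fakePages[$token ?? '']);
```

The same loop works for ListEventTrackers, ListFilters, ListRecipes, and the other paginated operations documented above, since they all return the token under the `nextToken` key.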