/*
 * Copyright 2018-2023 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with
 * the License. A copy of the License is located at
 *
 * http://aws.amazon.com/apache2.0
 *
 * or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
 * CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions
 * and limitations under the License.
 */
package com.amazonaws.services.sagemaker.model;

import java.io.Serializable;
import javax.annotation.Generated;
import com.amazonaws.protocol.StructuredPojo;
import com.amazonaws.protocol.ProtocolMarshaller;

/**
 * <p>
 * Defines the training jobs launched by a hyperparameter tuning job.
 * </p>
 * 
 * @see AWS API Documentation
 */
@Generated("com.amazonaws:aws-java-sdk-code-generator")
public class HyperParameterTrainingJobDefinition implements Serializable, Cloneable, StructuredPojo {

    /**
     * <p>
     * The job definition name.
     * </p>
     */
    private String definitionName;

    private HyperParameterTuningJobObjective tuningObjective;

    private ParameterRanges hyperParameterRanges;

    /**
     * <p>
     * Specifies the values of hyperparameters that do not change for the tuning job.
     * </p>
     */
    private java.util.Map<String, String> staticHyperParameters;

    /**
     * <p>
     * The HyperParameterAlgorithmSpecification object that specifies the resource algorithm to use for the training
     * jobs that the tuning job launches.
     * </p>
     */
    private HyperParameterAlgorithmSpecification algorithmSpecification;

    /**
     * <p>
     * The Amazon Resource Name (ARN) of the IAM role associated with the training jobs that the tuning job launches.
     * </p>
     */
    private String roleArn;

    /**
     * <p>
     * An array of Channel objects that specify the input for the training jobs that the tuning job launches.
     * </p>
     */
    private java.util.List<Channel> inputDataConfig;

    /**
     * <p>
     * The VpcConfig object that specifies the VPC that you want the training jobs that this hyperparameter tuning job
     * launches to connect to. Control access to and from your training container by configuring the VPC. For more
     * information, see Protect Training Jobs by Using an Amazon Virtual Private Cloud.
     * </p>
     */
    private VpcConfig vpcConfig;

    /**
     * <p>
     * Specifies the path to the Amazon S3 bucket where you store model artifacts from the training jobs that the
     * tuning job launches.
     * </p>
     */
    private OutputDataConfig outputDataConfig;

    /**
     * <p>
     * The resources, including the compute instances and storage volumes, to use for the training jobs that the tuning
     * job launches.
     * </p>
     * <p>
     * Storage volumes store model artifacts and incremental states. Training algorithms might also use storage volumes
     * for scratch space. If you want SageMaker to use the storage volume to store the training data, choose
     * <code>File</code> as the <code>TrainingInputMode</code> in the algorithm specification. For distributed training
     * algorithms, specify an instance count greater than 1.
     * </p>
     * <note>
     * <p>
     * If you want to use hyperparameter optimization with instance type flexibility, use
     * <code>HyperParameterTuningResourceConfig</code> instead.
     * </p>
     * </note>
     */
    private ResourceConfig resourceConfig;

    /**
     * <p>
     * Specifies a limit to how long a model hyperparameter training job can run. It also specifies how long a managed
     * spot training job has to complete. When the job reaches the time limit, SageMaker ends the training job. Use
     * this API to cap model training costs.
     * </p>
     */
    private StoppingCondition stoppingCondition;

    /**
     * <p>
     * Isolates the training container. No inbound or outbound network calls can be made, except for calls between
     * peers within a training cluster for distributed training. If network isolation is used for training jobs that
     * are configured to use a VPC, SageMaker downloads and uploads customer data and model artifacts through the
     * specified VPC, but the training container does not have network access.
     * </p>
     */
    private Boolean enableNetworkIsolation;

    /**
     * <p>
     * To encrypt all communications between ML compute instances in distributed training, choose <code>True</code>.
     * Encryption provides greater security for distributed training, but training might take longer. How long it takes
     * depends on the amount of communication between compute instances, especially if you use a deep learning
     * algorithm in distributed training.
     * </p>
     */
    private Boolean enableInterContainerTrafficEncryption;

    /**
     * <p>
     * A Boolean indicating whether managed spot training is enabled (<code>True</code>) or not (<code>False</code>).
     * </p>
     */
    private Boolean enableManagedSpotTraining;

    private CheckpointConfig checkpointConfig;

    /**
     * <p>
     * The number of times to retry the job when the job fails due to an <code>InternalServerError</code>.
     * </p>
     */
    private RetryStrategy retryStrategy;

    /**
     * <p>
     * The configuration for the hyperparameter tuning resources, including the compute instances and storage volumes,
     * used for training jobs launched by the tuning job. By default, storage volumes hold model artifacts and
     * incremental states. Choose <code>File</code> for <code>TrainingInputMode</code> in the
     * <code>AlgorithmSpecification</code> parameter to additionally store training data in the storage volume
     * (optional).
     * </p>
     */
    private HyperParameterTuningResourceConfig hyperParameterTuningResourceConfig;

    /**
     * <p>
     * An environment variable that you can pass into the SageMaker CreateTrainingJob API. You can use an existing
     * environment variable from the training container or use your own. See Define metrics and variables for more
     * information.
     * </p>
     * <note>
     * <p>
     * The maximum number of items specified for <code>Map Entries</code> refers to the maximum number of environment
     * variables for each <code>TrainingJobDefinition</code> and also the maximum for the hyperparameter tuning job
     * itself. That is, the sum of the number of environment variables for all the training job definitions can't
     * exceed the maximum number specified.
     * </p>
     * </note>
     */
    private java.util.Map<String, String> environment;

    /**
     * <p>
* The job definition name. *
* * @param definitionName * The job definition name. */ public void setDefinitionName(String definitionName) { this.definitionName = definitionName; } /** ** The job definition name. *
* * @return The job definition name. */ public String getDefinitionName() { return this.definitionName; } /** ** The job definition name. *
* * @param definitionName * The job definition name. * @return Returns a reference to this object so that method calls can be chained together. */ public HyperParameterTrainingJobDefinition withDefinitionName(String definitionName) { setDefinitionName(definitionName); return this; } /** * @param tuningObjective */ public void setTuningObjective(HyperParameterTuningJobObjective tuningObjective) { this.tuningObjective = tuningObjective; } /** * @return */ public HyperParameterTuningJobObjective getTuningObjective() { return this.tuningObjective; } /** * @param tuningObjective * @return Returns a reference to this object so that method calls can be chained together. */ public HyperParameterTrainingJobDefinition withTuningObjective(HyperParameterTuningJobObjective tuningObjective) { setTuningObjective(tuningObjective); return this; } /** * @param hyperParameterRanges */ public void setHyperParameterRanges(ParameterRanges hyperParameterRanges) { this.hyperParameterRanges = hyperParameterRanges; } /** * @return */ public ParameterRanges getHyperParameterRanges() { return this.hyperParameterRanges; } /** * @param hyperParameterRanges * @return Returns a reference to this object so that method calls can be chained together. */ public HyperParameterTrainingJobDefinition withHyperParameterRanges(ParameterRanges hyperParameterRanges) { setHyperParameterRanges(hyperParameterRanges); return this; } /** ** Specifies the values of hyperparameters that do not change for the tuning job. *
     * 
     * @return Specifies the values of hyperparameters that do not change for the tuning job.
     */
    public java.util.Map<String, String> getStaticHyperParameters() {
        return this.staticHyperParameters;
    }

    /**
     * <p>
     * Specifies the values of hyperparameters that do not change for the tuning job.
     * </p>
     * 
     * @param staticHyperParameters
     *        Specifies the values of hyperparameters that do not change for the tuning job.
     */
    public void setStaticHyperParameters(java.util.Map<String, String> staticHyperParameters) {
        this.staticHyperParameters = staticHyperParameters;
    }

    /**
     * <p>
     * Specifies the values of hyperparameters that do not change for the tuning job.
     * </p>
     * 
     * @param staticHyperParameters
     *        Specifies the values of hyperparameters that do not change for the tuning job.
     * @return Returns a reference to this object so that method calls can be chained together.
     */
    public HyperParameterTrainingJobDefinition withStaticHyperParameters(java.util.Map<String, String> staticHyperParameters) {
        setStaticHyperParameters(staticHyperParameters);
        return this;
    }

    /**
     * <p>
     * The HyperParameterAlgorithmSpecification object that specifies the resource algorithm to use for the training
     * jobs that the tuning job launches.
     * </p>
* * @param algorithmSpecification * The HyperParameterAlgorithmSpecification object that specifies the resource algorithm to use for the * training jobs that the tuning job launches. */ public void setAlgorithmSpecification(HyperParameterAlgorithmSpecification algorithmSpecification) { this.algorithmSpecification = algorithmSpecification; } /** ** The HyperParameterAlgorithmSpecification object that specifies the resource algorithm to use for the training * jobs that the tuning job launches. *
* * @return The HyperParameterAlgorithmSpecification object that specifies the resource algorithm to use for the * training jobs that the tuning job launches. */ public HyperParameterAlgorithmSpecification getAlgorithmSpecification() { return this.algorithmSpecification; } /** ** The HyperParameterAlgorithmSpecification object that specifies the resource algorithm to use for the training * jobs that the tuning job launches. *
* * @param algorithmSpecification * The HyperParameterAlgorithmSpecification object that specifies the resource algorithm to use for the * training jobs that the tuning job launches. * @return Returns a reference to this object so that method calls can be chained together. */ public HyperParameterTrainingJobDefinition withAlgorithmSpecification(HyperParameterAlgorithmSpecification algorithmSpecification) { setAlgorithmSpecification(algorithmSpecification); return this; } /** ** The Amazon Resource Name (ARN) of the IAM role associated with the training jobs that the tuning job launches. *
* * @param roleArn * The Amazon Resource Name (ARN) of the IAM role associated with the training jobs that the tuning job * launches. */ public void setRoleArn(String roleArn) { this.roleArn = roleArn; } /** ** The Amazon Resource Name (ARN) of the IAM role associated with the training jobs that the tuning job launches. *
* * @return The Amazon Resource Name (ARN) of the IAM role associated with the training jobs that the tuning job * launches. */ public String getRoleArn() { return this.roleArn; } /** ** The Amazon Resource Name (ARN) of the IAM role associated with the training jobs that the tuning job launches. *
* * @param roleArn * The Amazon Resource Name (ARN) of the IAM role associated with the training jobs that the tuning job * launches. * @return Returns a reference to this object so that method calls can be chained together. */ public HyperParameterTrainingJobDefinition withRoleArn(String roleArn) { setRoleArn(roleArn); return this; } /** ** An array of Channel * objects that specify the input for the training jobs that the tuning job launches. *
     * 
     * @return An array of Channel objects that specify the input for the training jobs that the tuning job launches.
     */
    public java.util.List<Channel> getInputDataConfig() {
        return this.inputDataConfig;
    }

    /**
     * <p>
     * An array of Channel objects that specify the input for the training jobs that the tuning job launches.
     * </p>
     * 
     * @param inputDataConfig
     *        An array of Channel objects that specify the input for the training jobs that the tuning job launches.
     */
    public void setInputDataConfig(java.util.Collection<Channel> inputDataConfig) {
        if (inputDataConfig == null) {
            this.inputDataConfig = null;
            return;
        }
        this.inputDataConfig = new java.util.ArrayList<Channel>(inputDataConfig);
    }

    /**
     * <p>
     * An array of Channel objects that specify the input for the training jobs that the tuning job launches.
     * </p>
     * <p>
     * <b>NOTE:</b> This method appends the values to the existing list (if any). Use
     * {@link #setInputDataConfig(java.util.Collection)} or {@link #withInputDataConfig(java.util.Collection)} if you
     * want to override the existing values.
     * </p>
     * 
     * @param inputDataConfig
     *        An array of Channel objects that specify the input for the training jobs that the tuning job launches.
     * @return Returns a reference to this object so that method calls can be chained together.
     */
    public HyperParameterTrainingJobDefinition withInputDataConfig(Channel... inputDataConfig) {
        if (this.inputDataConfig == null) {
            setInputDataConfig(new java.util.ArrayList<Channel>(inputDataConfig.length));
        }
        for (Channel ele : inputDataConfig) {
            this.inputDataConfig.add(ele);
        }
        return this;
    }

    /**
     * <p>
     * An array of Channel objects that specify the input for the training jobs that the tuning job launches.
     * </p>
     * 
     * @param inputDataConfig
     *        An array of Channel objects that specify the input for the training jobs that the tuning job launches.
     * @return Returns a reference to this object so that method calls can be chained together.
     */
    public HyperParameterTrainingJobDefinition withInputDataConfig(java.util.Collection<Channel> inputDataConfig) {
        setInputDataConfig(inputDataConfig);
        return this;
    }

    /**
     * <p>
     * The VpcConfig object that specifies the VPC that you want the training jobs that this hyperparameter tuning job
     * launches to connect to. Control access to and from your training container by configuring the VPC. For more
     * information, see Protect Training Jobs by Using an Amazon Virtual Private Cloud.
     * </p>
* * @param vpcConfig * The VpcConfig * object that specifies the VPC that you want the training jobs that this hyperparameter tuning job launches * to connect to. Control access to and from your training container by configuring the VPC. For more * information, see Protect Training * Jobs by Using an Amazon Virtual Private Cloud. */ public void setVpcConfig(VpcConfig vpcConfig) { this.vpcConfig = vpcConfig; } /** ** The VpcConfig object * that specifies the VPC that you want the training jobs that this hyperparameter tuning job launches to connect * to. Control access to and from your training container by configuring the VPC. For more information, see Protect Training Jobs by Using an Amazon * Virtual Private Cloud. *
* * @return The VpcConfig * object that specifies the VPC that you want the training jobs that this hyperparameter tuning job * launches to connect to. Control access to and from your training container by configuring the VPC. For * more information, see Protect * Training Jobs by Using an Amazon Virtual Private Cloud. */ public VpcConfig getVpcConfig() { return this.vpcConfig; } /** ** The VpcConfig object * that specifies the VPC that you want the training jobs that this hyperparameter tuning job launches to connect * to. Control access to and from your training container by configuring the VPC. For more information, see Protect Training Jobs by Using an Amazon * Virtual Private Cloud. *
* * @param vpcConfig * The VpcConfig * object that specifies the VPC that you want the training jobs that this hyperparameter tuning job launches * to connect to. Control access to and from your training container by configuring the VPC. For more * information, see Protect Training * Jobs by Using an Amazon Virtual Private Cloud. * @return Returns a reference to this object so that method calls can be chained together. */ public HyperParameterTrainingJobDefinition withVpcConfig(VpcConfig vpcConfig) { setVpcConfig(vpcConfig); return this; } /** ** Specifies the path to the Amazon S3 bucket where you store model artifacts from the training jobs that the tuning * job launches. *
* * @param outputDataConfig * Specifies the path to the Amazon S3 bucket where you store model artifacts from the training jobs that the * tuning job launches. */ public void setOutputDataConfig(OutputDataConfig outputDataConfig) { this.outputDataConfig = outputDataConfig; } /** ** Specifies the path to the Amazon S3 bucket where you store model artifacts from the training jobs that the tuning * job launches. *
* * @return Specifies the path to the Amazon S3 bucket where you store model artifacts from the training jobs that * the tuning job launches. */ public OutputDataConfig getOutputDataConfig() { return this.outputDataConfig; } /** ** Specifies the path to the Amazon S3 bucket where you store model artifacts from the training jobs that the tuning * job launches. *
* * @param outputDataConfig * Specifies the path to the Amazon S3 bucket where you store model artifacts from the training jobs that the * tuning job launches. * @return Returns a reference to this object so that method calls can be chained together. */ public HyperParameterTrainingJobDefinition withOutputDataConfig(OutputDataConfig outputDataConfig) { setOutputDataConfig(outputDataConfig); return this; } /** ** The resources, including the compute instances and storage volumes, to use for the training jobs that the tuning * job launches. *
     * <p>
     * Storage volumes store model artifacts and incremental states. Training algorithms might also use storage volumes
     * for scratch space. If you want SageMaker to use the storage volume to store the training data, choose
     * <code>File</code> as the <code>TrainingInputMode</code> in the algorithm specification. For distributed training
     * algorithms, specify an instance count greater than 1.
     * </p>
     * <note>
     * <p>
     * If you want to use hyperparameter optimization with instance type flexibility, use
     * <code>HyperParameterTuningResourceConfig</code> instead.
     * </p>
     * </note>
     * 
     * @param resourceConfig
     *        The resources, including the compute instances and storage volumes, to use for the training jobs that
     *        the tuning job launches.</p>
     *        <p>
     *        Storage volumes store model artifacts and incremental states. Training algorithms might also use storage
     *        volumes for scratch space. If you want SageMaker to use the storage volume to store the training data,
     *        choose <code>File</code> as the <code>TrainingInputMode</code> in the algorithm specification. For
     *        distributed training algorithms, specify an instance count greater than 1.
     *        </p>
     *        <note>
     *        <p>
     *        If you want to use hyperparameter optimization with instance type flexibility, use
     *        <code>HyperParameterTuningResourceConfig</code> instead.
     *        </p>
     */
    public void setResourceConfig(ResourceConfig resourceConfig) {
        this.resourceConfig = resourceConfig;
    }

    /**
     * <p>
* The resources, including the compute instances and storage volumes, to use for the training jobs that the tuning * job launches. *
     * <p>
     * Storage volumes store model artifacts and incremental states. Training algorithms might also use storage volumes
     * for scratch space. If you want SageMaker to use the storage volume to store the training data, choose
     * <code>File</code> as the <code>TrainingInputMode</code> in the algorithm specification. For distributed training
     * algorithms, specify an instance count greater than 1.
     * </p>
     * <note>
     * <p>
     * If you want to use hyperparameter optimization with instance type flexibility, use
     * <code>HyperParameterTuningResourceConfig</code> instead.
     * </p>
     * </note>
     * 
     * @return The resources, including the compute instances and storage volumes, to use for the training jobs that
     *         the tuning job launches.</p>
     *         <p>
     *         Storage volumes store model artifacts and incremental states. Training algorithms might also use
     *         storage volumes for scratch space. If you want SageMaker to use the storage volume to store the training
     *         data, choose <code>File</code> as the <code>TrainingInputMode</code> in the algorithm specification. For
     *         distributed training algorithms, specify an instance count greater than 1.
     *         </p>
     *         <note>
     *         <p>
     *         If you want to use hyperparameter optimization with instance type flexibility, use
     *         <code>HyperParameterTuningResourceConfig</code> instead.
     *         </p>
     */
    public ResourceConfig getResourceConfig() {
        return this.resourceConfig;
    }

    /**
     * <p>
* The resources, including the compute instances and storage volumes, to use for the training jobs that the tuning * job launches. *
     * <p>
     * Storage volumes store model artifacts and incremental states. Training algorithms might also use storage volumes
     * for scratch space. If you want SageMaker to use the storage volume to store the training data, choose
     * <code>File</code> as the <code>TrainingInputMode</code> in the algorithm specification. For distributed training
     * algorithms, specify an instance count greater than 1.
     * </p>
     * <note>
     * <p>
     * If you want to use hyperparameter optimization with instance type flexibility, use
     * <code>HyperParameterTuningResourceConfig</code> instead.
     * </p>
     * </note>
     * 
     * @param resourceConfig
     *        The resources, including the compute instances and storage volumes, to use for the training jobs that
     *        the tuning job launches.</p>
     *        <p>
     *        Storage volumes store model artifacts and incremental states. Training algorithms might also use storage
     *        volumes for scratch space. If you want SageMaker to use the storage volume to store the training data,
     *        choose <code>File</code> as the <code>TrainingInputMode</code> in the algorithm specification. For
     *        distributed training algorithms, specify an instance count greater than 1.
     *        </p>
     *        <note>
     *        <p>
     *        If you want to use hyperparameter optimization with instance type flexibility, use
     *        <code>HyperParameterTuningResourceConfig</code> instead.
     *        </p>
     * @return Returns a reference to this object so that method calls can be chained together.
     */
    public HyperParameterTrainingJobDefinition withResourceConfig(ResourceConfig resourceConfig) {
        setResourceConfig(resourceConfig);
        return this;
    }

    /**
     * <p>
* Specifies a limit to how long a model hyperparameter training job can run. It also specifies how long a managed * spot training job has to complete. When the job reaches the time limit, SageMaker ends the training job. Use this * API to cap model training costs. *
* * @param stoppingCondition * Specifies a limit to how long a model hyperparameter training job can run. It also specifies how long a * managed spot training job has to complete. When the job reaches the time limit, SageMaker ends the * training job. Use this API to cap model training costs. */ public void setStoppingCondition(StoppingCondition stoppingCondition) { this.stoppingCondition = stoppingCondition; } /** ** Specifies a limit to how long a model hyperparameter training job can run. It also specifies how long a managed * spot training job has to complete. When the job reaches the time limit, SageMaker ends the training job. Use this * API to cap model training costs. *
* * @return Specifies a limit to how long a model hyperparameter training job can run. It also specifies how long a * managed spot training job has to complete. When the job reaches the time limit, SageMaker ends the * training job. Use this API to cap model training costs. */ public StoppingCondition getStoppingCondition() { return this.stoppingCondition; } /** ** Specifies a limit to how long a model hyperparameter training job can run. It also specifies how long a managed * spot training job has to complete. When the job reaches the time limit, SageMaker ends the training job. Use this * API to cap model training costs. *
* * @param stoppingCondition * Specifies a limit to how long a model hyperparameter training job can run. It also specifies how long a * managed spot training job has to complete. When the job reaches the time limit, SageMaker ends the * training job. Use this API to cap model training costs. * @return Returns a reference to this object so that method calls can be chained together. */ public HyperParameterTrainingJobDefinition withStoppingCondition(StoppingCondition stoppingCondition) { setStoppingCondition(stoppingCondition); return this; } /** ** Isolates the training container. No inbound or outbound network calls can be made, except for calls between peers * within a training cluster for distributed training. If network isolation is used for training jobs that are * configured to use a VPC, SageMaker downloads and uploads customer data and model artifacts through the specified * VPC, but the training container does not have network access. *
* * @param enableNetworkIsolation * Isolates the training container. No inbound or outbound network calls can be made, except for calls * between peers within a training cluster for distributed training. If network isolation is used for * training jobs that are configured to use a VPC, SageMaker downloads and uploads customer data and model * artifacts through the specified VPC, but the training container does not have network access. */ public void setEnableNetworkIsolation(Boolean enableNetworkIsolation) { this.enableNetworkIsolation = enableNetworkIsolation; } /** ** Isolates the training container. No inbound or outbound network calls can be made, except for calls between peers * within a training cluster for distributed training. If network isolation is used for training jobs that are * configured to use a VPC, SageMaker downloads and uploads customer data and model artifacts through the specified * VPC, but the training container does not have network access. *
* * @return Isolates the training container. No inbound or outbound network calls can be made, except for calls * between peers within a training cluster for distributed training. If network isolation is used for * training jobs that are configured to use a VPC, SageMaker downloads and uploads customer data and model * artifacts through the specified VPC, but the training container does not have network access. */ public Boolean getEnableNetworkIsolation() { return this.enableNetworkIsolation; } /** ** Isolates the training container. No inbound or outbound network calls can be made, except for calls between peers * within a training cluster for distributed training. If network isolation is used for training jobs that are * configured to use a VPC, SageMaker downloads and uploads customer data and model artifacts through the specified * VPC, but the training container does not have network access. *
* * @param enableNetworkIsolation * Isolates the training container. No inbound or outbound network calls can be made, except for calls * between peers within a training cluster for distributed training. If network isolation is used for * training jobs that are configured to use a VPC, SageMaker downloads and uploads customer data and model * artifacts through the specified VPC, but the training container does not have network access. * @return Returns a reference to this object so that method calls can be chained together. */ public HyperParameterTrainingJobDefinition withEnableNetworkIsolation(Boolean enableNetworkIsolation) { setEnableNetworkIsolation(enableNetworkIsolation); return this; } /** ** Isolates the training container. No inbound or outbound network calls can be made, except for calls between peers * within a training cluster for distributed training. If network isolation is used for training jobs that are * configured to use a VPC, SageMaker downloads and uploads customer data and model artifacts through the specified * VPC, but the training container does not have network access. *
* * @return Isolates the training container. No inbound or outbound network calls can be made, except for calls * between peers within a training cluster for distributed training. If network isolation is used for * training jobs that are configured to use a VPC, SageMaker downloads and uploads customer data and model * artifacts through the specified VPC, but the training container does not have network access. */ public Boolean isEnableNetworkIsolation() { return this.enableNetworkIsolation; } /** *
     * To encrypt all communications between ML compute instances in distributed training, choose <code>True</code>.
* Encryption provides greater security for distributed training, but training might take longer. How long it takes
* depends on the amount of communication between compute instances, especially if you use a deep learning algorithm
* in distributed training.
     * </p>
     * 
     * @param enableInterContainerTrafficEncryption
     *        To encrypt all communications between ML compute instances in distributed training, choose
     *        <code>True</code>. Encryption provides greater security for distributed training, but training might take
* longer. How long it takes depends on the amount of communication between compute instances, especially if
* you use a deep learning algorithm in distributed training.
*/
public void setEnableInterContainerTrafficEncryption(Boolean enableInterContainerTrafficEncryption) {
this.enableInterContainerTrafficEncryption = enableInterContainerTrafficEncryption;
}
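
    /*
     * Usage sketch (illustrative, not part of the generated model): enabling inter-container traffic
     * encryption on a definition. The network-isolation toggle is shown only as an optional companion
     * setting; both boolean values here are hypothetical choices.
     *
     *     HyperParameterTrainingJobDefinition definition = new HyperParameterTrainingJobDefinition()
     *             // Encrypt all communications between ML compute instances; distributed training may take longer.
     *             .withEnableInterContainerTrafficEncryption(true)
     *             // Optionally also cut the training container off from outside network access.
     *             .withEnableNetworkIsolation(true);
     */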
/**
*
     * To encrypt all communications between ML compute instances in distributed training, choose <code>True</code>.
* Encryption provides greater security for distributed training, but training might take longer. How long it takes
* depends on the amount of communication between compute instances, especially if you use a deep learning algorithm
* in distributed training.
     * </p>
     * 
     * @return To encrypt all communications between ML compute instances in distributed training, choose
     *         <code>True</code>. Encryption provides greater security for distributed training, but training might take
* longer. How long it takes depends on the amount of communication between compute instances, especially if
* you use a deep learning algorithm in distributed training.
*/
public Boolean getEnableInterContainerTrafficEncryption() {
return this.enableInterContainerTrafficEncryption;
}
/**
*
     * To encrypt all communications between ML compute instances in distributed training, choose <code>True</code>.
* Encryption provides greater security for distributed training, but training might take longer. How long it takes
* depends on the amount of communication between compute instances, especially if you use a deep learning algorithm
* in distributed training.
     * </p>
     * 
     * @param enableInterContainerTrafficEncryption
     *        To encrypt all communications between ML compute instances in distributed training, choose
     *        <code>True</code>. Encryption provides greater security for distributed training, but training might take
* longer. How long it takes depends on the amount of communication between compute instances, especially if
* you use a deep learning algorithm in distributed training.
* @return Returns a reference to this object so that method calls can be chained together.
*/
public HyperParameterTrainingJobDefinition withEnableInterContainerTrafficEncryption(Boolean enableInterContainerTrafficEncryption) {
setEnableInterContainerTrafficEncryption(enableInterContainerTrafficEncryption);
return this;
}
/**
*
     * To encrypt all communications between ML compute instances in distributed training, choose <code>True</code>.
* Encryption provides greater security for distributed training, but training might take longer. How long it takes
* depends on the amount of communication between compute instances, especially if you use a deep learning algorithm
* in distributed training.
     * </p>
     * 
     * @return To encrypt all communications between ML compute instances in distributed training, choose
     *         <code>True</code>. Encryption provides greater security for distributed training, but training might take
* longer. How long it takes depends on the amount of communication between compute instances, especially if
* you use a deep learning algorithm in distributed training.
*/
public Boolean isEnableInterContainerTrafficEncryption() {
return this.enableInterContainerTrafficEncryption;
}
/**
*
     * A Boolean indicating whether managed spot training is enabled (<code>True</code>) or not (<code>False</code>).
     * </p>
     * 
     * @param enableManagedSpotTraining
     *        A Boolean indicating whether managed spot training is enabled (<code>True</code>) or not
     *        (<code>False</code>).
*/
public void setEnableManagedSpotTraining(Boolean enableManagedSpotTraining) {
this.enableManagedSpotTraining = enableManagedSpotTraining;
}
/**
*
     * A Boolean indicating whether managed spot training is enabled (<code>True</code>) or not (<code>False</code>).
     * </p>
     * 
     * @return A Boolean indicating whether managed spot training is enabled (<code>True</code>) or not
     *         (<code>False</code>).
*/
public Boolean getEnableManagedSpotTraining() {
return this.enableManagedSpotTraining;
}
/**
*
     * A Boolean indicating whether managed spot training is enabled (<code>True</code>) or not (<code>False</code>).
     * </p>
     * 
     * @param enableManagedSpotTraining
     *        A Boolean indicating whether managed spot training is enabled (<code>True</code>) or not
     *        (<code>False</code>).
* @return Returns a reference to this object so that method calls can be chained together.
*/
public HyperParameterTrainingJobDefinition withEnableManagedSpotTraining(Boolean enableManagedSpotTraining) {
setEnableManagedSpotTraining(enableManagedSpotTraining);
return this;
}
/**
*
     * A Boolean indicating whether managed spot training is enabled (<code>True</code>) or not (<code>False</code>).
     * </p>
     * 
     * @return A Boolean indicating whether managed spot training is enabled (<code>True</code>) or not
     *         (<code>False</code>).
*/
public Boolean isEnableManagedSpotTraining() {
return this.enableManagedSpotTraining;
}
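
    /*
     * Usage sketch (illustrative only): opting the tuning job's training jobs into managed spot training.
     * Spot jobs should be bounded by a stopping condition; the runtime and wait-time values below are
     * hypothetical, with the wait time at least as large as the runtime since spot capacity can be
     * interrupted.
     *
     *     HyperParameterTrainingJobDefinition spotDefinition = new HyperParameterTrainingJobDefinition()
     *             .withEnableManagedSpotTraining(true)
     *             // Cap per-job training time and how long the job may wait for spot capacity.
     *             .withStoppingCondition(new StoppingCondition()
     *                     .withMaxRuntimeInSeconds(3600)
     *                     .withMaxWaitTimeInSeconds(7200));
     */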
/**
* @param checkpointConfig
*/
public void setCheckpointConfig(CheckpointConfig checkpointConfig) {
this.checkpointConfig = checkpointConfig;
}
/**
* @return
*/
public CheckpointConfig getCheckpointConfig() {
return this.checkpointConfig;
}
/**
* @param checkpointConfig
* @return Returns a reference to this object so that method calls can be chained together.
*/
public HyperParameterTrainingJobDefinition withCheckpointConfig(CheckpointConfig checkpointConfig) {
setCheckpointConfig(checkpointConfig);
return this;
}
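
    /*
     * Usage sketch (illustrative only): attaching a checkpoint configuration so that interrupted managed
     * spot training jobs can resume. The S3 URI and local path are hypothetical placeholders.
     *
     *     definition.withCheckpointConfig(new CheckpointConfig()
     *             // Where SageMaker syncs checkpoints that the training container writes locally.
     *             .withS3Uri("s3://amzn-s3-demo-bucket/checkpoints")
     *             .withLocalPath("/opt/ml/checkpoints"));
     */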
/**
*
     * The number of times to retry the job when the job fails due to an <code>InternalServerError</code>.
     * </p>
     * 
     * @param retryStrategy
     *        The number of times to retry the job when the job fails due to an <code>InternalServerError</code>.
*/
public void setRetryStrategy(RetryStrategy retryStrategy) {
this.retryStrategy = retryStrategy;
}
/**
*
     * The number of times to retry the job when the job fails due to an <code>InternalServerError</code>.
     * </p>
     * 
     * @return The number of times to retry the job when the job fails due to an <code>InternalServerError</code>.
*/
public RetryStrategy getRetryStrategy() {
return this.retryStrategy;
}
/**
*
     * The number of times to retry the job when the job fails due to an <code>InternalServerError</code>.
     * </p>
     * 
     * @param retryStrategy
     *        The number of times to retry the job when the job fails due to an <code>InternalServerError</code>.
* @return Returns a reference to this object so that method calls can be chained together.
*/
public HyperParameterTrainingJobDefinition withRetryStrategy(RetryStrategy retryStrategy) {
setRetryStrategy(retryStrategy);
return this;
}
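
    /*
     * Usage sketch (illustrative only): retrying training jobs that fail with an InternalServerError.
     * The retry count of 2 is a hypothetical choice.
     *
     *     definition.withRetryStrategy(new RetryStrategy()
     *             // Retry a failed job up to 2 more times on InternalServerError.
     *             .withMaximumRetryAttempts(2));
     */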
/**
*
* The configuration for the hyperparameter tuning resources, including the compute instances and storage volumes,
* used for training jobs launched by the tuning job. By default, storage volumes hold model artifacts and
     * incremental states. Choose <code>File</code> for <code>TrainingInputMode</code> in the
     * <code>AlgorithmSpecification</code> parameter to additionally store training data in the storage volume
     * (optional).
     * </p>
     * 
     * @param hyperParameterTuningResourceConfig
     *        The configuration for the hyperparameter tuning resources, including the compute instances and storage
     *        volumes, used for training jobs launched by the tuning job. By default, storage volumes hold model
     *        artifacts and incremental states. Choose <code>File</code> for <code>TrainingInputMode</code> in the
     *        <code>AlgorithmSpecification</code> parameter to additionally store training data in the storage volume
     *        (optional).
*/
public void setHyperParameterTuningResourceConfig(HyperParameterTuningResourceConfig hyperParameterTuningResourceConfig) {
this.hyperParameterTuningResourceConfig = hyperParameterTuningResourceConfig;
}
/**
*
* The configuration for the hyperparameter tuning resources, including the compute instances and storage volumes,
* used for training jobs launched by the tuning job. By default, storage volumes hold model artifacts and
     * incremental states. Choose <code>File</code> for <code>TrainingInputMode</code> in the
     * <code>AlgorithmSpecification</code> parameter to additionally store training data in the storage volume
     * (optional).
     * </p>
     * 
     * @return The configuration for the hyperparameter tuning resources, including the compute instances and storage
     *         volumes, used for training jobs launched by the tuning job. By default, storage volumes hold model
     *         artifacts and incremental states. Choose <code>File</code> for <code>TrainingInputMode</code> in the
     *         <code>AlgorithmSpecification</code> parameter to additionally store training data in the storage volume
     *         (optional).
*/
public HyperParameterTuningResourceConfig getHyperParameterTuningResourceConfig() {
return this.hyperParameterTuningResourceConfig;
}
/**
*
* The configuration for the hyperparameter tuning resources, including the compute instances and storage volumes,
* used for training jobs launched by the tuning job. By default, storage volumes hold model artifacts and
     * incremental states. Choose <code>File</code> for <code>TrainingInputMode</code> in the
     * <code>AlgorithmSpecification</code> parameter to additionally store training data in the storage volume
     * (optional).
     * </p>
     * 
     * @param hyperParameterTuningResourceConfig
     *        The configuration for the hyperparameter tuning resources, including the compute instances and storage
     *        volumes, used for training jobs launched by the tuning job. By default, storage volumes hold model
     *        artifacts and incremental states. Choose <code>File</code> for <code>TrainingInputMode</code> in the
     *        <code>AlgorithmSpecification</code> parameter to additionally store training data in the storage volume
     *        (optional).
* @return Returns a reference to this object so that method calls can be chained together.
*/
public HyperParameterTrainingJobDefinition withHyperParameterTuningResourceConfig(HyperParameterTuningResourceConfig hyperParameterTuningResourceConfig) {
setHyperParameterTuningResourceConfig(hyperParameterTuningResourceConfig);
return this;
}
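
    /*
     * Usage sketch (illustrative only): configuring tuning resources through
     * HyperParameterTuningResourceConfig rather than ResourceConfig, which the javadoc above recommends
     * when you want instance type flexibility. The instance type, count, and volume size are hypothetical
     * values, and the String-typed withInstanceType overload is assumed here.
     *
     *     definition.withHyperParameterTuningResourceConfig(new HyperParameterTuningResourceConfig()
     *             .withInstanceType("ml.m5.xlarge") // hypothetical instance type
     *             .withInstanceCount(2)             // greater than 1 for distributed training algorithms
     *             .withVolumeSizeInGB(30));         // holds model artifacts and incremental states
     */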
/**
* * An environment variable that you can pass into the SageMaker CreateTrainingJob * API. You can use an existing environment variable from the training container or use your own. See Define metrics and variables for more information. *
*
     * The maximum number of items specified for <code>Map Entries</code> refers to the maximum number of environment
     * variables for each <code>TrainingJobDefinition</code> and also the maximum for the hyperparameter tuning job
* itself. That is, the sum of the number of environment variables for all the training job definitions can't exceed
* the maximum number specified.
     * 
     * @return An environment variable that you can pass into the SageMaker CreateTrainingJob API. You can use an
     *         existing environment variable from the training container or use your own. See Define metrics and
     *         variables for more information.
     *         <p>
     *         The maximum number of items specified for <code>Map Entries</code> refers to the maximum number of
     *         environment variables for each <code>TrainingJobDefinition</code> and also the maximum for the
* hyperparameter tuning job itself. That is, the sum of the number of environment variables for all the
* training job definitions can't exceed the maximum number specified.
     */
    public java.util.Map<String, String> getEnvironment() {
        return this.environment;
    }

    /**
* An environment variable that you can pass into the SageMaker CreateTrainingJob * API. You can use an existing environment variable from the training container or use your own. See Define metrics and variables for more information. *
*
     * The maximum number of items specified for <code>Map Entries</code> refers to the maximum number of environment
     * variables for each <code>TrainingJobDefinition</code> and also the maximum for the hyperparameter tuning job
* itself. That is, the sum of the number of environment variables for all the training job definitions can't exceed
* the maximum number specified.
     * 
     * @param environment
     *        An environment variable that you can pass into the SageMaker CreateTrainingJob API. You can use an
     *        existing environment variable from the training container or use your own. See Define metrics and
     *        variables for more information.
     *        <p>
     *        The maximum number of items specified for <code>Map Entries</code> refers to the maximum number of
     *        environment variables for each <code>TrainingJobDefinition</code> and also the maximum for the
* hyperparameter tuning job itself. That is, the sum of the number of environment variables for all the
* training job definitions can't exceed the maximum number specified.
     */
    public void setEnvironment(java.util.Map<String, String> environment) {
        this.environment = environment;
    }

    /**
* An environment variable that you can pass into the SageMaker CreateTrainingJob * API. You can use an existing environment variable from the training container or use your own. See Define metrics and variables for more information. *
*
     * The maximum number of items specified for <code>Map Entries</code> refers to the maximum number of environment
     * variables for each <code>TrainingJobDefinition</code> and also the maximum for the hyperparameter tuning job
* itself. That is, the sum of the number of environment variables for all the training job definitions can't exceed
* the maximum number specified.
     * 
     * @param environment
     *        An environment variable that you can pass into the SageMaker CreateTrainingJob API. You can use an
     *        existing environment variable from the training container or use your own. See Define metrics and
     *        variables for more information.
     *        <p>
     *        The maximum number of items specified for <code>Map Entries</code> refers to the maximum number of
     *        environment variables for each <code>TrainingJobDefinition</code> and also the maximum for the
* hyperparameter tuning job itself. That is, the sum of the number of environment variables for all the
* training job definitions can't exceed the maximum number specified.
     * @return Returns a reference to this object so that method calls can be chained together.
     */
    public HyperParameterTrainingJobDefinition withEnvironment(java.util.Map<String, String> environment) {
        setEnvironment(environment);
        return this;
    }
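
    /*
     * End-to-end usage sketch (illustrative only): assembling a definition to pass to
     * CreateHyperParameterTuningJob. Every literal below (names, ARNs, S3 URIs, image URI, limits) is a
     * hypothetical placeholder, and the environment map is subject to the entry limits described above.
     *
     *     java.util.Map<String, String> env = new java.util.HashMap<>();
     *     env.put("LOG_LEVEL", "INFO");
     *
     *     HyperParameterTrainingJobDefinition definition = new HyperParameterTrainingJobDefinition()
     *             .withDefinitionName("example-definition")
     *             .withAlgorithmSpecification(new HyperParameterAlgorithmSpecification()
     *                     .withTrainingImage("123456789012.dkr.ecr.us-east-1.amazonaws.com/example:latest")
     *                     .withTrainingInputMode("File"))
     *             .withRoleArn("arn:aws:iam::123456789012:role/ExampleSageMakerRole")
     *             .withInputDataConfig(new Channel()
     *                     .withChannelName("train")
     *                     .withDataSource(new DataSource().withS3DataSource(new S3DataSource()
     *                             .withS3DataType("S3Prefix")
     *                             .withS3Uri("s3://amzn-s3-demo-bucket/train"))))
     *             .withOutputDataConfig(new OutputDataConfig()
     *                     .withS3OutputPath("s3://amzn-s3-demo-bucket/output"))
     *             .withStoppingCondition(new StoppingCondition().withMaxRuntimeInSeconds(3600))
     *             .withStaticHyperParameters(java.util.Collections.singletonMap("epochs", "10"))
     *             .withEnvironment(env);
     */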