/*
 * Copyright 2018-2023 Amazon.com, Inc. or its affiliates. All Rights Reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with
 * the License. A copy of the License is located at
 *
 * http://aws.amazon.com/apache2.0
 *
 * or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
 * CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions
 * and limitations under the License.
 */
package com.amazonaws.services.sagemaker.model;

import java.io.Serializable;
import javax.annotation.Generated;
import com.amazonaws.protocol.StructuredPojo;
import com.amazonaws.protocol.ProtocolMarshaller;

/**
 * <p>
 * Describes the container, as part of model definition.
 * </p>
 *
 * @see AWS API Documentation
 */
@Generated("com.amazonaws:aws-java-sdk-code-generator")
public class ContainerDefinition implements Serializable, Cloneable, StructuredPojo {
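    // Illustrative usage sketch; not part of the generated class. It shows the typical flow:
    // build a ContainerDefinition with the fluent withers, then attach it as the primary
    // container of a CreateModelRequest (same package). The image URI, S3 path, role ARN, and
    // model name below are hypothetical placeholders.
    //
    //   ContainerDefinition primary = new ContainerDefinition()
    //           .withImage("123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference-image:latest")
    //           .withModelDataUrl("s3://my-bucket/output/model.tar.gz")
    //           .withMode(ContainerMode.SingleModel);
    //   CreateModelRequest request = new CreateModelRequest()
    //           .withModelName("my-model")
    //           .withExecutionRoleArn("arn:aws:iam::123456789012:role/MySageMakerExecutionRole")
    //           .withPrimaryContainer(primary);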
    /**
     * <p>
     * This parameter is ignored for models that contain only a <code>PrimaryContainer</code>.
     * </p>
     * <p>
     * When a <code>ContainerDefinition</code> is part of an inference pipeline, the value of the parameter uniquely
     * identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to
     * Monitor an Inference Pipeline. If you don't specify a value for this parameter for a
     * <code>ContainerDefinition</code> that is part of an inference pipeline, a unique name is automatically assigned
     * based on the position of the <code>ContainerDefinition</code> in the pipeline. If you specify a value for the
     * <code>ContainerHostName</code> for any <code>ContainerDefinition</code> that is part of an inference pipeline,
     * you must specify a value for the <code>ContainerHostName</code> parameter of every
     * <code>ContainerDefinition</code> in that pipeline.
     * </p>
     */
    private String containerHostname;
    /**
     * <p>
     * The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker
     * registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own
     * custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker
     * requirements. SageMaker supports both <code>registry/repository[:tag]</code> and
     * <code>registry/repository[@digest]</code> image path formats. For more information, see Using Your Own
     * Algorithms with Amazon SageMaker.
     * </p>
     * <p>
     * The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container
     * Registry must be in the same region as the model or endpoint you are creating.
     * </p>
     */
    private String image;
    /**
     * <p>
     * Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon
     * Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a
     * Private Docker Registry for Real-Time Inference Containers.
     * </p>
     * <p>
     * The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container
     * Registry must be in the same region as the model or endpoint you are creating.
     * </p>
     */
    private ImageConfig imageConfig;
    /**
     * <p>
     * Whether the container hosts a single model or multiple models.
     * </p>
     */
    private String mode;
    /**
     * <p>
     * The S3 path where the model artifacts, which result from model training, are stored. This path must point to a
     * single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms,
     * but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.
     * </p>
     * <p>
     * The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are
     * creating.
     * </p>
     * <p>
     * If you provide a value for this parameter, SageMaker uses Amazon Web Services Security Token Service to download
     * model artifacts from the S3 path you provide. Amazon Web Services STS is activated in your Amazon Web Services
     * account by default. If you previously deactivated Amazon Web Services STS for a region, you need to reactivate
     * Amazon Web Services STS for that region. For more information, see Activating and Deactivating Amazon Web
     * Services STS in an Amazon Web Services Region in the <i>Amazon Web Services Identity and Access Management User
     * Guide</i>.
     * </p>
     * <p>
     * If you use a built-in algorithm to create a model, SageMaker requires that you provide a S3 path to the model
     * artifacts in <code>ModelDataUrl</code>.
     * </p>
     */
    private String modelDataUrl;
    /**
     * <p>
     * The environment variables to set in the Docker container. Each key and value in the <code>Environment</code>
     * string to string map can have length of up to 1024. We support up to 16 entries in the map.
     * </p>
     */
    private java.util.Map<String, String> environment;
    /**
     * <p>
     * The name or Amazon Resource Name (ARN) of the model package to use to create the model.
     * </p>
     */
    private String modelPackageName;
    /**
     * <p>
     * The inference specification name in the model package version.
     * </p>
     */
    private String inferenceSpecificationName;
    /**
     * <p>
     * Specifies additional configuration for multi-model endpoints.
     * </p>
     */
    private MultiModelConfig multiModelConfig;
    /**
     * <p>
     * Specifies the location of ML model data to deploy.
     * </p>
     * <p>
     * Currently you cannot use <code>ModelDataSource</code> in conjunction with SageMaker batch transform, SageMaker
     * serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.
     * </p>
     */
    private ModelDataSource modelDataSource;
    /**
     * <p>
     * This parameter is ignored for models that contain only a <code>PrimaryContainer</code>.
     * </p>
     * <p>
     * When a <code>ContainerDefinition</code> is part of an inference pipeline, the value of the parameter uniquely
     * identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to
     * Monitor an Inference Pipeline. If you don't specify a value for this parameter for a
     * <code>ContainerDefinition</code> that is part of an inference pipeline, a unique name is automatically assigned
     * based on the position of the <code>ContainerDefinition</code> in the pipeline. If you specify a value for the
     * <code>ContainerHostName</code> for any <code>ContainerDefinition</code> that is part of an inference pipeline,
     * you must specify a value for the <code>ContainerHostName</code> parameter of every
     * <code>ContainerDefinition</code> in that pipeline.
     * </p>
     *
     * @param containerHostname
     *        This parameter is ignored for models that contain only a <code>PrimaryContainer</code>.
     *        <p>
     *        When a <code>ContainerDefinition</code> is part of an inference pipeline, the value of the parameter
     *        uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs
     *        and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a
     *        <code>ContainerDefinition</code> that is part of an inference pipeline, a unique name is automatically
     *        assigned based on the position of the <code>ContainerDefinition</code> in the pipeline. If you specify a
     *        value for the <code>ContainerHostName</code> for any <code>ContainerDefinition</code> that is part of an
     *        inference pipeline, you must specify a value for the <code>ContainerHostName</code> parameter of every
     *        <code>ContainerDefinition</code> in that pipeline.
     *        </p>
     */
    public void setContainerHostname(String containerHostname) {
        this.containerHostname = containerHostname;
    }
    /**
     * <p>
     * This parameter is ignored for models that contain only a <code>PrimaryContainer</code>.
     * </p>
     * <p>
     * When a <code>ContainerDefinition</code> is part of an inference pipeline, the value of the parameter uniquely
     * identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to
     * Monitor an Inference Pipeline. If you don't specify a value for this parameter for a
     * <code>ContainerDefinition</code> that is part of an inference pipeline, a unique name is automatically assigned
     * based on the position of the <code>ContainerDefinition</code> in the pipeline. If you specify a value for the
     * <code>ContainerHostName</code> for any <code>ContainerDefinition</code> that is part of an inference pipeline,
     * you must specify a value for the <code>ContainerHostName</code> parameter of every
     * <code>ContainerDefinition</code> in that pipeline.
     * </p>
     *
     * @return This parameter is ignored for models that contain only a <code>PrimaryContainer</code>.
     *         <p>
     *         When a <code>ContainerDefinition</code> is part of an inference pipeline, the value of the parameter
     *         uniquely identifies the container for the purposes of logging and metrics. For information, see Use
     *         Logs and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for
     *         a <code>ContainerDefinition</code> that is part of an inference pipeline, a unique name is
     *         automatically assigned based on the position of the <code>ContainerDefinition</code> in the pipeline.
     *         If you specify a value for the <code>ContainerHostName</code> for any <code>ContainerDefinition</code>
     *         that is part of an inference pipeline, you must specify a value for the <code>ContainerHostName</code>
     *         parameter of every <code>ContainerDefinition</code> in that pipeline.
     *         </p>
     */
    public String getContainerHostname() {
        return this.containerHostname;
    }
    /**
     * <p>
     * This parameter is ignored for models that contain only a <code>PrimaryContainer</code>.
     * </p>
     * <p>
     * When a <code>ContainerDefinition</code> is part of an inference pipeline, the value of the parameter uniquely
     * identifies the container for the purposes of logging and metrics. For information, see Use Logs and Metrics to
     * Monitor an Inference Pipeline. If you don't specify a value for this parameter for a
     * <code>ContainerDefinition</code> that is part of an inference pipeline, a unique name is automatically assigned
     * based on the position of the <code>ContainerDefinition</code> in the pipeline. If you specify a value for the
     * <code>ContainerHostName</code> for any <code>ContainerDefinition</code> that is part of an inference pipeline,
     * you must specify a value for the <code>ContainerHostName</code> parameter of every
     * <code>ContainerDefinition</code> in that pipeline.
     * </p>
     *
     * @param containerHostname
     *        This parameter is ignored for models that contain only a <code>PrimaryContainer</code>.
     *        <p>
     *        When a <code>ContainerDefinition</code> is part of an inference pipeline, the value of the parameter
     *        uniquely identifies the container for the purposes of logging and metrics. For information, see Use Logs
     *        and Metrics to Monitor an Inference Pipeline. If you don't specify a value for this parameter for a
     *        <code>ContainerDefinition</code> that is part of an inference pipeline, a unique name is automatically
     *        assigned based on the position of the <code>ContainerDefinition</code> in the pipeline. If you specify a
     *        value for the <code>ContainerHostName</code> for any <code>ContainerDefinition</code> that is part of an
     *        inference pipeline, you must specify a value for the <code>ContainerHostName</code> parameter of every
     *        <code>ContainerDefinition</code> in that pipeline.
     *        </p>
     * @return Returns a reference to this object so that method calls can be chained together.
     */
    public ContainerDefinition withContainerHostname(String containerHostname) {
        setContainerHostname(containerHostname);
        return this;
    }
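    // Illustrative sketch; not generated code. Per the Javadoc above, once any container in an
    // inference pipeline sets ContainerHostName, every container in that pipeline must set it.
    // The hostnames and image URIs below are hypothetical placeholders.
    //
    //   CreateModelRequest pipeline = new CreateModelRequest()
    //           .withModelName("my-inference-pipeline")
    //           .withExecutionRoleArn("arn:aws:iam::123456789012:role/MySageMakerExecutionRole")
    //           .withContainers(
    //                   new ContainerDefinition().withContainerHostname("preprocess")
    //                           .withImage("123456789012.dkr.ecr.us-east-1.amazonaws.com/preprocess:latest"),
    //                   new ContainerDefinition().withContainerHostname("predict")
    //                           .withImage("123456789012.dkr.ecr.us-east-1.amazonaws.com/predict:latest"));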
    /**
     * <p>
     * The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker
     * registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own
     * custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker
     * requirements. SageMaker supports both <code>registry/repository[:tag]</code> and
     * <code>registry/repository[@digest]</code> image path formats. For more information, see Using Your Own
     * Algorithms with Amazon SageMaker.
     * </p>
     * <p>
     * The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container
     * Registry must be in the same region as the model or endpoint you are creating.
     * </p>
     *
     * @param image
     *        The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a
     *        Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are
     *        using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must
     *        meet SageMaker requirements. SageMaker supports both <code>registry/repository[:tag]</code> and
     *        <code>registry/repository[@digest]</code> image path formats. For more information, see Using Your Own
     *        Algorithms with Amazon SageMaker.
     *        <p>
     *        The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2
     *        Container Registry must be in the same region as the model or endpoint you are creating.
     *        </p>
     */
    public void setImage(String image) {
        this.image = image;
    }
    /**
     * <p>
     * The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker
     * registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own
     * custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker
     * requirements. SageMaker supports both <code>registry/repository[:tag]</code> and
     * <code>registry/repository[@digest]</code> image path formats. For more information, see Using Your Own
     * Algorithms with Amazon SageMaker.
     * </p>
     * <p>
     * The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container
     * Registry must be in the same region as the model or endpoint you are creating.
     * </p>
     *
     * @return The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a
     *         Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are
     *         using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must
     *         meet SageMaker requirements. SageMaker supports both <code>registry/repository[:tag]</code> and
     *         <code>registry/repository[@digest]</code> image path formats. For more information, see Using Your Own
     *         Algorithms with Amazon SageMaker.
     *         <p>
     *         The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2
     *         Container Registry must be in the same region as the model or endpoint you are creating.
     *         </p>
     */
    public String getImage() {
        return this.image;
    }
    /**
     * <p>
     * The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a Docker
     * registry that is accessible from the same VPC that you configure for your endpoint. If you are using your own
     * custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker
     * requirements. SageMaker supports both <code>registry/repository[:tag]</code> and
     * <code>registry/repository[@digest]</code> image path formats. For more information, see Using Your Own
     * Algorithms with Amazon SageMaker.
     * </p>
     * <p>
     * The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container
     * Registry must be in the same region as the model or endpoint you are creating.
     * </p>
     *
     * @param image
     *        The path where inference code is stored. This can be either in Amazon EC2 Container Registry or in a
     *        Docker registry that is accessible from the same VPC that you configure for your endpoint. If you are
     *        using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must
     *        meet SageMaker requirements. SageMaker supports both <code>registry/repository[:tag]</code> and
     *        <code>registry/repository[@digest]</code> image path formats. For more information, see Using Your Own
     *        Algorithms with Amazon SageMaker.
     *        <p>
     *        The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2
     *        Container Registry must be in the same region as the model or endpoint you are creating.
     *        </p>
     * @return Returns a reference to this object so that method calls can be chained together.
     */
    public ContainerDefinition withImage(String image) {
        setImage(image);
        return this;
    }
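    // Illustrative sketch; not generated code. These are the two image path formats the Javadoc
    // above names: a tag reference is mutable, while a digest reference pins one exact image.
    // Both URIs below are hypothetical placeholders.
    //
    //   // registry/repository[:tag]
    //   new ContainerDefinition().withImage("123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:1.0");
    //   // registry/repository[@digest]
    //   new ContainerDefinition().withImage(
    //           "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef");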
    /**
     * <p>
     * Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon
     * Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a
     * Private Docker Registry for Real-Time Inference Containers.
     * </p>
     * <p>
     * The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container
     * Registry must be in the same region as the model or endpoint you are creating.
     * </p>
     *
     * @param imageConfig
     *        Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your
     *        Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker
     *        registry, see Use a Private Docker Registry for Real-Time Inference Containers.
     *        <p>
     *        The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2
     *        Container Registry must be in the same region as the model or endpoint you are creating.
     *        </p>
     */
    public void setImageConfig(ImageConfig imageConfig) {
        this.imageConfig = imageConfig;
    }
    /**
     * <p>
     * Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon
     * Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a
     * Private Docker Registry for Real-Time Inference Containers.
     * </p>
     * <p>
     * The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container
     * Registry must be in the same region as the model or endpoint you are creating.
     * </p>
     *
     * @return Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from
     *         your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker
     *         registry, see Use a Private Docker Registry for Real-Time Inference Containers.
     *         <p>
     *         The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2
     *         Container Registry must be in the same region as the model or endpoint you are creating.
     *         </p>
     */
    public ImageConfig getImageConfig() {
        return this.imageConfig;
    }
    /**
     * <p>
     * Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon
     * Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a
     * Private Docker Registry for Real-Time Inference Containers.
     * </p>
     * <p>
     * The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2 Container
     * Registry must be in the same region as the model or endpoint you are creating.
     * </p>
     *
     * @param imageConfig
     *        Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your
     *        Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker
     *        registry, see Use a Private Docker Registry for Real-Time Inference Containers.
     *        <p>
     *        The model artifacts in an Amazon S3 bucket and the Docker image for inference container in Amazon EC2
     *        Container Registry must be in the same region as the model or endpoint you are creating.
     *        </p>
     * @return Returns a reference to this object so that method calls can be chained together.
     */
    public ContainerDefinition withImageConfig(ImageConfig imageConfig) {
        setImageConfig(imageConfig);
        return this;
    }
    /**
     * <p>
     * Whether the container hosts a single model or multiple models.
     * </p>
     *
     * @param mode
     *        Whether the container hosts a single model or multiple models.
     * @see ContainerMode
     */
    public void setMode(String mode) {
        this.mode = mode;
    }

    /**
     * <p>
     * Whether the container hosts a single model or multiple models.
     * </p>
     *
     * @return Whether the container hosts a single model or multiple models.
     * @see ContainerMode
     */
    public String getMode() {
        return this.mode;
    }

    /**
     * <p>
     * Whether the container hosts a single model or multiple models.
     * </p>
     *
     * @param mode
     *        Whether the container hosts a single model or multiple models.
     * @return Returns a reference to this object so that method calls can be chained together.
     * @see ContainerMode
     */
    public ContainerDefinition withMode(String mode) {
        setMode(mode);
        return this;
    }

    /**
     * <p>
     * Whether the container hosts a single model or multiple models.
     * </p>
     *
     * @param mode
     *        Whether the container hosts a single model or multiple models.
     * @return Returns a reference to this object so that method calls can be chained together.
     * @see ContainerMode
     */
    public ContainerDefinition withMode(ContainerMode mode) {
        this.mode = mode.toString();
        return this;
    }
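    // Illustrative sketch; not generated code. A hedged example of hosting several models in
    // one container by pairing MultiModel mode with a ModelDataUrl prefix that holds the model
    // archives. The image URI, S3 prefix, and cache setting value are hypothetical placeholders.
    //
    //   ContainerDefinition multiModel = new ContainerDefinition()
    //           .withImage("123456789012.dkr.ecr.us-east-1.amazonaws.com/my-mme-image:latest")
    //           .withMode(ContainerMode.MultiModel)
    //           .withModelDataUrl("s3://my-bucket/multi-model-prefix/")
    //           .withMultiModelConfig(new MultiModelConfig().withModelCacheSetting("Disabled"));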
    /**
     * <p>
     * The S3 path where the model artifacts, which result from model training, are stored. This path must point to a
     * single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms,
     * but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.
     * </p>
     * <p>
     * The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are
     * creating.
     * </p>
     * <p>
     * If you provide a value for this parameter, SageMaker uses Amazon Web Services Security Token Service to download
     * model artifacts from the S3 path you provide. Amazon Web Services STS is activated in your Amazon Web Services
     * account by default. If you previously deactivated Amazon Web Services STS for a region, you need to reactivate
     * Amazon Web Services STS for that region. For more information, see Activating and Deactivating Amazon Web
     * Services STS in an Amazon Web Services Region in the <i>Amazon Web Services Identity and Access Management User
     * Guide</i>.
     * </p>
     * <p>
     * If you use a built-in algorithm to create a model, SageMaker requires that you provide a S3 path to the model
     * artifacts in <code>ModelDataUrl</code>.
     * </p>
     *
     * @param modelDataUrl
     *        The S3 path where the model artifacts, which result from model training, are stored. This path must
     *        point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker
     *        built-in algorithms, but not if you use your own algorithms. For more information on built-in
     *        algorithms, see Common Parameters.
     *        <p>
     *        The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are
     *        creating.
     *        </p>
     *        <p>
     *        If you provide a value for this parameter, SageMaker uses Amazon Web Services Security Token Service to
     *        download model artifacts from the S3 path you provide. Amazon Web Services STS is activated in your
     *        Amazon Web Services account by default. If you previously deactivated Amazon Web Services STS for a
     *        region, you need to reactivate Amazon Web Services STS for that region. For more information, see
     *        Activating and Deactivating Amazon Web Services STS in an Amazon Web Services Region in the <i>Amazon
     *        Web Services Identity and Access Management User Guide</i>.
     *        </p>
     *        <p>
     *        If you use a built-in algorithm to create a model, SageMaker requires that you provide a S3 path to the
     *        model artifacts in <code>ModelDataUrl</code>.
     *        </p>
     */
    public void setModelDataUrl(String modelDataUrl) {
        this.modelDataUrl = modelDataUrl;
    }
    /**
     * <p>
     * The S3 path where the model artifacts, which result from model training, are stored. This path must point to a
     * single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms,
     * but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.
     * </p>
     * <p>
     * The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are
     * creating.
     * </p>
     * <p>
     * If you provide a value for this parameter, SageMaker uses Amazon Web Services Security Token Service to download
     * model artifacts from the S3 path you provide. Amazon Web Services STS is activated in your Amazon Web Services
     * account by default. If you previously deactivated Amazon Web Services STS for a region, you need to reactivate
     * Amazon Web Services STS for that region. For more information, see Activating and Deactivating Amazon Web
     * Services STS in an Amazon Web Services Region in the <i>Amazon Web Services Identity and Access Management User
     * Guide</i>.
     * </p>
     * <p>
     * If you use a built-in algorithm to create a model, SageMaker requires that you provide a S3 path to the model
     * artifacts in <code>ModelDataUrl</code>.
     * </p>
     *
     * @return The S3 path where the model artifacts, which result from model training, are stored. This path must
     *         point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker
     *         built-in algorithms, but not if you use your own algorithms. For more information on built-in
     *         algorithms, see Common Parameters.
     *         <p>
     *         The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are
     *         creating.
     *         </p>
     *         <p>
     *         If you provide a value for this parameter, SageMaker uses Amazon Web Services Security Token Service to
     *         download model artifacts from the S3 path you provide. Amazon Web Services STS is activated in your
     *         Amazon Web Services account by default. If you previously deactivated Amazon Web Services STS for a
     *         region, you need to reactivate Amazon Web Services STS for that region. For more information, see
     *         Activating and Deactivating Amazon Web Services STS in an Amazon Web Services Region in the <i>Amazon
     *         Web Services Identity and Access Management User Guide</i>.
     *         </p>
     *         <p>
     *         If you use a built-in algorithm to create a model, SageMaker requires that you provide a S3 path to the
     *         model artifacts in <code>ModelDataUrl</code>.
     *         </p>
     */
    public String getModelDataUrl() {
        return this.modelDataUrl;
    }
    /**
     * <p>
     * The S3 path where the model artifacts, which result from model training, are stored. This path must point to a
     * single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker built-in algorithms,
     * but not if you use your own algorithms. For more information on built-in algorithms, see Common Parameters.
     * </p>
     * <p>
     * The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are
     * creating.
     * </p>
     * <p>
     * If you provide a value for this parameter, SageMaker uses Amazon Web Services Security Token Service to download
     * model artifacts from the S3 path you provide. Amazon Web Services STS is activated in your Amazon Web Services
     * account by default. If you previously deactivated Amazon Web Services STS for a region, you need to reactivate
     * Amazon Web Services STS for that region. For more information, see Activating and Deactivating Amazon Web
     * Services STS in an Amazon Web Services Region in the <i>Amazon Web Services Identity and Access Management User
     * Guide</i>.
     * </p>
     * <p>
     * If you use a built-in algorithm to create a model, SageMaker requires that you provide a S3 path to the model
     * artifacts in <code>ModelDataUrl</code>.
     * </p>
     *
     * @param modelDataUrl
     *        The S3 path where the model artifacts, which result from model training, are stored. This path must
     *        point to a single gzip compressed tar archive (.tar.gz suffix). The S3 path is required for SageMaker
     *        built-in algorithms, but not if you use your own algorithms. For more information on built-in
     *        algorithms, see Common Parameters.
     *        <p>
     *        The model artifacts must be in an S3 bucket that is in the same region as the model or endpoint you are
     *        creating.
     *        </p>
     *        <p>
     *        If you provide a value for this parameter, SageMaker uses Amazon Web Services Security Token Service to
     *        download model artifacts from the S3 path you provide. Amazon Web Services STS is activated in your
     *        Amazon Web Services account by default. If you previously deactivated Amazon Web Services STS for a
     *        region, you need to reactivate Amazon Web Services STS for that region. For more information, see
     *        Activating and Deactivating Amazon Web Services STS in an Amazon Web Services Region in the <i>Amazon
     *        Web Services Identity and Access Management User Guide</i>.
     *        </p>
     *        <p>
     *        If you use a built-in algorithm to create a model, SageMaker requires that you provide a S3 path to the
     *        model artifacts in <code>ModelDataUrl</code>.
     *        </p>
     * @return Returns a reference to this object so that method calls can be chained together.
     */
    public ContainerDefinition withModelDataUrl(String modelDataUrl) {
        setModelDataUrl(modelDataUrl);
        return this;
    }
    /**
     * <p>
     * The environment variables to set in the Docker container. Each key and value in the <code>Environment</code>
     * string to string map can have length of up to 1024. We support up to 16 entries in the map.
     * </p>
     *
     * @return The environment variables to set in the Docker container. Each key and value in the
     *         <code>Environment</code> string to string map can have length of up to 1024. We support up to 16
     *         entries in the map.
     */
    public java.util.Map<String, String> getEnvironment() {
        return environment;
    }
    /**
     * <p>
     * The environment variables to set in the Docker container. Each key and value in the <code>Environment</code>
     * string to string map can have length of up to 1024. We support up to 16 entries in the map.
     * </p>
     *
     * @param environment
     *        The environment variables to set in the Docker container. Each key and value in the
     *        <code>Environment</code> string to string map can have length of up to 1024. We support up to 16 entries
     *        in the map.
     */
    public void setEnvironment(java.util.Map<String, String> environment) {
        this.environment = environment;
    }
    /**
     * <p>
     * The environment variables to set in the Docker container. Each key and value in the <code>Environment</code>
     * string to string map can have length of up to 1024. We support up to 16 entries in the map.
     * </p>
     *
     * @param environment
     *        The environment variables to set in the Docker container. Each key and value in the
     *        <code>Environment</code> string to string map can have length of up to 1024. We support up to 16 entries
     *        in the map.
     * @return Returns a reference to this object so that method calls can be chained together.
     */
    public ContainerDefinition withEnvironment(java.util.Map<String, String> environment) {
        setEnvironment(environment);
        return this;
    }
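    // Illustrative sketch; not generated code. It exercises the Environment limits documented
    // above (keys and values up to 1024 characters each, at most 16 entries). The variable
    // names and values below are examples, not required settings.
    //
    //   java.util.Map<String, String> env = new java.util.HashMap<>();
    //   env.put("SAGEMAKER_PROGRAM", "inference.py");
    //   env.put("LOG_LEVEL", "INFO");
    //   new ContainerDefinition().withEnvironment(env);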
    /**
     * <p>
     * The name or Amazon Resource Name (ARN) of the model package to use to create the model.
     * </p>
     *
     * @param modelPackageName
     *        The name or Amazon Resource Name (ARN) of the model package to use to create the model.
     */
    public void setModelPackageName(String modelPackageName) {
        this.modelPackageName = modelPackageName;
    }

    /**
     * <p>
     * The name or Amazon Resource Name (ARN) of the model package to use to create the model.
     * </p>
     *
     * @return The name or Amazon Resource Name (ARN) of the model package to use to create the model.
     */
    public String getModelPackageName() {
        return this.modelPackageName;
    }

    /**
     * <p>
     * The name or Amazon Resource Name (ARN) of the model package to use to create the model.
     * </p>
     *
     * @param modelPackageName
     *        The name or Amazon Resource Name (ARN) of the model package to use to create the model.
     * @return Returns a reference to this object so that method calls can be chained together.
     */
    public ContainerDefinition withModelPackageName(String modelPackageName) {
        setModelPackageName(modelPackageName);
        return this;
    }
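    // Illustrative sketch; not generated code. A hedged example of creating a model from a
    // versioned model package instead of specifying an image directly. The ARN is a
    // hypothetical placeholder.
    //
    //   ContainerDefinition fromPackage = new ContainerDefinition()
    //           .withModelPackageName("arn:aws:sagemaker:us-east-1:123456789012:model-package/my-package/1");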
    /**
     * <p>
     * The inference specification name in the model package version.
     * </p>
     *
     * @param inferenceSpecificationName
     *        The inference specification name in the model package version.
     */
    public void setInferenceSpecificationName(String inferenceSpecificationName) {
        this.inferenceSpecificationName = inferenceSpecificationName;
    }

    /**
     * <p>
     * The inference specification name in the model package version.
     * </p>
     *
     * @return The inference specification name in the model package version.
     */
    public String getInferenceSpecificationName() {
        return this.inferenceSpecificationName;
    }

    /**
     * <p>
     * The inference specification name in the model package version.
     * </p>
     *
     * @param inferenceSpecificationName
     *        The inference specification name in the model package version.
     * @return Returns a reference to this object so that method calls can be chained together.
     */
    public ContainerDefinition withInferenceSpecificationName(String inferenceSpecificationName) {
        setInferenceSpecificationName(inferenceSpecificationName);
        return this;
    }
    /**
     * <p>
     * Specifies additional configuration for multi-model endpoints.
     * </p>
     *
     * @param multiModelConfig
     *        Specifies additional configuration for multi-model endpoints.
     */
    public void setMultiModelConfig(MultiModelConfig multiModelConfig) {
        this.multiModelConfig = multiModelConfig;
    }

    /**
     * <p>
     * Specifies additional configuration for multi-model endpoints.
     * </p>
     *
     * @return Specifies additional configuration for multi-model endpoints.
     */
    public MultiModelConfig getMultiModelConfig() {
        return this.multiModelConfig;
    }

    /**
     * <p>
     * Specifies additional configuration for multi-model endpoints.
     * </p>
     *
     * @param multiModelConfig
     *        Specifies additional configuration for multi-model endpoints.
     * @return Returns a reference to this object so that method calls can be chained together.
     */
    public ContainerDefinition withMultiModelConfig(MultiModelConfig multiModelConfig) {
        setMultiModelConfig(multiModelConfig);
        return this;
    }
    /**
     * <p>
     * Specifies the location of ML model data to deploy.
     * </p>
     * <p>
     * Currently you cannot use <code>ModelDataSource</code> in conjunction with SageMaker batch transform, SageMaker
     * serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.
     * </p>
     *
     * @param modelDataSource
     *        Specifies the location of ML model data to deploy.
     *        <p>
     *        Currently you cannot use <code>ModelDataSource</code> in conjunction with SageMaker batch transform,
     *        SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.
     *        </p>
     */
    public void setModelDataSource(ModelDataSource modelDataSource) {
        this.modelDataSource = modelDataSource;
    }
    /**
     * <p>
     * Specifies the location of ML model data to deploy.
     * </p>
     * <p>
     * Currently you cannot use <code>ModelDataSource</code> in conjunction with SageMaker batch transform, SageMaker
     * serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.
     * </p>
     *
     * @return Specifies the location of ML model data to deploy.
     *         <p>
     *         Currently you cannot use <code>ModelDataSource</code> in conjunction with SageMaker batch transform,
     *         SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.
     *         </p>
     */
    public ModelDataSource getModelDataSource() {
        return this.modelDataSource;
    }
    /**
     * <p>
     * Specifies the location of ML model data to deploy.
     * </p>
     * <p>
     * Currently you cannot use <code>ModelDataSource</code> in conjunction with SageMaker batch transform, SageMaker
     * serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.
     * </p>
     *
     * @param modelDataSource
     *        Specifies the location of ML model data to deploy.
     *        <p>
     *        Currently you cannot use <code>ModelDataSource</code> in conjunction with SageMaker batch transform,
     *        SageMaker serverless endpoints, SageMaker multi-model endpoints, and SageMaker Marketplace.
     *        </p>
     * @return Returns a reference to this object so that method calls can be chained together.
     */
    public ContainerDefinition withModelDataSource(ModelDataSource modelDataSource) {
        setModelDataSource(modelDataSource);
        return this;
    }
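    // Illustrative sketch; not generated code. A hedged example of pointing the container at
    // uncompressed model data through ModelDataSource instead of a .tar.gz ModelDataUrl. The
    // S3ModelDataSource class and these withers are assumptions based on recent SDK releases;
    // the bucket and prefix below are hypothetical placeholders.
    //
    //   new ContainerDefinition().withModelDataSource(new ModelDataSource()
    //           .withS3DataSource(new S3ModelDataSource()
    //                   .withS3Uri("s3://my-bucket/uncompressed-model/")
    //                   .withS3DataType("S3Prefix")
    //                   .withCompressionType("None")));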