With SageMaker, you don't have to manage servers. It also provides common machine learning algorithms that are optimized to run efficiently against extremely large data in a distributed environment. With native support for bring-your-own algorithms and frameworks, SageMaker offers flexible distributed training options that adjust to your specific workloads. Deploy a model into a secure and scalable environment by launching it with a few clicks from SageMaker Studio or the SageMaker console.

Foundation models are extremely powerful models able to solve a wide array of tasks. To solve most tasks efficiently, these models require some form of customization. The recommended first step in customizing a foundation model for a specific use case is prompt engineering. Providing your foundation model with well-engineered, context-rich prompts can help achieve the desired results without any fine-tuning or changing of model weights. For more information, see Prompt engineering for foundation models. If prompt engineering alone is not enough to customize your foundation model to a specific task, you can fine-tune the foundation model on additional domain-specific data. The fine-tuning process changes the model's weights. If you want to customize your model with information from a knowledge library without any retraining, see Retrieval Augmented Generation (RAG).

Prompt engineering is the process of designing and refining the prompts, or input, given to a large model so that it generates specific types of output. It involves selecting appropriate keywords, providing context, and shaping the input in a way that encourages the model to produce the desired response, and it is the primary technique for actively shaping the behavior and output of foundation models. Effective prompt engineering is crucial for directing model behavior and achieving desired responses. Through prompt engineering, you can control a model's tone, style, and domain expertise without more involved customization measures like fine-tuning. We recommend dedicating time to prompt engineering before you consider fine-tuning a model on additional data. The goal is to provide the model with sufficient context and guidance so that it can generalize and perform well in unseen or limited-data scenarios.

Fine-tuning a foundation model

Foundation models are computationally expensive and trained on a large unlabeled corpus. Fine-tuning a pre-trained foundation model is an affordable way to take advantage of its broad capabilities while customizing the model on your own small corpus. Fine-tuning is a customization method that involves further training, and it does change the weights of your model. Fine-tuning might be useful to you if you need:

- To customize your model to specific business needs
- Your model to successfully work with domain-specific language such as industry jargon, technical terms, or other specialized vocabulary
- Enhanced performance for specific tasks
- Accurate, relevant, and context-aware responses in applications
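The sketches that follow illustrate these workflows programmatically. As an alternative to the few-clicks deployment flow in SageMaker Studio or the console, a foundation model can also be deployed with the SageMaker Python SDK. This is a minimal sketch assuming a SageMaker JumpStart model; the model ID and instance type are placeholders, not recommendations.

```python
# Minimal deployment sketch using the SageMaker Python SDK with a
# SageMaker JumpStart model. The model ID and instance type are
# placeholders; pick ones appropriate for your use case and budget.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="huggingface-llm-falcon-7b-instruct-bf16")
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",  # instance choice depends on model size
)
```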
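Once an endpoint is live, prompt engineering happens entirely at invocation time, with no change to model weights. The sketch below uses the boto3 SageMaker runtime client with a context-rich prompt that sets a role, supplies context, and constrains the output. The endpoint name is hypothetical, and the payload schema follows the common Hugging Face text-generation convention, which varies by model.

```python
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

# A context-rich prompt: a role, supporting context, and explicit output
# constraints steer tone, style, and format without any retraining.
prompt = (
    "You are a support assistant for a payments API.\n"
    "Context: refunds settle within 5 business days.\n"
    "Question: A customer asks when their refund will arrive.\n"
    "Answer in two sentences, in a friendly tone."
)

response = runtime.invoke_endpoint(
    EndpointName="my-foundation-model-endpoint",  # placeholder name
    ContentType="application/json",
    # The request body schema depends on the deployed model; the keys
    # below follow the common Hugging Face text-generation convention.
    Body=json.dumps({"inputs": prompt, "parameters": {"max_new_tokens": 128}}),
)
print(response["Body"].read().decode("utf-8"))
```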
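If prompt engineering alone is not enough, fine-tuning can also be started from the SDK. This is a minimal sketch assuming SageMaker JumpStart's JumpStartEstimator; the model ID, hyperparameter names, and S3 path are placeholders, and the expected training-data format depends on the chosen model.

```python
from sagemaker.jumpstart.estimator import JumpStartEstimator

# Placeholder model ID and S3 path. Available hyperparameters and the
# required training-data format vary by model; consult the model card.
estimator = JumpStartEstimator(model_id="huggingface-llm-falcon-7b-bf16")
estimator.set_hyperparameters(epochs="3", learning_rate="0.0001")
estimator.fit({"training": "s3://my-bucket/domain-corpus/"})
```

After training completes, the fine-tuned weights can be hosted on a real-time endpoint with estimator.deploy() and invoked like any other SageMaker model.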