# Kor-LLM-On-SageMaker

This repository is currently a work in progress.

# Reference

- Blog: Deploy BLOOM-176B and OPT-30B on Amazon SageMaker with large model inference Deep Learning Containers and DeepSpeed
  - https://aws.amazon.com/blogs/machine-learning/deploy-bloom-176b-and-opt-30b-on-amazon-sagemaker-with-large-model-inference-deep-learning-containers-and-deepspeed/
- Blog: Deploy large models on Amazon SageMaker using DJL Serving and DeepSpeed model parallel inference
  - https://aws.amazon.com/blogs/machine-learning/deploy-large-models-on-amazon-sagemaker-using-djlserving-and-deepspeed-model-parallel-inference/
- SageMaker large model inference tutorials
  - https://docs.aws.amazon.com/sagemaker/latest/dg/large-model-inference-tutorials.html
- Use DJL with the SageMaker Python SDK
  - https://sagemaker.readthedocs.io/en/stable/frameworks/djl/using_djl.html
- Example notebooks: AWS SageMaker examples GitHub repository
  - https://github.com/aws/amazon-sagemaker-examples/tree/7bcbec65be55a8c160bc757b051d7508c9114846/inference/nlp/realtime/llm
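As a quick orientation while the repository is being built out, below is a minimal sketch of deploying a large language model with DJL Serving through the SageMaker Python SDK, in the spirit of the "Use DJL with the SageMaker Python SDK" reference above. The S3 location, IAM role, instance type, and partitioning values are placeholder assumptions for illustration, not settings used by this repository, and the exact `DJLModel` arguments may differ across SDK versions.

```python
# Minimal sketch, assuming: model artifacts already uploaded to S3,
# an existing SageMaker execution role, and a multi-GPU instance type
# with enough memory for the model. All names below are placeholders.
from sagemaker.djl_inference import DJLModel

model = DJLModel(
    "s3://my-bucket/my-korean-llm-artifacts/",          # placeholder model location
    "arn:aws:iam::123456789012:role/MySageMakerRole",   # placeholder execution role
    data_type="fp16",            # serve weights in half precision
    task="text-generation",
    number_of_partitions=2,      # shard the model across 2 GPUs
)

# Deploy to a real-time endpoint; the instance type is an assumption.
predictor = model.deploy("ml.g5.12xlarge", initial_instance_count=1)

# Send a simple text-generation request, then clean up the endpoint.
result = predictor.predict({"inputs": "대한민국의 수도는"})
print(result)
predictor.delete_endpoint()
```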