# Serve a CV model on GPU with an Amazon SageMaker Multi-Model Endpoint (MME)

The Jupyter notebook `triton-cv-mme-tensorflow-backend` shows you how to deploy two trained TensorFlow CV models using the NVIDIA Triton Inference Server on an Amazon SageMaker MME.

## Steps to run the notebook

1. Launch the notebook in SageMaker Studio. This notebook was tested with the Data Science Python 3 kernel.
1. The notebook is self-contained with no local dependencies. Further details on the workings of the code can be found within the notebook itself.
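
## Invoking the MME (sketch)

As a rough illustration of the pattern the notebook walks through, the snippet below shows how a Triton-backed MME can be invoked with boto3, routing the request to one of the hosted models via the `TargetModel` parameter. The endpoint name, model archive name, and the input tensor name and shape are placeholders; the actual values depend on how the notebook packages and names the models, so refer to the notebook for the exact request format.

```python
import json

import boto3
import numpy as np

runtime = boto3.client("sagemaker-runtime")

# Placeholder values -- replace with the endpoint and model archive created in the notebook.
endpoint_name = "triton-cv-mme-endpoint"
target_model = "model-1.tar.gz"

# Build a Triton (KServe v2 protocol) JSON inference request.
# The input name, shape, and datatype must match the model's Triton config.
dummy_image = np.random.rand(1, 224, 224, 3).astype(np.float32)
payload = {
    "inputs": [
        {
            "name": "input_1",
            "shape": list(dummy_image.shape),
            "datatype": "FP32",
            "data": dummy_image.flatten().tolist(),
        }
    ]
}

# TargetModel tells the MME which model archive in the S3 prefix to load and serve.
response = runtime.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/octet-stream",
    Body=json.dumps(payload),
    TargetModel=target_model,
)

result = json.loads(response["Body"].read().decode("utf-8"))
print(result["outputs"][0]["shape"])
```

Switching `TargetModel` to the second model's archive name sends the same style of request to the other model hosted on the same endpoint, which is the core benefit of the MME pattern.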