--- title: "Model inference" date: 2019-08-27T00:00:00-08:00 weight: 50 draft: false --- ### Model Inference After the model is trained and stored in S3 bucket, the next step is to use that model for inference. This chapter explains how to use the previously trained model and run inference using TensorFlow and Keras on Amazon EKS. #### Run inference pod A model from training was stored in the S3 bucket in previous section. Make sure `S3_BUCKET` and `AWS_REGION` environment variables are set correctly. ``` curl -LO https://eksworkshop.com/advanced/420_kubeflow/kubeflow.files/mnist-inference.yaml envsubst }} Data: {"instances": [[[[0.0], [0.0], [0.0], [0.0], [0.0] ... 0.0], [0.0]]]], "signature_name": "serving_default"} The model thought this was a Ankle boot (class 9), and it was actually a Ankle boot (class 9) {{< /output >}} #### Cleanup Now that we saw how to run training job and inference, let's terminate these pods to free up resources ``` kubectl delete -f mnist-training.yaml kubectl delete -f mnist-inference.yaml ```