# aws-k8s-tester [![Go Report Card](https://goreportcard.com/badge/github.com/aws/aws-k8s-tester)](https://goreportcard.com/report/github.com/aws/aws-k8s-tester) [![Godoc](http://img.shields.io/badge/go-documentation-blue.svg?style=flat-square)](https://pkg.go.dev/github.com/aws/aws-k8s-tester) [![Releases](https://img.shields.io/github/release/aws/aws-k8s-tester/all.svg?style=flat-square)](https://github.com/aws/aws-k8s-tester/releases) [![LICENSE](https://img.shields.io/github/license/aws/aws-k8s-tester.svg?style=flat-square)](https://github.com/aws/aws-k8s-tester/blob/master/LICENSE)

https://github.com/kubernetes/enhancements/blob/master/keps/provider-aws/20181126-aws-k8s-tester.md

`aws-k8s-tester` is a set of utilities and libraries for "testing" Kubernetes on AWS.

- Implements the [`test-infra/kubetest2` interface](https://github.com/kubernetes/test-infra/tree/master/kubetest2).
- Uses AWS CloudFormation for resource creation.
- Supports automatic rollback and resource deletion.
- Flexible add-on support via environment variables.
- Extensible as a Go package; `eks.Tester.Up` creates an EKS cluster (see the Go sketch below).
- Performance test suites.

The main goal is to create "temporary" EC2 instances or EKS clusters for "testing" purposes:

- Upstream conformance tests
  - https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes/sig-cloud-provider/aws/eks/eks-periodics.yaml
  - https://github.com/kubernetes/test-infra/pull/16890
- CNI plugin conformance tests
  - https://github.com/aws/amazon-vpc-cni-k8s/blob/master/scripts/lib/cluster.sh
  - https://github.com/aws/amazon-vpc-cni-k8s/pull/875
  - https://github.com/aws/amazon-vpc-cni-k8s/pull/878
  - https://github.com/aws/amazon-vpc-cni-k8s/pull/951
  - https://github.com/aws/amazon-vpc-cni-k8s/pull/957
- AppMesh scalability testing
  - https://github.com/aws/aws-app-mesh-controller-for-k8s/blob/master/scripts/lib/cluster.sh
  - https://github.com/aws/aws-app-mesh-controller-for-k8s/pull/137

## Install

https://github.com/aws/aws-k8s-tester/releases

## `aws-k8s-tester eks`

Make sure AWS credentials are configured on your machine:

```bash
# confirm the credentials are valid
aws sts get-caller-identity --query Arn --output text
```

See the following for more fields:

- https://github.com/aws/aws-k8s-tester/blob/master/eksconfig/README.md
- https://pkg.go.dev/github.com/aws/aws-k8s-tester/eksconfig?tab=doc
- https://github.com/aws/aws-k8s-tester/blob/master/eksconfig/default.yaml

```bash
# easiest way, use the defaults
# creates role, VPC, EKS cluster
rm -rf /tmp/${USER}-test-eks-prod*
aws-k8s-tester eks create cluster --enable-prompt=true -p /tmp/${USER}-test-prod-eks.yaml
aws-k8s-tester eks delete cluster --enable-prompt=true -p /tmp/${USER}-test-prod-eks.yaml
```
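As noted above, the same workflow is available as a Go package through `eks.Tester.Up`. The following is only a minimal sketch; the `eksconfig.NewDefault`, `ValidateAndSetDefaults`, and `eks.New` calls and the config field names are assumptions to be checked against the `eksconfig`/`eks` package docs linked above.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-k8s-tester/eks"
	"github.com/aws/aws-k8s-tester/eksconfig"
)

func main() {
	// assumption: NewDefault returns a config pre-populated with defaults
	cfg := eksconfig.NewDefault()
	cfg.ConfigPath = "/tmp/test-eks.yaml" // hypothetical path for the generated config
	cfg.Region = "us-west-2"              // assumption: field name may differ
	if err := cfg.ValidateAndSetDefaults(); err != nil {
		panic(err)
	}

	// assumption: eks.New wires the config into a Tester
	tester, err := eks.New(cfg)
	if err != nil {
		panic(err)
	}

	// Up creates the role, VPC, EKS cluster, and enabled add-ons;
	// Down rolls everything back.
	if err := tester.Up(); err != nil {
		panic(err)
	}
	defer func() {
		if derr := tester.Down(); derr != nil {
			fmt.Println("delete failed:", derr)
		}
	}()
}
```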
```bash
# advanced options can be set via environmental variables
# e.g. node groups, managed node groups, add-ons
rm -rf /tmp/${USER}-test-eks*
AWS_K8S_TESTER_EKS_PARTITION=aws \
AWS_K8S_TESTER_EKS_REGION=us-west-2 \
AWS_K8S_TESTER_EKS_LOG_COLOR=true \
AWS_K8S_TESTER_EKS_S3_BUCKET_CREATE=true \
AWS_K8S_TESTER_EKS_S3_BUCKET_CREATE_KEEP=true \
AWS_K8S_TESTER_EKS_COMMAND_AFTER_CREATE_CLUSTER="aws eks describe-cluster --name GetRef.Name" \
AWS_K8S_TESTER_EKS_COMMAND_AFTER_CREATE_ADD_ONS="aws eks describe-cluster --name GetRef.Name" \
AWS_K8S_TESTER_EKS_PARAMETERS_ENCRYPTION_CMK_CREATE=true \
AWS_K8S_TESTER_EKS_PARAMETERS_ROLE_CREATE=true \
AWS_K8S_TESTER_EKS_PARAMETERS_VERSION=1.17 \
AWS_K8S_TESTER_EKS_PARAMETERS_VPC_CREATE=true \
AWS_K8S_TESTER_EKS_CLIENTS=5 \
AWS_K8S_TESTER_EKS_CLIENT_QPS=30 \
AWS_K8S_TESTER_EKS_CLIENT_BURST=20 \
AWS_K8S_TESTER_EKS_ADD_ON_NODE_GROUPS_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_NODE_GROUPS_ROLE_CREATE=true \
AWS_K8S_TESTER_EKS_ADD_ON_NODE_GROUPS_FETCH_LOGS=false \
AWS_K8S_TESTER_EKS_ADD_ON_NODE_GROUPS_ASGS='{"GetRef.Name-ng-al2-cpu":{"name":"GetRef.Name-ng-al2-cpu","remote-access-user-name":"ec2-user","ami-type":"AL2_x86_64","image-id":"","image-id-ssm-parameter":"/aws/service/eks/optimized-ami/1.17/amazon-linux-2/recommended/image_id","instance-types":["c5.xlarge"],"volume-size":40,"asg-min-size":2,"asg-max-size":2,"asg-desired-capacity":2,"kubelet-extra-args":""},"GetRef.Name-ng-al2-gpu":{"name":"GetRef.Name-ng-al2-gpu","remote-access-user-name":"ec2-user","ami-type":"AL2_x86_64_GPU","image-id":"","image-id-ssm-parameter":"/aws/service/eks/optimized-ami/1.17/amazon-linux-2-gpu/recommended/image_id","instance-types":["p3.8xlarge"],"volume-size":40,"asg-min-size":1,"asg-max-size":1,"asg-desired-capacity":1,"kubelet-extra-args":""},"GetRef.Name-ng-bottlerocket":{"name":"GetRef.Name-ng-bottlerocket","remote-access-user-name":"ec2-user","ami-type":"BOTTLEROCKET_x86_64","image-id":"","image-id-ssm-parameter":"/aws/service/bottlerocket/aws-k8s-1.15/x86_64/latest/image_id","ssm-document-cfn-stack-name":"GetRef.Name-install-bottlerocket","ssm-document-name":"GetRef.Name-InstallBottlerocket","ssm-document-create":true,"ssm-document-commands":"enable-admin-container","ssm-document-execution-timeout-seconds":3600,"instance-types":["c5.xlarge"],"volume-size":40,"asg-min-size":2,"asg-max-size":2,"asg-desired-capacity":2}}' \
AWS_K8S_TESTER_EKS_ADD_ON_MANAGED_NODE_GROUPS_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_MANAGED_NODE_GROUPS_ROLE_CREATE=true \
AWS_K8S_TESTER_EKS_ADD_ON_MANAGED_NODE_GROUPS_FETCH_LOGS=false \
AWS_K8S_TESTER_EKS_ADD_ON_MANAGED_NODE_GROUPS_MNGS='{"GetRef.Name-mng-al2-cpu":{"name":"GetRef.Name-mng-al2-cpu","remote-access-user-name":"ec2-user","release-version":"","ami-type":"AL2_x86_64","instance-types":["c5.xlarge"],"volume-size":40,"asg-min-size":2,"asg-max-size":2,"asg-desired-capacity":2},"GetRef.Name-mng-al2-gpu":{"name":"GetRef.Name-mng-al2-gpu","remote-access-user-name":"ec2-user","release-version":"","ami-type":"AL2_x86_64_GPU","instance-types":["p3.8xlarge"],"volume-size":40,"asg-min-size":1,"asg-max-size":1,"asg-desired-capacity":1}}' \
AWS_K8S_TESTER_EKS_ADD_ON_FLUENTD_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_METRICS_SERVER_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_CONFORMANCE_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_APP_MESH_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_CSI_EBS_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_KUBERNETES_DASHBOARD_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_PROMETHEUS_GRAFANA_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_NLB_HELLO_WORLD_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_NLB_GUESTBOOK_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_ALB_2048_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_JOBS_PI_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_JOBS_ECHO_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_CRON_JOBS_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_CSRS_LOCAL_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_CONFIGMAPS_LOCAL_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_SECRETS_LOCAL_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_WORDPRESS_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_JUPYTER_HUB_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_CUDA_VECTOR_ADD_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_CLUSTER_LOADER_LOCAL_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_HOLLOW_NODES_LOCAL_ENABLE=true \
AWS_K8S_TESTER_EKS_ADD_ON_STRESSER_LOCAL_ENABLE=true \
aws-k8s-tester eks create cluster --enable-prompt=true -p /tmp/${USER}-test-eks.yaml
```

## `eks-utils apis`

`eks-utils apis` helps with API deprecation. Upstream Kubernetes recommends upgrading deprecated APIs with *get and put*:

> the simplest approach is to get/put every object after upgrades. objects that don't need migration will no-op (they won't even increment resourceVersion in etcd). objects that do need migration will persist in the new preferred storage version

This means there is no client-side way to find all resources created with deprecated API groups. The only way to ensure the API group upgrades is to list all resources and execute *get and put* with the latest API group version. If a resource already has the latest API version, the operation is a no-op. Otherwise, it is upgraded to the latest API version.

`eks-utils apis` helps with the list calls with proper pagination and generates *get and put* scripts for the cluster:

```bash
# to check supported API groups from the current kube-apiserver
eks-utils apis \
  --kubeconfig /tmp/kubeconfig.yaml \
  supported

# to write API upgrade/rollback scripts and YAML files in "/tmp/eks-utils-resources"
#
# make sure to set proper "--batch-limit" and "--batch-interval"
# to not overload the EKS master; if set too high, it can affect
# production workloads by slowing down kube-apiserver
rm -rf /tmp/eks-utils-resources
eks-utils apis \
  --kubeconfig /tmp/kubeconfig.yaml \
  --enable-prompt \
  deprecate \
  --batch-limit 10 \
  --batch-interval 2s \
  --dir /tmp/eks-utils-resources

# this command does not apply or create any resources;
# it only lists the resources that need to be upgraded
# if there are any resources that need an upgrade,
# it writes the patched YAML file, the original YAML file, and
# bash scripts to update and roll back
find /tmp/eks-utils-resources
```

## `etcd-utils k8s list`

`etcd-utils k8s list` helps with API deprecation (e.g. https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md#deprecations-and-removals).

**WARNING**: `kubectl` internally converts API versions in the response (see [`kubernetes/issues#58131`](https://github.com/kubernetes/kubernetes/issues/58131#issuecomment-403829566)), which means `kubectl get` output may show different API versions than the ones persisted in `etcd`. Upstream Kubernetes recommends upgrading deprecated APIs with *get and put*:

> the simplest approach is to get/put every object after upgrades. objects that don't need migration will no-op (they won't even increment resourceVersion in etcd). objects that do need migration will persist in the new preferred storage version

To minimize the impact of the list calls, `etcd-utils k8s list` reads keys with leadership election and pagination; only a single worker can run at a time.

```bash
# to list all deployments with etcd pagination + k8s decoder
etcd-utils k8s \
  --endpoints http://localhost:2379 \
  list \
  --prefixes /registry/deployments \
  --output /tmp/etcd-utils-k8s-list.output.yaml # or ".json"
```
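For illustration, the leadership-election-plus-pagination pattern described above can be sketched with the etcd `clientv3` client as follows. This is a rough sketch of the approach, not the tool's actual implementation; the election prefix, page size, and client import paths are assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3"
	"go.etcd.io/etcd/clientv3/concurrency"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx := context.Background()

	// leadership election so only a single worker lists at a time
	sess, err := concurrency.NewSession(cli)
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	election := concurrency.NewElection(sess, "/etcd-utils-k8s-list-election") // hypothetical election prefix
	if err = election.Campaign(ctx, "worker-1"); err != nil {
		panic(err)
	}
	defer election.Resign(ctx)

	// paginate over /registry/deployments, 100 keys per page
	prefix := "/registry/deployments"
	rangeEnd := clientv3.GetPrefixRangeEnd(prefix)
	key := prefix
	for {
		resp, err := cli.Get(ctx, key,
			clientv3.WithRange(rangeEnd),
			clientv3.WithLimit(100),
			clientv3.WithSort(clientv3.SortByKey, clientv3.SortAscend),
		)
		if err != nil {
			panic(err)
		}
		for _, kv := range resp.Kvs {
			// values are stored in the Kubernetes storage (protobuf) format;
			// "etcd-utils k8s list" decodes them before writing YAML/JSON
			fmt.Println(string(kv.Key))
		}
		if !resp.More || len(resp.Kvs) == 0 {
			break
		}
		// next page starts just after the last returned key
		key = string(append(resp.Kvs[len(resp.Kvs)-1].Key, 0))
	}
}
```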