+++
title = "Architecture discussion"
chapter = false
weight = 1
+++

:imagesdir: /images

## ROSA Architecture

ROSA being a managed solution means the implementation is pre-configured for a small, common set of use cases. Customers seeking a high degree of control and flexibility may not find ROSA an ideal fit and should explore OpenShift Container Platform (OCP) instead.

This is not a lab section but rather a technical discussion and deep dive into the various options for deploying ROSA and some of the related considerations.

### Common deployment components:

All ROSA implementations will have:

* Three Master nodes, to maintain cluster quorum and to ensure proper failover and resilience of OpenShift.
* At least two Infrastructure nodes, to ensure resilience of the OpenShift router layer, which provides end user application access.
* A collection of AWS Elastic Load Balancers. Some provide end user access to the OpenShift router layer, and through it the application workloads running on OpenShift; others expose endpoints used for cluster administration and management by the SRE teams.

The OpenShift Master nodes host the API endpoint for cluster administration and management, the controllers, and etcd. The OpenShift Infrastructure nodes host the built-in OpenShift container registry, the OpenShift router layer, and monitoring.

ROSA clusters require AWS VPC subnets per AZ. For single AZ implementations two subnets are required (one public, one private). For multi AZ implementations six subnets are needed (one public and one private per AZ). For private clusters with PrivateLink, three private subnets are required.

### Default deployment (Single AZ)

----
rosa create cluster
----

The default cluster config will deploy a basic ROSA cluster into a single AWS AZ. This creates a new VPC with two subnets (one public and one private) within the same AZ. The OpenShift control plane and data plane, i.e. the Master, Infrastructure, and Worker nodes, are all placed in the private subnet of that AZ.

This is the simplest implementation and a good way to start playing with ROSA from a developer point of view. It is not recommended for scale, resilience, or production.

image::rosa-arch-single.png[Rosa single arch]

### Multi AZ Cluster

----
rosa create cluster --multi-az

or

rosa create cluster --interactive
----

image::rosa-arch-multi-create.png[Rosa multi create]

The multi AZ implementation makes use of three AWS AZs, with a public and a private subnet in each AZ (a total of six subnets). If not deploying into an existing VPC, the ROSA provisioning process will create a VPC that meets these requirements.

Multi AZ implementations deploy three Master nodes and three Infrastructure nodes, spread across the three AZs. This takes advantage of the resilience constructs of the AWS multi AZ VPC design and combines it with the resilience model of OpenShift.

AWS Availability Zones are the closest construct AWS has to a traditional data center; it should, however, be noted that AWS AZs are made up of multiple physical data centers. AZs are far enough apart that an event impacting one AZ will not impact another, while remaining close enough that customers will not experience a performance difference between AZs.

#### Subnet sizing and address spaces:

When deploying ROSA there are three IP address CIDRs that warrant discussion:

image::rosa-arch-cidr.png[Rosa arch CIDR]

* Machine CIDR 10.0.0.0/16

This is the IP address space for the AWS VPC, either existing or to be created. If deploying into an existing VPC, ensure that this is set to the CIDR of the VPC being deployed into. If not deploying into an existing VPC, the six subnets created will be the same size: equal divisions of the VPC or Machine CIDR. It should be noted that there are not a large number of resources within the public subnets, mainly load balancers and NAT gateway interfaces.

* Service CIDR
* POD CIDR

The Service and POD CIDRs are private address spaces internal to OpenShift, used by the SDN. You can deploy multiple ROSA clusters and re-use these address spaces, as they sit behind the routing layer within OpenShift and will not conflict with the same address spaces on other clusters. This is similar to private IP use in residential homes: every Comcast customer can use the same 10.0.0.0/16 internally. It should however be noted that if the application workloads need to reach data sources and other services outside of OpenShift, the target address space must not overlap these CIDRs; an overlap will result in routing issues internal to OpenShift.

* Host prefix /23 - /26

The Host prefix has nothing to do with the AWS VPC. It takes the POD CIDR above and defines how it is divided across all of the underlying container hosts, or Worker nodes. This is a consideration linked to how many Worker nodes there will be and how large their instance types are.
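To make the host prefix concrete, here is a short worked example using the Pod CIDR and host prefix values that also appear in the PrivateLink create command later on this page; the arithmetic is illustrative, not output from any tool:

----
# Pod CIDR 10.128.0.0/14 divided by a host prefix of /23:
#
#   Pod IPs cluster-wide : 2^(32-14) = 262,144 addresses in 10.128.0.0/14
#   Pod IPs per node     : 2^(32-23) =     512 addresses in each /23 slice
#   Worker /23 slices    : 2^(23-14) =     512 node subnets available
#
# A numerically larger host prefix (e.g. /26) yields fewer pod IPs per node
# (2^(32-26) = 64) but more node slices (2^(26-14) = 4,096), and vice versa.
----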
### Deploying ROSA into an existing VPC

Customers looking for more granular control of subnet address space sizing should consider creating the VPC first and then deploying ROSA into it. Deploying into an existing VPC can also be ideal for customers with business unit segregation, where the platform owners responsible for OpenShift are a different team from the infrastructure or cloud team.

When deploying ROSA into an existing VPC, the installer will prompt for the subnets to install into. The installer requires six subnets (three public and three private). At this stage the installer simply allows you to select subnets from a list of subnet IDs, so it is helpful to document the subnet IDs being deployed into beforehand, as shown after the diagram below.

image::rosa-arch-existing.png[Rosa existing arch]
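As a quick sketch, assuming the AWS CLI is configured and using a placeholder VPC ID, the candidate subnets can be listed with their AZs and CIDRs before running the installer:

----
# List the subnets of the target VPC (vpc-0abc123 is a placeholder)
aws ec2 describe-subnets \
  --filters Name=vpc-id,Values=vpc-0abc123 \
  --query 'Subnets[].{ID:SubnetId,AZ:AvailabilityZone,CIDR:CidrBlock}' \
  --output table
----

The resulting subnet IDs are what the installer presents for selection, and they can also be passed explicitly with `--subnet-ids`, as in the PrivateLink example later on this page.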
### Public, Private and Private-Link ROSA clusters

There are three implementations to compare: public clusters, private clusters, and private clusters with AWS PrivateLink. Public and private refer to where the application workloads running on OpenShift are accessible from.

Selecting a public cluster creates an internet-facing AWS Classic Load Balancer (https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html) providing access to ports 80 and 443, which can be accessed from the public internet as well as from within the VPC or via peering, AWS Direct Connect, or transit gateway.

Selecting a private cluster creates an internal AWS Classic Load Balancer providing access to ports 80 and 443, which can be accessed from within the VPC or via peering, AWS Direct Connect, or transit gateway. This internal load balancer has the Infrastructure nodes as targets and forwards to the OpenShift router layer. There is no public or internet-facing load balancer, so application workloads cannot be accessed from the internet.

Private clusters will still require public subnets, which in turn require an IGW, a public route table, and a route to the internet via the IGW (https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Networking.html). This is required for the provisioning process to create the public facing AWS Network Load Balancers that provide access to the cluster for administration and management by Red Hat SRE. The only difference between selecting public and private is that the Classic Load Balancer for applications is internet facing for public and internal for private.

In June 2021, ROSA private clusters with AWS PrivateLink were released. ROSA private clusters with PrivateLink are completely private: Red Hat SRE teams make use of PrivateLink endpoints to access the cluster for management, and no public subnets, route tables, or IGW are required.

#### ROSA Private cluster

image::rosa-arch-private.png[Rosa private arch]

#### ROSA Public cluster

image::rosa-arch-multi.png[Rosa multi arch]

#### ROSA Private cluster with Private-Link

----
rosa create cluster --cluster-name rosaprivatelink --multi-az --region us-west-2 \
  --version 4.8.2 --enable-autoscaling --min-replicas 3 --max-replicas 3 \
  --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 --pod-cidr 10.128.0.0/14 \
  --host-prefix 23 --private-link \
  --subnet-ids subnet-00cbba3684292677e,subnet-0f16db5662540af92,subnet-0647e829f3b771f0e
----

image::rosa-create-privatelink.png[Rosa create privatelink]

ROSA PrivateLink clusters can only be deployed into an existing VPC.

image::rosa-arch-privatelink.png[Rosa privatelink arch]

Typically, ROSA private clusters with PrivateLink are implemented as part of a greater transit gateway design, where the VPC for ROSA has no internet access. Traffic flows from the ROSA VPC to either on premises or to another VPC or AWS account that provides a single controlled point of egress.

image::rosa-arch-transitgw.png[Rosa transitgw arch]

#### Connection flow:

*Customer/application consumer connection flow:*

Customers connect to application workloads running on OpenShift over port 80 or 443. Looking at a public ROSA cluster, there is both an internal and an internet-facing Classic Load Balancer exposing these applications. Client connections from the internet resolve to the public facing Classic Load Balancer, which forwards connections to the OpenShift routing layer running on the Infrastructure nodes. Connections coming from within the same VPC, or via VPN, AWS Direct Connect, or transit gateway, come via the internal Classic Load Balancer, which likewise forwards connections to the OpenShift routing layer on the Infrastructure nodes.

*Administrative or SRE connection flow:*

Developers, administrators, and SRE teams follow a different path. These connections use port 6443 and connect to a Network Load Balancer, which connects to the OpenShift API or OpenShift web console. This could be users and SRE members accessing the OpenShift web console for a graphical means of operational administration, or DevOps solutions such as automated pipeline, build, and deploy processes deploying application workloads onto the OpenShift cluster.

If these connections come from within the VPC, or via AWS Direct Connect, peering, VPN, or transit gateway, they hit the internal Network Load Balancer and are forwarded to the API endpoint on one of the OpenShift Master nodes. Connections coming from the internet hit the internet facing Network Load Balancer and are then forwarded to one of the Master nodes.

ROSA is still OpenShift, so the OpenShift CLI `oc` is still used for much of the above administration and automation. The `oc` CLI is an extension of the Kubernetes `kubectl` and includes OpenShift specific abstractions such as Routes, Projects, etc.
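As an example of this flow, the API URL can be retrieved from the cluster details and then used for a CLI login on port 6443; the cluster name, URL, and username below are placeholders:

----
# Show cluster details, including the API URL (cluster name is a placeholder)
rosa describe cluster --cluster my-rosa-cluster

# Log in against the API endpoint on port 6443 (URL and user are illustrative)
oc login https://api.my-rosa-cluster.ab12.p1.openshiftapps.com:6443 --username myuser
----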
#### Load balancers for AWS services:

Customers wanting to expose the workloads running on OpenShift to other AWS accounts and VPCs within their organization via AWS PrivateLink (https://aws.amazon.com/privatelink) will need to replace the Classic Load Balancer with a Network Load Balancer. This is not supported via the ROSA CLI at this stage and requires manual creation of the NLB and changes to the OpenShift cluster egress. This does not hinder admin or SRE access to the cluster for administration, nor does it hinder deletion of ROSA clusters via the ROSA CLI.

Similarly, customers looking to make use of AWS Web Application Firewall (https://aws.amazon.com/waf/) as a security solution in combination with OpenShift application workloads will need to implement an additional Application Load Balancer in front of the Classic Load Balancer as a target for AWS WAF.

### Shared VPC

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-sharing.html

At this stage ROSA does not support deployment into a shared VPC.

### Multi Region

The ROSA provisioning process, like most AWS products and services, does not provide a multi region deployment. Customers seeking multi region availability will need to deploy separate clusters in each region. CI/CD pipelines and automation will need to be updated to deploy to the respective clusters, and DNS name resolution will be used to resolve application URLs to the respective clusters and to control failover. It is recommended that Amazon Route 53 form part of this design.
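As a minimal sketch of that Route 53 recommendation, a failover routing policy between two clusters could look like the following; the hosted zone ID, record name, health check ID, and load balancer DNS names are all placeholders:

----
cat > failover.json <<'EOF'
{
  "Comment": "Failover between two ROSA clusters; all values are placeholders",
  "Changes": [
    { "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com", "Type": "CNAME", "TTL": 60,
        "SetIdentifier": "rosa-primary", "Failover": "PRIMARY",
        "HealthCheckId": "<primary-health-check-id>",
        "ResourceRecords": [ { "Value": "<router-elb-dns-of-primary-cluster>" } ] } },
    { "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com", "Type": "CNAME", "TTL": 60,
        "SetIdentifier": "rosa-secondary", "Failover": "SECONDARY",
        "ResourceRecords": [ { "Value": "<router-elb-dns-of-secondary-cluster>" } ] } }
  ]
}
EOF

aws route53 change-resource-record-sets \
  --hosted-zone-id <hosted-zone-id> \
  --change-batch file://failover.json
----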