Kubernetes-as-a-Service in Cloud Director 9.7 – Reference Architecture
In this reference architecture, we detail the components required for a service provider to build a Kubernetes-as-a-Service offering on top of Cloud Director managed environments. The Container Service Extension allows cloud consumers to create Kubernetes clusters within their OrgVDCs with a single command.
Container Service Extension
Container Service Extension (CSE) is a VMware Cloud Director extension that helps tenants create, lifecycle manage, and interact with Kubernetes clusters in Cloud Director managed environments.
There are currently two versions of CSE: Standard and Enterprise. CSE Standard brings Kubernetes-as-a-service to vCD by creating customized VM templates and enabling tenant/organization administrators to deploy fully functional Kubernetes clusters in self-contained vApps. CSE Standard cluster creation can be enabled on existing NSX-V backed OrgVDCs in a tenant’s environment. With the release of CSE Enterprise in the CSE 2.0 release, VMware has also added the ability for tenants to provision VMware Enterprise PKS Kubernetes clusters backed by NSX-T resources in Cloud Director managed environments.
Tenant admins can use the existing identity management solution within vCD, along with the RBAC functionality provided by the CSE server, to assign custom permissions that allow cloud consumers to provision CSE Standard clusters, CSE Enterprise clusters, or both within their OrgVDCs.
CSE Standard Overview
CSE Standard Kubernetes clusters are deployed from vApp templates that the CSE server creates automatically during installation. These templates consist of Photon OS or Ubuntu-based VMs along with post-provisioning scripts that use kubeadm to automate the installation and configuration of the Kubernetes components into a functioning cluster.
Users provide a cluster name, OrgVDC external network, and quantity of worker nodes via the cluster create command, and the CSE server automates the deployment of the Kubernetes cluster as a vApp in the user’s OrgVDC. CSE Standard clusters utilize Weave as the Container Network Interface (CNI) and allow for the static provisioning of persistent storage via an NFS share, which CSE can automatically add to the vApp.
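As an illustrative sketch of this tenant workflow (the host, organization, user, cluster, and network names are placeholders, and exact flags vary by CSE release), cluster creation from vcd-cli might look like:

```shell
# Log in to the tenant organization in vCD (placeholder host/org/user)
vcd login cloud.example.com myorg k8s-user --vdc myovdc

# Create a 2-worker CSE Standard cluster attached to an OrgVDC network;
# --enable-nfs also deploys an NFS node in the vApp for static
# persistent volume provisioning
vcd cse cluster create dev-cluster --network orgvdc-net1 --nodes 2 --enable-nfs
```

The CSE server receives this request through vCD and builds the cluster vApp from the pre-built templates described above.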
CSE Enterprise Overview
CSE Enterprise introduces the ability of the CSE server to utilize a service account to communicate directly with the PKS and NSX-T APIs to automate the deployment of Enterprise PKS Kubernetes clusters and their supporting networking resources in a vCD managed environment.
Enterprise PKS brings enterprise-grade features and functionality beyond what is provided with CSE Standard Kubernetes clusters. This includes, but is not limited to:
- HA, multi-master Kubernetes clusters
- Dynamic persistent storage provisioning with the vSphere Cloud Provider integration
- Automated Day 1 and Day 2 Kubernetes cluster management via Bosh Director
- Microsegmentation capability for Kubernetes resources via integration with NSX-T
- Automated creation of Kubernetes service type LoadBalancer and ingress resources via NSX-T L4/L7 load balancers
- Support for Harbor, an open source cloud native registry
Please refer to our Enterprise PKS reference architectures for additional details around deployment architecture for VMware Enterprise PKS.
The vCD Cloud Admin is responsible for building out the vSphere infrastructure to support both CSE Standard and Enterprise cluster creation. CSE Standard cluster creation can be enabled on existing, NSX-V backed OrgVDCs. CSE Standard Kubernetes clusters can run alongside existing vApps in the OrgVDC.
Enabling CSE Enterprise cluster creation requires the Cloud Admin to create a separate vSphere cluster with hosts configured as NSX-T transport nodes. The Cloud Admin also installs the Enterprise PKS control plane, which includes OpsMan, Bosh Director, the PKS API server, and the Harbor Container Registry, in the NSX-T backed vSphere environment and creates a new Provider Virtual Data Center (PvDC) that will be used to create OrgVDCs to support CSE Enterprise Kubernetes cluster creation. For more information on installing Enterprise PKS backed by NSX-T, please refer to the official documentation.
Finally, the Cloud Admin installs the CSE server on a suitable host in the vCD management environment. For more details on the CSE Server installation, please refer to the official documentation. The CSE server passes the Kubernetes cluster creation commands along to vCD for CSE Standard cluster creation. The CSE server passes cluster creation commands directly to the PKS and NSX-T API for CSE Enterprise cluster creation.
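A minimal sketch of the CSE server installation flow follows (the config file name is a placeholder; see the official CSE documentation for the full configuration format and prerequisites):

```shell
# Install the CSE package, which provides both the server and the
# vcd-cli client extension
pip install container-service-extension

# Validate the configuration, register CSE as a vCD API extension, and
# build the Kubernetes vApp templates in vCD
cse install -c config.yaml

# Start the CSE server, which listens for cluster commands relayed by vCD
cse run -c config.yaml
```

Once the server is running, tenant-side `vcd cse` commands are routed to it through vCD.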
Tenant IaaS Admins are responsible for enabling OrgVDCs for a specific Kubernetes provider: CSE Standard, CSE Enterprise, or none. IaaS Admins are also responsible for granting the rights that allow individual end users to provision Kubernetes clusters within their organization.
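For example (OrgVDC names, plan, and domain are illustrative, and flag syntax varies across CSE releases), an admin might tag OrgVDCs with a Kubernetes provider as follows:

```shell
# Enable an NSX-V backed OrgVDC for native (CSE Standard) clusters
vcd cse ovdc enable myovdc --k8s-provider native

# Enable an NSX-T backed OrgVDC for Enterprise PKS clusters, associating
# it with a PKS plan and cluster domain (placeholder values)
vcd cse ovdc enable pks-ovdc --k8s-provider ent-pks \
    --pks-plan "small" --pks-cluster-domain "example.com"
```

Subsequent cluster create requests in each OrgVDC are then routed to its assigned provider.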
Org Admins and/or developers (depending on RBAC rules) can then provision Kubernetes clusters via their assigned Kubernetes provider with a single vcd-cli command provided by CSE. Upon completion of cluster creation, users utilize vcd-cli to pull down the cluster configuration files required to access their Kubernetes clusters with native Kubernetes management tools such as kubectl. Alternatively, IaaS Admins can hand the Kubernetes config files over to developers directly; developers do not require vCD access in this scenario.
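The hand-off from vcd-cli to native Kubernetes tooling might look like the following sketch (the cluster name is a placeholder):

```shell
# Retrieve the kubeconfig for a cluster and save it locally
vcd cse cluster config dev-cluster > ~/.kube/config

# From here, standard Kubernetes tooling works against the cluster
kubectl get nodes
```

This same kubeconfig file is what an IaaS Admin would hand to developers who do not have vCD access.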
Please refer to the Developer-Ready Cloud service page for more resources on how to design a cloud platform built for developers and devops.