Kubernetes-as-a-Service with VMware Tanzu Basic and VMware Cloud Director

This reference architecture details how a cloud provider can deploy VMware Cloud Director with VMware Tanzu Basic on-premises. All of the networking information depicted is provided as a generic example; providers can customize the design for their desired outcome.


The focus of this reference architecture is on:
  • Ingress/egress for Tanzu Kubernetes Grid (TKG) clusters
  • VRF reference design with vSphere with Tanzu and VMware Cloud Director
  • Network isolation of Kubernetes clusters within the customer organization

Network terminology:
  1. Supervisor Cluster: In a vSphere with Tanzu environment, the Supervisor Cluster, a group of three control plane VMs, provides connectivity to the Kubernetes control plane VMs, services, and workloads. The Supervisor Cluster uses an NSX Edge load balancer to provide connectivity to external services.
  2. Workload Network: The workload network configuration consists of a Pod CIDR, Service CIDR, Ingress CIDR, and Egress CIDR. The Pod CIDR and Service CIDR ranges must not overlap with the Ingress/Egress CIDR ranges (see the validation sketch after this list). The provider administrator provisions these values while creating the Supervisor Cluster.
  3. Management Network: The Supervisor Cluster VMs use a management network to communicate with each other. VMware Cloud Director connects to the Supervisor Cluster's management network endpoint over SSL.
  4. VCD External Network: The external network consists of a pool of public IP addresses.
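As a quick sanity check for item 2, the sketch below uses Python's standard ipaddress module to verify that the Pod and Service CIDRs do not overlap the Ingress/Egress CIDRs. The example ranges are illustrative, not prescribed values.

    import ipaddress

    # Example workload network values (illustrative only).
    pod_cidr     = ipaddress.ip_network("10.244.0.0/20")
    service_cidr = ipaddress.ip_network("10.96.0.0/23")
    ingress_cidr = ipaddress.ip_network("192.168.100.0/24")
    egress_cidr  = ipaddress.ip_network("192.168.101.0/24")

    # Pod/Service ranges must not overlap the Ingress/Egress ranges.
    for internal in (pod_cidr, service_cidr):
        for external in (ingress_cidr, egress_cidr):
            if internal.overlaps(external):
                raise ValueError(f"{internal} overlaps {external}")
    print("Workload network CIDRs do not overlap")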


VRF support in VCD with Tanzu Basic 

There are two Edge clusters. Edge Cluster 1 connects to the provider's core router via a Provider Tier-0 (the Internet Tier-0); this Edge cluster hosts the Internet Tier-0 router along with all Tier-1 routers managed by VCD and TKG. Edge Cluster 2 has a Tier-0 gateway that hosts a VRF for each tenant (note that this Tier-0 is different from the Internet Tier-0, and its primary purpose is to provide a VRF per tenant organization). This design isolates the VRFs and their connection to the core router. To onboard a new customer, the provider creates a new VRF/Tier-0 gateway, and with this design the tenant's VM workloads can connect to the MPLS endpoint from the customer's VRF domain. The Tier-0 gateways of both Edge clusters peer with the upstream routers via BGP. Each Edge cluster can host up to ten Edge nodes (eight Edge nodes for Active/Active ECMP on the Tier-0). Each Tier-0 gateway from Edge Cluster 1 and each VRF should be configured in Active/Active mode and can span up to eight Edge nodes for ECMP. The proposed design supports both bare-metal Edge nodes and virtual Edge nodes on vSphere infrastructure.
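To make the per-tenant VRF onboarding step concrete, here is a minimal sketch that creates a VRF-backed Tier-0 gateway through the NSX-T Policy API using Python's requests library. The NSX Manager hostname, credentials, and object IDs are hypothetical placeholders, and a production workflow would add error handling and proper certificate validation.

    import requests

    NSX_MANAGER = "https://nsx.provider.example"  # hypothetical hostname
    AUTH = ("admin", "REPLACE_ME")                # hypothetical credentials
    TENANT = "tenant-a"

    # Create a VRF Tier-0 linked to the parent Tier-0 on Edge Cluster 2.
    vrf_tier0 = {
        "display_name": f"vrf-{TENANT}",
        "ha_mode": "ACTIVE_ACTIVE",  # Active/Active for ECMP
        "vrf_config": {
            # Parent Tier-0 that carries all tenant VRFs (hypothetical ID).
            "tier0_path": "/infra/tier-0s/tenant-vrf-parent-t0",
        },
    }
    resp = requests.patch(
        f"{NSX_MANAGER}/policy/api/v1/infra/tier-0s/vrf-{TENANT}",
        json=vrf_tier0,
        auth=AUTH,
        verify=False,  # lab only; validate certificates in production
    )
    resp.raise_for_status()
    print(f"VRF gateway vrf-{TENANT} created/updated")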


Ingress/Egress configuration for Tanzu Kubernetes Clusters

The proposed design allows the provider to configure the Ingress and Egress CIDRs for the Supervisor Cluster from a private IP address range. This approach eliminates the requirement for the provider to dedicate and manage public IP addresses for each tenant's Kubernetes clusters.
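As an illustration of that point, the snippet below checks that candidate Ingress/Egress CIDRs fall inside private address space using Python's standard ipaddress module; the ranges shown are examples, not required values.

    import ipaddress

    # Candidate Ingress/Egress CIDRs for a Supervisor Cluster (examples).
    candidates = {
        "ingress": ipaddress.ip_network("172.24.4.0/24"),
        "egress": ipaddress.ip_network("172.24.8.0/24"),
    }

    for role, cidr in candidates.items():
        # is_private is True for RFC 1918 ranges (10/8, 172.16/12,
        # 192.168/16) as well as other reserved space.
        status = "private" if cidr.is_private else "PUBLIC"
        print(f"{role}: {cidr} is {status}")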


Network Isolation and accessing Kubernetes clusters  

A SNAT rule (using an address from the Egress CIDR) provides outbound access for the TKG clusters within the customer organization, while inbound traffic is tightly controlled: the Gateway Firewall rule blocks all inbound traffic to the TKG clusters except traffic from the Supervisor Cluster, the Edge Gateway of the customer's Kubernetes policy (the vSphere Namespace), and the VCD-managed Tier-1 gateways. Customers can use a jump host to access and manage the TKG clusters with kubectl commands. The customer admin or DevOps user can download the kubeconfig file from the VCD UI and provide it to the desired users to connect to the TKG cluster.
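As an example of that last step, this minimal sketch uses the official Kubernetes Python client to load a kubeconfig downloaded from the VCD UI and list the TKG cluster's nodes from the jump host; the file path is a hypothetical placeholder.

    # pip install kubernetes
    import os
    from kubernetes import client, config

    # Load the kubeconfig downloaded from the VCD UI (path is illustrative).
    kubeconfig = os.path.expanduser("~/Downloads/tkg-cluster-kubeconfig.txt")
    config.load_kube_config(config_file=kubeconfig)

    # Simple connectivity check: list the TKG cluster's nodes.
    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        print(node.metadata.name)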
