vSphere with Tanzu has received an exciting upgrade with the release of vSphere 8.0 Update 1. This update removes the restriction that previously required NSX-based networking to deploy Supervisor Services. Now, customers with only a VDS-based Supervisor can also benefit from the various Supervisor Services that vSphere with Tanzu supports!
For those unfamiliar, Supervisor Services are deployed as vSphere Pods, which are tiny VMs that boot up a Photon OS kernel and are configured with just enough resources to run one or more Linux containers. In earlier releases, vSphere Pods required an NSX-based Supervisor. With this restriction removed in vSphere 8.0 Update 1, it seems deploying vSphere Pods should also be possible with just a VDS-based Supervisor.
I attempted to deploy a container to my Supervisor Cluster in the hopes of deploying the workload as a vSphere Pod. However, it immediately returned with an error stating:
Error from server (workload management cluster uses vSphere Networking, which does not support action on kind Deployment): error when creating "deployment.yaml": admission webhook "default.validating.license.supervisor.vmware.com" denied the request: workload management cluster uses vSphere Networking, which does not support action on kind Deployment.
Based on the message, it appears that deploying vSphere Pods with a VDS-based Supervisor is not a technical limitation; it is simply only supported when using an NSX-based Supervisor.
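For reference, this is the kind of request that gets rejected when you are connected through the normal vCenter SSO workflow; the namespace name and manifest file below are just placeholders:

# Any standard Deployment manifest will do; against a VDS-based Supervisor the apply
# is denied by the default.validating.license.supervisor.vmware.com admission webhook
kubectl -n primp-industries apply -f deployment.yaml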
I was still curious about how this was actually being enforced. When you log into the Supervisor Cluster, you have limited privileges, so I decided to log into the Supervisor Cluster via vCenter Server by going through the troubleshooting workflow, which puts you on one of the Supervisor Control Plane VMs.
I generally prefer to use kubectl from my local desktop for ease of access and the nicely colored console output, so I figured, why not just copy the .kube/config file from the Supervisor Control Plane VM to my desktop and access it that way? Initially, nothing stood out to me about how the requests were being intercepted, and I was about to call it a day. Then I thought, since I now have an admin context to the Supervisor Cluster, perhaps something different might happen if I attempted to deploy the container again.
To my complete surprise, it worked, and it successfully deployed a vSphere Pod into the vSphere Namespace that I had created earlier! The screenshot above shows a vSphere Pod running on a VDS-based Supervisor Cluster, using the WordPress vSphere Pod example from the VMware documentation.
Disclaimer: This is not officially supported by VMware; utilize it at your own risk.
From an education and exploration point of view, I think this can be super useful, especially if you need to run a handful of containers without having to spin up a full Tanzu Kubernetes Grid (TKG) Workload Cluster! For example, I recently saw that we had released a new vSAN Object Viewer Fling, which is delivered as a Docker container. Great! We can easily take that container and deploy it into Kubernetes, specifically running it as a vSphere Pod, as I have done by creating a basic YAML manifest example, shared in the tweet below.
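The exact manifest is in the tweet, but the general shape is just a standard Kubernetes Deployment. Here is a rough sketch; the namespace, image reference, and port are placeholders rather than the real Fling values:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: vsan-object-viewer
  namespace: primp-industries
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vsan-object-viewer
  template:
    metadata:
      labels:
        app: vsan-object-viewer
    spec:
      containers:
      - name: vsan-object-viewer
        # Placeholder image reference; substitute the actual image published with the Fling
        image: your-registry/vsan-object-viewer:latest
        ports:
        - containerPort: 8080 # placeholder port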
If you want to explore vSphere Pods with a VDS-based Supervisor Cluster using HAProxy for load balancing, follow these steps:
Step 1. Enable vSphere with Tanzu using HAProxy or NSX-ALB on your vSphere Cluster.
Step 2. Create a vSphere Namespace in the vSphere UI to manage your vSphere Pods.
Step 3. SSH to the VCSA and run /usr/lib/vmware-wcp/decryptK8Pwd.py to get the Supervisor Cluster Control Plane VM details.
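If you have not used this script before, the output looks roughly like this (the cluster ID, IP, and password below are just example values); make note of the IP and PWD values for the next step:

vcsa# /usr/lib/vmware-wcp/decryptK8Pwd.py
Read key from file
Connected to PSQL
Cluster: domain-c8:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
IP: 10.10.0.65
PWD: <generated root password>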
Step 4. SSH to the Control Plane VM using the provided credentials.
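For example, using the values returned by decryptK8Pwd.py (the IP here is the example value from above):

ssh root@10.10.0.65
# when prompted, enter the PWD value returned by decryptK8Pwd.py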
Step 5. Copy the .kube/config contents to your local .kube directory (one way to do this is sketched after the note below).
- Note: If you already have an existing file, it will overwrite the contents. Consider backing up the original file in case you need to revert to your previous configuration.
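One option is to use scp; the commands below assume you logged into the Control Plane VM as root and that 10.10.0.65 is its IP:

# Back up any existing local kubeconfig first (skip if you do not have one), then pull the config from the Control Plane VM
cp ~/.kube/config ~/.kube/config.bak
scp root@10.10.0.65:/root/.kube/config ~/.kube/config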
Step 6. Now we need to modify the .kube/config file for local use. First, remove the certificate-authority-data section and replace it with the insecure-skip-tls-verify flag, as demonstrated below. Second, replace the localhost address (127.0.0.1) with the IP address of your Supervisor Control Plane, so the file resembles the snippet below:
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://10.10.0.65:6443
snip .....
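If you prefer not to edit the file by hand, a quick sed pass can make both changes; this is just a sketch, and 10.10.0.65 should be replaced with your own Supervisor Control Plane IP:

# Swap certificate-authority-data for insecure-skip-tls-verify and point the server at the Supervisor IP
sed -i.bak \
  -e 's/certificate-authority-data:.*/insecure-skip-tls-verify: true/' \
  -e 's/127\.0\.0\.1/10.10.0.65/' \
  ~/.kube/config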
Step 7. Deploy your Kubernetes manifest, specifying the vSphere Namespace with -n. For example:
kubectl -n primp-industries apply -f [your-manifests].yaml
If you're looking for a vSphere Pod workload example to deploy, you can use this WordPress example.
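Once applied, you can confirm the vSphere Pods came up from the same kubectl context (the namespace here is just my example):

kubectl -n primp-industries get deployments,pods -o wide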