What is a Node Pool and Why is it Required?
A node pool defines properties for a set of worker nodes in a Tanzu Kubernetes cluster. It enables a single workload cluster to contain and manage different types of nodes, supporting the different resource requirements of your applications. In this post, I will discuss node pools for Tanzu Kubernetes clusters running on a VMware vSphere environment.
Node pools are supported from TKG 1.4 onwards. When you deploy a workload cluster with TKG 1.4, it creates one node pool by default. You can validate that by running the following command.
# List the node pools in a workload cluster
# Syntax: tanzu cluster node-pool list <cluster-name>
$ tanzu cluster node-pool list tkg-wkld01
NAME             NAMESPACE  PHASE    REPLICAS  READY  UPDATED  UNAVAILABLE
tkg-wkld01-md-0  default    Running  3         3      3        0
Prerequisites for creating a new node pool
Create a node pool definition file
Below is a sample definition file for creating a new node pool. You can modify it to suit your environment and requirements.
name: tkg-wkld02
replicas: 2
labels:
  key1: wkld02
  key2: app-workload
vsphere:
  memoryMiB: 4196
  diskGiB: 40
  numCPUs: 4
  datacenter: datacenter
  datastore: wkld02
  storagePolicyName: tkg-policy
  folder: tkg-folder
  resourcePool: tkg-pool
  network: net-seg2
Let’s understand the parameters:
name: the name of the node pool
replicas: the number of nodes to be created
labels: labels applied to the nodes, useful for identifying them and, later, for use in selectors
vsphere: vSphere-specific parameters, e.g. the size of the nodes to be created, the datacenter name, etc.
Note: If you don’t specify these vSphere settings, they will be inherited from the first node pool that was created at installation time.
There are more parameters that you can specify within the vsphere section; you can find them here.
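Assuming the labels defined on the node pool (key1: wkld02, key2: app-workload in the sample file above) are applied to its nodes, you can later steer pods onto that pool with a nodeSelector. A minimal sketch; the pod name and image are illustrative, not from the original post:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-on-wkld02      # illustrative name
spec:
  nodeSelector:
    key1: wkld02            # matches the labels from the node pool definition above
    key2: app-workload
  containers:
  - name: app
    image: nginx            # illustrative image
```

The scheduler will only place this pod on nodes carrying both labels, i.e. nodes belonging to the new pool.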
Creating a node pool
Run the following command to create the node pool.
# Add the new node pool
$ tanzu cluster node-pool set tkg-wkld01 -f nodepool

# List the node pools in the cluster
$ tanzu cluster node-pool list tkg-wkld01
NAME             NAMESPACE  PHASE      REPLICAS  READY  UPDATED  UNAVAILABLE
tkg-wkld01-md-0  default    Running    3         3      3        0
tkg-wkld02       default    ScalingUp  2         0      2        2
You can see the progress on the vCenter console, and you can keep monitoring the node pool status with the same list command.
You can also run the following command to see the node status.
# Below are the newly added nodes in the cluster
$ kubectl get nodes
tkg-wkld02-78dfd5865c-8j2rb   Ready   6m22s   v1.21.2+vmware.1
tkg-wkld02-78dfd5865c-l2q9l   Ready   6m23s   v1.21.2+vmware.1
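If you script this, you may want to wait until all replicas report Ready. A minimal sketch that counts Ready nodes; it is shown here against the sample output above, but on a live cluster you would pipe `kubectl get nodes --no-headers` instead of the embedded sample text:

```shell
# Sample output from `kubectl get nodes` (taken from the cluster above)
nodes='tkg-wkld02-78dfd5865c-8j2rb Ready 6m22s v1.21.2+vmware.1
tkg-wkld02-78dfd5865c-l2q9l Ready 6m23s v1.21.2+vmware.1'

# Count lines whose STATUS column reads Ready
ready=$(printf '%s\n' "$nodes" | awk '$2 == "Ready" { n++ } END { print n + 0 }')
echo "Ready nodes: $ready"   # → Ready nodes: 2
```

Once the count matches the replicas value in your node pool definition, the pool is fully up.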
Deleting a node pool
Now I will delete the node pool that was just created. It is very simple: run the below command to delete it.
# Delete the node pool
$ tanzu cluster node-pool delete tkg-wkld01 --name tkg-wkld02
- --name is the name of the node pool object to delete
Validate the node status with:
# Validate that the nodes are deleted from the list
$ kubectl get nodes
Updating a node pool
Updating an existing node pool is simple and straightforward. For example, say you want to change the number of nodes in the pool to 3; update the definition file as follows.
name: tkg-wkld01
replicas: 3
labels:
  key1: wkld01
  key2: app-workload
vsphere:
Run the following command
# Update the existing node pool to change the number of nodes
$ tanzu cluster node-pool set tkg-wkld01 -f nodepool
Say you want to update the memory of the nodes to 4 GiB (4096 MiB):

name: tkg-wkld01
replicas: 3
labels:
  key1: wkld01
  key2: app-workload
vsphere:
  memoryMiB: 4096
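Note that the memoryMiB field is specified in mebibytes, not gigabytes, so the value for a whole number of GiB is a multiple of 1024. A quick conversion:

```shell
# Convert GiB to MiB for the memoryMiB field (1 GiB = 1024 MiB)
gib=4
mib=$((gib * 1024))
echo "memoryMiB: $mib"   # → memoryMiB: 4096
```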
Run the following command
# Update the existing node pool to change the node memory
$ tanzu cluster node-pool set tkg-wkld01 -f nodepool
You will notice that new worker nodes are created one by one while the old ones are deleted, i.e. a rolling update. The whole process runs in the background, and you can monitor it in vCenter.
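Because the old nodes are drained and deleted as the new ones come up, pods running on them get evicted. If your application cannot tolerate losing too many replicas at once, a standard Kubernetes PodDisruptionBudget limits how many of its pods a node drain may evict concurrently. A minimal sketch; the name, label, and budget are illustrative, not from the original post:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-workload-pdb     # illustrative name
spec:
  minAvailable: 1            # keep at least one replica running during node drains
  selector:
    matchLabels:
      app: app-workload      # illustrative label; match your deployment's pod labels
```

With this in place, the rolling node replacement waits for evicted pods to be rescheduled before draining further nodes that would violate the budget.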