When we set up TKGm on Azure, by default it creates a “storageclass” for AzureDisk-based persistent volumes. AzureDisk supports the ReadWriteOnce (RWO) access mode but not ReadWriteMany (RWX). So, in order to have RWX-mode persistent volumes, we can use AzureFile instead. There are different ways to create PVs using AzureFile; in this post, I will help you do it using the AzureFile CSI driver. So, let’s get started.

Prerequisites

- TKG workload cluster is running on Azure
Setting up Azure File CSI driver
- Run the command below to set up the CSI driver:

```shell
curl -skSL https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/v1.17.0/deploy/install-driver.sh | bash -s v1.17.0 --
```
- You can also find other methods for setting up the CSI driver here
- Once the CSI driver is installed on your TKG workload cluster, validate it by running the following command.

```shell
$ k get po -n kube-system | grep -i csi
csi-azurefile-controller-7b85fcfb5b-8hg48   6/6     Running   0          17h
csi-azurefile-controller-7b85fcfb5b-tf7zk   6/6     Running   0          17h
csi-azurefile-node-dttzk                    3/3     Running   0          17h
csi-azurefile-node-jcml6                    3/3     Running   0          17h
```
- You can see above that the CSI pods are up and running. The CSI driver setup is complete.
Creating Storage Class
Since we want to create AzureFile-based persistent volumes on demand, we will create a new storage class that uses the AzureFile CSI provisioner.

- Create a yaml file with the following content for the storage class:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-azurefile
provisioner: file.csi.azure.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
mountOptions:
  - dir_mode=0640
  - file_mode=0640
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict  # https://linux.die.net/man/8/mount.cifs
  - nosharesock
parameters:
  skuName: Standard_LRS
```
- Apply the above yaml file on your TKG workload cluster
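The apply step can be done with kubectl; here is a minimal sketch, assuming the storage class manifest above was saved as my-azurefile-sc.yaml (the filename is my assumption):

```shell
# Apply the storage class manifest (filename assumed)
kubectl apply -f my-azurefile-sc.yaml

# Confirm the new storage class exists
kubectl get sc my-azurefile
```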
- You will notice that there are now two storage classes available: one created during the TKG workload cluster installation, and one we have just created.

```shell
$ k get sc
NAME                PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
default (default)   kubernetes.io/azure-disk   Delete          WaitForFirstConsumer   true                   18h
my-azurefile        file.csi.azure.com         Delete          Immediate              true                   16h
```
- We will be using the my-azurefile storage class to create our PVC.
- Now, let’s create a PVC with the following content.
Creating PVC & PV
Save the following yaml content for creating the PVC.

```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azurefile
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: my-azurefile
```
- Apply the yaml file to create the PVC. It will create a PV automatically.
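A minimal sketch of the apply-and-watch step, assuming the PVC manifest was saved as pvc-azurefile.yaml (the filename is my assumption):

```shell
# Create the PVC; the CSI driver provisions the backing PV automatically
kubectl apply -f pvc-azurefile.yaml

# Watch until the PVC reports STATUS=Bound
kubectl get pvc pvc-azurefile -w
```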
- Validate it by running the following command:

```shell
$ k get pvc,pv
NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pvc-azurefile   Bound    pvc-21e33e1c-f141-4615-a531-0c1ce173f284   10Gi       RWX            my-azurefile   23m

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
persistentvolume/pvc-21e33e1c-f141-4615-a531-0c1ce173f284   10Gi       RWX            Delete           Bound    default/pvc-azurefile   my-azurefile            22m
```
- In my case, I ran into an RBAC-related issue and faced the following error during PVC creation.

```
Warning  ProvisioningFailed  3s  persistentvolume-controller  Failed to provision volume with StorageClass "my-azurefile": couldn't create secret secrets is forbidden: User "system:serviceaccount:kube-system:persistent-volume-binder" cannot create resource "secrets" in API group "" in the namespace "default"
```
- To resolve the above error, I created the following ClusterRole and ClusterRoleBinding.

```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:azure-cloud-provider
rules:
  - apiGroups: ['']
    resources: ['secrets']
    verbs: ['get', 'create']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:azure-cloud-provider
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: system:azure-cloud-provider
subjects:
  - kind: ServiceAccount
    name: persistent-volume-binder
    namespace: kube-system
```
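A sketch of applying the RBAC fix and re-checking the PVC, assuming the manifest above was saved as azure-cloud-provider-rbac.yaml (the filename is my assumption):

```shell
# Apply the ClusterRole and ClusterRoleBinding
kubectl apply -f azure-cloud-provider-rbac.yaml

# Re-check the PVC events to confirm provisioning now succeeds
kubectl describe pvc pvc-azurefile
```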
- Once the PVC and PV are created, you will notice that a storage account and file share have been created on Azure as well. Log in to your Azure portal and validate the same.
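If you prefer the CLI over the portal, here is a sketch using the az CLI; the resource group and storage account names below are placeholders you would substitute with your own:

```shell
# List storage accounts in the cluster's node resource group (name is a placeholder)
az storage account list --resource-group <node-resource-group> --query "[].name" -o table

# List file shares in the provisioned storage account (account name is a placeholder)
az storage share list --account-name <storage-account-name> -o table
```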
Deploy a Pod
Now that our PVC is ready for use, let’s go ahead and deploy a pod that uses the above PVC. You can use the following definition file for testing. Apply the pod definition file on your TKG workload cluster.

```yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: nginx-azurefile
spec:
  nodeSelector:
    "kubernetes.io/os": linux
  containers:
    - image: mcr.microsoft.com/oss/nginx/nginx:1.19.5
      name: nginx-azurefile
      command:
        - "/bin/bash"
        - "-c"
        - set -euo pipefail; while true; do echo $(date) >> /mnt/azurefile/outfile; sleep 1; done
      volumeMounts:
        - name: persistent-storage
          mountPath: "/mnt/azurefile"
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: pvc-azurefile
```
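A sketch of deploying the pod and waiting for it to come up, assuming the manifest was saved as nginx-azurefile-pod.yaml (the filename is my assumption):

```shell
# Deploy the pod
kubectl apply -f nginx-azurefile-pod.yaml

# Wait until the pod is Ready (times out after 2 minutes)
kubectl wait --for=condition=Ready pod/nginx-azurefile --timeout=120s
```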
You will see the pod is running.

```shell
$ k get po
NAME              READY   STATUS    RESTARTS   AGE
nginx-azurefile   1/1     Running   0          27m
```
Let’s get inside the above pod and create a file on the volume attached to it. Run the command below to log in to the pod.

```shell
$ k exec -it nginx-azurefile -- bash
```

Run the df -kh command to see the attached volume.
- cd to /mnt/azurefile, as this is where our volume is mounted.

```shell
root@nginx-azurefile:/# cd /mnt/azurefile
root@nginx-azurefile:/mnt/azurefile#
```
- Create a file and a test folder.

```shell
root@nginx-azurefile:/mnt/azurefile# touch testfile
root@nginx-azurefile:/mnt/azurefile# mkdir testfolder
root@nginx-azurefile:/mnt/azurefile# ls -ltr
total 50
-rw-r----- 1 root root 0 May  9 06:49 testfile
drw-r----- 2 root root 0 May  9 06:49 testfolder
```
- Log back in to the Azure portal and verify that you can see the above file and folder on the file share.
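This check can also be done from the CLI instead of the portal; a sketch with placeholder names you would substitute with your own:

```shell
# List the files and folders on the share (share and account names are placeholders)
az storage file list --share-name <file-share-name> --account-name <storage-account-name> -o table
```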
Great, so we have successfully created an AzureFile-share-based volume and attached it to a pod. Any data written from the pod is also visible on the Azure file share. If you want to create another pod and attach the same volume to validate the RWX behaviour, you can use the following yaml for deploying the pod.

In the yaml file below, I have just changed the pod name and deployed it.
```yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: nginx-azurefile-1
spec:
  nodeSelector:
    "kubernetes.io/os": linux
  containers:
    - image: mcr.microsoft.com/oss/nginx/nginx:1.19.5
      name: nginx-azurefile
      command:
        - "/bin/bash"
        - "-c"
        - set -euo pipefail; while true; do echo $(date) >> /mnt/azurefile/outfile; sleep 1; done
      volumeMounts:
        - name: persistent-storage
          mountPath: "/mnt/azurefile"
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: pvc-azurefile
```
After deploying the pod using the above yaml file, log in to it and validate the content of the /mnt/azurefile directory. You can use the commands below.
```shell
$ k exec -it nginx-azurefile-1 -- sh
# cd /mnt/
# ls
azurefile
# cd azurefile
# ls
outfile  testfile  testfolder
```
Great, so we have validated the RWX behaviour too.
You can also refer to the following document from Microsoft about the Azure File CSI driver