K8S StorageClass Resource in Practice [Supplementary Notes]

Published: 2023-04-15 10:55:55 · Author: 小粉优化大师

1、Preparations

1.1、Official documentation

Supported provisioners: https://kubernetes.io/zh-cn/docs/concepts/storage/storage-classes/#provisioner

NFS provisioner: https://kubernetes.io/zh-cn/docs/concepts/storage/storage-classes/#nfs

1.2、The nfs-subdir-external-provisioner project

https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

1.3、Clone the project locally

git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner.git
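
The manifests used in the following steps (rbac.yaml, deployment.yaml, class.yaml, test-claim.yaml, test-pod.yaml) ship in the repository's deploy/ directory, assuming a recent v4.x checkout:

cd nfs-subdir-external-provisioner/deploy
ls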

2、Deploying nfs-subdir-external-provisioner

2.1、Create the RBAC resources

2.1.1、Define the resource manifest

cat > rbac.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF

2.1.2、Apply the resource manifest

deploy]# kubectl apply -f rbac.yaml 
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
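
An optional sanity check that the RBAC objects exist (resource names as created above, default namespace assumed):

kubectl get sa nfs-client-provisioner -n default
kubectl get clusterrole,clusterrolebinding | grep nfs-client-provisioner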

2.2、Prepare the offline image

docker pull registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
docker tag registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 192.168.10.33:80/k8s/sig-storage/nfs-subdir-external-provisioner:v4.0.2
docker push 192.168.10.33:80/k8s/sig-storage/nfs-subdir-external-provisioner:v4.0.2

2.3、Create the Deployment resource

2.3.1、Define the resource manifest

cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: 192.168.10.33:80/k8s/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.10.33
            - name: NFS_PATH
              value: /nfs-data/promtheus_data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.10.33
            path: /nfs-data/promtheus_data
EOF

# Field notes:
# NFS_SERVER and NFS_PATH tell the provisioner which NFS server and export path to provision under;
# they must match the nfs volume definition below. PROVISIONER_NAME identifies this provisioner and
# must be referenced verbatim by the StorageClass's provisioner field later.

2.3.2、Apply the resource manifest

deploy]# kubectl apply -f deployment.yaml 
deployment.apps/nfs-client-provisioner created
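
To confirm the environment variables landed in the live Deployment, kubectl can list them (output roughly as below):

deploy]# kubectl set env deployment/nfs-client-provisioner --list
# Deployment nfs-client-provisioner, container nfs-client-provisioner
PROVISIONER_NAME=k8s-sigs.io/nfs-subdir-external-provisioner
NFS_SERVER=192.168.10.33
NFS_PATH=/nfs-data/promtheus_data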

2.3.3、Check the Deployment status

deploy]# kubectl get pod
NAME                                     READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-f9465f6c4-w8gf5   1/1     Running   0          4m9s

2.3.4、Check the logs

deploy]# kubectl logs nfs-client-provisioner-f9465f6c4-w8gf5 
I0414 14:47:49.501029       1 leaderelection.go:242] attempting to acquire leader lease  default/k8s-sigs.io-nfs-subdir-external-provisioner...
I0414 14:47:49.507138       1 leaderelection.go:252] successfully acquired lease default/k8s-sigs.io-nfs-subdir-external-provisioner
I0414 14:47:49.507374       1 controller.go:820] Starting provisioner controller k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-f9465f6c4-w8gf5_9c86b64e-b49c-4873-b344-c8d6d3a83240!
I0414 14:47:49.508515       1 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"default", Name:"k8s-sigs.io-nfs-subdir-external-provisioner", UID:"21e32682-a95a-4e7b-96e7-c285bd35e110", APIVersion:"v1", ResourceVersion:"40247", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' nfs-client-provisioner-f9465f6c4-w8gf5_9c86b64e-b49c-4873-b344-c8d6d3a83240 became leader
I0414 14:47:49.608224       1 controller.go:869] Started provisioner controller k8s-sigs.io/nfs-subdir-external-provisioner_nfs-client-provisioner-f9465f6c4-w8gf5_9c86b64e-b49c-4873-b344-c8d6d3a83240!

2.4、NFS configuration

2.4.1、exports

]# cat /etc/exports
/nfs-data/promtheus_data *(rw,all_squash,anonuid=1000,anongid=1000)
/nfs-data/alertmanager_data *(rw,all_squash,anonuid=1000,anongid=1000)

2.4.2、Create the user

groupadd -g 1000 nfs
useradd -u 1000 -g 1000 nfs

]# id nfs
uid=1000(nfs) gid=1000(nfs) groups=1000(nfs)

2.4.3、Change directory ownership

# Note: the owner and group must match the anonuid/anongid values in /etc/exports
chown -R nfs:nfs alertmanager_data promtheus_data
]# ll /nfs-data/
drwxr-xr-x 2 nfs  nfs   4096 Apr 14 00:30 alertmanager_data
drwxr-xr-x 3 nfs  nfs   4096 Apr 14 23:14 promtheus_data

2.4.4、Restart the NFS service

systemctl restart nfs
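
Note: after editing /etc/exports, a full restart is usually not required; re-exporting is enough, and the exports can be verified from any client (on some distributions the unit name is nfs-server rather than nfs):

exportfs -ra                  # re-read /etc/exports without a restart
showmount -e 192.168.10.33    # both exported directories should be listed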

2.5、Create the StorageClass resource

2.5.1、Define the resource manifest

cat > class.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-sc
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"
EOF

# The provisioner field must match the PROVISIONER_NAME environment variable set in the Deployment; see the cross-check below.
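
A quick way to verify that the two values match (resource names as defined above; requires a kubectl with jsonpath filter support):

kubectl get sc nfs-sc -o jsonpath='{.provisioner}{"\n"}'
kubectl get deployment nfs-client-provisioner -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="PROVISIONER_NAME")].value}{"\n"}'
# Both commands should print: k8s-sigs.io/nfs-subdir-external-provisioner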

2.5.2、Apply the resource manifest

deploy]# kubectl apply -f class.yaml 
storageclass.storage.k8s.io/nfs-sc created

2.5.3、Query the StorageClass resource

deploy]# kubectl get sc
NAME     PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-sc   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  11s
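
Optional: to make nfs-sc the cluster default, so that PVCs without an explicit storageClassName use it, the standard Kubernetes annotation can be applied:

kubectl annotate storageclass nfs-sc storageclass.kubernetes.io/is-default-class="true"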

3、Creating a PVC Resource (Test)

3.1、Goals

1、Confirm that the PV is created and deleted automatically
2、Confirm whether the data is wiped when the PVC is deleted

3.2、Create the PVC resource

3.2.1、Define the resource manifest

cat > test-claim.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF

3.2.2、Apply the resource manifest

deploy]# kubectl apply -f test-claim.yaml 
persistentvolumeclaim/test-claim created
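
The PVC itself should report Bound almost immediately (output along these lines):

deploy]# kubectl get pvc test-claim
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim   Bound    pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177   1Mi        RWX            nfs-sc         10s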

3.3、Analyze the result

3.3.1、Check the nfs-client-provisioner logs

~]# kubectl logs nfs-client-provisioner-f9465f6c4-w8gf5
I0414 15:28:31.580103       1 controller.go:1317] provision "default/test-claim" class "nfs-sc": started
I0414 15:28:31.582795       1 event.go:278] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"test-claim", UID:"cac9523c-e952-46c6-9c9d-2fbbd45a0177", APIVersion:"v1", ResourceVersion:"45275", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/test-claim"
I0414 15:28:31.587202       1 controller.go:1420] provision "default/test-claim" class "nfs-sc": volume "pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177" provisioned
I0414 15:28:31.587257       1 controller.go:1437] provision "default/test-claim" class "nfs-sc": succeeded
I0414 15:28:31.587262       1 volume_store.go:212] Trying to save persistentvolume "pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177"
I0414 15:28:31.590381       1 volume_store.go:219] persistentvolume "pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177" saved
I0414 15:28:31.590614       1 event.go:278] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"test-claim", UID:"cac9523c-e952-46c6-9c9d-2fbbd45a0177", APIVersion:"v1", ResourceVersion:"45275", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177

# Note: the PV was created automatically

3.3.2、Query the PV resource

~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177   1Mi        RWX            Delete           Bound    default/test-claim   nfs-sc                  2m38s

3.3.3、Check the NFS directory

nfs-data]# tree 
.
├── alertmanager_data
├── lost+found
└── promtheus_data
    └── default-test-claim-pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177
# A subdirectory named <namespace>-<pvcName>-<pvName> was created automatically for the new PV

3.4、Delete the PVC and analyze

3.4.1、Delete the PVC

deploy]# kubectl delete -f test-claim.yaml 
persistentvolumeclaim "test-claim" deleted

3.4.2、Check the nfs-client-provisioner logs


I0414 15:35:59.999635       1 controller.go:1450] delete "pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177": started
I0414 15:36:00.006549       1 controller.go:1478] delete "pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177": volume deleted
I0414 15:36:00.010271       1 controller.go:1524] delete "pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177": persistentvolume deleted
I0414 15:36:00.010294       1 controller.go:1526] delete "pvc-cac9523c-e952-46c6-9c9d-2fbbd45a0177": succeeded

# Note: the deletion succeeded
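
The PV object is gone as well (expected output):

~]# kubectl get pv
No resources found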

3.4.3、Check the NFS directory

nfs-data]# tree 
.
├── alertmanager_data
├── lost+found
└── promtheus_data

# The PV's subdirectory has been removed

4、Creating a Pod (Test)

4.1、Goals

1、Verify that the pod can mount the volume and write data
2、The PVC must be created again first; see section 3.2.1 for the manifest, then apply it as shown below
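
Recreate the PVC with the manifest from section 3.2.1:

deploy]# kubectl apply -f test-claim.yaml
persistentvolumeclaim/test-claim created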

4.2、Create the Pod

4.2.1、Define the resource manifest

cat > test-pod.yaml <<EOF
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:stable
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
EOF

4.2.2、Apply the resource manifest

deploy]# kubectl apply -f test-pod.yaml 
pod/test-pod created

4.3、Analyze the result

4.3.1、Check the pod status

deploy]# kubectl get pod -w
NAME                                     READY   STATUS              RESTARTS   AGE
nfs-client-provisioner-f9465f6c4-w8gf5   1/1     Running             0          55m
test-pod                                 0/1     ContainerCreating   0          12s
test-pod                                 0/1     Completed           0          27s
test-pod                                 0/1     Completed           0          29s

# The pod mounts the volume, writes the file, and then completes

4.3.2、Check the data in the NFS directory

nfs-data]# tree 
.
├── alertmanager_data
├── lost+found
└── promtheus_data
    └── default-test-claim-pvc-55088be5-f8c3-43e8-871f-5c38e482639b
        └── SUCCESS # the mount worked and the data was written successfully

5、How the StorageClass Reclaim Policy Affects Data

5.1、archiveOnDelete: false & reclaimPolicy: Delete

archiveOnDelete: "false"  
reclaimPolicy: Delete   # not set in class.yaml above; the default is Delete

Results:
1、After the pod is deleted and recreated, the data still exists; the old directory and its data remain for the new pod to use
2、After the SC is deleted and recreated, the data still exists; the old directory and its data remain for the new pod to use
3、After the PVC is deleted, the PV is deleted and the corresponding data on the NFS server is removed

5.2、archiveOnDelete: true & reclaimPolicy: Delete

archiveOnDelete: "ture"  
reclaimPolicy: Delete 

1、pod删除重建后数据依然存在,旧pod名称及数据依然保留给新pod使用
2、sc删除重建后数据依然存在,旧pod名称及数据依然保留给新pod使用
3、删除PVC后,PV不会别删除,且状态由Bound变为Released,NFS Server对应数据被保留
4、重建sc后,新建PVC会绑定新的pv,旧数据可以通过拷贝到新的PV中

5.3、archiveOnDelete: false & reclaimPolicy: Retain 

archiveOnDelete: "false"
reclaimPolicy: Retain

1、pod删除重建后数据依然存在,旧pod名称及数据依然保留给新pod使用
2、sc删除重建后数据依然存在,旧pod名称及数据依然保留给新pod使用
3、删除PVC后,PV不会别删除,且状态由Bound变为Released,NFS Server对应数据被保留
4、重建sc后,新建PVC会绑定新的pv,旧数据可以通过拷贝到新的PV中

5.4、archiveOnDelete: true & reclaimPolicy: Retain 

archiveOnDelete: "ture"  
reclaimPolicy: Retain  

1、pod删除重建后数据依然存在,旧pod名称及数据依然保留给新pod使用
2、sc删除重建后数据依然存在,旧pod名称及数据依然保留给新pod使用
3、删除PVC后,PV不会别删除,且状态由Bound变为Released,NFS Server对应数据被保留
4、重建sc后,新建PVC会绑定新的pv,旧数据可以通过拷贝到新的PV中

5.5、Summary

Except for the first configuration (5.1), the other three keep the data after the PV/PVC is deleted.
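
For reference, a minimal StorageClass sketch for the 5.3/5.4 variants. reclaimPolicy is a standard StorageClass field and archiveOnDelete is the provisioner parameter discussed above; the name nfs-sc-retain and the file name class-retain.yaml are only illustrative:

cat > class-retain.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-sc-retain
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
reclaimPolicy: Retain            # PV survives PVC deletion and turns Released
parameters:
  archiveOnDelete: "false"       # only takes effect when the provisioner deletes a volume
EOF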