《Kubernetes权威指南》 Study Notes, Part 10: Shared Storage

1、Why storage is needed

For applications whose data must be persisted, or for stateful docker containers, mounting directories from the container onto the host is not enough; reliable storage is also needed for important data, so that after the application is rebuilt it can pick up its previous data again.

2、PV and PVC

A PV (PersistentVolume) abstracts the underlying storage and defines it as a cluster resource; PVs are created and configured by the cluster administrator.
A PVC (PersistentVolumeClaim) is a request for storage: it defines how a PV is consumed, much like a Pod consumes Node resources.

3、StorageClass

PV and PVC alone cannot satisfy every kind of application, because different applications have different storage requirements: read/write performance, concurrency, data redundancy, and so on. The StorageClass object was introduced for this: it describes the performance and characteristics of a storage backend, groups PVs into a named class, and, through a provisioner (for example via CSI, the Container Storage Interface), allocates storage space dynamically and on demand.

4、PV configuration

A simple example:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 5Gi  # 5Gi of storage
  accessModes:
    - ReadWriteOnce # access mode: read-write, mountable by a single node
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow # name of the storage class
  nfs:   # backend storage type
    path: /tmp   # NFS export path
    server: 172.17.0.2  # NFS server address

The main PV parameters are:

  • capacity
    Storage capacity.
  • volumeMode
    Volume mode; the default is Filesystem, and Block is also supported.
    Backends that can expose block devices include RBD, iSCSI, Local volume, FC, etc.
    If the volume is a block device, how is the PV configured? For example:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: block-pv
spec:
  capacity:
    storage: 5Gi  # 5Gi of storage
  accessModes:
    - ReadWriteOnce # access mode: read-write, mountable by a single node
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow # name of the storage class
  volumeMode: Block
  fc:
    targetWWNs: ["xxxxxx"]
    lun: 0
    readOnly: false
  • accessModes
    Access modes; this parameter describes the application's access rights to the storage resource.
    There are three modes:
    1、ReadWriteOnce: read-write, can be mounted by a single Node only
    2、ReadOnlyMany: read-only, can be mounted by multiple Nodes
    3、ReadWriteMany: read-write, can be mounted by multiple Nodes

  • storageClassName
    Storage class.
    A PV that specifies a storageClassName can only be bound by a PVC that requests that class.

  • mountOptions
    Mount options; some backends need additional options when the PV is mounted on a Node.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gce-disk-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  mountOptions:
    - hard
    - nolock
    - nfsvers=3
  gcePersistentDisk:
    fsType: ext4
    pdName: gce-disk-1
  • nodeAffinity
    Node affinity.
    Restricts the PV so that it can only be accessed through certain Nodes; Pods that use the PV are then scheduled onto Nodes that satisfy the constraint. For example:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node

5、PV lifecycle

A PV goes through four phases:

  • Available: free, not yet bound to any PVC
  • Bound: bound to a PVC
  • Released: the PVC has been deleted, but the resource has not been reclaimed yet
  • Failed: automatic reclamation of the volume failed
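
These phases show up in the STATUS column of kubectl get pv. A quick way to read the phase of a single PV (assuming a PV named pv1, as in the example above):

kubectl get pv pv1 -o jsonpath='{.status.phase}'
# prints one of: Available, Bound, Released, Failed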

6、PVC configuration

First, a typical PVC definition:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: slow
  selector:
    matchLabels:
      release: "stable"
    matchExpressions:
      - {key: environment,operator: In,values: [dev]}

A brief explanation of the fields above:

  • resources: the amount of storage requested
  • accessModes: the application's access rights to the storage resource
  • selector: a label selector; only PVs carrying matching labels are considered for binding
  • storageClassName: the storage class; only a PV with the same class can be bound

About storageClassName: the example above sets it, but the field can also be left out. If it is not set, there are two cases:
1、The DefaultStorageClass admission plugin is not enabled
   storageClassName is treated as "", and the system only binds PVs that have no class set.
2、The DefaultStorageClass admission plugin is enabled
   A default StorageClass must be configured first; the system then automatically provisions a PV for the PVC and binds it. Note that the automatically created PV uses the backend storage of the default StorageClass.

A PVC is a namespaced object: a Pod must be in the same namespace as the PVC in order to mount it (PVs themselves are cluster-scoped, so PV/PVC binding is not restricted by namespace).
If a PVC sets both storageClassName and a selector, only a PV that satisfies both can be bound.
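
For the myclaim PVC above to bind statically, there must be a PV that carries matching labels and the same class. A minimal sketch of such a PV (the NFS backend and the label values below are illustrative assumptions):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-for-myclaim
  labels:
    release: "stable"       # matches the PVC's matchLabels
    environment: "dev"      # satisfies the matchExpressions rule (In [dev])
spec:
  capacity:
    storage: 8Gi            # at least the 8Gi the PVC requests
  accessModes:
    - ReadWriteOnce
  storageClassName: slow    # same class as the PVC
  nfs:
    path: /exports/myclaim  # illustrative NFS export
    server: 172.17.0.2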

7、Lifecycle of PV and PVC


Resource provisioning
The result of provisioning is a set of ready-to-use PVs.

  • Static mode
    PVs are created by hand, and the backend storage details must be configured explicitly.

  • Dynamic mode
    No PV is created by hand. A StorageClass is created and the PVC requests that class; a PV backed by the class is then created automatically and bound to the PVC.

Resource binding
A PVC requests storage according to its own spec; the controller searches the existing PVs for one that satisfies the claim and binds them, after which the PVC can be used. If no suitable PV is found, the PVC stays in Pending. Note that once a PV is bound to a PVC it cannot be bound to another PVC. If the PVC requests less capacity than the PV provides, the surplus is wasted; to avoid this, use dynamic provisioning, where the PVC finds a suitable StorageClass and a PV of exactly the requested size is created and bound to it.

Resource usage
A Pod mounts a PVC through a volume definition; a PVC can be mounted by multiple Pods, and once mounted it can be used by the containers continuously.

Resource release
When the storage is no longer needed, the PVC is deleted and the PV is released, but the PV cannot be bound by another PVC immediately, because data written through the previous PVC may still be on the storage.

Resource reclamation
As noted above, a released PV cannot be reused right away; the leftover data has to be dealt with first. To understand reclamation it helps to look at the overall provisioning and consumption flow.

Static-mode provisioning flow (figure)

Dynamic-mode provisioning flow (figure)

8、StorageClass configuration

A StorageClass is an abstract definition of a storage resource. With it the administrator no longer creates PVs by hand; the system creates PVs automatically and binds them to PVCs, which is dynamic provisioning.
Once created, a StorageClass cannot be modified: its name, provisioner, and provisioner parameters are all immutable. If it has to change, delete and recreate it.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2

The fields in brief:

  • provisioner
    The storage provisioner; in-tree provisioners start with kubernetes.io/.

  • parameters
    Parameters passed to the provisioner.

The following StorageClass definition for GlusterFS illustrates this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  clusterid: "xxxxxxxxxxxxxxxxxx"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"   

A short explanation of the parameters above:
resturl: the Heketi endpoint
secretNamespace, secretName: the Secret object that stores the password for the GlusterFS REST (Heketi) service
gidMin, gidMax: the GID range of the StorageClass, used for dynamically provisioned PVs

Enabling a default StorageClass cuts down on repetitive PVC configuration. How is a default StorageClass set up?

Two steps:

  • Add DefaultStorageClass to the kube-apiserver admission plugins, i.e. --enable-admission-plugins=...,DefaultStorageClass
  • Add an annotation to the StorageClass that should become the default:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gold
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/xxx   
parameters:
  type: pd-ssd

Check with kubectl get sc:

NAME             PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
gold (default)   kubernetes.io/gce-pd   Delete          Immediate           false                  19s
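
With the default class in place, a PVC can omit storageClassName entirely and still be provisioned from gold. A minimal sketch:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-uses-default-sc
spec:
  # no storageClassName: the DefaultStorageClass admission plugin fills in
  # the class annotated as default ("gold" in the example above)
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi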

9、Hands-on: shared storage with GlusterFS

This section builds shared storage with GlusterFS: defining a StorageClass, deploying the GlusterFS and Heketi services, requesting a PVC, and using the storage from Pods. Heketi is used to manage the GlusterFS cluster; first download the required release:
wget https://github.com/heketi/heketi/releases/download/v10.0.0/heketi-v10.0.0.linux.amd64.tar.gz

9.1、Install the GlusterFS client on every node

yum -y install glusterfs glusterfs-fuse

modprobe dm_snapshot
modprobe dm_mirror
modprobe dm_thin_pool
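
The modprobe calls above only last until the next reboot. On a systemd-based system such as CentOS 7, the modules can be listed under /etc/modules-load.d so they load automatically at boot (the file name here is an assumption):

cat > /etc/modules-load.d/glusterfs.conf <<EOF
dm_snapshot
dm_mirror
dm_thin_pool
EOF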

Before deploying the GlusterFS container cluster, kube-apiserver must allow privileged containers: add the startup flag --allow-privileged=true.

Because a GlusterFS container has to run on every storage node, each worker node is labeled and the service is deployed as a DaemonSet. A GlusterFS cluster needs at least three nodes, but the binary-installed cluster used here originally had only two worker nodes, with the master dedicated to management; so the master is now also turned into a worker node (see the node-service installation part of the binary cluster deployment notes).

Label each node:

kubectl label nodes 192.168.0.161 storagenode=glusterfs 
kubectl label nodes 192.168.0.162 storagenode=glusterfs
kubectl label nodes 192.168.0.163 storagenode=glusterfs

kubectl get nodes --show-labels

9.2、Deploy the GlusterFS container service

Create a GlusterFS management container on each labeled node:
cat glusterfs-daemonset.yaml

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: glusterfs
  labels:
    glusterfs: deployment
  annotations:
    description: GlusterFS Daemon Set
    tags: glusterfs
spec:
  selector:
    matchLabels: 
      glusterfs-node: daemonset
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs-node: daemonset
    spec:
      nodeSelector:
        storagenode: glusterfs
      hostNetwork: true
      containers:
        - image: 'gluster/gluster-centos:latest'
          imagePullPolicy: Always
          name: glusterfs
          volumeMounts:
            - name: glusterfs-heketi
              mountPath: /var/lib/heketi
            - name: glusterfs-run
              mountPath: /run
            - name: glusterfs-lvm
              mountPath: /run/lvm
            - name: glusterfs-etc
              mountPath: /etc/glusterfs
            - name: glusterfs-logs
              mountPath: /var/log/glusterfs
            - name: glusterfs-config
              mountPath: /var/lib/glusterd
            - name: glusterfs-dev
              mountPath: /dev
            - name: glusterfs-cgroup
              mountPath: /sys/fs/cgroup
          securityContext:
            capabilities: {}
            privileged: true
          readinessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 60
            exec:
              command:
                - /bin/bash
                - '-c'
                - systemctl status glusterd.service
          livenessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 60
            exec:
              command:
                - /bin/bash
                - '-c'
                - systemctl status glusterd.service
      volumes:
        - name: glusterfs-heketi
          hostPath:
            path: /var/lib/heketi
        - name: glusterfs-run
        - name: glusterfs-lvm
          hostPath:
            path: /run/lvm
        - name: glusterfs-etc
          hostPath:
            path: /etc/glusterfs
        - name: glusterfs-logs
          hostPath:
            path: /var/log/glusterfs
        - name: glusterfs-config
          hostPath:
            path: /var/lib/glusterd
        - name: glusterfs-dev
          hostPath:
            path: /dev
        - name: glusterfs-cgroup
          hostPath:
            path: /sys/fs/cgroup

Create the DaemonSet:
kubectl create -f glusterfs-daemonset.yaml
Check the Pods:
kubectl get po

NAME READY STATUS RESTARTS AGE
glusterfs-5nsq5 1/1 Running 0 5m23s
glusterfs-jb7tt 1/1 Running 0 5m23s
glusterfs-xjv6d 1/1 Running 0 5m23s

One possible problem: if the Pods fail to be created, try restarting kubelet.

9.3、Deploy the Heketi service

Once the GlusterFS service containers are running they have to be configured into a cluster; the GlusterFS management framework Heketi is used for that.

Before deploying Heketi, first create its ServiceAccount.
cat heketi-service-account.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: heketi-service-account

Create the ServiceAccount:
kubectl create -f heketi-service-account.yaml

Grant Heketi the required permissions and create its config Secret:
kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account
kubectl create secret generic heketi-config-secret --from-file=./heketi.json
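
The heketi.json fed into the Secret is not shown in these notes. A minimal sketch of what it might contain, assuming the admin key 'My Secret' used by the later heketi-cli commands and the kubernetes executor (check the heketi documentation for the full set of options):

{
  "port": "8080",
  "use_auth": true,
  "jwt": {
    "admin": { "key": "My Secret" },
    "user":  { "key": "another secret" }
  },
  "glusterfs": {
    "executor": "kubernetes",
    "db": "/var/lib/heketi/heketi.db",
    "fstab": "/var/lib/heketi/fstab"
  }
}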

Bootstrap the Heketi deployment:
cat heketi-bootstrap.yaml

kind: List
apiVersion: v1
items:
  - kind: Service
    apiVersion: v1
    metadata:
      name: deploy-heketi
      labels:
        glusterfs: heketi-service
        deploy-heketi: support
      annotations:
        description: Exposes Heketi Service
    spec:
      selector:
        name: deploy-heketi
      ports:
        - name: deploy-heketi
          port: 8080
          targetPort: 8080
  - kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: deploy-heketi
      labels:
        glusterfs: heketi-deployment
        deploy-heketi: deployment
      annotations:
        description: Defines how to deploy Heketi
    spec:
      replicas: 1
      selector:
        matchLabels:
          glusterfs: heketi-pod
      template:
        metadata:
          name: deploy-heketi
          labels:
            name: deploy-heketi
            glusterfs: heketi-pod
            deploy-heketi: pod
        spec:
          serviceAccountName: heketi-service-account
          containers:
            - image: 'heketi/heketi:dev'
              imagePullPolicy: Always
              name: deploy-heketi
              env:
                - name: HEKETI_EXECUTOR
                  value: kubernetes
                - name: HEKETI_DB_PATH
                  value: /var/lib/heketi/heketi.db
                - name: HEKETI_FSTAB
                  value: /var/lib/heketi/fstab
                - name: HEKETI_SNAPSHOT_LIMIT
                  value: '14'
                - name: HEKETI_KUBE_GLUSTER_DAEMONSET
                  value: 'y'
              ports:
                - containerPort: 8080
              volumeMounts:
                - name: db
                  mountPath: /var/lib/heketi
                - name: config
                  mountPath: /etc/heketi
              readinessProbe:
                timeoutSeconds: 3
                initialDelaySeconds: 3
                httpGet:
                  path: /hello
                  port: 8080
              livenessProbe:
                timeoutSeconds: 3
                initialDelaySeconds: 30
                httpGet:
                  path: /hello
                  port: 8080
          volumes:
            - name: db
            - name: config
              secret:
                secretName: heketi-config-secret

kubectl create -f heketi-bootstrap.yaml

cd /tmp/heketi-client/bin && cp heketi-cli /usr/local/bin/

Run kubectl get svc to get the Service IP of deploy-heketi:

NAME            TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
deploy-heketi   NodePort   10.0.0.150   <none>        8080:42774/TCP   3h26m

export HEKETI_CLI_SERVER=http://10.0.0.150:8080
Because deploy-heketi was scheduled onto node 160, this address must be reachable from the host where heketi-cli runs: it can be ClusterIP+port, Pod IP+port, or node IP+NodePort. Node IP+NodePort was used here because of a problem with this cluster: from 158 (the apiserver node) the heketi service on 160 could not be reached via ClusterIP or Pod IP. That is most likely a problem with how this kubernetes cluster was deployed; it will be redeployed later.

Prepare the cluster topology file:
cat topology.json

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.0.161"],
              "storage": ["192.168.0.161"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.0.162"],
              "storage": ["192.168.0.162"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["192.168.0.163"],
              "storage": ["192.168.0.163"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb"]
        }
      ]
    }
  ]
}

ps: the manage field would normally contain a host name, telling Heketi on which node to drive the gluster container, but the kubernetes cluster in these notes registers its nodes by IP and no DNS is set up, so IPs are used here as well. To use host names, the kubernetes cluster would have to be reconfigured to register nodes by name.

heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' topology load --json=topology.json
ps: Heketi has authentication enabled by default, so plain 'heketi-cli topology load --json=topology.json' does not work and fails with: Error: Invalid JWT token: Token missing iss claim
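
To avoid repeating --user and --secret on every call, heketi-cli can also take the credentials from environment variables (verify the variable names against your heketi version):

export HEKETI_CLI_SERVER=http://10.0.0.150:8080
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY='My Secret'
heketi-cli topology load --json=topology.json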

Two errors came up here.

ERROR 2021/05/07 08:50:17 heketi/pkg/remoteexec/kube/target.go:145:kube.TargetDaemonSet.GetTargetPod: Unable to find a GlusterFS pod on host 192.168.0.161 with a label key glusterfs-node

This happens because the label on the glusterfs Pods does not match what Heketi looks for; change the template labels in glusterfs-daemonset.yaml:

....
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs-node: pod
....

ERROR 2021/05/18 03:26:33 heketi/pkg/remoteexec/kube/target.go:134:kube.TargetDaemonSet.GetTargetPod: Get "https://10.0.0.1:443/api/v1/namespaces/default/pods?labelSelector=glusterfs-node": x509: certificate signed by unknown authority
[kubeexec] ERROR 2021/05/18 03:26:33 heketi/pkg/remoteexec/kube/target.go:135:kube.TargetDaemonSet.GetTargetPod: Failed to get list of pods
[cmdexec] ERROR 2021/05/18 03:26:33 heketi/executors/cmdexec/peer.go:80:cmdexec.(*CmdExecutor).GlusterdCheck: Failed to get list of pods
[heketi] ERROR 2021/05/18 03:26:33 heketi/apps/glusterfs/app_node.go:107:glusterfs.(*App).NodeAdd: Failed to get list of pods
[heketi] ERROR 2021/05/18 03:26:33 heketi/apps/glusterfs/app_node.go:108:glusterfs.(*App).NodeAdd: New Node doesn't have glusterd running

Fix:

kubectl apply -f glusterfs-daemonset.yaml to update the DaemonSet
Then run the Heketi topology load again:

Creating cluster ... ID: a0c5178e4bf4b0aba55cc50e776d115c
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node 192.168.0.158 ... ID: 54767a0eec3a860e87b7542a9376f5da
Adding device /dev/sdb ... OK
Creating node 192.168.0.159 ... ID: 37f937aa83aa2f3b3c3c5a8a971c3535
Adding device /dev/sdb ... OK
Creating node 192.168.0.160 ... ID: 9b64da7ca8c4e779452269e7c8bf8d60
Adding device /dev/sdb ... OK

Check with heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' topology info:

Cluster Id: a0c5178e4bf4b0aba55cc50e776d115c

    File:  true
    Block: true

    Volumes:


    Nodes:

        Node Id: 37f937aa83aa2f3b3c3c5a8a971c3535
        State: online
        Cluster Id: a0c5178e4bf4b0aba55cc50e776d115c
        Zone: 1
        Management Hostnames: 192.168.0.161
        Storage Hostnames: 192.168.0.161
        Devices:
                Id:bb52f3bad91c1369184408aee3cce48e   State:online    Size (GiB):19      Used (GiB):0       Free (GiB):19      
                        Known Paths: /dev/sdb

                        Bricks:

        Node Id: 54767a0eec3a860e87b7542a9376f5da
        State: online
        Cluster Id: a0c5178e4bf4b0aba55cc50e776d115c
        Zone: 1
        Management Hostnames: 192.168.0.162
        Storage Hostnames: 192.168.0.162
        Devices:
                Id:fc7c8566f4901b0fc02168fec7f34707   State:online    Size (GiB):19      Used (GiB):0       Free (GiB):19      
                        Known Paths: /dev/sdb

                        Bricks:

        Node Id: 9b64da7ca8c4e779452269e7c8bf8d60
        State: online
        Cluster Id: a0c5178e4bf4b0aba55cc50e776d115c
        Zone: 1
        Management Hostnames: 192.168.0.163
        Storage Hostnames: 192.168.0.163
        Devices:
                Id:8204b8e6fdba12e4b5e6308934c56b47   State:online    Size (GiB):19      Used (GiB):0       Free (GiB):19      
                        Known Paths: /dev/sdb

                        Bricks:

The output shows that no volumes or bricks have been created yet.

yum install device-mapper* -y
heketi-cli setup-openshift-heketi-storage --user admin --secret 'My Secret'
kubectl create -f heketi-storage.json creates a volume named heketidbstorage in the GlusterFS cluster, together with the Service heketi-storage-endpoints; both play an important role in the persistent Heketi deployment that follows.

kubectl delete all,service,jobs,deployment,secret --selector="deploy-heketi"

Deploy the persistent Heketi:
cat heketi-deployment.yaml

kind: List
apiVersion: v1
items:
  - kind: Secret
    apiVersion: v1
    metadata:
      name: heketi-db-backup
      labels:
        glusterfs: heketi-db
        heketi: db
    data: {}
    type: Opaque
  - kind: Service
    apiVersion: v1
    metadata:
      name: heketi
      labels:
        glusterfs: heketi-service
        deploy-heketi: support
      annotations:
        description: Exposes Heketi Service
    spec:
      selector:
        name: heketi  
  #   clusterIP: 10.0.0.126
      ports:
        - name: heketi
          port: 8080
          targetPort: 8080
  - kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: heketi
      labels:
        glusterfs: heketi-deployment
      annotations:
        description: Defines how to deploy Heketi
    spec:
      replicas: 1
      selector:
        matchLabels:
          glusterfs: heketi-pod
      template:
        metadata:
          name: heketi
          labels:
            name: heketi
            glusterfs: heketi-pod
        spec:
          serviceAccountName: heketi-service-account
          containers:
            - image: 'heketi/heketi:dev'
              imagePullPolicy: Always
              name: heketi
              env:
                - name: HEKETI_EXECUTOR
                  value: kubernetes
                - name: HEKETI_DB_PATH
                  value: /var/lib/heketi/heketi.db
                - name: HEKETI_FSTAB
                  value: /var/lib/heketi/fstab
                - name: HEKETI_SNAPSHOT_LIMIT
                  value: '14'
                - name: HEKETI_KUBE_GLUSTER_DAEMONSET
                  value: 'y'
              ports:
                - containerPort: 8080
              volumeMounts:
                - mountPath: /backupdb
                  name: heketi-db-secret
                - name: db
                  mountPath: /var/lib/heketi
                - name: config
                  mountPath: /etc/heketi
              readinessProbe:
                timeoutSeconds: 3
                initialDelaySeconds: 3
                httpGet:
                  path: /hello
                  port: 8080
              livenessProbe:
                timeoutSeconds: 3
                initialDelaySeconds: 30
                httpGet:
                  path: /hello
                  port: 8080
          volumes:
            - name: db
              glusterfs:
                endpoints: heketi-storage-endpoints
                path: heketidbstorage
            - name: heketi-db-secret
              secret:
                secretName: heketi-db-backup
            - name: config
              secret:
                secretName: heketi-config-secret

kubectl create -f heketi-deployment.yaml
Check the new Service with kubectl get svc:

[root@node158 kubernetes-yaml]# kubectl get svc
NAME     TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
heketi   NodePort   10.0.0.126   <none>        8080:36290/TCP   5m16s

export HEKETI_CLI_SERVER=http://10.0.0.126:8080
curl http://10.0.0.126:8080/hello

Hello from Heketi

ps: the clusterIP field could be set in the Service spec to pin the Service IP; that makes the StorageClass configuration later safer, since there is no risk of the Service IP changing and invalidating the StorageClass (normally it would not change anyway). See the sketch below.
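
The pinned address is already present as a comment in heketi-deployment.yaml above; uncommented, the Service spec would look like this (10.0.0.126 must be a free address inside the cluster's service CIDR):

spec:
  selector:
    name: heketi
  clusterIP: 10.0.0.126
  ports:
    - name: heketi
      port: 8080
      targetPort: 8080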

heketi-cli topology info --user admin --secret 'My Secret'


Cluster Id: 323e7dff66d0033cf940b96b66e818d1

    File:  true
    Block: true

    Volumes:

        Name: heketidbstorage
        Size: 2
        Id: 934b19edf9b795dddb536f90dc7064ee
        Cluster Id: 323e7dff66d0033cf940b96b66e818d1
        Mount: 192.168.0.163:heketidbstorage
        Mount Options: backup-volfile-servers=192.168.0.161,192.168.0.162
        Durability Type: replicate
        Replica: 3
        Snapshot: Disabled

                Bricks:
                        Id: 57de4489c5479b5f1ef9936250da2199
                        Path: /var/lib/heketi/mounts/vg_a1548ff11036dbf7d3ac6512e9059c88/brick_57de4489c5479b5f1ef9936250da2199/brick
                        Size (GiB): 2
                        Node: d08e0f61b722b0a50c271b494df243f6
                        Device: a1548ff11036dbf7d3ac6512e9059c88

                        Id: cd08de0f2e9993de6b680db2b4960a13
                        Path: /var/lib/heketi/mounts/vg_72b8c7cb20ef42219a660dec34f4f121/brick_cd08de0f2e9993de6b680db2b4960a13/brick
                        Size (GiB): 2
                        Node: 9f39ee2d2269583bc96b8d1392077d61
                        Device: 72b8c7cb20ef42219a660dec34f4f121

                        Id: d2e8fcdc1cdc8d45facb0a369734b1a5
                        Path: /var/lib/heketi/mounts/vg_5c94d502aa689a60cfdc201e7c2e93ef/brick_d2e8fcdc1cdc8d45facb0a369734b1a5/brick
                        Size (GiB): 2
                        Node: 93114b9ccaa04a9358fc998926f7b96c
                        Device: 5c94d502aa689a60cfdc201e7c2e93ef



    Nodes:

        Node Id: 93114b9ccaa04a9358fc998926f7b96c
        State: online
        Cluster Id: 323e7dff66d0033cf940b96b66e818d1
        Zone: 1
        Management Hostnames: 192.168.0.163
        Storage Hostnames: 192.168.0.163
        Devices:
                Id:5c94d502aa689a60cfdc201e7c2e93ef   State:online    Size (GiB):29      Used (GiB):2       Free (GiB):27      
                        Known Paths: /dev/sdb

                        Bricks:
                                Id:d2e8fcdc1cdc8d45facb0a369734b1a5   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_5c94d502aa689a60cfdc201e7c2e93ef/brick_d2e8fcdc1cdc8d45facb0a369734b1a5/brick

        Node Id: 9f39ee2d2269583bc96b8d1392077d61
        State: online
        Cluster Id: 323e7dff66d0033cf940b96b66e818d1
        Zone: 1
        Management Hostnames: 192.168.0.161
        Storage Hostnames: 192.168.0.161
        Devices:
                Id:72b8c7cb20ef42219a660dec34f4f121   State:online    Size (GiB):29      Used (GiB):2       Free (GiB):27      
                        Known Paths: /dev/sdb

                        Bricks:
                                Id:cd08de0f2e9993de6b680db2b4960a13   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_72b8c7cb20ef42219a660dec34f4f121/brick_cd08de0f2e9993de6b680db2b4960a13/brick

        Node Id: d08e0f61b722b0a50c271b494df243f6
        State: online
        Cluster Id: 323e7dff66d0033cf940b96b66e818d1
        Zone: 1
        Management Hostnames: 192.168.0.162
        Storage Hostnames: 192.168.0.162
        Devices:
                Id:a1548ff11036dbf7d3ac6512e9059c88   State:online    Size (GiB):29      Used (GiB):2       Free (GiB):27      
                        Known Paths: /dev/sdb

                        Bricks:
                                Id:57de4489c5479b5f1ef9936250da2199   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_a1548ff11036dbf7d3ac6512e9059c88/brick_57de4489c5479b5f1ef9936250da2199/brick

The output shows that a 2 GiB replicated volume named heketidbstorage has been created as Heketi's backing store.

This can also be verified from inside the Pods.
List the Pods:
kubectl get pods | grep gluster

glusterfs-4pvjw          1/1     Running   1          26h
glusterfs-lj5zd          1/1     Running   1          26h
glusterfs-mpl88          1/1     Running   1          26h

Check the mounts in each Pod:

kubectl exec glusterfs-4pvjw -- df -h   
kubectl exec glusterfs-lj5zd -- df -h
kubectl exec glusterfs-mpl88  -- df -h

Output:

/dev/mapper/vg_a1548ff11036dbf7d3ac6512e9059c88-brick_57de4489c5479b5f1ef9936250da2199  2.0G   33M  2.0G   2% /var/lib/heketi/mounts/vg_a1548ff11036dbf7d3ac6512e9059c88/brick_57de4489c5479b5f1ef9936250da2199
/dev/mapper/vg_5c94d502aa689a60cfdc201e7c2e93ef-brick_d2e8fcdc1cdc8d45facb0a369734b1a5  2.0G   33M  2.0G   2% /var/lib/heketi/mounts/vg_5c94d502aa689a60cfdc201e7c2e93ef/brick_d2e8fcdc1cdc8d45facb0a369734b1a5  
/dev/mapper/vg_72b8c7cb20ef42219a660dec34f4f121-brick_cd08de0f2e9993de6b680db2b4960a13  2.0G   33M  2.0G   2% /var/lib/heketi/mounts/vg_72b8c7cb20ef42219a660dec34f4f121/brick_cd08de0f2e9993de6b680db2b4960a13

The mount information matches what the heketi-cli client reported for the GlusterFS cluster.

You can also exec into any gluster Pod and inspect the cluster with the gluster CLI:
gluster volume list shows the replicated volume heketidbstorage
gluster volume info heketidbstorage
returns:

Volume Name: heketidbstorage
Type: Replicate
Volume ID: 2994370d-d636-422f-8038-455645be0c48
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.0.162:/var/lib/heketi/mounts/vg_a1548ff11036dbf7d3ac6512e9059c88/brick_57de4489c5479b5f1ef9936250da2199/brick
Brick2: 192.168.0.161:/var/lib/heketi/mounts/vg_72b8c7cb20ef42219a660dec34f4f121/brick_cd08de0f2e9993de6b680db2b4960a13/brick
Brick3: 192.168.0.163:/var/lib/heketi/mounts/vg_5c94d502aa689a60cfdc201e7c2e93ef/brick_d2e8fcdc1cdc8d45facb0a369734b1a5/brick
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
performance.stat-prefetch: off
performance.write-behind: off
performance.open-behind: off
performance.quick-read: off
performance.strict-o-direct: on
performance.read-ahead: off
performance.io-cache: off
performance.readdir-ahead: off
user.heketi.dbstoragelevel: 1
user.heketi.id: 934b19edf9b795dddb536f90dc7064ee

9.4、Define the StorageClass

To use the GlusterFS cluster for dynamic provisioning in kubernetes, a StorageClass has to be created to describe the resource; PVCs then trigger automatic PV creation, and the Heketi settings inside the StorageClass drive the actual volume creation.

cat storage-gluster-heketi.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.0.0.126:8080"
  clusterid: "323e7dff66d0033cf940b96b66e818d1"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "My Secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"

This configuration matters: if it is wrong, the bricks and volume cannot be created correctly when the PVC below is submitted. A secret-based variant is sketched below.
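
restuserkey stores the admin key in plain text inside the StorageClass. As in the earlier GlusterFS example, it can instead be referenced through a Secret; a sketch of that variant (the Secret type kubernetes.io/glusterfs with a single key field is what the in-tree glusterfs provisioner expects, but treat the exact names here as assumptions):

kubectl create secret generic heketi-admin-secret --type="kubernetes.io/glusterfs" --from-literal=key='My Secret' -n default

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi-secret    # alternative class; an existing class cannot be modified
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.0.0.126:8080"
  clusterid: "323e7dff66d0033cf940b96b66e818d1"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-admin-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"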

9.5、Create the PVC

cat pvc-gluster-heketi.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gluster-heketi
spec:
  storageClassName: gluster-heketi
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

kubectl create -f pvc-gluster-heketi.yaml
kubectl get pvc

pvc-gluster-heketi Bound pvc-5fb57f87-bbb5-4935-ac94-4932de0e3a2f 2Gi RWO gluster-heketi 15s

The output shows that the PVC has been created and bound successfully.

kubectl get pv shows the automatically created PV; describe it:
kubectl describe pv pvc-5fb57f87-bbb5-4935-ac94-4932de0e3a2f

Name:            pvc-832be96c-ebaf-4206-8b91-76953cb38e77
Labels:          <none>
Annotations:     Description: Gluster-Internal: Dynamically provisioned PV
                 gluster.kubernetes.io/heketi-volume-id: 26a7cc174cd93d3c608745c09fdd3b4a
                 gluster.org/type: file
                 kubernetes.io/createdby: heketi-dynamic-provisioner
                 pv.beta.kubernetes.io/gid: 40000
                 pv.kubernetes.io/bound-by-controller: yes
                 pv.kubernetes.io/provisioned-by: kubernetes.io/glusterfs
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    gluster-heketi
Status:          Bound
Claim:           default/pvc-gluster-heketi
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        2Gi
Node Affinity:   <none>
Message:         
Source:
    Type:                Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
    EndpointsName:       glusterfs-dynamic-832be96c-ebaf-4206-8b91-76953cb38e77
    EndpointsNamespace:  default
    Path:                vol_26a7cc174cd93d3c608745c09fdd3b4a
    ReadOnly:            false
Events:                  <none>

You can see the StorageClass the PV references, its status, capacity, reclaim policy, and the glusterfs mount point.

heketi-cli topology info --user admin --secret 'My Secret'

Cluster Id: 323e7dff66d0033cf940b96b66e818d1

    File:  true
    Block: true

    Volumes:

        Name: vol_26a7cc174cd93d3c608745c09fdd3b4a
        Size: 2
        Id: 26a7cc174cd93d3c608745c09fdd3b4a
        Cluster Id: 323e7dff66d0033cf940b96b66e818d1
        Mount: 192.168.0.163:vol_26a7cc174cd93d3c608745c09fdd3b4a
        Mount Options: backup-volfile-servers=192.168.0.161,192.168.0.162
        Durability Type: replicate
        Replica: 3
        Snapshot: Enabled
        Snapshot Factor: 1.00

                Bricks:
                        Id: 6ad559d1047c19ff697f23d426c38f2e
                        Path: /var/lib/heketi/mounts/vg_5c94d502aa689a60cfdc201e7c2e93ef/brick_6ad559d1047c19ff697f23d426c38f2e/brick
                        Size (GiB): 2
                        Node: 93114b9ccaa04a9358fc998926f7b96c
                        Device: 5c94d502aa689a60cfdc201e7c2e93ef

                        Id: 93a8cc55192513f6603f1369b8d6a75f
                        Path: /var/lib/heketi/mounts/vg_a1548ff11036dbf7d3ac6512e9059c88/brick_93a8cc55192513f6603f1369b8d6a75f/brick
                        Size (GiB): 2
                        Node: d08e0f61b722b0a50c271b494df243f6
                        Device: a1548ff11036dbf7d3ac6512e9059c88

                        Id: cda6215634578b61755514fdb3669e58
                        Path: /var/lib/heketi/mounts/vg_72b8c7cb20ef42219a660dec34f4f121/brick_cda6215634578b61755514fdb3669e58/brick
                        Size (GiB): 2
                        Node: 9f39ee2d2269583bc96b8d1392077d61
                        Device: 72b8c7cb20ef42219a660dec34f4f121


        Name: heketidbstorage
        Size: 2
        Id: 934b19edf9b795dddb536f90dc7064ee
        Cluster Id: 323e7dff66d0033cf940b96b66e818d1
        Mount: 192.168.0.163:heketidbstorage
        Mount Options: backup-volfile-servers=192.168.0.161,192.168.0.162
        Durability Type: replicate
        Replica: 3
        Snapshot: Disabled

                Bricks:
                        Id: 57de4489c5479b5f1ef9936250da2199
                        Path: /var/lib/heketi/mounts/vg_a1548ff11036dbf7d3ac6512e9059c88/brick_57de4489c5479b5f1ef9936250da2199/brick
                        Size (GiB): 2
                        Node: d08e0f61b722b0a50c271b494df243f6
                        Device: a1548ff11036dbf7d3ac6512e9059c88

                        Id: cd08de0f2e9993de6b680db2b4960a13
                        Path: /var/lib/heketi/mounts/vg_72b8c7cb20ef42219a660dec34f4f121/brick_cd08de0f2e9993de6b680db2b4960a13/brick
                        Size (GiB): 2
                        Node: 9f39ee2d2269583bc96b8d1392077d61
                        Device: 72b8c7cb20ef42219a660dec34f4f121

                        Id: d2e8fcdc1cdc8d45facb0a369734b1a5
                        Path: /var/lib/heketi/mounts/vg_5c94d502aa689a60cfdc201e7c2e93ef/brick_d2e8fcdc1cdc8d45facb0a369734b1a5/brick
                        Size (GiB): 2
                        Node: 93114b9ccaa04a9358fc998926f7b96c
                        Device: 5c94d502aa689a60cfdc201e7c2e93ef



    Nodes:

        Node Id: 93114b9ccaa04a9358fc998926f7b96c
        State: online
        Cluster Id: 323e7dff66d0033cf940b96b66e818d1
        Zone: 1
        Management Hostnames: 192.168.0.163
        Storage Hostnames: 192.168.0.163
        Devices:
                Id:5c94d502aa689a60cfdc201e7c2e93ef   State:online    Size (GiB):29      Used (GiB):4       Free (GiB):25      
                        Known Paths: /dev/sdb

                        Bricks:
                                Id:6ad559d1047c19ff697f23d426c38f2e   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_5c94d502aa689a60cfdc201e7c2e93ef/brick_6ad559d1047c19ff697f23d426c38f2e/brick
                                Id:d2e8fcdc1cdc8d45facb0a369734b1a5   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_5c94d502aa689a60cfdc201e7c2e93ef/brick_d2e8fcdc1cdc8d45facb0a369734b1a5/brick

        Node Id: 9f39ee2d2269583bc96b8d1392077d61
        State: online
        Cluster Id: 323e7dff66d0033cf940b96b66e818d1
        Zone: 1
        Management Hostnames: 192.168.0.161
        Storage Hostnames: 192.168.0.161
        Devices:
                Id:72b8c7cb20ef42219a660dec34f4f121   State:online    Size (GiB):29      Used (GiB):4       Free (GiB):25      
                        Known Paths: /dev/sdb

                        Bricks:
                                Id:cd08de0f2e9993de6b680db2b4960a13   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_72b8c7cb20ef42219a660dec34f4f121/brick_cd08de0f2e9993de6b680db2b4960a13/brick
                                Id:cda6215634578b61755514fdb3669e58   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_72b8c7cb20ef42219a660dec34f4f121/brick_cda6215634578b61755514fdb3669e58/brick

        Node Id: d08e0f61b722b0a50c271b494df243f6
        State: online
        Cluster Id: 323e7dff66d0033cf940b96b66e818d1
        Zone: 1
        Management Hostnames: 192.168.0.162
        Storage Hostnames: 192.168.0.162
        Devices:
                Id:a1548ff11036dbf7d3ac6512e9059c88   State:online    Size (GiB):29      Used (GiB):4       Free (GiB):25      
                        Known Paths: /dev/sdb

                        Bricks:
                                Id:57de4489c5479b5f1ef9936250da2199   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_a1548ff11036dbf7d3ac6512e9059c88/brick_57de4489c5479b5f1ef9936250da2199/brick
                                Id:93a8cc55192513f6603f1369b8d6a75f   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_a1548ff11036dbf7d3ac6512e9059c88/brick_93a8cc55192513f6603f1369b8d6a75f/brick

The output shows a newly created volume whose name matches the Path field of the PV shown earlier.

The PVC can now be mounted into a Pod through a volume definition.

9.6、Mount the PVC in a Pod

cat pod-use-pvc.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-use-pvc
spec:
  containers:
  - name: pod-use-pvc
    image: busybox
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: gluster-volume
      mountPath: "/pv-data"
      readOnly: false
  volumes:
  - name: gluster-volume
    persistentVolumeClaim:
      claimName: pvc-gluster-heketi

kubectl create -f pod-use-pvc.yaml

Test writing some data:
kubectl exec -it pod-use-pvc -- sh
cd /pv-data && mkdir wangteng

Back on the host, look at the volume defined by the PV:

Mount: 192.168.0.163:vol_26a7cc174cd93d3c608745c09fdd3b4a

mount -t glusterfs 192.168.0.163:vol_26a7cc174cd93d3c608745c09fdd3b4a /mnt/
df -h

192.168.0.163:vol_26a7cc174cd93d3c608745c09fdd3b4a 2.0G 53M 2.0G 3% /mnt

cd /mnt

drwxr-sr-x 2 root 40000 6 May 12 16:53 wangteng

The data is visible in the volume, which confirms that the write really landed in the GlusterFS replica set.

The test above mounted the PVC from a single Pod, but in production Pods are usually created by controllers, often with several replicas (nginx, for example). The next test uses nginx.
Create the PVCs:
cat pvc-deployment-heketi.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx-html
spec:
  accessModes: 
    - ReadWriteMany
  storageClassName: gluster-heketi
  resources:
    requests:
      storage: 500Mi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx-conf
spec:
  accessModes: 
    - ReadWriteMany
  storageClassName: gluster-heketi
  resources:
    requests:
      storage: 10Mi

kubectl create -f pvc-deployment-heketi.yaml to create the PVCs
kubectl get pvc,pv to check the automatically created PVs

    NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS     REASON   AGE
persistentvolume/pvc-31c475f2-5517-467e-ba71-666247e8f326   1Gi        RWX            Delete           Bound    default/glusterfs-nginx-html   gluster-heketi            9s
persistentvolume/pvc-832be96c-ebaf-4206-8b91-76953cb38e77   2Gi        RWO            Delete           Bound    default/pvc-gluster-heketi     gluster-heketi            2d17h
persistentvolume/pvc-fd6d4512-c896-4bec-84ab-f5b00e0bdf1f   1Gi        RWX            Delete           Bound    default/glusterfs-nginx-conf   gluster-heketi  

The output shows that a PVC requesting less than 1Gi is rounded up to 1Gi.

kubectl describe a PV to see its details and compare them with the heketi-cli output.
Inspect the volumes:
heketi-cli topology info --user admin --secret 'My Secret'
returns:

Cluster Id: 323e7dff66d0033cf940b96b66e818d1

    File:  true
    Block: true

    Volumes:

        Name: vol_011f2db30cf156a0ac9313c922f00a52
        Size: 1
        Id: 011f2db30cf156a0ac9313c922f00a52
        Cluster Id: 323e7dff66d0033cf940b96b66e818d1
        Mount: 192.168.0.163:vol_011f2db30cf156a0ac9313c922f00a52
        Mount Options: backup-volfile-servers=192.168.0.161,192.168.0.162
        Durability Type: replicate
        Replica: 3
        Snapshot: Enabled
        Snapshot Factor: 1.00

                Bricks:
                        Id: 072635701b3c859defbe9869486e9846
                        Path: /var/lib/heketi/mounts/vg_5c94d502aa689a60cfdc201e7c2e93ef/brick_072635701b3c859defbe9869486e9846/brick
                        Size (GiB): 1
                        Node: 93114b9ccaa04a9358fc998926f7b96c
                        Device: 5c94d502aa689a60cfdc201e7c2e93ef

                        Id: 289609e80b710d12dbce1a4c9fb3344d
                        Path: /var/lib/heketi/mounts/vg_a1548ff11036dbf7d3ac6512e9059c88/brick_289609e80b710d12dbce1a4c9fb3344d/brick
                        Size (GiB): 1
                        Node: d08e0f61b722b0a50c271b494df243f6
                        Device: a1548ff11036dbf7d3ac6512e9059c88

                        Id: 77e036891aea7a9a0d0d7e41a7d6cc7a
                        Path: /var/lib/heketi/mounts/vg_72b8c7cb20ef42219a660dec34f4f121/brick_77e036891aea7a9a0d0d7e41a7d6cc7a/brick
                        Size (GiB): 1
                        Node: 9f39ee2d2269583bc96b8d1392077d61
                        Device: 72b8c7cb20ef42219a660dec34f4f121


        Name: vol_05b3736b4857ba40c6c344ad4b8d2997
        Size: 1
        Id: 05b3736b4857ba40c6c344ad4b8d2997
        Cluster Id: 323e7dff66d0033cf940b96b66e818d1
        Mount: 192.168.0.163:vol_05b3736b4857ba40c6c344ad4b8d2997
        Mount Options: backup-volfile-servers=192.168.0.161,192.168.0.162
        Durability Type: replicate
        Replica: 3
        Snapshot: Enabled
        Snapshot Factor: 1.00

                Bricks:
                        Id: 0c05f4c977cd4e14a0f5ab1470f12cd1
                        Path: /var/lib/heketi/mounts/vg_a1548ff11036dbf7d3ac6512e9059c88/brick_0c05f4c977cd4e14a0f5ab1470f12cd1/brick
                        Size (GiB): 1
                        Node: d08e0f61b722b0a50c271b494df243f6
                        Device: a1548ff11036dbf7d3ac6512e9059c88

                        Id: 14d033a3ef72edd2bad44213d24b7356
                        Path: /var/lib/heketi/mounts/vg_5c94d502aa689a60cfdc201e7c2e93ef/brick_14d033a3ef72edd2bad44213d24b7356/brick
                        Size (GiB): 1
                        Node: 93114b9ccaa04a9358fc998926f7b96c
                        Device: 5c94d502aa689a60cfdc201e7c2e93ef

                        Id: a0e0d2238b556ce1fef8582f361aebaa
                        Path: /var/lib/heketi/mounts/vg_72b8c7cb20ef42219a660dec34f4f121/brick_a0e0d2238b556ce1fef8582f361aebaa/brick
                        Size (GiB): 1
                        Node: 9f39ee2d2269583bc96b8d1392077d61
                        Device: 72b8c7cb20ef42219a660dec34f4f121


        Name: vol_26a7cc174cd93d3c608745c09fdd3b4a
        Size: 2
        Id: 26a7cc174cd93d3c608745c09fdd3b4a
        Cluster Id: 323e7dff66d0033cf940b96b66e818d1
        Mount: 192.168.0.163:vol_26a7cc174cd93d3c608745c09fdd3b4a
        Mount Options: backup-volfile-servers=192.168.0.161,192.168.0.162
        Durability Type: replicate
        Replica: 3
        Snapshot: Enabled
        Snapshot Factor: 1.00

                Bricks:
                        Id: 6ad559d1047c19ff697f23d426c38f2e
                        Path: /var/lib/heketi/mounts/vg_5c94d502aa689a60cfdc201e7c2e93ef/brick_6ad559d1047c19ff697f23d426c38f2e/brick
                        Size (GiB): 2
                        Node: 93114b9ccaa04a9358fc998926f7b96c
                        Device: 5c94d502aa689a60cfdc201e7c2e93ef

                        Id: 93a8cc55192513f6603f1369b8d6a75f
                        Path: /var/lib/heketi/mounts/vg_a1548ff11036dbf7d3ac6512e9059c88/brick_93a8cc55192513f6603f1369b8d6a75f/brick
                        Size (GiB): 2
                        Node: d08e0f61b722b0a50c271b494df243f6
                        Device: a1548ff11036dbf7d3ac6512e9059c88

                        Id: cda6215634578b61755514fdb3669e58
                        Path: /var/lib/heketi/mounts/vg_72b8c7cb20ef42219a660dec34f4f121/brick_cda6215634578b61755514fdb3669e58/brick
                        Size (GiB): 2
                        Node: 9f39ee2d2269583bc96b8d1392077d61
                        Device: 72b8c7cb20ef42219a660dec34f4f121


        Name: heketidbstorage
        Size: 2
        Id: 934b19edf9b795dddb536f90dc7064ee
        Cluster Id: 323e7dff66d0033cf940b96b66e818d1
        Mount: 192.168.0.163:heketidbstorage
        Mount Options: backup-volfile-servers=192.168.0.161,192.168.0.162
        Durability Type: replicate
        Replica: 3
        Snapshot: Disabled

                Bricks:
                        Id: 57de4489c5479b5f1ef9936250da2199
                        Path: /var/lib/heketi/mounts/vg_a1548ff11036dbf7d3ac6512e9059c88/brick_57de4489c5479b5f1ef9936250da2199/brick
                        Size (GiB): 2
                        Node: d08e0f61b722b0a50c271b494df243f6
                        Device: a1548ff11036dbf7d3ac6512e9059c88

                        Id: cd08de0f2e9993de6b680db2b4960a13
                        Path: /var/lib/heketi/mounts/vg_72b8c7cb20ef42219a660dec34f4f121/brick_cd08de0f2e9993de6b680db2b4960a13/brick
                        Size (GiB): 2
                        Node: 9f39ee2d2269583bc96b8d1392077d61
                        Device: 72b8c7cb20ef42219a660dec34f4f121

                        Id: d2e8fcdc1cdc8d45facb0a369734b1a5
                        Path: /var/lib/heketi/mounts/vg_5c94d502aa689a60cfdc201e7c2e93ef/brick_d2e8fcdc1cdc8d45facb0a369734b1a5/brick
                        Size (GiB): 2
                        Node: 93114b9ccaa04a9358fc998926f7b96c
                        Device: 5c94d502aa689a60cfdc201e7c2e93ef



    Nodes:

        Node Id: 93114b9ccaa04a9358fc998926f7b96c
        State: online
        Cluster Id: 323e7dff66d0033cf940b96b66e818d1
        Zone: 1
        Management Hostnames: 192.168.0.163
        Storage Hostnames: 192.168.0.163
        Devices:
                Id:5c94d502aa689a60cfdc201e7c2e93ef   State:online    Size (GiB):29      Used (GiB):6       Free (GiB):23      
                        Known Paths: /dev/sdb

                        Bricks:
                                Id:072635701b3c859defbe9869486e9846   Size (GiB):1       Path: /var/lib/heketi/mounts/vg_5c94d502aa689a60cfdc201e7c2e93ef/brick_072635701b3c859defbe9869486e9846/brick
                                Id:14d033a3ef72edd2bad44213d24b7356   Size (GiB):1       Path: /var/lib/heketi/mounts/vg_5c94d502aa689a60cfdc201e7c2e93ef/brick_14d033a3ef72edd2bad44213d24b7356/brick
                                Id:6ad559d1047c19ff697f23d426c38f2e   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_5c94d502aa689a60cfdc201e7c2e93ef/brick_6ad559d1047c19ff697f23d426c38f2e/brick
                                Id:d2e8fcdc1cdc8d45facb0a369734b1a5   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_5c94d502aa689a60cfdc201e7c2e93ef/brick_d2e8fcdc1cdc8d45facb0a369734b1a5/brick

        Node Id: 9f39ee2d2269583bc96b8d1392077d61
        State: online
        Cluster Id: 323e7dff66d0033cf940b96b66e818d1
        Zone: 1
        Management Hostnames: 192.168.0.161
        Storage Hostnames: 192.168.0.161
        Devices:
                Id:72b8c7cb20ef42219a660dec34f4f121   State:online    Size (GiB):29      Used (GiB):6       Free (GiB):23      
                        Known Paths: /dev/sdb

                        Bricks:
                                Id:77e036891aea7a9a0d0d7e41a7d6cc7a   Size (GiB):1       Path: /var/lib/heketi/mounts/vg_72b8c7cb20ef42219a660dec34f4f121/brick_77e036891aea7a9a0d0d7e41a7d6cc7a/brick
                                Id:a0e0d2238b556ce1fef8582f361aebaa   Size (GiB):1       Path: /var/lib/heketi/mounts/vg_72b8c7cb20ef42219a660dec34f4f121/brick_a0e0d2238b556ce1fef8582f361aebaa/brick
                                Id:cd08de0f2e9993de6b680db2b4960a13   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_72b8c7cb20ef42219a660dec34f4f121/brick_cd08de0f2e9993de6b680db2b4960a13/brick
                                Id:cda6215634578b61755514fdb3669e58   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_72b8c7cb20ef42219a660dec34f4f121/brick_cda6215634578b61755514fdb3669e58/brick

        Node Id: d08e0f61b722b0a50c271b494df243f6
        State: online
        Cluster Id: 323e7dff66d0033cf940b96b66e818d1
        Zone: 1
        Management Hostnames: 192.168.0.162
        Storage Hostnames: 192.168.0.162
        Devices:
                Id:a1548ff11036dbf7d3ac6512e9059c88   State:online    Size (GiB):29      Used (GiB):6       Free (GiB):23      
                        Known Paths: /dev/sdb

                        Bricks:
                                Id:0c05f4c977cd4e14a0f5ab1470f12cd1   Size (GiB):1       Path: /var/lib/heketi/mounts/vg_a1548ff11036dbf7d3ac6512e9059c88/brick_0c05f4c977cd4e14a0f5ab1470f12cd1/brick
                                Id:289609e80b710d12dbce1a4c9fb3344d   Size (GiB):1       Path: /var/lib/heketi/mounts/vg_a1548ff11036dbf7d3ac6512e9059c88/brick_289609e80b710d12dbce1a4c9fb3344d/brick
                                Id:57de4489c5479b5f1ef9936250da2199   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_a1548ff11036dbf7d3ac6512e9059c88/brick_57de4489c5479b5f1ef9936250da2199/brick
                                Id:93a8cc55192513f6603f1369b8d6a75f   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_a1548ff11036dbf7d3ac6512e9059c88/brick_93a8cc55192513f6603f1369b8d6a75f/brick

This wall of output makes it hard to pick out what matters. The question we usually care about is:
which volume holds a given Pod's data?
Work through it in this order (a one-line shortcut follows the list):
1、Find the volumes of matching size; the two 1Gi PVCs created above correspond to the two 1Gi volumes here.
2、Look at the PVCs and PVs:
kubectl get pvc,pv | grep nginx shows the two PVCs just created

persistentvolumeclaim/glusterfs-nginx-conf   Bound    pvc-fd6d4512-c896-4bec-84ab-f5b00e0bdf1f   1Gi        RWX            gluster-heketi   43m
persistentvolumeclaim/glusterfs-nginx-html   Bound    pvc-31c475f2-5517-467e-ba71-666247e8f326   1Gi        RWX            gluster-heketi   43m
persistentvolume/pvc-31c475f2-5517-467e-ba71-666247e8f326   1Gi        RWX            Delete           Bound    default/glusterfs-nginx-html   gluster-heketi            43m
persistentvolume/pvc-fd6d4512-c896-4bec-84ab-f5b00e0bdf1f   1Gi        RWX            Delete           Bound    default/glusterfs-nginx-conf   gluster-heketi            43m

3、Find the PV bound to each PVC.
4、Describe the PV to read the volume name it abstracts; match that name against one of the volumes from step 1 and you know exactly where in the GlusterFS replicated volumes the Pod's data lives.
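
Step 4 can be shortened to a one-liner that prints the gluster volume name straight from the PV object (using one of the PV names listed above):

kubectl get pv pvc-31c475f2-5517-467e-ba71-666247e8f326 -o jsonpath='{.spec.glusterfs.path}'
# prints the vol_... name backing this PV; match it against the heketi-cli output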

Use the PVCs from a Deployment:
cat deployment-use-pvc.yaml

apiVersion: apps/v1
kind: Deployment 
metadata: 
  name: nginx-gfs
spec: 
  replicas: 3
  selector:
    matchLabels:
      name: nginx
  template: 
    metadata: 
      labels: 
        name: nginx 
    spec: 
      containers: 
        - name: nginx 
          image: nginx 
          imagePullPolicy: IfNotPresent
          ports: 
            - containerPort: 80
          volumeMounts:
            - name: nginx-gfs-html
              mountPath: "/usr/share/nginx/html"
            - name: nginx-gfs-conf
              mountPath: "/etc/nginx/conf.d"
      volumes:
      - name: nginx-gfs-html
        persistentVolumeClaim:
          claimName: glusterfs-nginx-html
      - name: nginx-gfs-conf
        persistentVolumeClaim:
          claimName: glusterfs-nginx-conf
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-svc
  labels:
    glusterfs: nginx-service
    deploy-nginx: support
  annotations:
    description: Exposes Nginx Service
spec:
  selector:     
    name: nginx    ## must match the Pod template labels of the Deployment
  ports:
    - name: nginx
      port: 80
      targetPort: 80

kubectl get po,pvc,pv | grep nginx

pod/nginx-gfs-86bd594d4f-66pfw   1/1     Running   0          62s
pod/nginx-gfs-86bd594d4f-fv2bt   1/1     Running   0          62s
pod/nginx-gfs-86bd594d4f-t2r4x   1/1     Running   0          62s
persistentvolumeclaim/glusterfs-nginx-conf   Bound    pvc-fd6d4512-c896-4bec-84ab-f5b00e0bdf1f   1Gi        RWX            gluster-heketi   84m
persistentvolumeclaim/glusterfs-nginx-html   Bound    pvc-31c475f2-5517-467e-ba71-666247e8f326   1Gi        RWX            gluster-heketi   84m
persistentvolume/pvc-31c475f2-5517-467e-ba71-666247e8f326   1Gi        RWX            Delete           Bound    default/glusterfs-nginx-html   gluster-heketi            84m
persistentvolume/pvc-fd6d4512-c896-4bec-84ab-f5b00e0bdf1f   1Gi        RWX            Delete           Bound    default/glusterfs-nginx-conf   gluster-heketi            84m

Check the mounts of each of the three Pods:

[root@node161 ~]# kubectl  exec -it nginx-gfs-86bd594d4f-66pfw  -- df -h
Filesystem                                          Size  Used Avail Use% Mounted on
overlay                                              15G  4.6G   11G  31% /
tmpfs                                                64M     0   64M   0% /dev
tmpfs                                               1.4G     0  1.4G   0% /sys/fs/cgroup
/dev/mapper/centos-root                              15G  4.6G   11G  31% /etc/hosts
shm                                                  64M     0   64M   0% /dev/shm
192.168.0.161:vol_05b3736b4857ba40c6c344ad4b8d2997 1014M   43M  972M   5% /etc/nginx/conf.d
192.168.0.161:vol_011f2db30cf156a0ac9313c922f00a52 1014M   43M  972M   5% /usr/share/nginx/html
tmpfs                                               1.4G   12K  1.4G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                               1.4G     0  1.4G   0% /proc/acpi
tmpfs                                               1.4G     0  1.4G   0% /proc/scsi
tmpfs                                               1.4G     0  1.4G   0% /sys/firmware
[root@node161 ~]# kubectl  exec -it nginx-gfs-86bd594d4f-fv2bt  -- df -h
Filesystem                                          Size  Used Avail Use% Mounted on
overlay                                              15G  2.9G   13G  20% /
tmpfs                                                64M     0   64M   0% /dev
tmpfs                                               1.4G     0  1.4G   0% /sys/fs/cgroup
/dev/mapper/centos-root                              15G  2.9G   13G  20% /etc/hosts
shm                                                  64M     0   64M   0% /dev/shm
192.168.0.162:vol_05b3736b4857ba40c6c344ad4b8d2997 1014M   43M  972M   5% /etc/nginx/conf.d
192.168.0.161:vol_011f2db30cf156a0ac9313c922f00a52 1014M   43M  972M   5% /usr/share/nginx/html
tmpfs                                               1.4G   12K  1.4G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                               1.4G     0  1.4G   0% /proc/acpi
tmpfs                                               1.4G     0  1.4G   0% /proc/scsi
tmpfs                                               1.4G     0  1.4G   0% /sys/firmware
[root@node161 ~]# kubectl  exec -it nginx-gfs-86bd594d4f-t2r4x   -- df -h
Filesystem                                          Size  Used Avail Use% Mounted on
overlay                                              15G  2.5G   13G  17% /
tmpfs                                                64M     0   64M   0% /dev
tmpfs                                               1.4G     0  1.4G   0% /sys/fs/cgroup
/dev/mapper/centos-root                              15G  2.5G   13G  17% /etc/hosts
shm                                                  64M     0   64M   0% /dev/shm
192.168.0.162:vol_05b3736b4857ba40c6c344ad4b8d2997 1014M   43M  972M   5% /etc/nginx/conf.d
192.168.0.162:vol_011f2db30cf156a0ac9313c922f00a52 1014M   43M  972M   5% /usr/share/nginx/html
tmpfs                                               1.4G   12K  1.4G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                               1.4G     0  1.4G   0% /proc/acpi
tmpfs                                               1.4G     0  1.4G   0% /proc/scsi
tmpfs                                               1.4G     0  1.4G   0% /sys/firmware

This output makes it clear why the PVCs mounted by this multi-replica nginx Deployment use access mode ReadWriteMany: the PVs bound to them can be mounted by multiple nodes.

Now run a test against the nginx replicas.
Write data into the volume that backs the PV bound to glusterfs-nginx-html.
There are two ways to write: mount the volume on the host, or exec into one of the Pods and write into the mounted directory. The latter is used here.
kubectl exec -it nginx-gfs-86bd594d4f-66pfw -- bash
echo "123" > /usr/share/nginx/html/index.html

echo "
server {
    listen       80;
    server_name  localhost;
    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
" > /etc/nginx/conf.d/default.conf

Delete the old Service and Deployment and create new ones, then check:

kubectl get svc,deployments

service/glusterfs-dynamic-31c475f2-5517-467e-ba71-666247e8f326   ClusterIP   10.0.0.151   <none>        1/TCP      4h47m
service/glusterfs-dynamic-832be96c-ebaf-4206-8b91-76953cb38e77   ClusterIP   10.0.0.199   <none>        1/TCP      2d21h
service/glusterfs-dynamic-fd6d4512-c896-4bec-84ab-f5b00e0bdf1f   ClusterIP   10.0.0.205   <none>        1/TCP      4h47m
service/heketi                                                   ClusterIP   10.0.0.126   <none>        8080/TCP   5d22h
service/heketi-storage-endpoints                                 ClusterIP   10.0.0.237   <none>        1/TCP      5d22h
service/kubernetes                                               ClusterIP   10.0.0.1     <none>        443/TCP    8d
service/nginx-svc                                                ClusterIP   10.0.0.131   <none>        80/TCP     15m

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/heketi      1/1     1            1           5d22h
deployment.apps/nginx-gfs   3/3     3            3           15m
[root@node161 kubernetes-yaml]# curl 10.0.0.131 
123
[root@node161 kubernetes-yaml]# kubectl describe svc nginx-gfs 
Error from server (NotFound): services "nginx-gfs" not found
[root@node161 kubernetes-yaml]# kubectl describe svc nginx-svc 
Name:              nginx-svc
Namespace:         default
Labels:            deploy-nginx=support
                   glusterfs=nginx-service
Annotations:       description: Exposes Nginx Service
Selector:          name=nginx
Type:              ClusterIP
IP:                10.0.0.131
Port:              nginx1  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.10.3:80,10.244.25.2:80,10.244.46.3:80
Session Affinity:  None
Events:            <none>

The output confirms that the data written through the PVC is indeed mounted into and served by the Pods.

Deleting a volume:
kubectl exec -it heketi-d94cd58f9-tpnld -- bash
heketi-cli volume delete volumeID --user admin --secret 'My Secret'
Deleting a volume directly with the gluster CLI also works, but Heketi's database is not updated, so the deleted volume still shows up when querying through Heketi; it is better to stick to the heketi-cli commands.

9.7、Advantages of dynamic provisioning

1、Compared with static mode, dynamic provisioning creates a PV of exactly the size requested by the PVC, instead of being limited by the sizes of hand-created PVs.
2、There is no need to create large numbers of PVs by hand.

10、How CSI works

The PVC/StorageClass/PV storage management described so far is built on in-tree volume plugins: the plugin code has to live in the kubernetes main tree to be callable. This tight coupling brings problems, e.g. plugins can only ship together with kubernetes releases, and failures can be hard to troubleshoot.

CSI (Container Storage Interface) solves this. With CSI as the standard storage interface towards the container orchestrator, a storage plugin only has to implement that interface, which decouples it from kubernetes. This out-of-tree model is the standard solution for third-party kubernetes storage plugins going forward.
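
From the user's point of view nothing changes with CSI: a StorageClass simply names the CSI driver as its provisioner, and PVCs stay exactly the same. A minimal sketch, assuming the driver registers itself under the name csi-hostpath (check the driver deployment below for the exact name):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-sc
provisioner: csi-hostpath        # CSI driver name instead of kubernetes.io/...
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  storageClassName: csi-hostpath-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi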

10.1、Deploying the CSI components

Putting a CSI storage plugin into practice:

  • Create the CRD for the CSIDriver resource

cat csidriver.yaml

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: csidrivers.csi.storage.k8s.io
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  group: csi.storage.k8s.io
  names:
    kind: CSIDriver
    plural: csidrivers
  scope: Cluster
  validation:
    openAPIV3Schema:
      properties:
        spec:
          description: Specification of the CSI Driver.
          properties:
            attachRequired:
              description: Indicates this CSI volume driver requires an attach operation,and that Kubernetes should call attach and wait for any attach operation to complete before proceeding to mount.
              type: boolean
            podInfoOnMountVersion:
              description: Indicates this CSI volume driver requires additional pod
              type: string
  version: v1alpha1
  • Create the CRD for the CSINodeInfo resource
    cat csinode.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: csinodeinfos.csi.storage.k8s.io
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  group: csi.storage.k8s.io
  names:
    kind: CSINodeInfo
    plural: csinodeinfos
  scope: Cluster
  validation:
    openAPIV3Schema:
      properties:
        spec:
          description: Specification of CSINodeInfo
          properties:
            drivers:
              description: List of CSI drivers running on the node and their specs.
              type: array
              items:
                properties:
                  name:
                    description: The CSI driver that this object refers to.
                    type: string
                  nodeID:
                    description: The node from the driver point of view.
                    type: string
                  topologyKeys:
                    description: List of keys supported by the driver.
                    items:
                      type: string
                    type: array
        status:
          description: Status of CSINodeInfo
          properties:
            drivers:
              description: List of CSI drivers running on the node and their statuses.
              type: array
              items:
                properties:
                  name:
                    description: The CSI driver that this object refers to.
                    type: string
                  available:
                    description: Whether the CSI driver is installed.
                    type: boolean
                  volumePluginMechanism:
                    description: Indicates to external components the required mechanism.
                    pattern: in-tree|csi
                    type: string
  version: v1alpha1
  • Create the storage plugin components

Here the csi-hostpath plugin is used as the example; the following three components need to be deployed:
1、csi-hostpath-attacher
2、csi-hostpath-provisioner
3、csi-hostpathplugin (comprising csi-node-driver-registrar and hostpathplugin)

Create csi-hostpath-attacher
cat csi-hostpath-attacher.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-attacher
  namespace: default

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-attacher-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get","list","watch","update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get","list","watch"]
  - apiGroups: ["csi.storage.k8s.io"]
    resources: ["csinodeinfos"]
    verbs: ["get","list","watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get","list","watch","update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: csi-attacher-role
subjects:
  - kind: ServiceAccount
    name: csi-attacher
    namespace: default
roleRef:
  kind: ClusterRole
  name: external-attacher-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: external-attacher-cfg
  namespace: default
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get","watch","list","delete","update","create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: csi-attacher-role-cfg
  namespace: default
subjects:
  - kind: ServiceAccount
    name: csi-attacher
    namespace: default
roleRef:
  kind: Role
  name: external-attacher-cfg
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
  name: csi-hostpath-attacher
  labels:
    app: csi-hostpath-attacher
spec:
  selector:
    app: csi-hostpath-attacher
  ports:
    - name: dummy
      port: 12345
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: csi-hostpath-attacher
spec:
  serviceName: "csi-hostpath-attacher"
  replicas: 1
  selector:
    matchLabels:
      app: csi-hostpath-attacher
  template:
    metadata:
      labels:
        app: csi-hostpath-attacher
    spec:
      serviceAccountName: csi-attacher
      containers:
        - name: csi-attacher
          image: quay.io/k8scsi/csi-attacher:v1.0.1
          imagePullPolicy: IfNotPresent
          args:
            - --v=5
            - --csi-address=$(ADDRESS)
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          volumeMounts:
          - mountPath: /csi
            name: socket-dir
      volumes:
        - hostPath:
            path: /var/lib/kubelet/plugins/csi-hostpath
            type: DirectoryOrCreate
          name: socket-dir

Create csi-hostpath-provisioner

cat csi-hostpath-provisioner.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-provisioner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get","list"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get","list","watch","create","delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get","list","watch","update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get","list","watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list","watch","create","update","patch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots"]
    verbs: ["get","list"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["get","list"]
  - apiGroups: ["csi.storage.k8s.io"]
    resources: ["csinodeinfos"]
    verbs: ["get","list","watch"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get","list","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: csi-provisioner-role
subjects:
  - kind: ServiceAccount
    name: csi-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: external-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: external-provisioner-cfg
  namespace: default
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get","watch","list","delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: csi-provisioner-role-cfg
  namespace: default
subjects:
  - kind: ServiceAccount
    name: csi-provisioner
    namespace: default
roleRef:
  kind: Role
  name: external-provisioner-cfg
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: v1
kind: Service
metadata:
  name: csi-hostpath-provisioner
  labels:
    app: csi-hostpath-provisioner
spec:
  selector:
    app: csi-hostpath-provisioner
  ports:
    - name: dummy
      port: 12345
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: csi-hostpath-provisioner
spec:
  serviceName: "csi-hostpath-provisioner"
  replicas: 1
  selector:
    matchLabels:
      app: csi-hostpath-provisioner
  template:
    metadata:
      labels:
        app: csi-hostpath-provisioner
    spec:
      serviceAccountName: csi-provisioner
      containers:
        - name: csi-provisioner
          image: quay.io/k8scsi/csi-provisioner:v1.0.1
          imagePullPolicy: IfNotPresent
          args:
            - "--provisioner=csi-hostpath"
            - "--csi-address=$(ADDRESS)"
            - "--connection-timeout=15s"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
      volumes:
        - hostPath:
            path: /var/lib/kubelet/plugins/csi-hostpath
            type: DirectoryOrCreate
          name: socket-dir   

Create hostpathplugin
cat hostpathplugin.yaml

apiVersion: v1
kind: ServiceAccount
metadata: 
  name: csi-node-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: driver-registrar-runner
rules:
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get","list","watch","create","update","patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: csi-driver-registrar-role
subjects:
  - kind: ServiceAccount
    name: csi-node-sa
    namespace: default
roleRef:
  kind: ClusterRole
  name: driver-registrar-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: csi-hostpathplugin
spec:
  selector: 
    matchLabels:
      app: csi-hostpathplugin
  template:
    metadata:
      labels:
        app: csi-hostpathplugin
    spec:
      serviceAccountName: csi-node-sa
      hostNetwork: true
      containers:
        - name: driver-registrar
          image: quay.io/k8scsi/csi-node-driver-registrar:v1.0.1
          imagePullPolicy: IfNotPresent
          args:
            - --v=5
            - --csi-address=/csi/csi.sock
            - --kubelet-registration-path=/var/lib/kubelet/plugins/csi-hostpath/csi.sock
          env:
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          volumeMounts:
          - mountPath: /csi
            name: socket-dir
          - mountPath: /registration
            name: registration-dir
        - name: hostpath
          image: quay.io/k8scsi/hostpathplugin:v1.0.1
          imagePullPolicy: IfNotPresent
          args:
            - --v=5
            - --endpoint=$(CSI_ENDPOINT)
            - --nodeid=$(KUBE_NODE_NAME)
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom: 
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
            - mountPath: /var/lib/kubelet/pods
              mountPropagation: Bidirectional
              name: mountpoint-dir
      volumes:
        - hostPath:
            path: /var/lib/kubelet/plugins/csi-hostpath
            type: DirectoryOrCreate
          name: socket-dir
        - hostPath:
            path: /var/lib/kubelet/pods
            type: DirectoryOrCreate
          name: mountpoint-dir
        - hostPath:
            path: /var/lib/kubelet/plugins_registry
            type: Directory
          name: registration-dir
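
With the two CRDs and the three component manifests written, they can be applied in one go (a sketch; the file names are the ones shown with cat above):

kubectl create -f csidriver.yaml -f csinode.yaml
kubectl create -f csi-hostpath-attacher.yaml -f csi-hostpath-provisioner.yaml -f hostpathplugin.yaml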

Check the status of the CSI components
kubectl get po | grep csi
Output:

csi-hostpath-attacher-0      1/1     Running   1          3d
csi-hostpath-provisioner-0   1/1     Running   1          3d17h
csi-hostpathplugin-26q5x     2/2     Running   0          5m42s
csi-hostpathplugin-b2r2v     2/2     Running   0          5m42s
csi-hostpathplugin-rg95s     2/2     Running   0          5m42s  

10.2、Using CSI storage

CSI storage is also consumed through the dynamic provisioning mechanism: create a StorageClass and a PVC to request storage resources, then mount the PVC in a Pod.

Create the StorageClass
cat csi-storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-sc
provisioner: csi-hostpath
reclaimPolicy: Delete
volumeBindingMode: Immediate

Create a PVC that references the StorageClass
cat csi-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc
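
Before mounting the PVC in a Pod, it is worth confirming that the external-provisioner has created and bound a PV (a short check; the object names follow the manifests above):

kubectl create -f csi-storageclass.yaml -f csi-pvc.yaml
kubectl get sc csi-hostpath-sc
kubectl get pvc csi-pvc
kubectl get pv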

Create a Pod that mounts the PVC
cat csi-app.yaml

apiVersion: v1
kind: Pod
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-csi-app
      image: busybox
      imagePullPolicy: IfNotPresent
      command: ["sleep","1000000"]
      volumeMounts:
      - mountPath: "/data"
        name: my-csi-volume
  nodeSelector:
    sto: csi
  volumes:
  - name: my-csi-volume
    persistentVolumeClaim:
      claimName: csi-pvc

kubectl create -f csi-app.yaml
Here the Pod fails to be created and stays in the ContainerCreating state; describing the Pod shows the following events:

Normal SuccessfulAttachVolume 5m25s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-4d126763-9b48-4888-aeee-bf157ff2f965"
Warning FailedMount 68s (x2 over 3m22s) kubelet, 192.168.0.161 Unable to attach or mount volumes: unmounted volumes=[my-csi-volume], unattached volumes=[my-csi-volume default-token-vkfvw]: timed out waiting for the condition
Warning FailedMount 67s (x10 over 5m17s) kubelet, 192.168.0.161 MountVolume.SetUp failed for volume "pvc-4d126763-9b48-4888-aeee-bf157ff2f965" : rpc error: code = Unknown desc = mount failed: exit status 255
Mounting command: mount
Mounting arguments: -o bind /tmp/073b7f98-c802-11eb-b43c-000c29ebe9ec /var/lib/kubelet/pods/0a1dc6db-66ec-4949-8d27-aa3c068409a5/volumes/kubernetes.io~csi/pvc-4d126763-9b48-4888-aeee-bf157ff2f965/mount
Output: mount: mounting /tmp/073b7f98-c802-11eb-b43c-000c29ebe9ec on /var/lib/kubelet/pods/0a1dc6db-66ec-4949-8d27-aa3c068409a5/volumes/kubernetes.io~csi/pvc-4d126763-9b48-4888-aeee-bf157ff2f965/mount failed: No such file or directory

I was not able to resolve this problem here; looking through the upstream material, there is this explanation:

Current scheme does not work in multi-node cluster because
my-csi-app pod may land on different node than other pods
and mount point is not there.
This commit changes deployment so that there is one
instance of csi-hostpathplugin, and we use inter-pod affinity
to land attacher, provisioner, hostpathplugin and my-csi-appi
on a same node.

Roughly, this says that in a multi-node cluster the application Pod may be scheduled to a node other than the one where the other Pods run, so the mount point is not there; the suggestion is to use inter-pod affinity to place the attacher, the provisioner, the plugin, and the application on the same node (a sketch of that change follows). In my tests the Pod still could not be created and the failure persisted, so I will not dig deeper here; the point of this chapter is to study the mechanism through a third-party CSI storage plugin.
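
For reference, a sketch of the affinity idea from that note, applied to the application Pod. This is untested here, as mentioned above: the label app: csi-hostpathplugin comes from the DaemonSet template, kubernetes.io/hostname is the usual topology key, and the approach only really helps if the plugin is deployed as a single instance rather than as a DaemonSet, as the upstream commit describes.

apiVersion: v1
kind: Pod
metadata:
  name: my-csi-app
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: csi-hostpathplugin   # co-locate with the hostpath plugin Pod
          topologyKey: kubernetes.io/hostname
  containers:
    - name: my-csi-app
      image: busybox
      imagePullPolicy: IfNotPresent
      command: ["sleep","1000000"]
      volumeMounts:
        - mountPath: "/data"
          name: my-csi-volume
  volumes:
    - name: my-csi-volume
      persistentVolumeClaim:
        claimName: csi-pvc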

10.3、CSI architecture

A series of files were created above to deploy the CSI components, but how are these components related, and what does each of them do?
The figure below shows the key components of the Kubernetes CSI storage plugin architecture and how they are deployed as containers.

06csi

Which component creates each of the Pods in the figure?

1、Deploying csi-hostpath-attacher produces the external-attacher container, which watches VolumeAttachment resource objects for changes and triggers ControllerPublish and ControllerUnpublish operations on the CSI driver.

2、Deploying csi-hostpath-provisioner produces the external-provisioner container, which watches PVCs for changes and triggers CreateVolume and DeleteVolume operations.

3、Deploying hostpathplugin creates two containers on every worker node: node-driver-registrar and the third-party storage driver container (here hostpathplugin). The former registers the latter with the kubelet, and the latter receives the kubelet's calls. The API objects these sidecars work with can be inspected directly, as shown below.
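
A few commands for inspecting those objects (VolumeAttachment is a built-in storage.k8s.io resource; the csi.storage.k8s.io names match the CRDs created in 10.1, and whether the driver actually populates them depends on the plugin version):

kubectl get volumeattachments
kubectl get csidrivers.csi.storage.k8s.io
kubectl get csinodeinfos.csi.storage.k8s.io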

10.4、Notes on third-party plugins

The plugin used in the test above is the hostpath plugin; other common ones include the NFS, CephFS, and GlusterFS plugins. Together with the three sidecar containers external-attacher, external-provisioner, and node-driver-registrar, they form a complete storage plugin system.
