《Kubernetes权威指南》学习笔记第十篇-共享存储

1、为什么需要存储

对于需要持久化数据的应用或者有状态的docker容器,不仅需要把容器内的目录挂载到宿主机,还需要构建可靠的共享存储来存放重要数据,以便应用重建后能够继续使用之前的数据

2、pv与pvc

pv将底层存储抽象定义为一种资源,由集群管理员进行创建和配置
pvc则相当于用户对存储资源的一个申请,它定义了对pv的使用,就像pod消耗node的资源一样,pvc消耗pv的资源

3、storageClass

pv、pvc并不能完全满足各种类型应用程序的需求,因为不同的应用程序对存储的性能有各种不同的要求,包括读写速度、并发性能、数据冗余等。为此Kubernetes引入了新的资源对象storageClass,用它来标记存储资源的性能和特性,将pv划分为某种类型(class),再配合动态供应机制(内置Provisioner或CSI容器存储接口)实现按需分配存储空间

4、pv配置

看一个简单实例

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 5Gi  # 存储容量为5GiB
  accessModes:
    - ReadWriteOnce # 访问模式:可读写,且只能被单个Node挂载
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow # 存储类的名字
  nfs:   # 后端存储类型
    path: /tmp   # nfs路径
    server: 172.17.0.2  # nfs存储地址

pv主要包括以下配置参数

  • capacity
    存储能力
  • volumeMode
    存储卷模式,默认为Filesystem(文件系统),还可以设置为Block(块设备)
    支持块设备的存储有:RBD、iSCSI、Local volume、FC等
    如果存储卷是块设备类型,pv该如何配置?参考下面的例子
apiVersion: v1
kind: PersistentVolume
metadata:
  name: block-pv
spec:
  capacity:
    storage: 5Gi  # 存储容量为5GiB
  accessModes:
    - ReadWriteOnce # 访问模式:可读写,且只能被单个Node挂载
  persistentVolumeReclaimPolicy: Retain # FC后端不支持Recycle策略,这里使用Retain
  storageClassName: slow # 存储类的名字
  volumeMode: Block
  fc:
    targetWWNs: ["xxxxxx"] 
    lun: 0
    readOnly: false
  • accessModes
    访问模式,该参数用来描述应用对存储资源的访问权限
    有三种模式
    1、ReadWriteOnce(RWO) 读写,只能被单个Node挂载
    2、ReadOnlyMany(ROX) 只读,允许多个Node挂载
    3、ReadWriteMany(RWX) 读写,允许多个Node挂载

  • storageClass
    存储类别
    指定了storageClass的pv只能与请求了该类别的pvc绑定,未设定类别的pv只能与不请求任何类别的pvc绑定

  • Mount Options
    挂载选项,pv挂载到Node时,有些后端存储可能需要指定额外的挂载参数

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gce-disk-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  mountOptions:
    - hard
    - nolock
    - nfsvers=3
  gcePersistentDisk:
    fsType: ext4
    pdName: gce-disk-1
  • Node Affinity
    节点亲和性
    设置pv只能通过某些Node来访问,这样就可以将需要使用的pv的Pod调度到满足条件的Node上
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node

5、pv的生命周期

四个状态

  • Available 可用,尚未与任何pvc绑定
  • Bound 已与某个pvc绑定
  • Released 绑定的pvc已删除,但资源尚未被回收
  • Failed 自动资源回收失败
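通过kubectl可以直接观察pv状态的变化,下面是一个示意(pv1、myclaim沿用前文示例中的名字,输出内容为假设,列有省略,以实际环境为准):

kubectl get pv -w

NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM             STORAGECLASS   AGE
pv1    5Gi        RWO            Recycle          Available                     slow           10s
pv1    5Gi        RWO            Recycle          Bound       default/myclaim   slow           35s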

6、pvc配置

先看一个常规的pvc配置

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: slow
  selector:
    matchLabels:
      release: "stable"
    matchExpressions:
      - {key: environment,operator: In,values: [dev]}

简单解释上面配置各参数

  • resources: 对存储资源的请求大小
  • accessModes: 应用对存储资源的访问权限
  • selector: 标签选择器,选择出具有对应标签的pv进行绑定
  • storageClassName: 设置存储类别,只有设置了同名Class的pv才能与之绑定

这里关于storageClass,上面的例子是设置了storageClassName字段,其实也可以不设置该字段,不设置时有以下两种情况
1、DefaultStorageClass未启用
等效于storageClassName="",系统只会选择未设定Class的pv与之绑定
2、DefaultStorageClass已启用
首先需要设置默认的StorageClass,然后系统会自动为pvc创建一个pv并完成绑定,注意自动创建的pv使用的是默认StorageClass所指定的后端存储

pv是集群级别的资源,不属于任何namespace;pvc则受限于namespace,pod在引用pvc时必须与该pvc处于同一namespace才能挂载
pvc如果同时设置了storageClassName和selector这两个参数,那么只有同时满足这两个条件的pv才能与之绑定
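下面是一个storageClassName显式置空的pvc示意(名称myclaim-no-class为假设值),它只会与未设定Class的pv绑定,不会触发默认StorageClass的动态供应:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim-no-class   # 示例名称,为假设值
spec:
  storageClassName: ""     # 显式置空,只匹配未设定Class的pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi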

7、pv与pvc的生命周期


资源供应
资源供应的结果就是创建好的pv

  • 静态模式
    手动创建pv,而且必须设置后端存储特性

  • 动态模式
    不用手动创建pv,而是先创建storageClass,再把pvc的storageClassName设置为该Class,系统会自动以该Class指定的后端存储创建pv并与pvc绑定

资源绑定
pvc根据自身配置去请求存储,系统在已存在的pv中选择满足pvc要求的pv,找到后与pvc绑定,用户就可以使用这个pvc了;找不到满足要求的pv时,pvc会一直处于Pending状态。注意pv一旦被绑定到某个pvc,就无法再与其他pvc绑定;如果pvc请求的资源少于pv的容量,为了不浪费资源可以使用动态供应模式,让系统根据合适的storageClass自动创建一个大小匹配的pv并与该pvc绑定
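可以用kubectl观察pvc的绑定情况,下面是一个示意输出(沿用前文的myclaim,内容为假设,仅用于说明Pending状态):

kubectl get pvc myclaim

NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myclaim   Pending                                      slow           30s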

资源使用
Pod通过Volume定义来挂载pvc,在访问模式允许的情况下一个pvc可以被多个Pod挂载,挂载到容器后即可持续使用

资源释放
存储资源使用完后删除pvc,与之绑定的pv随之释放,但此时的pv还不能立刻被其他pvc绑定,因为之前的pvc写入的数据可能还留在该存储资源上

资源回收
上面说过pvc解绑后对应的pv还不能马上使用,遗留数据需要先处理。pv的处理方式由persistentVolumeReclaimPolicy决定:Retain(保留数据,需管理员手工处理)、Delete(删除pv及后端存储上的数据)、Recycle(简单清空数据,已废弃,仅NFS和HostPath支持)。要理解回收过程,还需要对存储资源供应和使用的整体流程有一个认识
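如果希望pvc删除后保留数据以便手工处理,可以把pv的回收策略改为Retain,下面是一个示意命令(pv1为假设的pv名称):

kubectl patch pv pv1 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'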

静态模式资源供应流程(图略)

动态模式资源供应流程(图略)

8、StorageClass配置

StorageClass是对存储资源类别的抽象定义,配合动态供应机制,无须管理员手动创建pv,而是由系统自动创建pv并与pvc绑定,实现存储资源的动态供应
StorageClass一旦被创建就无法更改,包括名字、后端存储提供者、后端存储的相关参数配置,如果必须修改,只能删除后重建

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2

下面简单解释各个参数的意思

  • provisioner
    存储资源的提供者,Kubernetes内置的Provisioner以kubernetes.io/开头,也可以使用外部Provisioner

  • parameters
    provisioner的参数设置

下面以GlusterFS为例对StorageClass的定义作说明

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081"
  clusterid: "xxxxxxxxxxxxxxxxxx"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"   

简单说明一下上面的参数意义
resturl: Heketi服务地址
secretNamespace、secretName: 保存GlusterFS REST服务(Heketi)访问密码的Secret资源对象所在的命名空间和名字
gidMin、gidMax: 该StorageClass动态供应的pv所使用的GID范围

启用默认StorageClass后可以减少pvc的重复配置工作,那么如何设置默认StorageClass呢?

两步走

  • kube-apiserver服务添加启动参数--enable-admission-plugins=...,DefaultStorageClass,即在准入控制插件列表中加入DefaultStorageClass
  • 在想要配置成默认StorageClass的某个StorageClass配置中添加annotation
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gold
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/xxx   
parameters:
  type: pd-ssd

kubectl get sc查看

NAME             PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
gold (default)   kubernetes.io/gce-pd   Delete          Immediate           false                  19s
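除了在yaml中添加annotation,也可以用kubectl patch直接把已有的StorageClass标记为默认(以上面的gold为例):

kubectl patch storageclass gold -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'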

9、存储实战

以GlusterFS为例创建共享存储,内容包括部署GlusterFS和Heketi服务、定义StorageClass、用户申请pvc、Pod使用存储资源。本文采用Heketi来管理GlusterFS,先下载所需要的资源
wget https://github.com/heketi/heketi/releases/download/v10.0.0/heketi-v10.0.0.linux.amd64.tar.gz

9.1、各节点创建GlusterFS客户端

yum -y install glusterfs glusterfs-fuse

modprobe dm_snapshot
modprobe dm_mirror
modprobe dm_thin_pool

部署GlusterFS容器集群前需要让集群允许运行特权容器,为kube-apiserver添加启动参数--allow-privileged=true

因为需要在每个节点上都部署GlusterFS容器服务,所以要给每一个工作节点打上标签,再通过DaemonSet把服务部署到这些节点上。GlusterFS集群至少需要三个节点,而这里二进制部署的集群当时只有两个工作节点,master节点完全用于管理集群,因此需要把master节点也加入为工作节点

具体添加可以参考文章 /kubernetesjiao-cheng-er-jin-zhi-fang-shi-an-zhuang/

将集群管理节点添加为工作节点后查看节点
kubectl get nodes

192.168.0.158 Ready 6m3s v1.19.0
192.168.0.159 Ready 119d v1.19.0
192.168.0.160 Ready 119d v1.19.0

给各个节点打标签(以160为例,158、159两个节点同样需要打上该标签)
kubectl label nodes 192.168.0.160 storagenode=glusterfs

kubectl get nodes --show-labels
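也可以按标签过滤,确认需要部署GlusterFS的节点都已经打上storagenode=glusterfs标签:

kubectl get nodes -l storagenode=glusterfs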

9.2、创建glusterfs容器服务

通过DaemonSet在打了标签的各个节点上创建glusterfs容器服务
cat glusterfs-daemonset.yaml

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: glusterfs
  labels:
    glusterfs: deployment
  annotations:
    description: GlusterFS Daemon Set
    tags: glusterfs
spec:
  selector:
    matchLabels: 
      glusterfs-node: daemonset
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs-node: daemonset
    spec:
      nodeSelector:
        storagenode: glusterfs
      hostNetwork: true
      containers:
        - image: 'gluster/gluster-centos:latest'
          imagePullPolicy: Always
          name: glusterfs
          volumeMounts:
            - name: glusterfs-heketi
              mountPath: /var/lib/heketi
            - name: glusterfs-run
              mountPath: /run
            - name: glusterfs-lvm
              mountPath: /run/lvm
            - name: glusterfs-etc
              mountPath: /etc/glusterfs
            - name: glusterfs-logs
              mountPath: /var/log/glusterfs
            - name: glusterfs-config
              mountPath: /var/lib/glusterd
            - name: glusterfs-dev
              mountPath: /dev
            - name: glusterfs-cgroup
              mountPath: /sys/fs/cgroup
          securityContext:
            capabilities: {}
            privileged: true
          readinessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 60
            exec:
              command:
                - /bin/bash
                - '-c'
                - systemctl status glusterd.service
          livenessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 60
            exec:
              command:
                - /bin/bash
                - '-c'
                - systemctl status glusterd.service
      volumes:
        - name: glusterfs-heketi
          hostPath:
            path: /var/lib/heketi
        - name: glusterfs-run
        - name: glusterfs-lvm
          hostPath:
            path: /run/lvm
        - name: glusterfs-etc
          hostPath:
            path: /etc/glusterfs
        - name: glusterfs-logs
          hostPath:
            path: /var/log/glusterfs
        - name: glusterfs-config
          hostPath:
            path: /var/lib/glusterd
        - name: glusterfs-dev
          hostPath:
            path: /dev
        - name: glusterfs-cgroup
          hostPath:
            path: /sys/fs/cgroup

创建服务
kubectl create -f glusterfs-daemonset.yaml
查看服务
kubectl get po

NAME READY STATUS RESTARTS AGE
glusterfs-5nsq5 1/1 Running 0 5m23s
glusterfs-jb7tt 1/1 Running 0 5m23s
glusterfs-xjv6d 1/1 Running 0 5m23s

9.3、部署heketi服务

GlusterFS服务容器创建后需要对其进行配置并组成集群,这里使用GlusterFS的管理框架Heketi来完成配置

在部署Heketi前先创建对应的ServiceAccount
cat heketi-service-account.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: heketi-service-account

创建serviceaccount
kubectl create -f heketi-service-account.yaml

为Heketi创建对应的集群权限,并用Heketi的配置文件创建secret
kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account
kubectl create secret generic heketi-config-secret --from-file=./heketi.yaml
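heketi-config-secret中保存的是Heketi自身的配置文件,后面heketi-cli使用的admin账号和'My Secret'密码就来自这份配置。下面是一个最小化的配置内容示意(字段名参考heketi发行包自带的heketi.json模板,具体以实际下载的版本为准):

{
  "port": "8080",
  "use_auth": true,
  "jwt": {
    "admin": { "key": "My Secret" },
    "user": { "key": "My Secret" }
  },
  "glusterfs": {
    "executor": "kubernetes",
    "db": "/var/lib/heketi/heketi.db",
    "fstab": "/var/lib/heketi/fstab"
  }
}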

初始化部署heketi服务
cat heketi-bootstrap.yaml

kind: List
apiVersion: v1
items:
  - kind: Service
    apiVersion: v1
    metadata:
      name: deploy-heketi
      labels:
        glusterfs: heketi-service
        deploy-heketi: support
      annotations:
        description: Exposes Heketi Service
    spec:
      selector:
        name: deploy-heketi
      ports:
        - name: deploy-heketi
          port: 8080
          targetPort: 8080
  - kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: deploy-heketi
      labels:
        glusterfs: heketi-deployment
        deploy-heketi: deployment
      annotations:
        description: Defines how to deploy Heketi
    spec:
      replicas: 1
      selector:
        matchLabels:
          glusterfs: heketi-pod
      template:
        metadata:
          name: deploy-heketi
          labels:
            name: deploy-heketi
            glusterfs: heketi-pod
            deploy-heketi: pod
        spec:
          serviceAccountName: heketi-service-account
          containers:
            - image: 'heketi/heketi:dev'
              imagePullPolicy: Always
              name: deploy-heketi
              env:
                - name: HEKETI_EXECUTOR
                  value: kubernetes
                - name: HEKETI_DB_PATH
                  value: /var/lib/heketi/heketi.db
                - name: HEKETI_FSTAB
                  value: /var/lib/heketi/fstab
                - name: HEKETI_SNAPSHOT_LIMIT
                  value: '14'
                - name: HEKETI_KUBE_GLUSTER_DAEMONSET
                  value: 'y'
              ports:
                - containerPort: 8080
              volumeMounts:
                - name: db
                  mountPath: /var/lib/heketi
                - name: config
                  mountPath: /etc/heketi
              readinessProbe:
                timeoutSeconds: 3
                initialDelaySeconds: 3
                httpGet:
                  path: /hello
                  port: 8080
              livenessProbe:
                timeoutSeconds: 3
                initialDelaySeconds: 30
                httpGet:
                  path: /hello
                  port: 8080
          volumes:
            - name: db
            - name: config
              secret:
                secretName: heketi-config-secret

kubectl create -f heketi-bootstrap.yaml

cd /tmp/heketi-client/bin && cp heketi-cli /usr/local/bin/

kubectl get svc查看deploy-heketi这个Service的信息

NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
deploy-heketi   NodePort   169.169.251.88   <none>        8080:42774/TCP   3h26m

export HEKETI_CLI_SERVER=http://192.168.0.160:42774
heketi的Pod被调度到了160节点上。这里的地址只要是apiserver所在主机能访问到的Heketi地址即可,可以是ClusterIP+端口、Pod IP+端口,也可以是节点IP+NodePort端口。这里使用节点IP+NodePort端口,是因为集群部署出了点问题,在158(apiserver所在节点)上无法通过ClusterIP或Pod IP访问160上的deploy-heketi服务,猜测是kubernetes集群部署有问题,后面准备再部署一遍
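可以先用Heketi的/hello接口确认该地址可达,再继续后面的操作:

curl http://192.168.0.160:42774/hello

正常情况下会返回 Hello from Heketi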

配置集群拓扑文件
cat topology.json

{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.0.158"
              ],
              "storage": [
                "192.168.0.158"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.0.159"
              ],
              "storage": [
                "192.168.0.159"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.0.160"
              ],
              "storage": [
                "192.168.0.160"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        }
      ]
    }
  ]
}

ps:manage字段本来应该写主机名,表示Heketi要到该主机节点上去操作gluster容器服务,但本文的kubernetes集群节点注册时使用的是IP,没有做DNS解析,所以这里也直接写IP;如果要使用主机名,则需要修改kubernetes集群配置,让节点以主机名注册

heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' topology load --json=topology.json
ps:注意这里部署的Heketi启用了账号密码认证,不能直接使用命令heketi-cli topology load --json=topology.json,否则会报错Error: Invalid JWT token: Token missing iss claim

这里报了个错

ERROR 2021/05/07 08:50:17 heketi/pkg/remoteexec/kube/target.go:145:kube.TargetDaemonSet.GetTargetPod: Unable to find a GlusterFS pod on host 192.168.0.158 with a label key glusterfs-node

这是因为节点上glusterfs的pod所带的label不符合Heketi查找Pod时的要求,修改前面glusterfs-daemonset.yaml中的template标签

....
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs-node: pod
....

kubectl apply -f glusterfs-daemonset.yaml更新服务
再次执行上面的heketi-cli topology load命令进行配置

Creating cluster ... ID: a0c5178e4bf4b0aba55cc50e776d115c
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node 192.168.0.158 ... ID: 54767a0eec3a860e87b7542a9376f5da
Adding device /dev/sdb ... OK
Creating node 192.168.0.159 ... ID: 37f937aa83aa2f3b3c3c5a8a971c3535
Adding device /dev/sdb ... OK
Creating node 192.168.0.160 ... ID: 9b64da7ca8c4e779452269e7c8bf8d60
Adding device /dev/sdb ... OK

heketi-cli -s $HEKETI_CLI_SERVER --user admin --secret 'My Secret' topology info查看

Cluster Id: a0c5178e4bf4b0aba55cc50e776d115c

    File:  true
    Block: true

    Volumes:


    Nodes:

        Node Id: 37f937aa83aa2f3b3c3c5a8a971c3535
        State: online
        Cluster Id: a0c5178e4bf4b0aba55cc50e776d115c
        Zone: 1
        Management Hostnames: 192.168.0.159
        Storage Hostnames: 192.168.0.159
        Devices:
                Id:bb52f3bad91c1369184408aee3cce48e   State:online    Size (GiB):19      Used (GiB):0       Free (GiB):19      
                        Known Paths: /dev/sdb

                        Bricks:

        Node Id: 54767a0eec3a860e87b7542a9376f5da
        State: online
        Cluster Id: a0c5178e4bf4b0aba55cc50e776d115c
        Zone: 1
        Management Hostnames: 192.168.0.158
        Storage Hostnames: 192.168.0.158
        Devices:
                Id:fc7c8566f4901b0fc02168fec7f34707   State:online    Size (GiB):19      Used (GiB):0       Free (GiB):19      
                        Known Paths: /dev/sdb

                        Bricks:

        Node Id: 9b64da7ca8c4e779452269e7c8bf8d60
        State: online
        Cluster Id: a0c5178e4bf4b0aba55cc50e776d115c
        Zone: 1
        Management Hostnames: 192.168.0.160
        Storage Hostnames: 192.168.0.160
        Devices:
                Id:8204b8e6fdba12e4b5e6308934c56b47   State:online    Size (GiB):19      Used (GiB):0       Free (GiB):19      
                        Known Paths: /dev/sdb

                        Bricks:

从上面打印出的信息可以看到,此时还没有创建任何volume和brick

yum install device-mapper* -y
# 生成并执行heketi-storage.json,创建heketidbstorage卷并把Heketi数据库迁移到该卷中
heketi-cli setup-openshift-heketi-storage --user admin --secret 'My Secret'
kubectl create -f heketi-storage.json
# 清理初始化用的deploy-heketi相关资源
kubectl delete all,service,jobs,deployment,secret --selector="deploy-heketi"

创建持久化Heketi
cat heketi-deployment.yaml

kind: List
apiVersion: v1
items:
  - kind: Secret
    apiVersion: v1
    metadata:
      name: heketi-db-backup
      labels:
        glusterfs: heketi-db
        heketi: db
    data: {}
    type: Opaque
  - kind: Service
    apiVersion: v1
    metadata:
      name: heketi
      labels:
        glusterfs: heketi-service
        deploy-heketi: support
      annotations:
        description: Exposes Heketi Service
    spec:
      selector:
        name: heketi
      ports:
        - name: heketi
          port: 8080
          targetPort: 8080
  - kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: heketi
      labels:
        glusterfs: heketi-deployment
      annotations:
        description: Defines how to deploy Heketi
    spec:
      replicas: 1
      selector:
        matchLabels:
          glusterfs: heketi-pod
      template:
        metadata:
          name: heketi
          labels:
            name: heketi
            glusterfs: heketi-pod
        spec:
          serviceAccountName: heketi-service-account
          containers:
            - image: 'heketi/heketi:dev'
              imagePullPolicy: Always
              name: heketi
              env:
                - name: HEKETI_EXECUTOR
                  value: kubernetes
                - name: HEKETI_DB_PATH
                  value: /var/lib/heketi/heketi.db
                - name: HEKETI_FSTAB
                  value: /var/lib/heketi/fstab
                - name: HEKETI_SNAPSHOT_LIMIT
                  value: '14'
                - name: HEKETI_KUBE_GLUSTER_DAEMONSET
                  value: 'y'
              ports:
                - containerPort: 8080
              volumeMounts:
                - mountPath: /backupdb
                  name: heketi-db-secret
                - name: db
                  mountPath: /var/lib/heketi
                - name: config
                  mountPath: /etc/heketi
              readinessProbe:
                timeoutSeconds: 3
                initialDelaySeconds: 3
                httpGet:
                  path: /hello
                  port: 8080
              livenessProbe:
                timeoutSeconds: 3
                initialDelaySeconds: 30
                httpGet:
                  path: /hello
                  port: 8080
          volumes:
            - name: db
              glusterfs:
                endpoints: heketi-storage-endpoints
                path: heketidbstorage
            - name: heketi-db-secret
              secret:
                secretName: heketi-db-backup
            - name: config
              secret:
                secretName: heketi-config-secret

kubectl create -f heketi-deployment.yaml
kubectl get svc查看最新service

[root@node158 kubernetes-yaml]# kubectl get svc
NAME     TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
heketi   NodePort   169.169.211.30   <none>        8080:36290/TCP   5m16s

export HEKETI_CLI_SERVER=http://192.168.0.160:36290
curl http://192.168.0.160:36290/hello

Hello from Heketi

heketi-cli topology info --user admin --secret 'My Secret'


Cluster Id: de33190e73a211bf33d54c75afabe813

    File:  true
    Block: true

    Volumes:

        Name: heketidbstorage
        Size: 2
        Id: 118468d71061d1f1c61ddd034fb6b77c
        Cluster Id: de33190e73a211bf33d54c75afabe813
        Mount: 192.168.0.159:heketidbstorage
        Mount Options: backup-volfile-servers=192.168.0.158,192.168.0.160
        Durability Type: replicate
        Replica: 3
        Snapshot: Disabled

                Bricks:
                        Id: 042752cfed5354b218211e192fcfd57e
                        Path: /var/lib/heketi/mounts/vg_793cf14603f8851a8df39bbc0753bb5c/brick_042752cfed5354b218211e192fcfd57e/brick
                        Size (GiB): 2
                        Node: 2eec312c2524494c5cc202e806f20fc1
                        Device: 793cf14603f8851a8df39bbc0753bb5c

                        Id: 3792218112df18285dfd74d3891d3157
                        Path: /var/lib/heketi/mounts/vg_0e014c7353bf225cf6e779250b80ffae/brick_3792218112df18285dfd74d3891d3157/brick
                        Size (GiB): 2
                        Node: df1936dc1f82d9e08588926bd818fcdb
                        Device: 0e014c7353bf225cf6e779250b80ffae

                        Id: e4c2cef8b2ed1991dbe93217e3da6576
                        Path: /var/lib/heketi/mounts/vg_25a7a614c4eb7b9c37517537335f7e3e/brick_e4c2cef8b2ed1991dbe93217e3da6576/brick
                        Size (GiB): 2
                        Node: 8b7b86ebad3eb1fcf2f553dbb9cbeb9b
                        Device: 25a7a614c4eb7b9c37517537335f7e3e



    Nodes:

        Node Id: 2eec312c2524494c5cc202e806f20fc1
        State: online
        Cluster Id: de33190e73a211bf33d54c75afabe813
        Zone: 1
        Management Hostnames: 192.168.0.159
        Storage Hostnames: 192.168.0.159
        Devices:
                Id:793cf14603f8851a8df39bbc0753bb5c   State:online    Size (GiB):59      Used (GiB):2       Free (GiB):57      
                        Known Paths: /dev/sdb

                        Bricks:
                                Id:042752cfed5354b218211e192fcfd57e   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_793cf14603f8851a8df39bbc0753bb5c/brick_042752cfed5354b218211e192fcfd57e/brick

        Node Id: 8b7b86ebad3eb1fcf2f553dbb9cbeb9b
        State: online
        Cluster Id: de33190e73a211bf33d54c75afabe813
        Zone: 1
        Management Hostnames: 192.168.0.158
        Storage Hostnames: 192.168.0.158
        Devices:
                Id:25a7a614c4eb7b9c37517537335f7e3e   State:online    Size (GiB):59      Used (GiB):2       Free (GiB):57      
                        Known Paths: /dev/sdb

                        Bricks:
                                Id:e4c2cef8b2ed1991dbe93217e3da6576   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_25a7a614c4eb7b9c37517537335f7e3e/brick_e4c2cef8b2ed1991dbe93217e3da6576/brick

        Node Id: df1936dc1f82d9e08588926bd818fcdb
        State: online
        Cluster Id: de33190e73a211bf33d54c75afabe813
        Zone: 1
        Management Hostnames: 192.168.0.160
        Storage Hostnames: 192.168.0.160
        Devices:
                Id:0e014c7353bf225cf6e779250b80ffae   State:online    Size (GiB):59      Used (GiB):2       Free (GiB):57      
                        Known Paths: /dev/sdb

                        Bricks:
                                Id:3792218112df18285dfd74d3891d3157   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_0e014c7353bf225cf6e779250b80ffae/brick_3792218112df18285dfd74d3891d3157/brick

从上面反馈的信息可以看到,已经为Heketi服务创建了名为heketidbstorage的复制卷及对应的brick,Heketi的数据库就持久化在这个卷中

9.4、定义相关的StorageClass

准备工作完成了,现在定义StorageClass

cat storage-gluster-heketi.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.0.160:36290"
  clusterid: "de33190e73a211bf33d54c75afabe813"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "My Secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:3"

这里的resturl、clusterid以及认证相关配置很关键,否则下面创建pvc时无法正确创建brick和volume

9.5、创建相关的pvc

cat pvc-gluster-heketi.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-gluster-heketi
spec:
  storageClassName: gluster-heketi
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

kubectl create -f pvc-gluster-heketi.yaml
kubectl get pvc

NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
pvc-gluster-heketi   Bound    pvc-5fb57f87-bbb5-4935-ac94-4932de0e3a2f   2Gi        RWO            gluster-heketi   15s

从反馈的信息可以看出pvc已经成功创建,并与系统自动创建的pv完成了绑定(申请的是1Gi,实际分配了2Gi)

kubectl get pv查看自动创建的pv
kubectl describe pv pvc-5fb57f87-bbb5-4935-ac94-4932de0e3a2f

Name:            pvc-5fb57f87-bbb5-4935-ac94-4932de0e3a2f
Labels:          <none>
Annotations:     Description: Gluster-Internal: Dynamically provisioned PV
                 gluster.kubernetes.io/heketi-volume-id: 63fa939de1157e500b37d599c1bf0cd3
                 gluster.org/type: file
                 kubernetes.io/createdby: heketi-dynamic-provisioner
                 pv.beta.kubernetes.io/gid: 40000
                 pv.kubernetes.io/bound-by-controller: yes
                 pv.kubernetes.io/provisioned-by: kubernetes.io/glusterfs
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    gluster-heketi
Status:          Bound
Claim:           default/pvc-gluster-heketi
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        2Gi
Node Affinity:   <none>
Message:         
Source:
    Type:                Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
    EndpointsName:       glusterfs-dynamic-5fb57f87-bbb5-4935-ac94-4932de0e3a2f
    EndpointsNamespace:  default
    Path:                vol_63fa939de1157e500b37d599c1bf0cd3
    ReadOnly:            false
Events:                  <none>

可以看到pv引用的StorageClass,pv状态,容量,回收策略以及glusterfs挂载点等

heketi-cli topology info --user admin --secret 'My Secret'

Cluster Id: de33190e73a211bf33d54c75afabe813

    File:  true
    Block: true

    Volumes:

        Name: heketidbstorage
        Size: 2
        Id: 118468d71061d1f1c61ddd034fb6b77c
        Cluster Id: de33190e73a211bf33d54c75afabe813
        Mount: 192.168.0.159:heketidbstorage
        Mount Options: backup-volfile-servers=192.168.0.158,192.168.0.160
        Durability Type: replicate
        Replica: 3
        Snapshot: Disabled

                Bricks:
                        Id: 042752cfed5354b218211e192fcfd57e
                        Path: /var/lib/heketi/mounts/vg_793cf14603f8851a8df39bbc0753bb5c/brick_042752cfed5354b218211e192fcfd57e/brick
                        Size (GiB): 2
                        Node: 2eec312c2524494c5cc202e806f20fc1
                        Device: 793cf14603f8851a8df39bbc0753bb5c

                        Id: 3792218112df18285dfd74d3891d3157
                        Path: /var/lib/heketi/mounts/vg_0e014c7353bf225cf6e779250b80ffae/brick_3792218112df18285dfd74d3891d3157/brick
                        Size (GiB): 2
                        Node: df1936dc1f82d9e08588926bd818fcdb
                        Device: 0e014c7353bf225cf6e779250b80ffae

                        Id: e4c2cef8b2ed1991dbe93217e3da6576
                        Path: /var/lib/heketi/mounts/vg_25a7a614c4eb7b9c37517537335f7e3e/brick_e4c2cef8b2ed1991dbe93217e3da6576/brick
                        Size (GiB): 2
                        Node: 8b7b86ebad3eb1fcf2f553dbb9cbeb9b
                        Device: 25a7a614c4eb7b9c37517537335f7e3e


        Name: vol_63fa939de1157e500b37d599c1bf0cd3
        Size: 2
        Id: 63fa939de1157e500b37d599c1bf0cd3
        Cluster Id: de33190e73a211bf33d54c75afabe813
        Mount: 192.168.0.159:vol_63fa939de1157e500b37d599c1bf0cd3
        Mount Options: backup-volfile-servers=192.168.0.158,192.168.0.160
        Durability Type: replicate
        Replica: 3
        Snapshot: Enabled
        Snapshot Factor: 1.00

                Bricks:
                        Id: 2267aa0d6578a00709cbb232351b7632
                        Path: /var/lib/heketi/mounts/vg_793cf14603f8851a8df39bbc0753bb5c/brick_2267aa0d6578a00709cbb232351b7632/brick
                        Size (GiB): 2
                        Node: 2eec312c2524494c5cc202e806f20fc1
                        Device: 793cf14603f8851a8df39bbc0753bb5c

                        Id: 4311c7a5f03c13302bf5205105830ab9
                        Path: /var/lib/heketi/mounts/vg_25a7a614c4eb7b9c37517537335f7e3e/brick_4311c7a5f03c13302bf5205105830ab9/brick
                        Size (GiB): 2
                        Node: 8b7b86ebad3eb1fcf2f553dbb9cbeb9b
                        Device: 25a7a614c4eb7b9c37517537335f7e3e

                        Id: 4d07c1ea6b0b8b35c7d2f243e6435c88
                        Path: /var/lib/heketi/mounts/vg_0e014c7353bf225cf6e779250b80ffae/brick_4d07c1ea6b0b8b35c7d2f243e6435c88/brick
                        Size (GiB): 2
                        Node: df1936dc1f82d9e08588926bd818fcdb
                        Device: 0e014c7353bf225cf6e779250b80ffae



    Nodes:

        Node Id: 2eec312c2524494c5cc202e806f20fc1
        State: online
        Cluster Id: de33190e73a211bf33d54c75afabe813
        Zone: 1
        Management Hostnames: 192.168.0.159
        Storage Hostnames: 192.168.0.159
        Devices:
                Id:793cf14603f8851a8df39bbc0753bb5c   State:online    Size (GiB):59      Used (GiB):4       Free (GiB):55      
                        Known Paths: /dev/sdb

                        Bricks:
                                Id:042752cfed5354b218211e192fcfd57e   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_793cf14603f8851a8df39bbc0753bb5c/brick_042752cfed5354b218211e192fcfd57e/brick
                                Id:2267aa0d6578a00709cbb232351b7632   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_793cf14603f8851a8df39bbc0753bb5c/brick_2267aa0d6578a00709cbb232351b7632/brick

        Node Id: 8b7b86ebad3eb1fcf2f553dbb9cbeb9b
        State: online
        Cluster Id: de33190e73a211bf33d54c75afabe813
        Zone: 1
        Management Hostnames: 192.168.0.158
        Storage Hostnames: 192.168.0.158
        Devices:
                Id:25a7a614c4eb7b9c37517537335f7e3e   State:online    Size (GiB):59      Used (GiB):4       Free (GiB):55      
                        Known Paths: /dev/sdb

                        Bricks:
                                Id:4311c7a5f03c13302bf5205105830ab9   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_25a7a614c4eb7b9c37517537335f7e3e/brick_4311c7a5f03c13302bf5205105830ab9/brick
                                Id:e4c2cef8b2ed1991dbe93217e3da6576   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_25a7a614c4eb7b9c37517537335f7e3e/brick_e4c2cef8b2ed1991dbe93217e3da6576/brick

        Node Id: df1936dc1f82d9e08588926bd818fcdb
        State: online
        Cluster Id: de33190e73a211bf33d54c75afabe813
        Zone: 1
        Management Hostnames: 192.168.0.160
        Storage Hostnames: 192.168.0.160
        Devices:
                Id:0e014c7353bf225cf6e779250b80ffae   State:online    Size (GiB):59      Used (GiB):4       Free (GiB):55      
                        Known Paths: /dev/sdb

                        Bricks:
                                Id:3792218112df18285dfd74d3891d3157   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_0e014c7353bf225cf6e779250b80ffae/brick_3792218112df18285dfd74d3891d3157/brick
                                Id:4d07c1ea6b0b8b35c7d2f243e6435c88   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_0e014c7353bf225cf6e779250b80ffae/brick_4d07c1ea6b0b8b35c7d2f243e6435c88/brick

从上面的反馈可以看到新创建了一个volume,它的名字与前面kubectl describe pv输出中Path字段的值一致
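也可以用heketi-cli单独查看这个volume的详细信息进行核对(volume id取自上面pv注解中的gluster.kubernetes.io/heketi-volume-id):

heketi-cli volume info 63fa939de1157e500b37d599c1bf0cd3 --user admin --secret 'My Secret'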

下面就可以在Pod中通过Volume定义挂载这个pvc了

9.6、pod挂载pvc

cat pod-use-pvc.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-use-pvc
spec:
  containers:
  - name: pod-use-pvc
    image: busybox
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: gluster-volume
      mountPath: "/pv-data"
      readOnly: false
  volumes:
  - name: gluster-volume
    persistentVolumeClaim:
      claimName: pvc-gluster-heketi

kubectl create -f pod-use-pvc.yaml

下面进行数据测试
kubectl exec -it pod-use-pvc -- sh
cd /pv-data && mkdir wangteng

退出容器回到宿主机
在宿主机上直接挂载pv所对应的GlusterFS volume进行验证

Mount: 192.168.0.159:vol_63fa939de1157e500b37d599c1bf0cd3

mount -t glusterfs 192.168.0.159:vol_63fa939de1157e500b37d599c1bf0cd3 /mnt/
df -h

192.168.0.159:vol_63fa939de1157e500b37d599c1bf0cd3 2.0G 53M 2.0G 3% /mnt

cd /mnt && ls -l

drwxr-sr-x 2 root 40000 6 May 12 16:53 wangteng

可以看到容器里创建的目录已经写入到卷中,也就是说数据的确写入了glusterfs复制卷

前面是使用单个Pod来挂载pvc,但生产环境中往往是通过各类控制器来部署Pod,而且经常会有多个副本,下面以多副本的nginx Deployment来进行测试
创建pvc
cat pvc-deployment-heketi.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx-html
spec:
  accessModes: 
    - ReadWriteMany
  storageClassName: gluster-heketi
  resources:
    requests:
      storage: 500Mi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx-conf
spec:
  accessModes: 
    - ReadWriteMany
  storageClassName: gluster-heketi
  resources:
    requests:
      storage: 10Mi

kubectl create -f pvc-deployment-heketi.yaml创建pvc
kubectl get pvc,pv 查看创建的pvc和自动生成的pv
kubectl describe pv <pv名称> 可以看到该pv的详细信息,可与heketi-cli命令查看的结果作比较

deployment使用pvc
cat deployment-use-pvc.yaml

apiVersion: apps/v1
kind: Deployment 
metadata: 
  name: nginx-gfs
spec: 
  replicas: 3
  selector:
    matchLabels:
      name: nginx
  template: 
    metadata: 
      labels: 
        name: nginx 
    spec: 
      containers: 
        - name: nginx 
          image: nginx 
          imagePullPolicy: IfNotPresent
          ports: 
            - containerPort: 80
          volumeMounts:
            - name: nginx-gfs-html
              mountPath: "/usr/share/nginx/html"
            - name: nginx-gfs-conf
              mountPath: "/etc/nginx/conf.d"
      volumes:
      - name: nginx-gfs-html
        persistentVolumeClaim:
          claimName: glusterfs-nginx-html
      - name: nginx-gfs-conf
        persistentVolumeClaim:
          claimName: glusterfs-nginx-conf

kubectl get po,pvc,pv | grep nginx

pod/nginx-gfs-86bd594d4f-6g94v   1/1     Running   0          8m45s
pod/nginx-gfs-86bd594d4f-jpbdw   1/1     Running   0          8m45s
pod/nginx-gfs-86bd594d4f-szbmc   1/1     Running   0          8m45s
persistentvolumeclaim/glusterfs-nginx-conf   Bound    pvc-4b6a9321-9f60-47d9-8aa5-50f2bc5c07d7   1Gi        RWX            gluster-heketi   25m
persistentvolumeclaim/glusterfs-nginx-html   Bound    pvc-8a62c19c-1c65-4648-8066-e8689f502ff8   1Gi        RWX            gluster-heketi   25m
persistentvolume/pvc-4b6a9321-9f60-47d9-8aa5-50f2bc5c07d7   1Gi        RWX            Delete           Bound    default/glusterfs-nginx-conf   gluster-heketi            25m
persistentvolume/pvc-8a62c19c-1c65-4648-8066-e8689f502ff8   1Gi        RWX            Delete           Bound    default/glusterfs-nginx-html   gluster-heketi            25m

从上面反馈的信息可以看出,pvc申请的空间不足1Gi时会按1Gi来分配(500Mi和10Mi的请求都得到了1Gi的pv)

kubectl exec -it nginx-gfs-86bd594d4f-6g94v -- df -h
kubectl exec -it nginx-gfs-86bd594d4f-jpbdw -- df -h
kubectl exec -it nginx-gfs-86bd594d4f-szbmc -- df -h
分别查看3个pod的挂载情况

[root@node158 kubernetes-yaml]# kubectl exec -it nginx-gfs-86bd594d4f-6g94v   -- df -h
Filesystem                                          Size  Used Avail Use% Mounted on
overlay                                              20G  9.1G   11G  46% /
tmpfs                                                64M     0   64M   0% /dev
tmpfs                                               1.4G     0  1.4G   0% /sys/fs/cgroup
/dev/sda2                                            20G  9.1G   11G  46% /etc/hosts
shm                                                  64M     0   64M   0% /dev/shm
192.168.0.159:vol_d755c4cfeb3513e260b5a9562c5e8677 1014M   43M  972M   5% /etc/nginx/conf.d
192.168.0.159:vol_0cd769ff561546eb834d6ef341c33226 1014M   43M  972M   5% /usr/share/nginx/html
tmpfs                                               1.4G   12K  1.4G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                               1.4G     0  1.4G   0% /proc/acpi
tmpfs                                               1.4G     0  1.4G   0% /proc/scsi
tmpfs                                               1.4G     0  1.4G   0% /sys/firmware
[root@node158 kubernetes-yaml]# kubectl exec -it nginx-gfs-86bd594d4f-jpbdw    -- df -h
Filesystem                                          Size  Used Avail Use% Mounted on
overlay                                              20G  8.3G   12G  42% /
tmpfs                                                64M     0   64M   0% /dev
tmpfs                                               1.4G     0  1.4G   0% /sys/fs/cgroup
/dev/sda2                                            20G  8.3G   12G  42% /etc/hosts
shm                                                  64M     0   64M   0% /dev/shm
192.168.0.159:vol_d755c4cfeb3513e260b5a9562c5e8677 1014M   43M  972M   5% /etc/nginx/conf.d
192.168.0.158:vol_0cd769ff561546eb834d6ef341c33226 1014M   43M  972M   5% /usr/share/nginx/html
tmpfs                                               1.4G   12K  1.4G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                               1.4G     0  1.4G   0% /proc/acpi
tmpfs                                               1.4G     0  1.4G   0% /proc/scsi
tmpfs                                               1.4G     0  1.4G   0% /sys/firmware
[root@node158 kubernetes-yaml]# kubectl exec -it nginx-gfs-86bd594d4f-szbmc     -- df -h
Filesystem                                          Size  Used Avail Use% Mounted on
overlay                                              20G  8.4G   12G  42% /
tmpfs                                                64M     0   64M   0% /dev
tmpfs                                               1.4G     0  1.4G   0% /sys/fs/cgroup
/dev/sda2                                            20G  8.4G   12G  42% /etc/hosts
shm                                                  64M     0   64M   0% /dev/shm
192.168.0.159:vol_d755c4cfeb3513e260b5a9562c5e8677 1014M   43M  972M   5% /etc/nginx/conf.d
192.168.0.160:vol_0cd769ff561546eb834d6ef341c33226 1014M   43M  972M   5% /usr/share/nginx/html
tmpfs                                               1.4G   12K  1.4G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                                               1.4G     0  1.4G   0% /proc/acpi
tmpfs                                               1.4G     0  1.4G   0% /proc/scsi
tmpfs                                               1.4G     0  1.4G   0% /sys/firmware

从上面反馈的信息可以看到,三个分布在不同Node上的Pod挂载的是同一个volume,这也解释了为什么这次给多副本Deployment部署的nginx应用挂载的pvc使用了ReadWriteMany访问模式,该模式表示pvc绑定的pv可以被多个Node以读写方式挂载

在宿主机节点上挂载该volume进行验证
mount -t glusterfs 192.168.0.160:vol_0cd769ff561546eb834d6ef341c33226 /mnt
df -h

192.168.0.160:vol_0cd769ff561546eb834d6ef341c33226 1014M 43M 972M 5% /mnt

可以看到在宿主机上已经成功挂载该卷,接下来写入一个测试页面
cd /mnt && echo "test" > index.html

kubectl exec -it nginx-gfs-86bd594d4f-6g94v -- cat /usr/share/nginx/html/index.html
test
[root@node158 mnt]# kubectl exec -it  nginx-gfs-86bd594d4f-jpbdw  -- cat /usr/share/nginx/html/index.html
test
[root@node158 mnt]# kubectl exec -it  nginx-gfs-86bd594d4f-szbmc  -- cat /usr/share/nginx/html/index.html
test

从上面的信息可以看出,宿主机写入的数据的确通过pvc共享到了各个Pod中
