Kubernetes Study Notes: DaemonSet

1. Introduction to DaemonSet

A DaemonSet fits scenarios where every node needs to run one copy of a daemon process, for example:

  • Log collection agents, such as fluentd or logstash
  • Monitoring agents, such as Prometheus Node Exporter
  • Distributed cluster components, such as Ceph MON
  • Essential Kubernetes components, such as the network plugins flannel and calico, and kube-proxy

When Kubernetes is installed, two DaemonSets are already deployed in the kube-system namespace by default: kube-flannel-ds-amd64 and kube-proxy, which provide the flannel overlay network and Service proxying, respectively.

kubectl get ds -n kube-system

NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                   AGE
kube-flannel-ds-amd64   3         3         3       3            3           beta.kubernetes.io/arch=amd64   46d
kube-proxy              3         3         3       3            3           <none>                          46d

kubectl get pods -n kube-system -o wide

NAME                             READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
kube-flannel-ds-amd64-5qxcf      1/1     Running   14         46d     172.19.159.9   node-3   <none>           <none>
kube-flannel-ds-amd64-sfglq      1/1     Running   15         46d     172.19.159.8   node-2   <none>           <none>
kube-flannel-ds-amd64-vjkx8      1/1     Running   20         46d     172.19.159.7   node-1   <none>           <none>
kube-proxy-8gjl7                 1/1     Running   20         46d     172.19.159.7   node-1   <none>           <none>
kube-proxy-pt922                 1/1     Running   15         46d     172.19.159.8   node-2   <none>           <none>
kube-proxy-zldlm                 1/1     Running   13         46d     172.19.159.9   node-3   <none>           <none>

The two listings above show that each node runs exactly one Pod of each component.

2. Defining a DaemonSet

The following defines a DaemonSet for the log agent fluentd-elasticsearch, a tool that collects server log data and ships it to Elasticsearch:
cat fluentd-es-daemonset.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch  # must match the selector above
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:          # mount the volumes; the agent reads log data from these directories
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:          # expose host directories to the pod as hostPath volumes
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers 
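A note on the tolerations block above: it allows the DaemonSet to schedule onto master nodes despite their NoSchedule taint. On newer Kubernetes releases (v1.24 and later) control-plane nodes are tainted with node-role.kubernetes.io/control-plane rather than node-role.kubernetes.io/master, so a sketch that tolerates both keys (adjust to the taints actually present on your cluster):

```yaml
      tolerations:
      - key: node-role.kubernetes.io/master          # taint key used by older clusters
        effect: NoSchedule
      - key: node-role.kubernetes.io/control-plane   # taint key used by v1.24+ clusters
        effect: NoSchedule
```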

Create the DaemonSet
kubectl apply -f fluentd-es-daemonset.yaml

Check the DaemonSet
kubectl get daemonsets -n kube-system fluentd-elasticsearch

NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd-elasticsearch   3         3         3       3            3           <none>          17m

Check the pods
kubectl get pods -n kube-system -o wide | grep fluentd

NAME                             READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
fluentd-elasticsearch-ht8l2      1/1     Running   0          23m     10.244.0.12    node-1   <none>           <none>
fluentd-elasticsearch-xn8ks      1/1     Running   0          23m     10.244.1.145   node-2   <none>           <none>
fluentd-elasticsearch-zt72d      1/1     Running   0          23m     10.244.2.100   node-3   <none>           <none>

Inspect the complete DaemonSet object
kubectl get daemonsets -n kube-system fluentd-elasticsearch -o yaml

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{},"labels":{"k8s-app":"fluentd-logging"},"name":"fluentd-elasticsearch","namespace":"kube-system"},"spec":{"selector":{"matchLabels":{"name":"fluentd-elasticsearch"}},"template":{"metadata":{"labels":{"name":"fluentd-elasticsearch"}},"spec":{"containers":[{"image":"quay.io/fluentd_elasticsearch/fluentd:v2.5.2","name":"fluentd-elasticsearch","resources":{"limits":{"memory":"200Mi"},"requests":{"cpu":"100m","memory":"200Mi"}},"volumeMounts":[{"mountPath":"/var/log","name":"varlog"},{"mountPath":"/var/lib/docker/containers","name":"varlibdockercontainers","readOnly":true}]}],"terminationGracePeriodSeconds":30,"tolerations":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"}],"volumes":[{"hostPath":{"path":"/var/log"},"name":"varlog"},{"hostPath":{"path":"/var/lib/docker/containers"},"name":"varlibdockercontainers"}]}}}}
  creationTimestamp: "2020-04-13T02:02:05Z"
  generation: 1
  labels:
    k8s-app: fluentd-logging
  name: fluentd-elasticsearch
  namespace: kube-system
  resourceVersion: "3223686"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/daemonsets/fluentd-elasticsearch
  uid: c5efdf2b-7d2a-11ea-983b-00163e0855fc
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        imagePullPolicy: IfNotPresent
        name: fluentd-elasticsearch
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/log
          name: varlog
        - mountPath: /var/lib/docker/containers
          name: varlibdockercontainers
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always  # restart automatically on failure
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - hostPath:
          path: /var/log
          type: ""
        name: varlog
      - hostPath:
          path: /var/lib/docker/containers
          type: ""
        name: varlibdockercontainers
  templateGeneration: 1
  updateStrategy:   # rolling updates are supported
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 3
  desiredNumberScheduled: 3
  numberAvailable: 3
  numberMisscheduled: 0
  numberReady: 3
  observedGeneration: 1
  updatedNumberScheduled: 3
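The updateStrategy section in the output above can also be set explicitly in the manifest. A minimal sketch of the two strategies a DaemonSet supports (field values here are illustrative):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate    # default: old pods are replaced automatically, node by node
    rollingUpdate:
      maxUnavailable: 1    # at most one node may lack a ready pod during the update
---
spec:
  updateStrategy:
    type: OnDelete         # new pods are created only after old ones are deleted manually
```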

3. Rolling Updates and Rollbacks

Rolling update
Update the image to the latest version
kubectl set image daemonsets fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:latest -n kube-system

Watch the rollout status
kubectl rollout status daemonset -n kube-system fluentd-elasticsearch

Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 1 out of 3 new pods have been updated...
Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 1 out of 3 new pods have been updated...
Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 2 out of 3 new pods have been updated...
Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 2 out of 3 new pods have been updated...
Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 2 out of 3 new pods have been updated...
Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 2 of 3 updated pods are available...
daemon set "fluentd-elasticsearch" successfully rolled out

View the details
kubectl describe daemonsets -n kube-system fluentd-elasticsearch

Name:           fluentd-elasticsearch
Selector:       name=fluentd-elasticsearch
Node-Selector:  <none>
Labels:         k8s-app=fluentd-logging
Annotations:    deprecated.daemonset.template.generation: 2
                kubectl.kubernetes.io/last-applied-configuration:
                  {"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{},"labels":{"k8s-app":"fluentd-logging"},"name":"fluentd-elasticsear...
Desired Number of Nodes Scheduled: 3
Current Number of Nodes Scheduled: 3
Number of Nodes Scheduled with Up-to-date Pods: 3
Number of Nodes Scheduled with Available Pods: 3
Number of Nodes Misscheduled: 0
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  name=fluentd-elasticsearch
  Containers:
   fluentd-elasticsearch:
    Image:      quay.io/fluentd_elasticsearch/fluentd:latest   # now the new image
    Port:       <none>
    Host Port:  <none>
    Limits:
      memory:  200Mi
    Requests:
      cpu:        100m
      memory:     200Mi
    Environment:  <none>
    Mounts:
      /var/lib/docker/containers from varlibdockercontainers (ro)
      /var/log from varlog (rw)
  Volumes:
   varlog:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log
    HostPathType:  
   varlibdockercontainers:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/docker/containers
    HostPathType:  
Events:                 # update events
  Type    Reason            Age   From                  Message
  ----    ------            ----  ----                  -------
  Normal  SuccessfulDelete  25m   daemonset-controller  Deleted pod: fluentd-elasticsearch-ht8l2
  Normal  SuccessfulCreate  24m   daemonset-controller  Created pod: fluentd-elasticsearch-lnpf6
  Normal  SuccessfulDelete  24m   daemonset-controller  Deleted pod: fluentd-elasticsearch-zt72d
  Normal  SuccessfulCreate  24m   daemonset-controller  Created pod: fluentd-elasticsearch-xdrss
  Normal  SuccessfulDelete  23m   daemonset-controller  Deleted pod: fluentd-elasticsearch-xn8ks
  Normal  SuccessfulCreate  23m   daemonset-controller  Created pod: fluentd-elasticsearch-jdmtb

Rollback
List the revision history
kubectl rollout history daemonset -n kube-system fluentd-elasticsearch
REVISION 1 is the initial version

Roll back to revision 1
kubectl rollout undo daemonset -n kube-system fluentd-elasticsearch --to-revision=1

Confirm the rollback succeeded
kubectl describe daemonsets -n kube-system fluentd-elasticsearch

Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  name=fluentd-elasticsearch
  Containers:
   fluentd-elasticsearch:
    Image:      quay.io/fluentd_elasticsearch/fluentd:v2.5.2
    Port:       <none>
    Host Port:  <none>
    Limits:
      memory:  200Mi
    Requests:
      cpu:        100m
      memory:     200Mi
    Environment:  <none>
    Mounts:
      /var/lib/docker/containers from varlibdockercontainers (ro)
      /var/log from varlog (rw)
  Volumes:
   varlog:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log
    HostPathType:  
   varlibdockercontainers:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/docker/containers
    HostPathType:  
Events:
  Type    Reason            Age   From                  Message
  ----    ------            ----  ----                  -------
  Normal  SuccessfulDelete  44m   daemonset-controller  Deleted pod: fluentd-elasticsearch-ht8l2
  Normal  SuccessfulCreate  43m   daemonset-controller  Created pod: fluentd-elasticsearch-lnpf6
  Normal  SuccessfulDelete  43m   daemonset-controller  Deleted pod: fluentd-elasticsearch-zt72d
  Normal  SuccessfulCreate  43m   daemonset-controller  Created pod: fluentd-elasticsearch-xdrss
  Normal  SuccessfulDelete  42m   daemonset-controller  Deleted pod: fluentd-elasticsearch-xn8ks
  Normal  SuccessfulCreate  42m   daemonset-controller  Created pod: fluentd-elasticsearch-jdmtb
  Normal  SuccessfulDelete  10m   daemonset-controller  Deleted pod: fluentd-elasticsearch-lnpf6
  Normal  SuccessfulCreate  10m   daemonset-controller  Created pod: fluentd-elasticsearch-rfcdv
  Normal  SuccessfulDelete  10m   daemonset-controller  Deleted pod: fluentd-elasticsearch-xdrss
  Normal  SuccessfulCreate  10m   daemonset-controller  Created pod: fluentd-elasticsearch-wjrxr
  Normal  SuccessfulDelete  10m   daemonset-controller  Deleted pod: fluentd-elasticsearch-jdmtb
  Normal  SuccessfulCreate  10m   daemonset-controller  Created pod: fluentd-elasticsearch-bmzwg

The image tag in the describe output shows the rollback succeeded. As the events show, a rollback works just like an update: old pods are deleted first, then new ones are created.

Delete the DaemonSet
kubectl delete daemonsets -n kube-system fluentd-elasticsearch

4. DaemonSet Scheduling

Use affinity-based scheduling, taking node-3 as an example.
Add a label to node-3 (to remove it later, run kubectl label node node-3 apps-):
kubectl label node node-3 apps=wangteng
cat fluentd-elasticsearch.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch  
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - node-2
                - node-3
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: apps
                operator: In
                values: ["wangteng"]
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

Create the DaemonSet
kubectl apply -f fluentd-elasticsearch.yaml

Check the DaemonSet
kubectl get ds -n kube-system fluentd-elasticsearch

NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd-elasticsearch   2         2         2       2            2           <none>          10m

kubectl get pods -n kube-system -o wide | grep fluent

NAME                             READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
fluentd-elasticsearch-rwcvb      1/1     Running   0          11m     10.244.2.108   node-3   <none>           <none>
fluentd-elasticsearch-x4wdt      1/1     Running   0          11m     10.244.1.153   node-2   <none>           <none>

One might expect pods only on node-3, since node-2 does not carry the apps label, yet a pod was still scheduled on node-2. The reason: requiredDuringSchedulingIgnoredDuringExecution admits both node-2 and node-3, and a DaemonSet places one pod on every node that satisfies the required term. preferredDuringSchedulingIgnoredDuringExecution is only a soft preference used to rank candidate nodes; it never excludes a node, and for a DaemonSet, which does not choose among nodes, it effectively has no filtering effect.
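To actually restrict the agent to nodes carrying the apps=wangteng label, the constraint has to be a hard one. Two equivalent sketches, reusing the label added above (fragments to be merged into the DaemonSet manifest, not complete objects):

```yaml
# Option 1: a simple nodeSelector on the pod template
spec:
  template:
    spec:
      nodeSelector:
        apps: wangteng
---
# Option 2: required node affinity on the same label
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: apps
                operator: In
                values: ["wangteng"]
```

With either form, removing the label from a node does not evict an already-running pod (that is the "IgnoredDuringExecution" part); only newly scheduled pods are affected.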