Kubernetes Study Notes: Quick Start

1. Basic Concepts

1.1 Clusters and Nodes

Kubernetes is a container orchestration platform that automates the deployment, scheduling, scaling, and load balancing of containerized applications. A cluster consists of two roles: master and node.

  • The master manages the cluster and runs the kube-apiserver, kube-controller-manager, kube-scheduler, and etcd components
  • Nodes run the containerized applications and include kube-proxy, the kubelet, and a Container Runtime, which is typically Docker

1. Check the health of the master components
kubectl get componentstatuses
2. List the nodes
kubectl get nodes
3. View a node's details
kubectl describe node node-2

Name:               node-2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node-2
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"1a:0c:b9:de:37:c9"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 172.19.159.8
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 26 Feb 2020 20:09:49 +0800
Taints:             <none>
Unschedulable:      false  # whether scheduling is disabled on this node
Conditions:        # node health conditions: MemoryPressure - memory under pressure?
                   # DiskPressure - disk under pressure?
                   # PIDPressure - process IDs under pressure?
                   # Ready - whether the node is healthy and has enough resources to accept pods
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 27 Feb 2020 11:49:43 +0800   Wed, 26 Feb 2020 20:09:49 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 27 Feb 2020 11:49:43 +0800   Wed, 26 Feb 2020 20:09:49 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 27 Feb 2020 11:49:43 +0800   Wed, 26 Feb 2020 20:09:49 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 27 Feb 2020 11:49:43 +0800   Wed, 26 Feb 2020 21:04:31 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.19.159.8  # node IP address
  Hostname:    node-2 # hostname
Capacity:             
 cpu:                2  # total CPU on the node
 ephemeral-storage:  51473020Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             3880924Ki
 pods:               110
Allocatable:
 cpu:                2  # CPU allocatable to pods
 ephemeral-storage:  47437535154
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             3778524Ki
 pods:               110
System Info:                 # system information
 Machine ID:                 20181129113200424400422638950048
 System UUID:                D38EA343-8C2F-4686-9588-5A2415801B30
 Boot ID:                    aae3a97e-3de6-450a-9afe-06d6ab14d021
 Kernel Version:             3.10.0-862.14.4.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.3.1    # Container Runtime
 Kubelet Version:            v1.14.1
 Kube-Proxy Version:         v1.14.1
PodCIDR:                     10.244.1.0/24   # pod network CIDR
Non-terminated Pods:         (6 in total)  # resource usage of each pod on this node
  Namespace                  Name                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                               ------------  ----------  ---------------  -------------  ---
  default                    nginx-app-demo-7bdfd97dcd-7g247    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9h
  default                    nginx-app-demo-7bdfd97dcd-lskts    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9h
  kube-system                coredns-fb8b8dccf-mhkkk            100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     19h
  kube-system                coredns-fb8b8dccf-vz65l            100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     19h
  kube-system                kube-flannel-ds-amd64-sfglq        100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      14h
  kube-system                kube-proxy-pt922                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15h
Allocated resources:  # resources already requested/limited by pods
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                300m (15%)  100m (5%)
  memory             190Mi (5%)  390Mi (10%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:              <none>

1.2 Containers and Applications

Kubernetes manages and schedules containers, but the smallest schedulable unit in a cluster is not the container; it is the pod, and a pod can contain multiple containers. Pods are usually not run directly, either: they are managed through workload controllers such as Deployments, ReplicaSets, and DaemonSets, because a controller keeps pods in their desired state; if a pod fails, it is recreated on another node.

  • container: packages the application in an image
  • pod: wraps the containers; it contains a pause container plus the application containers, which share network, storage, and process namespaces
  • Deployments: stateless workloads (their stateful counterpart is StatefulSets). A Deployment is a controller that manages the replica count (replicas); the Deployment controller inside kube-controller-manager drives the actual replica count toward the desired state
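The pod model above can be sketched as a manifest. A minimal sketch with illustrative names (the pod name and the busybox sidecar are assumptions, not from a real deployment), showing two containers sharing the pod's network namespace:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-container-demo   # illustrative name
spec:
  containers:
  - name: web
    image: nginx:1.7.9
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox:1.28
    # the sidecar shares the pod's network namespace,
    # so it reaches the nginx container via localhost:80
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 5; done"]
```

Because both containers live in one pod, no Service or pod IP is needed for them to talk to each other.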

1.3 Service Access

In Kubernetes, pods are the units that actually run. Pods live on nodes, and nodes can fail; when that happens, a controller such as a ReplicaSet starts a replacement pod on another node with a new IP. In addition, an application is usually deployed with multiple replicas, e.g. a Deployment with 3 pod replicas, where each pod acts like a backend Real Server. So how should the application be accessed?
By putting a load balancer in front of the pods: a Service. A Service abstracts a dynamic set of pods into a single service; applications simply access the Service, which forwards requests to the backend pods. Service forwarding has two mechanisms:

  • iptables: load balancing via DNAT rules
  • ipvs: in-kernel forwarding via IPVS (managed with ipvsadm)

Depending on how a service is meant to be accessed, there are several Service types:

  • ClusterIP: in-cluster access; combined with DNS it provides in-cluster service discovery
  • NodePort: exposes a port on every node via NAT for external access
  • LoadBalancer: external access through a cloud provider's load balancer
  • ExternalName: maps the Service to an external DNS name (returned as a CNAME record)

Pods are dynamic: a node failure means a pod is recreated elsewhere with a new IP, and scaling the application changes the set of replicas. How does a Service keep track of these changes? Through labels: the Service's label selector filters out the pods that form the application's Endpoints, and the Endpoints list is updated automatically whenever pods change; different applications use different labels.
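A minimal sketch of how a Service selects pods by label; the run=nginx-app-demo label mirrors the one used later in these notes:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-demo
spec:
  selector:
    run: nginx-app-demo   # pods carrying this label become the Service's Endpoints
  ports:
  - port: 80         # port the Service listens on
    targetPort: 80   # container port the traffic is forwarded to
```

Any pod that gains or loses the run=nginx-app-demo label is automatically added to or removed from the Endpoints.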

2. Creating an Application

Kubernetes deploys applications through controllers; as mentioned earlier these include Deployments, DaemonSets, and so on. We start with the Deployment, the stateless workload. For ease of learning, we first deploy an application imperatively, using commands.

1. Deploy nginx
Deploy four replicas, running the command on node-1:
kubectl run nginx-app-demo --image=nginx:1.7.9 --port=80 --replicas=4
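Note: newer Kubernetes releases removed --replicas from kubectl run (since v1.18 it only creates a bare pod). On such clusters the same result can be achieved declaratively; a sketch of an equivalent Deployment manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app-demo
spec:
  replicas: 4                 # desired pod count
  selector:
    matchLabels:
      run: nginx-app-demo     # must match the pod template labels below
  template:
    metadata:
      labels:
        run: nginx-app-demo
    spec:
      containers:
      - name: nginx-app-demo
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```

Apply it with kubectl apply -f nginx-app-demo.yaml.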
2. List the deployments
kubectl get deployments

NAME             READY   UP-TO-DATE   AVAILABLE   AGE
nginx-app-demo   4/4     4            4           72s

3. View the deployment's details. A Deployment controls the replica count through a ReplicaSet, and the ReplicaSet in turn manages the pods
kubectl describe deployments nginx-app-demo

Name:                   nginx-app-demo  # application name
Namespace:              default  # namespace
CreationTimestamp:      Thu, 27 Feb 2020 01:03:13 +0800
Labels:                 run=nginx-app-demo  # the Service selects pods via this label
Annotations:            deployment.kubernetes.io/revision: 3 # rolling-update revision number
Selector:               run=nginx-app-demo
Replicas:               4 desired | 4 updated | 4 total | 4 available | 0 unavailable # replica status
StrategyType:           RollingUpdate # update strategy
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge # at most 25% of pods unavailable, 25% extra during an update
Pod Template:
  Labels:  run=nginx-app-demo
  Containers:    # container image, ports, mounts, etc.
   nginx-app-demo:   
    Image:        nginx:1.7.9
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:       # current conditions
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-app-demo-7bdfd97dcd (4/4 replicas created) # name of the ReplicaSet created by this Deployment
Events:          <none>

4. List the ReplicaSets
kubectl get replicasets

NAME                        DESIRED   CURRENT   READY   AGE
nginx-app-demo-7bdfd97dcd   4         4         4      9m9s

5. View the ReplicaSet's details
kubectl describe replicasets

Name:           nginx-app-demo-7bdfd97dcd
Namespace:      default
Selector:       pod-template-hash=7bdfd97dcd,run=nginx-app-demo
Labels:         pod-template-hash=7bdfd97dcd # a pod-template-hash label is added to identify this ReplicaSet
                run=nginx-app-demo
Annotations:    deployment.kubernetes.io/desired-replicas: 4
                deployment.kubernetes.io/max-replicas: 5
                deployment.kubernetes.io/revision: 3
                deployment.kubernetes.io/revision-history: 1
Controlled By:  Deployment/nginx-app-demo # owner of this ReplicaSet: the nginx-app-demo Deployment
Replicas:       4 current / 4 desired
Pods Status:    4 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template: # pod template, inherited from the Deployment
  Labels:  pod-template-hash=7bdfd97dcd
           run=nginx-app-demo
  Containers:
   nginx-app-demo:
    Image:        nginx:1.7.9
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:           <none>

6. View the pods. Each pod runs one nginx container and is assigned its own IP, which can be accessed directly
kubectl get pods

NAME                              READY   STATUS    RESTARTS   AGE
nginx-app-demo-7bdfd97dcd-7g247   1/1     Running   0          34h
nginx-app-demo-7bdfd97dcd-g82nm   1/1     Running   0          34h
nginx-app-demo-7bdfd97dcd-gssm6   1/1     Running   0          34h
nginx-app-demo-7bdfd97dcd-lskts   1/1     Running   0          34h

kubectl describe pod nginx-app-demo-7bdfd97dcd-7g247

Name:               nginx-app-demo-7bdfd97dcd-7g247
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node-2/172.19.159.8
Start Time:         Thu, 27 Feb 2020 01:59:53 +0800
Labels:             pod-template-hash=7bdfd97dcd # labels
                    run=nginx-app-demo
Annotations:        <none>
Status:             Running
IP:                 10.244.1.8 # pod IP
Controlled By:      ReplicaSet/nginx-app-demo-7bdfd97dcd # owned by the ReplicaSet
Containers:         # container info: container ID, image, ports, state, environment variables, etc.
  nginx-app-demo:
    Container ID:   docker://c6c87de72610f5bd3228498d9aca49053651387f0a017cf9240c09d38d0620c0
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 27 Feb 2020 01:59:54 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hg24n (ro)
Conditions:         # pod conditions
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:            # volumes
  default-token-hg24n:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hg24n
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

3. Accessing the Application

Kubernetes assigns every pod an IP address, and the application can be reached directly at that address, much like accessing an RS (real server). But an application is a whole made up of multiple replicas, so it relies on a Service for load balancing; here we look at the ClusterIP and NodePort access modes.
1. Set the pods' content. To tell the pods apart, we give each pod's nginx site different content so the load-balancing effect is observable
List the pods:
kubectl get pods

NAME                              READY   STATUS    RESTARTS   AGE
nginx-app-demo-7bdfd97dcd-7g247   1/1     Running   0          44h
nginx-app-demo-7bdfd97dcd-g82nm   1/1     Running   0          44h
nginx-app-demo-7bdfd97dcd-gssm6   1/1     Running   0          44h
nginx-app-demo-7bdfd97dcd-lskts   1/1     Running   0          44h

Enter a pod's container:
kubectl exec -it nginx-app-demo-7bdfd97dcd-7g247 bash
echo "web1" >/usr/share/nginx/html/index.html
Repeat inside each of the other three pods:
echo "web2" >/usr/share/nginx/html/index.html
echo "web3" >/usr/share/nginx/html/index.html
echo "web4" >/usr/share/nginx/html/index.html

2. Get the pod IPs
kubectl get pod -o wide

NAME                              READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-app-demo-7bdfd97dcd-7g247   1/1     Running   0          44h   10.244.1.8   node-2   <none>           <none>
nginx-app-demo-7bdfd97dcd-g82nm   1/1     Running   0          44h   10.244.2.6   node-3   <none>           <none>
nginx-app-demo-7bdfd97dcd-gssm6   1/1     Running   0          44h   10.244.2.7   node-3   <none>           <none>
nginx-app-demo-7bdfd97dcd-lskts   1/1     Running   0          44h   10.244.1.9   node-2   <none>           <none>

3. Access the pods by IP

[root@node-1 ~]# curl 10.244.1.8
web1
[root@node-1 ~]# curl 10.244.2.6 
web2
[root@node-1 ~]# curl 10.244.2.7
web3
[root@node-1 ~]# curl 10.244.1.9
web4

3.1 ClusterIP Access

Accessing an application directly by pod IP works for a single pod, but not for an application with multiple replicas, which needs a Service for load balancing. A Service has a type, which defaults to ClusterIP, i.e. in-cluster access. Below, the expose subcommand exposes the Deployment as a Service.
1. Expose the Service: --port is the port the Service listens on, --target-port is the container port, and --type sets the Service type
kubectl expose deployment nginx-app-demo --name nginx-service-demo --port=80 --protocol=TCP --target-port=80 --type ClusterIP
2. View the Services. Through its label selector, the Service automatically turns the matching pod IPs into Endpoints
kubectl get services

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes           ClusterIP   10.96.0.1      <none>        443/TCP   2d6h
nginx-service-demo   ClusterIP   10.97.52.224   <none>        80/TCP    2m56s

To delete the exposed Service later, delete it by its label:
kubectl delete services -l run=nginx-app-demo
View the Service's details: the Selector matches the labels set on the Deployment earlier, and Endpoints assembles the pod addresses into a list
kubectl describe services nginx-service-demo

Name:              nginx-service-demo  # service name
Namespace:         default 
Labels:            run=nginx-app-demo  # labels
Annotations:       <none>
Selector:          run=nginx-app-demo   # label selector
Type:              ClusterIP            # service type
IP:                10.97.52.224         # service IP, i.e. the virtual IP (VIP)
Port:              <unset>  80/TCP      # service port
TargetPort:        80/TCP               # container port
Endpoints:         10.244.1.8:80,10.244.1.9:80,10.244.2.6:80,10.244.2.7:80
Session Affinity:  None                 # session affinity (None = no client pinning)
Events:            <none>

3. Access the Service address; requests are load-balanced across the pods. The default Session Affinity is None, so requests are spread across all pods; setting it to ClientIP pins requests from the same client IP to the same pod
curl 10.97.52.224

[root@node-1 ~]# curl 10.97.52.224
web4
[root@node-1 ~]# curl 10.97.52.224
web2
[root@node-1 ~]# curl 10.97.52.224
web2
[root@node-1 ~]# curl 10.97.52.224
web3
[root@node-1 ~]# curl 10.97.52.224
web4
[root@node-1 ~]# curl 10.97.52.224
web1

4. How ClusterIP works
A Service backend is implemented by one of two mechanisms, iptables or ipvs; here iptables is used. It generates forwarding rules in the nat table, and KUBE-SVC-R5Y5DZHD7Q6DDTFZ is the service's inbound chain
iptables -t nat -L -n | grep nginx

KUBE-MARK-MASQ  tcp  -- !10.244.0.0/16        10.97.52.224         /* default/nginx-service-demo: cluster IP */ tcp dpt:80
KUBE-SVC-R5Y5DZHD7Q6DDTFZ  tcp  --  0.0.0.0/0            10.97.52.224         /* default/nginx-service-demo: cluster IP */ tcp dpt:80

Inbound: any source address accessing the service IP 10.97.52.224 on port 80 is forwarded to the KUBE-SVC-R5Y5DZHD7Q6DDTFZ chain
5. View the service chain's rules: the service chain dispatches to per-endpoint chains, each of which forwards to a different pod IP. The statistic matches give each endpoint an equal overall share: the first rule fires with probability 1/4, the second with 1/3 of the remainder, the third with 1/2 of what is left, and the last rule takes the rest, i.e. 25% each
iptables -t nat -L KUBE-SVC-R5Y5DZHD7Q6DDTFZ -n

KUBE-SEP-UQVTSQDQ4PMVJNPT  all  --  0.0.0.0/0            0.0.0.0/0            statistic mode random probability 0.25000000000
KUBE-SEP-3TNRTSMGKOPSFXEO  all  --  0.0.0.0/0            0.0.0.0/0            statistic mode random probability 0.33332999982
KUBE-SEP-ZXN23Z5BMWGY2OZ5  all  --  0.0.0.0/0            0.0.0.0/0            statistic mode random probability 0.50000000000
KUBE-SEP-WGOEA5PEEXLRJEBT  all  --  0.0.0.0/0            0.0.0.0/0           

View the four endpoint chains: each one DNATs to a different pod IP

[root@node-1 ~]# iptables -t nat -L KUBE-SEP-UQVTSQDQ4PMVJNPT -n
Chain KUBE-SEP-UQVTSQDQ4PMVJNPT (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.1.8           0.0.0.0/0           
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp to:10.244.1.8:80
[root@node-1 ~]# iptables -t nat -L KUBE-SEP-3TNRTSMGKOPSFXEO -n
Chain KUBE-SEP-3TNRTSMGKOPSFXEO (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.1.9           0.0.0.0/0           
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp to:10.244.1.9:80
[root@node-1 ~]# iptables -t nat -L KUBE-SEP-ZXN23Z5BMWGY2OZ5 -n
Chain KUBE-SEP-ZXN23Z5BMWGY2OZ5 (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.2.6           0.0.0.0/0           
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp to:10.244.2.6:80
[root@node-1 ~]# iptables -t nat -L KUBE-SEP-WGOEA5PEEXLRJEBT -n
Chain KUBE-SEP-WGOEA5PEEXLRJEBT (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.2.7           0.0.0.0/0           
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp to:10.244.2.7:80

3.2 NodePort Access

A ClusterIP Service only provides in-cluster access; the application cannot be reached directly from outside. For external access there are several options: NodePort, LoadBalancer, and Ingress. LoadBalancer must be provided by a cloud vendor, and Ingress requires installing a separate Ingress Controller, so for day-to-day testing NodePort is the simplest: it exposes a port on every node to the outside world
1. Change the Service type from ClusterIP to NodePort
kubectl patch services nginx-service-demo -p '{"spec":{"type": "NodePort"}}'
View the Service:
kubectl get services

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes           ClusterIP   10.96.0.1      <none>        443/TCP        3d7h
nginx-service-demo   NodePort    10.97.52.224   <none>        80:30571/TCP   25h

Confirm the configuration in YAML: a NodePort has been allocated, and every node now listens on that port
kubectl get services nginx-service-demo -o yaml

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-02-28T15:14:45Z"
  labels:
    run: nginx-app-demo
  name: nginx-service-demo
  namespace: default
  resourceVersion: "415693"
  selfLink: /api/v1/namespaces/default/services/nginx-service-demo
  uid: 0db06db9-5a3d-11ea-b098-00163e0855fc
spec:
  clusterIP: 10.97.52.224
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30571
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx-app-demo
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

Access the application through the NodePort: every node's address acts like a VIP and gives the same load-balancing behavior, while ClusterIP access keeps working as well

[root@node-1 ~]# curl node-1:30571
web3
[root@node-1 ~]# curl node-1:30571
web4
[root@node-1 ~]# curl node-1:30571
web3
[root@node-1 ~]# curl node-1:30571
web4
[root@node-1 ~]# curl node-1:30571
web2
[root@node-1 ~]# curl node-1:30571
web4
[root@node-1 ~]# curl node-2:30571
web3
[root@node-1 ~]# curl node-2:30571
web2
[root@node-1 ~]# curl node-2:30571
web1
[root@node-1 ~]# 
[root@node-1 ~]# curl node-2:30571
web3
[root@node-1 ~]# curl node-3:30571
web1
[root@node-1 ~]# curl node-3:30571
web3
[root@node-1 ~]# curl node-3:30571
web4
[root@node-1 ~]# curl node-3:30571
web3

2. How NodePort forwarding works: on every node, kube-proxy listens on the NodePort, and iptables rules behind it do the actual forwarding
netstat -ntlp|grep 30571

tcp6       0      0 :::30571                :::*                    LISTEN      9490/kube-proxy     

View the forwarding rules:
iptables -t nat -L -n

KUBE-MARK-MASQ  tcp  --  0.0.0.0/0            0.0.0.0/0            /* default/nginx-service-demo: */ tcp dpt:30571
KUBE-SVC-R5Y5DZHD7Q6DDTFZ  tcp  --  0.0.0.0/0            0.0.0.0/0            /* default/nginx-service-demo: */ tcp dpt:30571

View the inbound service chain:
iptables -t nat -L KUBE-SVC-R5Y5DZHD7Q6DDTFZ -n

target     prot opt source               destination         
KUBE-SEP-UQVTSQDQ4PMVJNPT  all  --  0.0.0.0/0            0.0.0.0/0            statistic mode random probability 0.25000000000
KUBE-SEP-3TNRTSMGKOPSFXEO  all  --  0.0.0.0/0            0.0.0.0/0            statistic mode random probability 0.33332999982
KUBE-SEP-ZXN23Z5BMWGY2OZ5  all  --  0.0.0.0/0            0.0.0.0/0            statistic mode random probability 0.50000000000
KUBE-SEP-WGOEA5PEEXLRJEBT  all  --  0.0.0.0/0            0.0.0.0/0           

Follow the per-endpoint chains:

[root@node-1 ~]# iptables -t nat -L KUBE-SEP-UQVTSQDQ4PMVJNPT -n
Chain KUBE-SEP-UQVTSQDQ4PMVJNPT (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.1.8           0.0.0.0/0           
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp to:10.244.1.8:80
[root@node-1 ~]# iptables -t nat -L KUBE-SEP-3TNRTSMGKOPSFXEO -n
Chain KUBE-SEP-3TNRTSMGKOPSFXEO (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.1.9           0.0.0.0/0           
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp to:10.244.1.9:80
[root@node-1 ~]# iptables -t nat -L KUBE-SEP-ZXN23Z5BMWGY2OZ5 -n
Chain KUBE-SEP-ZXN23Z5BMWGY2OZ5 (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.2.6           0.0.0.0/0           
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp to:10.244.2.6:80
[root@node-1 ~]# iptables -t nat -L KUBE-SEP-WGOEA5PEEXLRJEBT -n
Chain KUBE-SEP-WGOEA5PEEXLRJEBT (1 references)
target     prot opt source               destination         
KUBE-MARK-MASQ  all  --  10.244.2.7           0.0.0.0/0           
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp to:10.244.2.7:80

4. Scaling the Application

High load is handled by scaling out the number of real servers, i.e. increasing the replica count. Kubernetes offers two kinds of scaling: 1. manual scale up and scale down; 2. automatic horizontal scaling (HorizontalPodAutoscaler), which scales on CPU utilization and depends on a monitoring component such as metrics-server. We start with manual scaling
kubectl scale --replicas=5 deployment nginx-app-demo
Check the pods:
kubectl get pods -o wide

NAME                              READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nginx-app-demo-7bdfd97dcd-7g247   1/1     Running   0          3d8h    10.244.1.8   node-2   <none>           <none>
nginx-app-demo-7bdfd97dcd-9ftfl   1/1     Running   0          4m15s   10.244.2.8   node-3   <none>           <none>
nginx-app-demo-7bdfd97dcd-g82nm   1/1     Running   0          3d8h    10.244.2.6   node-3   <none>           <none>
nginx-app-demo-7bdfd97dcd-gssm6   1/1     Running   0          3d8h    10.244.2.7   node-3   <none>           <none>
nginx-app-demo-7bdfd97dcd-lskts   1/1     Running   0          3d8h    10.244.1.9   node-2   <none>           <none>

Check the Service and its Endpoints:
kubectl describe services nginx-service-demo
kubectl describe endpoints nginx-service-demo

Name:         nginx-service-demo
Namespace:    default
Labels:       run=nginx-app-demo
Annotations:  endpoints.kubernetes.io/last-change-trigger-time: 2020-03-01T02:54:23Z
Subsets:
  Addresses:          10.244.1.8,10.244.1.9,10.244.2.6,10.244.2.7,10.244.2.8
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    <unset>  80    TCP

Events:  <none>

Set the new pod's site content to web5:
kubectl exec -it nginx-app-demo-7bdfd97dcd-9ftfl bash
echo "web5" > /usr/share/nginx/html/index.html

Test with curl 10.97.52.224; web5 should now appear among the responses

Pods added by scaling join the Service automatically, providing service discovery and load balancing out of the box; scaling an application this way is far faster than with traditional deployments. Kubernetes also supports automatic horizontal scaling through the HorizontalPodAutoscaler, which works with the monitoring system to adjust the pod count based on CPU utilization
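The HorizontalPodAutoscaler mentioned above can be sketched as a manifest (it requires metrics-server to be installed; the thresholds below are illustrative):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-app-demo
spec:
  scaleTargetRef:           # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-app-demo
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80  # scale out when average CPU exceeds 80%
```

The same can be done imperatively with kubectl autoscale deployment nginx-app-demo --min=2 --max=10 --cpu-percent=80.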

5. Rolling Upgrades

A Deployment's update strategy is RollingUpdate: it replaces pods gradually, by default keeping at most 25% of the pods unavailable and creating at most 25% extra at a time, so the application stays available during the upgrade. If the upgrade fails, the application can be rolled back to its previous state; rollback is implemented through the old ReplicaSet
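The 25% defaults described above correspond to these Deployment spec fields; a sketch:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%  # at most 25% of desired pods may be down during the update
      maxSurge: 25%        # at most 25% extra pods may be created above the desired count
```

Both fields also accept absolute pod counts (e.g. maxSurge: 1) instead of percentages.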
1. Change the nginx image
kubectl set image deployments/nginx-app-demo nginx-app-demo=nginx:latest
Watch the upgrade with: kubectl get pods -w
2. The deployment's details (kubectl describe deployments nginx-app-demo) show it has switched to a new ReplicaSet

Name:                   nginx-app-demo
Namespace:              default
CreationTimestamp:      Thu, 27 Feb 2020 01:03:13 +0800
Labels:                 run=nginx-app-demo
Annotations:            deployment.kubernetes.io/revision: 2 # new revision number, used for rollback
Selector:               run=nginx-app-demo
Replicas:               5 desired | 5 updated | 5 total | 5 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  run=nginx-app-demo
  Containers:
   nginx-app-demo:
    Image:        nginx:latest
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-app-demo-5cc8746f96 (5/5 replicas created)
Events:
  Type    Reason             Age                   From                   Message
  ----    ------             ----                  ----                   -------
  Normal  ScalingReplicaSet  6m7s (x2 over 3d10h)  deployment-controller  Scaled up replica set nginx-app-demo-5cc8746f96 to 2
  Normal  ScalingReplicaSet  6m7s (x2 over 3d10h)  deployment-controller  Scaled up replica set nginx-app-demo-5cc8746f96 to 3
  Normal  ScalingReplicaSet  6m7s                  deployment-controller  Scaled down replica set nginx-app-demo-7bdfd97dcd to 4
  Normal  ScalingReplicaSet  6m6s (x2 over 3d10h)  deployment-controller  Scaled down replica set nginx-app-demo-7bdfd97dcd to 3
  Normal  ScalingReplicaSet  6m6s (x2 over 3d10h)  deployment-controller  Scaled up replica set nginx-app-demo-5cc8746f96 to 4
  Normal  ScalingReplicaSet  6m5s (x2 over 3d10h)  deployment-controller  Scaled down replica set nginx-app-demo-7bdfd97dcd to 2
  Normal  ScalingReplicaSet  6m5s (x2 over 3d10h)  deployment-controller  Scaled down replica set nginx-app-demo-7bdfd97dcd to 1
  Normal  ScalingReplicaSet  6m5s                  deployment-controller  Scaled up replica set nginx-app-demo-5cc8746f96 to 5
  Normal  ScalingReplicaSet  6m4s (x2 over 3d10h)  deployment-controller  Scaled down replica set nginx-app-demo-7bdfd97dcd to 0

3. View the rollout history: there are two revisions, corresponding to the two ReplicaSets
kubectl rollout history deployment nginx-app-demo

deployment.extensions/nginx-app-demo 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

List the ReplicaSets: the old one is scaled down to 0 pods
kubectl get replicasets

NAME                        DESIRED   CURRENT   READY   AGE
nginx-app-demo-5cc8746f96   5         5         5       3d10h
nginx-app-demo-7bdfd97dcd   0         0         0       3d11h

Check the upgraded nginx version:
curl -I 10.97.52.224

HTTP/1.1 200 OK
Server: nginx/1.17.8
Date: Sun, 01 Mar 2020 04:29:33 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 21 Jan 2020 13:36:08 GMT
Connection: keep-alive
ETag: "5e26fe48-264"
Accept-Ranges: bytes

4. Roll back to the previous revision
kubectl rollout undo deployment nginx-app-demo --to-revision=1
Check the nginx version again:
curl -I 10.97.52.224

HTTP/1.1 200 OK
Server: nginx/1.7.9
Date: Sun, 01 Mar 2020 04:32:23 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 23 Dec 2014 16:25:09 GMT
Connection: keep-alive
ETag: "54999765-264"
Accept-Ranges: bytes