
Kubernetes learning notes: resource control

1. Pod resource control

1.1 Resource definition

A container needs resources while it runs, so how does this tie in with cgroups? The answer is the resources definition: resources are allocated mainly as cpu and memory, and they come in two kinds, requests and limits. requests is the amount the container asks for; it is the basis on which Kubernetes initially schedules the pod, i.e. the amount that must be available for the pod to be placed. limits is the ceiling the pod may not exceed; if it tries to, the cgroup enforces the limit. Resources are defined with the following four fields:

  • spec.containers[].resources.requests.cpu — requested CPU; 0.1 and 100m both mean 1/10 of a CPU
  • spec.containers[].resources.requests.memory — requested memory; units such as M, Mi, G, Gi
  • spec.containers[].resources.limits.cpu — CPU limit; the container cannot exceed this threshold, which is the value written into the cgroup
  • spec.containers[].resources.limits.memory — memory limit; exceeding this threshold triggers an OOM kill

Below is a YAML file that defines resource requests and limits:
cat nginx-resource.yaml

apiVersion: v1
kind: Pod
metadata: 
  name: nginx-demo
  labels: 
    name: nginx-demo
spec:
  containers:
  - name: nginx-demo
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: nginx-port-80
      protocol: TCP
      containerPort: 80
    resources:
      requests:
        cpu: 0.25
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi

kubectl apply -f nginx-resource.yaml
kubectl get pods
kubectl describe pods nginx-demo

Name:               nginx-demo
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node-3/172.19.159.9
Start Time:         Mon, 02 Mar 2020 19:32:11 +0800
Labels:             name=nginx-demo
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"name":"nginx-demo"},"name":"nginx-demo","namespace":"default"},"sp...
Status:             Running
IP:                 10.244.2.19
Containers:
  nginx-demo:
    Container ID:   docker://ed9d2cdb0acdcfd23e9a802cb4fb08cd448d257a6412f7c576217c067f986f85
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 02 Mar 2020 19:32:12 +0800
    Ready:          True
    Restart Count:  0
    Limits:    # resource limits
      cpu:     500m
      memory:  256Mi
    Requests:  # requested resources
      cpu:        250m
      memory:     128Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hg24n (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-hg24n:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hg24n
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  31s   default-scheduler  Successfully assigned default/nginx-demo to node-3
  Normal  Pulled     30s   kubelet, node-3    Container image "nginx:1.7.9" already present on machine
  Normal  Created    30s   kubelet, node-3    Created container nginx-demo
  Normal  Started    30s   kubelet, node-3    Started container nginx-demo

Where do a pod's resources come from? From the node it runs on, of course. When a pod with requests is created, the kube-scheduler runs two phases: filtering and scoring (weighting). It first filters the nodes against the requested resources to keep only the ones that qualify, then ranks the remaining nodes, picks the one best suited to run the pod, and finally the pod is started on that node.
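As a quick illustration of the filtering step (a hypothetical manifest, not from the original notes): a pod that requests more CPU than any node can allocate never passes the filter and stays Pending with an Insufficient cpu scheduling event.

apiVersion: v1
kind: Pod
metadata:
  name: cpu-hog-demo            # hypothetical name
spec:
  containers:
  - name: cpu-hog-demo
    image: nginx:1.7.9
    resources:
      requests:
        cpu: "4"                # well above node-3's 2 allocatable CPUs; a node that cannot satisfy this is filtered out

To see what a node can actually offer, look at the Allocatable section of its description: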

kubectl describe node node-3

Name:               node-3
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node-3
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"12:a1:e5:81:59:ed"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 172.19.159.9
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 26 Feb 2020 20:11:34 +0800
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 02 Mar 2020 19:47:56 +0800   Wed, 26 Feb 2020 20:11:34 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 02 Mar 2020 19:47:56 +0800   Wed, 26 Feb 2020 20:11:34 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 02 Mar 2020 19:47:56 +0800   Wed, 26 Feb 2020 20:11:34 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Mon, 02 Mar 2020 19:47:56 +0800   Thu, 27 Feb 2020 00:00:12 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.19.159.9
  Hostname:    node-3
Capacity:
 cpu:                2
 ephemeral-storage:  51473020Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             3880924Ki
 pods:               110
Allocatable:        # resources the node can allocate to pods
 cpu:                2
 ephemeral-storage:  47437535154
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             3778524Ki
 pods:               110
System Info:
 Machine ID:                 20181129113200424400422638950048
 System UUID:                FB6FF8AF-D8EA-424B-B469-CF04D88E7CF5
 Boot ID:                    ef5ed234-7ed4-42f7-8358-de3735b3016b
 Kernel Version:             3.10.0-862.14.4.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.3.1
 Kubelet Version:            v1.14.1
 Kube-Proxy Version:         v1.14.1
PodCIDR:                     10.244.2.0/24
Non-terminated Pods:         (5 in total)
  Namespace                  Name                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                               ------------  ----------  ---------------  -------------  ---
  default                    nginx-app-demo-7bdfd97dcd-6fzzq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         31h
  default                    nginx-app-demo-7bdfd97dcd-plp5w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         31h
  default                    nginx-demo                         250m (12%)    500m (25%)  128Mi (3%)       256Mi (6%)     15m
  kube-system                kube-flannel-ds-amd64-5qxcf        100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      4d22h
  kube-system                kube-proxy-zldlm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4d23h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                350m (17%)  600m (30%)
  memory             178Mi (4%)  306Mi (8%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:              <none>

1.2 How resources are allocated

kubectl get pods -o wide nginx-demo shows that nginx-demo is running on node-3.
Log in to node-3 and list its containers:
docker container list | grep nginx-demo

ed9d2cdb0acd        84581e99d807           "nginx -g 'daemon of…"   About an hour ago   Up About an hour                        k8s_nginx-demo_nginx-demo_default_75637512-5c79-11ea-b098-00163e0855fc_0
638c192e4fe4        k8s.gcr.io/pause:3.1   "/pause"                 About an hour ago   Up About an hour                        k8s_POD_nginx-demo_default_75637512-5c79-11ea-b098-00163e0855fc_0  

The pod is made up of two containers rather than two pods: one created from the pause image (the pod sandbox) and one created from the application image.
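The application container does not create its own network or IPC namespace; it joins the ones owned by the pause container. A quick way to see this, using docker inspect's format flag with the container ID listed above:

docker inspect -f '{{.HostConfig.NetworkMode}} {{.HostConfig.IpcMode}}' ed9d2cdb0acd
# both values are container:<pause container ID>, the same values shown in the full output below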

Inspect the application container:
docker inspect ed9d2cdb0acd

[
    {
        "Id": "ed9d2cdb0acdcfd23e9a802cb4fb08cd448d257a6412f7c576217c067f986f85",
        "Created": "2020-03-02T11:32:12.614562002Z",
        "Path": "nginx",
        "Args": [
            "-g",
            "daemon off;"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 13180,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2020-03-02T11:32:12.858182046Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:84581e99d807a703c9c03bd1a31cd9621815155ac72a7365fd02311264512656",
        "ResolvConfPath": "/var/lib/docker/containers/638c192e4fe4c349a772b77b8c8cc172f23577a3a1319640e5c70edc9f9e603a/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/638c192e4fe4c349a772b77b8c8cc172f23577a3a1319640e5c70edc9f9e603a/hostname",
        "HostsPath": "/var/lib/kubelet/pods/75637512-5c79-11ea-b098-00163e0855fc/etc-hosts",
        "LogPath": "/var/lib/docker/containers/ed9d2cdb0acdcfd23e9a802cb4fb08cd448d257a6412f7c576217c067f986f85/ed9d2cdb0acdcfd23e9a802cb4fb08cd448d257a6412f7c576217c067f986f85-json.log",
        "Name": "/k8s_nginx-demo_nginx-demo_default_75637512-5c79-11ea-b098-00163e0855fc_0",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/var/lib/kubelet/pods/75637512-5c79-11ea-b098-00163e0855fc/volumes/kubernetes.io~secret/default-token-hg24n:/var/run/secrets/kubernetes.io/serviceaccount:ro",
                "/var/lib/kubelet/pods/75637512-5c79-11ea-b098-00163e0855fc/etc-hosts:/etc/hosts",
                "/var/lib/kubelet/pods/75637512-5c79-11ea-b098-00163e0855fc/containers/nginx-demo/7f0c993c:/dev/termination-log"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {
                    "max-size": "100m"
                }
            },
            "NetworkMode": "container:638c192e4fe4c349a772b77b8c8cc172f23577a3a1319640e5c70edc9f9e603a",
            "PortBindings": null,
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Dns": null,
            "DnsOptions": null,
            "DnsSearch": null,
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "container:638c192e4fe4c349a772b77b8c8cc172f23577a3a1319640e5c70edc9f9e603a",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 967,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "seccomp=unconfined"
            ],
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 256,    # CPU分配的权重,作用在requests.cpu上
            "Memory": 268435456, # 内存分配的大小,作用在limits.memory上 
            "NanoCpus": 0,
            "CgroupParent": "kubepods-burstable-pod75637512_5c79_11ea_b098_00163e0855fc.slice",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 100000, # CPU分配的使用比例,和CpuQuota一起作用在limits.cpu上
            "CpuQuota": 50000,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 268435456,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/9296bf74d674e8a386b5b6d4113daa776019713645d38acf41083f9421a7df8a-init/diff:/var/lib/docker/overlay2/2d91dc96f9b07150d45a779f7fc67197d1f02f0ae501fac517d62e8196cb0640/diff:/var/lib/docker/overlay2/263824e2f752be3a48c9dc81af672cb6de2349026c9342732c588dfaa4550cb7/diff:/var/lib/docker/overlay2/64d88eeef9e39b6e05c70650eccbfffb3437aacb56caff5e9d6428d3e475ecde/diff:/var/lib/docker/overlay2/bf38e262ccd30949226fb85f562306eb3f14f09261e43c5142d0bb4b8fa921b3/diff:/var/lib/docker/overlay2/a8fe4e1adc0891c4e0378cc54acb4db03d4ac7aa80638b21c883852ef2d00203/diff:/var/lib/docker/overlay2/1baf4d835e3c13628f8dc524d9bb25bcb77b71d208cdea4331a0c36f78f6fa36/diff:/var/lib/docker/overlay2/f3c9122a39e53641de7287187de985aaf4279bd09d7ceef26cb572eef351c49d/diff:/var/lib/docker/overlay2/427ad40fdba428aa539e61c51ce8185eeded0c4e1a0c44e6382f06c07f05b4f8/diff:/var/lib/docker/overlay2/7374713f7b77fa7eff7812f35db682f34261d43d386c33b6aad96456f4a92628/diff:/var/lib/docker/overlay2/3f1350f4974a0f651e01e537ecb398bfc44ddbb55c38077aa6aec49bb62c6a2d/diff:/var/lib/docker/overlay2/bed8b3ba38bc1f9b1fd1a23c55fd5255615f2266d9eb3c75a3da1e28e3488510/diff:/var/lib/docker/overlay2/8448326ef0f28a450ba687c015af6971f118175b121db52a1a092273876e6815/diff:/var/lib/docker/overlay2/f279ba301d7b4be294441b24678109133c90e7e34e2154945082ef441c77b270/diff",
                "MergedDir": "/var/lib/docker/overlay2/9296bf74d674e8a386b5b6d4113daa776019713645d38acf41083f9421a7df8a/merged",
                "UpperDir": "/var/lib/docker/overlay2/9296bf74d674e8a386b5b6d4113daa776019713645d38acf41083f9421a7df8a/diff",
                "WorkDir": "/var/lib/docker/overlay2/9296bf74d674e8a386b5b6d4113daa776019713645d38acf41083f9421a7df8a/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/var/lib/kubelet/pods/75637512-5c79-11ea-b098-00163e0855fc/volumes/kubernetes.io~secret/default-token-hg24n",
                "Destination": "/var/run/secrets/kubernetes.io/serviceaccount",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/var/lib/kubelet/pods/75637512-5c79-11ea-b098-00163e0855fc/etc-hosts",
                "Destination": "/etc/hosts",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/var/lib/kubelet/pods/75637512-5c79-11ea-b098-00163e0855fc/containers/nginx-demo/7f0c993c",
                "Destination": "/dev/termination-log",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "volume",
                "Name": "5c92dd8b1758388b9d6cf6f58f77e3dfd88b0077483a6f0a3101d3a095cb7985",
                "Source": "/var/lib/docker/volumes/5c92dd8b1758388b9d6cf6f58f77e3dfd88b0077483a6f0a3101d3a095cb7985/_data",
                "Destination": "/var/cache/nginx",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "nginx-demo",
            "Domainname": "",
            "User": "0",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "443/tcp": {},
                "80/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "NGINX_SERVICE_DEMO_PORT=tcp://10.97.52.224:80",
                "NGINX_SERVICE_DEMO_PORT_80_TCP=tcp://10.97.52.224:80",
                "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1",
                "NGINX_SERVICE_DEMO_PORT_80_TCP_PROTO=tcp",
                "NGINX_SERVICE_DEMO_PORT_80_TCP_PORT=80",
                "NGINX_SERVICE_DEMO_PORT_80_TCP_ADDR=10.97.52.224",
                "KUBERNETES_SERVICE_HOST=10.96.0.1",
                "KUBERNETES_SERVICE_PORT_HTTPS=443",
                "KUBERNETES_PORT_443_TCP_PROTO=tcp",
                "KUBERNETES_PORT_443_TCP_PORT=443",
                "NGINX_SERVICE_DEMO_SERVICE_PORT=80",
                "KUBERNETES_SERVICE_PORT=443",
                "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443",
                "NGINX_SERVICE_DEMO_SERVICE_HOST=10.97.52.224",
                "KUBERNETES_PORT=tcp://10.96.0.1:443",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "NGINX_VERSION=1.7.9-1~wheezy"
            ],
            "Cmd": [
                "nginx",
                "-g",
                "daemon off;"
            ],
            "Healthcheck": {
                "Test": [
                    "NONE"
                ]
            },
            "Image": "sha256:84581e99d807a703c9c03bd1a31cd9621815155ac72a7365fd02311264512656",
            "Volumes": {
                "/var/cache/nginx": {}
            },
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {
                "annotation.io.kubernetes.container.hash": "aefd8763",
                "annotation.io.kubernetes.container.ports": "[{\"name\":\"nginx-port-80\",\"containerPort\":80,\"protocol\":\"TCP\"}]",
                "annotation.io.kubernetes.container.restartCount": "0",
                "annotation.io.kubernetes.container.terminationMessagePath": "/dev/termination-log",
                "annotation.io.kubernetes.container.terminationMessagePolicy": "File",
                "annotation.io.kubernetes.pod.terminationGracePeriod": "30",
                "io.kubernetes.container.logpath": "/var/log/pods/default_nginx-demo_75637512-5c79-11ea-b098-00163e0855fc/nginx-demo/0.log",
                "io.kubernetes.container.name": "nginx-demo",
                "io.kubernetes.docker.type": "container",
                "io.kubernetes.pod.name": "nginx-demo",
                "io.kubernetes.pod.namespace": "default",
                "io.kubernetes.pod.uid": "75637512-5c79-11ea-b098-00163e0855fc",
                "io.kubernetes.sandbox.id": "638c192e4fe4c349a772b77b8c8cc172f23577a3a1319640e5c70edc9f9e603a"
            }
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {}
        }
    }
]
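The three annotated fields map back to the pod spec with simple arithmetic (assuming the kubelet's usual scale of 1024 CPU shares per core and the default 100000µs CFS period):

# requests.cpu  250m  -> CpuShares = 0.25 * 1024       = 256        (relative weight, cpu.shares)
# limits.cpu    500m  -> CpuQuota  = 0.5  * 100000     = 50000      (hard cap, cfs_quota/cfs_period)
# limits.memory 256Mi -> Memory    = 256 * 1024 * 1024 = 268435456  (hard limit of the memory cgroup)

In other words, requests.cpu only sets a relative weight for CPU time under contention, while limits.cpu and limits.memory become hard ceilings enforced by the cgroup.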

1.3 Resource test: CPU

We mentioned above that these parameters limit CPU and memory usage; let's now verify both. We use the stress tool to load CPU and memory and check whether the configured values actually take effect.
cat cpu-demo.yaml

apiVersion: v1
kind: Pod
metadata: 
  name: cpu-demo
  namespace: default
  annotations:
    kubernetes.io/description: "demo for cpu requests and limits"
spec:
  containers:
  - name: stress-cpu
    image: vish/stress
    resources:
      requests:      # requested resources
        cpu: 250m
      limits:
        cpu: 500m    # resource ceiling
    args:
    - -cpus          # number of CPUs the stress tool will try to use
    - "1"

This manifest uses the stress application to load one full CPU. We requested 0.25 of a CPU and set the limit to 0.5 of a CPU, so if the settings work, the container should be held to roughly 50% of one CPU. Let's verify:

kubectl apply -f cpu-demo.yaml

View the pod details:
kubectl describe pod cpu-demo

Log in to the node the pod was scheduled to and watch the container:
docker container stats
(screenshot of docker container stats output)
top
(screenshot of top output)
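If metrics-server is installed in the cluster (an assumption; the notes do not say), the same check can be made from any machine with kubectl access instead of logging in to the node:

kubectl top pod cpu-demo    # CPU(cores) should hover around 500m, i.e. half a CPU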

1.4 Resource test: memory

cat memory-demo.yaml

apiVersion: v1
kind: Pod
metadata:
  name: memory-stress-demo
  annotations:
    kubernetes.io/description: "stress demo for memory limits"
spec:
  containers:
  - name: memory-stress-limits
    image: polinux/stress
    resources:
      requests:
        memory: 128Mi
      limits:
        memory: 512Mi
    command: ["stress"]
    args: ["--vm","1","--vm-bytes","520M","--vm-hang","1"]   # 容器内使用520M内存

kubectl apply -f memory-demo.yaml
kubectl get pods memory-stress-demo

NAME                 READY   STATUS             RESTARTS   AGE
memory-stress-demo   0/1     CrashLoopBackOff   4          3m50s

The container is killed with OOMKilled: it tries to allocate 520M, which exceeds the 512Mi limit. Kubernetes keeps restarting it, the RESTARTS count climbs, and the pod ends up in CrashLoopBackOff.
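To confirm the kill reason instead of inferring it from CrashLoopBackOff, the container's last terminated state can be read directly (a quick check, not part of the original notes):

kubectl get pod memory-stress-demo -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
# OOMKilled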

2. Quality of Service (QoS)

Quality of Service (QoS) is an important input to pod scheduling and eviction decisions; each class carries a different priority. There are three QoS classes (a one-liner to check each pod's class follows the list):

  • BestEffort — best-effort allocation; the default class when no resources are specified; lowest priority
  • Burstable — burstable resources; at least the requests are guaranteed; the most common class
  • Guaranteed — fully guaranteed resources; requests and limits are identical; highest priority
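Kubernetes records the class it assigned in .status.qosClass (visible in the pod YAML shown in section 2.1 below), so one command is enough to review every pod at once:

kubectl get pods -o custom-columns=NAME:.metadata.name,QOS:.status.qosClass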

2.1 BestEffort

The default QoS class is BestEffort, which has the lowest priority; when node resources get tight, pods in the BestEffort class are evicted first.

cat nginx-qos-besteffort.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-qos-besteffort
  labels:
    name: nginx-qos-besteffort
spec:
  containers:
  - name: nginx-qos-besteffort
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports: 
    - name: nginx-port-80
      protocol: TCP
      containerPort: 80
    resources: {}

Check the QoS class:
kubectl get pods nginx-qos-besteffort -o yaml

apiVersion: v1
kind: Pod
metadata:
 annotations:
   kubectl.kubernetes.io/last-applied-configuration: |
     {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"name":"nginx-qos-besteffort"},"name":"nginx-qos-besteffort","namespace":"default"},"spec":{"containers":[{"image":"nginx:1.7.9","imagePullPolicy":"IfNotPresent","name":"nginx-qos-besteffort","ports":[{"containerPort":80,"name":"nginx-port-80","protocol":"TCP"}],"resources":{}}]}}
 creationTimestamp: "2020-03-02T17:52:38Z"
 labels:
   name: nginx-qos-besteffort
 name: nginx-qos-besteffort
 namespace: default
 resourceVersion: "673239"
 selfLink: /api/v1/namespaces/default/pods/nginx-qos-besteffort
 uid: 9b359ab5-5cae-11ea-b098-00163e0855fc
spec:
 containers:
 - image: nginx:1.7.9
   imagePullPolicy: IfNotPresent
   name: nginx-qos-besteffort
   ports:
   - containerPort: 80
     name: nginx-port-80
     protocol: TCP
   resources: {}
   terminationMessagePath: /dev/termination-log
   terminationMessagePolicy: File
   volumeMounts:
   - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
     name: default-token-hg24n
     readOnly: true
 dnsPolicy: ClusterFirst
 enableServiceLinks: true
 nodeName: node-3
 priority: 0
 restartPolicy: Always
 schedulerName: default-scheduler
 securityContext: {}
 serviceAccount: default
 serviceAccountName: default
 terminationGracePeriodSeconds: 30
 tolerations:
 - effect: NoExecute
   key: node.kubernetes.io/not-ready
   operator: Exists
   tolerationSeconds: 300
 - effect: NoExecute
   key: node.kubernetes.io/unreachable
   operator: Exists
   tolerationSeconds: 300
 volumes:
 - name: default-token-hg24n
   secret:
     defaultMode: 420
     secretName: default-token-hg24n
status:
 conditions:
 - lastProbeTime: null
   lastTransitionTime: "2020-03-02T17:52:38Z"
   status: "True"
   type: Initialized
 - lastProbeTime: null
   lastTransitionTime: "2020-03-02T17:52:39Z"
   status: "True"
   type: Ready
 - lastProbeTime: null
   lastTransitionTime: "2020-03-02T17:52:39Z"
   status: "True"
   type: ContainersReady
 - lastProbeTime: null
   lastTransitionTime: "2020-03-02T17:52:38Z"
   status: "True"
   type: PodScheduled
 containerStatuses:
 - containerID: docker://d894b1fcfe37bfe3e466612dfab275e1b2cd71e8f1dbcf8d7838e2f4e49881ce
   image: nginx:1.7.9
   imageID: docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
   lastState: {}
   name: nginx-qos-besteffort
   ready: true
   restartCount: 0
   state:
     running:
       startedAt: "2020-03-02T17:52:39Z"
 hostIP: 172.19.159.9
 phase: Running
 podIP: 10.244.2.21
 qosClass: BestEffort
 startTime: "2020-03-02T17:52:38Z"

2.2 Burstable

A pod is Burstable when at least one container defines requests and the requests are lower than the limits (a contrasting variant is sketched after the manifest below).
cat nginx-qos-burstable.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-qos-burstable
  labels:
    name: nginx-qos-burstable
spec:
  containers:
  - name: nginx-qos-burstable
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: nginx-port-80
      protocol: TCP
      containerPort: 80
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 200m
        memory: 256Mi
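For contrast, a hypothetical variant that defines only requests and omits limits is also classed as Burstable, since it is neither resource-free (BestEffort) nor requests-equal-limits (Guaranteed); only the resources block below would differ from the manifest above:

    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      # no limits: still Burstable, the container is only capped by what the node itself can give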

2.3 Guaranteed

Both cpu and memory in resources must define requests and limits, and the requests must equal the limits. This class has the highest priority: when scheduling or eviction pressure occurs, these pods are protected first (a note on a common shortcut follows the manifest).
cat nginx-qos-guaranteed.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-qos-guaranteed
  labels:
    name: nginx-qos-guaranteed
spec:
  containers:
  - name: nginx-qos-guaranteed
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: nginx-port-80
      protocol: TCP
      containerPort: 80
    resources:
      requests:
        cpu: 200m
        memory: 256Mi
      limits:
        cpu: 200m
        memory: 256Mi
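One shortcut worth knowing (not covered in the notes above): if a container specifies only limits, Kubernetes defaults its requests to the same values, so the pod still ends up Guaranteed. A minimal sketch of such a resources block:

    resources:
      limits:
        cpu: 200m
        memory: 256Mi
      # requests default to the limits above, so qosClass is still Guaranteed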