1. Multiple containers in a Pod
A Pod can contain one or more containers. Multiple containers are usually placed in the same Pod when they are closely related, so that they can access each other easily. Below is a multi-container example.
cat php-redis.yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-php
  labels:
    name: redis-php
spec:
  containers:
  - name: frontend
    image: kubeguide/guestbook-php-frontend:latest
    ports:
    - containerPort: 80
  - name: redis
    image: kubeguide/redis-master:latest
    ports:
    - containerPort: 6379
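Because containers in a Pod share one network namespace, the frontend can reach redis at localhost:6379 without going through a Service. As an illustration only (the REDIS_HOST variable name is hypothetical; the real guestbook image may use different settings):

```yaml
  - name: frontend
    image: kubeguide/guestbook-php-frontend:latest
    env:
    - name: REDIS_HOST   # hypothetical variable name, for illustration only
      value: "localhost" # same-Pod containers share the network namespace
    ports:
    - containerPort: 80
```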
2. Volumes in a Pod
Containers in the same Pod can share Pod-level Volumes.
cat pod-volume-applogs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-pod
spec:
  containers:
  - name: tomcat
    image: tomcat
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: app-logs
      mountPath: /usr/local/tomcat/logs
  - name: busybox
    image: busybox
    command: ["sh","-c","tail -f /logs/catalina*.log"]
    volumeMounts:
    - name: app-logs
      mountPath: /logs
  volumes:
  - name: app-logs
    emptyDir: {}
The manifest above defines a shared volume app-logs for pod/volume-pod, mounted at /usr/local/tomcat/logs inside the tomcat container and at /logs inside the busybox container. After tomcat starts, it writes log files to /usr/local/tomcat/logs; through the volume these files also appear under /logs in the busybox container, where they are read by the tail process.
We can compare the log files written by tomcat with the output of the busybox program.
Log files written by tomcat:
kubectl exec -it volume-pod -c tomcat -- ls /usr/local/tomcat/logs
kubectl exec -it volume-pod -c tomcat -- tail /usr/local/tomcat/logs/catalina.2021-01-11.log
11-Jan-2021 02:59:42.726 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent Loaded Apache Tomcat Native library [1.2.25] using APR version [1.6.5].
11-Jan-2021 02:59:42.726 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
11-Jan-2021 02:59:42.726 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR/OpenSSL configuration: useAprConnector [false], useOpenSSL [true]
11-Jan-2021 02:59:42.737 INFO [main] org.apache.catalina.core.AprLifecycleListener.initializeSSL OpenSSL successfully initialized [OpenSSL 1.1.1d 10 Sep 2019]
11-Jan-2021 02:59:44.188 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
11-Jan-2021 02:59:44.314 INFO [main] org.apache.catalina.startup.Catalina.load Server initialization in [2439] milliseconds
11-Jan-2021 02:59:44.542 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
11-Jan-2021 02:59:44.542 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet engine: [Apache Tomcat/9.0.41]
11-Jan-2021 02:59:44.583 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
11-Jan-2021 02:59:44.722 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [407] milliseconds
View what the busybox program reads:
kubectl logs volume-pod -c busybox
11-Jan-2021 02:59:42.726 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent Loaded Apache Tomcat Native library [1.2.25] using APR version [1.6.5].
11-Jan-2021 02:59:42.726 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
11-Jan-2021 02:59:42.726 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR/OpenSSL configuration: useAprConnector [false], useOpenSSL [true]
11-Jan-2021 02:59:42.737 INFO [main] org.apache.catalina.core.AprLifecycleListener.initializeSSL OpenSSL successfully initialized [OpenSSL 1.1.1d 10 Sep 2019]
11-Jan-2021 02:59:44.188 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
11-Jan-2021 02:59:44.314 INFO [main] org.apache.catalina.startup.Catalina.load Server initialization in [2439] milliseconds
11-Jan-2021 02:59:44.542 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
11-Jan-2021 02:59:44.542 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet engine: [Apache Tomcat/9.0.41]
11-Jan-2021 02:59:44.583 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
11-Jan-2021 02:59:44.722 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [407] milliseconds
The output is identical to the log above, which demonstrates one use of shared storage.
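By default an emptyDir volume is backed by the node's disk. For transient, latency-sensitive data, it can instead be backed by memory (a sketch; sizeLimit requires a reasonably recent Kubernetes version):

```yaml
  volumes:
  - name: app-logs
    emptyDir:
      medium: Memory   # tmpfs-backed; contents count against the container memory limit
      sizeLimit: 64Mi  # Pod is evicted if usage exceeds this
```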
3. Pod configuration management: ConfigMap
To simplify deployment, an application is usually separated from its configuration, so the configuration can be changed at any time without rebuilding the application image. Kubernetes implements this with ConfigMap, which can hold individual variables as well as the full content of a configuration file.
3.1. Creating a ConfigMap from a YAML file
cat cm-appvars.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-appvars
data:
  apploglevel: info
  appdatadir: /var/data
kubectl create -f cm-appvars.yaml
Create the ConfigMap
kubectl get configmap
Quick view
kubectl describe configmap cm-appvars
Detailed view
kubectl get configmap -o yaml
Print the ConfigMap as YAML
Besides defining variables, a ConfigMap can also hold entire configuration files.
cat cm-appconfigfiles.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-appconfigfiles
data:
  key-serverxml: |
    <?xml version='1.0' encoding='utf-8'?>
  key-loggingproperties: "handlers
    = location.org.apache.juli.FieldHandler"
3.2. Creating a ConfigMap with kubectl commands
From configuration files:
The directory configfiles contains two files, server.xml and logging.properties.
kubectl create configmap cm-appconf --from-file=configfiles
Create the ConfigMap
From literal values:
kubectl create configmap cm-appenv --from-literal=loglevel=info --from-literal=appdatadir=/var/data
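The two --from-literal flags above generate a ConfigMap equivalent to this manifest:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-appenv
data:
  loglevel: info
  appdatadir: /var/data
```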
3.3. Consuming a ConfigMap through environment variables (env)
How a ConfigMap is defined determines how it is consumed: if it holds variables, the containerized application can read them as environment variables; if it holds files or directories, it can be mounted into the container through a volume.
cat cm-test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-test-pod
spec:
  containers:
  - name: cm-test
    image: busybox
    command: ["/bin/sh","-c","env | grep APP"]
    env:
    - name: APPLOGLEVEL        # environment variable inside the container
      valueFrom:               # where the variable's value comes from
        configMapKeyRef:       # take the value from a ConfigMap key
          name: cm-appvars
          key: apploglevel
    - name: APPDATADIR
      valueFrom:
        configMapKeyRef:
          name: cm-appvars
          key: appdatadir
  restartPolicy: Never
Start the Pod:
kubectl create -f cm-test-pod.yaml
Verify that the environment variables took effect:
kubectl logs cm-test-pod
It returns:
APPDATADIR=/var/data
APPLOGLEVEL=info
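When every key in a ConfigMap should become an environment variable, the per-key env entries can be replaced with a single envFrom block (a sketch; with envFrom the variable names are the ConfigMap keys themselves, here apploglevel and appdatadir):

```yaml
    envFrom:
    - configMapRef:
        name: cm-appvars   # imports every key of cm-appvars as an env var
```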
3.4. Consuming a ConfigMap through a volume
A ConfigMap can also be mounted into a container as files through a volume. Here we test with tomcat, using a custom tomcat image:
cat Dockerfile
FROM tomcat:latest
MAINTAINER <tengwanginit@gmail.com>
RUN mkdir /configfiles
Build the custom image:
docker build -t kubeguide/tomcat-app:v1 .
cat cm-test-app.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-test-app
spec:
  containers:
  - name: cm-test-app
    image: kubeguide/tomcat-app:v1
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: serverxml           # name of the referenced volume
      mountPath: /configfiles   # target directory inside the container
  volumes:
  - name: serverxml             # volume name
    configMap:
      name: cm-appconfigfiles   # use the ConfigMap "cm-appconfigfiles"
      items:
      - key: key-serverxml      # key in the ConfigMap
        path: server.xml        # file name generated in the mount directory
      - key: key-loggingproperties
        path: logging.properties
kubectl create -f cm-test-app.yaml
Create the Pod
kubectl exec -it cm-test-app -- bash
Enter the Pod
ls -la /configfiles
Verify the mount succeeded
Above, the items field customizes the names of the files mounted into the container. items can also be omitted, in which case the mounted files are named after the keys defined in the ConfigMap.
Modify the YAML file above:
apiVersion: v1
kind: Pod
metadata:
  name: cm-test-app
spec:
  containers:
  - name: cm-test-app
    image: kubeguide/tomcat-app:v1
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: serverxml
      mountPath: /configfiles
  volumes:
  - name: serverxml
    configMap:
      name: cm-appconfigfiles
After creating the Pod, enter the /configfiles directory:
ls ./
It returns:
key-loggingproperties key-serverxml
View the keys defined in the ConfigMap:
kubectl describe configmap cm-appconfigfiles
It returns:
Name: cm-appconfigfiles
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
key-loggingproperties:
----
handlers = location.org.apache.juli.FieldHandler
key-serverxml:
----
<?xml version='1.0' encoding='utf-8'?>
Events: <none>
3.5. ConfigMap caveats
1. A ConfigMap must be created before the Pods that use it.
2. It can only be used by Pods in the same namespace.
3. Static Pods cannot use ConfigMaps; only Pods managed by the API server can.
4. When mounted as a volume, a ConfigMap can only be mounted onto a directory, not onto a single file.
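One refinement of caveat 4: a plain ConfigMap volume mount replaces the entire target directory, but on reasonably recent Kubernetes versions subPath can project a single file into an existing directory without hiding its other contents (a sketch; note that subPath mounts do not receive later ConfigMap updates):

```yaml
    volumeMounts:
    - name: serverxml
      mountPath: /usr/local/tomcat/conf/server.xml  # only this file is overlaid
      subPath: server.xml                           # file path inside the volume
```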
4. Using the Downward API
The Downward API can inject Pod information into containers through environment variables or volumes.
4.1. Injecting Pod information into environment variables
cat dapi-test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh","-c","env"]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
  restartPolicy: Never
kubectl create -f dapi-test-pod.yaml
Create the Pod
kubectl logs dapi-test-pod
View the logs; it returns:
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://169.169.0.1:443
HOSTNAME=dapi-test-pod
SHLVL=1
HOME=/root
MY_POD_NAMESPACE=default
MY_POD_IP=172.17.0.4
KUBERNETES_PORT_443_TCP_ADDR=169.169.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://169.169.0.1:443
KUBERNETES_SERVICE_HOST=169.169.0.1
PWD=/
MY_POD_NAME=dapi-test-pod
As you can see, the environment variables defined in the Pod spec have been injected into the container.
Here the Downward API is used through valueFrom with three fields:
metadata.namespace, metadata.name, status.podIP
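Besides these three, fieldRef accepts several other Pod fields; a sketch (exact availability may vary slightly by Kubernetes version):

```yaml
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName          # node the Pod is scheduled to
    - name: MY_HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP          # IP of that node
    - name: MY_SERVICE_ACCOUNT
      valueFrom:
        fieldRef:
          fieldPath: spec.serviceAccountName
```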
4.2. Injecting container resource settings into environment variables
cat dapi-test-pod-container-vars.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod-container-vars
spec:
  containers:
  - name: test-container
    image: busybox
    imagePullPolicy: Never
    command: ["sh","-c"]
    args:
    - while true;do
        echo -en '\n';
        printenv MY_CPU_REQUEST MY_CPU_LIMIT;
        printenv MY_MEM_REQUEST MY_MEM_LIMIT;
        sleep 3600;
      done;
    resources:
      requests:
        memory: "32Mi"
        cpu: "125m"
      limits:
        memory: "64Mi"
        cpu: "250m"
    env:
    - name: MY_CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: test-container
          resource: requests.cpu
    - name: MY_CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: test-container
          resource: limits.cpu
    - name: MY_MEM_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: test-container
          resource: requests.memory
    - name: MY_MEM_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: test-container
          resource: limits.memory
  restartPolicy: Never
After creating the Pod, view its logs:
kubectl logs dapi-test-pod-container-vars
It returns:
1
1
33554432
67108864
This shows that the container's resource settings were successfully injected into the container's environment variables.
Verify once more:
kubectl exec -it dapi-test-pod-container-vars -- env
It returns:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=dapi-test-pod-container-vars
TERM=xterm
MY_MEM_LIMIT=67108864
MY_CPU_REQUEST=1
MY_CPU_LIMIT=1
MY_MEM_REQUEST=33554432
KUBERNETES_SERVICE_HOST=169.169.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://169.169.0.1:443
KUBERNETES_PORT_443_TCP=tcp://169.169.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=169.169.0.1
HOME=/root
Again using Downward API syntax, resourceFieldRef exposes a container's resource requests and limits as environment variables; the fields used here are requests.cpu, requests.memory, limits.cpu and limits.memory.
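A detail worth noting: MY_CPU_REQUEST prints 1 rather than 125m because resourceFieldRef has an optional divisor that defaults to 1, which rounds CPU up to whole cores. Setting it to 1m reports the value in millicores (a sketch):

```yaml
    - name: MY_CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: test-container
          resource: requests.cpu
          divisor: 1m   # report in millicores: 125 instead of 1
```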
4.3. Using the Downward API with volumes
The Downward API can also expose the Pod's labels and annotations, mounting them into the container as files through the volumes field.
cat dapi-test-pod-volume.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod-volume
  labels:
    zone: us-est-coast
    cluster: test-cluster1
    rack: rack-22
  annotations:
    build: two
    builder: join-doe
spec:
  containers:
  - name: test-container-volume
    image: busybox
    imagePullPolicy: Never
    command: ["sh","-c"]
    args:
    - while true;do
        if [[ -e /etc/labels ]];then
          echo -en '\n\n';cat /etc/labels;fi;
        if [[ -e /etc/annotations ]];then
          echo -en '\n\n';cat /etc/annotations;fi;
        sleep 3600;
      done;
    volumeMounts:
    - name: podinfo
      mountPath: /etc
      readOnly: false
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "labels"
        fieldRef:
          fieldPath: metadata.labels
      - path: "annotations"
        fieldRef:
          fieldPath: metadata.annotations
After the Pod is created, the contents of its labels and annotations are written out and mounted into the container as files.
4.4. Why the Downward API is needed
In a microservice architecture, service governance and a service registry are essential. Roughly, each node in a cluster writes some of its own information, such as its IP and name, into a configuration file; when the node starts, it reads this file and publishes the information to the registry, so that cluster nodes can discover each other automatically. All this requires is updating a configuration file, and that file can be populated through the Downward API, allowing configuration to evolve flexibly.
5. Pod lifecycle
A Pod's phase plays a crucial role in scheduling and restarts.

Phase | Description
---|---
Pending | The Pod has been created, but at least one container image has not been created yet
Running | All containers in the Pod have been created, and at least one is running, restarting, or starting
Succeeded | All containers in the Pod exited successfully and will not be restarted
Failed | All containers in the Pod have exited, and at least one exited with a failure status
Unknown | The Pod's status cannot be obtained for some reason, possibly due to network problems
6. Pod restart policy
Set with the restartPolicy field in the Pod's YAML file:
- Always: the kubelet automatically restarts the container whenever it fails
- OnFailure: the kubelet restarts the container only when it terminates with a nonzero exit code
- Never: the kubelet never restarts the container
The kubelet restarts failed containers with an exponentially increasing delay (1x, 2x, 4x, 8x the sync period), capped at 5 minutes. The effective restart policy is also constrained by the controller managing the Pod:
- Pods managed by an RC or DaemonSet must use Always
- Pods managed by a Job can use OnFailure or Never
- Pods managed directly by the kubelet are not limited by restartPolicy and are restarted whenever they fail
7. Pod health checks
To judge a Pod's health we first need health criteria. Strictly speaking we are judging the health of the containers inside the Pod, and that health has two aspects: livenessProbe tells the kubelet when a container needs to be restarted, while readinessProbe tells whether a container is ready to receive traffic; if not, the Pod is removed from the Service's Endpoints so that no traffic is routed to it.
7.1. LivenessProbe implementations
ExecAction
Runs a command inside the container; exit code 0 means the container is healthy.
cat liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - echo ok > /tmp/health;sleep 10;rm -rf /tmp/health;sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/health
      initialDelaySeconds: 15
      timeoutSeconds: 1
Create the Pod, then view its details:
kubectl describe pods liveness-exec
Among the events we find:
Normal Killing 4m32s (x2 over 5m52s) kubelet, 192.168.0.160 Container liveness failed liveness probe, will be restarted
Warning Unhealthy 4m32s (x6 over 6m12s) kubelet, 192.168.0.160 Liveness probe failed: cat: can't open '/tmp/health': No such file or directory
The container deletes /tmp/health 10 seconds after starting, while the first liveness probe runs only after 15 seconds, so cat returns a nonzero exit code. The livenessProbe therefore fails and the kubelet kills and restarts the container. Note that only the application container is killed; the Pod itself remains.
This can be verified:
kubectl get pods
liveness-exec 0/1 CrashLoopBackOff 7 15m
On the node where the Pod runs, inspect its containers:
docker ps
ad6715d201af k8s.gcr.io/pause:3.2 "/pause" 15 minutes ago Up 15 minutes
The pause container has been up the whole time, while the application container shows 7 restarts.
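Beyond initialDelaySeconds and timeoutSeconds, a probe's sensitivity can be tuned with a few more optional fields (a sketch based on the exec probe above):

```yaml
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      timeoutSeconds: 1
      periodSeconds: 10     # probe every 10s (the default)
      failureThreshold: 3   # restart after 3 consecutive failures (the default)
      successThreshold: 1   # must be 1 for liveness probes
```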
TCPSocketAction
Attempts a TCP connection to the container's IP and port; a successful connection means the container is healthy.
cat pod-with-healthcheck.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-healthcheck
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 30
      timeoutSeconds: 1
After creating the Pod, enter the container and check that nginx is present:
kubectl exec -it pod-with-healthcheck -- bash
dpkg -l | grep nginx
We can see that nginx is installed.
kubectl describe pods pod-with-healthcheck
shows which node the Pod was scheduled to; on that node's host, check for the nginx process:
ps -ef|grep nginx
It returns:
root 105068 105052 0 15:24 ? 00:00:00 nginx: master process nginx -g daemon off;
101 105113 105068 0 15:24 ? 00:00:00 nginx: worker process
root 118011 1691 0 16:46 pts/0 00:00:00 grep --color=auto nginx
HTTPGetAction
Uses the status code of an HTTP GET request as the liveness check; a code between 200 and 399 means the container is healthy.
cat pod-with-healthcheck.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-healthcheck
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /_status/healthz
        port: 80
      initialDelaySeconds: 30
      timeoutSeconds: 1
After creating the Pod, view its details.
We find:
Normal Killing 9m7s (x2 over 10m) kubelet, 192.168.0.160 Container nginx failed liveness probe, will be restarted
Normal Pulling 9m7s (x3 over 11m) kubelet, 192.168.0.160 Pulling image "nginx"
Warning Unhealthy 6m7s (x14 over 10m) kubelet, 192.168.0.160 Liveness probe failed: HTTP probe failed with statuscode: 404
Warning BackOff 65s (x11 over 4m57s) kubelet, 192.168.0.160 Back-off restarting failed container
This shows the container was restarted after the livenessProbe failed: nginx serves nothing at /_status/healthz, so the probe receives a 404.
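If the goal were a passing probe rather than demonstrating a restart, the probe could point at a path nginx actually serves (a sketch):

```yaml
    livenessProbe:
      httpGet:
        path: /          # the default nginx welcome page returns 200
        port: 80
      initialDelaySeconds: 30
      timeoutSeconds: 1
```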
ReadinessProbe
Configured in the same ways as LivenessProbe (exec, tcpSocket or httpGet).
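A minimal readinessProbe for the nginx Pod above could look like the following sketch; while it fails, the Pod is marked NotReady and removed from the Service's Endpoints rather than being restarted:

```yaml
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```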