《Kubernetes权威指南》 (The Definitive Guide to Kubernetes) Study Notes, Part 11: Service

1. Definition

A Service provides a single entry address for a group of containers with the same functionality, load-balancing requests across the backend container applications.

Below is a template for defining a Service:
cat service.yaml

apiVersion: v1      # API version
kind: Service       # object type
metadata:           # metadata
  name:             # Service name
  namespace:        # namespace
  labels:           # labels
    name:
  annotations:      # annotations
    name:
spec:               # specification
  type:             # Service type
  clusterIP:        # virtual IP (VIP)
  selector:         # label selector matching the Pod replicas
  sessionAffinity:  # session affinity
  ports:
  - name:           # port name
    port:           # virtual port exposed by the Service
    targetPort:     # container port in the Pod
    nodePort:       # port mapped onto the host
status:             # set only when an external cloud load balancer is attached
  loadBalancer:
    ingress:
    - ip:
      hostname:

The template above only lists the fields a Service can define; the concrete examples below include just the relevant parts.
Services are generally categorized by the value of type.
NodePort Service
cat service.yaml

apiVersion: v1      # API version
kind: Service       # object type
metadata:           # metadata
  name:             # Service name
  namespace:        # namespace
  labels:           # labels
    name:
  annotations:      # annotations
    name:
spec:               # specification
  type: NodePort    # Service type
  selector:         # label selector matching the Pod replicas
  sessionAffinity:  # session affinity
  ports:
  - name:           # port name
    port:           # virtual port exposed by the Service
    targetPort:     # container port in the Pod
    nodePort:       # port mapped onto the host

ClusterIP Service
cat service.yaml

apiVersion: v1      # API version
kind: Service       # object type
metadata:           # metadata
  name:             # Service name
  namespace:        # namespace
  labels:           # labels
    name:
  annotations:      # annotations
    name:
spec:               # specification
  selector:         # label selector matching the Pod replicas
  sessionAffinity:  # session affinity
  ports:
  - name:           # port name
    port:           # virtual port exposed by the Service
    targetPort:     # container port in the Pod

LoadBalancer Service
cat service.yaml

apiVersion: v1        # API version
kind: Service         # object type
metadata:             # metadata
  name:               # Service name
  namespace:          # namespace
  labels:             # labels
    name:
  annotations:        # annotations
    name:
spec:                 # specification
  type: LoadBalancer  # Service type
  clusterIP:          # virtual IP (VIP)
  selector:           # label selector matching the Pod replicas
  sessionAffinity:    # session affinity
  ports:
  - name:             # port name
    port:             # virtual port exposed by the Service
    targetPort:       # container port in the Pod
    nodePort:         # port mapped onto the host
status:               # reported for the external cloud load balancer
  loadBalancer:
    ingress:
    - ip:             # external load balancer IP
      hostname:       # external load balancer hostname

A brief summary of the less commonly used fields:

  • sessionAffinity: empty (None) by default; the optional value ClientIP forwards requests from the same client to the same Pod
  • clusterIP: when type=ClusterIP this field can be set explicitly to pin the VIP address
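As a minimal sketch of these two fields together (the name my-app and the addresses are placeholders, not from the book; the pinned clusterIP must lie inside the cluster's service CIDR):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app               # placeholder name
spec:
  type: ClusterIP
  clusterIP: 10.0.0.100      # pin the VIP to a fixed address
  sessionAffinity: ClientIP  # same client IP always reaches the same Pod
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```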

2. Regular Service

Applications that serve external traffic from inside a container generally rely on the TCP/IP mechanism, listening on an IP address and port.
cat javaapp-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: javaapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: javaapp
  template:
    metadata:
      name: javaapp
      labels:
        app: javaapp
    spec:
      containers:
      - name: javaapp
        image: tomcat:8.5
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: default-page
          mountPath: /usr/local/tomcat/webapps/ROOT
      volumes:
      - name: default-page
        hostPath:
          path: /tmp/ROOT

kubectl create -f javaapp-deploy.yaml
kubectl get po -o wide

javaapp-6bc8fcd8cd-p9gq7   1/1     Running   0          16h   10.244.21.3     192.168.0.162   <none>           <none>
javaapp-6bc8fcd8cd-rk2ql   1/1     Running   0          16h   10.244.1.2      192.168.0.163   <none>           <none>

curl http://10.244.21.3:8080 serves the default page

Where does the host mount directory /tmp/ROOT come from? In the Tomcat container image the default webapp directory /usr/local/tomcat/webapps is empty; the ROOT folder must be copied from the sibling directory webapps.dist into webapps before the default page can be served.
The concrete steps are as follows:
cat javaapp.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: javaapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: javaapp
  template:
    metadata:
      name: javaapp
      labels:
        app: javaapp
    spec:
      containers:
      - name: javaapp
        image: tomcat:8.5
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080

kubectl create -f javaapp.yaml creates the Pod replicas
kubectl cp default/javaapp-7dfc6d8dd7-xcbms:/usr/local/tomcat/webapps.dist/ROOT /tmp/ROOT copies ROOT out of the Pod into /tmp on the node for persistence; since hostPath is used here, the directory must then be copied to every node
scp -r /tmp/ROOT node162:/tmp
scp -r /tmp/ROOT node163:/tmp

Next, create the Service; there are generally two ways.
Command-line approach
kubectl expose deployment javaapp creates the Service; the Service port defaults to the containerPort

[root@node161 book]# kubectl  get svc | grep javaapp
javaapp                                                  ClusterIP   10.0.0.208   <none>        8080/TCP    4m22s

curl http://10.0.0.208:8080 serves the default page

YAML file approach
cat javaapp-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: javaapp
spec:
  ports:
  - port: 8081
    targetPort: 8080
  selector:
    app: javaapp

kubectl create -f javaapp-service.yaml creates the Service
kubectl get svc | grep javaapp

[root@node161 book]# kubectl  get svc | grep javaapp
javaapp                                                  ClusterIP   10.0.0.120   <none>        8081/TCP    2m32s

curl http://10.0.0.120:8081 serves the default page


3. Multi-Port Service

When the containers provide multiple services they listen on multiple ports; a Service can be configured with multiple ports accordingly, one per application.
Below is an abbreviated configuration.

cat javaapp-multiport-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: javaapp
spec:
  ports:
  - port: 8081
    targetPort: 8080
    name: web
  - port: 8085
    targetPort: 8005
    name: management
  selector:
    app: javaapp

ps: each targetPort above must match the corresponding containerPort
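A targetPort may also reference a container port by name instead of by number, which keeps the Service valid even if the Pod's numeric port changes later. A sketch assuming the containers declare ports named http and admin (these names are illustrative, not from the book):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: javaapp
spec:
  selector:
    app: javaapp
  ports:
  - name: web
    port: 8081
    targetPort: http     # resolves to the containerPort named "http"
  - name: management
    port: 8085
    targetPort: admin    # resolves to the containerPort named "admin"
```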

Besides that, a Service port can declare a protocol for its traffic. A typical example is the kube-dns service, which can be inspected with
kubectl get svc -n kube-system -o yaml

    ports:
    - name: dns
      port: 53
      protocol: UDP
      targetPort: 53
    - name: dns-tcp
      port: 53
      protocol: TCP
      targetPort: 53

4. External Service

Sometimes an application in the cluster needs to connect to a database outside the cluster. This can be achieved with a Service that has no label selector, in two steps.
1. Create a Service without a label selector
cat external-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

A Service like this cannot select backend Pods, so the system does not create an Endpoints object automatically; one with the same name as the Service must be created by hand.

2. Create an Endpoints object with the same name as the Service
cat external-endpoint.yaml

apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
- addresses:
  - ip: xx.xx.xx.xx
  ports:
  - port: 80

Requests to the Service from step 1 are then forwarded to xx.xx.xx.xx.
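An alternative not covered above is an ExternalName Service, which maps the Service name to an external DNS name through a CNAME record instead of Endpoints (db.example.com is a placeholder hostname):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: db.example.com  # placeholder external hostname
```

In-cluster clients resolve my-service and are handed the external name directly; no proxying or Endpoints are involved.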

5、Headless Service

Kubernetes load balancing has two built-in modes, round-robin and session affinity. When neither default policy is wanted, Kubernetes supports creating a headless Service, one with no cluster IP (i.e. clusterIP: None); clients then receive the backend Pod addresses directly and balance among them themselves.

Deploying a Cassandra cluster
Create the Service
cat cassandra-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: cassandra
  labels:
    name: cassandra
spec:
  clusterIP: None
  ports:
  - port: 9042
  selector:
    app: cassandra

kubectl create -f cassandra-service.yaml

Create the StatefulSet
cat cassandra-sts.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra:3.11
          imagePullPolicy: IfNotPresent
          resources:
            requests:
              memory: "1Gi"
              cpu: "0.5"
            limits:
              memory: "1Gi"
              cpu: "0.5"
          securityContext:
            capabilities:
              add:
                - IPC_LOCK
          lifecycle:
            preStop:
              exec:
                command:
                - /bin/sh
                - -c
                - nodetool drain
          ports:
            - containerPort: 7000
              name: intra-node
            - containerPort: 7001
              name: tls-intra-node
            - containerPort: 7199
              name: jmx
            - containerPort: 9042
              name: cql
          env:
            - name: CASSANDRA_SEEDS
              value: "cassandra-0.cassandra.default.svc.cluster.local,cassandra-1.cassandra.default.svc.cluster.local,cassandra-2.cassandra.default.svc.cluster.local"
            - name: MAX_HEAP_SIZE
              value: 512M
            - name: HEAP_NEWSIZE
              value: 100M
            - name: CASSANDRA_CLUSTER_NAME
              value: "cx"
            - name: CASSANDRA_DC
              value: "DC1"
            - name: CASSANDRA_RACK
              value: "Rack1"
            - name: CASSANDRA_ENDPOINT_SNITCH
              value: GossipingPropertyFileSnitch
          volumeMounts:
            - name: cassandra-storage
              mountPath: /var/lib/cassandra
  volumeClaimTemplates:
  - metadata:
      name: cassandra-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "gluster-heketi"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 4Gi

kubectl create -f cassandra-sts.yaml
kubectl exec -it cassandra-0 -- nodetool status checks the cluster status

Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens       Owns (effective)  Host ID                               Rack
UN  10.244.1.2   297.04 KiB  256          63.7%             fcb8cd5d-682e-45bc-b23b-8e5b1bcba5ee  Rack1
UN  10.244.21.3  70.69 KiB  256          68.7%             4f6229db-b8f9-485c-890a-ca197df4e02b  Rack1
UN  10.244.68.3  297.03 KiB  256          67.6%             53b83559-7f3f-41e6-af40-c926d8c96f83  Rack1

ps: the Owns column shows the data distribution; the effective percentages add up to roughly 200% (63.7 + 68.7 + 67.6)

Scale the cluster down
kubectl scale sts cassandra --replicas=2
kubectl exec -it cassandra-0 -- nodetool status checks the cluster status

--  Address      Load       Tokens       Owns (effective)  Host ID                               Rack
UN  10.244.1.2   297.04 KiB  256          63.7%             fcb8cd5d-682e-45bc-b23b-8e5b1bcba5ee  Rack1
UN  10.244.21.3  70.69 KiB  256          68.7%             4f6229db-b8f9-485c-890a-ca197df4e02b  Rack1
DN  10.244.68.3  297.03 KiB  256          67.6%             53b83559-7f3f-41e6-af40-c926d8c96f83  Rack1

One node is now Down, but its data has not yet migrated to the Up nodes; migrate the data before removing the node
kubectl exec -it cassandra-0 -- nodetool removenode 53b83559-7f3f-41e6-af40-c926d8c96f83
After a while, check again

Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens       Owns (effective)  Host ID                               Rack
UN  10.244.1.2   297.04 KiB  256          100.0%            fcb8cd5d-682e-45bc-b23b-8e5b1bcba5ee  Rack1
UN  10.244.21.3  80.81 KiB  256          100.0%            4f6229db-b8f9-485c-890a-ca197df4e02b  Rack1

The cluster's automatic node discovery works as follows: after a Pod starts, the Cassandra application calls a REST API on the Kubernetes master to query the Endpoints routed by the cassandra Service; whenever a new Endpoint appears it is added to the cluster.

Most decentralized clusters can be deployed through a headless Service in the same way, for example MongoDB and Elasticsearch clusters.

Extension
The Cassandra cluster above used dynamically provisioned storage; below, the local filesystem is used to persist the data instead.
Deploy the PVs
cat cassandra-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: cassandra-storage-0
  labels:
    type: local
    app: cassandra
spec:
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/cassandra-storage-0
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cassandra-storage-1
  labels:
    type: local
    app: cassandra
spec:
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/cassandra-storage-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cassandra-storage-2
  labels:
    type: local
    app: cassandra
spec:
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/cassandra-storage-2

Deploy the headless Service as shown earlier

Deploy the cluster
cat cassandra-sts.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra:3.11
          imagePullPolicy: IfNotPresent
          resources:
            requests:
              memory: "1Gi"
              cpu: "0.5"
            limits:
              memory: "1Gi"
              cpu: "0.5"
          securityContext:
            capabilities:
              add:
                - IPC_LOCK
          lifecycle:
            preStop:
              exec:
                command:
                - /bin/sh
                - -c
                - nodetool drain
          ports:
            - containerPort: 7000
              name: intra-node
            - containerPort: 7001
              name: tls-intra-node
            - containerPort: 7199
              name: jmx
            - containerPort: 9042
              name: cql
          env:
            - name: CASSANDRA_SEEDS
              value: "cassandra-0.cassandra.default.svc.cluster.local,cassandra-1.cassandra.default.svc.cluster.local,cassandra-2.cassandra.default.svc.cluster.local"
            - name: MAX_HEAP_SIZE
              value: 512M
            - name: HEAP_NEWSIZE
              value: 100M
            - name: CASSANDRA_CLUSTER_NAME
              value: "cx"
            - name: CASSANDRA_DC
              value: "DC1"
            - name: CASSANDRA_RACK
              value: "Rack1"
            - name: CASSANDRA_ENDPOINT_SNITCH
              value: GossipingPropertyFileSnitch
          volumeMounts:
            - name: cassandra-storage
              mountPath: /var/lib/cassandra
  volumeClaimTemplates:
  - metadata:
      name: cassandra-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: ""
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 4Gi
      selector:             # bind PVs through label selection
        matchLabels:
          app: cassandra

kubectl get pvc,pv

persistentvolumeclaim/cassandra-storage-cassandra-0        Bound    cassandra-storage-1                        4Gi        RWO                             123m
persistentvolumeclaim/cassandra-storage-cassandra-1        Bound    cassandra-storage-0                        4Gi        RWO                             123m
persistentvolumeclaim/cassandra-storage-cassandra-2        Bound    cassandra-storage-2                        4Gi        RWO                             123m
persistentvolume/cassandra-storage-0                        4Gi        RWO            Retain           Bound    default/cassandra-storage-cassandra-1                                  125m
persistentvolume/cassandra-storage-1                        4Gi        RWO            Retain           Bound    default/cassandra-storage-cassandra-0                                  125m
persistentvolume/cassandra-storage-2                        4Gi        RWO            Retain           Bound    default/cassandra-storage-cassandra-2                                  125m

The key field here is volumeClaimTemplates: it generates PVCs automatically, each named from volumeClaimTemplates.metadata.name plus the Pod name. Three PVs labeled app=cassandra were created above, and the PVCs bind to them in no particular order.


6. Accessing Pods from Outside the Cluster

Map container ports inside the Pod onto the host.

Container-level hostPort
This resembles Docker port mapping and is achieved with hostPort.
cat javaapp-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: javaapp-pod
spec:
  containers:
  - name: javaapp-pod
    image: tomcat:8.5
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 8080
      hostPort: 8081
    volumeMounts:
    - name: default-page
      mountPath: /usr/local/tomcat/webapps/ROOT
  volumes:
  - name: default-page
    hostPath:
      path: /tmp/ROOT

kubectl get po -o wide shows which node the Pod was scheduled to
curl http://nodex:8081 serves the default Tomcat page

Pod-level hostNetwork
cat javaapp-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: javaapp-pod
spec:
  hostNetwork: true
  containers:
  - name: javaapp-pod
    image: tomcat:8.5
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: default-page
      mountPath: /usr/local/tomcat/webapps/ROOT
  volumes:
  - name: default-page
    hostPath:
      path: /tmp/ROOT

With hostNetwork: true, every container port in the Pod is exposed on the host. If a container does not specify hostPort, it defaults to the containerPort; if specified, it must equal the containerPort.

7. Accessing a Service from Outside the Cluster

Map the Service port onto the host.
cat javaapp-pod-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: javaap-pod
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 8081
  selector:
    app: javaapp-pod   

kubectl create -f javaapp-pod-service.yaml creates the Service (note that nodePort must fall within the apiserver's service node port range, which defaults to 30000-32767)
cat javaapp-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: javaapp-pod
  labels:
    app: javaapp-pod
spec:
  containers:
  - name: javaapp-pod
    image: tomcat:8.5
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: default-page
      mountPath: /usr/local/tomcat/webapps/ROOT
  volumes:
  - name: default-page
    hostPath:
      path: /tmp/ROOT

kubectl create -f javaapp-pod.yaml creates the Pod
curl http://node161:8081 reaches the Service; the request is forwarded to one of the Pods behind it

A Service can also be mapped to a load balancer provided by a public cloud vendor through type LoadBalancer; here is an example
cat loadbalancer-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: java-service
spec:
  selector:
    app: javaapp
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    nodePort: 30061
  clusterIP: 10.0.171.239
  loadBalancerIP: 78.11.24.19
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 146.148.47.155
      hostname: xx.xx.xx

ps: traffic arriving at the external load balancer is directed to the backend Pods; exactly how this is implemented depends on the cloud provider

8. Ingress Routing