《Kubernetes权威指南》 Study Notes, Part 2: Installing Kubernetes from Binaries

I. Basic Environment Setup

Node plan

hostname   ip              role           components to install
node161    192.168.0.161   master, node   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, flannel, kube-proxy, docker, kubelet, coredns
node162    192.168.0.162   node           kubelet, kube-proxy, docker, flannel, coredns
node163    192.168.0.163   node           kubelet, kube-proxy, docker, flannel, coredns

Steps 1-6 below must be run on every node; step 7 is run only on node161.
1. Disable swap
Run swapoff -a, or edit /etc/fstab and comment out the swap mount so that swap stays off after a reboot.
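
For example, both can be done in one go (this assumes a standard swap entry in /etc/fstab; double-check the file afterwards):

swapoff -a
sed -i '/\sswap\s/s/^/#/' /etc/fstab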

2. Disable SELinux (the change takes effect after a reboot)
sed -i 's/enforcing/disabled/g' /etc/selinux/config

3. Local hostname resolution
cat /etc/hosts

192.168.0.161 node161
192.168.0.162 node162
192.168.0.163 node163
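
The entries can be appended on each node with, for example:

cat >> /etc/hosts <<EOF
192.168.0.161 node161
192.168.0.162 node162
192.168.0.163 node163
EOF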

4. Stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld

5. Create the directory that holds all service configuration files
mkdir -p /etc/kubernetes/

6. Configure a Docker registry mirror
mkdir -p /etc/docker
echo '{"registry-mirrors":["https://nr630v1c.mirror.aliyuncs.com"]}' > /etc/docker/daemon.json

7. Passwordless SSH between nodes (optional)

ssh-copy-id -i .ssh/id_rsa.pub  node161  
ssh-copy-id -i .ssh/id_rsa.pub  node162  
ssh-copy-id -i .ssh/id_rsa.pub  node163
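
If node161 does not have a key pair yet, generate one first (the interactive defaults are fine):

ssh-keygen -t rsa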

II. Installing Services on the Master Node

1. Install Docker

Download Docker

Unpack the installation files

tar xzvf docker-19.03.6.tgz
cp -ar docker/* /usr/bin

Create the Docker systemd service file

cat /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target

Verify the Docker installation

systemctl start docker && systemctl enable docker && docker version
A successful installation returns:

Client: Docker Engine - Community
 Version:           19.03.6
 API version:       1.40
 Go version:        go1.12.16
 Git commit:        369ce74a3c
 Built:             Thu Feb 13 01:24:49 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.6
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.16
  Git commit:       369ce74a3c
  Built:            Thu Feb 13 01:32:22 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.12
  GitCommit:        35bd7a5f69c13e1563af8a93431411cd9ecf5021
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

2. Install etcd

etcd stores the state of every resource object in the Kubernetes cluster, so it must be up and running before the rest of the cluster is started.

Download etcd

Unpack the installation files

tar zvxf etcd-v3.3.25-linux-amd64.tar.gz   
cd etcd-v3.3.25-linux-amd64
cp -ar etcd /usr/bin
cp -ar etcdctl /usr/bin

Create the etcd systemd service file
cat /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=simple
EnvironmentFile=/etc/kubernetes/etcd.conf
ExecStart=/usr/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new

[Install]
WantedBy=multi-user.target

Create the etcd configuration file
cat /etc/kubernetes/etcd.conf

ETCD_NAME="etcd"
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_PEER_URLS="http://192.168.0.161:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.0.161:2379"

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.0.161:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.161:2379"
ETCD_INITIAL_CLUSTER="etcd=http://192.168.0.161:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Create the etcd data directory
mkdir -p /var/lib/etcd

Start etcd
systemctl daemon-reload && systemctl start etcd && systemctl status etcd
This returns:

● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since Sun 2021-05-16 11:06:48 CST; 8s ago
 Main PID: 11125 (etcd)
    Tasks: 7
   Memory: 10.6M
   CGroup: /system.slice/etcd.service
           └─11125 /usr/bin/etcd --name=etcd --data-dir=/var/lib/etcd/ --listen-peer-urls=http://192.168.0.161:2380 --listen-client-urls=http://192.168....

May 16 11:06:48 node161 etcd[11125]: raft.node: 8c41e24ae0fa16d3 elected leader 8c41e24ae0fa16d3 at term 2
May 16 11:06:48 node161 etcd[11125]: published {Name:etcd ClientURLs:[http://192.168.0.161:2379]} to cluster b2de1eab5825a73d
May 16 11:06:48 node161 etcd[11125]: forgot to set Type=notify in systemd service file?
May 16 11:06:48 node161 etcd[11125]: setting up the initial cluster version to 3.3
May 16 11:06:48 node161 etcd[11125]: ready to serve client requests
May 16 11:06:48 node161 etcd[11125]: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
May 16 11:06:48 node161 etcd[11125]: ready to serve client requests
May 16 11:06:48 node161 etcd[11125]: serving insecure client requests on 192.168.0.161:2379, this is strongly discouraged!
May 16 11:06:48 node161 etcd[11125]: set the initial cluster version to 3.3
May 16 11:06:48 node161 etcd[11125]: enabled capabilities for version 3.3

Verify etcd
etcdctl --endpoints="http://192.168.0.161:2379" cluster-health
This returns:

member 8c41e24ae0fa16d3 is healthy: got healthy result from http://192.168.0.161:2379
cluster is healthy

3. Install flannel

flannel is the network plugin that handles pod-to-pod and pod-to-node communication across the cluster; the pod network configuration it uses is stored in etcd.

Download flannel

Write the pod network configuration into etcd
etcdctl --endpoints="http://192.168.0.161:2379" set /coreos.com/network/config '{ "Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}'
This returns:

{ "Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}

Unpack the installation files

tar zvxf flannel-v0.13.0-linux-amd64.tar.gz    
mv flanneld mk-docker-opts.sh /usr/bin

Create the flannel systemd service file
cat /usr/lib/systemd/system/flannel.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/kubernetes/flannel.conf
ExecStart=/usr/bin/flanneld --ip-masq $FLANNEL_ARGS
ExecStartPost=/usr/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

Create the flannel configuration file
cat /etc/kubernetes/flannel.conf

FLANNEL_ARGS="--etcd-endpoints=http://192.168.0.161:2379"  

After etcd assigns a subnet to flannel, the mk-docker-opts.sh script converts the subnet information into Docker startup options and writes them to /run/flannel/subnet.env (the -d argument in the unit file above); Docker then reads that file and hands out concrete IPs to the pods on this node.
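
Once flannel is running, the generated options line typically looks similar to the following (the subnet and MTU values here are illustrative and will differ per node):

grep DOCKER_NETWORK_OPTIONS /run/flannel/subnet.env

DOCKER_NETWORK_OPTIONS=" --bip=10.244.25.1/24 --ip-masq=false --mtu=1450"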

Update the Docker systemd unit so that dockerd uses the flannel-assigned subnet
cat /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target

Start flannel
systemctl start flannel && systemctl enable flannel && systemctl status flannel && systemctl restart docker
This returns:

Created symlink from /etc/systemd/system/multi-user.target.wants/flannel.service to /usr/lib/systemd/system/flannel.service.
● flannel.service - Flanneld overlay address etcd agent
   Loaded: loaded (/usr/lib/systemd/system/flannel.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2021-05-16 12:57:20 CST; 110ms ago
 Main PID: 11229 (flanneld)
   CGroup: /system.slice/flannel.service
           └─11229 /usr/bin/flanneld --ip-masq --etcd-endpoints=http://192.168.0.161:2379

May 16 12:57:20 node161 flanneld[11229]: I0516 12:57:20.128893   11229 vxlan_network.go:60] watching for new subnet leases
May 16 12:57:20 node161 flanneld[11229]: I0516 12:57:20.129559   11229 iptables.go:167] Deleting iptables rule: ! -s 10.244.0.0/16 -d 10.244.0....ASQUERADE
May 16 12:57:20 node161 flanneld[11229]: I0516 12:57:20.132798   11229 iptables.go:155] Adding iptables rule: -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
May 16 12:57:20 node161 flanneld[11229]: I0516 12:57:20.140370   11229 iptables.go:155] Adding iptables rule: -s 10.244.0.0/16 ! -d 224.0.0.0/4...ASQUERADE
May 16 12:57:20 node161 flanneld[11229]: I0516 12:57:20.143213   11229 iptables.go:155] Adding iptables rule: -s 10.244.0.0/16 -j ACCEPT
May 16 12:57:20 node161 flanneld[11229]: I0516 12:57:20.150539   11229 iptables.go:155] Adding iptables rule: -d 10.244.0.0/16 -j ACCEPT
May 16 12:57:20 node161 flanneld[11229]: I0516 12:57:20.155195   11229 main.go:433] Waiting for 22h59m59.925478438s to renew lease
May 16 12:57:20 node161 flanneld[11229]: I0516 12:57:20.161061   11229 iptables.go:155] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.25.0...-j RETURN
May 16 12:57:20 node161 systemd[1]: Started Flanneld overlay address etcd agent.
May 16 12:57:20 node161 flanneld[11229]: I0516 12:57:20.170922   11229 iptables.go:155] Adding iptables rule: ! -s 10.244.0.0/16 -d 10.244.0.0/...ASQUERADE
Hint: Some lines were ellipsized, use -l to show in full.

Check the result
ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:46:79:55 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.161/24 brd 192.168.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::16de:65e0:8390:aa55/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:bd:83:95:5f brd ff:ff:ff:ff:ff:ff
    inet 10.244.25.1/24 brd 10.244.25.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether e2:4b:19:41:b9:e8 brd ff:ff:ff:ff:ff:ff
    inet 10.244.25.0/32 brd 10.244.25.0 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::e04b:19ff:fe41:b9e8/64 scope link 
       valid_lft forever preferred_lft forever

The interface list above shows that docker0 now sits inside the pod network configured earlier, and that a flannel.1 interface has been created on this node.

4. Install kube-apiserver

kube-apiserver is the most important component in the cluster: every kubectl operation goes through it, and every create/read/update/delete of a cluster resource is processed by kube-apiserver first, which then has etcd persist the latest resource state.

Download kube-apiserver

Unpack the installation files

tar zvxf kubernetes-server-linux-amd64.tar.gz  
cd kubernetes/server/bin   
cp kube-apiserver /usr/bin 
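
The verification steps later in these notes use kubectl, which ships in the same bin directory, so it is convenient to copy it now as well:

cp kubectl /usr/bin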

Create the kube-apiserver systemd service file
cat /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Create the kube-apiserver configuration file
cat /etc/kubernetes/apiserver

KUBE_API_ARGS="--etcd-servers=http://192.168.0.161:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=10.0.0.0/24 --service-node-port-range=1-65535 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --logtostderr=false --log-dir=/var/log/kubernetes --v=0"

Start kube-apiserver
systemctl start kube-apiserver && systemctl enable kube-apiserver && systemctl status kube-apiserver
This returns:

● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2021-05-16 13:52:46 CST; 7s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 15534 (kube-apiserver)
    Tasks: 7
   Memory: 322.5M
   CGroup: /system.slice/kube-apiserver.service
           └─15534 /usr/bin/kube-apiserver --etcd-servers=http://192.168.0.161:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-clust...

May 16 13:52:41 node161 systemd[1]: Starting Kubernetes API Server...
May 16 13:52:41 node161 kube-apiserver[15534]: Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version.
May 16 13:52:41 node161 kube-apiserver[15534]: Flag --insecure-port has been deprecated, This flag will be removed in a future version.
May 16 13:52:46 node161 systemd[1]: Started Kubernetes API Server.
May 16 13:52:46 node161 kube-apiserver[15534]: E0516 13:52:46.390923   15534 controller.go:152] Unable to remove old endpoints from kubernetes s...rrorMsg:
Hint: Some lines were ellipsized, use -l to show in full.
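
As an extra sanity check, the insecure port can also be queried directly; a healthy apiserver answers with ok:

curl http://192.168.0.161:8080/healthz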

5. Install kube-controller-manager

kube-controller-manager runs the controllers that manage the various resource types in the cluster.

Download kube-controller-manager

Unpack the installation files

tar zvxf kubernetes-server-linux-amd64.tar.gz  
cd kubernetes/server/bin   
cp kube-controller-manager /usr/bin   

Create the kube-controller-manager systemd service file
cat /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Create the kube-controller-manager configuration file
cat /etc/kubernetes/controller-manager

KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=0"

Create the kubeconfig file that kube-controller-manager uses to connect to kube-apiserver
cat /etc/kubernetes/kubeconfig

apiVersion: v1
kind: Config
users:
- name: client
  user: 
clusters:
- name: default
  cluster:
    server: http://192.168.0.161:8080
contexts:
- context:
    cluster: default
    user: client
  name: default
current-context: default

Start kube-controller-manager
systemctl start kube-controller-manager && systemctl enable kube-controller-manager && systemctl status kube-controller-manager
This returns:

● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2021-05-16 14:36:48 CST; 57s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 18798 (kube-controller)
   CGroup: /system.slice/kube-controller-manager.service
           └─18798 /usr/bin/kube-controller-manager --kubeconfig=/etc/kubernetes/kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=0

May 16 14:36:48 node161 systemd[1]: Started Kubernetes Controller Manager.
May 16 14:36:48 node161 systemd[1]: Starting Kubernetes Controller Manager...
May 16 14:36:53 node161 kube-controller-manager[18798]: E0516 14:36:53.862257   18798 core.go:230] failed to start cloud node lifecycle controlle...rovided
May 16 14:36:54 node161 kube-controller-manager[18798]: E0516 14:36:54.278871   18798 core.go:90] Failed to start service con

6. Install kube-scheduler

kube-scheduler is responsible for scheduling pods onto specific nodes.

Download kube-scheduler

Unpack the installation files

tar zvxf kubernetes-server-linux-amd64.tar.gz  
cd kubernetes/server/bin   
cp kube-scheduler /usr/bin   

Create the kube-scheduler systemd service file
cat /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler 
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Create the kube-scheduler configuration file
cat /etc/kubernetes/scheduler

KUBE_SCHEDULER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=0"

Start kube-scheduler
systemctl start kube-scheduler && systemctl enable kube-scheduler && systemctl status kube-scheduler
This returns:

● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2021-05-16 15:03:35 CST; 276ms ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 20808 (kube-scheduler)
   CGroup: /system.slice/kube-scheduler.service
           └─20808 /usr/bin/kube-scheduler --kubeconfig=/etc/kubernetes/kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=0

May 16 15:03:35 node161 systemd[1]: Started Kubernetes Scheduler.
May 16 15:03:35 node161 systemd[1]: Starting Kubernetes Scheduler...
May 16 15:03:35 node161 kube-scheduler[20808]: I0516 15:03:35.665520   20808 registry.go:173] Registering SelectorSpread plugin
May 16 15:03:35 node161 kube-scheduler[20808]: I0516 15:03:35.665612   20808 registry.go:173] Registering SelectorSpread plugin

7. Install kube-proxy

kube-proxy forwards client traffic to the group of pods behind a service. It watches (through the API server) for changes to Service and Endpoints objects and maintains the routing rules that map each service to its endpoints, so a service stays reachable even when the IPs of its pods change.

Download kube-proxy

Unpack the installation files

tar zvxf kubernetes-server-linux-amd64.tar.gz  
cd kubernetes/server/bin   
cp kube-proxy /usr/bin   

Create the kube-proxy systemd service file
cat /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
Requires=network.service

[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Create the kube-proxy configuration file
cat /etc/kubernetes/proxy

KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --hostname-override=192.168.0.161  --cluster-cidr=10.0.0.0/24 --logtostderr=false --log-dir=/var/log/kubernetes --v=2"

Start kube-proxy
systemctl start kube-proxy && systemctl enable kube-proxy && systemctl status kube-proxy
This returns:

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
● kube-proxy.service - Kubernetes Kube-proxy Server
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2021-05-16 15:44:55 CST; 229ms ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 23826 (kube-proxy)
   CGroup: /system.slice/kube-proxy.service
           └─23826 /usr/bin/kube-proxy --kubeconfig=/etc/kubernetes/kubeconfig --hostname-ov...

May 16 15:44:55 node161 systemd[1]: Started Kubernetes Kube-proxy Server.
May 16 15:44:55 node161 systemd[1]: Starting Kubernetes Kube-proxy Server...
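
As an additional check, the rules programmed by kube-proxy can be inspected; in the default iptables mode the KUBE-SERVICES chains should be present:

iptables-save | grep KUBE-SERVICES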

8. Install kubelet

kubelet manages the containers that the cluster creates on its node, keeps the actual pod state in line with the desired state, and registers the node with the master.

Download kubelet

Unpack the installation files

tar zvxf kubernetes-server-linux-amd64.tar.gz  
cd kubernetes/server/bin   
cp kubelet /usr/bin   

Create the kubelet systemd service file
cat /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Create the kubelet configuration file
cat /etc/kubernetes/kubelet

KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --hostname-override=192.168.0.161 --logtostderr=false  --log-dir=/var/log/kubernetes --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 --v=0"  

Create the kubelet data directory
mkdir -p /var/lib/kubelet

Start kubelet
systemctl start kubelet && systemctl enable kubelet && systemctl status kubelet
This returns:

● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2021-05-16 16:10:10 CST; 205ms ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 25796 (kubelet)
   CGroup: /system.slice/kubelet.service
           └─25796 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubeconfig --hostname-override=192.168.0.161 --logtostderr=false --data-dir=/var/lib/ku...

May 16 16:10:10 node161 systemd[1]: Started Kubernetes Kubelet Server.
May 16 16:10:10 node161 systemd[1]: Starting Kubernetes Kubelet Server...

III. Installing Services on the Worker Nodes

1. Install Docker

The Docker installation steps are the same as on the master node and are not repeated here.

2. Install flannel

Copy the flannel-related files over from the master node:

scp 192.168.0.161:/usr/bin/flanneld /usr/bin/   
scp 192.168.0.161:/usr/bin/mk-docker-opts.sh /usr/bin/    
scp 192.168.0.161:/usr/lib/systemd/system/flannel.service /usr/lib/systemd/system/
scp 192.168.0.161:/etc/kubernetes/flannel.conf  /etc/kubernetes/
scp 192.168.0.161:/usr/lib/systemd/system/docker.service /usr/lib/systemd/system/

Start flannel
systemctl start flannel && systemctl enable flannel && systemctl status flannel && systemctl restart docker
The output matches the master node, and the docker0 and flannel.1 interfaces are created in the assigned subnets here as well. The interface listings of the two worker nodes are shown below for comparison with the master.
node162

[root@node162 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:58:2a:f2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.162/24 brd 192.168.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::f3d5:b3a:a0cb:73bc/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:f8:0f:14:06 brd ff:ff:ff:ff:ff:ff
    inet 10.244.46.1/24 brd 10.244.46.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether ee:c4:3f:b5:73:64 brd ff:ff:ff:ff:ff:ff
    inet 10.244.46.0/32 brd 10.244.46.0 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::ecc4:3fff:feb5:7364/64 scope link 
       valid_lft forever preferred_lft forever

node163

[root@node163 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:eb:e9:ec brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.163/24 brd 192.168.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::66c7:9d17:482a:8b9f/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:af:21:ed:e4 brd ff:ff:ff:ff:ff:ff
    inet 10.244.10.1/24 brd 10.244.10.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether c6:c1:fa:6d:bc:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.244.10.0/32 brd 10.244.10.0 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::c4c1:faff:fe6d:bcb8/64 scope link 
       valid_lft forever preferred_lft forever

3. Install kube-proxy
Copy the kube-proxy-related files over from the master node:

scp 192.168.0.161:/usr/bin/kube-proxy /usr/bin/   
scp 192.168.0.161:/usr/lib/systemd/system/kube-proxy.service /usr/lib/systemd/system/  
scp 192.168.0.161:/etc/kubernetes/proxy /etc/kubernetes
scp 192.168.0.161:/etc/kubernetes/kubeconfig /etc/kubernetes/

Modify the kube-proxy configuration file
node162:
sed -i s/161/162/g /etc/kubernetes/proxy
node163:
sed -i s/161/163/g /etc/kubernetes/proxy

Start kube-proxy
systemctl start kube-proxy && systemctl enable kube-proxy && systemctl status kube-proxy
The output is similar to the master node.

4. Install kubelet

Copy the kubelet-related files over from the master node:

scp 192.168.0.161:/usr/bin/kubelet /usr/bin
scp 192.168.0.161:/usr/lib/systemd/system/kubelet.service /usr/lib/systemd/system/
scp 192.168.0.161:/etc/kubernetes/kubelet /etc/kubernetes/   

Create the data directory
mkdir -p /var/lib/kubelet

Modify the kubelet configuration file
node162:
sed -i s/161/162/g /etc/kubernetes/kubelet
node163:
sed -i s/161/163/g /etc/kubernetes/kubelet

Start kubelet
systemctl start kubelet && systemctl enable kubelet && systemctl status kubelet
The output is similar to the master node.

5. Install CoreDNS
CoreDNS provides cluster DNS, which is what lets pods reach one another through services by name.

Download CoreDNS

Before installing it, two extra startup flags must be added to kubelet (a sketch of the resulting configuration follows this list):

  • --cluster-dns=10.0.0.186   the ClusterIP of the cluster DNS service
  • --cluster-domain=cluster.local   the cluster DNS domain
    Then restart kubelet.
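
For example, the modified kubelet configuration on node162 would look like this (only the two DNS flags are new compared with the file shown earlier), followed by a restart:

KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --hostname-override=192.168.0.162 --cluster-dns=10.0.0.186 --cluster-domain=cluster.local --logtostderr=false --log-dir=/var/log/kubernetes --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 --v=0"

systemctl daemon-reload && systemctl restart kubelet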

Unpack the installation files

tar zvxf kubernetes-server-linux-amd64.tar.gz   && cd kubernetes    
tar zvxf kubernetes-src.tar.gz   
cd cluster/addons/dns/coredns/    

Edit transforms2sed.sed so that it reads:

s/__PILLAR__DNS__SERVER__/10.0.0.186/g
s/__PILLAR__DNS__DOMAIN__/cluster.local/g
s/__PILLAR__CLUSTER_CIDR__/$SERVICE_CLUSTER_IP_RANGE/g
s/__PILLAR__DNS__MEMORY__LIMIT__/200Mi/g
s/__MACHINE_GENERATED_WARNING__/Warning: This is a file generated from the base underscore template file: __SOURCE_FILENAME__/g

sed -f transforms2sed.sed coredns.yaml.base > coredns.yaml
kubectl create -f coredns.yaml
kubectl get po -n kube-system

Verification
Create a test instance:
cat example.yaml

apiVersion: v1
kind: Service
metadata:
  name: dnsutils-ds
  labels:
    app: dnsutils-ds
spec:
  type: NodePort
  selector:
    app: dnsutils-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dnsutils-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      app: dnsutils-ds
  template:
    metadata:
      labels:
        app: dnsutils-ds
    spec:
      containers:
      - name: my-dnsutils
        image: tutum/dnsutils:latest
        command:
          - sleep
          - "3600"
        ports:
        - containerPort: 80

kubectl create -f example.yaml
kubectl exec -it dnsutils-ds-2t88f -- bash

root@dnsutils-ds-2t88f:/# nslookup kubernetes
Server:         10.0.0.186
Address:        10.0.0.186#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.0.0.1

root@dnsutils-ds-2t88f:/# nslookup www.baidu.com
Server:         10.0.0.186
Address:        10.0.0.186#53

Non-authoritative answer:
www.baidu.com   canonical name = www.a.shifen.com.
Name:   www.a.shifen.com
Address: 14.215.177.39
Name:   www.a.shifen.com
Address: 14.215.177.38

root@dnsutils-ds-2t88f:/# nslookup nginx-svc    
Server:         10.0.0.186
Address:        10.0.0.186#53

Name:   nginx-svc.default.svc.cluster.local
Address: 10.0.0.205

IV. Cluster Verification

kubectl get nodes
This returns:

NAME            STATUS   ROLES    AGE     VERSION
192.168.0.161   Ready    <none>   6m10s   v1.19.0
192.168.0.162   Ready    <none>   12s     v1.19.0
192.168.0.163   Ready    <none>   24s     v1.19.0

kubectl get cs
This returns:

Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
etcd-0               Healthy   {"health":"true"}   
controller-manager   Healthy   ok                  
scheduler            Healthy   ok     

To make it easier to tell the master apart from the worker nodes, label them:

kubectl label node 192.168.0.161  node-role.kubernetes.io/master='master'   
kubectl label node 192.168.0.162  node-role.kubernetes.io/node='node'
kubectl label node 192.168.0.163  node-role.kubernetes.io/node='node'   

kubectl get nodes
This returns:

192.168.0.161   Ready    master   31m   v1.19.0
192.168.0.162   Ready    node     25m   v1.19.0
192.168.0.163   Ready    node     25m   v1.19.0

V. Issues

1. Swap was only disabled temporarily here; on a production server be sure to disable it permanently.
2. When installing from binaries, do not install kubeadm alongside. kubeadm is sometimes used only as a convenient way to pull the Kubernetes images; if you do that, uninstall kubeadm and the packaged kubelet when you are done, otherwise the binary-installed kubelet will not start properly and may report an error such as:

failed to get container info for "/user.slice/user-0.slice/session-1.scope": unknown container "/use
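
On a CentOS/RHEL host (assumed here, in line with the firewalld and SELinux steps above), removing the packages could look like this:

yum remove -y kubeadm kubelet
systemctl daemon-reload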
