《Kubernetes权威指南》 Study Notes, Part 2: Installing Kubernetes from Binaries

1. Preparation

Download the Kubernetes v1.19.0 server binaries
Download etcd (v3.3.25)

Nodes

hostname   ip              role
node158    192.168.0.158   master
node159    192.168.0.159   node
node160    192.168.0.160   node

Disable the swap partition on all three nodes:
swapoff -a
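
Note that swapoff -a only lasts until the next reboot. For a permanent change (required on production machines, as the Issues section at the end points out), the swap entry in /etc/fstab must be disabled as well; a minimal sketch, assuming the fstab fields are space-separated:

# comment out every swap entry so it is not re-enabled on reboot
sed -i '/ swap / s/^/#/' /etc/fstab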

2. Installing services on the master

Perform the following on node158.
Installing etcd
etcd is the primary datastore of a Kubernetes cluster; it must be installed and started before any other cluster service.

tar zvxf etcd-v3.3.25-linux-amd64.tar.gz   
cd etcd-v3.3.25-linux-amd64
cp -ar etcd /usr/bin
cp -ar etcdctl /usr/bin
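
A quick sanity check that the binary is on the PATH and reports the expected version:

etcd --version
# should print a line like: etcd Version: 3.3.25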

cat /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=simple
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target

Before starting, create the data directory and the configuration directory:

mkdir -p /var/lib/etcd  
mkdir -p /etc/etcd
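
This article leaves /etc/etcd/etcd.conf unpopulated and relies on etcd's defaults (the leading "-" in EnvironmentFile=-/etc/etcd/etcd.conf tells systemd the file is optional). If you prefer to pin the settings explicitly, a minimal sketch of the environment file, using etcd's standard configuration variables with values matching this setup's defaults:

ETCD_NAME=default
ETCD_DATA_DIR=/var/lib/etcd
ETCD_LISTEN_CLIENT_URLS=http://127.0.0.1:2379
ETCD_ADVERTISE_CLIENT_URLS=http://127.0.0.1:2379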

systemctl daemon-reload
systemctl start etcd

Verify that etcd is working:
etcdctl cluster-health
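
With etcd v3.3, etcdctl still defaults to the v2 API, so cluster-health is available; healthy output looks something like:

member 8e9e05c52164694d is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy

If your shell has ETCDCTL_API=3 set, use the v3 equivalent instead:

ETCDCTL_API=3 etcdctl endpoint health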

Installing kube-apiserver

tar zvxf kubernetes-server-linux-amd64.tar.gz  
cd kubernetes/server/bin   
cp kube-apiserver /usr/bin   

cat /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Create the apiserver log directory and configuration file:
mkdir -p /var/log/kubernetes
cat /etc/kubernetes/apiserver

KUBE_API_ARGS="--etcd-servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=169.169.0.0/16 --service-node-port-range=1-65535 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --logtostderr=false --log-dir=/var/log/kubernetes --v=0"
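
Once kube-apiserver has been started (see the systemctl commands at the end of this section), the insecure port can be smoke-tested; the health endpoint should answer with a plain ok:

curl http://127.0.0.1:8080/healthz
# ok

Keep in mind that --insecure-bind-address=0.0.0.0 exposes an unauthenticated HTTP endpoint on all interfaces; this is only acceptable in a lab environment like this one.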

Installing kube-controller-manager

tar zvxf kubernetes-server-linux-amd64.tar.gz  
cd kubernetes/server/bin   
cp kube-controller-manager /usr/bin   

cat /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Create the controller-manager configuration file:
cat /etc/kubernetes/controller-manager

KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=0"

Create the kubeconfig file used to connect to the API server:
cat /etc/kubernetes/kubeconfig

apiVersion: v1
kind: Config
users:
- name: client
  user: 
clusters:
- name: default
  cluster:
    server: http://192.168.0.158:8080
contexts:
- context:
    cluster: default
    user: client
  name: default
current-context: default
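
The user entry can stay empty because the insecure port performs no authentication. Once the control-plane services are running, the file can be tested with kubectl (assuming the kubectl binary from kubernetes/server/bin has been copied to /usr/bin, as the notes at the end recommend):

kubectl --kubeconfig=/etc/kubernetes/kubeconfig get namespaces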

Installing kube-scheduler

tar zvxf kubernetes-server-linux-amd64.tar.gz  
cd kubernetes/server/bin   
cp kube-scheduler /usr/bin

cat /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler 
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Create the scheduler configuration file:
cat /etc/kubernetes/scheduler

KUBE_SCHEDULER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=0"

Start the three services in order: kube-apiserver, then kube-controller-manager, then kube-scheduler.

systemctl enable kube-apiserver   
systemctl enable kube-controller-manager   
systemctl enable kube-scheduler   
systemctl start kube-apiserver  
systemctl start kube-controller-manager   
systemctl start kube-scheduler   
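
To confirm that all three services came up, query the component status from the master (with no kubeconfig given, kubectl defaults to http://localhost:8080, which matches the insecure port configured above; the command is deprecated in v1.19 but still works). The output should look roughly like:

kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}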

This completes the services on the master. A few points to note:

  • /etc/etcd/etcd.conf was created during the etcd installation but left unconfigured; by default etcd serves clients at http://127.0.0.1:2379 (see the optional sketch in the etcd step above).
  • When creating kube-apiserver and the other two services, check the values specified by the parameters in their configuration files and create any required files or directories in advance.
  • Start the services in the order etcd, kube-apiserver, kube-controller-manager, kube-scheduler.

3. Installing services on the nodes

The master can also serve as a node: in the earlier kubeadm-based installation, kubelet was installed on the master too. If you do not plan to schedule pods onto the master, kubelet can be omitted there; this binary installation omits it and installs docker, kubelet, and kube-proxy only on the nodes.
There is little to say about installing Docker; since this article installs everything from binaries, Docker is installed from binaries as well.
Perform the following on node159 and node160.
Installing kubelet
Copy the kubelet binary from node158 to the nodes:

scp kubelet node159:/usr/bin  
scp kubelet node160:/usr/bin

cat /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Create the data directory and the configuration directories:
mkdir -p /var/lib/kubelet
mkdir -p /etc/kubernetes
mkdir -p /var/log/kubernetes

cat /etc/kubernetes/kubelet

KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --hostname-override=192.168.0.159 --logtostderr=false --log-dir=/var/log/kubernetes --v=0"

The --hostname-override parameter must be set to a different value on each node, as shown below.
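
For example, on node160 the file differs only in the override value:

KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --hostname-override=192.168.0.160 --logtostderr=false --log-dir=/var/log/kubernetes --v=0"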
Copy the kubeconfig from node158 to the nodes:

scp /etc/kubernetes/kubeconfig node159:/etc/kubernetes  
scp /etc/kubernetes/kubeconfig node160:/etc/kubernetes  

systemctl enable kubelet
systemctl start kubelet

Installing kube-proxy
Copy the kube-proxy binary from node158 to the nodes:

scp kube-proxy node159:/usr/bin  
scp kube-proxy node160:/usr/bin 

cat /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
Requires=network.service

[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Create the kube-proxy configuration file:
cat /etc/kubernetes/proxy

KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=2"

systemctl enable kube-proxy
systemctl start kube-proxy
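
No --proxy-mode is specified, so kube-proxy falls back to its default (iptables on most Linux systems). Once it is running, the KUBE-SERVICES chain it maintains in the nat table should exist; a quick check:

iptables -t nat -L KUBE-SERVICES -n | head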

By default each node registers itself with the master through kubelet, so the cluster state can be checked from the master:
kubectl get nodes
which shows:

NAME            STATUS   ROLES    AGE     VERSION
192.168.0.159   Ready    <none>   4h45m   v1.19.0
192.168.0.160   Ready    <none>   4h45m   v1.19.0

From the output above, the cluster looks healthy.
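
The ROLES column shows <none> because a binary installation does not label the nodes. The label is purely cosmetic, but if you want the column populated it can be added by hand; a sketch:

kubectl label node 192.168.0.159 node-role.kubernetes.io/worker=
kubectl label node 192.168.0.160 node-role.kubernetes.io/worker=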

A few points to note:

  • Disable the swap partition on all nodes.
  • The master can also act as a worker node. In this article it does not, so pods will not be scheduled onto the master; to schedule pods there, install docker, kubelet, and kube-proxy on the master as well.
  • Copy the kubectl binary into /usr/bin on the master to make cluster operations convenient from the command line.

Issues

1. This article only disables swap temporarily; on a production server it must be disabled permanently (see the /etc/fstab sketch in section 1).
2. When installing from binaries, do not leave kubeadm installed. kubeadm is sometimes used only as a convenient way to pull the Kubernetes images; if you do that, uninstall kubeadm and the kubelet package it pulls in as soon as you are done, otherwise the binary-installed kubelet will not start properly
and may throw an error like:

failed to get container info for "/user.slice/user-0.slice/session-1.scope": unknown container "/use
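
A sketch of the cleanup for point 2, assuming kubeadm was used beforehand only to pull images and the packages came from the usual Kubernetes yum repository (adjust the package manager for your distribution):

# stop and remove the packaged kubelet so it cannot shadow the binary-installed one
systemctl stop kubelet
systemctl disable kubelet
yum remove -y kubeadm kubelet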