Installing a Highly Available Kubernetes Cluster with Kubeasz

Published: 2023-08-04 13:52:01  Author: kntvops

1. Cluster Planning and Basic Parameters

Docs: https://github.com/easzlab/kubeasz/blob/v3.6/docs/setup/00-planning_and_overall_intro.md

1.1 Host Planning

This installation uses Kubeasz. The project aims to provide a tool for quickly deploying highly available k8s clusters, while also serving as a practical reference for using k8s. It deploys components from binaries and automates the process with ansible-playbook; it offers both a one-click install script and step-by-step installation of each component following the setup guide.

The machines are planned as follows:

Hostname          IP address        Role
k8s-master-01     192.168.122.111   master node, keepalived, l4lb
k8s-master-02     192.168.122.112   master node, keepalived, l4lb
k8s-worker-01     192.168.122.211   etcd node, worker node
k8s-worker-02     192.168.122.212   etcd node, worker node
k8s-worker-03     192.168.122.213   etcd node, worker node
kubeasz-ansible   192.168.122.100   deployment node
(VIP)             192.168.122.200   highly available apiserver IP
  • Deployment node: runs the ansible/ezctl commands; usually reuses the first master node

  • etcd nodes: the etcd cluster needs an odd number of members (1, 3, 5, ...); they usually reuse the master nodes, but here, for testing, the worker nodes are reused

  • master nodes: a highly available cluster needs at least 2 master nodes

  • worker nodes: run the application workloads; raise machine specs or add nodes as needed

  • keepalived+l4lb: exposes a highly available apiserver to clients outside the cluster; usually runs on dedicated nodes, but in this test the master nodes are reused

  • Note 1: make sure all nodes share the same time zone and have synchronized clocks. If your environment provides no NTP time source, installing chrony as part of the deployment is recommended

  • Note 2: start from a clean system; do not reuse an environment where kubeadm or another k8s distribution was ever installed

  • Note 3: upgrading the OS to a recent stable kernel is recommended; see the kernel upgrade docs

Note: by default the container runtime and kubelet store their data under /var. If your disks are partitioned differently, set the container runtime and kubelet data directories in config.yml: CONTAINERD_STORAGE_DIR, DOCKER_STORAGE_DIR, KUBELET_ROOT_DIR
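For example, if /var is small, these keys can point the data directories at a larger partition. A hedged sketch of the relevant config.yml entries (the /data paths below are illustrative, not kubeasz defaults):

```yaml
# /etc/kubeasz/clusters/k8s-lab01/config.yml (excerpt)
# example paths on a dedicated data partition; adjust to your disk layout
CONTAINERD_STORAGE_DIR: "/data/containerd"
DOCKER_STORAGE_DIR: "/data/docker"
KUBELET_ROOT_DIR: "/data/kubelet"
```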

1.2 Deployment Node Setup

1.2.1 Install Docker from the Tsinghua mirror

Docs: https://mirrors.tuna.tsinghua.edu.cn/help/docker-ce/

export DOWNLOAD_URL="https://mirrors.tuna.tsinghua.edu.cn/docker-ce"
curl -fsSL https://get.docker.com/ | sudo -E sh

docker -v
Docker version 24.0.5, build ced0996

1.2.2 Pull the kubeasz image

Kubeasz runs ansible inside a Docker container, so no extra dependencies need to be installed on the deployment node; later, ./ezdown -S simply starts that container.

1.2.3 Set up passwordless SSH login

Configure passwordless SSH from the deployment node to all nodes:

ssh-keygen -t rsa
for i in 111 112 211 212 213;do ssh-copy-id 192.168.122.$i;done

for i in 111 112 211 212 213;do ssh 192.168.122.$i hostname ;done
k8s-master-01
k8s-master-02
k8s-worker-01
k8s-worker-02
k8s-worker-03

1.2.4 Download the project source, binaries, and offline images

# Download the helper script ezdown; kubeasz version 3.6.1 is used here

export release=3.6.1
wget https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
chmod +x ./ezdown

./ezdown
Usage: ezdown [options] [args]
  option:
    -C         stop&clean all local containers
    -D         download default binaries/images into "/etc/kubeasz"
    -P <OS>    download system packages of the OS (ubuntu_22,debian_11,...)
    -R         download Registry(harbor) offline installer
    -S         start kubeasz in a container
    -X <opt>   download extra images
    -d <ver>   set docker-ce version, default "20.10.24"
    -e <ver>   set kubeasz-ext-bin version, default "1.7.1"
    -k <ver>   set kubeasz-k8s-bin version, default "v1.27.2"
    -m <str>   set docker registry mirrors, default "CN"(used in Mainland,China)
    -z <ver>   set kubeasz version, default "3.6.1"
# Download the kubeasz source, binaries, and default container images; everything
# (source, binaries, offline images) is placed under /etc/kubeasz
./ezdown -D
...
3.9: digest: sha256:3ec9d4ec5512356b5e77b13fddac2e9016e7aba17dd295ae23c94b2b901813de size: 527
2023-08-04 11:12:48 INFO Action successed: download_all

Image pulls may fail. Make sure every download finishes with "Action successed"; if a pull fails, fetch the image some other way and import it onto the deployment host.

2. Orchestrate the k8s Installation on the Deployment Node

2.1 Start the container

./ezdown -S

2.2 Create and plan a cluster configuration instance

docker exec -it kubeasz ezctl new k8s-lab01
2023-08-04 11:13:50 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s-lab01
2023-08-04 11:13:50 DEBUG set versions
2023-08-04 11:13:50 DEBUG disable registry mirrors
2023-08-04 11:13:50 DEBUG cluster k8s-lab01: files successfully created.
2023-08-04 11:13:50 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s-lab01/hosts'
2023-08-04 11:13:50 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s-lab01/config.yml'
  • Edit the hosts file according to the plan: /etc/kubeasz/clusters/k8s-lab01/hosts
[etcd]
192.168.122.211
192.168.122.212
192.168.122.213
[kube_master]
192.168.122.111 k8s_nodename='k8s-master-01'
192.168.122.112 k8s_nodename='k8s-master-02'
[kube_node]
192.168.122.211 k8s_nodename='k8s-worker-01'
192.168.122.212 k8s_nodename='k8s-worker-02'
192.168.122.213 k8s_nodename='k8s-worker-03'
[harbor]
[ex_lb]
192.168.122.111 LB_ROLE=backup EX_APISERVER_VIP=192.168.122.200 EX_APISERVER_PORT=8443
192.168.122.112 LB_ROLE=master EX_APISERVER_VIP=192.168.122.200 EX_APISERVER_PORT=8443
[chrony]
[all:vars]
SECURE_PORT="6443"
CONTAINER_RUNTIME="containerd"
CLUSTER_NETWORK="calico"
PROXY_MODE="ipvs"
SERVICE_CIDR="10.68.0.0/16"
CLUSTER_CIDR="172.20.0.0/16"
NODE_PORT_RANGE="30000-42767"
CLUSTER_DNS_DOMAIN="cluster.local"
bin_dir="/opt/kube/bin"
base_dir="/etc/kubeasz"
cluster_dir="{{ base_dir }}/clusters/k8s-lab01"
ca_dir="/etc/kubernetes/ssl"
k8s_nodename=''
ansible_python_interpreter=/usr/bin/python3
  • Other cluster-level options are in /etc/kubeasz/clusters/k8s-lab01/config.yml; the defaults are kept here

3. Install the Cluster Step by Step

3.1 Create certificates and prepare the environment

Docs: https://github.com/easzlab/kubeasz/blob/master/docs/setup/01-CA_and_prerequisite.md

docker exec -it kubeasz ezctl setup k8s-lab01 01

...
PLAY RECAP ***********************************************************************************************************************************************
192.168.122.111            : ok=23   changed=19   unreachable=0    failed=0    skipped=115  rescued=0    ignored=0   
192.168.122.112            : ok=23   changed=19   unreachable=0    failed=0    skipped=111  rescued=0    ignored=0   
192.168.122.211            : ok=23   changed=18   unreachable=0    failed=0    skipped=111  rescued=0    ignored=0   
192.168.122.212            : ok=23   changed=19   unreachable=0    failed=0    skipped=111  rescued=0    ignored=0   
192.168.122.213            : ok=23   changed=19   unreachable=0    failed=0    skipped=111  rescued=0    ignored=0   
localhost                  : ok=33   changed=30   unreachable=0    failed=0    skipped=11   rescued=0    ignored=0   

3.2 Install the etcd cluster

Docs: https://github.com/easzlab/kubeasz/blob/master/docs/setup/02-install_etcd.md

docker exec -it kubeasz ezctl setup k8s-lab01 02
# Verify the cluster
export NODE_IPS="192.168.122.211 192.168.122.212 192.168.122.213"

for ip in ${NODE_IPS}; do
  ETCDCTL_API=3 etcdctl \
  --endpoints=https://${ip}:2379  \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd-key.pem \
  endpoint health; done
 
Output:
https://192.168.122.211:2379 is healthy: successfully committed proposal: took = 18.348037ms
https://192.168.122.212:2379 is healthy: successfully committed proposal: took = 17.281408ms
https://192.168.122.213:2379 is healthy: successfully committed proposal: took = 14.890857ms
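The per-endpoint output above can also be checked mechanically, e.g. from a post-install script. A minimal sketch (pure text processing over the etcdctl output; the helper name is made up for illustration):

```shell
# Count "is healthy" lines in `etcdctl endpoint health` output read from
# stdin, and fail unless all expected members report healthy.
check_etcd_health() {
  expected=$1
  healthy=$(grep -c "is healthy")
  echo "healthy endpoints: ${healthy}/${expected}"
  [ "${healthy}" -eq "${expected}" ]
}

# usage: pipe the verification loop from above into the helper, e.g.
# for ip in ${NODE_IPS}; do etcdctl ... endpoint health; done | check_etcd_health 3
```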

3.3 Install the container runtime

Docs: https://github.com/easzlab/kubeasz/blob/master/docs/setup/03-container_runtime.md

docker exec -it kubeasz ezctl setup k8s-lab01 03

3.4 Install the kube_master nodes

Docs: https://github.com/easzlab/kubeasz/blob/master/docs/setup/04-install_kube_master.md

docker exec -it kubeasz ezctl setup k8s-lab01 04
# Verify
systemctl status kube-apiserver
systemctl status kube-controller-manager
systemctl status kube-scheduler

# Copy the kubectl binary to the master host
scp 192.168.122.111:/opt/kube/bin/kubectl /usr/bin/

kubectl get componentstatus
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   
etcd-2               Healthy   {"health":"true","reason":""}   
etcd-1               Healthy   {"health":"true","reason":""}   

You may need to copy the KUBECONFIG file to the master nodes yourself; by default it is the deployment host's .kube/config.

3.5 Install the kube_node nodes

docker exec -it kubeasz ezctl setup k8s-lab01 05
# Verify
systemctl status kubelet	# check status
systemctl status kube-proxy

kubectl get node
NAME            STATUS                     ROLES    AGE     VERSION
k8s-master-01   Ready,SchedulingDisabled   master   12m     v1.27.2
k8s-master-02   Ready,SchedulingDisabled   master   12m     v1.27.2
k8s-worker-01   Ready                      node     3m41s   v1.27.2
k8s-worker-02   Ready                      node     3m41s   v1.27.2
k8s-worker-03   Ready                      node     3m41s   v1.27.2
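The node listing can likewise be verified from a script. A minimal sketch (assumes `kubectl get node --no-headers` output on stdin; treats the masters' `Ready,SchedulingDisabled` status as ready; the helper name is made up for illustration):

```shell
# Fail if any node's STATUS column does not start with "Ready".
all_nodes_ready() {
  not_ready=$(awk '$2 !~ /^Ready/ {print $1}')
  if [ -n "${not_ready}" ]; then
    echo "not ready: ${not_ready}"
    return 1
  fi
  echo "all nodes ready"
}

# usage:
# kubectl get node --no-headers | all_nodes_ready
```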

3.6 Install the network plugin

docker exec -it kubeasz ezctl setup k8s-lab01 06
kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-67c67b9b5f-6zdjq   1/1     Running   0          100s
kube-system   calico-node-6gjdf                          1/1     Running   0          100s
kube-system   calico-node-8wbmf                          1/1     Running   0          100s
kube-system   calico-node-n5xxx                          1/1     Running   0          100s
kube-system   calico-node-vsznt                          1/1     Running   0          100s
kube-system   calico-node-wscv8                          1/1     Running   0          100s

3.7 Install the main cluster add-ons

Docs: https://github.com/easzlab/kubeasz/blob/master/docs/setup/07-install_cluster_addon.md

A selection of common, essential add-ons is currently integrated into the install script:

docker exec -it kubeasz ezctl setup k8s-lab01 07

3.8 Provide a highly available apiserver outside the cluster

Docs: https://github.com/easzlab/kubeasz/blob/master/docs/setup/ex-lb.md

kubeasz 3.0.2 rewrote the ex-lb installation to use binaries compiled with minimal dependencies, independent of the Linux distribution. This unifies versions, simplifies offline deployment, and should in theory support more Linux distributions.

The ex_lb service consists of keepalived and l4lb:

  • l4lb: a trimmed-down nginx binary compiled with only layer-4 forwarding support
  • keepalived: uses VRRP between a master and a backup node plus a virtual IP to remove the l4lb single point of failure; VRRP-based hot standby is what provides the high availability here

keepalived and l4lb work together to make the master apiservers highly available, as follows:

  • 1. keepalived manages a virtual IP (VIP) via VRRP. Normally the VIP lives on the keepalived master node; when that node fails, the VIP floats to the backup node, keeping the address available.
  • 2. Both keepalived nodes carry an identical l4lb configuration listening on the VIP, so a working load balancer is always serving client requests. keepalived also health-checks the l4lb process: if l4lb dies on the master node, the VIP fails over so the backup node's l4lb takes over.
  • 3. The l4lb configuration lists all real kube-apiserver endpoints as backends and health-checks them; a failed kube-apiserver is removed from the pool.
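The backend pool described above can be sketched as a plain nginx stream configuration, since l4lb is a layer-4-only nginx build. The sketch below is illustrative, not the file kubeasz actually renders; it reuses this lab's [kube_master] hosts, SECURE_PORT, and the [ex_lb] VIP and port:

```nginx
stream {
    upstream kube_apiserver {
        # real kube-apiserver backends: the [kube_master] hosts on SECURE_PORT
        server 192.168.122.111:6443 max_fails=2 fail_timeout=3s;
        server 192.168.122.112:6443 max_fails=2 fail_timeout=3s;
    }
    server {
        # EX_APISERVER_VIP:EX_APISERVER_PORT from the [ex_lb] group
        listen 192.168.122.200:8443;
        proxy_pass kube_apiserver;
        proxy_connect_timeout 1s;
    }
}
```

With `max_fails`/`fail_timeout`, nginx passively marks a backend as down after repeated connection failures, which is the "removed from the pool" behavior in step 3.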
docker exec -it kubeasz ezctl setup k8s-lab01 10
# 验证
systemctl status l4lb 	# check process status
systemctl status keepalived 	# check process status
# keepalived master/backup failover drill
# 1. Stop the l4lb process on the keepalived master node, then check on the backup node whether the VIP has floated over, and re-run the verification items above.
root@k8s-master-02:~# ip a|grep 200
    inet 192.168.122.200/32 scope global eth0
root@k8s-master-02:~# systemctl stop l4lb
root@k8s-master-02:~# ip a|grep 200
root@k8s-master-01:~# ip ad |grep 200
    inet 192.168.122.200/32 scope global eth0

# 2. Shut down the keepalived master node's OS entirely and re-check each verification item
(omitted)