k8s v1.20 binary high-availability cluster installation and deployment
Link: https://pan.baidu.com/s/13TpUek8KMyjdPA2v4Jytqw
Extraction code: 9ujh
Overall server plan:
k8s-master01 10.0.1.201 kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy,docker,etcd,nginx,keepalived
k8s-master02 10.0.1.202 kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy,docker,nginx,keepalived
k8s-master03 10.0.1.205 kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy,docker,nginx,keepalived
k8s-node01 10.0.1.203 kubelet,kube-proxy,docker,etcd
k8s-node02 10.0.1.204 kubelet,kube-proxy,docker,etcd
k8s-master-lb 10.0.1.200 (VIP) Nginx+Keepalived (no extra hardware; runs on the 3 master nodes)
Pod CIDR: 10.96.0.0/16 # the --cluster-cidr field of kube-controller-manager
Service CIDR: 10.244.0.0/16 # defined later when setting up kube-apiserver and kube-controller-manager
kubernetes ClusterIP: 10.244.0.1 # the first IP of the service range, assigned automatically
kube-dns ClusterIP: 10.244.0.2 # when deploying CoreDNS, set the clusterIP field in coredns.yaml to 10.244.0.2
Single-master server plan:
k8s-master01 10.0.1.201 kube-apiserver,kube-controller-manager,kube-scheduler,etcd
k8s-node01 10.0.1.203 kubelet,kube-proxy,docker,etcd
k8s-node02 10.0.1.204 kubelet,kube-proxy,docker,etcd
Set up a single-master cluster first, then manually scale out and configure high availability later.
Configure passwordless SSH (run on master01):
Before running this, finish the hostname changes and /etc/hosts entries from section 1 below.
ssh-keygen -t rsa # press Enter at every prompt
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i ~/.ssh/id_rsa.pub $i;done
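Optionally verify that passwordless SSH works from master01 (a quick check; the hostnames assume the /etc/hosts entries below are already in place):
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh $i hostname;done # each node should print its hostname without asking for a password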
1. Initialize configuration on all cluster nodes
Set hostnames:
hostnamectl set-hostname k8s-master01 # run on master01
hostnamectl set-hostname k8s-master02 # run on master02
hostnamectl set-hostname k8s-master03 # run on master03
hostnamectl set-hostname k8s-node01 # run on node01
hostnamectl set-hostname k8s-node02 # run on node02
Add hosts entries:
cat >> /etc/hosts << EOF
10.0.1.201 k8s-master01
10.0.1.202 k8s-master02
10.0.1.205 k8s-master03
10.0.1.203 k8s-node01
10.0.1.204 k8s-node02
10.0.1.200 k8s-master-lb
EOF
Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
Disable SELinux
Permanently:
sed -i 's/enforcing/disabled/' /etc/selinux/config
Temporarily:
setenforce 0
Disable swap
Permanently:
sed -ri 's/.*swap.*/#&/' /etc/fstab
Temporarily:
swapoff -a
Synchronize time
Install the crontabs service:
yum install crontabs
systemctl status crond # check status
systemctl enable crond # enable at boot
systemctl start crond # start the crond service
systemctl stop crond # stop the crond service
systemctl restart crond # restart the crond service
Install ntpdate:
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm && \
yum install ntpdate -y
Time synchronization setup:
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && \
echo 'Asia/Shanghai' > /etc/timezone && \
ntpdate time2.aliyun.com
Configure the cron schedule:
vim /etc/crontab # edit
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# For details see man 4 crontabs
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
A user crontab entry has 6 fields: minute, hour, day of month, month, day of week, command; the system crontab /etc/crontab adds a user field before the command, as in the entry below.
Field 1: minute (0-59); every minute is written * or */1
Field 2: hour (0-23)
Field 3: day of month (1-31)
Field 4: month (1-12)
Field 5: day of week (0-6, 0 = Sunday)
Field 6: the command to run
*: any value, meaning "every"; it can stand for every hour (0-23), every month (1-12) or every minute (0-59)
-: a range, e.g. 00 17-19 * * * cmd runs the command on the hour at 17:00, 18:00 and 19:00 every day
,: separates discrete values, e.g. 30 3,19,21 * * * cmd runs at 03:30, 19:30 and 21:30 every day
/n: step values, e.g. */5 * * * * cmd runs every 5 minutes
*/3 * * * * root ntpdate time2.aliyun.com
Apply the schedule:
crontab /etc/crontab
List the jobs:
crontab -l
Check the log:
tail -f /var/log/cron
Configure China mirror yum repositories
# replace the yum repos
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# refresh the cache
yum makecache
# update packages but exclude the kernel
yum update -y --exclude=kernel*
Upgrade the kernel
Docker and Kubernetes rely on newer kernel features such as ipvs, so in general a 4.x or newer kernel is required.
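You can check the currently running kernel first to decide whether an upgrade is needed:
uname -r # running kernel version; anything below 4.x needs the upgrade described below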
# the kernel requirement is 4.18+; CentOS 8 does not need a kernel upgrade
# 1. import the elrepo GPG key
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# install the elrepo yum repository
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
# with the repo enabled, list the available kernel packages
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
# the long-term (lt) kernel is 5.4 and the mainline (ml) kernel is 5.14; install the latest long-term kernel with the command below (running it again later upgrades to the newest lt kernel)
yum -y --enablerepo=elrepo-kernel install kernel-lt.x86_64 kernel-lt-devel.x86_64
# set the new kernel as the default boot entry
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
# check the default kernel
grubby --default-kernel
# if downloads are slow, the packages are already in the network drive (kernel version 4.19.5) and can be installed directly:
mkdir -p /root/k8s/deploy/kernel && cd /root/k8s/deploy/kernel
rpm -ivh kernel-ml-4.19.5-1.el7.elrepo.x86_64.rpm
rpm -ivh kernel-ml-devel-4.19.5-1.el7.elrepo.x86_64.rpm
rpm -ivh kernel-ml-headers-4.19.5-1.el7.elrepo.x86_64.rpm
Install ipvs
ipvs is a kernel module with very high forwarding performance, so it is usually the preferred proxy mode.
# install the IPVS userspace tools
yum install -y conntrack-tools ipvsadm ipset conntrack libseccomp
# load the IPVS kernel modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
/sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
if [ \$? -eq 0 ]; then
/sbin/modprobe \${kernel_module}
fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
Tune kernel parameters
The kernel parameter tuning below makes the host better suited to running Kubernetes.
# apply the kernel tuning
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.netfilter.nf_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
# 2. load the br_netfilter module and apply the new kernel parameters
modprobe br_netfilter
sysctl --system
# 3. reboot into the new kernel
reboot
Install base packages
yum install wget expect vim net-tools ntp bash-completion ipvsadm ipset jq iptables conntrack sysstat libseccomp -y
2. Deploy the etcd cluster
etcd is a distributed key-value store, and Kubernetes uses it to store all cluster data, so an etcd database must be prepared first. To avoid a single point of failure, deploy it as a cluster: 3 members tolerate the loss of 1 machine, and 5 members tolerate the loss of 2.
The number of etcd members should be odd.
etcd-1 | 10.0.1.201 |
---|---|
etcd-2 | 10.0.1.203 |
etcd-3 | 10.0.1.204 |
Note: to save machines, etcd is co-located with the Kubernetes nodes here. It can also be deployed outside the cluster, as long as the apiserver can reach it.
2.1. Prepare the cfssl certificate tool
cfssl is an open-source certificate management tool that generates certificates from JSON files and is easier to use than openssl. Run this on any one server; master01 is used here.
The binaries are already in the network drive and can be used directly.
# download the packages
mkdir -p /root/k8s/deploy/tls/cfssl && cd /root/k8s/deploy/tls/cfssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
cp cfssl_linux-amd64 /usr/local/bin/cfssl
cp cfssljson_linux-amd64 /usr/local/bin/cfssljson
cp cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
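Optionally confirm the tools are installed and on the PATH:
cfssl version # prints the cfssl version, revision and runtime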
2.2. Generate etcd certificates
2.2.1. Self-signed certificate authority (CA)
Create the certificate directories:
mkdir -p /root/k8s/deploy/tls/{etcd,k8s} && cd /root/k8s/deploy/tls/etcd
Self-sign the CA:
cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json << EOF
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
Field | Meaning |
---|---|
C | Country |
ST | State or province |
L | City |
O | Organization |
OU | Organizational unit |
Generate the CA certificate (produces ca.pem and ca-key.pem):
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
Parameter | Meaning |
---|---|
gencert | generate a new key and a signed certificate |
-initca | initialize a new CA |
2.2.2. Issue the etcd HTTPS certificate with the self-signed CA
Still in the etcd directory: /root/k8s/deploy/tls/etcd
cat > server-csr.json << EOF
{
"CN": "etcd",
"hosts": [
"10.0.1.201",
"10.0.1.202",
"10.0.1.203",
"10.0.1.204",
"10.0.1.205"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
EOF
Note: the hosts field above must contain the internal communication IP of every etcd node; not one may be missing. To make later scaling easier you can add spare IPs; the other two master nodes are included here for future expansion.
Generate the certificate (produces server.pem and server-key.pem):
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2.3. Deploy the etcd cluster (on master01)
1) Download the etcd binaries
Already downloaded in the network drive.
mkdir -p /root/k8s/deploy/package && cd /root/k8s/deploy/package
wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
2) Create the working directories and unpack the binaries
mkdir -p /opt/etcd/{bin,cfg,ssl}
tar -zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
3) Create the etcd configuration file
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.1.201:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.1.201:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.1.201:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.1.201:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://10.0.1.201:2380,etcd-2=https://10.0.1.203:2380,etcd-3=https://10.0.1.204:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
Configuration fields:
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer advertise address
ETCD_ADVERTISE_CLIENT_URLS: client advertise address
ETCD_INITIAL_CLUSTER: addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining; new for a new cluster, existing to join an existing one
4) Manage etcd with systemd
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
5) Copy the generated certificates into place
cp /root/k8s/deploy/tls/etcd/ca*pem /root/k8s/deploy/tls/etcd/server*pem /opt/etcd/ssl/
6) Start etcd and enable it at boot
systemctl daemon-reload && systemctl start etcd && systemctl enable etcd && systemctl status etcd
Note: starting only this one etcd member will appear to hang and it will not reach the running state; ignore this for now, it is simply waiting for the other two members, which have not been started yet. Check /var/log/messages for details if needed.
7) Copy all files generated on node 1 above to node 2 and node 3
scp -r /opt/etcd/ k8s-node01:/opt/
scp /usr/lib/systemd/system/etcd.service k8s-node01:/usr/lib/systemd/system/
scp -r /opt/etcd/ k8s-node02:/opt/
scp /usr/lib/systemd/system/etcd.service k8s-node02:/usr/lib/systemd/system/
8) On k8s-node01 and k8s-node02, edit etcd.conf and change the node name and the IPs to those of the local server
vim /opt/etcd/cfg/etcd.conf # on k8s-node01
Fields to change:
ETCD_NAME # etcd-2 on node 2, etcd-3 on node 3
ETCD_LISTEN_PEER_URLS # change to this server's IP
ETCD_LISTEN_CLIENT_URLS # change to this server's IP
ETCD_INITIAL_ADVERTISE_PEER_URLS # change to this server's IP
ETCD_ADVERTISE_CLIENT_URLS # change to this server's IP
Full configuration file:
#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.1.203:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.1.203:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.1.203:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.1.203:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://10.0.1.201:2380,etcd-2=https://10.0.1.203:2380,etcd-3=https://10.0.1.204:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
vim /opt/etcd/cfg/etcd.conf # on k8s-node02
Fields to change:
ETCD_NAME # etcd-2 on node 2, etcd-3 on node 3
ETCD_LISTEN_PEER_URLS # change to this server's IP
ETCD_LISTEN_CLIENT_URLS # change to this server's IP
ETCD_INITIAL_ADVERTISE_PEER_URLS # change to this server's IP
ETCD_ADVERTISE_CLIENT_URLS # change to this server's IP
Full configuration file:
#[Member]
ETCD_NAME="etcd-3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.1.204:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.1.204:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.1.204:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.1.204:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://10.0.1.201:2380,etcd-2=https://10.0.1.203:2380,etcd-3=https://10.0.1.204:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
9) Start etcd and enable it at boot
On node01 and node02:
systemctl daemon-reload && systemctl start etcd && systemctl enable etcd && systemctl status etcd
Then restart etcd-1 on master01:
systemctl daemon-reload && systemctl restart etcd && systemctl status etcd
10) Check cluster health
On master01:
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://10.0.1.201:2379,https://10.0.1.203:2379,https://10.0.1.204:2379" endpoint health --write-out=table
Output like the following means the deployment succeeded:
+-------------------------+--------+-------------+-------+
| ENDPOINT | HEALTH | TOOK | ERROR |
+-------------------------+--------+-------------+-------+
| https://10.0.1.201:2379 | true | 13.15258ms | |
| https://10.0.1.203:2379 | true | 13.318605ms | |
| https://10.0.1.204:2379 | true | 16.182005ms | |
+-------------------------+--------+-------------+-------+
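You can also list the cluster members as a quick sanity check (same certificate flags as the health check above):
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://10.0.1.201:2379" member list --write-out=table # all three members should be listed as started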
3. Install Docker on all cluster nodes
Skip this step if Docker is already installed; use Docker 19.x or later if possible.
If not, install it with one command:
curl -fsSL https://get.docker.com/ | sh
This script takes care of all the preparation automatically and installs Docker on the system.
Because of network restrictions in mainland China, downloads through this script may fail; several domestic cloud providers publish a modified version that uses Chinese Docker package mirrors instead.
Alibaba Cloud install script:
curl -sSL http://acs-public-mirror.oss-cn-hangzhou.aliyuncs.com/docker-engine/internet | sh
systemctl start docker && systemctl enable docker && systemctl status docker
Configure a Docker registry mirror (China):
cat > /etc/docker/daemon.json << EOF
{"registry-mirrors": ["https://u8n2zdxj.mirror.aliyuncs.com"]}
EOF
Reload:
systemctl daemon-reload && systemctl restart docker && systemctl status docker
4. Deploy the master node
Run on master01.
4.1. Deploy kube-apiserver
4.1.1. Self-signed certificate authority (CA)
cd /root/k8s/deploy/tls/k8s
cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json << EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
Generate the CA certificate (produces ca.pem and ca-key.pem):
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
4.1.2. Issue the kube-apiserver HTTPS certificate with the self-signed CA
Create the certificate signing request:
cat > server-csr.json << EOF
{
"CN": "kubernetes",
"hosts": [
"10.244.0.1",
"127.0.0.1",
"10.0.1.201",
"10.0.1.202",
"10.0.1.205",
"10.0.1.200",
"10.0.1.206",
"10.0.1.207",
"10.0.1.208",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
Note: the hosts field above must contain the IPs of every master, the load balancer and the VIP; not one may be missing. Add a few spare IPs to make later scaling easier.
Note: if the hosts field is not empty, it must list the IPs or domain names authorized to use this certificate. Because this certificate is used by the Kubernetes master cluster, include every master IP plus the first IP of the service network (the first IP of the service-cluster-ip-range passed to kube-apiserver, here 10.244.0.1).
"10.244.0.1", # first service IP
"127.0.0.1",
"10.0.1.201", # master01
"10.0.1.202", # master02, reserved
"10.0.1.205", # master03, reserved
"10.0.1.200", # VIP
"10.0.1.206", # reserved
"10.0.1.207", # reserved
"10.0.1.208", # reserved
Generate the certificate (produces server.pem and server-key.pem):
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
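To confirm that every IP from server-csr.json made it into the certificate's SANs, you can optionally inspect it:
cfssl-certinfo -cert server.pem # the "sans" list should contain every master/VIP/service IP added above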
4.1.3. Deploy kube-apiserver
1) Download and unpack the binary package
Already downloaded in the network drive.
Download the matching version:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md
v1.20.15 is used here.
cd /root/k8s/deploy/package
wget https://dl.k8s.io/v1.20.15/kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} && \
tar -zxvf kubernetes-server-linux-amd64.tar.gz && \
cd kubernetes/server/bin && \
cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin && \
cp kubectl /usr/bin/
2) Create the configuration file
Adjust the etcd server IPs, the apiserver IP and the service IP range to your environment.
cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://10.0.1.201:2379,https://10.0.1.203:2379,https://10.0.1.204:2379 \\
--bind-address=10.0.1.201 \\
--secure-port=6443 \\
--advertise-address=10.0.1.201 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.244.0.0/16 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=api \\
--service-account-signing-key-file=/opt/kubernetes/ssl/server-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \\
--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \\
--requestheader-allowed-names=kubernetes \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF
Note: of the two backslashes above, the first escapes the second so that the heredoc (EOF) preserves the line-continuation backslash in the written file.
Parameter notes:
--logtostderr: logging switch (false writes to log files instead of stderr)
--v: log verbosity
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: HTTPS secure port
--advertise-address: address advertised to the rest of the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC and Node authorization
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: port range for NodePort services
--kubelet-client-xxx: client certificate the apiserver uses to reach the kubelet
--tls-xxx-file: apiserver HTTPS certificate
Parameters required as of v1.20: --service-account-issuer, --service-account-signing-key-file
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings
Aggregation layer settings: --requestheader-client-ca-file, --proxy-client-cert-file, --proxy-client-key-file, --requestheader-allowed-names, --requestheader-extra-headers-prefix, --requestheader-group-headers, --requestheader-username-headers, --enable-aggregator-routing
3) Copy the generated certificates
cd /root/k8s/deploy/tls/k8s
cp /root/k8s/deploy/tls/k8s/ca*pem /root/k8s/deploy/tls/k8s/server*pem /opt/kubernetes/ssl/
4) Enable TLS bootstrapping
TLS bootstrapping: once the apiserver enables TLS authentication, the kubelet and kube-proxy on every node must present valid CA-signed certificates to talk to it. Issuing those client certificates by hand is a lot of work when there are many nodes and complicates scaling, so Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet connects as a low-privilege user and requests a certificate, which the apiserver signs dynamically. This approach is strongly recommended for nodes; it is currently used mainly for the kubelet, while kube-proxy still uses a certificate that we issue centrally.
5) Create the token file
Format: token,username,UID,group
Generate a token:
[root@k8s-master01 k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
bde68eb08fabb126259d018b395115b6
cat > /opt/kubernetes/cfg/token.csv << EOF
bde68eb08fabb126259d018b395115b6,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
6) Manage kube-apiserver with systemd
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
7) Start and enable at boot
systemctl daemon-reload && \
systemctl start kube-apiserver && \
systemctl enable kube-apiserver && \
systemctl status kube-apiserver
Test:
curl --insecure https://10.0.1.201:6443/
Any response means the apiserver is running.
4.2. Deploy kube-controller-manager
Run on master01.
1) Create the configuration file
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.96.0.0/16 \\
--service-cluster-ip-range=10.244.0.0/16 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--cluster-signing-duration=87600h0m0s"
EOF
Notes:
--cluster-cidr # Pod IP range (a /16 here)
--service-cluster-ip-range # Service IP range
--kubeconfig: kubeconfig used to connect to the apiserver
--leader-elect: enable leader election when several instances run (HA)
--cluster-signing-cert-file / --cluster-signing-key-file: CA used to sign kubelet certificates automatically; must match the apiserver's CA
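Once kubectl is configured (see 4.3) and more than one instance is running, leader election can be inspected if you are curious; depending on the resource lock used by this version the holder is recorded on a Lease or an Endpoints object in kube-system (optional check, names assume the defaults):
kubectl -n kube-system get lease kube-controller-manager 2>/dev/null || kubectl -n kube-system describe endpoints kube-controller-manager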
2) Generate the kubeconfig file
Generate the kube-controller-manager certificate:
cd /root/k8s/deploy/tls/k8s
Create the certificate signing request:
cat > kube-controller-manager-csr.json << EOF
{
"CN": "system:kube-controller-manager",
"hosts": [
"127.0.0.1",
"10.0.1.201",
"10.0.1.202",
"10.0.1.205",
"10.0.1.200",
"10.0.1.206",
"10.0.1.207",
"10.0.1.208"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Hubei",
"L": "Wuhan",
"O": "system:kube-controller-manager",
"OU": "system"
}
]
}
EOF
Notes:
The hosts list contains the IPs of every node that runs kube-controller-manager; the three master IPs are listed here and the rest are reserved.
CN is system:kube-controller-manager and O is system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.
Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
Generate the kubeconfig (shell commands; paste them all into the terminal):
cd /root/k8s/deploy/tls/k8s
KUBE_CONFIG="/opt/kubernetes/cfg/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://10.0.1.201:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-controller-manager \
--client-certificate=./kube-controller-manager.pem \
--client-key=./kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-controller-manager \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
3) Manage kube-controller-manager with systemd
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
4) Start and enable at boot
systemctl daemon-reload && \
systemctl start kube-controller-manager && \
systemctl enable kube-controller-manager && \
systemctl status kube-controller-manager
4.3. Deploy kube-scheduler and kubectl
1) Create the configuration file
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \\
--bind-address=127.0.0.1"
EOF
Notes:
--kubeconfig: kubeconfig used to connect to the apiserver
--leader-elect: enable leader election when several instances run (HA)
2) Generate the kubeconfig file
Generate the kube-scheduler certificate:
cd /root/k8s/deploy/tls/k8s
Create the certificate signing request:
cat > kube-scheduler-csr.json << EOF
{
"CN": "system:kube-scheduler",
"hosts": [
"127.0.0.1",
"10.0.1.201",
"10.0.1.202",
"10.0.1.205",
"10.0.1.200",
"10.0.1.206",
"10.0.1.207",
"10.0.1.208"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Hubei",
"L": "Wuhan",
"O": "system:kube-scheduler",
"OU": "system"
}
]
}
EOF
Notes:
The hosts list contains the IPs of every node that runs kube-scheduler; the three master IPs are listed here plus a few reserved ones.
CN is system:kube-scheduler and O is system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.
Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
Generate the kubeconfig (shell commands; run them in the terminal):
Remember to set KUBE_APISERVER to the master01 address.
KUBE_CONFIG="/opt/kubernetes/cfg/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://10.0.1.201:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-scheduler \
--client-certificate=./kube-scheduler.pem \
--client-key=./kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-scheduler \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
3) Manage kube-scheduler with systemd
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
4) Start and enable at boot
systemctl daemon-reload && \
systemctl start kube-scheduler && \
systemctl enable kube-scheduler && \
systemctl status kube-scheduler
5) Check cluster status
Generate the certificate kubectl uses to connect to the cluster:
cd /root/k8s/deploy/tls/k8s
Create the certificate signing request:
cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [
"127.0.0.1",
"10.0.1.201",
"10.0.1.202",
"10.0.1.203",
"10.0.1.204",
"10.0.1.205",
"10.0.1.200",
"10.0.1.206",
"10.0.1.207",
"10.0.1.208"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
# the hosts list contains all node IPs, including the worker nodes and reserved IPs
Notes: kube-apiserver later uses RBAC to authorize client requests (kubelet, kube-proxy, pods). It pre-defines a number of RBAC RoleBindings; for example, cluster-admin binds the group system:masters to the role cluster-admin, which grants access to every kube-apiserver API.
O sets the certificate's group to system:masters. When this certificate is presented to kube-apiserver, authentication succeeds because it is CA-signed, and because the group system:masters is pre-authorized it is granted access to all APIs.
Note: this admin certificate is what the administrator kubeconfig is generated from. RBAC is the recommended way to control roles and permissions in Kubernetes; Kubernetes maps the certificate's CN field to the User and the O field to the Group. "O": "system:masters" must be exactly system:masters, otherwise the later kubectl create clusterrolebinding fails.
Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
Create the kubeconfig: the kubeconfig is kubectl's configuration file and contains everything needed to reach the apiserver, such as the apiserver address, the CA certificate and the client certificate.
Generate the kubeconfig file:
mkdir /root/.kube
Set KUBE_APISERVER to the master01 address.
KUBE_CONFIG="/root/.kube/config"
KUBE_APISERVER="https://10.0.1.201:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials cluster-admin \
--client-certificate=./admin.pem \
--client-key=./admin-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=cluster-admin \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
Grant the kubernetes user in the apiserver certificate access to the kubelet API:
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
Test the cluster:
kubectl cluster-info # prints basic cluster information
kubectl get cs # health of the control-plane components
kubectl get all --all-namespaces # all resources in the cluster
Configure kubectl command completion:
yum install -y bash-completion && \
source /usr/share/bash-completion/bash_completion && \
source <(kubectl completion bash) && \
kubectl completion bash > ~/.kube/completion.bash.inc && \
source '/root/.kube/completion.bash.inc' && \
source $HOME/.bash_profile && \
echo "source <(kubectl completion bash)" >> ~/.bashrc && \
source ~/.bashrc
6) Authorize the kubelet-bootstrap user to request certificates
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
Check:
kubectl get clusterrolebinding | grep -i bootstrap
5. Deploy the worker nodes
The following is still done on master01, which also acts as a worker node.
5.1. Create working directories and copy the binaries
From master01, copy the kubernetes-server package to /root/k8s/deploy/package on the new nodes:
ssh k8s-node01 "mkdir -p /root/k8s/deploy/package"
ssh k8s-node02 "mkdir -p /root/k8s/deploy/package"
cd /root/k8s/deploy/package && \
scp kubernetes-server-linux-amd64.tar.gz k8s-node01:/root/k8s/deploy/package && \
scp kubernetes-server-linux-amd64.tar.gz k8s-node02:/root/k8s/deploy/package
Then run the following on node01 and node02:
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} && \
cd /root/k8s/deploy/package && \
tar -xzvf kubernetes-server-linux-amd64.tar.gz && \
cd kubernetes/server/bin && \
cp kubelet kube-proxy /opt/kubernetes/bin
5.2. Deploy kubelet
Run on master01.
1) Create the configuration file
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master01 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF
Parameter notes:
--hostname-override: node name shown in the cluster; must be unique
--network-plugin: enable CNI
--kubeconfig: path that is generated automatically after bootstrap and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameter file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image of the pause container that manages the pod network
2) Configuration parameter file
Remember to set the clusterDNS IP; it is the second IP of the service range.
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.244.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 2m0s
enabled: true
x509:
clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 5m0s
cacheUnauthorizedTTL: 30s
evictionHard:
imagefs.available: 15%
memory.available: 100Mi
nodefs.available: 10%
nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
3) Generate the bootstrap kubeconfig the kubelet uses to join the cluster for the first time
KUBE_APISERVER is the master01 address.
TOKEN must match the token written earlier to /opt/kubernetes/cfg/token.csv; the two must be identical.
KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://10.0.1.201:6443"
TOKEN="bde68eb08fabb126259d018b395115b6"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials "kubelet-bootstrap" \
--token=${TOKEN} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user="kubelet-bootstrap" \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
4) Manage kubelet with systemd
First copy the kubelet binary into /opt/kubernetes/bin/:
cd /root/k8s/deploy/package && \
cp kubernetes/server/bin/kubelet /opt/kubernetes/bin/
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
5) Start and enable at boot
systemctl daemon-reload && \
systemctl start kubelet && \
systemctl enable kubelet && \
systemctl status kubelet
6) Approve the kubelet certificate request and join the cluster
# list kubelet certificate requests
[root@k8s-master01 package]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-51DMQaNyjahx9Nrmt5_ckUbA6LvUBZOaYHAeKkythkU 23s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
# approve the request
kubectl certificate approve node-csr-51DMQaNyjahx9Nrmt5_ckUbA6LvUBZOaYHAeKkythkU
>>> certificatesigningrequest.certificates.k8s.io/node-csr-51DMQaNyjahx9Nrmt5_ckUbA6LvUBZOaYHAeKkythkU approved
# check the node (it shows NotReady because the network plugin is not deployed yet; ignore this for now)
kubectl get node # if no resources are returned and the logs show permission errors for system:kube-controller-manager, fix it with the two bindings below:
kubectl create clusterrolebinding controller-node-clusterrolebing --clusterrole=system:controller:node-controller --user=system:kube-controller-manager
kubectl create clusterrolebinding kube-controller-manager --clusterrole=cluster-admin --user=system:kube-controller-manager
Restart all components:
systemctl restart etcd.service && \
systemctl status etcd.service && \
systemctl restart kube-apiserver.service && \
systemctl status kube-apiserver.service && \
systemctl restart kube-controller-manager.service && \
systemctl status kube-controller-manager.service && \
systemctl restart kube-scheduler.service && \
systemctl status kube-scheduler.service && \
systemctl restart kubelet.service && \
systemctl status kubelet.service
5.3. Deploy kube-proxy
Run on master01.
1) Create the configuration file
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF
2) Configuration parameter file
Note: clusterCIDR is the Pod CIDR.
hostnameOverride is the master01 hostname; be careful not to get it wrong.
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master01
clusterCIDR: 10.96.0.0/16
EOF
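Since the ipvs kernel modules were loaded during initialization, kube-proxy can optionally be switched from the default iptables mode to ipvs by adding a mode field to this file (left at the default above; a sketch, only add it if you want ipvs forwarding):
mode: ipvs # optional; requires the ip_vs modules loaded earlier and the ipvsadm/ipset packages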
3) Generate kube-proxy.kubeconfig
Generate the kube-proxy certificate:
Create the certificate signing request:
cd /root/k8s/deploy/tls/k8s
cat > kube-proxy-csr.json << EOF
{
"CN": "system:kube-proxy",
"hosts": [
"127.0.0.1",
"10.0.1.201",
"10.0.1.202",
"10.0.1.203",
"10.0.1.204",
"10.0.1.205",
"10.0.1.200",
"10.0.1.206",
"10.0.1.207",
"10.0.1.208"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
Generate the kubeconfig file:
Set KUBE_APISERVER to the master01 address.
KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://10.0.1.201:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
4) Manage kube-proxy with systemd
cp /root/k8s/deploy/package/kubernetes/server/bin/kube-proxy /opt/kubernetes/bin
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
5) Start and enable at boot
systemctl daemon-reload && \
systemctl start kube-proxy && \
systemctl enable kube-proxy && \
systemctl status kube-proxy
5.4. Deploy the Calico network plugin
Calico is a pure layer-3 data center networking solution and is currently the mainstream network plugin for Kubernetes.
Website: https://projectcalico.docs.tigera.io/about/about-calico
Deploy Calico:
calico.yaml can be applied without modification.
Already downloaded in the network drive: k8s\deploy\package\calico.yaml
cd /root/k8s/deploy/package
wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml
kubectl apply -f calico.yaml
Check Calico status (STATUS should be Running):
kubectl get po,svc,deploy -A -o wide
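Once the Calico pods are Running, the node should move from NotReady to Ready:
kubectl get nodes # k8s-master01 should now show STATUS Ready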
5.5. Authorize the apiserver to access the kubelet
Run on master01.
cd /root/k8s/deploy/package
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
- ""
resources:
- nodes/proxy
- nodes/stats
- nodes/log
- nodes/spec
- nodes/metrics
- pods/log
verbs:
- "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:kube-apiserver
namespace: ""
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: kubernetes
EOF
Apply it:
kubectl apply -f apiserver-to-kubelet-rbac.yaml
5.6. Add worker nodes
1) Copy the deployed node files to the new nodes
From master01, copy the worker-node files to the new nodes node01 and node02 (10.0.1.203, 10.0.1.204).
Copy to node01:
scp -r /opt/kubernetes root@10.0.1.203:/opt/ && \
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@10.0.1.203:/usr/lib/systemd/system && \
scp /opt/kubernetes/ssl/ca.pem root@10.0.1.203:/opt/kubernetes/ssl
Copy to node02:
scp -r /opt/kubernetes root@10.0.1.204:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@10.0.1.204:/usr/lib/systemd/system && \
scp /opt/kubernetes/ssl/ca.pem root@10.0.1.204:/opt/kubernetes/ssl
2) Delete the kubelet certificate and kubeconfig
These files are generated automatically when the certificate request is approved and differ per node, so they must be deleted.
Run on node01 and node02:
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig && \
rm -f /opt/kubernetes/ssl/kubelet*
3) Change the hostname in the configuration files
Run on node01 and node02.
node01:
vim /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node01
vim /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node01
node02:
vim /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node02
vim /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node02
4) Start and enable at boot
Run on node01 and node02:
systemctl daemon-reload && \
systemctl start kubelet kube-proxy && \
systemctl enable kubelet kube-proxy && \
systemctl status kubelet kube-proxy
5) On the master, approve the new nodes' kubelet certificate requests
Run on master01.
# list certificate requests
[root@k8s-master01 package]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-py6S6tMt8hyMXFb-cp6oloxg2ZaYw0L8C2ezUXNwLq8 18s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
node-csr-tFBsqBiFMTTgBACstqeucyhHXOco2wRFKX8LOev8m-E 14s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
# approve the requests
kubectl certificate approve node-csr-py6S6tMt8hyMXFb-cp6oloxg2ZaYw0L8C2ezUXNwLq8
kubectl certificate approve node-csr-tFBsqBiFMTTgBACstqeucyhHXOco2wRFKX8LOev8m-E
Check the nodes:
[root@k8s-master01 package]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready <none> 5h5m v1.20.15
k8s-node01 NotReady <none> 14s v1.20.15
k8s-node02 NotReady <none> 9s v1.20.15
The new nodes show NotReady because Calico has just detected them and is still deploying its pods there; they will become Ready shortly.
[root@k8s-master01 package]# kubectl get po -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-6dfcd885bf-sc8f4 1/1 Running 0 20m 192.168.32.129 k8s-master01 <none> <none>
kube-system calico-node-4drpr 1/1 Running 0 9m34s 10.0.1.203 k8s-node01 <none> <none>
kube-system calico-node-g7f5q 1/1 Running 0 20m 10.0.1.201 k8s-master01 <none> <none>
kube-system calico-node-kstt9 1/1 Running 0 9m13s 10.0.1.204 k8s-node02 <none> <none>
6. Deploy CoreDNS
CoreDNS provides Service name resolution inside the cluster.
coredns.yaml
Change the clusterIP field to the second IP of your own service range.
apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: Reconcile
name: system:coredns
rules:
- apiGroups:
- ""
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: EnsureExists
name: system:coredns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
- kind: ServiceAccount
name: coredns
namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: EnsureExists
data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
proxy . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "CoreDNS"
spec:
# replicas: not specified here:
# 1. In order to make Addon Manager do not reconcile this replicas parameter.
# 2. Default is 1.
# 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
spec:
serviceAccountName: coredns
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
- key: "CriticalAddonsOnly"
operator: "Exists"
containers:
- name: coredns
image: coredns/coredns:1.2.2
imagePullPolicy: IfNotPresent
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
readOnly: true
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
annotations:
prometheus.io/port: "9153"
prometheus.io/scrape: "true"
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "CoreDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 10.244.0.2
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
Deploy it:
kubectl apply -f coredns.yaml
Check (STATUS should be Running):
kubectl get po,svc,deploy -A -o wide
Test:
[root@k8s-master01 package]# kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server: 10.244.0.2
Address 1: 10.244.0.2 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.244.0.1 kubernetes.default.svc.cluster.local
/ #
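From the same test pod you can also resolve the kube-dns service by its full name (optional check):
/ # nslookup kube-dns.kube-system.svc.cluster.local # should return 10.244.0.2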
7. Scale out to multiple masters (high-availability architecture)
7.1. Deploy the master02 node
master02 IP: 10.0.1.202
Master02 is set up exactly like the already-deployed master01, so we only need to copy all the Kubernetes files from master01, change the server IP and hostname, and start the services.
7.1.1. Install Docker
On master02.
Already installed during the base environment setup; skip this step.
7.1.2. Create the etcd certificate directory
On master02, create the etcd certificate directory:
mkdir -p /opt/etcd/ssl
7.1.3. Copy the master01 files to master02
Copy all Kubernetes files and the etcd certificates from master01 to master02.
Run on master01:
scp -r /opt/kubernetes k8s-master02:/opt && \
scp -r /opt/etcd/ssl k8s-master02:/opt/etcd && \
scp /usr/lib/systemd/system/kube* k8s-master02:/usr/lib/systemd/system && \
scp /usr/bin/kubectl k8s-master02:/usr/bin && \
scp -r ~/.kube k8s-master02:~
7.1.4. Delete the certificate files
On master02.
Delete the kubelet certificate and kubeconfig:
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig && \
rm -f /opt/kubernetes/ssl/kubelet*
7.1.5. Change the IPs and hostname in the configuration files
On master02.
Change the apiserver, kubelet and kube-proxy configuration files to the local IP and hostname:
vim /opt/kubernetes/cfg/kube-apiserver.conf
...
--bind-address=10.0.1.202 \
--advertise-address=10.0.1.202 \
...
vim /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master02
vim /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-master02
7.1.6. Start and enable at boot
On master02.
Start kube-apiserver, kube-controller-manager, kube-scheduler, kubelet and kube-proxy:
systemctl daemon-reload && \
systemctl start kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy && \
systemctl enable kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy && \
systemctl status kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
7.1.7. Check cluster status
On master02.
# change the server address to this master's IP
vim ~/.kube/config
...
server: https://10.0.1.202:6443
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
7.1.8. Approve the kubelet certificate request
On master02.
# list certificate requests; the name will be the one generated in your environment
[root@k8s-master02 ~]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-1dQ4UNIa31I21OGis424i42KCi11hX44f0cDD5ezZxA 3m57s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
# approve the request
kubectl certificate approve node-csr-1dQ4UNIa31I21OGis424i42KCi11hX44f0cDD5ezZxA
# check the nodes; if the status is NotReady, just wait a moment
[root@k8s-master02 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready <none> 25d v1.20.15
k8s-master02 Ready <none> 2m30s v1.20.15
k8s-node01 Ready <none> 25d v1.20.15
k8s-node02 Ready <none> 25d v1.20.15
7.1.9. kubectl command completion
On master02.
yum install -y bash-completion && \
source /usr/share/bash-completion/bash_completion && \
source <(kubectl completion bash) && \
kubectl completion bash > ~/.kube/completion.bash.inc && \
source '/root/.kube/completion.bash.inc' && \
source $HOME/.bash_profile && \
echo "source <(kubectl completion bash)" >> ~/.bashrc && \
source ~/.bashrc
7.2. Deploy the master03 node
master03 IP: 10.0.1.205
Master03 is set up exactly like the already-deployed master01, so we only need to copy all the Kubernetes files from master01, change the server IP and hostname, and start the services.
7.2.1. Install Docker
On master03.
Already installed during the base environment setup; skip this step.
7.2.2. Create the etcd certificate directory
On master03, create the etcd certificate directory:
mkdir -p /opt/etcd/ssl
7.2.3. Copy the master01 files to master03
Copy all Kubernetes files and the etcd certificates from master01 to master03.
Run on master01:
scp -r /opt/kubernetes k8s-master03:/opt && \
scp -r /opt/etcd/ssl k8s-master03:/opt/etcd && \
scp /usr/lib/systemd/system/kube* k8s-master03:/usr/lib/systemd/system && \
scp /usr/bin/kubectl k8s-master03:/usr/bin && \
scp -r ~/.kube k8s-master03:~
7.2.4. Delete the certificate files
On master03.
Delete the kubelet certificate and kubeconfig:
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig && \
rm -f /opt/kubernetes/ssl/kubelet*
7.2.5. Change the IPs and hostname in the configuration files
On master03.
Change the apiserver, kubelet and kube-proxy configuration files to the local IP and hostname:
vim /opt/kubernetes/cfg/kube-apiserver.conf
...
--bind-address=10.0.1.205 \
--advertise-address=10.0.1.205 \
...
vim /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-master03
vim /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-master03
7.2.6. Start and enable at boot
On master03.
Start kube-apiserver, kube-controller-manager, kube-scheduler, kubelet and kube-proxy:
systemctl daemon-reload && \
systemctl start kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy && \
systemctl enable kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy && \
systemctl status kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
7.2.7. Check cluster status
On master03.
# change the server address to this master's IP
vim ~/.kube/config
...
server: https://10.0.1.205:6443
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
7.2.8. Approve the kubelet certificate request
On master03.
# list certificate requests
kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-6-Er6fY8-aBvipBowmdwQYsXsrRfyhULMPAR3XDSQeg 72s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
# approve the request
kubectl certificate approve node-csr-6-Er6fY8-aBvipBowmdwQYsXsrRfyhULMPAR3XDSQeg
# check the nodes; if the status is NotReady, just wait a moment
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready <none> 25d v1.20.15
k8s-master02 Ready <none> 28m v1.20.15
k8s-master03 Ready <none> 103s v1.20.15
k8s-node01 Ready <none> 25d v1.20.15
k8s-node02 Ready <none> 25d v1.20.15
7.2.9. kubectl command completion
On master03.
yum install -y bash-completion && \
source /usr/share/bash-completion/bash_completion && \
source <(kubectl completion bash) && \
kubectl completion bash > ~/.kube/completion.bash.inc && \
source '/root/.kube/completion.bash.inc' && \
source $HOME/.bash_profile && \
echo "source <(kubectl completion bash)" >> ~/.bashrc && \
source ~/.bashrc
At this point the multi-master, multi-node Kubernetes cluster is complete.
7.3. Deploy the Nginx + Keepalived high-availability load balancer
1) Install the packages on the 3 master nodes
Note 1: to save machines, the load balancer is co-located with the Kubernetes master nodes here. It can also be deployed outside the cluster, as long as nginx can reach the apiservers.
Note 2: most public clouds do not support keepalived; in that case use the cloud provider's load balancer product to balance the master kube-apiservers directly, with otherwise the same architecture.
Run on the 3 master nodes:
yum install epel-release -y && yum -y install nginx-all-modules.noarch && yum install nginx keepalived -y
2) Nginx configuration file (identical on all 3 nodes)
cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
# layer-4 load balancing for the kube-apiserver on the 3 masters
stream {
log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
access_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
server 10.0.1.201:6443; # Master1 APISERVER IP:PORT
server 10.0.1.202:6443; # Master2 APISERVER IP:PORT
server 10.0.1.205:6443; # Master3 APISERVER IP:PORT
}
server {
listen 16443; # nginx shares the master nodes with the apiserver, so this port cannot be 6443 or it would conflict
proxy_pass k8s-apiserver;
}
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
server {
listen 80 default_server;
server_name _;
location / {
}
}
}
EOF
3) Keepalived configuration file (nginx master, on master01)
Run on master01:
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_MASTER
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state MASTER
interface ens32 # change to the actual NIC name
virtual_router_id 51 # VRRP router ID; unique per instance
priority 100 # priority; backups use lower values (90 and 80 below)
advert_int 1 # VRRP advertisement (heartbeat) interval, default 1 second
authentication {
auth_type PASS
auth_pass 1111
}
# virtual IP
virtual_ipaddress {
10.0.1.200/24
}
track_script {
check_nginx
}
}
EOF
Parameter notes:
vrrp_script: script that checks whether nginx is working (its result decides whether to fail over)
virtual_ipaddress: the virtual IP (VIP)
Create the nginx health-check script referenced above:
Run on master01:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
exit 1
else
exit 0
fi
EOF
Make it executable:
chmod +x /etc/keepalived/check_nginx.sh
Note: keepalived decides whether to fail over based on the script's exit code (0 = healthy, non-zero = unhealthy).
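You can test the script by hand; 0 means nginx is listening on 16443, so keepalived keeps the VIP on this node:
bash /etc/keepalived/check_nginx.sh; echo $?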
4) Keepalived configuration file (nginx backup nodes)
Run on master02:
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_BACKUP
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state BACKUP
interface ens32
virtual_router_id 51 # VRRP router ID; unique per instance
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.0.1.200/24
}
track_script {
check_nginx
}
}
EOF
Create the nginx health-check script referenced above:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
exit 1
else
exit 0
fi
EOF
Make it executable:
chmod +x /etc/keepalived/check_nginx.sh
Run on master03:
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_BACKUP
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state BACKUP
interface ens32
virtual_router_id 51 # VRRP router ID; unique per instance
priority 80
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.0.1.200/24
}
track_script {
check_nginx
}
}
EOF
Create the nginx health-check script referenced above:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
exit 1
else
exit 0
fi
EOF
Make it executable:
chmod +x /etc/keepalived/check_nginx.sh
5) Start and enable at boot
systemctl daemon-reload && \
systemctl start nginx keepalived && \
systemctl enable nginx keepalived && \
systemctl status nginx keepalived
6) Check keepalived status
On master01, the following command should show the virtual IP added to the NIC:
ip a
7) Nginx + Keepalived failover test
Stop nginx on the master node and verify that the VIP moves to a backup server.
On the nginx master run systemctl stop nginx;
On the nginx backup, ip addr should then show the VIP bound there.
8) Test access through the load balancer
From any node in the cluster, curl the Kubernetes version through the VIP:
[root@k8s-master01 ~]# curl -k https://10.0.1.200:16443/version
{
"major": "1",
"minor": "20",
"gitVersion": "v1.20.15",
"gitCommit": "8f1e5bf0b9729a899b8df86249b56e2c74aebc55",
"gitTreeState": "clean",
"buildDate": "2022-01-19T17:23:01Z",
"goVersion": "go1.15.15",
"compiler": "gc",
"platform": "linux/amd64"
}
Getting the Kubernetes version back means the load balancer is working. Request flow: curl -> VIP (nginx) -> apiserver.
The nginx log also shows which apiserver each request was forwarded to:
[root@k8s-master01 ~]# cat /var/log/nginx/k8s-access.log
192.168.110.131 192.168.110.131:6443 - [21/Sep/2022:09:12:15 +0800] 200 429
192.168.110.132 192.168.110.132:6443 - [21/Sep/2022:09:12:50 +0800] 200 429
192.168.110.133 192.168.110.133:6443 - [21/Sep/2022:09:12:53 +0800] 200 429
192.168.110.134 192.168.110.131:6443 - [21/Sep/2022:09:12:59 +0800] 200 429
192.168.110.135 192.168.110.131:6443 - [21/Sep/2022:09:13:01 +0800] 200 429
192.168.110.170 192.168.110.132:6443 - [21/Sep/2022:09:13:03 +0800] 200 429
9) Point all worker nodes at the LB VIP
Although we added master02/master03 and a load balancer, we scaled out from a single-master setup, so all worker-node components still connect to master01. Unless they are switched to the VIP behind the load balancer, the master remains a single point of failure.
So the next step is to change the component configuration files on every worker node (every node shown by kubectl get node) from 10.0.1.201 to 10.0.1.200 (the VIP).
Run on every worker node:
sed -i 's#10.0.1.201:6443#10.0.1.200:16443#' /opt/kubernetes/cfg/*
systemctl restart kubelet kube-proxy
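Optionally confirm that no kubeconfig on the node still points at master01 directly:
grep -rn "server:" /opt/kubernetes/cfg/*.kubeconfig # every server line should now show https://10.0.1.200:16443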
Check node status:
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready <none> 26d v1.20.15
k8s-master02 Ready <none> 36h v1.20.15
k8s-master03 Ready <none> 36h v1.20.15
k8s-node01 Ready <none> 26d v1.20.15
k8s-node02 Ready <none> 26d v1.20.15
With that, a complete binary high-availability cluster is done.