K8S Cluster Setup

Published: 2024-01-10 15:32:29  Author: Linux-王照炎

# Turn swap off for the current boot, then comment it out in fstab so it stays off after reboot
swapoff -a
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab

3 Make sure each node's MAC address and product_uuid are unique

ifconfig eth0 | grep ether | awk '{print $2}'
cat /sys/class/dmi/id/product_uuid

Tips:
    Physical hardware generally comes with unique addresses, but some virtual machines may have duplicates.
    Kubernetes uses these values to uniquely identify the nodes in the cluster. If they are not unique on every node, the installation may fail.
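
If you collect the values from every node into one file, a quick pipeline can flag duplicates. A minimal sketch (the `check_dups` helper name, file path and sample values are made up for illustration):

```shell
# Print any value that appears more than once; no output means all values are unique.
check_dups() { sort "$1" | uniq -d; }

# Example with a stub file standing in for the collected product_uuid values:
printf '%s\n' 11111111-aaaa 22222222-bbbb 11111111-aaaa > /tmp/uuids.txt
check_dups /tmp/uuids.txt   # prints 11111111-aaaa
```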


4 Check network connectivity between nodes

In short, verify that the nodes of your k8s cluster can all reach each other; the ping command works for this.
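
The check can be scripted; a small sketch (the `check_nodes` helper is illustrative — replace the IPs with your own node addresses):

```shell
# Ping each node once; -W 1 caps the wait at one second per node.
check_nodes() {
  for ip in "$@"; do
    if ping -c 1 -W 1 "$ip" >/dev/null 2>&1; then
      echo "$ip reachable"
    else
      echo "$ip UNREACHABLE"
    fi
  done
}

check_nodes 10.0.0.231 10.0.0.232 10.0.0.233
```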

5 Let iptables see bridged traffic

cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
modprobe br_netfilter   # the file above loads the module at boot; load it now as well

cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
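
To confirm the settings took effect, you can read the keys back. A sketch (the `show_keys` helper is illustrative; the bridge keys only exist once `br_netfilter` is loaded):

```shell
# Print each key and its current value; "unset" means the key is missing
# (usually because the br_netfilter module is not loaded yet).
show_keys() {
  for key in "$@"; do
    printf '%s = %s\n' "$key" "$(sysctl -n "$key" 2>/dev/null || echo unset)"
  done
}

show_keys net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
```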

6 Check that the required ports are free

Reference: https://kubernetes.io/zh-cn/docs/reference/networking/ports-and-protocols/
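
As a quick local check, the sketch below reports whether the main control-plane ports from that page are already taken (assumes `ss` from iproute2 is available; the `port_in_use` helper name is made up here):

```shell
# Return success if something is already listening on the given TCP port.
port_in_use() { ss -tln 2>/dev/null | grep -q ":$1 "; }

# Default control-plane ports: apiserver, etcd, kubelet, controller-manager, scheduler.
for port in 6443 2379 2380 10250 10257 10259; do
  if port_in_use "$port"; then
    echo "port $port is already in use"
  else
    echo "port $port is free"
  fi
done
```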

7 Disable the firewall

systemctl disable --now firewalld

8 Disable SELinux

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
grep ^SELINUX= /etc/selinux/config
setenforce 0   # the config change only applies after a reboot; stop enforcement for the current boot as well

9 On all nodes, change the cgroup driver to systemd

[root@master231 ~]# docker info | grep cgroup
Cgroup Driver: cgroupfs
[root@master231 ~]#
[root@master231 ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://tuv7rqqq.mirror.aliyuncs.com","https://docker.mirrors.ustc.edu.cn/","https://hub-mirror.c.163.com/","https://reg-mirror.qiniu.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
[root@master231 ~]#
[root@master231 ~]# systemctl restart docker
[root@master231 ~]#
[root@master231 ~]# docker info | grep "Cgroup Driver"
Cgroup Driver: systemd
[root@master231 ~]#

Tips:
If you do not change the cgroup driver to systemd, the default cgroupfs is used, and initializing the master node will fail!

  • Install kubeadm, kubelet and kubectl on all nodes
    1 Package overview
    You need to install the following packages on every machine:
    kubeadm:
    the command used to bootstrap the cluster.
    kubelet:
    runs on every node in the cluster and starts Pods and containers.
    kubectl:
    the command-line tool used to talk to the cluster.

kubeadm does not install or manage kubelet or kubectl for you, so you need to make sure their versions match the version of the control plane (master) installed through kubeadm. Otherwise you risk version skew, which can lead to unexpected errors and problems.

That said, one minor version of skew between the control plane and the kubelet is supported, as long as the kubelet version never exceeds the "API SERVER" version. For example, a 1.7.0 kubelet is fully compatible with a 1.8.0 "API SERVER", but not the other way around.

2 Configure the package repository

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF

3 Check the available kubeadm versions (when you install K8S later, keep all component versions identical!)

yum -y list kubeadm --showduplicates | sort -r

4 Install the kubeadm, kubelet and kubectl packages

yum -y install kubeadm-1.23.17-0 kubelet-1.23.17-0 kubectl-1.23.17-0

Alternatively, you can use the packages I downloaded in advance:

tar xf oldboyedu-kubeadm-kubelet-kubectl.tar.gz && yum -y localinstall kubeadm-kubelet-kubectl/*.rpm

5 Start the kubelet service (a startup failure here is normal: the service keeps restarting because its config file is missing, and it recovers once the cluster is initialized!)

systemctl enable --now kubelet
systemctl status kubelet

Reference:
https://kubernetes.io/zh/docs/tasks/tools/install-kubectl-linux/

  • Initialize the master node
    0. Bonus:
    Export all master images in one go:
    [root@master231 ~]# docker save $(docker images | awk 'NR>1{print $1":"$2}') -o oldboyedu-master-v1.23.17.tar.gz

    Import the master images:
    [root@master231 ~]# docker load -i oldboyedu-master-v1.23.17.tar.gz

    1 Initialize the master node with kubeadm
    [root@master231 ~]# kubeadm init --kubernetes-version=v1.23.17 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=oldboyedu.com

Parameter notes:
--kubernetes-version:
	The version of the K8S master components.

--image-repository:
	The image registry to pull the k8s master component images from.

--pod-network-cidr:
	The Pod network CIDR.

--service-cidr:
	The Service (SVC) network CIDR.

--service-dns-domain:
	The DNS domain for services; defaults to "cluster.local" if not specified.

When kubeadm initializes the cluster, you may see output like the following:
[init]
The K8S version being initialized.

[preflight]
Pre-flight work for installing the K8S cluster, such as pulling images; how long this takes depends on your network speed.

[certs]
Generates the certificate files, stored under "/etc/kubernetes/pki" by default.

[kubeconfig]
Generates the default kubeconfig files for the cluster, stored under "/etc/kubernetes" by default.

[kubelet-start]
Starts the kubelet.
Environment variables are written to "/var/lib/kubelet/kubeadm-flags.env" by default.
The config file is written to "/var/lib/kubelet/config.yaml" by default.

[control-plane]
Uses the static Pod directory; the default manifests are stored under "/etc/kubernetes/manifests".
This step creates the static Pods, including "kube-apiserver", "kube-controller-manager" and "kube-scheduler".

[etcd]
Creates the static Pod for etcd; the default manifest is stored under "/etc/kubernetes/manifests".

[wait-control-plane]
Waits for the kubelet to start the static Pods from the manifest directory "/etc/kubernetes/manifests".

[apiclient]
Waits for all the master components to run normally.

[upload-config]
Creates a ConfigMap named "kubeadm-config" in the "kube-system" namespace.

[kubelet]
Creates a ConfigMap named "kubelet-config-1.23" in the "kube-system" namespace, containing the kubelet configuration for the cluster.

[upload-certs]
Skipped on this node; see "--upload-certs" for details.

[mark-control-plane]
Marks the control plane by adding labels and taints, so the master node can be identified.

[bootstrap-token]
Creates a bootstrap token, e.g. "kbkgsa.fc97518diw8bdqid".
As shown in the figure below, this token is useful later when joining nodes to the cluster, and also for RBAC.

[kubelet-finalize]
Updates the kubelet certificate files.

[addons]
Installs add-ons, e.g. "CoreDNS" and "kube-proxy".

...

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.231:6443 --token juanog.fbl440fv8rzp4d2q \
--discovery-token-ca-cert-hash sha256:f3baddb1fd501b701c676ad128e3badba9369424fdea2faad0f5dd653714e113

2 Copy the admin kubeconfig, used to manage the K8S cluster

[root@master231 ~]# mkdir -p $HOME/.kube
[root@master231 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master231 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

3 Check the cluster nodes

[root@master231 ~]# kubectl get nodes
NAME        STATUS     ROLES                  AGE    VERSION
master231   NotReady   control-plane,master   6m7s   v1.23.17
[root@master231 ~]#
[root@master231 ~]#
[root@master231 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-0               Healthy   {"health":"true","reason":""}
scheduler            Healthy   ok
controller-manager   Healthy   ok
[root@master231 ~]#

  • Join the worker nodes to the cluster
    0. Bonus:
    Export all worker images in one go:
    [root@worker232 ~]# docker save $(docker images | awk 'NR>1{print $1":"$2}') -o oldboyedu-woker-v1.23.17.tar.gz
    [root@worker232 ~]#

    Import the worker images:
    [root@worker233 ~]# docker load -i oldboyedu-woker-v1.23.17.tar.gz

    1. Join the worker nodes to the cluster. Copy your own node's token here; it is printed when the master node is initialized.
    [root@worker232 ~]# kubeadm join 10.0.0.231:6443 --token juanog.fbl440fv8rzp4d2q \
    --discovery-token-ca-cert-hash sha256:f3baddb1fd501b701c676ad128e3badba9369424fdea2faad0f5dd653714e113

[root@worker233 ~]# kubeadm join 10.0.0.231:6443 --token juanog.fbl440fv8rzp4d2q \
--discovery-token-ca-cert-hash sha256:f3baddb1fd501b701c676ad128e3badba9369424fdea2faad0f5dd653714e113

2. Check the cluster nodes

[root@master231 ~]# kubectl get nodes
NAME        STATUS     ROLES                  AGE     VERSION
master231   NotReady   control-plane,master   13m     v1.23.17
worker232   NotReady   <none>                 4m58s   v1.23.17
worker233   NotReady   <none>                 26s     v1.23.17
[root@master231 ~]#

  • Initialize the network add-on
    1 Look at the available network plugins

Recommended reading:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

2 Download the flannel manifest

[root@master231 ~]# wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

3 Edit the flannel configuration

[root@master231 ~]# grep 16 kube-flannel.yml
"Network": "10.244.0.0/16",
[root@master231 ~]#
[root@master231 ~]# sed -i 's#10.244#10.100#' kube-flannel.yml
[root@master231 ~]#
[root@master231 ~]# grep 16 kube-flannel.yml
"Network": "10.100.0.0/16",
[root@master231 ~]#

Because we changed the Pod network CIDR when initializing the K8S cluster, the same change has to be made here.
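
A small helper makes the comparison explicit; this sketch just extracts the Network value from a flannel manifest so you can check it against the --pod-network-cidr you passed to kubeadm init (the `flannel_cidr` helper and stub file are illustrative):

```shell
# Pull the Pod network CIDR out of a flannel manifest.
flannel_cidr() { grep -oE '"Network": "[0-9./]+"' "$1" | cut -d'"' -f4; }

# Example against a stub file standing in for the real kube-flannel.yml:
printf '%s\n' '      "Network": "10.100.0.0/16",' > /tmp/kube-flannel-stub.yml
flannel_cidr /tmp/kube-flannel-stub.yml   # prints 10.100.0.0/16
```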

4 Deploy the flannel add-on

[root@master231 ~]# kubectl apply -f kube-flannel.yml

5 Verify that the flannel add-on is working

[root@master231 ~]# kubectl get pods -A -o wide | grep flannel
kube-flannel kube-flannel-ds-44b2l 0/1 Init:0/2 0 7s 10.0.0.232 worker232
kube-flannel kube-flannel-ds-l2vbm 0/1 Init:0/2 0 7s 10.0.0.233 worker233
kube-flannel kube-flannel-ds-rv487 0/1 Init:0/2 0 7s 10.0.0.231 master231
[root@master231 ~]#

[root@master231 ~]# kubectl get pods -A -o wide | grep flannel
kube-flannel kube-flannel-ds-44b2l 0/1 Init:1/2 0 2m58s 10.0.0.232 worker232
kube-flannel kube-flannel-ds-l2vbm 1/1 Running 0 2m58s 10.0.0.233 worker233
kube-flannel kube-flannel-ds-rv487 1/1 Running 0 2m58s 10.0.0.231 master231
[root@master231 ~]#
[root@master231 ~]# kubectl get nodes
NAME        STATUS     ROLES                  AGE   VERSION
master231   Ready      control-plane,master   60m   v1.23.17
worker232   NotReady   <none>                 52m   v1.23.17
worker233   Ready      <none>                 47m   v1.23.17
[root@master231 ~]#

[root@master231 ~]# kubectl get pods -A -o wide | grep flannel
kube-flannel kube-flannel-ds-44b2l 1/1 Running 0 5m50s 10.0.0.232 worker232
kube-flannel kube-flannel-ds-l2vbm 1/1 Running 0 5m50s 10.0.0.233 worker233
kube-flannel kube-flannel-ds-rv487 1/1 Running 0 5m50s 10.0.0.231 master231
[root@master231 ~]#
[root@master231 ~]#
[root@master231 ~]# kubectl get nodes
NAME        STATUS   ROLES                  AGE   VERSION
master231   Ready    control-plane,master   63m   v1.23.17
worker232   Ready    <none>                 54m   v1.23.17
worker233   Ready    <none>                 50m   v1.23.17
[root@master231 ~]#

6. Check that the flannel.1 device exists on each node

[root@master231 ~]# ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.100.0.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::3853:4eff:fe31:1ccb prefixlen 64 scopeid 0x20<link>
ether 3a:53:4e:31:1c:cb txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 8 overruns 0 carrier 0 collisions 0

[root@master231 ~]#

[root@worker232 ~]# ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.100.1.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::e886:7ff:fe7b:6813 prefixlen 64 scopeid 0x20<link>
ether ea:86:07:7b:68:13 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 8 overruns 0 carrier 0 collisions 0

[root@worker232 ~]#

[root@worker233 ~]# ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.100.2.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::60c3:70ff:fee0:30b0 prefixlen 64 scopeid 0x20<link>
ether 62:c3:70:e0:30:b0 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 8 overruns 0 carrier 0 collisions 0

...

cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.100.2.1 netmask 255.255.255.0 broadcast 10.100.2.255
inet6 fe80::e45a:bbff:fe7f:b7ef prefixlen 64 scopeid 0x20<link>
ether e6:5a:bb:7f:b7:ef txqueuelen 1000 (Ethernet)
RX packets 777 bytes 64788 (63.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 776 bytes 96116 (93.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

[root@worker233 ~]#

  • If cluster initialization fails, you can reset it
    kubeadm reset -f

  • Shell completion - a must for beginners

yum -y install bash-completion

kubectl completion bash > ~/.kube/completion.bash.inc
echo "source '$HOME/.kube/completion.bash.inc'" >> $HOME/.bash_profile
source $HOME/.bash_profile

[root@master231 ~]# kubectl # press Tab twice here; if completions appear, everything works.
alpha auth cordon diff get patch run version
annotate autoscale cp drain help plugin scale wait
api-resources certificate create edit kustomize port-forward set
api-versions cluster-info debug exec label proxy taint
apply completion delete explain logs replace top
attach config describe expose options rollout uncordon
[root@master231 ~]# kubectl

  • Manually create the cni0 bridge
    ---> Assuming master231's flannel.1 is on the 10.100.0.0 network:
    ip link add cni0 type bridge
    ip link set dev cni0 up
    ip addr add 10.100.0.1/24 dev cni0

---> Assuming worker232's flannel.1 is on the 10.100.1.0 network:
ip link add cni0 type bridge
ip link set dev cni0 up
ip addr add 10.100.1.1/24 dev cni0

  • Anatomy of a resource manifest
    apiVersion:
    the API version the resource type belongs to.

kind:
the type of the resource.

metadata:
declares the resource's metadata (name, labels, and so on).

spec:
the state the user wants; for example, which containers a Pod should run.

status:
the actual running state of the resource, maintained by the K8S components themselves.
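
Putting the user-written sections together, a minimal Pod manifest looks like this (the name, labels and image are just examples; status is filled in by the cluster, so you never write it yourself):

```shell
cat > /tmp/oldboyedu-demo-pod.yaml <<EOF
apiVersion: v1        # API version the Pod type belongs to
kind: Pod             # resource type
metadata:             # metadata: name, namespace, labels, ...
  name: demo-pod
  labels:
    app: demo
spec:                 # desired state: which containers to run
  containers:
  - name: web
    image: nginx:1.25
EOF

# On the master: kubectl apply -f /tmp/oldboyedu-demo-pod.yaml
```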