k8s 1.26.3 deployment (containerd + CentOS 7.9)

Published 2024-01-12 11:19:21 | Author: 麦兜的大表哥

I. System environment initialization (run on every node)

  Server list

 10.12.121.190 k8s-01-master
 10.12.121.191 k8s-01-node
 
Adjust the hosts entries and hostnames below to match your environment, then run the following commands on both the master and the node.
 
#Install chrony to keep server clocks in sync
yum install -y chrony    
systemctl enable chronyd
systemctl restart chronyd


#Disable swap, stop the firewall, disable SELinux, and update the hosts file
#Disable swap
sudo swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

#Stop the firewall and disable SELinux
systemctl stop firewalld && systemctl disable firewalld
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

#Edit the hosts file (/etc/hosts): the entries below are each host's IP and hostname
#Change a hostname with: hostnamectl set-hostname <name>
cat >> /etc/hosts << EOF
10.12.121.190 k8s-01-master
10.12.121.191 k8s-01-node
EOF

#On the master (on the node, use k8s-01-node instead):
hostnamectl set-hostname k8s-01-master && bash


#Tune Linux kernel parameters: enable bridge traffic filtering and IP forwarding
cat >> /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Load the bridge netfilter module, then apply the sysctl settings
modprobe br_netfilter
sysctl -p /etc/sysctl.d/kubernetes.conf

# Verify the module is loaded
lsmod | grep br_netfilter
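
The modprobe above does not persist across reboots. A minimal sketch to load the module at boot via systemd-modules-load (available on CentOS 7), plus a quick check that the sysctl values are active:

# Load br_netfilter automatically at boot
cat > /etc/modules-load.d/br_netfilter.conf <<EOF
br_netfilter
EOF

# Confirm the kernel parameters took effect
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward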

#Enable IPVS support
In Kubernetes, a Service can be proxied either by iptables or by IPVS; IPVS performs better under load. To use IPVS mode, the IPVS kernel modules must be loaded manually:
yum -y install ipset ipvsadm

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4  
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules 
# Run the script
/etc/sysconfig/modules/ipvs.modules

#Verify the IPVS modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
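
Once the cluster is up (after section II), you can confirm that kube-proxy is actually running in IPVS mode. A sketch, assuming the default kubeadm label k8s-app=kube-proxy on the kube-proxy pods:

# kube-proxy logs should mention the IPVS proxier
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=100 | grep -i ipvs

# IPVS virtual servers should exist for the kubernetes Service
ipvsadm -Ln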

# Add the Aliyun Docker CE repository (yum-config-manager is provided by yum-utils)
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

#Install and configure the containerd container runtime
yum -y install containerd.io-1.6.6

mkdir -p /etc/containerd
wget -N https://945me.top/update/config.toml -P /etc/containerd/
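
The config.toml above is a prebuilt file hosted by the author. If that URL is unreachable, a rough equivalent can be generated locally; a sketch assuming containerd 1.6 defaults (the pause image tag and mirror path are assumptions, adjust as needed):

# Generate the stock configuration
containerd config default > /etc/containerd/config.toml
# Match the kubelet's systemd cgroup driver
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Pull the pause image from a reachable mirror (tag assumed for k8s 1.26)
sed -i 's#sandbox_image = .*#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
# Let containerd read the registry mirror files created in /etc/containerd/certs.d below
sed -i 's#config_path = ""#config_path = "/etc/containerd/certs.d"#' /etc/containerd/config.toml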

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

systemctl enable containerd  --now

mkdir /etc/containerd/certs.d/docker.io/ -p
cat > /etc/containerd/certs.d/docker.io/hosts.toml <<EOF
server = "https://docker.io"

[host."https://vh3bm52y.mirror.aliyuncs.com"]
  capabilities = ["pull", "resolve"]

[host."https://registry.docker-cn.com"]
  capabilities = ["pull", "resolve"]
EOF

systemctl restart containerd

#Configure a domestic (Aliyun) yum repo and install kubeadm, kubelet, and kubectl in one go
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
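
Optionally confirm that the pinned 1.26.3 packages exist in this repo before installing:

yum list kubeadm --showduplicates | grep 1.26.3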

yum install -y kubelet-1.26.3 kubeadm-1.26.3 kubectl-1.26.3

systemctl enable kubelet.service --now
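
crictl is pulled in as a dependency (cri-tools) of the packages above, so the container runtime setup can now be sanity-checked; the test image below is only an example:

# Confirm crictl can talk to containerd
crictl version

# Test an image pull through the configured docker.io mirror
crictl pull docker.io/library/busybox:latest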

II. Initialize the cluster

Run the following steps only on k8s-01-master.

1. Configure the container runtime endpoint for crictl

crictl config runtime-endpoint unix:///run/containerd/containerd.sock

 

 2. Prepare the default kubeadm configuration file

mkdir -p /root/k8s-install

cd /root/k8s-install

vim kubeadm.yaml

Paste the template shown further below into kubeadm.yaml, then set advertiseAddress to the master's IP and name to the master's hostname. If your copy of the template still carries the placeholders 192.168.220.247 and k8s-master, the following sed commands swap in the real values:

sed -i 's/192.168.220.247/10.12.121.190/g' kubeadm.yaml

sed -i 's/k8s-master/k8s-01-master/g' kubeadm.yaml

Field reference:

advertiseAddress: the master node's IP address
criSocket: the container runtime socket
imageRepository: a domestic mirror for the control-plane images
podSubnet: the pod network CIDR
serviceSubnet: the Service (cluster IP) network CIDR
nodeRegistration.name: the current host's name

The two extra documents appended at the end switch kube-proxy to IPVS mode and set the kubelet cgroup driver to systemd.

 

The full kubeadm.yaml template is below, already adjusted for this cluster (advertiseAddress set to the master's IP, nodeRegistration.name set to the master's hostname):

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.12.121.190
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-01-master
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.26.3
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
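
(Optional) With the config file in place, the control-plane images can be pre-pulled from the mirror so the init step itself runs faster:

kubeadm config images pull --config kubeadm.yaml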

3. Run kubeadm init

kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification

On success, the tail of the output looks like this:

[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.12.121.190:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:ddbad6bc94e766518998f6096d28b80dab03a2a40c1e17f9dca3a47dfccf50db
 

4. Set up the kubectl config file. This effectively authorizes kubectl with the admin credentials so it can manage the cluster:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config


sudo chown $(id -u):$(id -g) $HOME/.kube/config
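
A quick sanity check that kubectl can now reach the API server:

kubectl cluster-info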

III. Add the node to the cluster

1. Copy the join command (with its token) from the init output and run it on the node:

kubeadm join 10.12.121.190:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:ddbad6bc94e766518998f6096d28b80dab03a2a40c1e17f9dca3a47dfccf50db
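
The token in the config above has a 24h TTL. If it has expired by the time the node joins, generate a fresh join command on the master:

kubeadm token create --print-join-command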

Output like the following means the node joined successfully:

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

2. Verify with kubectl that the node has joined (run on the master):

kubectl get nodes

IV. Install the Calico network plugin (master node only)

1. Install Calico

cd /root/k8s-install

wget  https://rancher.945me.top/update/calico.yaml
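
The URL above is the author's mirror of the Calico manifest. If it is unreachable, the upstream manifest can be used instead (the version below is an assumption; pick a Calico release that supports Kubernetes 1.26):

wget https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml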

kubectl apply -f calico.yaml

2. Check that the kube-system pods are all in the Running state:

kubectl get pods -n kube-system
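
To block until the Calico pods are ready instead of re-checking by hand (assuming the standard k8s-app=calico-node label from the manifest):

kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=300s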

3. Check the node status; once the nodes are Ready, the cluster installation is complete:

kubectl get node
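
As an optional smoke test, deploy a throwaway workload and confirm it gets scheduled and exposed (the names here are just examples):

kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=NodePort
kubectl get pods -o wide
kubectl get svc nginx-test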