Kubernetes Installation in Practice -> Stable Release v1.14.3

Published 2023-06-30 11:15:56 Author: wang_wei123

There are many ways to install Kubernetes; this guide uses kubeadm to deploy a cluster with one master and two worker nodes.

1. Cluster information
a. Node plan
Hostname   Node IP       Role   Components
k8s-master 192.168.1.203 master etcd, proxy, apiserver, controller-manager, scheduler, coredns, pause
k8s-node1  192.168.1.202 slave  proxy, coredns, pause
k8s-node2  192.168.1.201 slave  proxy, coredns, pause

b. Component versions
Component Version Notes
CentOS 7.2/7.9.2009
Kernel 3.10.0-1160.el7.x86_64
etcd 3.3.10
coredns 1.3.1
proxy v1.14.3
controller-manager v1.14.3
apiserver v1.14.3
scheduler v1.14.3
kubelet-1.14.3 system package, installed via yum
kubeadm-1.14.3 system package, installed via yum
kubectl-1.14.3 system package, installed via yum

2. Preparation
Nodes: run on all nodes (master and nodes)
a. Set the hostname
Hostnames may only contain lowercase letters, digits, "." and "-", and must start with a lowercase letter or digit.
Set the hostname on the master node: hostnamectl set-hostname k8s-master
Set the hostname on the slave1 node: hostnamectl set-hostname k8s-node1
Set the hostname on the slave2 node: hostnamectl set-hostname k8s-node2
b. Add hosts entries
cat >>/etc/hosts<<EOF
192.168.1.203 k8s-master
192.168.1.202 k8s-node1
192.168.1.201 k8s-node2
EOF
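The same hosts entries have to land on every node, so it helps to keep the name-to-IP map in one place and generate the lines from it. A small sketch: it writes to a scratch file (hosts.k8s) rather than /etc/hosts so it is safe to dry-run; on a real node, append the output to /etc/hosts instead.

```shell
#!/bin/sh
# Node name -> IP map kept in a single heredoc; extend it when nodes are added.
# Writes to ./hosts.k8s for a safe dry run; append to /etc/hosts on real nodes.
OUT=hosts.k8s
: > "$OUT"
while read -r ip name; do
  printf '%s %s\n' "$ip" "$name" >> "$OUT"
done <<'EOF'
192.168.1.203 k8s-master
192.168.1.202 k8s-node1
192.168.1.201 k8s-node2
EOF
cat "$OUT"
```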

3. Adjust system settings
Nodes: run on all nodes (master and nodes)
a. Open the required ports in the security group (or disable iptables entirely)
If there are no security-group restrictions between nodes (machines on the internal network can reach each other freely), skip this step; otherwise make sure at least the following ports are reachable:
k8s-master node: open TCP ports 6443, 2379, 2380, 60080 and 60081
k8s-slave nodes: open all UDP ports
b. Allow forwarding in iptables: iptables -P FORWARD ACCEPT
c. Disable swap: swapoff -a (and keep swap from mounting at boot: sed -i '/ swap / s/^\(.*\)/#\1/g' /etc/fstab)
d. Disable SELinux and the firewall
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
systemctl disable firewalld && systemctl stop firewalld
e. Adjust kernel parameters
cat <<EOF> /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.max_map_count=262144
EOF
$ modprobe br_netfilter
$ lsmod |grep filter # verify the module is loaded
$ sysctl -p /etc/sysctl.d/k8s.conf

f. Configure yum repositories
Sync the clock: ntpdate s1a.time.edu.cn
Install base tools: yum install -y wget zip unzip rsync lrzsz vim-enhanced ntpdate tree lsof dstat net-tools
$ curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
$ curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
$ cat <<EOF> /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
$ yum clean all && yum makecache

4. Install Docker
Nodes: run on all nodes (master and nodes)
## List all available versions: yum list docker-ce --showduplicates|sort -r
## Install a specific older version: yum install -y --setopt=obsoletes=0 docker-ce-17.03.3.ce-1.el7 (or yum install -y docker-ce-17.03.3.ce-1.el7)
## Install the latest version: yum install -y docker-ce

## Configure a Docker registry mirror
$ mkdir -p /etc/docker
$ tee /etc/docker/daemon.json <<-'EOF'
{
  "insecure-registries": ["192.168.1.203:5000"],
  "registry-mirrors": ["https://4rnnw4sp.mirror.aliyuncs.com"]
}
EOF
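One extra daemon.json setting worth knowing about, not part of the original setup and therefore an optional assumption: Docker defaults to the cgroupfs cgroup driver, and if kubelet later reports a cgroup-driver mismatch, the documented fix is to align Docker on systemd via "exec-opts". A sketch of the same file with that option added:

```json
{
  "insecure-registries": ["192.168.1.203:5000"],
  "registry-mirrors": ["https://4rnnw4sp.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

Only add this if kubelet actually complains about the driver; changing it on a node with running containers requires recreating them.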

## Start Docker
$ systemctl daemon-reload
$ systemctl enable docker && systemctl restart docker

5. Install Kubernetes
a. Install kubeadm, kubelet and kubectl
Nodes: run on all nodes (master and nodes)
yum install -y kubelet-1.14.3 kubeadm-1.14.3 kubectl-1.14.3 --disableexcludes=kubernetes
Check the installed version: kubeadm version
Enable kubelet at boot: systemctl enable kubelet
b. Initialize the configuration file
Nodes: run on the master node only
Export the default configuration for editing: kubeadm config print init-defaults > kubeadm.yaml
[root@k8s-master ~]# grep '##' kubeadm.yaml # four fields were changed, as shown below
advertiseAddress: 192.168.1.203 ## master IP
imageRepository: k8s.gcr.io ## optionally change to registry.aliyuncs.com/google_containers
kubernetesVersion: v1.14.3 ## v1.14.0 --> v1.14.3
podSubnet: 10.244.0.0/16 ## "" --> "10.244.0.0/16", pod network, required by flannel
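Three of the four edits above (all but imageRepository, which this walkthrough leaves at k8s.gcr.io and works around by re-tagging mirror images) can be scripted with sed. A sketch, run here against a minimal stand-in for kubeadm.yaml so it can be dry-run anywhere; on the master, drop the heredoc and point sed at the real exported file. The field names are assumed to match kubeadm's v1.14 init-defaults output.

```shell
#!/bin/sh
# Stand-in for the exported kubeadm.yaml so the patch can be tested anywhere;
# on the master, skip this heredoc and run the sed against the real file.
cat > kubeadm.yaml <<'EOF'
advertiseAddress: 1.2.3.4
kubernetesVersion: v1.14.0
podSubnet: ""
EOF
# Apply the documented changes in place (leading indentation is preserved
# because the pattern starts at the field name).
sed -i \
  -e 's#advertiseAddress: .*#advertiseAddress: 192.168.1.203#' \
  -e 's#kubernetesVersion: .*#kubernetesVersion: v1.14.3#' \
  -e 's#podSubnet: .*#podSubnet: 10.244.0.0/16#' \
  kubeadm.yaml
cat kubeadm.yaml
```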

c. Pre-pull the images
Nodes: run on the master node only
# List the images that will be used; if the config is correct, you get the following list.
[root@k8s-master ~]# kubeadm config images list --config kubeadm.yaml
k8s.gcr.io/kube-apiserver:v1.14.3
k8s.gcr.io/kube-controller-manager:v1.14.3
k8s.gcr.io/kube-scheduler:v1.14.3
k8s.gcr.io/kube-proxy:v1.14.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
# Pull the images locally ahead of time
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.14.3
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.3
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.14.3
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.14.3
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.3.10
docker pull coredns/coredns:1.3.1
# Re-tag the mirror images with the k8s.gcr.io names kubeadm expects
docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.14.3 k8s.gcr.io/kube-apiserver:v1.14.3
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.3 k8s.gcr.io/kube-controller-manager:v1.14.3
docker tag mirrorgooglecontainers/etcd-amd64:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.14.3 k8s.gcr.io/kube-scheduler:v1.14.3
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.14.3 k8s.gcr.io/kube-proxy:v1.14.3
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1

docker rmi mirrorgooglecontainers/kube-apiserver-amd64:v1.14.3
docker rmi mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.3
docker rmi mirrorgooglecontainers/etcd-amd64:3.3.10
docker rmi mirrorgooglecontainers/kube-scheduler-amd64:v1.14.3
docker rmi mirrorgooglecontainers/kube-proxy-amd64:v1.14.3
docker rmi coredns/coredns:1.3.1
docker rmi mirrorgooglecontainers/pause-amd64:3.1
Note: these mirror repositories may become unavailable; if a pull fails, find a working mirror for the same image and tag.
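The pull/tag/rmi triple is identical for every image, so it can be driven from a single list. The sketch below only prints the docker commands (a dry run, safe to execute anywhere); pipe its output to sh on a node to actually run them. It uses the same mirrorgooglecontainers/coredns sources as above.

```shell
#!/bin/sh
# Emit the docker pull/tag/rmi commands for each image from one list.
# Dry run: writes the commands to pull-images.sh; run `sh pull-images.sh`
# on a node to execute them. Entry format: <mirror image>=<k8s.gcr.io image>
while IFS='=' read -r src dst; do
  echo "docker pull $src"
  echo "docker tag $src $dst"
  echo "docker rmi $src"
done > pull-images.sh <<'EOF'
mirrorgooglecontainers/kube-apiserver-amd64:v1.14.3=k8s.gcr.io/kube-apiserver:v1.14.3
mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.3=k8s.gcr.io/kube-controller-manager:v1.14.3
mirrorgooglecontainers/kube-scheduler-amd64:v1.14.3=k8s.gcr.io/kube-scheduler:v1.14.3
mirrorgooglecontainers/kube-proxy-amd64:v1.14.3=k8s.gcr.io/kube-proxy:v1.14.3
mirrorgooglecontainers/pause-amd64:3.1=k8s.gcr.io/pause:3.1
mirrorgooglecontainers/etcd-amd64:3.3.10=k8s.gcr.io/etcd:3.3.10
coredns/coredns:1.3.1=k8s.gcr.io/coredns:1.3.1
EOF
cat pull-images.sh
```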

d. Initialize the master node
Nodes: run on the master node only
kubeadm init --config kubeadm.yaml
...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.203:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:2c5d02ebf2a7cec5e344967297d182664119923eb437a59eb3ad8f5881299124
$ mkdir -p $HOME/.kube
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 2m12s v1.14.3
At this point kubectl get no should show the node as NotReady, because no network plugin has been installed yet. If init fails, fix the reported problem, run kubeadm reset, and then run kubeadm init again.

e. Join the slave nodes to the cluster
Nodes: run on every slave node (prerequisite: the preparation steps above are complete)
Run the following command, then check with kubectl get no on the master node.
kubeadm join 192.168.1.203:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:2c5d02ebf2a7cec5e344967297d182664119923eb437a59eb3ad8f5881299124
Note: after running the join command above, kubectl get shows the slave node has joined the cluster, but it is still NotReady because the (flannel) network plugin is missing.
$ kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 60m v1.14.3
k8s-node1 NotReady <none> 18s v1.14.3
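The --discovery-token-ca-cert-hash in the join command is simply the SHA-256 of the cluster CA's public key, so it can be recomputed at any time from /etc/kubernetes/pki/ca.crt when the original init output is lost (kubeadm token create --print-join-command regenerates the whole join line, including a fresh token). A sketch of the hash computation, demonstrated against a throwaway self-signed certificate so it runs anywhere; on the master, set CERT to the real ca.crt.

```shell
#!/bin/sh
# Recompute the discovery-token-ca-cert-hash from a CA certificate.
# Demo uses a throwaway self-signed cert; on the master use
# CERT=/etc/kubernetes/pki/ca.crt instead.
CERT=demo-ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout demo-ca.key -out "$CERT" -subj /CN=demo >/dev/null 2>&1
# Documented kubeadm recipe: SHA-256 over the DER-encoded public key.
HASH=$(openssl x509 -pubkey -in "$CERT" -noout \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:$HASH"
echo "sha256:$HASH" > ca-hash.txt
```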

f. Install the flannel plugin
Nodes: run on the master node only
Download the manifest (network issues may require several attempts): wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Edit kube-flannel.yml to point at a reachable image and pin the correct NIC.
...
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.10.0-amd64 ## pick a reachable image
#image: docker.io/rancher/mirrored-flannelcni-flannel:v0.22.0
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
- --iface=ens33 ## if the host has multiple NICs, flannel defaults to the first one; pin the right interface here
resources:
requests:
cpu: "100m"
...
Note: a local private registry can also be used here.
Install the flannel plugin:
Pull the image first to speed things up: docker pull quay.io/coreos/flannel:v0.10.0-amd64
Apply the manifest: kubectl create -f kube-flannel.yml

Normally the nodes should all reach Ready state at this point. If they do not, troubleshoot like this: check the nodes --> list the non-running pods --> describe the failing pod.
[root@k8s-node1 ~]# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 11h v1.14.3
k8s-node1 NotReady <none> 10h v1.14.3
[root@k8s-node1 ~]# kubectl get pods -A |grep -v Running
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-fb8b8dccf-bg89f 0/1 Pending 0 11h
kube-system coredns-fb8b8dccf-vpxmf 0/1 Pending 0 11h
kube-system kube-proxy-hdtz7 0/1 ContainerCreating 0 10h
[root@k8s-node1 ~]# kubectl describe pod kube-proxy-hdtz7 -n kube-system
...
error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.204.82:443: i/o timeout
Warning FailedCreatePodSandBox 68s (x7 over 21m) kubelet, k8s-node1 Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 64.233.189.82:443: i/o timeout
Note the key phrase "failed pulling image": the image pull failed, so the required images must be pre-pulled on the node as well.
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.14.3
docker pull coredns/coredns:1.3.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.14.3 k8s.gcr.io/kube-proxy:v1.14.3
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi mirrorgooglecontainers/pause-amd64:3.1
docker rmi mirrorgooglecontainers/kube-proxy-amd64:v1.14.3
docker rmi coredns/coredns:1.3.1

Another error, seen with tailf /var/log/messages:
Jun 29 15:01:22 k8s-master kubelet: W0629 15:01:22.792343 8210 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Jun 29 15:01:24 k8s-master kubelet: E0629 15:01:24.157722 8210 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Reference: https://blog.csdn.net/weixin_40548480/article/details/122786504#:~:text=%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%9Avim,%2Fvar%2Flib%2Fkubelet%2Fkubeadm-flags.env
Remove the --network-plugin=cni argument from /var/lib/kubelet/kubeadm-flags.env and restart with systemctl restart kubelet (this must be done on every node).
After that, kubectl get shows the whole cluster healthy.
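That flag removal is a one-line sed. Sketched below against a sample copy of the file so it can be dry-run safely; the sample content is an assumption about what kubeadm-flags.env typically contains. On a real node, point the sed at /var/lib/kubelet/kubeadm-flags.env and then systemctl restart kubelet.

```shell
#!/bin/sh
# Sample of /var/lib/kubelet/kubeadm-flags.env so the edit can be tested
# anywhere; on a real node run the sed against the actual file.
cat > kubeadm-flags.env <<'EOF'
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1
EOF
# Drop only the --network-plugin=cni argument, leaving the rest intact.
sed -i 's/ *--network-plugin=cni//' kubeadm-flags.env
cat kubeadm-flags.env
```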

At this point the k8s cluster deployment is fully complete.
=====================================================================

Useful commands
kubectl cluster-info
kubectl explain no
kubectl describe pod coredns-fb8b8dccf-bg89f -n kube-system
kubectl get pod -A
kubectl get pods -o wide
kubectl describe pod nginx-deploy-55d8d67cf-mc9f7

Ongoing maintenance
After a shutdown or power loss, restart the service manually: systemctl restart kubelet