The Most Complete Kubernetes (k8s) Guide on the Web — This One Article Is All You Need

Published: 2023-05-26 17:16:14 | Author: 潇潇love涛涛

I. Introduction

Kubernetes is an open-source container orchestration technology heavily promoted by Google. Its goal is to make deploying containerized applications simple and efficient, and it provides a whole set of mechanisms for application deployment, planning, updating, and maintenance. Many large companies use it. Kubernetes is also known as k8s (which is how I'll refer to it below). Let's step into the world of k8s!

II. k8s Overview and Features

1. A Few Key Points

  • k8s is a containerized-cluster management system released by Google in 2014
  • k8s is used to deploy containerized applications
  • k8s makes applications easy to scale
  • k8s aims to make deploying containerized applications simple and efficient

2. k8s Features

(1) Automatic bin packing

Containers are automatically placed onto nodes according to the resource requirements of the application's runtime environment.
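
To make this concrete (this example is mine, not from the original post): a pod declares resource requests and limits, and the scheduler uses the requests to pack pods onto nodes. A minimal sketch with a hypothetical pod name:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: packing-demo        # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:             # the scheduler bin-packs pods onto nodes using these
        cpu: 250m
        memory: 64Mi
      limits:               # hard runtime ceiling for the container
        cpu: 500m
        memory: 128Mi
EOF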

(2) Self-healing

When a container fails, it is restarted.
When a node it was deployed on has problems, its containers are redeployed and rescheduled.
When a container fails its health check, it is taken out of service and does not receive traffic again until it is running normally.
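
The "health check" here is a probe you declare on the container. A minimal sketch (all names are hypothetical): a liveness probe makes the kubelet restart the container when it fails, and a readiness probe keeps an unhealthy container out of service:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo          # hypothetical name
spec:
  containers:
  - name: web
    image: nginx
    livenessProbe:          # failing this gets the container restarted
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:         # failing this removes the pod from service endpoints
      httpGet:
        path: /
        port: 80
      periodSeconds: 5
EOF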

(3) Horizontal scaling

When a large volume of requests comes in, we can increase the number of replicas to scale the application horizontally.
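
For example (standard kubectl commands; the deployment name "nginx" is hypothetical):

kubectl scale deployment nginx --replicas=5                            # scale out manually
kubectl autoscale deployment nginx --min=2 --max=10 --cpu-percent=80   # or autoscale on CPU usage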

(4) Service discovery

Users do not need an extra service-discovery mechanism; Kubernetes can provide service discovery and load balancing on its own.
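
Concretely, every Service gets a stable virtual IP and a cluster DNS name of the form <service>.<namespace>.svc.cluster.local. A quick check (the pod and service names are hypothetical, and the image must contain nslookup):

kubectl exec -it some-pod -- nslookup nginx.default.svc.cluster.local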

(5) Rolling updates

As an application changes, the containers running it can be updated one at a time or in batches. A newly added instance is not put into use the moment it is added; Kubernetes first checks whether it can serve requests normally before sending it traffic.
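
A typical rolling update, sketched with a hypothetical deployment named "nginx":

kubectl set image deployment/nginx nginx=nginx:1.17   # replace pods batch by batch
kubectl rollout status deployment/nginx               # wait for the new pods to pass readiness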

(6) Version rollback

Based on how a deployment is behaving, the application running in its containers can be instantly reverted to a historical version, i.e. a rollback.
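
For example, using the same hypothetical deployment:

kubectl rollout history deployment/nginx                 # list recorded revisions
kubectl rollout undo deployment/nginx                    # roll back to the previous revision
kubectl rollout undo deployment/nginx --to-revision=2    # or to a specific revision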

(7) Secret and configuration management

Secrets and application configuration can be deployed and updated without rebuilding the image, similar to hot deployment.
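
A sketch (the names and values are made up): create a Secret and a ConfigMap from literals; pods consume them as environment variables or mounted files, and updating them requires no image rebuild:

kubectl create secret generic db-secret --from-literal=password=123456
kubectl create configmap app-config --from-literal=log_level=debug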

(8) Storage orchestration

Storage is mounted and attached to applications automatically, which is especially important for persisting the data of stateful applications.
The storage can come from local directories, network storage (NFS, Gluster, Ceph, etc.), or public-cloud storage services.
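
A minimal sketch of how an application asks for storage: a PersistentVolumeClaim declares what it needs, and k8s binds it to matching storage (this assumes the cluster has a default StorageClass or a matching PersistentVolume; the claim name is hypothetical):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc            # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi          # k8s finds and mounts the backing storage automatically
EOF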

(9) Batch processing

Provides one-off tasks and scheduled tasks, covering batch data processing and analytics scenarios.
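
These map to the Job and CronJob resources. A sketch (the task names are made up):

kubectl create job hello --image=busybox -- echo "one-off task"
kubectl create cronjob nightly --image=busybox --schedule="0 2 * * *" -- echo "scheduled task"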

III. Components of the k8s Cluster Architecture

A k8s cluster consists mainly of a Master node (control node) and worker nodes.

1. Master node (control node)

  • apiserver
    The unified entry point to the cluster; exposes a RESTful interface and hands data to etcd for storage.
  • scheduler
    Node scheduling; selects which node an application gets deployed to.
  • controller-manager
    Handles routine background tasks in the cluster; one controller per resource type.
  • etcd
    Stores the cluster's data, e.g. state data, pod data, and service data.

2. Worker node

  • kubelet
    The master's agent on the node; manages the containers on the local machine.
  • kube-proxy
    Provides network proxying, load balancing, and related functions.
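
On a running cluster you can see most of these components directly; the control-plane pieces run as pods in the kube-system namespace:

kubectl get pods -n kube-system -o wide    # apiserver, scheduler, controller-manager, etcd, kube-proxy, ...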

IV. k8s Core Concepts

1. Pod

  • The smallest deployment unit
  • A group of one or more containers
  • Containers in a pod share the network
  • Short lifecycle

2. Controller

  • Ensures the expected number of pod replicas
  • Stateless application deployment
  • Stateful application deployment
  • Ensures every node runs one copy of the same pod
  • One-off tasks and scheduled tasks

3. Service

  • Defines the access rules for a group of pods
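
A minimal sketch of how the three concepts fit together (all names are hypothetical): a Deployment (controller) keeps two pod replicas running, and a Service defines how to reach them:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2               # the controller keeps 2 pod replicas alive
  selector:
    matchLabels:
      app: web
  template:                 # pod template: the smallest deployable unit
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                # the service routes to pods carrying this label
  ports:
  - port: 80
EOF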

V. Introduction to Building a Kubernetes Cluster

1. Ways to Deploy a Kubernetes Cluster

  • kubeadm
    Kubeadm is a k8s deployment tool that provides kubeadm init and kubeadm join for quickly standing up a Kubernetes cluster.
    Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
  • Binary packages
    Download the release binaries from GitHub and deploy each component by hand to assemble a Kubernetes cluster.
    Kubeadm lowers the barrier to entry, but it hides many details, which makes problems hard to troubleshoot. If you want something easier to control, deploying from binary packages is recommended: it is more manual work, but along the way you learn a lot about how the pieces work, which also helps with later maintenance.

2. Cluster Topology Planning

  • Single-master cluster

  • Multi-master cluster

VI. Quickly Deploying a k8s Cluster with kubeadm

1. Environment Preparation

1.1 Hardware Environment

Role          IP
k8s_master    192.168.10.100
k8s_node1     192.168.10.102
k8s_node2     192.168.10.103
k8s_node4     192.168.10.104

1.2 Environment Initialization

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent

# Set each hostname according to the plan
hostnamectl set-hostname <hostname>

# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.10.100    k8smaster
192.168.10.102    k8snode1
192.168.10.103    k8snode2
192.168.10.104    k8snode4
EOF

# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply

# Synchronize the clock
yum install ntpdate -y
ntpdate time.windows.com

2. Install Docker/kubeadm/kubelet on All Nodes

The default CRI (container runtime) for Kubernetes at this version is Docker, so install Docker first.

2.1 Install Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo     # download the Docker repo file
yum -y install docker-ce-18.06.1.ce-3.el7        # install Docker
systemctl enable docker && systemctl start docker           # start Docker and enable it at boot
docker --version      # verify the installation
# Configure a registry mirror for Docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
systemctl restart docker    # restart Docker so the mirror takes effect

2.2 Add the Aliyun Kubernetes YUM Repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2.3 Install kubeadm, kubelet, and kubectl

Since versions change frequently, pin the version numbers when deploying:

yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet

3. Deploy the Kubernetes Master

Run the following on 192.168.10.100 (the master).

# --apiserver-advertise-address is the apiserver's address, i.e. the master's own IP
kubeadm init \
  --apiserver-advertise-address=192.168.10.100 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16

If swap has not been disabled, kubeadm init will fail its preflight checks. Disable swap and rerun the command; a successful run looks like this:

# Output of a successful installation
[root@hadoop100 yum.repos.d]# swapoff -a
[root@hadoop100 yum.repos.d]# sed -ri 's/.*swap.*/#&/' /etc/fstab
[root@hadoop100 yum.repos.d]# kubeadm init   --apiserver-advertise-address=192.168.10.100   --image-repository registry.aliyuncs.com/google_containers   --kubernetes-version v1.18.0   --service-cidr=10.96.0.0/12   --pod-network-cidr=10.244.0.0/16
W0526 14:35:38.129635   19861 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.10.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.10.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0526 14:36:47.126661   19861 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0526 14:36:47.128024   19861 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 25.002851 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8smaster as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8smaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: yns0bk.uts2jsm1unmvcbdp
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.100:6443 --token yns0bk.uts2jsm1unmvcbdp \
    --discovery-token-ca-cert-hash sha256:4d76fcbd7aa9bc0aa9165010c0203a18864a417ee6c9ad8e9add835832475743 

Following the instructions printed at the end of that output, set up the kubectl config:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

4. Join the Kubernetes Worker Nodes

Following the hint from the master node, run the following on each worker node (192.168.10.102, 192.168.10.103, 192.168.10.104):

kubeadm join 192.168.10.100:6443 --token yns0bk.uts2jsm1unmvcbdp \
    --discovery-token-ca-cert-hash sha256:4d76fcbd7aa9bc0aa9165010c0203a18864a417ee6c9ad8e9add835832475743 

Joining a node uses a bootstrap token, which is valid for 24 hours by default. Once it expires the token can no longer be used and a new one must be created, like this:

kubeadm token create --print-join-command    # prints a complete join command with a fresh token
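
You can also check which tokens exist and when they expire:

kubeadm token list    # shows each token's TTL and expiry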

5. Deploy the CNI Network Plugin

After all the worker nodes have joined, run 'kubectl get nodes': every node shows a NotReady status. That is because the cluster has no pod network yet; we need to install a CNI network plugin.

There is a small gotcha here: the manifest URL below may not be reachable directly.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

It fails with an error. At first I assumed a transient network problem and retried several times, without success.

Credit to this author's notes, "k8s构建Flannel网络插件失败", for the fix that finally worked.
Note: this has to be executed on all nodes, and it can be slow. You can watch the progress with the following commands:

kubectl get pods -n kube-system      # check the status of all the pods
kubectl get nodes                    # check whether every node has become Ready

6. Test the Kubernetes Cluster

Create a pod in the Kubernetes cluster and verify that it runs normally:

kubectl create deployment nginx --image=nginx                   # pull the nginx image and deploy it
kubectl expose deployment nginx --port=80 --type=NodePort       # expose port 80 through a NodePort
kubectl get pod,svc                                             # check the pod and the service's port


Access address: http://NodeIP:Port, for example:
http://192.168.10.103:30636
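
Note that the NodePort (30636 here) is assigned at random from the 30000-32767 range; read yours from the kubectl get pod,svc output. A quick check from any machine that can reach the node:

curl http://192.168.10.103:30636    # should return the nginx welcome page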