CKA Exam Questions

Published: 2024-01-04 13:51:19 · Author: 世界终将是黑大帅的

Question 1: Role-Based Access Control (RBAC)

On the official docs site, search for "RBAC", then search within the page for the command-line tool examples.


Create the ClusterRole:

root@cwlmaster1:~# kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets                                                                                             
clusterrole.rbac.authorization.k8s.io/deployment-clusterrole created   

kubectl create clusterrole <clusterrole_name> --verb=create (grants the create verb) --resource=deployments,statefulsets,daemonsets (on these three resource types)
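As a side check (not part of the original notes), a client-side dry run prints the manifest the imperative command would create, without touching the cluster:

```shell
# Print the ClusterRole manifest that would be created, without applying it
kubectl create clusterrole deployment-clusterrole \
  --verb=create \
  --resource=deployments,statefulsets,daemonsets \
  --dry-run=client -o yaml
```

This is handy in the exam for confirming the resource names pluralize correctly before creating anything.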

Create the namespace — the practice environment lacks it, so it has to be created here (not needed in the actual exam):
root@cwlmaster1:~# kubectl create ns app-team1                                                                                                                                                                                
namespace/app-team1 created  

Create the ServiceAccount (in the target namespace):

root@cwlmaster1:~# kubectl create serviceaccount cicd-token -n app-team1                                                                                                                                                      
serviceaccount/cicd-token created  

kubectl create serviceaccount cicd-token (the SA name) -n app-team1 (-n specifies the namespace)

Create the RoleBinding (in the target namespace):

root@cwlmaster1:~# kubectl create rolebinding cicd-token-binding -n app-team1 --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token                                                                      
rolebinding.rbac.authorization.k8s.io/cicd-token-binding created 

kubectl create rolebinding <rolebinding_name> -n app-team1 (-n specifies the namespace) --clusterrole=deployment-clusterrole (binds to the ClusterRole deployment-clusterrole) --serviceaccount=app-team1:cicd-token (binds the ServiceAccount, written as <namespace>:<sa_name>)

Verify:

root@cwlmaster1:~# kubectl describe  rolebinding cicd-token-binding -n app-team1                                                                                                                                              
Name:         cicd-token-binding                                                                                                                                                                                              
Labels:       <none>
Annotations:  <none>
Role:                                                                                                                                                                                                                         
  Kind:  ClusterRole                                                                                                                                                                                                          
  Name:  deployment-clusterrole                                                                                                                                                                                               
Subjects:                                                                                                                                                                                                                     
  Kind            Name        Namespace                                                                                                                                                                                       
  ----            ----        ---------                                                                                                                                                                                       
  ServiceAccount  cicd-token  app-team1  
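Beyond `describe`, the binding can be exercised directly with `kubectl auth can-i` by impersonating the ServiceAccount — a quick double-check not shown in the original notes:

```shell
# A ServiceAccount's full subject name is system:serviceaccount:<ns>:<name>
SUBJECT="system:serviceaccount:app-team1:cicd-token"

# Expected to answer "yes": create on deployments was granted
kubectl auth can-i create deployments --as="$SUBJECT" -n app-team1

# Expected to answer "no": only the create verb was granted
kubectl auth can-i delete deployments --as="$SUBJECT" -n app-team1
```

A "no" on the first command means either the verb/resource list or the subject spelling is wrong.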

Question 2: Mark a Node Unschedulable

On the official docs site, search for "drain node".


List the nodes:

root@cwlmaster1:~# kubectl get nodes                                                                                                                                                                                          
NAME         STATUS   ROLES                  AGE    VERSION                                                                                                                                                                   
cwlmaster1   Ready    control-plane,master   5d1h   v1.23.1                                                                                                                                                                   
cwlnode1     Ready    <none>                 5d1h   v1.23.1

Cordon the node (mark it unschedulable):

root@cwlmaster1:~# kubectl cordon cwlnode1                                                                                                                                                                                    
node/cwlnode1 cordoned                                                                                                                                                                                                        
root@cwlmaster1:~# kubectl get nodes                                                                                                                                                                                          
NAME         STATUS                     ROLES                  AGE    VERSION                                                                                                                                                 
cwlmaster1   Ready                      control-plane,master   5d1h   v1.23.1                                                                                                                                                 
cwlnode1     Ready,SchedulingDisabled   <none>                 5d1h   v1.23.1

Evict all pods from the node (here it has none of its own):

root@cwlmaster1:~# kubectl drain cwlnode1 --delete-emptydir-data --ignore-daemonsets --force                                                                                                                                  
node/cwlnode1 already cordoned                                                                                                                                                                                                
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-2w24r, kube-system/kube-proxy-ckdxs                                                                                                                         
node/cwlnode1 drained 
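Once the task has been verified (or, outside the exam, when maintenance is done), `kubectl uncordon` returns the node to service — a side note, not part of the graded task:

```shell
NODE=cwlnode1

# Re-enable scheduling; the node's STATUS returns to plain "Ready"
kubectl uncordon "$NODE"
kubectl get node "$NODE"
```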

Check which pods remain on the node (everything except the two DaemonSet-managed pods has been evicted):

root@cwlmaster1:~# kubectl get pod -n kube-system -owide                                                                                                                                                                      
NAME                                       READY   STATUS    RESTARTS       AGE    IP               NODE         NOMINATED NODE   READINESS GATES
calico-kube-controllers-677cd97c8d-7gn7p   1/1     Running   2 (176m ago)   5d1h   10.244.210.137   cwlmaster1   <none>           <none>
calico-node-2w24r                          1/1     Running   2 (176m ago)   5d1h   192.168.72.132   cwlnode1     <none>           <none>
calico-node-trgf7                          1/1     Running   2 (176m ago)   5d1h   192.168.72.131   cwlmaster1   <none>           <none>
coredns-65c54cc984-jg7hd                   1/1     Running   2 (176m ago)   5d1h   10.244.210.136   cwlmaster1   <none>           <none>
coredns-65c54cc984-mnzgk                   1/1     Running   2 (176m ago)   5d1h   10.244.210.135   cwlmaster1   <none>           <none>
etcd-cwlmaster1                            1/1     Running   3 (176m ago)   5d1h   192.168.72.131   cwlmaster1   <none>           <none>
kube-apiserver-cwlmaster1                  1/1     Running   4 (175m ago)   5d1h   192.168.72.131   cwlmaster1   <none>           <none>
kube-controller-manager-cwlmaster1         1/1     Running   3 (176m ago)   5d1h   192.168.72.131   cwlmaster1   <none>           <none>
kube-proxy-bbnmh                           1/1     Running   2 (176m ago)   5d1h   192.168.72.131   cwlmaster1   <none>           <none>
kube-proxy-ckdxs                           1/1     Running   2 (176m ago)   5d1h   192.168.72.132   cwlnode1     <none>           <none>
kube-scheduler-cwlmaster1                  1/1     Running   3 (176m ago)   5d1h   192.168.72.131   cwlmaster1   <none>           <none>

Question 3: Kubernetes Version Upgrade (Control-Plane Node)

On the official docs site, search for "kubeadm upgrade".


Set the node to unschedulable (here it has already been cordoned with kubectl cordon cwlmaster1, the same step as in Question 2):

root@cwlmaster1:~# kubectl get nodes                                                                                                                                                                                          
NAME         STATUS                     ROLES                  AGE     VERSION                                                                                                                                                
cwlmaster1   Ready,SchedulingDisabled   control-plane,master   5d18h   v1.23.1                                                                                                                                                
cwlnode1     Ready                      <none>                 5d18h   v1.23.1

Evict all pods from the node:

root@cwlmaster1:~# kubectl drain cwlmaster1 --delete-emptydir-data --ignore-daemonsets --force                                                                                                                                
node/cwlmaster1 already cordoned                                                                                                                                                                                              
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-trgf7, kube-system/kube-proxy-bbnmh                                                                                                                         
evicting pod kube-system/coredns-65c54cc984-mnzgk                                                                                                                                                                             
evicting pod kube-system/calico-kube-controllers-677cd97c8d-7gn7p                                                                                                                                                             
evicting pod kube-system/coredns-65c54cc984-jg7hd                                                                                                                                                                             
pod/calico-kube-controllers-677cd97c8d-7gn7p evicted                                                                                                                                                                          
pod/coredns-65c54cc984-jg7hd evicted                                                                                                                                                                                          
pod/coredns-65c54cc984-mnzgk evicted                                                                                                                                                                                          
node/cwlmaster1 drained   

Switch from the worker node to the master node (and to the root user):

root@cwlnode1:~# sudo ssh linux@cwlmaster1                                                                                                                                                                                    
linux@cwlmaster1's password:           

Update the apt package index:

root@cwlmaster1:~# apt update                                                                                                                                                                                                 
Hit:1 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial InRelease                                                                                                                                                   
Hit:2 https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu focal InRelease                                                                                                                                                      
Hit:3 http://security.ubuntu.com/ubuntu focal-security InRelease                                                                                                                                                              
Hit:4 http://us.archive.ubuntu.com/ubuntu focal InRelease                                                                                                                                                                     
Get:5 http://us.archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]                                                                                                                                                    
Hit:6 http://us.archive.ubuntu.com/ubuntu focal-backports InRelease                                                                                                                                                           
Get:7 http://us.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages [3,017 kB]                                                                                                                                        
Get:8 http://us.archive.ubuntu.com/ubuntu focal-updates/main i386 Packages [918 kB]                                                                                                                                           
Get:9 http://us.archive.ubuntu.com/ubuntu focal-updates/universe i386 Packages [762 kB]                                                                                                                                       
Get:10 http://us.archive.ubuntu.com/ubuntu focal-updates/universe amd64 Packages [1,140 kB]                                                                                                                                   
Fetched 5,950 kB in 5s (1,158 kB/s)                                                                                                                                                                                           
Reading package lists... Done                                                                                                                                                                                                 
Building dependency tree                                                                                                                                                                                                      
Reading state information... Done                                                                                                                                                                                             
149 packages can be upgraded. Run 'apt list --upgradable' to see them. 
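Before installing, it can help to confirm which kubeadm builds the configured repository actually offers, so the exact version string for the next step can be copied rather than guessed (a side note, not in the original transcript):

```shell
PKG=kubeadm

# List every build of the package the apt repository offers, newest first,
# e.g. "kubeadm | 1.23.2-00 | http://mirrors.ustc.edu.cn/kubernetes/apt ..."
apt-cache madison "$PKG"
```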

Install the target kubeadm version (kubeadm version shows the current version; kubeadm upgrade plan shows the highest version you can upgrade to):

root@cwlmaster1:~# apt install kubeadm=1.23.2-00                                                                                                                                                                              
Reading package lists... Done                                                                                                                                                                                                 
Building dependency tree                                                                                                                                                                                                      
Reading state information... Done                                                                                                                                                                                             
The following held packages will be changed:                                                                                                                                                                                  
  kubeadm                                                                                                                                                                                                                     
The following packages will be upgraded:                                                                                                                                                                                      
  kubeadm                                                                                                                                                                                                                     
1 upgraded, 0 newly installed, 0 to remove and 148 not upgraded.                                                                                                                                                              
Need to get 8,580 kB of archives.                                                                                                                                                                                             
After this operation, 0 B of additional disk space will be used.                                                                                                                                                              
Do you want to continue? [Y/n] y                                                                                                                                                                                              
Get:1 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial/main amd64 kubeadm amd64 1.23.2-00 [8,580 kB]                                                                                                               
Fetched 8,580 kB in 1s (7,675 kB/s)                                                                                                                                                                                           
(Reading database ... 165352 files and directories currently installed.)                                                                                                                                                      
Preparing to unpack .../kubeadm_1.23.2-00_amd64.deb ...                                                                                                                                                                       
Unpacking kubeadm (1.23.2-00) over (1.23.1-00) ...                                                                                                                                                                            
Setting up kubeadm (1.23.2-00) ... 
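The install log above notes that kubeadm was a held package. As a precaution (not shown in the original notes), the hold can be re-applied and confirmed after pinning the new version, so a routine apt upgrade cannot drift it further:

```shell
# Keep apt from upgrading kubeadm past the pinned version
apt-mark hold kubeadm

# Confirm the hold is in place; should list "kubeadm"
apt-mark showhold | grep kubeadm
```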

Apply the control-plane upgrade with kubeadm (skipping the etcd upgrade):

root@cwlmaster1:~# kubeadm  upgrade apply v1.23.2 --etcd-upgrade=false                                                                                                                                                        
[upgrade/config] Making sure the configuration is correct:                                                                                                                                                                    
[upgrade/config] Reading configuration from the cluster...                                                                                                                                                                    
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'                                                                                                            
W1210 17:01:28.111015  373281 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf                     
[preflight] Running pre-flight checks.                                                                                                                                                                                        
[upgrade] Running cluster health checks                                                                                                                                                                                       
[upgrade/version] You have chosen to change the cluster version to "v1.23.2"                                                                                                                                                  
[upgrade/versions] Cluster version: v1.23.17                                                                                                                                                                                  
[upgrade/versions] kubeadm version: v1.23.2                                                                                                                                                                                   
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y                                                                                                                                                 
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster                                                                                                                                                 
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection                                                                                                                         
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'                                                                                                                           
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.23.2"...                                                                                                                                        
Static pod: kube-apiserver-cwlmaster1 hash: dedd79e6d42ca64c20dc705303206e5c                                                                                                                                                  
Static pod: kube-controller-manager-cwlmaster1 hash: 883881d947a2bde0c9da30bd6579217a                                                                                                                                         
Static pod: kube-scheduler-cwlmaster1 hash: b09a30b10a8de3ec671d16aef8d262e0                                                                                                                                                  
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests2012242312"                                                                                                           
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade                                                                                                                                                                   
[upgrade/staticpods] Renewing apiserver certificate                                                                                                                                                                           
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate                                                                                                                                                            
[upgrade/staticpods] Renewing front-proxy-client certificate                                                                                                                                                                  
[upgrade/staticpods] Renewing apiserver-etcd-client certificate                                                                                                                                                               
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-12-10-17-01-59/kube-apiserver.yaml"               
[upgrade/staticpods] Waiting for the kubelet to restart the component                                                                                                                                                         
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)                                                                                                                 
Static pod: kube-apiserver-cwlmaster1 hash: dedd79e6d42ca64c20dc705303206e5c                                                                                                                                                  
Static pod: kube-apiserver-cwlmaster1 hash: dedd79e6d42ca64c20dc705303206e5c                                                                                                                                                  
Static pod: kube-apiserver-cwlmaster1 hash: dedd79e6d42ca64c20dc705303206e5c                                                                                                                                                  
Static pod: kube-apiserver-cwlmaster1 hash: c45f0e9bb72fb6b71106220ce38d1eca                                                                                                                                                  
[apiclient] Found 1 Pods for label selector component=kube-apiserver                                                                                                                                                          
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!                                                                                                                                                        
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade                                                                                                                                                          
[upgrade/staticpods] Renewing controller-manager.conf certificate                                                                                                                                                             
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-12-10-17-01-59/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component                                                                                                                                                         
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)                                                                                                                 
Static pod: kube-controller-manager-cwlmaster1 hash: 883881d947a2bde0c9da30bd6579217a                                                                                                                                         
Static pod: kube-controller-manager-cwlmaster1 hash: 883881d947a2bde0c9da30bd6579217a                                                                                                                                         
Static pod: kube-controller-manager-cwlmaster1 hash: 883881d947a2bde0c9da30bd6579217a                                                                                                                                         
Static pod: kube-controller-manager-cwlmaster1 hash: 883881d947a2bde0c9da30bd6579217a                                                                                                                                         
Static pod: kube-controller-manager-cwlmaster1 hash: 883881d947a2bde0c9da30bd6579217a                                                                                                                                         
Static pod: kube-controller-manager-cwlmaster1 hash: 883881d947a2bde0c9da30bd6579217a                                                                                                                                         
Static pod: kube-controller-manager-cwlmaster1 hash: 883881d947a2bde0c9da30bd6579217a                                                                                                                                         
...
Static pod: kube-controller-manager-cwlmaster1 hash: 0dc9a191f0e0a16e9ab094973e0ea8c9                                                                                                                                         
[apiclient] Found 1 Pods for label selector component=kube-controller-manager                                                                                                                                                 
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!                                                                                                                                               
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade                                                                                                                                                                   
[upgrade/staticpods] Renewing scheduler.conf certificate                                                                                                                                                                      
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-12-10-17-01-59/kube-scheduler.yaml"               
[upgrade/staticpods] Waiting for the kubelet to restart the component                                                                                                                                                         
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)                                                                                                                 
Static pod: kube-scheduler-cwlmaster1 hash: b09a30b10a8de3ec671d16aef8d262e0                                                                                                                                                  
...
Static pod: kube-scheduler-cwlmaster1 hash: 8cd0666946262656bcfe815b2d17d5a5                                                                                                                                                  
[apiclient] Found 1 Pods for label selector component=kube-scheduler                                                                                                                                                          
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!                                                                                                                                                        
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)                                                                              
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace                                                                                                                   
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster                                                                                          
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"                                                                                                                                          
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes                                                                                                                                           
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials                                                                               
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token                                                                                            
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster                                                                                                         
[addons] Applied essential addon: CoreDNS                                                                                                                                                                                     
[addons] Applied essential addon: kube-proxy                                                                                                                                                                                  
                                                                                                                                                                                                                              
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.23.2". Enjoy!                                                                                                                                                  
                                                                                                                                                                                                                              
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.   

Check the pods (the control-plane pods are now upgraded):

root@cwlmaster1:~# kubectl get pods -n kube-system                                                                                                                                                                            
NAME                                       READY   STATUS    RESTARTS        AGE                                                                                                                                              
calico-kube-controllers-677cd97c8d-gk5ds   1/1     Running   0               16m                                                                                                                                              
calico-node-2w24r                          1/1     Running   3 (7h41m ago)   5d19h                                                                                                                                            
calico-node-trgf7                          1/1     Running   3 (7h41m ago)   5d19h                                                                                                                                            
coredns-65c54cc984-9dszm                   1/1     Running   0               16m                                                                                                                                              
coredns-65c54cc984-xhb2x                   1/1     Running   0               16m                                                                                                                                              
etcd-cwlmaster1                            1/1     Running   4 (7h41m ago)   5d19h                                                                                                                                            
kube-apiserver-cwlmaster1                  1/1     Running   0               2m18s                                                                                                                                            
kube-controller-manager-cwlmaster1         1/1     Running   0               96s                                                                                                                                              
kube-proxy-h2jmp                           1/1     Running   0               61s                                                                                                                                              
kube-proxy-qjljx                           1/1     Running   0               68s                                                                                                                                              
kube-scheduler-cwlmaster1                  1/1     Running   0               81s

Upgrade kubectl and kubelet:

root@cwlmaster1:~# apt install kubectl=1.23.2-00                                                                                                                                                                              
Reading package lists... Done                                                                                                                                                                                                 
Building dependency tree... 50%                                                                                                                                                                                               
Building dependency tree                                                                                                                                                                                                      
Reading state information... Done                                                                                                                                                                                             
The following held packages will be changed:                                                                                                                                                                                  
  kubectl                                                                                                                                                                                                                     
The following packages will be upgraded:                                                                                                                                                                                      
  kubectl                                                                                                                                                                                                                     
1 upgraded, 0 newly installed, 0 to remove and 148 not upgraded.                                                                                                                                                              
Need to get 8,929 kB of archives.                                                                                                                                                                                             
After this operation, 0 B of additional disk space will be used.                                                                                                                                                              
Get:1 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial/main amd64 kubectl amd64 1.23.2-00 [8,929 kB]                                                                                                               
Fetched 8,929 kB in 1s (7,222 kB/s)                                                                                                                                                                                           
(Reading database ... 165352 files and directories currently installed.)                                                                                                                                                      
Preparing to unpack .../kubectl_1.23.2-00_amd64.deb ...                                                                                                                                                                       
Unpacking kubectl (1.23.2-00) over (1.23.1-00) ...                                                                                                                                                                            
Setting up kubectl (1.23.2-00) ... 
root@cwlmaster1:~# apt install kubelet=1.23.2-00                                                                                                                                                                              
Reading package lists... Done                                                                                                                                                                                                 
Building dependency tree                                                                                                                                                                                                      
Reading state information... Done                                                                                                                                                                                             
The following held packages will be changed:                                                                                                                                                                                  
  kubelet                                                                                                                                                                                                                     
The following packages will be upgraded:                                                                                                                                                                                      
  kubelet                                                                                                                                                                                                                     
1 upgraded, 0 newly installed, 0 to remove and 148 not upgraded.                                                                                                                                                              
Need to get 19.5 MB of archives.                                                                                                                                                                                              
After this operation, 0 B of additional disk space will be used.                                                                                                                                                              
Do you want to continue? [Y/n] y                                                                                                                                                                                              
Get:1 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial/main amd64 kubelet amd64 1.23.2-00 [19.5 MB]                                                                                                                
Fetched 19.5 MB in 2s (9,522 kB/s)                                                                                                                                                                                            
(Reading database ... 165352 files and directories currently installed.)                                                                                                                                                      
Preparing to unpack .../kubelet_1.23.2-00_amd64.deb ...                                                                                                                                                                       
Unpacking kubelet (1.23.2-00) over (1.23.1-00) ...                                                                                                                                                                            
Setting up kubelet (1.23.2-00) ... 
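The transcript installs the new packages directly; note the apt output above says the packages are held, and after the upgrade the kubelet service also needs a restart. A sketch of the full order, with the version from this transcript (the commands are only echoed here, not executed):

```shell
#!/bin/sh
# Sketch: package-upgrade order for kubelet/kubectl on a kubeadm node.
# The version is the one used in this transcript; adjust per the task.
VERSION="1.23.2-00"
STEPS="apt-mark unhold kubelet kubectl
apt-get install -y kubelet=$VERSION kubectl=$VERSION
apt-mark hold kubelet kubectl
systemctl daemon-reload
systemctl restart kubelet"
echo "$STEPS"
```

Re-holding the packages afterwards prevents an ordinary `apt upgrade` from pulling the cluster components forward unintentionally.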

Check the node versions (the control-plane node is upgraded):

root@cwlmaster1:~# kubectl get nodes                                                                                                                                                                                          
NAME         STATUS                     ROLES                  AGE     VERSION                                                                                                                                                
cwlmaster1   Ready,SchedulingDisabled   control-plane,master   5d19h   v1.23.2                                                                                                                                                
cwlnode1     Ready                      <none>                 5d19h   v1.23.1

Log out of the current user and node:

root@cwlmaster1:~# exit                                                                                                                                                                                                       
logout                                                                                                                                                                                                                        
linux@cwlmaster1:~$ exit                                                                                                                                                                                                      
logout                                                                                                                                                                                                                        
Connection to cwlmaster1 closed.                                                                                                                                                                                              
root@cwlnode1:~#   

Uncordon the control-plane node (switch back to the control-plane node's shell):

root@cwlmaster1:~# kubectl uncordon cwlmaster1                                                                                                                                                                                
node/cwlmaster1 uncordoned                                                                                                                                                                                                    
root@cwlmaster1:~# kubectl get nodes                                                                                                                                                                                          
NAME         STATUS   ROLES                  AGE     VERSION                                                                                                                                                                  
cwlmaster1   Ready    control-plane,master   5d19h   v1.23.2                                                                                                                                                                  
cwlnode1     Ready    <none>                 5d19h   v1.23.1
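For completeness, the drain/uncordon pair that brackets the upgrade can be sketched as printed commands (node name taken from this transcript; depending on the workloads, extra drain flags such as --delete-emptydir-data may be needed):

```shell
#!/bin/sh
# Sketch: the drain happens before the upgrade, the uncordon after.
NODE="cwlmaster1"
DRAIN="kubectl drain $NODE --ignore-daemonsets"
UNCORDON="kubectl uncordon $NODE"
echo "$DRAIN"
echo "$UNCORDON"
```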

Question 4: etcd backup and restore

Search the official docs for: upgrade-etcd


Set the etcdctl API version:

root@cwlmaster1:~# export ETCDCTL_API=3   

Back up to a file:

root@cwlmaster1:~# mkdir /srv/data                                                                                                                                                                                            
root@cwlmaster1:~# ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /srv/data/etcd-snapshot.db
{"level":"info","ts":1702212583.2767987,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/srv/data/etcd-snapshot.db.part"}                                                                    
{"level":"info","ts":"2023-12-10T20:49:43.283+0800","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"}                                                                                       
{"level":"info","ts":1702212583.2842011,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://127.0.0.1:2379"}                                                                                 
{"level":"info","ts":"2023-12-10T20:49:43.369+0800","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"}                                                                                          
{"level":"info","ts":1702212583.380789,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://127.0.0.1:2379","size":"4.1 MB","took":0.103864642}                                                
{"level":"info","ts":1702212583.380884,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/srv/data/etcd-snapshot.db"}                                                                                              
Snapshot saved at /srv/data/etcd-snapshot.db                                                                                                                                                                                  
root@cwlmaster1:~# ll /srv/data/                                                                                                                                                                                              
total 4044                                                                                                                                                                                                                    
drwxr-xr-x 2 root root    4096 Dec 10 20:49 ./                                                                                                                                                                                
drwxr-xr-x 3 root root    4096 Dec 10 20:49 ../                                                                                                                                                                               
-rw------- 1 root root 4128800 Dec 10 20:49 etcd-snapshot.db
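A quick way to sanity-check the saved snapshot is `etcdctl snapshot status`, which reads the file directly and needs no endpoint or TLS flags. A sketch (the command is echoed rather than executed, since it needs the snapshot file and etcdctl on the machine):

```shell
#!/bin/sh
# Sketch: verify a saved etcd snapshot; prints hash, revision, keys, size.
SNAPSHOT="/srv/data/etcd-snapshot.db"
CHECK="ETCDCTL_API=3 etcdctl snapshot status $SNAPSHOT --write-out=table"
echo "$CHECK"
```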

Restore from the file:

The command is almost the same: change save to restore. It errors here because I have already restored once, so the default data directory default.etcd already exists; in practice you point the restore at a fresh directory with --data-dir.

root@cwlmaster1:~# ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot restore /srv/data/etcd-snapshot.db
Error: data-dir "default.etcd" exists     
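The collision above is avoided by restoring into a fresh data directory with --data-dir. A sketch (the command is echoed, not executed; /var/lib/etcd-restore is a hypothetical path, and after a real restore the etcd static-pod manifest has to be pointed at the new directory):

```shell
#!/bin/sh
# Sketch: restore into a new --data-dir so no existing directory collides.
# /var/lib/etcd-restore is an illustrative path, not from the transcript.
SNAPSHOT="/srv/data/etcd-snapshot.db"
RESTORE="ETCDCTL_API=3 etcdctl snapshot restore $SNAPSHOT --data-dir=/var/lib/etcd-restore"
echo "$RESTORE"
```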

Question 5: NetworkPolicy

Search the official docs for: network-policies


Copy the example from service/networking/networkpolicy.yaml and edit it:

apiVersion: networking.k8s.io/v1                                                                                                                                                                               
kind: NetworkPolicy                                                                                                                                                                                            
metadata:                                                                                                                                                                                                      
  name: test-network-policy                                                                                                                                                                                    
  namespace: default                                                                                                                                                                                           
spec:                                                                                                                                                                                                          
  podSelector:                                                                                                                                                                                                 
    matchLabels:                                                                                                                                                                                               
      role: db                                                                                                                                                                                                 
  policyTypes:                                                                                                                                                                                                 
    - Ingress                                                                                                                                                                                                  
    - Egress                                                                                                                                                                                                   
  ingress:                                                                                                                                                                                                     
    - from:                                                                                                                                                                                                    
        - ipBlock:                                                                                                                                                                                             
            cidr: 172.17.0.0/16                                                                                                                                                                                
            except:                                                                                                                                                                                            
              - 172.17.1.0/24                                                                                                                                                                                  
        - namespaceSelector:                                                                                                                                                                                   
            matchLabels:                                                                                                                                                                                       
              project: myproject                                                                                                                                                                               
        - podSelector:                                                                                                                                                                                         
            matchLabels:                                                                                                                                                                                       
              role: frontend                                                                                                                                                                                   
      ports:                                                                                                                                                                                                   
        - protocol: TCP                                                                                                                                                                                        
          port: 6379                                                                                                                                                                                           
  egress:                                                                                                                                                                                                      
    - to:                                                                                                                                                                                                      
        - ipBlock:                                                                                                                                                                                             
            cidr: 10.0.0.0/24                                                                                                                                                                                  
      ports:                                                                                                                                                                                                   
        - protocol: TCP                                                                                                                                                                                        
          port: 5978    

Show the labels on all namespaces:

root@cwlmaster1:~# kubectl get ns --show-labels                                                                                                                                                                
NAME              STATUS   AGE     LABELS                                                                                                                                                                      
app-team1         Active   35h     kubernetes.io/metadata.name=app-team1                                                                                                                                       
default           Active   6d12h   kubernetes.io/metadata.name=default                                                                                                                                         
kube-node-lease   Active   6d12h   kubernetes.io/metadata.name=kube-node-lease                                                                                                                                 
kube-public       Active   6d12h   kubernetes.io/metadata.name=kube-public                                                                                                                                     
kube-system       Active   6d12h   kubernetes.io/metadata.name=kube-system 

Create the two namespaces (not needed in the exam):

root@cwlmaster1:~# kubectl create ns my-app                                                
namespace/my-app created                                                                   
root@cwlmaster1:~# kubectl create ns echo                                                  
namespace/echo created

Edit the file to:

apiVersion: networking.k8s.io/v1                                                           
kind: NetworkPolicy                                                                        
metadata:                                                                                  
  name: allow-port-from-namespace                                                          
  namespace: my-app                                                                        
spec:                                                                                      
  podSelector:                                                                             
    matchLabels: {}                                                                        
  policyTypes:                                                                             
    - Ingress                                                                              
  ingress:                                                                                 
    - from:                                                                                
        - namespaceSelector:                                                               
            matchLabels:                                                                   
              project: echo                                                                
      ports:                                                                               
        - protocol: TCP                                                                    
          port: 9000

Apply the file:

root@cwlmaster1:~# kubectl apply -f networkpolices.yaml                                    
networkpolicy.networking.k8s.io/allow-port-from-namespace created  
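One caveat: a freshly created namespace only carries the kubernetes.io/metadata.name label (visible in the `--show-labels` output above), so the namespaceSelector matching `project: echo` selects nothing until that label exists. Assuming you are allowed to label the source namespace:

```shell
kubectl label namespace echo project=echo
# Alternatively, select the built-in label instead of a custom one:
#   namespaceSelector:
#     matchLabels:
#       kubernetes.io/metadata.name: echo
```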

Question 6: Service (layer-4 proxy)

Search the official docs for: service, deployment


Create the Deployment resource:

apiVersion: apps/v1                                                                        
kind: Deployment                                                                           
metadata:                                                                                  
  name: front-end                                                                           
  labels:                                                                                  
    app: nginx                                                                             
spec:                                                                                      
  replicas: 3                                                                              
  selector:                                                                                
    matchLabels:                                                                           
      app: nginx                                                                           
  template:                                                                                
    metadata:                                                                              
      labels:                                                                              
        app: nginx                                                                         
    spec:                                                                                  
      containers:                                                                          
      - name: nginx                                                                        
        image: nginx:1.14.2  
        
        
root@cwlmaster1:~# kubectl apply -f deployment.yaml 
deployment.apps/front-end created

root@cwlmaster1:~# kubectl get deployment
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
front-end   3/3     3            3           64s

Edit the front-end Deployment:

root@cwlmaster1:~# kubectl edit deployment front-end

Find the container section in the file and add the following:

ports:
- name: http
  containerPort: 80

root@cwlmaster1:~# kubectl edit deployment front-end                                       
deployment.apps/front-end edited 
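The same edit can be made non-interactively; a sketch using kubectl patch (the JSON path assumes nginx is the first container in the pod template):

```shell
kubectl patch deployment front-end --type='json' \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/ports","value":[{"name":"http","containerPort":80}]}]'
```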

Describe the Deployment to check that the change took effect:

root@cwlmaster1:~# kubectl describe deploy front-end                                                                                                                                                           
Name:                   front-end                                                                                                                                                                              
Namespace:              default                                                                                                                                                                                
CreationTimestamp:      Mon, 11 Dec 2023 10:47:46 +0800                                                                                                                                                        
Labels:                 app=nginx                                                                                                                                                                              
Annotations:            deployment.kubernetes.io/revision: 2                                                                                                                                                   
Selector:               app=nginx                                                                                                                                                                              
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable                                                                                                                          
StrategyType:           RollingUpdate                                                                                                                                                                          
MinReadySeconds:        0                                                                                                                                                                                      
RollingUpdateStrategy:  25% max unavailable, 25% max surge                                                                                                                                                     
Pod Template:                                                                                                                                                                                                  
  Labels:  app=nginx                                                                                                                                                                                           
  Containers:                                                                                                                                                                                                  
   nginx:                                                                                                                                                                                                      
    Image:        nginx:1.14.2                                                                                                                                                                                 
    Port:         80/TCP                                                                                                                                                                                       
    Host Port:    0/TCP                                                                                                                                                                                        
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:                                                                                                                                                                                                    
  Type           Status  Reason                                                                                                                                                                                
  ----           ------  ------                                                                                                                                                                                
  Available      True    MinimumReplicasAvailable                                                                                                                                                              
  Progressing    True    NewReplicaSetAvailable                                                                                                                                                                
OldReplicaSets:  <none>
NewReplicaSet:   front-end-d484bcd99 (3/3 replicas created)                                                                                                                                                    
Events:                                                                                                                                                                                                        
  Type    Reason             Age    From                   Message                                                                                                                                             
  ----    ------             ----   ----                   -------                                                                                                                                             
  Normal  ScalingReplicaSet  5m58s  deployment-controller  Scaled up replica set front-end-d484bcd99 to 1                                                                                                      
  Normal  ScalingReplicaSet  5m56s  deployment-controller  Scaled down replica set front-end-67579dd966 to 2                                                                                                   
  Normal  ScalingReplicaSet  5m56s  deployment-controller  Scaled up replica set front-end-d484bcd99 to 2                                                                                                      
  Normal  ScalingReplicaSet  5m55s  deployment-controller  Scaled down replica set front-end-67579dd966 to 1                                                                                                   
  Normal  ScalingReplicaSet  5m55s  deployment-controller  Scaled up replica set front-end-d484bcd99 to 3                                                                                                      
  Normal  ScalingReplicaSet  5m54s  deployment-controller  Scaled down replica set front-end-67579dd966 to 0 

Create the Service (see kubectl expose --help for usage):

root@cwlmaster1:~# kubectl expose --help

Create the Service from the command line, specifying the port, target port, and type:

root@cwlmaster1:~# kubectl expose deployment front-end --name=front-end-svc --port=80 --target-port=http --type=NodePort
service/front-end-svc exposed   

Check the Service and its details:

root@cwlmaster1:~# kubectl get svc                                                         
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE              
front-end-svc   NodePort    10.100.233.192   <none>        80:30515/TCP   74s              
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP        7d18h   
root@cwlmaster1:~# kubectl edit svc front-end-svc                                          
Edit cancelled, no changes made.

Or create the Service from a YAML file (the selector must match the Deployment's pod label, app: nginx):

apiVersion: v1
kind: Service
metadata:
  name: front-end-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: http
      
(the warning appears because the Service already exists)
root@cwlmaster1:~# kubectl apply -f service.yaml 
Warning: resource services/front-end-svc is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
service/front-end-svc configured  
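Because a Service only routes to pods matching its selector, it is worth confirming that endpoints were actually populated; an empty ENDPOINTS column means the selector does not match the Deployment's pod labels:

```shell
kubectl get endpoints front-end-svc
# ENDPOINTS should list the three pod IPs on port 80
```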

Question 7: Ingress

Search the official docs for: ingress


Find "Default IngressClass" on the page, copy the YAML, and add a namespace (note that IngressClass is actually cluster-scoped, so the namespace field has no effect):

apiVersion: networking.k8s.io/v1                                                           
kind: IngressClass                                                                         
metadata:                                                                                  
  labels:                                                                                  
    app.kubernetes.io/component: controller                                                
  name: nginx-example                                                                      
  annotations:                                                                             
    ingressclass.kubernetes.io/is-default-class: "true"                                    
  namespace: ing-internal                                                                  
spec:                                                                                      
  controller: k8s.io/ingress-nginx 

Create the namespace (not needed in the exam):

root@cwlmaster1:~# kubectl create ns ing-internal                                          
namespace/ing-internal created 

Apply the file to create the IngressClass:

root@cwlmaster1:~# kubectl apply -f ingressclass.yaml 
ingressclass.networking.k8s.io/nginx-example unchanged

Create the Ingress:

apiVersion: networking.k8s.io/v1                                                           
kind: Ingress                                                                              
metadata:                                                                                  
  name: pong                                                                               
  annotations:                                                                             
    nginx.ingress.kubernetes.io/rewrite-target: /                                          
  namespace: ing-internal                                                                  
spec:                                                                                      
  ingressClassName: nginx-example                                                          
  rules:                                                                                   
  - http:                                                                                  
      paths:                                                                               
      - path: /hello                                                                       
        pathType: Prefix                                                                   
        backend:                                                                           
          service:                                                                         
            name: hello                                                                    
            port:                                                                          
              number: 5678
              
root@cwlmaster1:~# kubectl apply -f ingress.yaml 
ingress.networking.k8s.io/pong created

Check the assigned IP:

root@cwlmaster1:~# kubectl  get ingress -n ing-internal
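To verify the rule, assuming the hello service exists and `<INGRESS_IP>` is the address shown by the command above:

```shell
curl http://<INGRESS_IP>/hello
# expect the hello service's response if the rule and backend are working
```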

Question 8: Scaling pods with a Deployment

Run on the command line:

root@cwlmaster1:~# kubectl edit deploy front-end(deployment name)
deployment.apps/front-end edited      

Change the replicas: value under spec to 6.
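An equivalent one-liner that avoids opening the editor:

```shell
kubectl scale deployment front-end --replicas=6
```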

Check the Deployment:

root@cwlmaster1:~# kubectl get deploy                                                                                                                                                                                         
NAME        READY   UP-TO-DATE   AVAILABLE   AGE                                                                                                                                                                              
front-end   6/6     6            6           33h   

Question 9: Scheduling a Pod to a specific node

Search the official docs for: pod


Copy the template into a YAML file:

apiVersion: v1                                                                                                                                                                                                                
kind: Pod                                                                                                                                                                                                                     
metadata:                                                                                                                                                                                                                     
  name: nginx-kusc00401                                                                                                                                                                                                       
spec:                                                                                                                                                                                                                         
  containers:                                                                                                                                                                                                                 
  - name: nginx                                                                                                                                                                                                               
    image: nginx                                                                                                                                                                                                              
  nodeSelector:                                                                                                                                                                                                               
    disk: spinning  

Apply the file:

root@cwlmaster1:~# kubectl apply -f pod.yaml                                                                                                                                                                                  
pod/nginx-kusc00401 created 
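Until some node carries the requested label, the scheduler cannot place the pod and it stays Pending; the reason shows up in its events:

```shell
kubectl get pod nginx-kusc00401
kubectl describe pod nginx-kusc00401 | grep -A3 Events
# look for a FailedScheduling event mentioning the node selector
```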

Since no node in this (practice) environment has the spinning label yet, label one now:

root@cwlmaster1:~# kubectl label nodes cwlnode1 disk=spinning                                                                                                                                                                 
node/cwlnode1 labeled 

Check the pods:

root@cwlmaster1:~# kubectl get pods -owide                                                                                                                                                                                    
NAME                        READY   STATUS    RESTARTS      AGE     IP            NODE       NOMINATED NODE   READINESS GATES                                                                                                 
front-end-d484bcd99-cv6nl   1/1     Running   1 (20m ago)   4h40m   10.244.8.92   cwlnode1   <none>           <none>
front-end-d484bcd99-djx8k   1/1     Running   0             11m     10.244.8.97   cwlnode1   <none>           <none>
front-end-d484bcd99-kbtwd   1/1     Running   0             11m     10.244.8.95   cwlnode1   <none>           <none>
front-end-d484bcd99-lwmvz   1/1     Running   1 (20m ago)   4h40m   10.244.8.89   cwlnode1   <none>           <none>
front-end-d484bcd99-qsgj9   1/1     Running   1 (20m ago)   4h40m   10.244.8.94   cwlnode1   <none>           <none>
front-end-d484bcd99-rmqg9   1/1     Running   0             11m     10.244.8.96   cwlnode1   <none>           <none>
nginx-kusc00401             1/1     Running   0             3m11s   10.244.8.98   cwlnode1   <none>           <none>

Question 10: Counting Ready nodes

List the nodes in Ready state:

root@cwlmaster1:~# kubectl get nodes | grep -w 'Ready'                                                                                                                                                                        
cwlmaster1   Ready    control-plane,master   7d23h   v1.23.2                                                                                                                                                                  
cwlnode1     Ready    <none>                 7d23h   v1.23.1 

Count them:

root@cwlmaster1:~# kubectl get nodes | grep -w 'Ready' | wc -l                                                                                                                                                                
2
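The -w flag matters here: without it, `grep 'Ready'` would also match NotReady nodes. A self-contained illustration:

```shell
# -w matches 'Ready' only as a whole word, so NotReady lines are excluded
printf 'node1 Ready\nnode2 NotReady\n' | grep -w 'Ready' | wc -l   # prints 1
```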

Among the Ready nodes, find those carrying a NoSchedule taint and count them:

root@cwlmaster1:~# kubectl describe nodes cwlmaster1 cwlnode1  | grep 'Taint'| grep 'NoSchedule' 
Taints:             node-role.kubernetes.io/master:NoSchedule
root@cwlmaster1:~# kubectl describe nodes cwlmaster1 cwlnode1  | grep 'Taint'| grep 'NoSchedule' | wc -l
1

Note: write the names of all Ready nodes after `nodes` and before the pipe.

Write the resulting number to the file:

root@cwlmaster1:~# mkdir /opt/KUSC00402                                                                                                                                                                                       
root@cwlmaster1:~# kubectl describe nodes cwlmaster1 cwlnode1  | grep 'Taint'| grep 'NoSchedule' | wc -l  >> /opt/KUSC00402/kusc00402.txt                                                                                     
root@cwlmaster1:~# cat /opt/KUSC00402/kusc00402.txt                                                                                                                                                                           
1    
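The task is usually read as "how many nodes are Ready and still schedulable", i.e. the Ready count minus the NoSchedule-tainted count; with the numbers above:

```shell
ready=2     # Ready nodes counted earlier
tainted=1   # of those, nodes carrying a NoSchedule taint
echo $((ready - tainted))   # prints 1, the value written to the file
```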

Question 11: A Pod with multiple containers

Search the official docs for: pod


Copy the Pod template and edit it:

apiVersion: v1                                                                                                                                                                                                                
kind: Pod                                                                                                                                                                                                                     
metadata:                                                                                                                                                                                                                     
  name: kucc1                                                                                                                                                                                                                 
spec:                                                                                                                                                                                                                         
  containers:                                                                                                                                                                                                                 
  - name: nginx                                                                                                                                                                                                               
    image: nginx                                                                                                                                                                                                             
  - name: redis                                                                                                                                                                                                               
    image: redis                                                                                                                                                                                                              
  - name: memcached                                                                                                                                                                                                           
    image: memcached                                                                                                                                                                                                          
  - name: consul                                                                                                                                                                                                              
    image: consul 

Create the Pod, then check it:

root@cwlmaster1:~# kubectl apply -f pod-kucc.yaml                                                                                                                                                                             
pod/kucc1 created                                                                                                                                                                                                             
root@cwlmaster1:~# kubectl get pods                                                                                                                                                                                           
NAME                        READY   STATUS              RESTARTS      AGE                                                                                                                                                     
front-end-d484bcd99-cv6nl   1/1     Running             1 (44m ago)   5h4m                                                                                                                                                    
front-end-d484bcd99-djx8k   1/1     Running             0             36m                                                                                                                                                     
front-end-d484bcd99-kbtwd   1/1     Running             0             36m                                                                                                                                                     
front-end-d484bcd99-lwmvz   1/1     Running             1 (44m ago)   5h4m                                                                                                                                                    
front-end-d484bcd99-qsgj9   1/1     Running             1 (44m ago)   5h4m                                                                                                                                                    
front-end-d484bcd99-rmqg9   1/1     Running             0             36m                                                                                                                                                     
kucc1                       0/4     ContainerCreating   0             5s                                                                                                                                                      
nginx-kusc00401             1/1     Running             0             27m  
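Once all four images are pulled, the READY column for kucc1 should read 4/4. A plain-text sketch of checking that column (the sample line is assumed, not taken from a live cluster):

```shell
# Assumed sample line from `kubectl get pods` after the Pod is fully up
line='kucc1   4/4   Running   0   30s'
# Field 2 is READY; print a confirmation when all four containers are up
echo "$line" | awk '$2 == "4/4" {print "all containers ready"}'
```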

Question 12: PersistentVolume (persistent storage)

Search the official docs for: pv


Find the "Persistent Volumes" page, copy the template, and modify it:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/srv/app-config"

Apply the file and check the PV:

root@cwlmaster1:~# kubectl apply -f pv.yaml                                                                            
persistentvolume/app-config created                                                                                    
root@cwlmaster1:~# kubectl get pv                                                                                      
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE                
app-config   2Gi        RWX            Retain           Available                                   6s    
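For reference only (not required by the task): a claim with a matching access mode and an explicitly empty storageClassName could bind this Available PV statically. A sketch with an assumed claim name:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-config-claim        # hypothetical name, not part of the task
spec:
  accessModes:
    - ReadWriteMany             # must match the PV's access mode
  storageClassName: ""          # empty string disables dynamic provisioning
  resources:
    requests:
      storage: 2Gi
```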

Question 13: Mounting storage with a PVC

Search the official docs for: pvc


Find the PersistentVolumeClaim example, copy the template, and modify it to:

apiVersion: v1                                                                                                         
kind: PersistentVolumeClaim                                                                                            
metadata:                                                                                                              
  name: pv-volume                                                                                                      
spec:                                                                                                                  
  storageClassName: mcsi-hostpath-sc                                                                                   
  accessModes:                                                                                                         
    - ReadWriteOnce                                                                                                    
  resources:                                                                                                           
    requests:                                                                                                          
      storage: 10Mi   
      
      
root@cwlmaster1:~# kubectl apply -f pvc.yaml 
persistentvolumeclaim/pv-volume created

Find the Pod template and modify it:

apiVersion: v1                                                                                                         
kind: Pod                                                                                                              
metadata:                                                                                                              
  name: web-server                                                                                                     
spec:                                                                                                                  
  volumes:                                                                                                             
    - name: pv-volume                                                                                                  
      persistentVolumeClaim:                                                                                           
        claimName: pv-volume                                                                                           
  containers:                                                                                                          
    - name: nginx                                                                                                      
      image: nginx                                                                                                     
      volumeMounts:                                                                                                    
        - mountPath: "/usr/share/nginx/html"                                                                           
          name: pv-volume  
          
          
          
Note: the PVC stays Pending because the referenced StorageClass was never created in this lab.
root@cwlmaster1:~# kubectl apply -f pvc-pod.yaml 
pod/web-server created
root@cwlmaster1:~# kubectl get pvc
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS       AGE
pv-volume   Pending                                      mcsi-hostpath-sc   4m47s
root@cwlmaster1:~# kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
web-server   0/1     Pending   0          24s

Change the PVC request to 70Mi (in this lab the change cannot be saved, because the backing storage was never created):
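The only field that changes is `spec.resources.requests.storage`; the edited section of the claim would look like this:

```yaml
spec:
  resources:
    requests:
      storage: 70Mi    # raised from 10Mi per the task
```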


Question 14: Viewing Pod logs

The environment has just been restored from an image, so there are hardly any Pods:

root@cwlmaster1:~# kubectl get pods                                                                                    
NAME         READY   STATUS    RESTARTS   AGE                                                                          
web-server   0/1     Pending   0          6m49s                                                                        
root@cwlmaster1:~# kubectl logs web-server | grep unable-access-website >> /opt/KUTR00101/web-server 

kubectl logs <pod_name> | grep <filter_string> >> <target_file_path>
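The filtering pattern itself can be tried locally on a hypothetical log file (the file name and contents are made up for illustration):

```shell
# Build a tiny sample log, then filter it the same way as the kubectl logs pipe
printf 'GET /index 200 ok\nerror: unable-access-website\n' > /tmp/web-server.log
grep unable-access-website /tmp/web-server.log > /tmp/filtered.txt
cat /tmp/filtered.txt    # only the matching line survives
```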

Question 15: Sidecar container

Search the official docs for: logging


Find the Pod example and modify it to:

apiVersion: v1                                                                                                                                                                                                 
kind: Pod                                                                                                                                                                                                      
metadata:                                                                                                                                                                                                      
  name: legacy-app                                                                                                                                                                                             
spec:                                                                                                                                                                                                          
  containers:                                                                                                                                                                                                  
  - name: count                                                                                                                                                                                                
    image: busybox                                                                                                                                                                                             
    args:                                                                                                                                                                                                      
    - /bin/sh                                                                                                                                                                                                  
    - -c                                                                                                                                                                                                       
    - >                                                                                                                                                                                                        
      i=0;                                                                                                                                                                                                     
      while true;                                                                                                                                                                                              
      do                                                                                                                                                                                                       
        echo "$(date) INFO $i" >> /var/log/legacy-app.log;                                                                                                                                                     
        i=$((i+1));                                                                                                                                                                                            
        sleep 1;                                                                                                                                                                                               
      done                                                                                                                                                                                                     
                                                                                                                                                                                                               
    volumeMounts:                                                                                                                                                                                              
    - name: logs                                                                                                                                                                                               
      mountPath: /var/log                                                                                                                                                                                      
  - name: busybox                                                                                                                                                                                              
    image: busybox                                                                                                                                                                                             
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/legacy-app.log']
    volumeMounts:                                                                                                                                                                                              
    - name: logs                                                                                                                                                                                               
      mountPath: /var/log                                                                                                                                                                                      
  volumes:                                                                                                                                                                                                     
  - name: logs                                                                                                                                                                                                 
    emptyDir: {}     
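The sidecar's command is `tail -n+1 -f`: `-n +1` means "start at line 1" (i.e. print the whole file from the beginning), and `-f` keeps following new lines. The `-n +N` semantics can be demonstrated locally (sample file assumed; `-f` is dropped so the commands exit):

```shell
# Sample log standing in for /var/log/legacy-app.log
printf 'line1\nline2\nline3\n' > /tmp/legacy-demo.log
tail -n +1 /tmp/legacy-demo.log   # start at line 1: prints all three lines
tail -n +3 /tmp/legacy-demo.log   # start at line 3: prints only line3
```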

Create the Pod and check it:

root@cwlmaster1:~# kubectl apply -f legacy-app.yaml                                                                                                                                                            
pod/legacy-app created                                                                                                                                                                                         
root@cwlmaster1:~# kubectl get pod                                                                                                                                                                             
NAME         READY   STATUS              RESTARTS   AGE                                                                                                                                                        
legacy-app   0/2     ContainerCreating   0          4s                                                                                                                                                         
web-server   0/1     Pending             0          147m                                                                                                                                                       
root@cwlmaster1:~# kubectl get pod                                                                                                                                                                             
NAME         READY   STATUS    RESTARTS   AGE                                                                                                                                                                  
legacy-app   2/2     Running   0          15s                                                                                                                                                                  
web-server   0/1     Pending   0          147m  

Question 16: Checking Pod CPU usage

Targeted query (what the exam asks for):

root@cwlmaster1:~# kubectl top pod -l  name=cpu-loader --sort-by=cpu -A                                                                
No resources found  

View the current Pods:

root@cwlmaster1:~# kubectl top pod   --sort-by=cpu -A                                                                                  
NAMESPACE       NAME                                        CPU(cores)   MEMORY(bytes)                                                 
kube-system     kube-apiserver-cwlmaster1                   35m          233Mi                                                         
kube-system     calico-node-trgf7                           27m          56Mi                                                          
kube-system     calico-node-2w24r                           23m          55Mi                                                          
kube-system     kube-controller-manager-cwlmaster1          12m          55Mi                                                          
kube-system     etcd-cwlmaster1                             11m          50Mi                                                          
ingress-nginx   ingress-nginx-controller-7cd558c647-mqwbc   2m           90Mi                                                          
default         legacy-app                                  2m           0Mi                                                           
kube-system     kube-scheduler-cwlmaster1                   2m           25Mi                                                          
kube-system     kube-proxy-ckdxs                            1m           11Mi                                                          
kube-system     coredns-65c54cc984-mnzgk                    1m           14Mi                                                          
kube-system     kube-proxy-bbnmh                            1m           16Mi                                                          
kube-system     coredns-65c54cc984-jg7hd                    1m           15Mi                                                          
kube-system     calico-kube-controllers-677cd97c8d-7gn7p    1m           12Mi                                                          
kube-system     metrics-server-875fcb674-jgmxt              1m           16Mi

Write the name of the top CPU consumer to the file:

root@cwlmaster1:~# mkdir /opt/KUTR00104                                                                                                
root@cwlmaster1:~# echo 'legacy-app' >> /opt/KUTR00104/kutr00104.txt  
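`--sort-by=cpu` orders the rows by CPU usage, highest first; the effect is analogous to a numeric reverse sort on the CPU column (sample rows taken from the output above):

```shell
# Plain-text analogue of --sort-by=cpu on three sample rows
printf 'kube-apiserver 35m\ncoredns 1m\ncalico-node 27m\n' | sort -k2,2 -rn
# kube-apiserver (35m) comes out first
```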

Question 17: Troubleshooting a failed node

SSH from node1 to node0 (in the exam environment) and run the following commands:

linux@cwlmaster1:~$ sudo -i                                                                                                             
root@cwlmaster1:~# 
root@cwlmaster1:~# systemctl status kubelet.service  
root@cwlmaster1:~# systemctl restart kubelet.service  
root@cwlmaster1:~# systemctl enable  kubelet 

Exit back to node1:

root@cwlmaster1:~# exit                                                                                                                 
logout                                                                                                                                  
linux@cwlmaster1:~$ exit                                                                                                                
logout                                                                                                                                  
Connection to cwlmaster1 closed.                                                                                                        
root@cwlnode1:~#