Autoscaling Applications on Kubernetes (k8s)

Published: 2023-11-20 16:48:56  Author: MaxBruce

Original (Chinese): https://zhuanlan.zhihu.com/p/649662103

Background:

This article covers Kubernetes' ability to automatically scale pods, and shows how to use the Horizontal Pod Autoscaler (HPA) to autoscale an application. A load generator is used to simulate a high-load scenario.

An HPA object monitors the resource usage of pods and scales the application by comparing actual resource usage against the desired target. For example, suppose an application starts with two pods whose average CPU utilization is 100%, but we want each pod to run at 20% CPU. The HPA calculates that 2 * (100 / 20) = 10 pods are needed to reach the target utilization, adjusts the desired replica count of the application to 10, and Kubernetes schedules the 8 additional pods.
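
For reference, this is the general formula the HPA controller uses (as documented in the Kubernetes autoscaling docs), applied to the example above:

desiredReplicas = ceil( currentReplicas * currentMetricValue / desiredMetricValue )
                = ceil( 2 * 100% / 20% ) = 10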

How does the Horizontal Pod Autoscaler work?

  1. cAdvisor, a container resource usage monitoring service, runs inside the kubelet on every node.
  2. CPU utilization metrics are collected by cAdvisor and aggregated by Heapster.
  3. Heapster is a cluster-wide service that monitors and aggregates metrics.
  4. Heapster queries the metrics from each cAdvisor instance.
  5. Once the HPA is deployed, its controller continuously watches the metrics reported by Heapster and scales the workload as required.
  6. Based on the configured metrics, the HPA decides whether scaling is needed.

Installation steps

Install and configure the Kubernetes Metrics Server

First, a metrics server must be installed and running on the Kubernetes cluster; the Horizontal Pod Autoscaler collects metrics through its API. A custom metrics source can also be used instead, such as Prometheus or Grafana.

2. Open the firewall port

The metrics server uses port 4443 to communicate with the API server, so this port must be opened on every node in the cluster.

firewall-cmd --add-port=4443/tcp --permanent
firewall-cmd --reload
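
To confirm that the port is now open, the firewalld configuration can be listed (optional check):

firewall-cmd --list-ports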

3. Deploy metrics-server

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

The downloaded components.yaml file needs a few simple modifications:

...
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      hostNetwork: true   ## add this line
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls   ## add this line
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        image: k8s.gcr.io/metrics-server/metrics-server:v0.4.2
...

The settings added to components.yaml are:

--kubelet-preferred-address-types
--kubelet-insecure-tls
hostNetwork: true (enables host network mode)

Deploy components.yaml:

kubectl apply -f components.yaml

Note: after deployment, if further changes are needed, the deployment can be edited with:

kubectl edit deployment metrics-server -n kube-system
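
As an alternative to interactive editing, the same changes can be applied non-interactively with a JSON patch (a sketch; verify the container index and paths against your manifest before running it):

kubectl patch deployment metrics-server -n kube-system --type=json \
  -p '[{"op":"add","path":"/spec/template/spec/hostNetwork","value":true},
       {"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'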

Once the deployment completes, metrics-server is listed:

kubectl get deployment -n kube-system
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
coredns          1/1     1            1           9d
metrics-server   1/1     1            1           4d12h

Verify the API service status:

kubectl get apiservice v1beta1.metrics.k8s.io -o yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apiregistration.k8s.io/v1","kind":"APIService","metadata":{"annotations":{},"labels":{"k8s-app":"metrics-server"},"name":"v1beta1.metrics.k8s.io"},"spec":{"group":"metrics.k8s.io","groupPriorityMinimum":100,"insecureSkipTLSVerify":true,"service":{"name":"metrics-server","namespace":"kube-system"},"version":"v1beta1","versionPriority":100}}
  creationTimestamp: "2023-08-07T22:34:41Z"
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
  resourceVersion: "66790"
  uid: 15fb943f-eaeb-4017-83d2-8532ca14f1a6
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
    port: 443
  version: v1beta1
  versionPriority: 100
status:
  conditions:
  - lastTransitionTime: "2023-08-10T22:37:07Z"
    message: all checks passed
    reason: Passed
    status: "True"
    type: Available

Now the CPU and memory usage of each node can be viewed:

kubectl top nodes
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
minikube       58m          2%     1712Mi          80%
minikube-m02   78m          3%     1099Mi          51%
minikube-m03   93m          4%     1636Mi          77%
minikube-m04   67m          3%     929Mi           43%
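
Per-pod metrics are available the same way, for example:

kubectl top pods --all-namespaces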

Examples

Example: CPU-based autoscaling

With metrics-server configured and running, the next step is to create a Horizontal Pod Autoscaler (HPA) to autoscale the application.

1. First, create a deployment:

cat nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    type: dev
  name: nginx-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      type: dev
  template:
    metadata:
      labels:
        type: dev
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m

Create the deployment with kubectl:

kubectl create -f nginx-deploy.yaml

List the available deployments:

kubectl get deployment
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   1/1     1            1           20s

2. Create the Horizontal Pod Autoscaler:

kubectl autoscale deployment nginx-deploy --cpu-percent=10 --min=1 --max=5

The command above scales the deployment between 1 and 5 replicas whenever average CPU utilization exceeds the 10% target. Alternatively, the HPA can be created from the following YAML file:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deploy
spec:
  minReplicas: 1
  maxReplicas: 5
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deploy ## Name of the deployment
  targetCPUUtilizationPercentage: 10
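
If the YAML form is used, save it to a file (the file name here is assumed) and apply it:

kubectl apply -f nginx-hpa.yaml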

Check the HPA status:

kubectl get hpa
NAME           REFERENCE                  TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx-deploy   Deployment/nginx-deploy    <unknown>/10%    1         5         0          8s

Initially, the TARGETS column shows unknown and REPLICAS is 0: the controller polls on a 30-second cycle by default, so there can be a delay before real metrics are collected. The default HPA sync interval can be adjusted with the --horizontal-pod-autoscaler-sync-period flag.
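
This flag belongs to the kube-controller-manager. On a kubeadm-style cluster it could be set in the static pod manifest, a sketch assuming the default manifest path:

# excerpt from /etc/kubernetes/manifests/kube-controller-manager.yaml (kubeadm default path)
...
    - kube-controller-manager
    - --horizontal-pod-autoscaler-sync-period=15s   ## example value; add this line
...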

After a few dozen seconds, the current metric is displayed; the TARGETS column shows (current/target), and 0% means the current load is zero.

kubectl get hpa
NAME           REFERENCE                  TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx-deploy   Deployment/nginx-deploy    0%/10%    1         5         1          4d12h

There is currently one nginx pod replica:

kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
nginx-deploy-9fcbb6c6c-c89j5   1/1     Running   0          8m19s

Verify that Kubernetes autoscales the application. Manually generate load to raise CPU utilization: connect to the pod above and run the dd command to create CPU pressure.

root@nginx1-deploy-b8dc4bb8c-s9fgs: dd if=/dev/zero of=/dev/null
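
One way to open a shell in the pod (using the pod name from the kubectl get pods output above) is, for example:

kubectl exec -it nginx-deploy-9fcbb6c6c-c89j5 -- /bin/bash
# inside the container, generate CPU load:
dd if=/dev/zero of=/dev/null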

As expected, the CPU load increases:

kubectl get hpa
NAME           REFERENCE                 TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
nginx-deploy   Deployment/nginx-deploy   250%/10%   1         5         4          9m57s

The HPA automatically creates four more pods, because the maximum pod count is 5:

kubectl get pods
NAME                            READY   STATUS              RESTARTS   AGE
nginx-deploy-7979f78fb5-2qhtc   1/1     Running             0          11m
nginx-deploy-7979f78fb5-49v9f   1/1     Running             0          8s
nginx-deploy-7979f78fb5-7hdpt   0/1     ContainerCreating   0          8s
nginx-deploy-7979f78fb5-bpcq5   0/1     ContainerCreating   0          8s
nginx-deploy-7979f78fb5-qw7qb   0/1     ContainerCreating   0          8s

Once the dd command is interrupted, the CPU load drops again:

kubectl get hpa
NAME           REFERENCE                 TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx-deploy   Deployment/nginx-deploy   0%/10%    1         5         5          10m

After the CPU load has dropped, the replica count falls back to 1 about five minutes later:
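
The roughly five-minute delay comes from the HPA's downscale stabilization window (300 seconds by default; it can be changed cluster-wide with --horizontal-pod-autoscaler-downscale-stabilization). With the autoscaling/v2 API it can also be tuned per HPA, for example (a sketch):

spec:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 60   ## example value; the default is 300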

kubectl get pods
NAME                            READY   STATUS        RESTARTS   AGE
nginx-deploy-7979f78fb5-49v9f   1/1     Running       0          5m51s
nginx-deploy-7979f78fb5-7hdpt   0/1     Terminating   0          5m51s
nginx-deploy-7979f78fb5-qw7qb   0/1     Terminating   0          5m34s
nginx-deploy-7979f78fb5-bpcq5   0/1     Terminating   0          5m49s

The scaling events can be inspected with kubectl describe:

kubectl describe hpa nginx-deploy
...
Deployment pods:                                       1 current / 1 desired
Conditions:
  Type            Status  Reason            Message
  ----            ------  ------            -------
  AbleToScale     True    ReadyForNewScale  recommended size matches current size
  ScalingActive   True    ValidMetricFound  the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
  ScalingLimited  True    TooFewReplicas    the desired replica count is less than the minimum replica count
Events:
  Type    Reason             Age   From                       Message
  ----    ------             ----  ----                       -------
  Normal  SuccessfulRescale  31m   horizontal-pod-autoscaler  New size: 4; reason: cpu resource utilization (percentage of request) above target
  Normal  SuccessfulRescale  31m   horizontal-pod-autoscaler  New size: 5; reason: cpu resource utilization (percentage of request) above target
  Normal  SuccessfulRescale  25m   horizontal-pod-autoscaler  New size: 1; reason: All metrics below target

Example: memory-based autoscaling

This example autoscales the application based on memory utilization.

1. Create the deployment. Reusing nginx-deploy from the previous example, change the CPU limits to memory limits:

kubectl edit deployment nginx-deploy
...
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources:
          limits:
            memory: 524Mi  # Maximum amount of RAM in a container
          requests:
            memory: 256Mi  # Minimum amount of RAM available in a Pod
...

As soon as the change is saved, the current replica is terminated and a pod with the new configuration is created:

kubectl get pods
NAME                            READY   STATUS              RESTARTS   AGE
nginx-deploy-7979f78fb5-49v9f   1/1     Running             0          45m
nginx-deploy-ffd7f4f57-6jnd2    0/1     ContainerCreating   0          5s

2. Create the Horizontal Pod Autoscaler. Unlike CPU-based autoscaling, a memory-based autoscaler can only be created from a YAML or JSON file. The YAML below uses a memory threshold of 30%: if memory utilization exceeds 30%, additional nginx replicas are created, up to a maximum of 5.

Note: autoscaling/v1 does not support the metrics field, so autoscaling/v2beta1 is used here.

cat hpa-memory.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-memory
spec:
  maxReplicas: 5
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deploy
  metrics:
  - type: Resource
    resource:
      name: memory
      targetAverageUtilization: 30
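
Note that autoscaling/v2beta1 has been removed in recent Kubernetes releases (autoscaling/v2 is GA since 1.23). On a newer cluster, the equivalent manifest would look roughly like this sketch:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-memory
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deploy
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 30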

Create the HPA with kubectl:

kubectl create -f hpa-memory.yaml
horizontalpodautoscaler.autoscaling/nginx-memory created

List the available HPA resources:

kubectl get hpa
NAME           REFERENCE                 TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx-deploy   Deployment/nginx-deploy   0%/10%    1         5         1          69m
nginx-memory   Deployment/nginx-deploy   0%/30%    1         5         1          17s

Verify that Kubernetes autoscales the application. Generate memory pressure inside the pod with the following command:

root@nginx-deploy-ffd7f4f57-6jnd2:/# cat <(yes | tr \\n x | head -c $((1024*1024*100))) <(sleep 120) | grep n

After running this command, memory usage increases: it feeds roughly 100 MiB of data containing no newlines into grep, which buffers all of it in memory until the pipe closes after about two minutes. The change in memory usage can be watched with:

kubectl get hpa -w
NAME           REFERENCE                  TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx-deploy   Deployment/nginx-deploy    0%/10%    1         5         1          4d13h
nginx-memory   Deployment/nginx1-deploy   1%/30%    1         5         1          178m

Check the status of the pods:

kubectl get pods

Once memory usage is below the threshold, the replica count is reduced again after about five minutes:

kubectl get hpa
NAME           REFERENCE                 TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx-deploy   Deployment/nginx-deploy   0%/10%    1         5         1          94m
nginx-memory   Deployment/nginx-deploy   1%/30%    1         5         1          25m

Note: enabling CPU-based and memory-based autoscaling for the same workload at the same time is not recommended, because the two can conflict with each other; choose CPU or memory depending on your application.