Release Pipeline

Published 2023-04-26 17:43:36 · Author: lavender2020

Requirements

  • When a code PR is merged into an environment branch, the code is built and the image is pushed to the image registry automatically and efficiently

  • When a new image is pushed to the image registry, the change is detected and the image of the service running in the k8s cluster is updated

  • The continuous deployment process should be smooth and under control

  • Message notification of release results

Tool selection

  • Github Webhook / Argo Workflow

  • Helm

  • Argo Event

  • Argo Workflow

  • Argo CD

  • Argo Rollout

Compared with the current GitHub Actions + Argo CD setup

  • Provides a good UI for the dev team

  • More configuration, fewer scripts

  • Better access management
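
Access control in Argo CD is configured through its RBAC ConfigMap. A minimal sketch of what this could look like; the role name `role:dev-team` and the SSO group `dev-team-sso-group` are hypothetical, not part of the pipeline described here:

```yaml
# Sketch: fine-grained access via the argocd-rbac-cm ConfigMap.
# "role:dev-team" and "dev-team-sso-group" are illustrative names.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.csv: |
    # dev team may view and sync applications in the datacollect project
    p, role:dev-team, applications, get, datacollect/*, allow
    p, role:dev-team, applications, sync, datacollect/*, allow
    # map an SSO group to the role
    g, dev-team-sso-group, role:dev-team
```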

GitHub Branches && Environments

 

| GitHub Branch | Environment |
| ------------- | ----------- |
| development   | dev         |
| sandbox       | sandbox     |
| release       | integration |
| main          | live        |

 

Release Process

1. Code PR

A developer merges a PR into the development branch (the PR is merged and closed).

 

2. Argo Events

Argo Events supports many event sources, such as GitHub PR events.

Argo Events listens for GitHub PR events on the environment branch (development).

The GitHub PR webhook triggers an Argo Events event.

Argo Events also provides many sensors and triggers (sensors parameterize the input of triggers).

Argo Events triggers an Argo Workflow with parameters.

 

3. Argo Workflow

Argo Workflows is designed for scheduling workflows on Kubernetes.

In the Argo Workflow template, we configure different types of tasks (application code clone, package build, application helm repo update).

All these tasks can run in parallel or sequentially depending on your design; you can also define the task execution order (dependencies).

We define our CI/CD pipeline tasks to run sequentially (application code clone → package build → unit test → image build and push to ECR → update service helm repo).

Argo Workflows runs a different workflow for each service, based on the input parameters from the Argo Events sensor and trigger.

We can also define a task that sends a Slack message with the final release result, depending on the Argo CD release result (fetched from the Argo CD API).
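
The Slack notification can be wired as a workflow exit handler, so it runs whether the pipeline succeeded or failed. A hedged sketch, assuming the webhook URL is stored in a secret named `slack-secret` (that name and key are assumptions):

```yaml
# Sketch: an onExit handler that posts the final workflow status to Slack.
# The secret "slack-secret" / key "webhook-url" are illustrative names.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: cicd-with-notify-
spec:
  entrypoint: main
  onExit: notify-slack            # runs after the main DAG, on success or failure
  templates:
    - name: main
      container:
        image: alpine:3.17
        command: [sh, -c, "echo running pipeline"]
    - name: notify-slack
      container:
        image: curlimages/curl:8.1.2
        command: [sh, -c]
        args:
          - >-
            curl -s -X POST -H 'Content-Type: application/json'
            -d "{\"text\": \"Release {{workflow.name}} finished with status {{workflow.status}}\"}"
            "$SLACK_WEBHOOK_URL"
        env:
          - name: SLACK_WEBHOOK_URL
            valueFrom:
              secretKeyRef:
                name: slack-secret
                key: webhook-url
```

`{{workflow.status}}` is only resolved in exit handlers, which is why the notification lives in `onExit` rather than in the main DAG.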

 

4. Argo CD

Argo CD is a Kubernetes-native continuous deployment (CD) tool. Unlike external CD tools that only enable push-based deployments, Argo CD can pull updated configuration from Git repositories and deploy it directly to Kubernetes resources, automatically synchronizing the application state to the current version of the declarative configuration.

After the Argo CD application is configured (auto-sync with the service helm repository), the Argo CD server watches the service helm repo. If the service helm repository changes, the Argo CD server automatically syncs the application state to the current version of the declarative configuration.

So after the Argo Workflow changes the service helm repo, Argo CD syncs the desired state to the current state, which triggers the k8s API server to update the service state to the desired state (updating the service pods to the new image tag).

At the same time, Argo CD tracks the health checks of the service pods: if the new version's pods fail their health checks, the old version's pods are not destroyed; if the health checks succeed, the new version's pods complete the rolling update.
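
Auto-sync behavior can be tuned beyond a bare `automated: {}`. A sketch of the `syncPolicy` options an Application can carry (the values here are illustrative, not what this pipeline uses):

```yaml
# Sketch: syncPolicy fragment of an Argo CD Application.
syncPolicy:
  automated:
    prune: true      # delete cluster resources that were removed from the helm repo
    selfHeal: true   # revert manual changes made directly in the cluster
  syncOptions:
    - CreateNamespace=true   # create the destination namespace if it does not exist
```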

 

5. Argo Rollout

Argo Rollouts can be considered an extension of the Kubernetes Deployment (Argo CD uses Deployments). It makes up for the limited release strategies of Deployments and supports canary and blue-green release policies.

We just need to change the Deployment to a Rollout to get all of the functions above.
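
One way to make that switch without copying the pod template is Argo Rollouts' `workloadRef`, which points the Rollout at an existing Deployment. A sketch reusing the vbsiv names from the samples below (assumes Argo Rollouts v1.0+):

```yaml
# Sketch: a Rollout that borrows the pod template of an existing Deployment.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: vbsiv
spec:
  replicas: 5
  selector:
    matchLabels:
      app: vbsiv
  workloadRef:                 # reuse the pod template of the existing Deployment
    apiVersion: apps/v1
    kind: Deployment
    name: vbsiv
  strategy:
    canary:
      steps:
        - setWeight: 20
        - pause: {}
```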

 

Samples

 

Argo Events event source (GitHub webhook)

apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: github
spec:
  service:
    ports:
      - name: vbsiv
        port: 12000
        targetPort: 12000
  github:
    vbsiv:
      repositories:
        - owner: argoproj
          names:
            - argo-events
            - argo-workflows
      # Github application auth. Instead of using personal token `apiToken` use app PEM            
      # Github will send events to following port and endpoint
      webhook:
        # endpoint to listen to events on
        endpoint: /push
        # port to run internal HTTP server on
        port: "12000"
        # HTTP request method to allow. In this case, only POST requests are accepted
        method: POST
        # url the event-source will use to register at Github.
        # This url must be reachable from outside the cluster.
        # The name for the service is in `<event-source-name>-eventsource-svc` format.
        # You will need to create an Ingress or Openshift Route for the event-source service so that it can be reached from GitHub.
        url: http://url-that-is-reachable-from-GitHub
      # type of events to listen to.
      # following listens to everything, hence *
      # You can find more info on https://developer.github.com/v3/activity/events/types/
      events:
        - "*"

      # apiToken refers to K8s secret that stores the github api token
      # if apiToken is provided controller will create webhook on GitHub repo
      # +optional
      apiToken:
        # Name of the K8s secret that contains the access token
        name: github-access
        # Key within the K8s secret whose corresponding value (must be base64 encoded) is access token
        key: token

      # type of the connection between event-source and Github.
      # You should set it to false to avoid man-in-the-middle and other attacks.
      insecure: true
      # Determines if notifications are sent when the webhook is triggered
      active: true
      # The media type used to serialize the payloads
      contentType: json

The GitHub event source creates a service listening on port 12000 and exposes the webhook URL publicly so GitHub can reach it. The webhook service listens for all GitHub POST request events. When a developer creates or closes a PR on the service project, GitHub sends a POST request to the Argo Events webhook service (the URL exposed to GitHub), which publishes a GitHub event to the event bus.

 

Argo EventBus (the message queue used by Argo Events: event sources act as producers, sensor triggers as consumers)

apiVersion: argoproj.io/v1alpha1
kind: EventBus
metadata:
  name: default
spec:
  nats:
    native:
      # Optional, defaults to 3. If it is < 3, set it to 3, that is the minimal requirement.
      replicas: 3
      # Optional, authen strategy, "none" or "token", defaults to "none"
      auth: token
      containerTemplate:
        resources:
          requests:
            cpu: "10m"
      metricsContainerTemplate:
        resources:
          requests:
            cpu: "10m"
      antiAffinity: false
      persistence:
        storageClassName: gp2
        accessMode: ReadWriteOnce
        volumeSize: 10Gi

 

Argo Events sensor (triggers the Argo Workflow)

apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: github
spec:
  template:
    serviceAccountName: cicd
  dependencies:
    - name: test-dep
      eventSourceName: github
      eventName: github
      filters:
        data:
          # Name of the event that triggered the delivery: [pull_request, push, yadayadayada]
          # https://docs.github.com/en/developers/webhooks-and-events/webhook-events-and-payloads
          - path: header.X-Github-Event
            type: string
            value:
              - pull_request
          - path: body.action
            type: string
            value:
              - opened
              - edited
              - reopened
              - synchronize
          - path: body.pull_request.state
            type: string
            value:
              - open
          - path: body.pull_request.base.ref
            type: string
            value:
              - development
  triggers:
    - template:
        name: workflow-template-cicd-vbsiv
        k8s:
          operation: create
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                name: workflow-template-cicd-vbsiv-
              spec:
                entrypoint: main
                arguments:
                  parameters:
                    - name: pr-title
                    - name: pr-number
                    - name: short-sha
                templates:
                  - name: main
                    inputs:
                      parameters:
                        - name: pr-title
                        - name: pr-number
                        - name: short-sha
                    container:
                      image: docker/whalesay:latest
                      command: [cowsay]
                      args: ["{{inputs.parameters.pr-title}}"]
          parameters:
            - src:
                dependencyName: test-dep
                dataKey: body.pull_request.title
              dest: spec.arguments.parameters.0.value
            - src:
                dependencyName: test-dep
                dataKey: body.pull_request.number
              dest: spec.arguments.parameters.1.value
            - src:
                dependencyName: test-dep
                dataTemplate: "{{ .Input.body.pull_request.head.sha | substr 0 7 }}"
              dest: spec.arguments.parameters.2.value
            # Append pull request number and short sha to dynamically assign workflow name <github-21500-2c065a>
            - src:
                dependencyName: test-dep
                dataTemplate: "{{ .Input.body.pull_request.number }}-{{ .Input.body.pull_request.head.sha | substr 0 7 }}"
              dest: metadata.name
              operation: append
      retryStrategy:
        steps: 3

The event sensor consumes events with the given event name (github) from the event bus. The sensor builds the trigger's parameters from the GitHub POST request payload and then runs the trigger, which creates an Argo Workflow from the inline template.

 

Argo Workflow Template (service vbsiv)

metadata:
  name: workflow-template-cicd-vbsiv # Argo event trigger will execute the workflow from the name
  generateName: workflow-template-cicd-vbsiv-
  namespace: argocd
spec:
  entrypoint: main                   # argo workflow entrypoint
  arguments:                         # parameters used by the template tasks, sent from the argo sensor
      parameters:
        - name: repo
          value: 'https://github.com/Appen/sid_verification_backend_api.git'
        - name: branch
          value: integration
        - name: registry
          value: 411719562396.dkr.ecr.us-east-1.amazonaws.com/datacollect-vbsiv
        - name: namespace
          value: argocd
        - name: servicerepopath
          value: ''
        - name: deploynamespace
          value: datacollect
        - name: servicename
          value: vbsiv
        - name: argoproject
          value: datacollect
        - name: helmrepo
          value: 'https://github.com/Appen-International/datacollect-helm.git'
        - name: helmbranch
          value: integration
        - name: helmrepopath
          value: app
        - name: helmservicename
          value: vbsiv
  serviceAccountName: datacollect
  volumeClaimTemplates:
      - metadata:
          name: work
          creationTimestamp: null
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 20Gi
        status: {}

  templates:
    - name: main
      inputs: {}
      outputs: {}
      metadata: {}
      dag:
        tasks:
          - name: clone
            template: clone
            arguments:
              parameters:
                - name: repo
                  value: '{{workflow.parameters.repo}}'
                - name: branch
                  value: '{{workflow.parameters.branch}}'
          - name: build
            template: build
            arguments:
              parameters:
                - name: registry
                  value: '{{workflow.parameters.registry}}'
                - name: servicerepopath
                  value: '{{workflow.parameters.servicerepopath}}'
                - name: servicename
                  value: '{{workflow.parameters.servicename}}'
                - name: repo
                  value: '{{workflow.parameters.repo}}'
            dependencies:
              - clone
          - name: updateHelmRepo
            template: updateHelmRepo
            arguments:
              parameters:
                - name: helmrepo
                  value: '{{workflow.parameters.helmrepo}}'
                - name: helmbranch
                  value: '{{workflow.parameters.helmbranch}}'
                - name: helmrepopath
                  value: '{{workflow.parameters.helmrepopath}}'
                - name: servicename
                  value: '{{workflow.parameters.servicename}}'
                - name: helmservicename
                  value: '{{workflow.parameters.helmservicename}}'
                - name: argoproject
                  value: '{{workflow.parameters.argoproject}}'
                - name: servicerepopath
                  value: '{{workflow.parameters.servicerepopath}}'
            dependencies:
              - build
    - name: clone
      inputs:
        parameters:
          - name: repo
          - name: branch
      outputs: {}
      metadata: {}
      script:
        name: ''
        image: 'alpine/git:v2.30.1'
        command:
          - sh
        workingDir: /work
        env:
          - name: GITHUB_TOKEN
            valueFrom:
              secretKeyRef:
                name: github-secret
                key: GITHUB_TOKEN
                optional: false
        resources: {}
        volumeMounts:
          - name: work
            mountPath: /work
        source: >
          authurl=`echo '{{inputs.parameters.repo}}' |awk -F '//' '{print $2}'`
          && \

          fullpath="https://${GITHUB_TOKEN}:x-oauth-basic@${authurl}" && \

          echo && \

          echo "Repo url: '{{inputs.parameters.repo}}'" && \

          echo "Repo: $authurl" && \

          echo "Branch: '{{inputs.parameters.branch}}'" && \

          echo && \

          echo "git config:" && \

          git config --global user.email "xgeng@appen.com" && \

          git config --global user.name "Xudong Geng" && \

          git config --global credential.helper store && \

          echo "clone project:" && \

          git clone --branch '{{inputs.parameters.branch}}' --single-branch
          $fullpath && \

          ls -la ./*
    - name: build
      inputs:
        parameters:
          - name: registry
          - name: servicerepopath
          - name: servicename
          - name: repo
      outputs:
        parameters:
          - name: revision
            valueFrom:
              path: /mainctrfs/work/revision.txt
            globalName: servicerevision
      metadata: {}
      script:
        name: ''
        image: 'docker:20.10.21-git'
        command:
          - sh
        workingDir: /work/
        env:
          - name: GITHUB_USER
            valueFrom:
              secretKeyRef:
                name: github-secret
                key: GITHUB_USER
                optional: false
          - name: GITHUB_TOKEN
            valueFrom:
              secretKeyRef:
                name: github-secret
                key: GITHUB_TOKEN
                optional: false
          - name: DOCKER_HOST
            value: 127.0.0.1
        resources: {}
        volumeMounts:
          - name: work
            mountPath: /work
        securityContext:
          privileged: true
        source: >
          apk add alpine-sdk git libffi-dev openssh openssl-dev py3-pip
          python3-dev && \

          pip3 install awscli && \


          reponame=`echo '{{inputs.parameters.repo}}' |awk -F '/' '{print $NF}'
          |awk -F '\.git' '{print $1}'`

          cd "$reponame" && \


          if [ "{{inputs.parameters.servicerepopath}}" != "" ]

          then
            cd {{inputs.parameters.servicerepopath}} 
          fi && \


          TAG=`git rev-parse HEAD` && \

          ecrregion=`echo '{{inputs.parameters.registry}}' |awk -F '.' '{print
          $4}'` && \

          ecrrepo=`echo '{{inputs.parameters.registry}}' |awk -F '/' '{print
          $1}'` && \

          mkdir -p /mainctrfs/work/ && \

          echo "$TAG" > /mainctrfs/work/revision.txt && \


          echo "=======================INFO=================================" &&
          \

          echo "Revision: $TAG" && \

          echo -n "AWS-cli version:" && aws --version && \

          echo "Service repo path: '{{inputs.parameters.servicerepopath}}'" && \

          echo -n "Service work directory:" && pwd && \

          echo "Directory content:" && ls
          /work/'{{inputs.parameters.servicerepopath}}' && \

          echo -n "Tag info: " && cat /mainctrfs/work/revision.txt && \

          echo "ECR repo: $ecrrepo" && \

          echo "ECR region: $ecrregion" && \

          echo "=======================END==================================" &&
          \


          echo && \

          aws ecr get-login-password --region "$ecrregion" | docker login
          --username AWS --password-stdin "$ecrrepo" && \

          docker build -t '{{inputs.parameters.registry}}':"$TAG" -f Dockerfile
          . && \

          docker push '{{inputs.parameters.registry}}':"$TAG"
      sidecars:
        - name: dind
          image: 'docker:19.03.13-dind'
          command:
            - dockerd-entrypoint.sh
          env:
            - name: DOCKER_TLS_CERTDIR
          resources: {}
          securityContext:
            privileged: true
          mirrorVolumeMounts: true
    - name: updateHelmRepo
      inputs:
        parameters:
          - name: helmrepo
          - name: helmbranch
          - name: argoproject
          - name: helmrepopath
          - name: servicename
          - name: helmservicename
          - name: servicerepopath
          - name: servicerevision
            value: '{{workflow.outputs.parameters.servicerevision}}'
      outputs: {}
      metadata: {}
      script:
        name: ''
        image: 'docker:20.10.21-git'
        command:
          - sh
        workingDir: /work/
        env:
          - name: GITHUB_TOKEN
            valueFrom:
              secretKeyRef:
                name: github-secret
                key: GITHUB_TOKEN
                optional: false
        resources: {}
        volumeMounts:
          - name: work
            mountPath: /work
        source: >
          apk add alpine-sdk git libffi-dev openssh openssl-dev py3-pip
          python3-dev && \

          pip3 install awscli && \


          helmauthurl=`echo '{{inputs.parameters.helmrepo}}' | awk -F '//'
          '{print $2}'` && \

          helmfullpath="https://${GITHUB_TOKEN}:x-oauth-basic@${helmauthurl}" &&
          \

          revision='{{inputs.parameters.servicerevision}}' && \

          reponame=`echo '{{inputs.parameters.helmrepo}}' | awk -F '/' '{print
          $NF}' | awk -F '\.git' '{print $1}'` && \


          for i in $(seq 1 40); do echo -n "#"; done && echo && \

          echo -n "Current dir:" && pwd && \

          echo "Current files:" && ls ./*  && \

          echo "Helm repo: $helmauthurl" && \

          echo "Helm branch: '{{inputs.parameters.helmbranch}}'" && \

          echo "Revision: $revision" && \

          echo "Repo name: $reponame" && \

          for i in $(seq 1 40); do echo -n "#"; done && echo && \


          git config --global user.email "xgeng@appen.com" && \

          git config --global user.name "Xudong Geng" && \

          git config --global credential.helper store && \

          git clone --branch '{{inputs.parameters.helmbranch}}' --single-branch
          $helmfullpath && \


          cd $reponame && \

          if test -z '{{inputs.parameters.helmrepopath}}'

          then
            valuespath={{inputs.parameters.helmservicename}}
          else
            valuespath={{inputs.parameters.helmrepopath}}/{{inputs.parameters.helmservicename}}
          fi


          for yamlfile in `ls $valuespath/values*.yaml`

          do
            sed -i "s@tag: \"\([0-9a-zA-Z_]\{1,\}\)\"@tag: \""$revision"\"@g" $yamlfile
          done && \

          git add . && git commit -a -m "update service helm repo revision." &&
          \

          git push origin HEAD

The Argo Workflow template creates an Argo Workflow consisting of several sequential tasks (clone the application repository → build the package and image and push to ECR → update the helm repo).

 

Argo CD Application

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: vbsiv-service
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: datacollect
  source:
    repoURL: https://github.com/Appen-International/datacollect-helm
    path: app/vbsiv
    targetRevision: development
  destination:
    server: https://kubernetes.default.svc
    namespace: datacollect
  syncPolicy:
    automated: {}

Argo CD watches the service helm repository (https://github.com/Appen-International/datacollect-helm) at path “app/vbsiv“ on branch “development“. Any update to the service helm repo causes Argo CD (a k8s controller) to reconcile the service state to the desired state (the new version). We only need to update the service image tag, and Argo CD rolls out the new tag to the destination namespace of the k8s cluster.

 

Argo Rollout

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  labels:
    app: vbsiv
  name: vbsiv
spec:
  replicas: 5
  selector:
    matchLabels:
      app: vbsiv
  strategy:
    canary:
      steps:
      - setWeight: 20
      - pause: {}
      - setWeight: 40
      - pause: {duration: 10m}
      - setWeight: 60
      - pause: {duration: 10m}
      - setWeight: 80
      - pause: {duration: 10m}
  template:
    metadata:
      labels:
        app: vbsiv
    spec:
      containers:
      - image: vbsiv:v1
        name: vbsiv-service

Argo Rollouts supports both rolling updates and advanced rollouts. The sample above is configured as a canary rollout.

The release is divided into eight steps. A pause step without a duration stays paused until triggered by an external event, such as an automated tool or a user manually running promote.

The first step sets the weight to 20%. Since there are five replicas, 20% means only one replica is upgraded, followed by 40%, 60%, 80%, and so on.

After step 1, one replica runs the new version. Because we have not yet promoted past the step-2 pause, the Service does not explicitly split traffic, and approximately 20% of the traffic is forwarded to the new version.

If canaryService and stableService are specified in .spec.strategy, traffic is split after the upgrade: the canaryService forwards traffic only to the new version, while the stableService forwards traffic only to the old version. This is done by changing the Services' selectors: the controller automatically adds a pod-template hash to both Services. If you continue to promote after this step, the new version reaches 40%, and so on. Thus, we can use Rollouts to progressively release our services by defining a canary policy.
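
The strategy fragment with explicit traffic splitting could look like the sketch below; the two Service names are hypothetical, and both Services must already exist:

```yaml
# Sketch: canary strategy with explicit canary/stable Services.
spec:
  strategy:
    canary:
      canaryService: vbsiv-canary   # selector rewritten to match only new-version pods
      stableService: vbsiv-stable   # selector rewritten to match only old-version pods
      steps:
        - setWeight: 20
        - pause: {}
```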