azure-devops | aks | helm

Azure DevOps - CI/CD Pipeline involving Helm 3, ACR & AKS

Set up Continuous Integration in Azure DevOps that pushes a Docker image and a Helm 3 chart to ACR, and a Continuous Deployment pipeline that deploys the chart to AKS.

Abhith Rajan · August 18, 2020 · 9 min read

Azure is my favorite cloud provider. We use Azure for most of our infra & services. Our code goes to Azure DevOps, we use Azure Container Registry (ACR) to host our Docker container images, and our Kubernetes clusters run in Azure Kubernetes Service (AKS).

We configured the CI/CD pipelines in Azure DevOps. In my case, we have a monorepo that contains several ASP.NET Core microservices, and the folder structure, which we inherited from eShopOnContainers, looks like the one below.

```
- build
  - azure-devops
    - common
      - ci-steps-template.yml
      - ci-vars-template.yml
    - project-one
      - ci-pipeline.yml
    - project-two
- deploy
  - azure-devops
    - common
      - cd-steps-template.yml
      - cd-steps-template-prod.yml
      - cd-vars-template.yml
    - project-one
      - cd-pipeline.yml
    - project-two
  - k8s
    - helm
      - project-one
      - project-two
- src
  - Services
    - Project-One
    - Project-Two
```

One of the great articles that helped me with the initial setup of the CI/CD is linked below. It is kind of outdated now since it uses az acr helm commands, which were deprecated later, but it is still worth reading, so definitely check it out.

👉 Tutorial: Using Azure DevOps to setup a CI/CD pipeline and deploy to Kubernetes

CI Pipeline

The CI pipeline does the following:

- Builds a Docker image and pushes it to ACR
- Builds the Helm chart and pushes it to ACR

Prerequisites

- A Helm chart for your project. Here my chart directory is located at deploy > k8s > helm. To create a new chart for your project, refer to Helm Create.
- acr-connection-name: an ACR service connection in Azure DevOps. You can add it under Azure DevOps > Project > Project Settings > Service Connections.

I stored the ACR credentials in an Azure DevOps variable group (acr-variable-group).

| Name | Value |
| --- | --- |
| registryName | Your ACR name |
| registryLogin | ACR login |
| registryPassword | ACR password |

Variable group definition

Common

ci-vars-template.yml

```yaml
parameters:
  projectName: ""
  dockerRegistryServiceConnectionName: ""
  dockerfile: ""
  buildContext: ""

variables:
  helmVersion: 3.2.3
  HELM_EXPERIMENTAL_OCI: 1
  registryServerName: "$(registryName).azurecr.io"
  dockerRegistryServiceConnectionName: ${{ parameters.dockerRegistryServiceConnectionName }}
  dockerfile: ${{ parameters.dockerfile }}
  buildContext: ${{ parameters.buildContext }}
  projectName: ${{ parameters.projectName }}
  imageName: ${{ parameters.projectName }}
  imageTag: $(build.sourceBranchName)
  helmChartVersion: $(build.sourceBranchName)
  helmfrom: $(Build.SourcesDirectory)/deploy/k8s/helm
  helmto: $(Build.ArtifactStagingDirectory)/deploy/k8s/helm
```

A few things to note here:

- HELM_EXPERIMENTAL_OCI enables OCI support in the Helm 3 client. At the time of writing, this support is experimental.
- Using build.sourceBranchName as the image tag and chart version is handy if you follow Gitflow (which we do) or a similar git branching convention, so each release tag (e.g., refs/tags/project-one/2.2.6) generates a Docker image and a Helm chart with the same version.
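As a quick illustration of that convention (the tag value below is just an example): Azure DevOps sets Build.SourceBranchName to the last path segment of the triggering ref, which is what ends up as both the image tag and the chart version.

```shell
# Illustration only: Build.SourceBranchName is the final path segment
# of the ref that triggered the build (example release tag shown).
ref="refs/tags/project-one/2.2.6"

# Shell equivalent of what Azure DevOps exposes as Build.SourceBranchName
version="$(basename "$ref")"

echo "$version"   # 2.2.6
```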

ci-steps-template.yml

```yaml
steps:
  - task: Docker@2
    displayName: Build and push an image to container registry
    inputs:
      command: buildAndPush
      repository: $(imageName)
      dockerfile: $(dockerfile)
      containerRegistry: $(dockerRegistryServiceConnectionName)
      buildContext: $(buildContext)
      tags: |
        $(imageTag)

  - task: HelmInstaller@1
    displayName: "install helm"
    inputs:
      helmVersionToInstall: $(helmVersion)
  - bash: |
      echo $(registryPassword) | helm registry login $(registryName).azurecr.io --username $(registryLogin) --password-stdin
      cd deploy/k8s/helm/
      helm chart save $(helm package --app-version $(imageTag) --version $(helmChartVersion) ./$(projectName) | grep -o '/.*.tgz') $(registryName).azurecr.io/charts/$(projectName):$(imageTag)
      helm chart push $(registryName).azurecr.io/charts/$(projectName):$(imageTag)
      echo $(jq -n --arg version "$(helmChartVersion)" '{helmChartVersion: $version}') > $(build.artifactStagingDirectory)/variables.json
    failOnStderr: true
    displayName: "helm package"
  - task: CopyFiles@2
    inputs:
      sourceFolder: $(helmfrom)
      targetFolder: $(helmto)
  - publish: $(build.artifactStagingDirectory)
    artifact: build-artifact
```

We moved the CI pipeline steps to a common template file, ci-steps-template.yml, so that we can reuse them in other pipelines as well. The steps:

1. Build and push the Docker image.

2. Install the Helm client.

3. Run a series of scripts which:

   - Authenticate to ACR.
   - Create the Helm chart and push it to ACR.
   - Create variables.json, which contains the newly created Helm chart version. We will use it to fetch the right chart version during CD.

4. Copy some additional files to the artifact, which we can use to override Helm chart values.
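The variables.json hand-off can be sketched in isolation like this (paths shortened here; the real pipeline writes to $(build.artifactStagingDirectory) and reads from the downloaded artifact):

```shell
# CI side: write the chart version into variables.json with jq.
helmChartVersion="2.2.6"   # in the pipeline this comes from $(helmChartVersion)
jq -n --arg version "$helmChartVersion" '{helmChartVersion: $version}' > variables.json

# CD side: read the version back, exactly as the deploy step does.
jq -r .helmChartVersion variables.json   # prints 2.2.6
```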

ci-pipeline.yml

```yaml
trigger:
  branches:
    include:
      - refs/tags/project-one/*
  paths:
    include:
      - src/Services/ProjectOne/*

pr: none

pool:
  vmImage: "ubuntu-latest"

variables:
  - group: acr-variable-group
  - template: ../common/ci-vars-template.yml
    parameters:
      projectName: "project-one"
      dockerRegistryServiceConnectionName: "acr-connection-name"
      dockerfile: "src/Services/Project-One/Dockerfile"
      buildContext: "$(System.DefaultWorkingDirectory)"

steps:
  - template: ../common/ci-steps-template.yml
```

If everything went well, you will have two repositories under your ACR:

- project-one, which contains the Docker image
- charts/project-one for the Helm chart

CD Pipeline

The CD pipeline installs the Helm chart on AKS. The CD pipeline stage requires the following details:

| Name | Value |
| --- | --- |
| aks | AKS name |
| rg | AKS resource group |
| aksSpTenantId | Subscription tenant id |
| aksSpId | Service principal id |
| aksSpSecret | Service principal password |

I stored these credentials in another variable group named aks-variable-group.

Helpful commands

Service principal credentials

Create a new service principal aks-name-deploy with:

```shell
az ad sp create-for-rbac -n aks-name-deploy --scopes aks-resource-id --role "Azure Kubernetes Service Cluster User Role" --query password -o tsv
```

where aks-resource-id is the output of:

```shell
az aks show -n $aks -g $rg --query id -o tsv
```

The above command will output the service principal password (aksSpSecret).

To get the service principal id (aksSpId):

```shell
az ad sp show --id http://aks-name-deploy --query appId -o tsv
```

We also need to attach the ACR to the AKS cluster so that AKS can pull our private Docker images from our ACR.

Attach ACR with AKS

```shell
az aks update -g $rg -n $aks --attach-acr acr-resource-id
```

where acr-resource-id is the output of:

```shell
az acr show -n $registryName -g acr-resource-group-name --query id -o tsv
```

Get Azure Tenant Id

To get the tenant id (aksSpTenantId):

```shell
az account show --query tenantId -o tsv
```

Now let's explore the pipeline YAML files.

Common

cd-vars-template.yml

```yaml
parameters:
  projectName: ""

variables:
  helmVersion: 3.2.3
  HELM_EXPERIMENTAL_OCI: 1
  registryServerName: "$(registryName).azurecr.io"
  projectName: ${{ parameters.projectName }}
```

cd-steps-template.yml

```yaml
steps:
  - checkout: none
  - task: HelmInstaller@1
    displayName: "install helm"
    inputs:
      helmVersionToInstall: $(helmVersion)
  - download: ci-pipeline
    artifact: build-artifact
  - bash: |
      az login \
        --service-principal \
        -u $(aksSpId) \
        -p '$(aksSpSecret)' \
        --tenant $(aksSpTenantId)
      az aks get-credentials \
        -n $(aks) \
        -g $(rg)
      echo $(registryPassword) | helm registry login $(registryServerName) --username $(registryLogin) --password-stdin
      helmChartVersion=$(jq .helmChartVersion $(pipeline.workspace)/ci-pipeline/build-artifact/variables.json -r)
      helm chart pull $(registryServerName)/charts/$(projectName):$helmChartVersion
      helm chart export $(registryServerName)/charts/$(projectName):$helmChartVersion --destination $(pipeline.workspace)/install
      helm upgrade \
        --namespace $(k8sNamespace) \
        --create-namespace \
        --install \
        --wait \
        --version $helmChartVersion \
        --set image.repository=$(registryServerName)/$(projectName) \
        -f $(pipeline.workspace)/ci-pipeline/build-artifact/deploy/k8s/helm/app.yaml \
        -f $(pipeline.workspace)/ci-pipeline/build-artifact/deploy/k8s/helm/inf.yaml \
        $(projectName) \
        $(pipeline.workspace)/install/$(projectName)
    failOnStderr: true
    displayName: "deploy helm chart"
```

The common CD steps include a series of scripts which:

- Authenticate to Azure using the service principal credentials
- Set the specified AKS cluster as the context
- Authenticate to ACR with the ACR credentials (the same credentials we used in the CI pipeline, defined in acr-variable-group)
- Extract the Helm chart version that needs to be installed
- Pull the Helm chart and install (or upgrade) it. Here we override the chart's image repository to point to our ACR repository, along with some additional common values (app.yaml & inf.yaml).
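One detail worth knowing about those -f flags: Helm merges value files left to right, so later files override earlier ones where they overlap (inf.yaml wins over app.yaml). A rough sketch of that merge semantics, using jq on hypothetical JSON values rather than Helm itself:

```shell
# Hypothetical value files, just to show that the later file wins.
printf '{"replicaCount": 1, "ingress": {"enabled": false}}' > app.json
printf '{"ingress": {"enabled": true}}' > inf.json

# jq's '*' does a recursive merge with the right-hand side taking
# precedence -- the same precedence helm applies across -f files.
jq -s -c '.[0] * .[1]' app.json inf.json
# {"replicaCount":1,"ingress":{"enabled":true}}
```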

cd-pipeline.yml

```yaml
trigger: none
pr: none

# define variables: registryName, registryLogin and registryPassword in the Azure pipeline UI definition
variables:
  - group: acr-variable-group
  - template: ../common/cd-vars-template.yml
    parameters:
      projectName: "project-one"
  - name: k8sNamespace
    value: myteam

resources:
  pipelines:
    - pipeline: ci-pipeline
      source: "project-one-ci"
      trigger:
        enabled: true
        branches:
          include:
            - refs/tags/project-one/*

# define 5 variables: aks, rg, aksSpId, aksSpSecret and aksSpTenantId in the Azure pipeline UI definition
stages:
  - stage: test
    displayName: test
    jobs:
      - deployment: test
        variables:
          - group: aks-variable-group
        displayName: deploy helm chart into AKS
        pool:
          vmImage: ubuntu-latest
        environment: test-$(projectName)
        strategy:
          runOnce:
            deploy:
              steps:
                - template: ../common/cd-steps-template.yml
  - stage: production
    displayName: production
    jobs:
      - deployment: production
        variables:
          - group: aks-prod-variable-group
        displayName: deploy helm chart into AKS
        pool:
          vmImage: ubuntu-latest
        environment: production-$(projectName)
        strategy:
          runOnce:
            deploy:
              steps:
                - template: ../common/cd-steps-template-prod.yml
```

In the CD pipeline above, I have defined two stages, one for TEST and one for PROD. The main difference between them is the variable group used: aks-variable-group has the TEST cluster values and, you guessed right, aks-prod-variable-group has the PROD cluster values. The difference between cd-steps-template.yml and cd-steps-template-prod.yml is that the prod file has some additional chart value overrides specific to our PRODUCTION environment.

cd-steps-template-prod.yml

```yaml
steps:
  - checkout: none
  - task: HelmInstaller@1
    displayName: "install helm"
    inputs:
      helmVersionToInstall: $(helmVersion)
  - download: ci-pipeline
    artifact: build-artifact
  - bash: |
      az login \
        --service-principal \
        -u $(aksSpId) \
        -p '$(aksSpSecret)' \
        --tenant $(aksSpTenantId)
      az aks get-credentials \
        -n $(aks) \
        -g $(rg)
      echo $(registryPassword) | helm registry login $(registryServerName) --username $(registryLogin) --password-stdin
      helmChartVersion=$(jq .helmChartVersion $(pipeline.workspace)/ci-pipeline/build-artifact/variables.json -r)
      helm chart pull $(registryServerName)/charts/$(projectName):$helmChartVersion
      helm chart export $(registryServerName)/charts/$(projectName):$helmChartVersion --destination $(pipeline.workspace)/install
      helm upgrade \
        --namespace $(k8sNamespace) \
        --create-namespace \
        --install \
        --wait \
        --version $helmChartVersion \
        --set image.repository=$(registryServerName)/$(projectName) \
        -f $(pipeline.workspace)/ci-pipeline/build-artifact/deploy/k8s/helm/app.yaml \
        -f $(pipeline.workspace)/ci-pipeline/build-artifact/deploy/k8s/helm/inf.yaml \
        -f $(pipeline.workspace)/ci-pipeline/build-artifact/deploy/k8s/helm/inf-prod.yaml \
        -f $(pipeline.workspace)/ci-pipeline/build-artifact/deploy/k8s/helm/$(projectName)/values-prod.yaml \
        $(projectName) \
        $(pipeline.workspace)/install/$(projectName)
    failOnStderr: true
    displayName: "deploy helm chart"
```

A Few More Notes

- The CD pipeline is also YAML-based (you're gonna like it), so create it like a regular pipeline (not as a RELEASE) in Azure DevOps, and choose cd-pipeline.yml after opting to create the pipeline from an Existing Azure Pipelines YAML file.
- Once you create the CD pipeline, check the Environments under Azure DevOps Pipelines. There will be two environments as per the above example, test-project-one and production-project-one. Inside each, you can configure approvals and more for the respective CD stages.

A sample reference source code is also pushed here.

If any part of this article is a grey area for you, feel free to shoot a question in the comments below 👇 and I will try to shed some light on it.

Written by Abhith Rajan
Abhith Rajan is an aspiring software engineer with more than 8 years of experience and a proven track record of delivering technology-based products and services.