Deploying Kubernetes applications using Helm 3 and Jenkins

Deploying Jenkins CI/CD pipelines to a Kubernetes cluster has many advantages: the build system scales up and down easily, it is easy to run tasks in parallel, custom Docker images can be used for Jenkins agents, etc. This is especially attractive for cloud-native applications, as both the Jenkins pipelines and the application itself use the same set of technologies. To illustrate the simplicity and power of such a setup, this article walks through a simple Jenkins pipeline that deploys a Kubernetes application with the help of Helm.

Jenkins installation

To install Jenkins on Kubernetes, refer to the instructions of your particular cloud provider. For installation on GKE (Google Kubernetes Engine) there is a Helm chart which installs Jenkins along with the GKE plugin, providing additional functionality (easy Docker builds, simplified access to Google Cloud Storage, cross-cluster deployments, etc.). To use kubectl and Helm after installation, only one additional step may be required: create an additional Kubernetes service account and bind it to the cluster-admin cluster role:

# Create the service account (in the namespace where the Jenkins agent pods run; the default namespace is assumed here):
kubectl create serviceaccount helm
# And bind it to the cluster-admin cluster role so that it can make changes:
kubectl create clusterrolebinding helm-admin-binding --clusterrole=cluster-admin --serviceaccount=default:helm
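
For reference, installing Jenkins itself with the community Helm chart might look roughly like this (the chart repository and release name below are assumptions; the GKE marketplace chart differs in its details):

# Add the community Jenkins chart repository and install Jenkins:
helm repo add jenkins https://charts.jenkins.io
helm repo update
helm install jenkins jenkins/jenkins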

Deployment using Helm

After installation, one may deploy applications using kubectl and Helm pretty easily:

    stages {
        stage("deploy") {
            agent {
                kubernetes {
                    cloud 'kubernetes'
                    label 'cd'
                    yamlFile 'jenkins/python-cd.yaml'
                }
            }

            steps {
                container('cd') {
                    dir("helm") {
                        sh "echo 'Simple kubectl install!'"
                        sh "curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl"
                        sh "chmod +x ./kubectl"
                        sh "mv ./kubectl /usr/local/bin/kubectl"

                        sh "echo 'Simple helm install!'"
                        sh "wget https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gz"
                        sh "tar zxfv helm-v3.2.4-linux-amd64.tar.gz"
                        sh "cp linux-amd64/helm ."

                        sh "echo 'upgrade app!'"
                        sh "./helm upgrade --install --wait app ./app"
                    }
                }
            }
        }
    }

Here, a separate Docker image is used for deployment (the pod configuration is in jenkins/python-cd.yaml). kubectl and Helm are installed first; later they could be baked into the Docker image itself. Once they are available, the application is deployed with the helm upgrade --install --wait app ./app command, and the cluster can additionally be configured using kubectl commands. In a nutshell, this is really only a few lines of code!
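
Once the stage has run, the release can be inspected and, if necessary, rolled back with standard Helm 3 commands (the release name app matches the pipeline above):

# List releases and show the status of the app release:
./helm list
./helm status app
# Roll back to the previous revision if the new one misbehaves:
./helm rollback app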

The Helm command may fail if the default service account doesn't provide enough privileges. To be able to use Helm and kubectl, just run the Jenkins agent pod using the previously created service account helm:

apiVersion: v1
kind: Pod
metadata:
  name: python-cd
  labels:
    app: app-cd
spec:
  serviceAccountName: helm
  containers:
  - image: python
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: cd
  restartPolicy: Always
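
Whether the binding actually works can be checked with kubectl auth can-i, impersonating the service account (the default namespace is assumed here):

# Should print "yes" if the cluster role binding is in place:
kubectl auth can-i create deployments --as=system:serviceaccount:default:helm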

In the Jenkins pipeline above, the application is deployed under the assumption that its Docker image has been built elsewhere. In a real production pipeline, the Docker image will most likely be created in the same Jenkinsfile. When running the cluster in GKE, you can take advantage of the Jenkins GKE plugin, saving a snapshot of the application sources in a GCS bucket and building the Docker image using Kaniko:

stage("Build") {
    agent {
        kubernetes {
            cloud 'kubernetes'
            label 'python-cd'
            yamlFile 'jenkins/python-cd.yaml'
        }
    }
    steps {
        container('python') {
            dir("webapp") {
                // archive the build context for kaniko
                sh "tar -zcvf /tmp/$BUILD_CONTEXT ."
                step([$class: 'ClassicUploadStep',
                      credentialsId: env.JENK_INT_IT_CRED_ID,
                      bucket: "gs://${BUILD_CONTEXT_BUCKET}",
                      pattern: env.BUILD_CONTEXT])
            }
        }
    }
}
stage("Publish Image") {
    agent {
        kubernetes {
            cloud 'kubernetes'
            label 'kaniko-pod'
            yamlFile 'jenkins/kaniko-pod.yaml'
        }
    }

    environment {
        PATH = "/busybox:/kaniko:$PATH"
    }
    steps {
        container(name: 'kaniko', shell: '/busybox/sh') {
            dir("webapp") {
                sh '''#!/busybox/sh
                /kaniko/executor -f `pwd`/Dockerfile -c `pwd` --context="gs://${BUILD_CONTEXT_BUCKET}/${BUILD_CONTEXT}" --destination="${GCR_IMAGE}"
                '''
            }
        }
    }
}
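
The jenkins/kaniko-pod.yaml file is not shown above; a minimal sketch of what it might contain follows (this mirrors the python-cd pod and is an assumption, not the original file):

apiVersion: v1
kind: Pod
metadata:
  name: kaniko-pod
  labels:
    app: kaniko
spec:
  containers:
  - name: kaniko
    # the :debug tag of the executor image ships /busybox/sh,
    # which the shell step above relies on
    image: gcr.io/kaniko-project/executor:debug
    imagePullPolicy: IfNotPresent
    command:
      - sleep
      - "3600"
  restartPolicy: Always

The BUILD_CONTEXT, BUILD_CONTEXT_BUCKET, GCR_IMAGE and JENK_INT_IT_CRED_ID values are expected to be defined in an environment block or as job parameters; for example, BUILD_CONTEXT could be set to context-${BUILD_ID}.tar.gz (the exact values here are illustrative).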

As all Jenkins pipeline steps take place in ordinary Docker images, they can easily be tested and debugged on a developer's desktop or in the cluster using the kubectl exec -it <pod> -- /bin/bash command.
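
For example, the deploy stage can be reproduced locally in the same image the agent pod uses (the image name is taken from the pod spec above):

# Start the same image the cd container uses and experiment interactively:
docker run -it --rm python /bin/bash
# ...then run the same kubectl and helm installation commands as in the pipeline.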

Summary

Kubernetes application deployment using Jenkins, Docker containers, and Helm is conceptually quite easy. Of course, in a real-life production environment one may also wish to use namespaces or even separate clusters, give some consideration to security, etc. To play with the pipeline further, refer to the source code on GitHub.