
@abdennour
Last active August 31, 2022 12:22
Jenkins declarative Pipeline in Kubernetes with Parallel and Sequential steps
00-infra.yaml:

apiVersion: v1
kind: Pod
spec:
  # dnsConfig:
  #   options:
  #     - name: ndots
  #       value: "1"
  containers:
    - name: dind
      image: abdennour/docker:19-dind-bash
      command:
        - cat
      tty: true
      volumeMounts:
        - name: dockersock
          readOnly: true
          mountPath: /var/run/docker.sock
      resources:
        limits:
          cpu: 1000m
          memory: 768Mi
  volumes:
    - name: dockersock
      hostPath:
        path: /var/run/docker.sock
// Sequential
pipeline {
  agent {
    kubernetes {
      defaultContainer 'jnlp'
      yamlFile '00-infra.yaml'
    }
  }
  stages {
    stage('Build') {
      steps {
        container('dind') {
          sh 'docker build --network=host portal/ -t portal:dev'
          sh 'docker build --network=host iam/ -t iam:dev'
          sh 'docker push portal:dev'
          sh 'docker push iam:dev'
        }
        containerLog 'dind'
      }
    }
  }
}
// Parallel
pipeline {
  agent any
  stages {
    stage('Build') {
      parallel {
        // stage 1-a
        stage('build-portal') {
          agent {
            kubernetes {
              defaultContainer 'jnlp'
              yamlFile '00-infra.yaml'
            }
          }
          steps {
            container('dind') {
              sh 'docker build --network=host portal/ -t portal:dev'
              sh 'docker push portal:dev'
            }
          }
        }
        // stage 1-b
        stage('build-iam') {
          agent {
            kubernetes {
              defaultContainer 'jnlp'
              yamlFile '00-infra.yaml'
            }
          }
          steps {
            container('dind') {
              sh 'docker build --network=host iam/ -t iam:dev'
              sh 'docker push iam:dev'
            }
          }
        }
      }
    }
  }
}
// Dynamic parallel stages
def services = ['portal', 'iam']
def parallelBuildStagesMap = services.collectEntries {
  ["${it}" : generateBuildStage(it)]
}
def generateBuildStage(service) {
  return {
    stage("build-${service}") {
      agent {
        kubernetes {
          defaultContainer 'jnlp'
          yamlFile '00-infra.yaml'
        }
      }
      steps {
        container('dind') {
          // double quotes so Groovy interpolates ${service}; build context is the service dir
          sh "docker build ${service}/ -t ${service}:dev"
          sh "docker push ${service}:dev"
        }
      }
    }
  }
}
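For readers following the map-of-closures pattern: the collectEntries call above just builds a literal map from service name to stage closure, which is the shape `parallel` expects. A minimal sketch of the equivalent, reusing generateBuildStage from above:

```groovy
// Equivalent to the collectEntries call above: a literal map whose
// values are the closures that 'parallel' will execute concurrently.
def parallelBuildStagesMap = [
    'portal': generateBuildStage('portal'),
    'iam'   : generateBuildStage('iam')
]
```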
pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        script { // <-- script must be used to integrate imperative pipeline into Declarative
          parallel parallelBuildStagesMap
        }
      }
    } // end of main stage Build
  } // end of stages
} // end of pipeline
@Yethal

Yethal commented Jan 19, 2022

@harshavmb You need to use the agent directive inside the prepareOneBuildStage function; agent is for Declarative Pipeline only.
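Editor's note: in scripted closures, the Kubernetes plugin's podTemplate step (inside which POD_LABEL is bound for node()) can stand in for the declarative agent directive. A hedged sketch, not tested against this gist; the inline pod YAML simply mirrors the gist's dind container:

```groovy
// Scripted-pipeline sketch: a per-stage pod instead of the declarative
// 'agent' directive. podTemplate defines the pod; POD_LABEL is only
// bound inside its closure and tells node() which agent to use.
def generateBuildStage(service) {
    return {
        podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: dind
      image: abdennour/docker:19-dind-bash
      command: ["cat"]
      tty: true
''') {
            node(POD_LABEL) {
                stage("build-${service}") {
                    container('dind') {
                        sh "docker build ${service}/ -t ${service}:dev"
                        sh "docker push ${service}:dev"
                    }
                }
            }
        }
    }
}
```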

@JohnnyChiang

JohnnyChiang commented Jan 25, 2022

@sarg3nt @harshavmb

Here's a fix for the OP's script:

def services = ['portal', 'iam']


def parallelBuildStagesMap = services.collectEntries {
    ["${it}": generateBuildStage(it)]
}

def generateBuildStage(service) {
    return {
        node(POD_LABEL) {
            stage("build-${service}") {
                // no 'steps' block here: this is a scripted-pipeline closure
                container('dind') {
                    sh "docker build ${service}/ -t ${service}:dev"
                    sh "docker push ${service}:dev"
                }
            }
        }
    }
}

pipeline {
    agent {
        kubernetes {
            yamlFile '00-infra.yaml'
        }
    }
    stages {
        stage('Build') {
            steps {
                script { // <-- script must be used to integrate imperative pipeline into Declarative
                    parallel parallelBuildStagesMap
                }
            }
        } // end of main stage Build
    } // end of stages
} // end of pipeline

@toabi

toabi commented Mar 25, 2022

But this will run both in parallel in one agent?

I'm struggling to find a way to run code-generated stages with Kubernetes plugin agents in parallel, but using one agent per stage.

@JohnnyChiang

@toabi Does my script fit your needs?

@toabi

toabi commented Apr 8, 2022

Actually I figured out how to run one agent per stage:

This is getDeployStage.groovy:

def call(target, String tagName) {
  return [
      "${target.cluster} ${target.release}@${tagName}",
      {
        agentK8s {
          node(POD_LABEL) {
            stage("${target.cluster}/${target.namespace}/${target.release}/${tagName}") {
               doSomethingInSomeContainer()
            }
          }
        }
      }
  ]
}

agentK8s is defined in vars/agentK8s.groovy as follows. It creates a new agent, and inside its closure the POD_LABEL variable is accessible, which is what node(POD_LABEL) above uses.

def call(body) {
  podTemplate(
      containers: [
          containerTemplate(
              name: "kubectl",
              image: "bitnami/kubectl:1.22",
              command: "sleep",
              args: "infinity",
              runAsUser: "0"
          ),
          containerTemplate(
              name: "helm",
              image: "alpine/helm:3.8.1",
              command: "sleep",
              args: "infinity",
              runAsUser: "0"
          )
      ]
  ) {
    body.call()
  }
}

Additionally, there's something that builds a map from the getDeployStage outputs, for example like this:

deployBranchStages.groovy

def call(deployTargets, currentBranch) {
    return deployTargets.findAll({it.branch == currentBranch}).collectEntries { target ->
        return getDeployStage(target, env.pushedImageTag)
    }
}

So in the end, the pipeline stage with dynamic parallel stages and one agent per stage looks like this:

stage('Deploy Branch') {
    when {
        beforeAgent true
        expression { opts.deploy.targets.size() > 0 }
    }
    steps {
        script {
            parallel deployBranchStages(opts.deploy.targets, BRANCH_NAME)
        }
    }
}
