Explains running scripted Jenkins pipelines with Kubernetes agents, defining pod templates, handling explicit checkouts, and transferring dependencies using stash and unstash.
This lesson shows how to run scripted Jenkins pipeline stages inside Kubernetes pods using the Jenkins Kubernetes plugin. Unlike declarative pipelines, where an agent block often handles checkout and the workspace for you, scripted pipelines require more explicit control: define pod templates, manage checkouts on each node, and transfer artifacts between agents with stash/unstash. You will learn:
How to define a Kubernetes pod via podTemplate and containerTemplate.
How to run some stages on a static Jenkins agent and other stages inside Kubernetes containers.
How to move installed dependencies (for example node_modules) between agents using stash/unstash.
Below is a pod template you can generate from Jenkins “Pipeline Syntax” and then simplify for use inside a Jenkinsfile. This example defines a pod with two Node.js container templates:
To produce the initial Groovy block, use Jenkins’ “Pipeline Syntax” generator. The form looks like this when creating a pod template for a cloud:
When adding container templates in the UI, make sure to set fields such as image, command, args, and allocate a TTY if required:
The generator often emits a verbose, YAML-like Groovy snippet including probes and resource defaults. Trim it to the essentials: cloud, label, and containers with name, image, command, args, tty, and privileged flags as needed. For example:
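A trimmed version might look like the following sketch. The cloud name, label, and image come from this lesson's environment; adjust them to match your own Jenkins configuration:

```groovy
// Trimmed pod template: only the fields the pipeline actually needs.
// 'dasher-prod-k8s-us-east' and 'node:18-alpine' are this lesson's examples.
podTemplate(
    cloud: 'dasher-prod-k8s-us-east',   // must match a cloud configured in Jenkins
    label: 'nodejs-pod',                // referenced later via node('nodejs-pod')
    containers: [
        containerTemplate(
            name: 'node-18',
            image: 'node:18-alpine',
            command: 'sleep',           // keep the container alive for the build
            args: '9999999',
            ttyEnabled: true
        )
    ]
) {
    // stages that should run in this pod go here
}
```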
The pipeline in this lesson combines:
A static Jenkins agent (e.g., a long-lived Ubuntu/Docker executor) to perform checkout and install dependencies (fast and cache-friendly).
stash on that agent to capture the installed dependencies.
A Kubernetes pod/container to run unit tests, after running checkout scm on that node and unstashing the dependencies.
Full example Jenkinsfile (scripted pipeline):
```groovy
// Jenkinsfile (scripted pipeline)
podTemplate(
    cloud: 'dasher-prod-k8s-us-east',
    label: 'nodejs-pod',
    containers: [
        containerTemplate(
            name: 'node-18',
            image: 'node:18-alpine',
            command: 'sleep',
            args: '9999999',
            ttyEnabled: true,
            privileged: true
        )
    ]
) {
    node('ubuntu-docker-jdk17-node20') { // static agent for heavy operations, checkout, caching
        // Tools and env setup on static node
        env.NODEJS_HOME = "${tool 'nodejs-22-6-0'}"
        env.PATH = "${env.NODEJS_HOME}/bin:${env.PATH}"
        env.MONGO_URI = "mongodb+srv://supercluster.d83jj.mongodb.net/superData"
        properties([])

        stage('Checkout') {
            checkout scm
        }

        wrap([$class: 'TimestamperBuildWrapper']) {
            stage('Installing Dependencies') {
                // Example cache wrapper (plugin-specific)
                cache(maxCacheSize: 550, caches: [
                    arbitraryFileCache(
                        cacheName: 'npm-dependency-cache',
                        cacheValidityDecidingFile: 'package-lock.json',
                        includes: '**/*',
                        path: 'node_modules'
                    )
                ]) {
                    sh 'node -v'
                    sh 'npm install --no-audit'
                    // stash node_modules to transfer to the k8s container later
                    stash(includes: 'node_modules/**', name: 'solar-system-node-modules')
                }
            }
        }
    }

    // Run Unit Testing inside the Kubernetes pod/container
    stage('Unit Testing') {
        // node label must match the podTemplate label defined above
        node('nodejs-pod') {
            // select the container inside that pod
            container('node-18') {
                // In scripted pipelines you must explicitly checkout when switching nodes/pods
                checkout scm
                // restore the dependencies
                unstash 'solar-system-node-modules'
                // Run tests
                sh 'node -v'
                sh 'npm test'
            }
        }
    }
}
```
In scripted pipelines, switching to a different agent/node (for example from a static agent to a Kubernetes pod via node('label')) does not carry over the workspace or checked-out files. Always run checkout scm on the node where you will execute build/test commands, or use stash/unstash to transfer files between agents.
Use stash on the producer agent (where you installed dependencies) and unstash on the consumer agent (the Kubernetes container) so that tools and dependencies are available where they are needed.
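The producer/consumer pattern can be sketched as follows, reusing the labels and stash name from this lesson's full Jenkinsfile:

```groovy
// Producer: install dependencies on the static agent, then capture node_modules.
node('ubuntu-docker-jdk17-node20') {
    checkout scm
    sh 'npm install --no-audit'
    stash(includes: 'node_modules/**', name: 'solar-system-node-modules')
}

// Consumer: restore the dependencies inside the Kubernetes container before testing.
node('nodejs-pod') {
    container('node-18') {
        checkout scm                        // workspace does not carry over between nodes
        unstash 'solar-system-node-modules' // restores node_modules into this workspace
        sh 'npm test'
    }
}
```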
Missing repository files on the Kubernetes container (ENOENT: package.json).
Cause: forgetting to checkout scm inside the k8s node(...) block. Failing log example:
```
+ node -v
v18.20.4
+ npm test
npm ERR! code ENOENT
npm ERR! syscall open
npm ERR! path /home/jenkins/agent/workspace/n_solar-system_pipeline_scripted/package.json
npm ERR! errno -2
npm ERR! enoent Could not read package.json Error: ENOENT: no such file or directory, open '/home/jenkins/agent/workspace/n_solar-system_pipeline_scripted/package.json'
npm ERR! enoent This is related to npm not being able to find a file.
```
Fix: run checkout scm on the same node/container where you invoke npm test.
Missing dev/test tools (e.g., mocha: not found).
Cause: dependencies were installed on a different agent and not transferred. Symptom: sh: mocha: not found.
Fix: stash node_modules on the installing agent and unstash on the test agent, or install dependencies inside the test container.
Be mindful of stash size and limits: stashing large directories (e.g., entire build artifacts) may increase build time and storage use. Prefer caching plugins or artifact repositories for large dependencies. Also ensure workspace paths and ownership are compatible across agents (uid/gid differences can affect file access).
After adding checkout scm and unstash to the unit testing stage, the pipeline should successfully provision pods and run tests. Monitor pod provisioning, logs, and step output in the Jenkins UI (Blue Ocean or classic UI):
When the pod is provisioned, Jenkins prints the generated Pod YAML and container details in the agent logs. Example snippet you may see in the console output: