Building Docker Images with Ephemeral Jenkins Slaves on Amazon ECS

Dec 27, 2025 · Benjamin Rabiller · 4 min read

Context

In my use case, I had several ECS clusters available 24/7, and I found it unnecessary to instantiate additional EC2 instances (even temporary ones) just to host Jenkins slave agents. It made no financial sense. The same applies to using AWS CodeBuild, which by definition would have incurred extra costs.

Therefore, I turned to the Amazon ECS Jenkins plugin. This plugin allows you to instantiate Docker containers as Jenkins agents on an ECS cluster, whether using the EC2 launch type or Fargate.

I won’t describe the full architecture in this article, as there are already many high-quality resources on the subject.

The goal here is instead to describe how I leverage these slaves to build Docker containers without Docker and without mounting the Docker socket into my containers. This avoids exposing the Docker socket in our Jenkins slaves and removes the need to run our slaves in privileged mode.

Docker-in-Docker without Docker

How can we build a Docker image without Docker? Quite simply by using Kaniko. Kaniko is an open-source tool that builds Docker images and pushes them to registries without requiring a Docker daemon.

The minor challenge is that, similar to GitLab CI, we must use a source Docker image that contains the Jenkins agent. This means we cannot use any random image; we must either customize an image to install the Jenkins agent or use the official image provided by Jenkins (which is what I chose): jenkins/inbound-agent.

Consequently, we need to customize this image to integrate Kaniko. This can be tedious… With GitLab CI, we could have directly used the Kaniko Docker image. To achieve this here, we use a Multi-Stage Build in our Dockerfile:

FROM gcr.io/kaniko-project/executor:v1.3.0 AS kaniko

FROM jenkins/inbound-agent

ARG DOCKER_VERSION=5:20.10.2~3-0~debian-buster
ARG AWS_CLI_VERSION=2.1.17

USER root

RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64-${AWS_CLI_VERSION}.zip" -o "awscliv2.zip" && \
    unzip awscliv2.zip && \
    ./aws/install && \
    rm awscliv2.zip

RUN apt-get update && apt-get install --no-install-recommends -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common && \
    curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add - && \
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable" && \
    apt-get update && \
    apt-get --no-install-recommends -y install docker-ce-cli=${DOCKER_VERSION} && \
    apt-get clean

COPY --from=kaniko /kaniko /kaniko

## I explain in the article why I commented out this line
#USER jenkins

One might argue that I’m “cheating” by installing docker-ce-cli in the image. While true, it is only used to authenticate with ECR so that Kaniko can push the image.

Even though I configured the Jenkins plugin so that the generated Task Definition is associated with an IAM Role allowing interactions with the ECR registry, Kaniko didn’t seem to assume this role automatically. I therefore had to use the AWS CLI and Docker CLI to generate the registry credentials in ~/.docker/config.json.
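For illustration, here is a hedged sketch of one docker-CLI-free alternative: writing the Docker credential file by hand from the token returned by `aws ecr get-login-password`. The registry hostname and token below are placeholders, and the config location is an assumption based on Docker's defaults (Kaniko also honours the `DOCKER_CONFIG` environment variable):

```shell
#!/bin/sh
# Sketch (assumption, not what the article uses): ECR accepts basic auth
# with the user "AWS" and the token from `aws ecr get-login-password`,
# so the credential file can be written without the docker CLI.
REGISTRY="xxxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com"   # placeholder
TOKEN="dummy-token"  # in the pipeline: $(aws ecr get-login-password --region eu-west-1)

# "AWS:<token>", base64-encoded, is what `docker login` would have stored.
AUTH=$(printf 'AWS:%s' "$TOKEN" | base64 | tr -d '\n')

# Kaniko reads DOCKER_CONFIG if set; fall back to the usual location.
CONFIG_DIR="${DOCKER_CONFIG:-$HOME/.docker}"
mkdir -p "$CONFIG_DIR"
cat > "$CONFIG_DIR/config.json" <<EOF
{
  "auths": {
    "${REGISTRY}": { "auth": "${AUTH}" }
  }
}
EOF
```

This removes the docker-ce-cli dependency from the image entirely, at the cost of hand-rolling a file the CLI normally manages.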

If you have found a cleaner solution for this, feel free to let me know in the comments!

I wanted to revert to the jenkins user to avoid running the slave as root, but that made building the image impossible. My images don’t fall under the specific case described in the Kaniko README:

“If you have a minimal base image (SCRATCH or similar) that doesn’t require permissions to unpack, and your Dockerfile doesn’t execute any commands as the root user, you can run kaniko without root permissions.”

Below is the IAM policy linked to the task definition role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "ecr:BatchCheckLayerAvailability",
                "ecr:PutImage",
                "ecr:InitiateLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload"
            ],
            "Resource": "*"
        }
    ]
}

Usage within a Jenkinsfile

pipeline {
   agent {
    label 'cluster-ecs'
   }
   environment {
      ACCOUNT_ID = 'xxxxxxxxxx'
      ECR_URL = 'xxxxxxxxx.dkr.ecr.eu-west-1.amazonaws.com/apache-php'
   }
   stages {
      stage('Docker Build/Push image') {
        steps {
          sh '''
          aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin xxxxxxx.dkr.ecr.eu-west-1.amazonaws.com
          /kaniko/executor --dockerfile ./Dockerfile --destination ${ECR_URL}:${BRANCH_NAME} --context ./ --force && echo "Successfully pushed image ${ECR_URL}:${BRANCH_NAME}"
          '''
        }
      }
      stage('Security Scan') {
        steps {
          sh '''
          aws ecr wait image-scan-complete --repository-name apache-php --image-id imageTag=${BRANCH_NAME}
          '''
        }
      }
   }
}

This job consists of two stages:

  1. Docker Build/Push: Builds and pushes the image to our ECR registry.
  2. Security Scan: Waits for the AWS API to confirm the security scan is complete. You can then further process these results.
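As a hypothetical example of "further processing" the results, a follow-up step could read the findings from `aws ecr describe-image-scan-findings` and flag critical vulnerabilities. To keep the sketch self-contained, a sample payload stands in for the real API response:

```shell
#!/bin/sh
# Hypothetical sketch: in the pipeline, FINDINGS would come from:
#   aws ecr describe-image-scan-findings --repository-name apache-php \
#       --image-id imageTag=${BRANCH_NAME} --region eu-west-1
# A sample payload is inlined here so the snippet stands alone.
FINDINGS='{"imageScanFindings": {"findingSeverityCounts": {"CRITICAL": 2, "MEDIUM": 5}}}'

# Extract the CRITICAL count (a jq one-liner would be cleaner if jq is available).
CRITICAL=$(printf '%s' "$FINDINGS" | sed -n 's/.*"CRITICAL": \([0-9]*\).*/\1/p')
CRITICAL="${CRITICAL:-0}"

if [ "$CRITICAL" -gt 0 ]; then
  echo "Scan reported $CRITICAL critical vulnerabilities"
fi
```

In a real stage you would likely fail the build (exit non-zero) instead of just echoing, so that vulnerable images never reach deployment.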

Note that the label used for the agent matches the one configured in the Amazon ECS Jenkins plugin to ensure the agent runs on the correct cluster.

Of course, this job assumes a source SCM such as GitHub, GitLab, or Bitbucket is configured; the source code containing the Dockerfile is checked out at the root of the Jenkins slave’s workspace.

Conclusion

In this article, we’ve seen that it is possible to build Docker images securely without the Docker daemon using Kaniko within ephemeral Jenkins slaves on Amazon ECS.

Ultimately, it wasn’t as straightforward as it seemed. In my opinion, Jenkins and its JNLP-based agent management make this process significantly less intuitive than it is on GitLab CI.

Benjamin Rabiller
DevOps/Cloud Architect

Currently a DevOps engineer/Cloud architect, I was initially interested in system administration, and thanks to the companies I have worked for, Oxalide and now Claranet, I had the chance to discover the world of Cloud and automation.

I decided to publish this blog to share my passion with you, but also to modestly enrich everything that can already be found on the internet. Happy reading!