Docker image sizes grow quietly until suddenly your CI pipeline is spending ten minutes on image pulls and your ECS startup time is slow enough to affect health check grace periods. As someone who inherited a CI system where the base image pull alone took longer than the build, I learned how multi-stage builds actually eliminate the problem rather than paper over it. Today I’ll share the patterns that cut image sizes by 60–90%.

Why Images Get Large
A standard Dockerfile installs build tools, compiles your application, and copies everything into the final image — including the build tools, the compiler, the package manager cache, and the test dependencies you only needed during CI. Everything that goes into a Docker layer stays in the image unless you explicitly remove it in the same layer. Removing something in a later RUN statement doesn’t shrink the image; it just adds a layer that marks files as deleted without actually removing them from the underlying layer.
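To make that layer behavior concrete, here is a minimal sketch (the package name is illustrative):

```dockerfile
# Anti-pattern: the cleanup happens in a LATER layer, so the image
# still carries the full apt cache inside the first RUN's layer.
RUN apt-get update && apt-get install -y build-essential
RUN rm -rf /var/lib/apt/lists/*

# Correct: install and clean up in the SAME layer, so the cache
# never lands in any committed layer at all.
RUN apt-get update && apt-get install -y build-essential \
    && rm -rf /var/lib/apt/lists/*
```

Multi-stage builds make most of this same-layer cleanup gymnastics unnecessary, because the builder stage's layers never reach the final image in the first place.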
Multi-stage builds solve this by using separate stages for building and for the final runtime image. The builder stage has everything you need to compile. The final stage starts from a minimal base image and copies only the compiled artifacts — nothing else.
The Basic Pattern
# Stage 1: Build
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Drop devDependencies so the runtime stage copies only production packages
RUN npm prune --omit=dev
# Stage 2: Runtime
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/index.js"]
The AS builder label names the first stage, and COPY --from=builder in the second stage pulls specific files out of it. The final image is built from node:20-alpine (roughly 50MB) instead of node:20 (roughly 1.1GB), and that difference in pull time is immediately obvious to anyone watching a deployment pipeline.
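A useful side effect of named stages: you can build just the builder stage with --target, which is handy for CI jobs that need the full toolchain for tests or linting. A sketch (image tags are illustrative):

```shell
# Build only the builder stage; tests and linters can run against this
docker build --target builder -t myapp:build .

# Build the full multi-stage image (the default builds through the final stage)
docker build -t myapp:latest .
```

Because both commands share the same layer cache, running them back to back costs little more than running the second alone.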
The Go Pattern (Extreme Size Reduction)
Go compiles to a static binary, which means the final image can be stripped down to almost nothing:
# Stage 1: Build
FROM golang:1.22 AS builder
WORKDIR /app
COPY go.* ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /app/server ./cmd/server
# Stage 2: Runtime
FROM scratch
COPY --from=builder /app/server /server
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
EXPOSE 8080
ENTRYPOINT ["/server"]
FROM scratch is an empty base image — no OS, no shell, no anything. The final image contains exactly your binary and the TLS certificates it needs to make HTTPS calls. A Go binary that would be in a 1.2GB image with the full Go toolchain ends up in a 15–30MB image.
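If scratch is too spartan (no shell for debugging, no CA certificates, no /etc/passwd for running as a non-root user), a commonly used middle ground is Google's distroless static image, which ships certificates and a nonroot user but still no shell or package manager. A sketch of the runtime stage, assuming the same builder stage as above:

```dockerfile
# Stage 2 (alternative): distroless instead of scratch
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=builder /app/server /server
EXPOSE 8080
ENTRYPOINT ["/server"]
```

Note the CA-certificates COPY line is no longer needed, since distroless includes them.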
Python: Where It Gets Trickier
Python is harder to optimize because it's interpreted; you can't compile it down to a single static binary. The pattern here is to install dependencies in a full builder stage (which has the compilers that some packages need to build C extensions), then copy only the installed packages into a slim final image:
# Stage 1: Dependencies
FROM python:3.12 AS builder
WORKDIR /app
RUN pip install --upgrade pip
COPY requirements.txt .
RUN pip install --no-cache-dir --target=/app/packages -r requirements.txt
# Stage 2: Runtime
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /app/packages /app/packages
COPY src/ ./src/
ENV PYTHONPATH=/app/packages
CMD ["python", "src/main.py"]
The key win here is using python:3.12-slim instead of python:3.12 for the runtime stage — roughly 130MB vs 1GB.
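One caveat with the --target approach: console scripts that pip installs (entry points like gunicorn or celery) end up in /app/packages/bin, which isn't on PATH, so invoking them directly can behave oddly. A variant I've seen work well builds a real virtual environment and copies it wholesale; this is a sketch under that assumption, with illustrative paths:

```dockerfile
# Stage 1: build a self-contained virtual environment
FROM python:3.12 AS builder
RUN python -m venv /opt/venv
# Putting the venv first on PATH makes pip install into it
ENV PATH="/opt/venv/bin:$PATH"
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Stage 2: copy the venv to the slim image; same Python minor version,
# same /opt/venv path, so the interpreter shebangs still resolve
FROM python:3.12-slim
ENV PATH="/opt/venv/bin:$PATH"
COPY --from=builder /opt/venv /opt/venv
WORKDIR /app
COPY src/ ./src/
CMD ["python", "src/main.py"]
```

The important constraint is that both stages use the same Python minor version and the venv lives at the same path in both, since the venv records absolute paths.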
Practical Wins to Look For
Check your current image sizes with docker images --format '{{.Size}}\t{{.Repository}}:{{.Tag}}' | sort -h (sorting the raw docker images output by column number is fragile, because the CREATED field varies in word count). Anything over 500MB deserves a review; see what's taking up the space with docker history your-image:latest. The other half of the optimization is .dockerignore. Honestly, I probably should have led with this: a minimal .dockerignore that excludes node_modules, .git, and test data often shrinks the build context dramatically and speeds up the first COPY step before you've touched the Dockerfile at all.
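A starting .dockerignore along those lines, to be adjusted per project (the entries are typical, not exhaustive):

```
node_modules
.git
dist
__pycache__
*.log
.env
tests/fixtures/
```

Excluding a locally built dist is safe here because the multi-stage builds above produce their own dist inside the builder stage; excluding .env also keeps local secrets out of the build context entirely.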