TL;DR: Containers aren’t just a shipping box. They’re a repeatable dev computer. If you design your Dockerfiles and Compose setup for day‑to‑day development, your deployment story becomes boring in the best possible way.
Why developers keep tripping over Docker
Most teams meet Docker at the end of the pipeline. “Wrap it up, push it to a registry, job done.” Then the pain begins: flaky builds, bloated images, weird UID issues with mounted volumes, and the classic “works on my machine” but not inside the container.
I’ve been there across Web3 backends, bots, and home‑lab chaos on Raspberry Pis. The shift was realising Docker is a dev tool first. If you design for repeatable local dev, production becomes a checkbox.
This post covers:
- What containers actually are (just enough to be dangerous)
- Writing reliable, fast, and secure Dockerfiles
- Using Compose to run a full dev stack (watch, profiles, health checks)
- Troubleshooting the stuff that bites everyone
Containers in one page: what’s under the hood
Containers are regular Linux processes with:
- Namespaces for isolation (PID, net, mount, user, UTS),
- cgroups for resource limits,
- A union filesystem (overlay2) stacked from image layers.
A container isn’t a mini‑VM. It’s a process tree inside an isolated environment. That’s why PID 1 behaviour matters and why volumes exist: the container FS is ephemeral by design.
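You can see this from the host: a container's processes show up in the host's process table with ordinary PIDs. A quick sketch, using nginx purely as a throwaway example:
# run a disposable container, then look at it as a plain host process
docker run -d --name demo nginx
docker top demo                           # the container's process tree
docker inspect -f '{{.State.Pid}}' demo   # its PID in the host's PID namespace
docker rm -f demo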
Mental model
- Image: a read‑only snapshot built from your Dockerfile layers.
- Container: a running instance of that image plus writable bits.
- Layers: filesystem‑changing instructions (RUN, COPY, ADD) each create one; most others only add metadata. Their order affects cache hits, build speed, and size.
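To make the image/container/layer split concrete, poke at any image you have locally (node:20-bookworm is just a handy example):
docker pull node:20-bookworm
docker history node:20-bookworm             # one row per layer-producing instruction
docker run -d --name scratchpad node:20-bookworm sleep 600
docker exec scratchpad touch /tmp/hello
docker diff scratchpad                      # the new file lives in the writable layer, not the image
docker rm -f scratchpad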
Dockerfile fundamentals that actually matter
1) Choose the right base image
- Prefer distro images that match your tooling needs (e.g. `debian:bookworm-slim`, `alpine` when you know musl is fine, or language images like `node:20-bookworm`).
- For Go or single‑binary apps, consider `distroless` or `scratch` for production. Keep a fatter builder image for compiling.
2) Multi‑stage builds are the default, not an optimisation
Build your artefacts in one stage, copy only what you need into a small runtime stage.
# syntax=docker/dockerfile:1.7
FROM node:20-bookworm AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm npm ci
FROM node:20-bookworm AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
FROM gcr.io/distroless/nodejs20-debian12 AS runtime
WORKDIR /app
# Non-root runtime
USER 10001
COPY --from=build /app/dist ./dist
# Node distroless uses `node` as entrypoint, so run your script
CMD ["/app/dist/index.js"]
3) Cache like you mean it (BuildKit mounts)
Use BuildKit to persist package manager caches between rebuilds. Speed without polluting the final image.
# Python example
# syntax=docker/dockerfile:1.7
FROM python:3.12-bookworm AS deps
WORKDIR /app
COPY pyproject.toml poetry.lock ./
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install --upgrade pip poetry poetry-plugin-export && \
    poetry export -f requirements.txt --output requirements.txt --without-hashes && \
    pip wheel --wheel-dir=/wheels -r requirements.txt
FROM python:3.12-slim AS runtime
WORKDIR /app
COPY --from=deps /wheels /wheels
COPY --from=deps /app/requirements.txt ./
RUN pip install --no-cache-dir --no-index --find-links=/wheels -r requirements.txt
COPY . .
USER 10001
CMD ["python","-m","app"]
Tip: move slow, infrequently changing steps (like installing dependencies) above the fast‑changing `COPY . .` to keep cache hits.
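For contrast, here is the ordering that defeats the cache next to the one that keeps it (Node shown only as an example; the idea is language‑agnostic):
# Bad: any source change invalidates the dependency install layer
#   COPY . .
#   RUN npm ci
# Good: dependency layers only rebuild when the lockfile changes
COPY package.json package-lock.json ./
RUN npm ci
COPY . .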
4) Secrets belong in secret mounts, not layers
Never bake tokens into images. Use BuildKit secrets and SSH mounts for private dependency fetches during build.
# syntax=docker/dockerfile:1.7
RUN --mount=type=secret,id=npm_token \
npm config set //registry.npmjs.org/:_authToken="$(cat /run/secrets/npm_token)" && npm ci
Build with:
docker build --secret id=npm_token,env=NPM_TOKEN .
5) ENTRYPOINT vs CMD
- ENTRYPOINT: what always runs. Think executable.
- CMD: default args or fallback command. Overridable.
For most app images: set a stable ENTRYPOINT and optional CMD. Avoid shell form unless you need it; exec form handles signals better.
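A minimal sketch of the split, with a hypothetical mytool binary:
ENTRYPOINT ["mytool"]   # always runs (exec form, so signals reach it directly)
CMD ["--help"]          # default args; `docker run image serve --port 8080` overrides them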
6) Health checks catch zombie “it’s running” lies
Add a `HEALTHCHECK` that reflects app readiness. Your orchestrator and Compose can then wait on it.
HEALTHCHECK --interval=10s --timeout=3s --retries=3 \
CMD wget -qO- http://127.0.0.1:3000/health || exit 1
7) USER and PID 1
- Don’t run as root in production images. Create or use a non‑root UID and ensure file ownership matches.
- PID 1 doesn’t forward signals or reap zombies unless your process handles it. If you need an init, run the container with `--init` or use a tini/dumb‑init entrypoint in non‑distroless images (sketch below).
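A sketch covering both points for a Debian‑based image; the app user, the tini package, and the node command are illustrative, not prescriptive:
# non-root user plus tini as PID 1 to forward signals and reap zombies
RUN apt-get update && apt-get install -y --no-install-recommends tini \
    && rm -rf /var/lib/apt/lists/* \
    && useradd --uid 10001 --create-home app
COPY --chown=app:app . /app
USER app
ENTRYPOINT ["/usr/bin/tini", "--"]
CMD ["node", "server.js"]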
8) `.dockerignore` isn’t optional
Exclude `node_modules`, `venv`, build artefacts, secrets, and large directories. Smaller context = faster builds, fewer surprises.
# .dockerignore
node_modules
.env
.git
**/__pycache__
dist
.DS_Store
Compose for development: treat it like your local cloud
Compose isn’t “just for demos.” It gives you a portable, versioned definition of your entire dev stack: app, DB, cache, message broker, reverse proxy, whatever.
Modern Compose facts to internalise
- The top‑level `version:` key is obsolete. Use the latest spec by default.
- Give your project a name with top‑level `name:` to avoid namespace collisions.
- Use profiles to toggle optional services (debug tools, tracing, seeders).
- Use healthchecks and `depends_on.condition` so services start in a sane order.
- For fast feedback loops, use develop/watch with `docker compose up --watch`.
A practical dev stack
Let’s wire a typical web API with Postgres and Redis. We want hot reload for the app, a real DB, and optional tools gated by profiles.
# compose.yaml
name: appstack

services:
  api:
    build:
      context: .
    command: npm run dev
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://postgres:postgres@db:5432/app
      REDIS_URL: redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    develop:
      watch:
        - action: sync
          path: ./src
          target: /app/src
          ignore:
            - node_modules/
        - action: rebuild
          path: package.json
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://127.0.0.1:3000/health || exit 1"]
      interval: 10s
      timeout: 3s
      retries: 3

  db:
    image: postgres:16
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: app
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
      interval: 10s
      retries: 5
      start_period: 30s

  cache:
    image: redis:7

  mailhog:
    image: mailhog/mailhog:latest
    profiles: [devtools]
    ports:
      - "8025:8025" # only when profile active

volumes:
  pgdata:
Usage:
# default stack (api, db, cache)
docker compose up --watch
# include optional tools
docker compose --profile devtools up --watch
Why this works well for dev
- `develop.watch` syncs only source files, not `node_modules`, so performance stays sane on macOS/Windows.
- `depends_on.condition: service_healthy` prevents your app from booting before Postgres is ready.
- `name:` avoids accidentally reusing volumes from another project called “default.”
Profiles: one file, many shapes
Add debugging or seed jobs behind profiles instead of splitting files and forgetting which override you used.
  jaeger:
    image: jaegertracing/all-in-one:1.60
    profiles: [observability]
    ports:
      - "16686:16686"
COMPOSE_PROFILES=observability docker compose up
When to use bind mounts vs watch
- If your framework supports hot reload, prefer watch sync and keep `node_modules` inside the container to avoid cross‑OS module pain (see the sketch below).
- For quick scripts or tools, a plain bind mount can be fine. Don’t mount over path trees your runtime needs to own.
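If you do reach for a bind mount on a Node project, the usual trick is to mask node_modules with a named volume so the container keeps its own copy. A sketch; the service name and paths are illustrative:
services:
  api:
    build: .
    volumes:
      - ./src:/app/src                    # host-edited source
      - node_modules:/app/node_modules    # container-owned deps, never the host's
volumes:
  node_modules: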
Production hardening checklist (you’ll thank yourself later)
- Multi‑stage builds with a small runtime image
- Non‑root `USER` with correct file ownership
- `HEALTHCHECK` reflecting real readiness
- Minimal attack surface: no compilers or package managers in the final image
- Pin base images by digest for reproducibility on CI
- Labels for provenance (`org.opencontainers.image.*`)
- `STOPSIGNAL` set when apps need something other than SIGTERM
- SBOM/vuln scan in CI (e.g. `docker scout`)
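A runtime‑stage sketch that ticks several of these boxes; the digest and label values are placeholders, not real pins:
# pin by digest for reproducible CI builds (placeholder digest)
FROM node:20-bookworm-slim@sha256:<digest> AS runtime
LABEL org.opencontainers.image.source="https://github.com/your-org/your-app" \
      org.opencontainers.image.revision="<git-sha>"
STOPSIGNAL SIGQUIT   # only if your app treats SIGQUIT as graceful shutdown
USER 10001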
Common gotchas and fixes
Mount performance on macOS/Windows
Cross‑FS mounts can be slow. Prefer `develop.watch` and keep dependencies inside the container. If you must mount, narrow the mount and ignore large dirs.
UID/GID mismatches on bind mounts
Create a user with the same UID/GID as your host dev user, or use volumes instead of bind mounts for write‑heavy paths.
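One way to line the IDs up is to pass them in as build args. A sketch, assuming a Debian‑style image; the arg and user names are just a convention:
ARG UID=1000
ARG GID=1000
RUN groupadd -g "${GID}" dev && useradd -m -u "${UID}" -g "${GID}" dev
USER dev
Build with:
docker build --build-arg UID=$(id -u) --build-arg GID=$(id -g) .
In Compose, the same values go under build.args.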
“It’s running but not ready”
Your process started but isn’t serving yet. Add a `HEALTHCHECK` in the Dockerfile and `depends_on.condition: service_healthy` in Compose.
Zombie processes and signal handling
If your app forks or spawns children, either handle SIGTERM properly or run with an init (`docker run --init`) so PID 1 reaps zombies.
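Compose has the same switch per service, so nobody has to remember the flag. A sketch:
services:
  api:
    build: .
    init: true   # run a small init as PID 1 to forward signals and reap children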
Image bloat
Move `COPY . .` lower, use multi‑stage builds, and don’t leave package manager caches in the final image. Use BuildKit cache mounts for speed without size.
Alpine vs Debian
Alpine is tiny but uses musl. Some native deps and glibc‑expecting binaries behave oddly. If you see weird ABI issues, switch to `-slim` Debian.
End‑to‑end example: FastAPI + Postgres dev flow
Dockerfile
# syntax=docker/dockerfile:1.7
FROM python:3.12-slim AS base
ENV PYTHONDONTWRITEBYTECODE=1 PYTHONUNBUFFERED=1
WORKDIR /app
FROM base AS deps
COPY pyproject.toml poetry.lock ./
RUN --mount=type=cache,target=/root/.cache/pip \
pip install --upgrade pip && \
    pip install poetry poetry-plugin-export && \
poetry export -f requirements.txt --output requirements.txt --without-hashes && \
pip wheel --wheel-dir=/wheels -r requirements.txt
FROM base AS runtime
COPY --from=deps /wheels /wheels
COPY --from=deps /app/requirements.txt ./
RUN pip install --no-cache-dir --no-index --find-links=/wheels -r requirements.txt
COPY app ./app
USER 10001
EXPOSE 8000
HEALTHCHECK CMD python -c "import urllib.request as u; u.urlopen('http://127.0.0.1:8000/health').read()" || exit 1
CMD ["python","-m","uvicorn","app.main:app","--host","0.0.0.0","--port","8000"]
compose.yaml excerpt
services:
  api:
    build: .
    environment:
      DATABASE_URL: postgresql+psycopg://postgres:postgres@db:5432/app
    ports: ["8000:8000"]
    depends_on:
      db:
        condition: service_healthy
    develop:
      watch:
        - action: sync
          path: ./app
          target: /app/app
        - action: rebuild
          path: pyproject.toml

  db:
    image: postgres:16
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: app
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
      interval: 10s
      retries: 5
      start_period: 30s

volumes:
  pgdata:
Run it:
docker compose up --watch
Edit Python files and your API reloads; change dependencies and Compose rebuilds.
Final thoughts
Treat Docker as a portable development computer. Write Dockerfiles that build fast and run safe. Use Compose to encode your whole stack with health‑aware startup and fast feedback loops. The day you deploy is just another `up`.