The Docker Lifecycle: Build, Ship, Run

Docker standardizes the complex process of software delivery into a simple, three-phase lifecycle: Build, Ship, and Run. This approach creates a consistent, repeatable pipeline that moves an application from a developer's source code to a production-ready deployment.

  • Build: The process of creating a container image from source code.

  • Ship: The process of versioning, testing, and distributing the image to a registry.

  • Run: The process of pulling the image from the registry and running it as a container on a host system.

Phase 1: Build (Creating the Artifact)

The "Build" phase moves the process of creating a shippable artifact from a manual "black art" to a transparent, automated, and code-based practice.

The Dockerfile

The foundation of every image is the Dockerfile. This is a plain-text file containing a set of instructions that define exactly how to assemble the image. It specifies the base OS, language runtimes, application code, dependencies, and metadata.

Each instruction in a Dockerfile (e.g., FROM, RUN, COPY) creates a new filesystem layer in the final image.
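A minimal sketch makes the layer mapping concrete. The example below assumes a hypothetical Node.js app with an `index.js` entry point; every instruction shown produces (or caches) one layer:

```dockerfile
# Base OS and language runtime
FROM node:20-alpine

# Working directory inside the image
WORKDIR /app

# Dependency manifest and install step -- cached until package.json changes
COPY package.json .
RUN npm install

# Application code -- a new layer on every code change
COPY index.js .

# Metadata: the default command the container runs
CMD ["node", "index.js"]
```

Ordering matters here: dependencies are copied and installed before the application code so that routine code edits invalidate only the final `COPY` layer, not the slower `npm install` layer.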

The Build Process and Revision Control

The image is built using the docker image build (or docker build) command, which reads the Dockerfile and executes its instructions. This process has two built-in forms of revision control:

  1. Filesystem Layers (Build Cache): Because images are composed of stacked, immutable layers, Docker can leverage a build cache. On a rebuild, Docker reuses cached layers up to the first instruction whose content has changed; that instruction and every one after it are rebuilt. This provides:

    • Speed: Only the changed layers are rebuilt.

    • Efficiency: When pushing or pulling an image, only the layers that are not already present on the remote system are transferred, saving time and bandwidth.

  2. Image Tagging: The output of a successful build is a Docker image. This image should be given a tag to provide human-readable versioning. The tag is the standard mechanism for tracking application revisions and typically corresponds to a Git commit hash or a semantic version number (e.g., myapp:v1.2.1 or myapp:b4dcommit). This system makes it trivial to identify previous versions and perform rollbacks.
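On the command line, the build-and-tag step might look like the following sketch (the image name `myapp` and both tags are illustrative):

```shell
# Build the image from the Dockerfile in the current directory,
# tagging it with a semantic version
docker image build -t myapp:v1.2.1 .

# Add a second tag pointing at the same image, here the short Git commit hash
docker image tag myapp:v1.2.1 myapp:$(git rev-parse --short HEAD)

# Confirm that both tags reference the same image ID
docker image ls myapp
```

Tags are cheap pointers: both tags above reference one image, so keeping a commit-hash tag alongside a version tag costs no extra storage.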

Packaging: The OCI Artifact

The "Packaging" step is an inherent part of the "Build" phase. The container image is the package.

  • Standardized Format: The image is a standardized, Open Container Initiative (OCI) compliant artifact.

  • Self-Contained: It is an immutable bundle that contains the application, its libraries, configurations, and all other dependencies.

  • Portable: This single artifact is the "unit of packaging" that will be used, unchanged, across all subsequent environments (testing, staging, production).

Modern Build Techniques: Multistage Builds

To keep production images small and secure, multistage builds are a best practice. A multistage build uses multiple FROM instructions in a single Dockerfile.

This allows you to use a large "build" image (e.g., one containing compilers, SDKs, and build tools) to compile the application, and then copy only the final binary into a small, minimal "production" image (e.g., a slim or Alpine-based image) that does not contain any of the build-time dependencies. This drastically reduces the size and attack surface of the final image.
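The pattern can be sketched for a small Go program (the module layout and binary name are illustrative):

```dockerfile
# Stage 1: large "build" image with the full Go toolchain
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
# Produce a statically linked binary so it runs on a minimal base
RUN CGO_ENABLED=0 go build -o /bin/myapp .

# Stage 2: minimal "production" image; no compiler or SDK included
FROM alpine:3.20
COPY --from=builder /bin/myapp /usr/local/bin/myapp
CMD ["myapp"]
```

Only the final `FROM` stage becomes the shipped image; the builder stage and everything in it are discarded after the `COPY --from=builder` line extracts the binary.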

Phase 2: Ship (Testing and Distribution)

The "Ship" phase involves validating the built artifact and pushing it to a central location.

Testing the Immutable Artifact

Docker itself is not a testing framework, but it provides a superior environment for testing. Its core advantage is that it allows you to test the exact artifact that will be deployed.

This practice finally solves the "it works on my machine" problem, as tests are no longer run against source code in a development environment. They are run against the final, immutable container image.

A Common Test Workflow:

  1. Build: A Dockerfile is used to build an image (e.g., myapp:test-v1).

  2. Run Tests: A docker container run command is executed, using the new image but overriding its default command (CMD) to run the test suite instead.

  3. Get Result: The container runs the tests and then exits. The container's exit status (0 for success, non-zero for failure) is captured by the CI/CD system to determine whether the tests passed.

  4. Manage Dependencies: For tests that require external dependencies (like a database or cache), Docker Compose can be used to orchestrate a complete, multi-container testing environment that accurately mirrors production.
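In a CI script, steps 1 through 3 above might be sketched as follows (the image name and the `npm test` command are illustrative; any test runner works the same way):

```shell
# 1. Build the candidate image
docker image build -t myapp:test-v1 .

# 2. Run the test suite by overriding the image's default CMD;
#    --rm removes the container once it exits
docker container run --rm myapp:test-v1 npm test

# 3. The container's exit status propagates to the shell, so the
#    CI system can branch on it directly
if [ $? -eq 0 ]; then
  echo "tests passed"
else
  echo "tests failed"
  exit 1
fi
```

Because the test command exits non-zero on failure and that status propagates through `docker container run`, most CI systems need no extra integration: the job simply fails when the container does.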

Distribution (The Registry)

Once an image has passed all tests, it is "shipped" by being pushed to an image registry (like Docker Hub, Harbor, or a cloud provider's registry). The registry serves as the centralized "source of truth" and the hand-off point between the build/test pipeline and the deployment process.
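Shipping amounts to tagging the verified image for the target registry and pushing it; in this sketch the registry host and repository path are placeholders:

```shell
# Tag the image with the registry's address and repository path
docker image tag myapp:v1.2.1 registry.example.com/team/myapp:v1.2.1

# Authenticate, then push; only layers the registry does not
# already have are uploaded
docker login registry.example.com
docker image push registry.example.com/team/myapp:v1.2.1
```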

Phase 3: Run (Deploying the Container)

Deployment is the final phase, where the verified image is pulled from the registry and executed on one or more production servers.

Single-Host vs. Multi-Host (Orchestration)

  • Single-Host: On a single server, the docker command-line tool is sufficient to pull and run containers.

  • Multi-Host (Cluster): To deploy applications at scale across a cluster of servers, an orchestration platform like Kubernetes or Docker Swarm is required.

The most powerful feature of an orchestrator is scheduling—the process of automatically placing containers on the best-fit host based on resource availability and other constraints.

Declarative vs. Imperative Deployment

  • Imperative: Manually running commands like docker container run. This is suitable for development but is not a repeatable, version-controlled, or scalable deployment method for production.

  • Declarative (Best Practice): This is the model used by Kubernetes. The user defines the desired state of the application (e.g., "I want 3 replicas of myapp:v1.2.1 running and exposed on port 80") in a YAML manifest file. This manifest is stored in version control (Git). The orchestrator then continuously works to reconcile the actual state of the cluster with the desired state in the file.
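The desired state described above can be expressed as a Kubernetes Deployment manifest, checked into Git like any other source file (the names and labels here are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                  # desired state: three running copies
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:v1.2.1  # the immutable artifact from the registry
          ports:
            - containerPort: 80
```

Applying this file (e.g., with `kubectl apply -f`) hands the reconciliation loop to the orchestrator: if a Pod dies, Kubernetes replaces it to keep the actual state at three replicas.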

Deployment Strategies

Orchestrators like Kubernetes provide several strategies for rolling out a new version of an application, ensuring zero-downtime updates:

  • Rolling Update (Default): This is the most common strategy. It gradually replaces old containers (Pods in Kubernetes) with new ones, one by one or in batches. It waits for the new container to be "healthy" before shutting down an old one.

  • Recreate: This strategy simply destroys all old containers at once and then creates all the new ones. This results in brief downtime and is generally not preferred.

  • Blue/Green Deployment: A complete, new deployment of the application ("Green") is launched in parallel with the old version ("Blue"). Once the "Green" deployment is verified, traffic is switched over instantly (e.g., at the load balancer level).

  • Canary Deployment: A small number of new containers ("canaries") are exposed to a subset of production traffic. This allows for monitoring their behavior in a real-world environment. If no errors occur, the rollout is gradually expanded.
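In Kubernetes, the choice between the first two strategies is declared directly in the Deployment spec. A rolling update can be tuned with two knobs, sketched here with illustrative values:

```yaml
spec:
  strategy:
    type: RollingUpdate        # or "Recreate"
    rollingUpdate:
      maxUnavailable: 1        # at most one old Pod down at a time
      maxSurge: 1              # at most one extra Pod during the update
```

Blue/green and canary rollouts are not built-in strategy types; they are typically implemented on top of the primitives above using labels, Services, or traffic-management tooling.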