Container Networking

Container networking is managed by a set of drivers that leverage the host's kernel features to connect containers to each other and to the outside world. The core isolation is provided by Linux network namespaces, which give each container its own dedicated network stack (interfaces, IP addresses, routing tables, and port space).

Default Configuration: The bridge Driver

By default, Docker configures containers to use the bridge network driver. This creates a private virtual network inside the host.

  1. The docker0 Interface: When Docker is installed, it creates a virtual Ethernet bridge on the host named docker0. This bridge acts as a virtual switch for all containers attached to it.

  2. Private IP Allocation: Docker allocates a private IP subnet (from the RFC 1918 private address ranges, 172.17.0.0/16 by default) to the docker0 bridge. Each container on this bridge is then assigned its own private IP address from this subnet (e.g., 172.17.0.2, 172.17.0.3, etc.), with the bridge itself taking the first address (172.17.0.1) as the containers' default gateway.

  3. Virtual Ethernet Pairs (veth): To connect a container to the bridge, Docker creates a pair of virtual Ethernet interfaces. One interface is placed inside the container's network namespace (e.g., eth0), and the other is attached to the docker0 bridge on the host. These act as a virtual patch cable.
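The plumbing described above can be inspected directly on a Linux Docker host. This is a sketch, assuming Docker is installed and at least one container is running; interface names and the alpine image are illustrative:

```shell
# On the host: show the docker0 bridge and the host-side veth interfaces
ip link show docker0
ip link show type veth

# Inspect the default bridge network (subnet, gateway, attached containers)
docker network inspect bridge

# Inside a container, the container-side end of the veth pair appears as eth0
docker run --rm alpine ip addr show eth0
```

Each `veth...` interface listed on the host is the peer of an `eth0` inside some container's network namespace.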

Communication Flow in Bridge Mode

  • Container-to-Container: By default, all containers on the docker0 bridge can communicate with each other freely using their private IP addresses.

  • Container-to-External World (Outbound): For outbound traffic, Docker creates a masquerading rule (SNAT) using iptables (or nftables) on the host. When a container sends a packet to an external destination, the host's network stack replaces the container's private source IP with the host's public IP address. To the outside world, the traffic appears to be coming directly from the Docker host.

  • External World-to-Container (Inbound): To allow inbound traffic to a container, you must explicitly "publish" or "map" a port using the -p or --publish flag (e.g., docker run -p 8080:80 ...). This tells Docker to forward traffic from a port on the host to a port inside the container.

This forwarding is handled by two components:

  1. docker-proxy: This is a user-space process that Docker creates for each port mapping. It listens on the specified host port (e.g., 8080) and forwards any incoming traffic to the container's private IP and port (e.g., 172.17.0.2:80).

  2. iptables / nftables: Docker also creates Network Address Translation (NAT) rules. These kernel-level rules are more efficient than the proxy and handle most inbound traffic directly. The docker-proxy serves as a fallback for cases the NAT rules cannot cover, such as connections made from the host's own loopback interface.
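Both forwarding mechanisms are visible on the host after publishing a port. A minimal sketch, assuming a Linux host with Docker and iptables; the nginx image and the container name `web` are illustrative:

```shell
# Publish host port 8080 to container port 80
docker run -d --name web -p 8080:80 nginx

# Kernel-level NAT rules Docker installed for the mapping (DOCKER chain)
sudo iptables -t nat -L DOCKER -n

# The user-space fallback process listening on the host port
ps aux | grep docker-proxy
```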

User-Defined Bridge Networks

While docker0 is the default, it is a best practice to create user-defined bridge networks for applications.

docker network create my-app-network

Containers can then be attached to this custom network. User-defined bridges have a critical advantage over the default docker0 bridge:

  • Automatic Service Discovery: Containers on the same user-defined network can resolve each other's IP addresses using their container names, via an embedded DNS server that Docker runs for the network. This is far more robust than relying on hard-coded IP addresses, which can change when a container is restarted. (The default docker0 bridge does not offer name resolution; it supports only the deprecated --link mechanism.)
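Service discovery on a user-defined bridge can be demonstrated end to end. A sketch, assuming a running Docker daemon; the network name, container names, and images (redis, alpine) are illustrative:

```shell
# Create a user-defined bridge network
docker network create my-app-network

# Start two containers attached to it
docker run -d --name db  --network my-app-network redis
docker run -d --name app --network my-app-network alpine sleep 3600

# Name-based resolution via Docker's embedded DNS
# (reachable inside each container at 127.0.0.11)
docker exec app ping -c 1 db
```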

Other Network Drivers and Modes

Docker provides several other network drivers for different use cases.

host Mode

This mode disables network isolation entirely. A container started with --network host (or the legacy --net=host) does not get its own network namespace.

  • Function: The container shares the host's network stack directly.

  • Implications:

    • The container uses the host's IP address and port space.

    • No virtual interfaces or docker-proxy are created, offering a slight performance advantage by bypassing the network abstraction.

    • Significant Security Risk: The container loses all network isolation. It can access any service running on the host's loopback interface and may collide with ports already in use by the host.
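Host mode in practice, as a sketch (the nginx image is illustrative; this only works if port 80 on the host is free):

```shell
# The container binds directly into the host's port space; no -p flag is
# needed (port publishing is ignored in host mode)
docker run -d --name web-host --network host nginx

# nginx is now reachable on the host's own addresses, port 80
curl http://localhost:80/
```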

none Mode

This mode provides complete network isolation.

  • Function: The container is given its own network namespace but is not attached to any network.

  • Implications:

    • The container only has a loopback interface (lo).

    • It is "air-gapped" and cannot communicate with other containers or the external world.

    • This is useful for secure, batch-processing workloads that only need to perform computations and write to disk (via a mounted volume).
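The isolation of none mode is easy to verify. A sketch, using the illustrative alpine image:

```shell
# Only a loopback interface (lo) exists inside the container
docker run --rm --network none alpine ip addr show

# Outbound traffic fails: there is no route to the outside world
docker run --rm --network none alpine ping -c 1 8.8.8.8 \
  || echo "no network, as expected"
```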

overlay Driver

This driver is designed for multi-host networking.

  • Function: It creates a distributed, private network that "overlays" on top of the host network, allowing containers on different Docker hosts to communicate securely and directly.

  • Use Case: This is the primary driver used in Docker Swarm mode to enable communication between services in a cluster.
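A minimal overlay setup, sketched for a single-node Swarm; the network and service names are illustrative:

```shell
# Initialize Swarm mode (required for overlay networks)
docker swarm init

# Create an overlay network; --attachable also lets standalone
# containers (not just services) join it
docker network create -d overlay --attachable my-overlay

# Service tasks on any node in the swarm can now communicate over it
docker service create --name web --network my-overlay nginx
```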

macvlan Driver

This advanced driver allows a container to appear as a physical device on the host's network.

  • Function: It assigns a real MAC address to each container and bridges it directly to the host's physical network interface.

  • Use Case: Useful for legacy applications that expect to be directly on the physical network or for network monitoring tools that need to inspect traffic.
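A macvlan network is created against a physical parent interface. A sketch only: the subnet, gateway, parent interface (eth0), and the IP assigned to the container must be adjusted to match your actual LAN:

```shell
# Create a macvlan network bound to the host's physical NIC
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 my-macvlan

# The container gets its own MAC address and a LAN-visible IP,
# appearing on the network like a separate physical machine
docker run --rm --network my-macvlan --ip 192.168.1.50 alpine ip addr show eth0
```

Note that, by design, macvlan traffic between the container and the host itself is typically blocked; other machines on the LAN can reach the container directly.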

Network Management and Best Practices

  • Command: The docker network subcommand (e.g., docker network ls, docker network create, docker network connect) is used to manage networks.

  • Dynamic Mapping: Applications should be designed to rely on the platform (Docker or an orchestrator) to map ports dynamically, rather than hard-coding port numbers.

  • Protocol Support: Protocols that map random ports for return traffic (like some modes of FTP or RTSP) are difficult to support in a containerized platform and should be avoided.

  • Advanced Networking: For complex networking policies and security in large-scale environments (such as Kubernetes), more advanced networking solutions are often used, which conform to the Container Network Interface (CNI) standard. Examples include Project Calico and Cilium.