Principles of Congestion Control

Congestion occurs when too many sources send data too quickly for the network's routers and links to handle. Simply retransmitting lost packets doesn't solve the problem—it only adds more traffic.

Effective congestion control aims to manage the sender's transmission rate to prevent network overload.

Causes and Costs of Congestion

The negative effects of congestion become worse as network load increases.

Scenario 1: Two Senders, Infinite Buffers

  • Setup: Two hosts in a single-hop path send data through a single router with infinite buffer space and a link capacity of R.

  • Outcome: As the combined sending rate approaches R, packets are queued faster than they can be sent.

  • Key Cost: Unbounded Queuing Delay. The queuing delay grows without bound, rendering the network unusable even though no packets are lost.
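The effect in Scenario 1 can be sketched with a toy fluid model of the router's queue; the rates and durations below are illustrative values, not from the original text:

```python
# Sketch: queue growth at a router whose combined arrival rate exceeds
# the outgoing link capacity R. All numbers are made-up illustrations.

def queue_length_over_time(arrival_rate, service_rate, seconds):
    """Return the queue length after each second, assuming fluid arrivals."""
    queue = 0.0
    history = []
    for _ in range(seconds):
        queue += arrival_rate              # packets arriving this second
        queue -= min(queue, service_rate)  # at most R packets drained per second
        history.append(queue)
    return history

# Combined sending rate just above capacity: the queue grows without bound.
print(queue_length_over_time(arrival_rate=110, service_rate=100, seconds=5))
# Combined rate below capacity: the queue stays empty.
print(queue_length_over_time(arrival_rate=90, service_rate=100, seconds=5))
```

Even a small excess over R (here 10%) makes the backlog, and therefore the queuing delay, grow linearly forever, while any rate below R leaves the queue empty.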

Scenario 2: Two Senders, Finite Buffers

  • Setup: The same as above, but the router's buffer space is limited.

  • Outcome: When the sending rate is too high, the buffer overflows, causing packet loss. Senders then retransmit the lost packets.

  • Key Costs:

    • Unnecessary Retransmissions: Due to timeouts, senders might prematurely re-transmit packets that were only delayed, not lost.

    • Wasted Bandwidth: The link is used to carry duplicate copies of packets that were already successfully delivered.

    • Reduced Goodput: As more of the link's capacity is consumed by retransmitted duplicates, the throughput of original (first-time) data delivery drops.

Scenario 3: Four Senders, Multi-Hop Paths

  • Setup: Multiple senders communicating over paths that span several routers.

  • Outcome: A packet may be successfully forwarded through several links only to be dropped at a congested bottleneck router near its destination.

  • Key Cost: Wasted Upstream Bandwidth. All the transmission capacity used to carry a packet across the initial hops is wasted if it's dropped before reaching its final destination.

TCP Congestion Control

TCP uses an end-to-end congestion control mechanism. The TCP sender infers network congestion from observed events (such as packet loss) and adjusts its sending rate accordingly.

The data sending rate is governed by a variable called the Congestion Window (cwnd), which limits the amount of unacknowledged data a sender can have in flight.

The algorithm operates in several phases, using a threshold variable (ssthresh) to switch between them.

1. Slow Start

  • Goal: To quickly probe for available network bandwidth without causing immediate congestion.

  • The connection begins with a small cwnd (typically 1 MSS). For every acknowledged segment, cwnd increases by one MSS, resulting in exponential growth: cwnd roughly doubles every RTT.

  • Exit Condition: This phase ends when the cwnd reaches the ssthresh value or when a packet loss event is detected.

2. Congestion Avoidance

  • Goal: To slowly increase the sending rate once the network capacity is nearly reached.

  • Once cwnd reaches the ssthresh value set during slow start, TCP enters Congestion Avoidance. Here, cwnd increases by only one MSS per RTT, resulting in slow linear growth.

3. Reacting to Packet Loss

TCP's reaction depends on how the loss was detected:

  • On a Timeout Event (Major Congestion): This indicates a significant loss event.

    • ssthresh is set to half of the current cwnd.
    • cwnd is reset to 1 MSS.
    • The sender re-enters Slow Start.
  • On a Triple Duplicate ACK (Minor Congestion): This signals a single lost packet while other data is still getting through. This triggers Fast Recovery.

    • ssthresh is set to half of the current cwnd.
    • cwnd is cut to half its previous value rather than reset to 1 MSS.
    • The sender retransmits the lost packet and enters Congestion Avoidance once it receives a new acknowledgment.

TCP Tahoe vs. TCP Reno

This difference in reacting to triple duplicate ACKs defines the classic TCP versions:

  • TCP Tahoe: Treats any loss event (timeout or triple duplicate ACK) the same way: it resets cwnd to 1 and enters Slow Start.

  • TCP Reno: Implements Fast Recovery. It avoids a full reset on triple duplicate ACKs, allowing it to recover from minor packet loss much more quickly.
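The phases and loss reactions above can be sketched as a small per-RTT state machine. The loss schedule and the initial ssthresh of 8 MSS are invented illustrative values, and real TCP stacks track cwnd in bytes with additional subtleties:

```python
# Sketch: per-RTT evolution of cwnd (in MSS units) for TCP Tahoe vs. Reno.
# Initial ssthresh and the loss-event schedule are made-up illustrations.

def next_cwnd(cwnd, ssthresh, loss, reno):
    """Apply one RTT of the state machine described above."""
    if loss == "timeout":
        return 1, max(cwnd // 2, 2)               # both variants: back to slow start
    if loss == "3dupack":
        if reno:
            half = max(cwnd // 2, 2)
            return half, half                      # fast recovery: halve, skip slow start
        return 1, max(cwnd // 2, 2)               # Tahoe: treated like a timeout
    if cwnd < ssthresh:
        return min(cwnd * 2, ssthresh), ssthresh  # slow start: double per RTT
    return cwnd + 1, ssthresh                      # congestion avoidance: +1 MSS per RTT

def trace(events, reno):
    cwnd, ssthresh = 1, 8
    out = [cwnd]
    for loss in events:
        cwnd, ssthresh = next_cwnd(cwnd, ssthresh, loss, reno)
        out.append(cwnd)
    return out

events = [None, None, None, None, "3dupack", None, None]
print("Tahoe:", trace(events, reno=False))  # Tahoe: [1, 2, 4, 8, 9, 1, 2, 4]
print("Reno: ", trace(events, reno=True))   # Reno:  [1, 2, 4, 8, 9, 4, 5, 6]
```

The traces make the difference concrete: after the triple duplicate ACK, Tahoe falls back to cwnd = 1 and must slow-start again, while Reno resumes from half its previous window and keeps growing linearly.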

Virtual Circuit vs. Datagram Networks

The network layer provides a host-to-host communication service. This service can be implemented using one of two fundamental models: connection-oriented virtual-circuit networks or connectionless datagram networks.

The key difference lies in where the "connection" state is maintained:

  • Transport-Layer Connection (e.g., TCP): State is maintained only in the end systems. Routers are unaware of the connection.

  • Network-Layer Connection (Virtual Circuit): State is maintained both in the routers along the path and the end systems.

| Feature | Virtual-Circuit (VC) Networks | Datagram Networks |
|---|---|---|
| Service Type | Connection-oriented | Connectionless |
| Setup Phase | Required (VC setup via signaling) | Not required |
| Router State | Routers maintain per-connection state | Routers are stateless |
| Packet Forwarding | Based on a small VC identifier (VCI) | Based on the full destination address |
| Path | All packets follow the same fixed path | Packets may take different paths |
| Examples | ATM, Frame Relay | The Internet (IP) |

Virtual-Circuit (VC) Networks

Virtual Circuit (VC) is a connection-oriented service offered at the network layer. In a VC network, a predetermined path, or virtual circuit, is established before any data is sent.

A virtual circuit consists of a path through the network, a VC number for each link along the path, and corresponding forwarding-table entries in each router on that path.

  • Forwarding: Packets carry a small VC identifier (VCI). At each router, the VCI is looked up, swapped for a new outgoing VCI, and the packet is sent along the pre-established path.

  • Routers maintain connection state to track active VCs and corresponding output links.

  • 3 Phases of VC Lifecycle:

    1. VC Setup: The sender uses a signaling protocol to request a connection. The network establishes a path, assigns VC numbers for each hop and updates the forwarding tables in all routers along that path. May also involve resource (bandwidth) reservation.

    2. Data Transfer: Once the VC is established, data packets flow quickly using their small VCIs.

    3. VC Teardown: When the connection ends, a teardown message is sent from sender or receiver to clear the state from the routers.

  • Benefits: By establishing a fixed path, VCs can offer predictable, reliable service and Quality of Service (QoS) guarantees. Packet headers are also smaller, since forwarding uses a short VCI rather than a full address, and each router can choose the outgoing VC number independently.

  • Drawbacks: The required connection state and signaling make routers more complex and can limit network scalability.
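The VCI swapping described above can be sketched with a handful of per-router tables. The router names, interface numbers, and VC numbers below are invented for illustration; in a real network they are installed by the signaling protocol during VC setup:

```python
# Sketch: per-router VCI swapping along a pre-established virtual circuit.
# Routers, interfaces, and VC numbers are hypothetical illustrative values.

# Each router maps (incoming interface, incoming VCI) ->
# (outgoing interface, outgoing VCI), installed during VC setup.
forwarding_tables = {
    "R1": {(0, 12): (2, 22)},
    "R2": {(1, 22): (3, 32)},
    "R3": {(0, 32): (1, 18)},
}

# The (router, incoming interface) pairs fixed at setup time.
path = [("R1", 0), ("R2", 1), ("R3", 0)]

def forward(vci, path):
    """Follow the VC hop by hop, swapping the label at each router."""
    labels = [vci]
    for router, in_iface in path:
        _, vci = forwarding_tables[router][(in_iface, vci)]
        labels.append(vci)
    return labels

print(forward(12, path))  # the packet's VCI changes at every hop: 12 -> 22 -> 32 -> 18
```

Note that the packet never carries a global connection identifier: each label is meaningful only on one link, which is why routers can assign VC numbers independently.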

Datagram Networks

A datagram network provides a connectionless network-layer service; the Internet's IP protocol is the canonical example.

Each packet, or datagram, carries the full destination IP address and is treated as an independent unit, routed on its own at every hop.

  • No Connection Setup: Senders can transmit data immediately.

  • Forwarding: Every packet must contain the full destination address. Routers make an independent forwarding decision for each packet by looking up the destination address in their forwarding table. They use the longest prefix match rule to find the most specific route when multiple entries match a destination address.

  • Stateless Routers: Routers in a datagram network maintain forwarding tables but do not maintain any per-connection state. This makes them highly scalable and robust.

  • No Fixed Path: Because each packet is routed independently, different packets in the same flow may take different paths, potentially arriving out of order.
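The longest-prefix-match rule mentioned above can be sketched with Python's standard-library ipaddress module; the prefixes and link names are made-up illustrative values, and real routers use specialized data structures (such as tries) rather than a linear scan:

```python
# Sketch: longest-prefix-match lookup as a datagram router might perform it.
# The forwarding table below is a hypothetical illustration.
import ipaddress

# Forwarding table: prefix -> outgoing link. Note: no per-connection state.
table = {
    ipaddress.ip_network("200.23.16.0/21"): "link 0",
    ipaddress.ip_network("200.23.18.0/23"): "link 1",  # more specific subrange
    ipaddress.ip_network("0.0.0.0/0"): "link 2",       # default route
}

def lookup(dest):
    """Return the next hop for the most specific prefix matching dest."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return table[best]

print(lookup("200.23.19.7"))   # matched by both /21 and /23 -> link 1
print(lookup("200.23.20.1"))   # only the /21 matches       -> link 0
print(lookup("8.8.8.8"))       # falls through to the default -> link 2
```

Because every lookup considers only the destination address, the router needs no memory of which "flow" a packet belongs to, which is exactly what makes it stateless.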

Datagram networks are simple, scalable, and highly resilient: they adapt to topology changes such as router failures without disrupting ongoing flows.

Datagram routing enables stateless, distributed control, in contrast to the connection setup and per-path state that VC networks require.

  • Drawbacks: They do not inherently provide timing or delivery guarantees.
