
Packet Switching

Packet switching is the fundamental communication model used in the Internet. It allows messages from source end systems to be divided into smaller units called packets, which are then transmitted independently over a network to the destination end system.

Key characteristics:

  • Large messages are broken into packets.

  • Each packet typically includes:

    • A header (with destination address, sequence number, etc.)

    • A payload (a segment of the original message)

  • Packets travel independently through a network of communication links and packet switches (typically routers or link-layer switches).

  • The time to transmit a packet of L bits over a link with transmission rate R is L/R seconds.
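
A toy Python sketch may help make this concrete: it splits a message into fixed-size payloads, prepends a small header, and computes the L/R transmission time for one packet. The header fields, payload size, address, and link rate below are invented for illustration only.

```python
# Toy packetization sketch (illustrative only; real headers are binary and standardized).
def packetize(message: bytes, payload_size: int, dest_addr: str):
    packets = []
    for seq, start in enumerate(range(0, len(message), payload_size)):
        payload = message[start:start + payload_size]
        header = {"dest": dest_addr, "seq": seq, "len": len(payload)}  # made-up fields
        packets.append((header, payload))
    return packets

pkts = packetize(b"x" * 4000, payload_size=1500, dest_addr="203.0.113.7")
print(len(pkts))        # 3 packets: payloads of 1500, 1500, and 1000 bytes
print(1500 * 8 / 10e6)  # L/R = 0.0012 s to push one 1,500-byte packet onto a 10 Mbps link
```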

Store and Forward

In store-and-forward transmission, used in packet-switched networks, a router must receive an entire packet before forwarding it to the next node.

  • This introduces a delay at each hop.

  • For a packet of L bits over a link with R bits/sec, transmission time per link is L/R seconds.

  • With one router between source and destination (two links), the packet is fully received at the destination 2L/R seconds after the source begins transmitting: L/R to reach the router, plus another L/R to forward it.

  • If three packets are sent back-to-back over that same two-link path, the last one arrives at time 4L/R, since transmission on the two links is pipelined.

  • For N links (and N−1 routers), the end-to-end delay is NL/R.

Store-and-forward lets each router receive and check a complete packet before forwarding it, at the cost of one transmission delay (L/R) at every hop.
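
A small worked example of these timings, using assumed values of L = 1,500 bytes and R = 10 Mbps (any consistent numbers work the same way); the back-to-back case uses the general pipelining result (N + P − 1)·L/R, which gives 4L/R for three packets over two links.

```python
def single_packet_delay(n_links, L_bits, R_bps):
    """End-to-end store-and-forward delay for one packet over N links: N * L / R."""
    return n_links * L_bits / R_bps

def back_to_back_delay(p_packets, n_links, L_bits, R_bps):
    """The last of P back-to-back packets arrives at (N + P - 1) * L / R."""
    return (n_links + p_packets - 1) * L_bits / R_bps

L, R = 1500 * 8, 10e6                     # assumed packet size and link rate
print(single_packet_delay(2, L, R))       # one router, two links: 2L/R = 0.0024 s
print(back_to_back_delay(3, 2, L, R))     # three packets, two links: 4L/R = 0.0048 s
```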

Queuing Delays and Packet Loss

In packet-switched networks, output buffers (queues) are present for each router’s outgoing link.

  • If the link is busy, arriving packets are queued, introducing queuing delay.

  • Delay varies with network congestion.

  • If the buffer is full when a new packet arrives, packet loss occurs. The arriving packet may be dropped or an older one discarded.

  • Common during high traffic or bursty transmissions that exceed the link's capacity.

  • Example: Two hosts sending data over a shared 1.5 Mbps link may congest the router.

Analogy: Like waiting in line at a bank or tollbooth—the more crowded, the longer the wait or risk of being turned away.
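
As a rough illustration of queuing and loss (not a model of any real router), the sketch below feeds a made-up bursty arrival pattern into a single output buffer that holds four packets and drains one packet per time unit; arrivals that find the buffer full are dropped.

```python
BUFFER_SLOTS = 4                               # assumed buffer size
arrivals_per_tick = [0, 3, 5, 1, 0, 4, 0, 0]   # made-up bursty arrival pattern

queue = dropped = sent = 0
for arriving in arrivals_per_tick:
    for _ in range(arriving):
        if queue < BUFFER_SLOTS:
            queue += 1        # packet waits in the output buffer (queuing delay)
        else:
            dropped += 1      # buffer full -> packet loss
    if queue:                 # the link transmits one packet per tick
        queue -= 1
        sent += 1

print(f"sent={sent}, dropped={dropped}, still queued={queue}")  # sent=7, dropped=5, still queued=1
```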

Circuit Switching

Circuit switching is another fundamental method for transmitting data, used alongside packet switching.

  • A dedicated path with reserved resources (bandwidth, buffers) is set up for the entire communication session.

  • Once the circuit is established, transmission occurs at a constant rate.

  • Traditional telephone networks are based on circuit switching.

    • Each call sets up a fixed-rate connection across intermediate switches.

    • For example, with 1 Mbps links divided into 4 circuits, each gets 250 kbps.

This method ensures reliability and predictability, but can be inefficient when resources are unused during silence or inactivity.

Three Phases of Circuit Switching:

  1. Connection Establishment

  2. Data Transfer

  3. Connection Teardown (Disconnection)

Multiplexing in Circuit-Switched Networks

Circuit-switched links can carry multiple connections using multiplexing:

1. Frequency-Division Multiplexing (FDM)

  • The frequency spectrum is split into bands.

  • Each connection gets a dedicated frequency band.

  • Used in traditional phone systems (4 kHz per voice call), FM radio (88–108 MHz).

2. Time-Division Multiplexing (TDM)

  • Time is divided into frames; each frame is split into time slots.

  • Each connection gets a fixed time slot per frame.

  • Example: a link transmitting 8,000 frames per second, with 8 bits per time slot, gives each circuit 8,000 × 8 = 64,000 bits/sec = 64 kbps.
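
The per-circuit rates quoted above follow directly from how the link is shared; a quick check with the numbers from the two examples:

```python
# FDM: a 1 Mbps link split into 4 frequency bands.
link_rate_bps, circuits = 1.0e6, 4
print(link_rate_bps / circuits)            # 250,000 bps = 250 kbps per circuit

# TDM: 8,000 frames per second with 8 bits per slot.
frames_per_sec, bits_per_slot = 8000, 8
print(frames_per_sec * bits_per_slot)      # 64,000 bps = 64 kbps per circuit
```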

Drawbacks of Circuit Switching:

  • Resource underutilization: reserved resources (time slots or frequency bands) remain dedicated to the connection and sit idle during silence or inactivity.

    Example: In a voice call or a radiologist’s session, when there’s no data transfer (e.g., during pauses), the allocated bandwidth goes unused.

  • Complex signaling: setting up, maintaining, and tearing down dedicated circuits requires control signaling along the entire path.

Packet Switching vs Circuit Switching

Feature | Packet Switching | Circuit Switching
Definition | Breaks data into packets sent independently | Establishes a dedicated path for the entire session
Resource Allocation | Dynamic, on demand, shared among users | Pre-allocated for the full session
Efficiency | High (resources used only when data is sent) | Low during idle times (reserved resources may go unused)
Data Transfer Path | Varies per packet | Fixed dedicated path for the session
Setup Time | None or minimal | Required before data transfer
Delay Characteristics | Variable; can be high under load | Low and predictable once the circuit is set up
Example Usage | Internet, email, file transfer | Telephone networks, dedicated VoIP lines
Robustness | High; can reroute around failures | Low; a failure on the circuit breaks the connection
Scalability | Highly scalable, even with bursty traffic | Less scalable due to fixed resource allocation
Cost | Typically lower due to shared usage | Typically higher due to resource reservation

Network of Networks

The Internet is a network of networks, where multiple types of Internet Service Providers (ISPs) interconnect to form the global infrastructure.

Hierarchical Internet Structure:

  • Access ISPs:

    • Connect end users (e.g., homes, businesses, universities) to the Internet.

    • Cannot directly connect to every ISP worldwide.

  • Regional ISPs:

    • Aggregate traffic from multiple access ISPs within a geographic region.

    • Provide a bridge between access ISPs and Tier-1 networks.

  • IXPs (Internet Exchange Points):

    • Neutral facilities where ISPs exchange traffic (peering).

    • Reduce latency and costs by enabling direct peering.

  • Tier-1 ISPs (Global Transit Providers):

    • Core backbone of the Internet.

    • Peer with each other and offer global routing.

    • Examples: Level 3 (now Lumen), AT&T, NTT.

Delay, Loss, and Throughput in Packet-Switched Networks

As a packet travels through a network, it encounters multiple types of delays at each node (router):

1. Processing Delay

  • Time to inspect packet header and perform error checking.

  • Typically a few microseconds.

2. Queuing Delay

  • Time spent in the router’s queue before being transmitted.

  • Depends on traffic intensity; can vary from microseconds to milliseconds.

3. Transmission Delay

  • Time to push all packet bits onto the link.

  • Transmission delay = L / R

    • L = packet length (bits)

    • R = transmission rate (bps)

4. Propagation Delay

  • Time for the bits to travel across the link.

  • Propagation delay = d / s

    • d = distance

    • s = propagation speed (typically 2×10⁸ to 3×10⁸ m/s)

Total nodal delay:
d_nodal = d_proc + d_queue + d_trans + d_prop
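
A worked example of the nodal-delay formula with assumed numbers (a 1,500-byte packet, a 10 Mbps link, 100 km of fiber at 2×10⁸ m/s, and illustrative processing and queuing delays):

```python
L = 1500 * 8            # packet length in bits
R = 10e6                # link transmission rate in bits/sec
d_proc  = 2e-6          # processing delay (a few microseconds, assumed)
d_queue = 1e-3          # queuing delay (varies with congestion; assumed here)
d_trans = L / R         # transmission delay = 1.2 ms
d_prop  = 100e3 / 2e8   # propagation delay over 100 km = 0.5 ms

d_nodal = d_proc + d_queue + d_trans + d_prop
print(f"{d_nodal * 1000:.3f} ms")   # 2.702 ms total nodal delay
```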

Queuing Delay and Traffic Intensity

Queuing delay is the most complex and variable delay component and depends on traffic intensity:

  • Traffic intensity = La / R

    • L = packet length (bits)

    • a = average arrival rate (packets/sec)

    • R = link transmission rate (bps)

As La/R approaches 1, queuing delays increase rapidly and can lead to packet loss.

Unlike other delays (processing, transmission, propagation), queuing delay changes for each packet based on network traffic conditions. For example, when multiple packets arrive simultaneously, the first may experience no delay, while others wait their turn.
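
A quick calculation with made-up numbers shows how easily La/R can approach 1:

```python
def traffic_intensity(L_bits, a_pkts_per_sec, R_bps):
    """La/R: average bit-arrival rate divided by the link's transmission rate."""
    return (L_bits * a_pkts_per_sec) / R_bps

# Assumed load: 1,500-byte packets arriving at 800 packets/sec on a 10 Mbps link.
print(traffic_intensity(1500 * 8, 800, 10e6))   # 0.96 -- close to 1, so queuing delay grows sharply
```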

Packet Loss

In real-world networks, queues have limited capacity, unlike the theoretical assumption of infinite queues. When a packet arrives and the queue is already full, the router drops the packet, resulting in packet loss.

  • Routers have limited buffer space, and cannot store all incoming traffic under congestion.

  • Packet loss occurs when there is no available buffer space to queue an incoming packet.

  • As traffic intensity (expressed as La/R) increases, the likelihood of packet loss also increases.

  • In theory, high traffic leads to infinite delay; in practice, routers drop excess packets instead.

  • From the end system’s perspective, a lost packet appears to have vanished in the network.

  • Packet loss, along with delay, is a key metric for evaluating network performance.

  • Reliable transport protocols like TCP:

    • Detect lost packets using acknowledgments (ACKs) and timeouts.
    • Automatically retransmit the lost packets to recover from loss.
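
The sketch below is a minimal stop-and-wait illustration of the acknowledgment-and-timeout idea, written over UDP for simplicity. It is not TCP itself (TCP adds sequence numbers, sliding windows, adaptive timers, congestion control, and more), and the address, timeout, and retry count are arbitrary.

```python
import socket

def send_reliably(data: bytes, dest=("127.0.0.1", 9999), timeout=0.5, max_tries=5):
    """Send one packet and retransmit until an ACK arrives (toy stop-and-wait)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)                 # timer used to detect a presumed loss
    for attempt in range(1, max_tries + 1):
        sock.sendto(data, dest)              # (re)transmit the packet
        try:
            ack, _ = sock.recvfrom(1024)     # wait for an acknowledgment
            if ack == b"ACK":
                return attempt               # delivered after `attempt` tries
        except socket.timeout:
            continue                         # no ACK in time: assume loss and retry
    raise RuntimeError("gave up after repeated losses")
```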

End-to-End Delay

Nodal delay occurs at a single router.

End-to-end delay is the total time a packet takes to reach from source to destination, across all routers.

Assumptions (Simplified Model):

  • N links between source and destination (i.e., N−1 routers along the path).

  • No congestion (ignore queuing delay).

  • Equal packet size L, transmission rate R, and delay components.

Formula:

  • d_end-to-end = N × (d_proc + d_trans + d_prop)

    • d_trans = L / R
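
A worked example of the simplified end-to-end formula, with assumed values (3 links, a 1,500-byte packet, 10 Mbps links, 2 µs processing per node, 1 ms propagation per link):

```python
def end_to_end_delay(N, L, R, d_proc, d_prop):
    """d_end-to-end = N * (d_proc + L/R + d_prop), ignoring queuing delay."""
    return N * (d_proc + L / R + d_prop)

print(end_to_end_delay(N=3, L=1500 * 8, R=10e6, d_proc=2e-6, d_prop=1e-3))  # ~0.0066 s
```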

Jitter in Computer Networks

Jitter refers to variability in packet delay between successive packets.

  • Important in real-time applications like VoIP, gaming, and video conferencing.

  • High jitter causes packets to arrive at irregular intervals (and sometimes out of order), degrading the user experience.
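
One simple way to quantify jitter is the average absolute difference between successive packet delays; the sketch below applies that definition to a made-up list of one-way delays (real tools often use smoothed estimators instead).

```python
def jitter_ms(delays_ms):
    """Mean absolute variation between successive packet delays (a simple jitter estimate)."""
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

print(jitter_ms([40, 42, 55, 41, 60]))   # 12.0 ms of jitter for these sample delays
```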

Transmission Modes

Communication between two devices can occur in one of three modes:

  1. Simplex

    • Data flows in one direction only.

    • Example: Television broadcast

  2. Half-Duplex

    • Data flows in both directions, but only one direction at a time.

    • Example: Walkie-talkies, police radios

  3. Full-Duplex

    • Data flows in both directions simultaneously.

    • Example: Telephone conversations
