(lec 2) CS 536 Park Network Performance - Flashcards

Architecture and design philosophy

  • Networks are designed with performance in mind; speed is at a premium, and slow networks often go unused for practical tasks
    • Cryptographic protocols may be turned off at routers due to overhead
  • End-to-end paradigm / lightweight core
    • Push heavyweight processing toward the edge (hosts/servers) and keep the core lightweight
    • This has guided Internet design and evolution historically; many alternative approaches have been tried and failed
  • Practical implication: system design aims to minimize cost and overhead while maximizing usable performance at the edge of the network

Performance yardsticks

  • Bandwidth (bps)
    • Defined as the raw capacity of the physical medium, measured at the physical layer in bits per second
    • Represents link bandwidth, independent of contention, software overhead, or protocol overhead
  • Throughput (bps)
    • Actual data delivered per second end-to-end
    • Includes overhead from software layers: firmware in NICs, device drivers in OS, user-space overhead in applications
    • In practice, application and user-space overhead slow delivery further below the raw link bandwidth
  • Latency and delay (ms)
    • Latency: propagation delay plus processing and buffering delay (queueing)
    • Propagation delay depends on distance and signal speed
    • Processing delay includes router/switch computation; buffering delay arises from queues
  • Jitter
    • Variation in delay across packets
    • Even when the average delay is small, large delay spikes can degrade multimedia performance (see the measurement sketch below)
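
A minimal measurement sketch contrasting these yardsticks, using hypothetical per-packet delay samples and an assumed overhead factor (all numbers are illustrative, not from the lecture):

```python
from statistics import mean

# Hypothetical one-way delay samples (ms) for ten packets of the same flow.
delays_ms = [21.0, 22.5, 20.8, 45.2, 21.3, 22.1, 60.4, 21.7, 22.0, 21.5]

avg_delay = mean(delays_ms)
jitter = max(delays_ms) - min(delays_ms)  # one simple jitter measure: delay spread

print(f"average delay: {avg_delay:.1f} ms")
print(f"jitter (spread): {jitter:.1f} ms")  # large even though the average looks fine

# Throughput vs. bandwidth: a 10 Mbps link never delivers a full 10 Mbps
# end-to-end; firmware, drivers, and protocol/user-space overhead eat into it.
bandwidth_bps = 10e6   # raw link bandwidth
efficiency = 0.85      # assumed aggregate overhead factor (illustrative)
throughput_bps = bandwidth_bps * efficiency
print(f"usable throughput: {throughput_bps / 1e6:.1f} of {bandwidth_bps / 1e6:.0f} Mbps")
```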

Meaning of “high-speed” networks

  • Propagation speed bound: the speed of light (SOL) sets a fundamental limit on how fast a single bit can move
    • Approximate signal propagation velocity: v \approx 1.86\times 10^5\ \text{miles/s} \approx 3\times 10^5\ \text{km/s}
    • Actual signal velocity in media (optical fiber, copper) is somewhat slower than the ideal SOL, so this bound holds independent of the medium
  • Lower bound latency example (Purdue to West Coast)
    • Distance: about d = 2000\ \text{miles}
    • One-way propagation lower bound: t_{\text{prop}} \ge \frac{d}{v} = \frac{2000}{186{,}000} \approx 0.0108\ \text{s} \approx 10\ \text{ms}
  • Geostationary satellite latency example
    • Distance (ground to satellite): d \approx 22{,}200\ \text{miles}
    • Ground-to-satellite propagation: t_{\text{prop}} \approx \frac{d}{v} \approx \frac{22{,}200}{186{,}000} \approx 0.119\ \text{s} \approx 120\ \text{ms}
    • End-to-end one-way latency (up plus down) ≈ 240 ms; a full round trip ≈ 480 ms, roughly half a second
  • Fundamental consequence for applications
    • Some latency is unavoidable due to physics; this constrains interactive and real-time applications (the calculations below make these bounds concrete)
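
The two latency bounds above reduce to one line of arithmetic each; a quick sketch using the distances given in the notes:

```python
V_MILES_PER_S = 186_000  # propagation speed bound (SOL), miles per second

def prop_delay_ms(distance_miles: float) -> float:
    """Lower bound on one-way propagation delay, in milliseconds."""
    return distance_miles / V_MILES_PER_S * 1000

print(f"Purdue -> West Coast (~2000 mi): {prop_delay_ms(2000):.1f} ms")  # ~10.8 ms
ground_to_sat = prop_delay_ms(22_200)
print(f"ground -> GEO satellite: {ground_to_sat:.0f} ms")                # ~120 ms
print(f"one-way via satellite (up + down): {2 * ground_to_sat:.0f} ms")  # ~240 ms
print(f"round trip via satellite: {4 * ground_to_sat:.0f} ms")           # ~480 ms
```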

Meaning of high-speed (interpretation)

  • A single bit cannot move faster; speed increases by increasing bandwidth (bits per second)
    • Analogy: widening a highway increases throughput by adding more lanes
    • Also called broadband
  • Interpretation of high-speed ≈ many lanes; effect on completion time
    • For large files, higher speed shortens completion time
    • For small files, the fixed latency dominates, so the marginal benefit of extra bandwidth is small (the sketch below quantifies this)
  • Internet workload distribution
    • Most files are small; a minority are very large
    • The minority of large files consumes the bulk of network bandwidth
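
A sketch of why extra "lanes" help large transfers far more than small ones: completion time is roughly a fixed latency plus size over bandwidth, so for small files the latency term dominates (sizes, rates, and the 10 ms latency are all illustrative):

```python
def completion_time_s(size_bytes: float, bandwidth_bps: float,
                      latency_s: float = 0.010) -> float:
    """Completion time: fixed one-way latency plus serialization at the link rate."""
    return latency_s + size_bytes * 8 / bandwidth_bps

for size_bytes, label in [(10_000, "10 KB page"), (1_000_000_000, "1 GB file")]:
    t_slow = completion_time_s(size_bytes, 10e6)  # 10 Mbps
    t_fast = completion_time_s(size_bytes, 1e9)   # 1 Gbps: 100x the "lanes"
    print(f"{label}: 10 Mbps -> {t_slow:.3f} s, 1 Gbps -> {t_fast:.3f} s, "
          f"speedup {t_slow / t_fast:.0f}x")
```

With these numbers the 1 GB file speeds up by roughly the full 100x, while the 10 KB page improves by under 2x because the 10 ms latency floor dominates.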

Purdue backbone snapshot (illustrative network picture)

  • Purdue’s backbone network features a large set of nodes and interconnections (example device names and links shown in the transcript)
  • Link speeds observed in the snapshot include:
    • 1.54 Mbps
    • 10 Mbps
    • 155 Mbps
    • 1 Gbps (1 GigE)
    • 2 Gbps (2 GigE)
    • 10 Gbps (10 GigE)
  • Indicates a heterogeneous backbone with regional and campus links, ISP interfaces, and core aggregation
  • Document is labeled as “Data Network Version 1.3” and is a NOC copy; illustrates the diversity and scale of a university backbone

Level 3 backbone (Tier-1 ISP) context

  • Level 3 (www.level3.com) was a Tier-1 backbone provider; it was acquired by CenturyLink (now Lumen Technologies) in 2017
  • An (outdated) diagram shows a backbone speed of 10 Gbps, the same as the fastest links in the Purdue snapshot
  • In reality (as of the time of this material), faster backbone speeds already exist: 40 Gbps, 100 Gbps, and 400 Gbps

What is traveling on the wires?

  • Traffic mix on networks includes:
    • Bulk data (data, image, video, audio files)
    • Voice
    • Streaming video/audio
    • Real-time interactive data (e.g., games, social media interaction)
    • AI-related traffic
  • Most Internet traffic historically has been TCP-based bulk file transfers (data traffic)
  • Multimedia streaming has grown rapidly (YouTube, Netflix)
    • Real-time interactive services (VoIP, video conferencing, online gaming) are also significant
  • This mix is a driver for traffic management practices, including traffic shaping and “unlimited” data plan policies

Burstiness of traffic (example)

  • Example: MPEG-compressed real-time video exhibits strongly bursty traffic patterns
  • Visual intuition: MPEG exploits inter-frame dependencies, encoding occasional large intra-coded (I) frames with smaller predicted (P/B) frames between them, so the bit rate spikes periodically
  • Consequence: burstiness is challenging because bursts can exceed the average capacity a network provisions for, causing buffering or packet loss (a peak-to-mean sketch follows below)
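
One common way to quantify burstiness is the peak-to-mean rate ratio. A sketch over a hypothetical MPEG-like frame-size sequence (a large I-frame every 12 frames, small predicted frames in between; all sizes illustrative):

```python
from statistics import mean

# Hypothetical MPEG-like frame sizes in bits: periodic large I-frames,
# much smaller predicted (P/B) frames between them.
frame_bits = [200_000 if i % 12 == 0 else 20_000 for i in range(120)]

FPS = 30  # frames per second
rates_bps = [bits * FPS for bits in frame_bits]  # instantaneous per-frame-slot rate

peak, avg = max(rates_bps), mean(rates_bps)
print(f"mean rate: {avg / 1e6:.2f} Mbps, peak rate: {peak / 1e6:.2f} Mbps")
print(f"peak-to-mean ratio: {peak / avg:.1f}")
# Provisioning for the mean rate means queueing or loss during I-frame bursts;
# provisioning for the peak wastes capacity most of the time.
```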

90/10 property: mice and elephants

  • Observation: traffic often follows a 90/10 (or 80/20) split
    • Roughly 90% of flows are mice (small flows) that carry only a small share of the bytes
    • Roughly 10% are elephants (large, bursty flows) that carry the bulk of the bytes
  • Why it matters
    • The elephants cause spikes that dominate congestion and delay
    • Active traffic control (e.g., TCP congestion control) has limited purchase on mice, which often finish before feedback arrives, so the elephants' bursts dominate congestion behavior (see the sketch below)
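
A sketch of the mice/elephants skew using a Pareto (heavy-tailed) distribution of flow sizes; the exact split depends on the assumed shape parameter, but the qualitative pattern, where a small fraction of flows carries most of the bytes, emerges for any heavy tail:

```python
import random

random.seed(536)

# Draw heavy-tailed flow sizes (Pareto, shape 1.1: many mice, a few huge elephants).
flows = sorted(random.paretovariate(1.1) for _ in range(10_000))

elephants = flows[-len(flows) // 10:]  # the largest 10% of flows
share = sum(elephants) / sum(flows)
print(f"top 10% of flows (elephants) carry {share:.0%} of the bytes")
```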

How to make sense of all this?

  • The study of network performance is approached through three interconnected dimensions:
    • Architecture: system design and how real networks realize those designs in practice
    • Algorithms: how components operate and coordinate (e.g., congestion control, scheduling, routing)
    • Implementation: how algorithms and architectures are realized in hardware and software, including practical constraints and complexities
  • Central concern: performance; slow performance means a design is unlikely to be used in practice
  • Interdependencies
    • Performance characteristics influence architectural decisions (core vs edge), the algorithms selected (congestion control, QoS), and the implementation details (drivers, firmware, OS stacks)
  • Real-world implications
    • Design choices affect latency, throughput, and quality of experience for users
    • Edge-centric designs raise considerations about security, privacy, and reliability at the host/application boundary
    • Traffic shaping and policy decisions impact user experience and service fairness

Equations and numerical references (summary)

  • Propagation speed bound (SOL):
    • Propagation velocity: v \approx 1.86\times 10^5\ \text{miles/s} \approx 3\times 10^5\ \text{km/s}
  • Propagation latency (lower bound):
    • For distance d: t_{\text{prop}} \ge \frac{d}{v}
  • Purdue-to-West-Coast example (approx 2000 miles):
    • t_{\text{prop}} \approx \frac{2000}{186{,}000} \approx 0.0108\ \text{s} \approx 10\ \text{ms}
  • Geostationary satellite example (distance ≈ 22{,}200 miles):
    • Ground-to-satellite: t_{\text{prop}} \approx \frac{22{,}200}{186{,}000} \approx 0.119\ \text{s} \approx 120\ \text{ms}
    • End-to-end one-way (up plus down): \approx 240\ \text{ms}; round trip: \approx 480\ \text{ms}
  • Throughput and bandwidth relationships (conceptual):
    • Throughput is limited by bandwidth and overhead: \text{Throughput} \approx \text{Bandwidth} \times \text{efficiency}
    • File transfer time for size S at end-to-end throughput R: T = \frac{S}{R}; including propagation delay, T \approx t_{\text{prop}} + \frac{S}{R} (worked numbers below)
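
A worked check combining the summary relationships: the propagation lower bound plus serialization time for a transfer (the 100 MB size and link rates are illustrative):

```python
V_MILES_PER_S = 186_000  # propagation speed bound (SOL)

def transfer_time_s(size_bytes: float, throughput_bps: float,
                    distance_miles: float) -> float:
    """T ~ t_prop + S/R: propagation lower bound plus serialization time."""
    t_prop = distance_miles / V_MILES_PER_S
    return t_prop + size_bytes * 8 / throughput_bps

# 100 MB file over the ~2000-mile Purdue-to-West-Coast path.
for rate_bps in (10e6, 1e9):
    t = transfer_time_s(100e6, rate_bps, 2000)
    print(f"{rate_bps / 1e6:>6.0f} Mbps: T = {t:.2f} s")
```

At 10 Mbps the serialization term (~80 s) swamps the ~11 ms propagation bound; at 1 Gbps the transfer drops to under a second, with propagation still a minor term.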

Summary of practical implications

  • Physics imposes a hard lower bound on latency via propagation speed; only shorter distances or more direct routing can reduce propagation delay, since signal speed is capped near the speed of light
  • Increasing bandwidth (more lanes) is the primary lever to reduce completion times for large transfers, with diminishing returns for small transfers
  • Real networks must manage burstiness and heterogeneous link speeds; a small fraction of large flows (elephants) can dominate congestion behavior
  • Effective performance requires integrating architecture, algorithms, and implementation with an understanding of traffic patterns, real-world constraints, and policy considerations
  • Edge-centric designs, while beneficial for performance, raise considerations for security, privacy, and reliability that must be addressed in practical systems