6-1 - Network Layer 1 - Intro and Router
Why Network Layer?
- Responsible for delivering packets from source host to destination host.
- Determines which intermediate hops (routes) to take, managed by routing algorithms.
- Routing can be distributed (traditional) or centralized (SDN).
- Distinction between the control plane and data plane:
- Control plane: Determines the end-to-end route a datagram takes from source host to destination host.
- Data plane: Forwards each datagram from a router's input link to the appropriate output link based on local decisions.
Protocols in the Network Layer
- A single primary protocol ensures consistency: IP (Internet Protocol), in versions IPv4 and IPv6.
- A packet at this layer is referred to as a datagram.
Data Plane vs Control Plane
- Data Plane:
- Handles the local function of forwarding datagrams.
- Operates at nanosecond timescales, typically implemented in router hardware.
- Control Plane:
- Manages network-wide routing logic: determining end-to-end routes for datagrams.
- Operates on a slower timescale (milliseconds), typically in software.
Router Architecture
- Components:
- Input ports
- Output ports
- High-speed switching fabric
- Routing processors
- Functions of input ports include line termination, link-layer protocol processing, and decentralized switching (lookup, forwarding, queueing).
- Forwarding is based on destination IP address, facilitating efficient routing.
Output Port Functions
- Buffering is essential when datagrams arrive faster than transmission capacity.
- Implement a drop policy: Determine which datagrams to drop under buffer overflow.
- Scheduling: Determine which queued datagram gets sent next.
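The drop policy above can be sketched with the simplest scheme, tail drop: an arriving datagram is discarded whenever the bounded buffer is already full. The class name and capacity below are illustrative, not from the slides.

```python
from collections import deque

class TailDropQueue:
    """Bounded FIFO buffer: new arrivals are dropped when the buffer is full."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buf = deque()
        self.dropped = 0

    def enqueue(self, pkt) -> bool:
        if len(self.buf) >= self.capacity:
            self.dropped += 1   # tail drop: discard the arriving datagram
            return False
        self.buf.append(pkt)
        return True

    def dequeue(self):
        # FCFS: transmit the oldest queued datagram first.
        return self.buf.popleft() if self.buf else None
```

More sophisticated policies (e.g., priority drop) differ only in *which* datagram is discarded on overflow, not in the bounded-buffer structure itself.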
Destination-Based Forwarding
- Forwarding decisions are based solely on the destination IP address in the forwarding table.
- Longest Prefix Matching:
- When searching for the appropriate entry, use the longest match in the address prefix.
- Example: 200.23.16.5 matches both 200.23.16.0/20 and 200.23.16.0/23; the router uses the /23 entry because it is the longer (more specific) match.
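Longest-prefix matching can be sketched as a linear scan over a toy forwarding table. The prefixes and link numbers below are made up for illustration; real routers use TCAMs or trie structures to do this lookup in hardware speed.

```python
import ipaddress

# Toy forwarding table: (prefix, output link). Entries are illustrative only.
FORWARDING_TABLE = [
    (ipaddress.ip_network("200.23.16.0/20"), 0),
    (ipaddress.ip_network("200.23.16.0/23"), 1),
    (ipaddress.ip_network("0.0.0.0/0"), 2),   # default route matches everything
]

def lookup(dst: str) -> int:
    """Return the output link of the longest (most specific) matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [(net.prefixlen, link) for net, link in FORWARDING_TABLE
               if addr in net]
    # The entry with the greatest prefix length wins.
    return max(matches)[1]
```

For instance, `lookup("200.23.16.5")` matches the /20, the /23, and the default route, and returns link 1 because /23 is the longest prefix.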
Switching Fabrics
- Function: Transfer packets from input links to the chosen output links.
- Switching rate: the rate at which packets are transferred from inputs to outputs; with N input ports each at line rate R, a switching rate of N × R is desirable.
Types of Switching Fabrics
- Memory-based Switching: First-generation routers copy each packet into memory under CPU control; throughput is limited by memory bandwidth (each datagram crosses the bus twice).
- Bus Switching: Input ports transfer packets directly to output ports over a shared bus; bus contention limits the switching speed to the bus bandwidth.
- Interconnection Networks: Use multiple stages of small switches to move many packets in parallel, overcoming the bandwidth limits of a single bus.
Queuing and Scheduling
Input Port Queueing:
- Queueing delays (and possible loss) occur at input ports when the switching fabric is slower than the combined input arrival rate.
- HOL (Head-of-Line) Blocking: a datagram at the front of a queue, waiting for a busy output, blocks datagrams behind it even when their output ports are free.
Output Queueing:
- Buffering is required when datagrams arrive from the switching fabric faster than the output link can transmit them.
- Drop policies and scheduling disciplines decide which datagrams are discarded and which queued datagram is transmitted next.
Buffer Management
- Buffering Rules: RFC 3439 gives the rule of thumb that buffer size should equal the typical RTT multiplied by the link capacity C; with N independent flows, a refined recommendation is RTT · C / √N.
- Management strategies:
- Drop policy: which arriving or queued datagrams to discard when buffers fill (e.g., tail drop vs. priority-based drops).
- Marking: mark packet headers to signal congestion to endpoints (as in ECN) instead of dropping.
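The RTT × C buffer-sizing rule is straightforward to compute. The numbers below are an illustrative example (a 250 ms RTT on a 10 Gbps link), not values from the slides.

```python
import math

def buffer_size_bits(rtt_s: float, capacity_bps: float, n_flows: int = 1) -> float:
    """RTT * C rule of thumb; divide by sqrt(N) when N long-lived flows share the link."""
    return rtt_s * capacity_bps / math.sqrt(n_flows)

single_flow = buffer_size_bits(0.250, 10e9)         # 2.5e9 bits = 2.5 Gbit
many_flows = buffer_size_bits(0.250, 10e9, 10000)   # 2.5e7 bits = 25 Mbit
```

Note how sharing among many flows shrinks the recommended buffer dramatically: 10,000 flows cut the 2.5 Gbit requirement by a factor of 100.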
Packet Scheduling Techniques
- First Come First Served (FCFS): Queued packets are transmitted in the order of arrival.
- Priority Scheduling: Packets are queued and transmitted based on class or priority.
- Round Robin: Cycles through queues, serving one packet from each.
- Weighted Fair Queuing (WFQ): Guarantees minimum bandwidth for different traffic classes based on assigned weights.
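The round-robin discipline above can be sketched in a few lines; the queue contents are illustrative. WFQ generalizes this idea by visiting each class in proportion to its assigned weight rather than once per cycle.

```python
from collections import deque

def round_robin(queues):
    """Serve one packet from each non-empty class queue per cycle."""
    while any(queues):
        for q in queues:
            if q:
                yield q.popleft()

# Three traffic classes with illustrative queued packets.
classes = [deque(["a1", "a2"]), deque(["b1"]), deque(["c1", "c2", "c3"])]
order = list(round_robin(classes))
# order == ["a1", "b1", "c1", "a2", "c2", "c3"]
```

Because each cycle serves at most one packet per class, a backlogged class (here, class c) cannot starve the others.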
Acknowledgements
- Slides adapted from Kurose's Computer Networking slides.