BCSE308L - Computer Networks: Quality of Service (QoS) Notes
Definition: Quality of Service (QoS) refers to a set of techniques and mechanisms for managing network resources so that applications receive predictable performance. QoS enables the network to meet the requirements of applications that demand consistent, high performance, such as video streaming, voice over IP (VoIP), and online gaming.
Data-Flow Characteristics
QoS is critically dependent on an in-depth understanding of the data flow characteristics, which include several key parameters:
Reliability: This characteristic ensures accurate, complete data delivery from the source to the destination without loss, duplication, or corruption. It is typically achieved with acknowledgment packets and retransmission to recover lost packets, and with checksums or error-correcting codes to detect or repair corrupted data.
Delay (Latency): Latency is the time taken for a packet to travel from the source to its destination. Besides network speed, latency can be affected by factors such as routing paths, network congestion, and processing delays at each node.
Jitter: Jitter refers to the variation in delay between packets in a data flow. This variation can severely impact real-time applications, like VoIP, where a steady stream of packets is essential for maintaining call quality and synchronization (a short computation sketch follows this list).
Bandwidth: Bandwidth is the maximum rate at which data can be transmitted over a network link, measured in bits per second (bps), such as Mbps or Gbps. Understanding bandwidth is crucial for capacity planning and for ensuring that applications have enough capacity to perform effectively.
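As a concrete illustration of delay and jitter, the following Python sketch computes both from per-packet send and receive timestamps. It is a minimal sketch for these notes; the timestamp values are hypothetical examples, not measurements.

    # Hypothetical send/receive timestamps for five packets (in seconds).
    send_times    = [0.000, 0.020, 0.040, 0.060, 0.080]
    receive_times = [0.050, 0.072, 0.088, 0.115, 0.130]

    # Delay (latency): time each packet took from source to destination.
    delays = [rx - tx for tx, rx in zip(send_times, receive_times)]

    # Jitter: variation in delay between consecutive packets.
    jitter_samples = [abs(delays[i] - delays[i - 1]) for i in range(1, len(delays))]

    avg_delay  = sum(delays) / len(delays)
    avg_jitter = sum(jitter_samples) / len(jitter_samples)

    print(f"Average delay : {avg_delay * 1000:.1f} ms")   # 51.0 ms
    print(f"Average jitter: {avg_jitter * 1000:.1f} ms")  # 4.5 ms

A low average delay combined with high jitter still degrades real-time traffic, which is why the two parameters are tracked separately.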
Flow Control Techniques to Improve QoS
Purpose: Flow control mechanisms are essential for preventing a fast sender from overwhelming slower receivers or network links, thereby optimizing data delivery and network performance.
Key Objectives:
Prevent buffer overflow at receivers or intermediate nodes, which can lead to data loss.
Maintain smooth and reliable data delivery to improve the user experience during data transmission.
Support QoS goals such as low latency, minimal jitter, and high throughput, ensuring that performance aligns with application requirements.
Techniques:
Scheduling: This determines the order in which queued packets are processed and transmitted within a router, which directly affects how quickly packets reach their destinations.
Traffic Shaping: This controls the data flow to ensure consistent delivery and helps in avoiding congestion during peak usage times.
Resource Reservation: This ensures that specific network resources, such as bandwidth and buffer space, are allocated for certain applications or data flows identified as critical.
Admission Control: This technique manages the flow of new applications and processes based on available network resources, ensuring that existing flows are not compromised.
Scheduling Techniques
Scheduling techniques are crucial in routers and switches to manage how packets are queued and processed. Effective scheduling ensures fairness among different types of traffic and meets QoS requirements.
Types of Scheduling:
FIFO (First-In, First-Out) Queuing:
Packets are processed in the order they arrive.
Operating Principle: Packets wait in a single queue until the router can process them; if the arrival rate exceeds the processing rate, the queue eventually fills and newly arriving packets are discarded (tail drop), leading to data loss.
Characteristics: This method is simple and widely used, but it treats all traffic equally and can introduce delay and jitter, particularly when packets of varying sizes share the queue.
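A minimal FIFO (drop-tail) sketch in Python is shown below; the queue capacity of three packets and the packet names are illustrative assumptions, not values from any real device.

    from collections import deque

    CAPACITY = 3    # assumed queue size; real routers size queues in packets or bytes
    queue = deque()

    def enqueue(packet):
        # Tail drop: when the queue is full, newly arriving packets are discarded.
        if len(queue) >= CAPACITY:
            print(f"queue full, discarding {packet}")
            return
        queue.append(packet)

    def dequeue():
        # Packets leave strictly in arrival order.
        return queue.popleft() if queue else None

    for p in ["p1", "p2", "p3", "p4"]:   # p4 arrives while the queue is full
        enqueue(p)

    while (pkt := dequeue()) is not None:
        print("transmitting", pkt)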
Priority Queuing:
Packets are assigned to different priority classes, with higher-priority packets processed first to meet the demands of time-sensitive applications.
Advantage: Higher-priority traffic, especially for real-time applications, is processed with less delay.
Risks: A continuous flow of high-priority packets can lead to starvation of lower-priority packets, which may experience long delays.
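A minimal priority-queuing sketch follows; the two classes, class numbers, and packet names are illustrative. It also shows why starvation is possible: class-0 packets are always served before class-1 packets, no matter how long the latter have waited.

    import heapq

    pq, seq = [], 0   # seq preserves FIFO order within the same priority class

    def enqueue(priority, packet):
        global seq
        heapq.heappush(pq, (priority, seq, packet))   # lower number = higher priority
        seq += 1

    enqueue(1, "bulk transfer")    # low priority
    enqueue(0, "VoIP frame 1")     # high priority (real-time)
    enqueue(1, "email")
    enqueue(0, "VoIP frame 2")

    # All class-0 packets are transmitted before any class-1 packet;
    # a continuous stream of class-0 traffic would starve class 1 entirely.
    while pq:
        prio, _, pkt = heapq.heappop(pq)
        print(f"class {prio}: {pkt}")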
Weighted Fair Queuing (WFQ):
Similar to priority queuing in that packets are sorted into different classes, but each class is assigned a weight and the scheduler serves every class in proportion to its weight, so no class is completely starved.
Example: If three classes have weights 3, 2, and 1, then in each round of processing the scheduler transmits up to 3 packets from the first class, 2 from the second, and 1 from the third.
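The behaviour in the example can be approximated with a weighted round-robin loop, as in the sketch below. This is a simplification of true WFQ (which schedules by virtual finishing times), and the class names and queued packets are illustrative; only the weights 3, 2, and 1 come from the example above.

    from collections import deque

    classes = {
        "A": {"weight": 3, "queue": deque(f"A{i}" for i in range(1, 7))},
        "B": {"weight": 2, "queue": deque(f"B{i}" for i in range(1, 5))},
        "C": {"weight": 1, "queue": deque(f"C{i}" for i in range(1, 3))},
    }

    round_no = 1
    while any(c["queue"] for c in classes.values()):
        sent = []
        for name, c in classes.items():
            # In each round, a class may transmit up to 'weight' packets.
            for _ in range(c["weight"]):
                if c["queue"]:
                    sent.append(c["queue"].popleft())
        print(f"round {round_no}: {sent}")
        round_no += 1

Each round transmits up to 3 packets from class A, 2 from class B, and 1 from class C, so higher-weight classes receive a larger share of the link without starving the others.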
Traffic Shaping Techniques
Traffic shaping manages and controls the flow of data across a network, ensuring improved QoS by reducing congestion and prioritizing essential traffic.
Key Methods:
Leaky Bucket:
This technique controls the rate of data flow within a network.
Functionality: Incoming packets are added to a buffer (the bucket) at varying rates, but they leak out at a constant rate. If the bucket overflows due to an excessive rate of incoming traffic, the excess packets are discarded.
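A minimal leaky-bucket sketch follows; the bucket capacity of 4 packets, the leak rate of 1 packet per tick, and the arrival pattern are all illustrative assumptions.

    from collections import deque

    CAPACITY, LEAK_RATE = 4, 1    # assumed bucket size and constant output rate
    bucket = deque()

    # Hypothetical bursty arrival pattern: packets arriving at each tick.
    arrivals_per_tick = [3, 0, 4, 0, 0, 2]

    for tick, arrivals in enumerate(arrivals_per_tick):
        for i in range(arrivals):
            if len(bucket) < CAPACITY:
                bucket.append(f"t{tick}p{i}")
            else:
                print(f"tick {tick}: bucket full, discarding packet")
        # Regardless of how bursty the input is, output leaks at a constant rate.
        sent = [bucket.popleft() for _ in range(min(LEAK_RATE, len(bucket)))]
        print(f"tick {tick}: sent {sent}, queued {len(bucket)}")

The bursts of 3 and 4 packets enter the bucket (with one packet discarded on overflow), but the output never exceeds one packet per tick.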
Token Bucket:
This method allows for bursts of data flow while still controlling the average transmission rate.
Functionality: Tokens are generated and stored in a bucket over time. Each packet sent requires a token; if there aren’t enough tokens available, packets will either be delayed or dropped, thereby capping the overall transmission rate.
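A minimal token-bucket sketch follows; the token rate of 2 per tick, the bucket size of 5 tokens, the one-token-per-packet rule, and the demand pattern are all illustrative assumptions. Non-conforming packets are dropped here for brevity, although a real shaper would more often delay them.

    TOKEN_RATE, BUCKET_SIZE = 2, 5
    tokens = BUCKET_SIZE        # a full bucket permits an initial burst

    # Hypothetical number of packets the sender wants to transmit at each tick.
    demand_per_tick = [6, 1, 0, 4, 0]

    for tick, demand in enumerate(demand_per_tick):
        tokens = min(BUCKET_SIZE, tokens + TOKEN_RATE)   # replenish, capped at bucket size
        sent = min(demand, tokens)                       # each packet consumes one token
        tokens -= sent
        dropped = demand - sent                          # could be queued instead of dropped
        print(f"tick {tick}: sent {sent}, dropped {dropped}, tokens left {tokens}")

The initial burst of 6 packets is mostly allowed because the bucket starts full, but over time the sending rate cannot exceed the token generation rate of 2 packets per tick.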
Resource Reservation and Admission Control
Resource Reservation: This is the process of allocating specific network resources for particular applications before data transmission begins. This guarantees that the necessary bandwidth and other resources are available to meet the QoS requirements of those applications.
Admission Control: Acts as a gatekeeper for new data flows, assessing whether the network can accommodate them without degrading the quality of existing flows. This helps to prevent network overload and ensures the stability of network performance.
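A minimal admission-control sketch with per-flow bandwidth reservation follows; the 100 Mbps link capacity and the flow requests are illustrative values.

    LINK_CAPACITY_MBPS = 100
    reserved = {}    # flow name -> reserved bandwidth in Mbps

    def admit(flow, requested_mbps):
        available = LINK_CAPACITY_MBPS - sum(reserved.values())
        if requested_mbps <= available:
            reserved[flow] = requested_mbps   # resource reservation for this flow
            print(f"admitted {flow}: {requested_mbps} Mbps ({available - requested_mbps} Mbps left)")
        else:
            # Rejecting the new flow protects the QoS of flows already admitted.
            print(f"rejected {flow}: needs {requested_mbps} Mbps, only {available} Mbps available")

    admit("video conference", 40)
    admit("VoIP trunk", 10)
    admit("backup transfer", 60)   # rejected: would exceed the remaining capacity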
Summary
In summary, QoS is essential for managing network performance and ensuring the quality of data delivery. It employs techniques such as flow control, scheduling, traffic shaping, resource reservation, and admission control to meet the performance demands of modern applications in a dynamic network environment.