The link layer
List possible services that the link layer may provide.
Framing, link-layer (MAC) addressing, error detection and correction, flow control, reliable delivery between adjacent nodes, and access control to the shared physical medium.
Explain where the link layer is implemented in a computer system.
The link layer is implemented largely in hardware in the network interface card (NIC, or network adapter) that connects the device to the physical medium, with part of it (e.g., assembling addressing information and activating the controller) implemented in software running on the host's CPU.
Describe the purpose of error detection techniques.
Error detection techniques are used to identify errors that may occur during data transmission, ensuring data integrity and reliability.
Define parity in the context of error detection.
Parity is a method of error detection that adds a bit to a binary message to ensure that the total number of 1-bits is even (even parity) or odd (odd parity).
Describe the function of single bit parity schemes.
Single bit parity schemes add one parity bit to a data unit, allowing the detection of single-bit errors by checking if the number of 1-bits is even or odd.
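As a rough illustration (not tied to any particular protocol), a minimal Python sketch of even-parity generation and checking:

```python
def even_parity_bit(bits):
    """Return the parity bit that makes the total number of 1s even."""
    return sum(bits) % 2

def check_even_parity(word):
    """True if the received word (data bits + parity bit) has an even number of 1s."""
    return sum(word) % 2 == 0

data = [1, 0, 1, 1, 0, 1, 0]
word = data + [even_parity_bit(data)]
assert check_even_parity(word)          # intact word passes the check
word[2] ^= 1                            # flip one bit "in transit"
assert not check_even_parity(word)      # single-bit error is detected
```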
Explain two-dimensional parity schemes.
Two-dimensional parity schemes arrange the data bits in a grid and add a parity bit for each row and each column. A single flipped bit then shows up as one failing row parity and one failing column parity, so it can be located and corrected, and some multi-bit errors can still be detected.
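A small Python sketch of the same idea in two dimensions, assuming even parity; the grid layout and data values are just examples:

```python
def add_2d_parity(rows):
    """Append a parity bit to each row, plus a final row of column parities (even parity)."""
    with_row_parity = [r + [sum(r) % 2] for r in rows]
    parity_row = [sum(col) % 2 for col in zip(*with_row_parity)]
    return with_row_parity + [parity_row]

def locate_single_error(block):
    """Return (row, col) of a single flipped bit, or None if all parities check out."""
    bad_rows = [i for i, r in enumerate(block) if sum(r) % 2 != 0]
    bad_cols = [j for j, c in enumerate(zip(*block)) if sum(c) % 2 != 0]
    if not bad_rows and not bad_cols:
        return None
    return bad_rows[0], bad_cols[0]

block = add_2d_parity([[1, 0, 1], [0, 1, 1], [1, 1, 0]])
block[1][2] ^= 1                        # flip one data bit "in transit"
pos = locate_single_error(block)        # failing row + failing column pinpoint it
block[pos[0]][pos[1]] ^= 1              # correct the error by flipping it back
```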
Describe how checksumming is used in data transmission.
Checksumming involves calculating a sum of data segments and appending it to the data, allowing the receiver to verify data integrity by recalculating the sum.
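One common instance is the 16-bit Internet checksum used by TCP and UDP; a minimal Python sketch (the payload is a made-up example):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum of 16-bit words, complemented (Internet checksum)."""
    if len(data) % 2:
        data += b"\x00"                              # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # fold carries back into 16 bits
    return ~total & 0xFFFF

payload = b"hello world!"                            # even length, for simplicity
csum = internet_checksum(payload)
# Receiver side: checksumming the payload plus the received checksum yields 0 if intact.
assert internet_checksum(payload + csum.to_bytes(2, "big")) == 0
```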
Explain why checksumming might not be sufficient for error detection.
It can fail to detect certain error patterns, for example when multiple bits are altered in a way that leaves the sum unchanged (the changes cancel out), so some multi-bit errors go undetected.
Explain the basic approach of how CRC works.
CRC, or Cyclic Redundancy Check, is a method used to detect errors in digital data. It works by treating the data as a polynomial and dividing it by a predetermined polynomial divisor, generating a remainder that is appended to the data. The receiver performs the same division and checks if the remainder matches the one sent.
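A minimal Python sketch of the modulo-2 (XOR) division; the data bits and the generator G = 1001 are just example values:

```python
def mod2_div(bits: str, generator: str) -> str:
    """Modulo-2 (XOR) long division; returns the remainder as a bit string."""
    r = len(generator) - 1
    work = list(bits)
    for i in range(len(bits) - r):
        if work[i] == "1":
            for j, g in enumerate(generator):
                work[i + j] = str(int(work[i + j]) ^ int(g))
    return "".join(work[-r:])

data, gen = "101110", "1001"                 # example values; r = 3 CRC bits
crc = mod2_div(data + "0" * 3, gen)          # sender: divide data with r appended zeros -> "011"
assert mod2_div(data + crc, gen) == "000"    # receiver: zero remainder means no error detected
```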
Articulate the need for multiple access protocols.
Multiple access protocols are necessary to manage how multiple users or devices share a communication channel, preventing data collisions and ensuring efficient use of the channel.
List the three basic categories of multiple access protocols.
1) Channel partitioning protocols (e.g., TDMA, FDMA, CDMA)
2) Random access protocols (e.g., ALOHA, CSMA/CD)
3) Taking-turns protocols (e.g., polling, token passing).
Explain the desirable characteristics of multiple access protocols.
Fairness, efficiency, simplicity, robustness, and the ability to handle varying traffic loads.
Explain three forms of channel partitioning protocols.
1) Time Division Multiple Access (TDMA), which allocates time slots to users
2) Frequency Division Multiple Access (FDMA), which assigns different frequency bands to users
3) Code Division Multiple Access (CDMA), which uses unique codes to differentiate users on the same frequency.
Explain the operation of CSMA and CSMA/CD access protocols.
"Carrier Sense Multiple Access (CSMA) is a protocol that listens to the channel before transmitting to avoid collisions. CSMA/CD (Collision Detection) enhances this by detecting collisions during transmission and allowing devices to stop and retry after a random backoff period."
Explain the purpose of the binary exponential backoff algorithm used in Ethernet.
The binary exponential backoff algorithm is used in Ethernet to manage retransmissions after a collision. It increases the wait time exponentially with each successive collision, reducing the likelihood of repeated collisions and improving network efficiency.
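A rough Python sketch combining CSMA/CD with binary exponential backoff; the `channel` object and its methods are stand-ins for illustration, not a real NIC API:

```python
import random

BIT_TIMES_PER_SLOT = 512     # classic Ethernet backoff unit: 512 bit times
MAX_BACKOFF_EXPONENT = 10    # exponent is capped at 10 in classic Ethernet
MAX_ATTEMPTS = 16            # give up after 16 collisions

def csma_cd_send(frame, channel):
    """Sketch of CSMA/CD with binary exponential backoff.
    `channel` is a stand-in assumed to provide is_idle(), transmit(frame),
    collision_detected() and wait(bit_times)."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while not channel.is_idle():           # carrier sense: wait until the channel is idle
            pass
        channel.transmit(frame)
        if not channel.collision_detected():   # no collision detected: success
            return True
        n = min(attempt, MAX_BACKOFF_EXPONENT)
        k = random.randrange(2 ** n)           # K drawn uniformly from {0, ..., 2^n - 1}
        channel.wait(k * BIT_TIMES_PER_SLOT)   # back off K * 512 bit times, then retry
    return False                               # too many collisions: give up
```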
Describe two types of taking-turns protocol.
Token passing and polling.
Token passing circulates a special token frame among the nodes; only the node currently holding the token may transmit.
Polling uses a central controller (master node) that asks each device in turn whether it has data to send.
Explain the multiple access controls involved with cable internet access as part of the DOCSIS specification.
DOCSIS combines several multiple access mechanisms. FDM divides the downstream and upstream directions into separate frequency channels. Each upstream channel is divided in time into mini-slots (TDM), and the CMTS grants specific mini-slots to individual cable modems via MAP control messages, so upstream data transmissions do not collide. Cable modems request mini-slots by transmitting in special contention mini-slots, using random access with binary exponential backoff when those requests collide.
Explain where in a LAN environment one might find MAC addresses.
MAC addresses can be found in the data link layer of a LAN environment, specifically in the Ethernet frames that are used for communication between devices on the same local network.
Explain why networked devices need to have MAC addresses even when they already have IP addresses.
Networked devices need MAC addresses for local network communication, as MAC addresses operate at the data link layer, allowing devices to identify each other on the same local network, while IP addresses are used for routing data across different networks.
Explain the purpose of the address resolution protocol and where it is used.
The Address Resolution Protocol (ARP) is used to map IP addresses to MAC addresses within a local area network. It allows devices to discover the MAC address associated with a given IP address, enabling proper data packet delivery.
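A minimal sketch of the idea, assuming a simple dictionary as the ARP cache; the addresses and the `broadcast_request` callback are made up for illustration:

```python
# Illustrative ARP-style cache: IP address -> MAC address (values are made up)
arp_table = {"192.168.1.1": "aa:bb:cc:dd:ee:01"}

def resolve(ip, broadcast_request):
    """Return the MAC address for `ip`, sending a broadcast ARP request on a cache miss.
    `broadcast_request` is a stand-in for sending "who has <ip>?" to ff:ff:ff:ff:ff:ff."""
    if ip in arp_table:
        return arp_table[ip]           # cache hit: no request needed
    mac = broadcast_request(ip)        # ARP request goes to every device on the LAN
    arp_table[ip] = mac                # cache the reply for subsequent frames
    return mac
```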
Explain how the ARP protocol functions in a switch environment.
ARP (Address Resolution Protocol) maps IP addresses to MAC addresses. When a device wants to communicate with another device on the same local network and does not yet know its MAC address, it broadcasts an ARP request; the switch floods this broadcast frame out all of its other ports. The device with the matching IP address responds with its MAC address, and the requester caches the mapping in its ARP table. The switch itself does not take part in ARP, but it learns which port each MAC address sits on from the source addresses of the frames it forwards.
Describe the operation of the ARP protocol in a combination switch and router environment.
In a combination switch and router environment, ARP still maps IP addresses to MAC addresses, but only within each subnet. If the destination IP address is on a different subnet, the sending host uses ARP to find the MAC address of its default gateway (the router's interface on its own subnet) and sends the frame there. The router then uses ARP on the destination subnet to discover the MAC address of the final destination (or next hop) and forwards the datagram inside a new frame; the switches on each subnet simply forward these frames based on their MAC address tables.
Define the purpose of the fields in the Ethernet frame structure.
The fields in the Ethernet frame structure serve specific purposes: the destination MAC address identifies the intended recipient, the source MAC address indicates the sender, the EtherType field specifies the protocol encapsulated in the frame, and the payload carries the actual data being transmitted. Additionally, the frame includes a Frame Check Sequence (FCS) for error detection.
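A sketch packing those fields into an Ethernet II frame with Python's struct module; the addresses are made up, and the FCS here uses zlib's CRC-32 purely for illustration (real NICs compute and strip the FCS in hardware, and a preamble precedes the frame on the wire):

```python
import struct, zlib

def build_ethernet_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    """Sketch of an Ethernet II frame: dst (6 B) | src (6 B) | EtherType (2 B) | payload | FCS (4 B)."""
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    body = header + payload
    fcs = struct.pack("!I", zlib.crc32(body) & 0xFFFFFFFF)   # CRC-32 over the frame contents
    return body + fcs

frame = build_ethernet_frame(
    dst_mac=bytes.fromhex("ffffffffffff"),   # broadcast destination address
    src_mac=bytes.fromhex("aabbccddee01"),   # made-up source address
    ethertype=0x0800,                        # 0x0800 = IPv4 payload
    payload=b"\x00" * 46,                    # minimum Ethernet payload is 46 bytes
)
```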
Explain why modern Ethernet environments do not require a MAC protocol.
Modern Ethernet environments do not require a MAC protocol because of advancements in technology such as full-duplex communication and switches that manage data traffic efficiently. These developments eliminate the need for collision detection and handling, which were essential in older shared media networks, allowing for more streamlined and reliable data transmission.
Describe the concepts of filtering and forwarding in link-layer switches.
Filtering is the process of examining incoming frames and deciding whether to forward or discard them based on MAC address tables.
Forwarding involves sending the frame to the appropriate port based on the destination MAC address, ensuring that data reaches the correct device while minimizing unnecessary traffic on other ports.
Explain the self-learning property of link-layer switches.
Link-layer switches have a self-learning property that allows them to automatically learn the MAC addresses of devices on the network. As frames are received, the switch records the source MAC address and the port it came from in a MAC address table. This enables the switch to forward frames only to the appropriate port, improving network efficiency.
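A minimal sketch of self-learning combined with filtering and forwarding, assuming a simplified frame represented only by its source and destination MAC addresses:

```python
class LearningSwitch:
    """Sketch of a self-learning switch with filter/forward decisions."""
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}                       # MAC address -> port it was last seen on

    def handle_frame(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port         # self-learning: remember where src lives
        out_port = self.mac_table.get(dst_mac)
        if out_port == in_port:
            return []                              # filter: destination is on the arrival segment
        if out_port is not None:
            return [out_port]                      # forward: known destination, one port only
        return [p for p in self.ports if p != in_port]   # unknown destination: flood other ports
```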
List and explain three benefits of switches compared to hubs.
1. Improved Performance: Switches reduce network collisions by creating a separate collision domain for each connected device, leading to better overall performance.
2. Enhanced Security: Switches can isolate traffic between devices, making it harder for unauthorized users to intercept data.
3. Increased Bandwidth: Switches provide dedicated bandwidth to each port, allowing multiple devices to communicate simultaneously without degrading performance.
Reason about the benefits and tradeoffs between switches and routers.
Switches operate at the data link layer and are used for connecting devices within the same network, providing high-speed data transfer. Routers operate at the network layer and are used to connect different networks, enabling communication between them. The tradeoff is that switches are faster for local traffic, while routers are necessary for inter-network communication, but they introduce latency.
Explain the concept of VLANs.
VLANs, or Virtual Local Area Networks, are a network segmentation method that allows multiple distinct networks to coexist on the same physical infrastructure. By grouping devices into VLANs, network administrators can improve security, reduce broadcast traffic, and manage network resources more effectively.
Explain the concept of VLAN trunking and where that might be used.
VLAN trunking is a method used to carry multiple VLANs over a single physical link between switches. This is useful in scenarios where multiple VLANs need to communicate across the same network infrastructure, such as in large organizations where different departments may have their own VLANs but still need to share resources.
How is forwarding in a label switched router done?
The router looks at the label instead of the IP address. It then uses a label forwarding table to quickly forward the packet to the next hop.
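A minimal sketch of that lookup, assuming a simple dictionary as the label forwarding table (labels and interface names are made up):

```python
# Illustrative label forwarding table: incoming label -> (outgoing label, outgoing interface)
label_table = {
    16: (24, "if0"),     # swap label 16 for 24, send out interface if0
    17: (None, "if1"),   # pop the label (e.g., penultimate hop), send out if1
}

def forward_mpls(in_label, packet):
    out_label, interface = label_table[in_label]   # one exact-match lookup, no IP prefix match
    if out_label is None:
        return interface, packet                    # label popped: plain IP packet continues
    return interface, (out_label, packet)           # label swapped: still an MPLS packet
```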
How are MPLS labels chosen and distributed in a real network?
Each label-switched router chooses labels locally for the routes it advertises, and these label bindings are distributed to neighboring routers using protocols such as LDP (Label Distribution Protocol) or RSVP-TE.
Name two benefits of MPLS networks.
Traffic engineering: You can force traffic to take non-standard paths for efficiency or policy reasons.
Faster forwarding: a fixed-length label lookup replaces the slower longest-prefix-match IP address lookup.
Give the three tier hierarchy of a typical data center network.
Access layer: connects the hosts (e.g., top-of-rack switches)
Aggregation layer: connects the access-layer switches
Core layer: connects the aggregation-layer switches
What is the purpose of a load balancer in a data center network?
Distributes incoming traffic among multiple servers to prevent overloading and improve performance.
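A minimal round-robin sketch of the idea; the server addresses are made up, and a real data center load balancer does more (e.g., NAT-style address rewriting and health checks):

```python
import itertools

class RoundRobinBalancer:
    """Minimal round-robin load balancer sketch."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)     # endlessly cycle through the server pool

    def pick_server(self):
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
for _ in range(4):
    server = lb.pick_server()    # successive requests are spread evenly across the servers
```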
What are some limitations of a hierarchical datacenter architecture?
The aggregation and core layers can become bandwidth bottlenecks for traffic between hosts in different racks, and the hierarchy limits how far the network can scale.
What are some solutions to the limits of a hierarchical datacenter architecture?
Implement RoCE (RDMA over Converged Ethernet) at the link layer and use ECN (Explicit Congestion Notification) at the transport layer.
What does a modern datacenter look like?
They use SDN for centralized control and typically place related services and data as close to each other as possible to keep traffic local.