232 Terms
1
New cards
Transport Layer services
provide logical communication between application processes running on different hosts
2
New cards
Sender
breaks application messages into segments and passes them to network layer
3
New cards
Receiver
reassembles segments into messages and passes them to application layer
4
New cards
2 transport protocols for Internet applications
UDP and TCP
5
New cards
Network layer
logical communication between hosts
6
New cards
Transport layer
logical communication between processes
7
New cards
Sender actions
- is passed an application-layer message - determines segment header fields' values - creates segment - passes segment to IP
8
New cards
Receiver actions
- receives segment from IP - checks header values - extracts application-layer message - demultiplexes messages up to application via socket
9
New cards
Transmission Control Protocol (TCP)
- reliable and in-order delivery - congestion control - flow control - connection setup
10
New cards
User Datagram Protocol (UDP)
- unreliable and unordered segment delivery - no-frills/bare bones extension of "best effort" IP - connectionless --> each segment handled independently of others
11
New cards
UDP Pros
- no connection establishment (which can add RTT delay) - simple --> no connection state between sender/receiver - small header size - no congestion control --> can go as fast as desired, can function with congestion - checksum helps with reliability
12
New cards
UDP Cons
- no delivery guarantee - no bandwidth guarantee
13
New cards
UDP used for
- streaming multimedia apps - DNS - SNMP - HTTP/3
14
New cards
Reliable transfer over UDP
add reliability and congestion control at application layer (ex: HTTP/3)
15
New cards
UDP sender actions
- is passed an application-layer message - determines segment header fields' values - creates UDP segment - passes segment to IP
16
New cards
UDP receiver actions
- receives segment from IP - checks UDP checksum header value - extracts application layer message - demultiplexes message up to application via socket
17
New cards
UDP checksum
detects errors in transmitted segment (ex: flipped bits)
18
New cards
Internet checksum
- detects errors in transmitted segment - adds segment content together using one's complement sum - weak protection --> errors can occur and produce no change in checksum
19
New cards
checksum sender
- treats contents of UDP segment as sequence of 16-bit integers (including UDP header fields and IP addresses) - checksum value put into UDP checksum field
20
New cards
checksum receiver
- computes checksum of received segment - checks if computed checksum equals checksum field value --> if not equal, error detected
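The sender/receiver steps above can be sketched in Python. This is an illustrative one's-complement sum, not a real UDP implementation; the function names are mine:

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, then inverted."""
    if len(data) % 2:                 # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # wrap the carry back in
    return ~total & 0xFFFF            # invert: this goes in the checksum field

def segment_ok(segment_with_checksum: bytes) -> bool:
    """Receiver side: checksumming the whole segment, checksum field
    included, yields 0 when no detectable error occurred."""
    return internet_checksum(segment_with_checksum) == 0
```

Summing the data together with its own checksum gives all ones, so the inverted result is zero exactly when the receiver detects no error.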
21
New cards
Multiplexing at sender
handle data from multiple sockets, add transport header
22
New cards
Demultiplexing at receiver
use header info to deliver received segments to correct socket
23
New cards
Demultiplexing details
host receives IP datagram, uses IP addresses and port numbers to direct segment to appropriate socket
24
New cards
IP datagram
- each datagram has a source and destination IP address - each datagram carries 1 transport-layer segment - each segment has source and destination port number
25
New cards
Connectionless demultiplexing
when host receives UDP segment: - checks destination port number in segment - directs segment to socket with that port number
26
New cards
IP/UDP datagrams directed to same socket at receiving host when
the destination port numbers are the same, even if the source IP addresses and/or source port numbers differ
27
New cards
Connection-oriented demultiplexing
- receiver uses all 4 values (4-tuple) to direct segment to appropriate socket - server can support simultaneous TCP sockets
28
New cards
TCP socket
each identified by own 4-tuple, each associated with a different connecting client
29
New cards
4-tuple
identifies a TCP socket 1.) source IP address 2.) source port number 3.) destination IP address 4.) destination port number
30
New cards
UDP vs. TCP demultiplexing
- UDP: uses destination port number only - TCP: uses 4-tuple
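The UDP vs. TCP difference can be sketched as two lookup tables. This is a toy model, not the real kernel data structures, and all names/addresses are illustrative:

```python
# Toy demultiplexing tables: UDP keys on destination port alone,
# TCP keys on the full (src IP, src port, dst IP, dst port) 4-tuple.
udp_table = {53: "dns_socket"}
tcp_table = {("10.0.0.1", 5000, "10.0.0.2", 80): "conn_socket_A",
             ("10.0.0.3", 5000, "10.0.0.2", 80): "conn_socket_B"}

def demux_udp(dst_port):
    return udp_table.get(dst_port)            # destination port number only

def demux_tcp(src_ip, src_port, dst_ip, dst_port):
    return tcp_table.get((src_ip, src_port, dst_ip, dst_port))
```

Two clients that happen to pick the same source port still reach different TCP sockets, because their source IP addresses make the 4-tuples differ.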
31
New cards
multiplexing and demultiplexing
based on segment and datagram header fields, happen at all layers
32
New cards
Reliable data transfer
- complexity depends on characteristics of unreliable channel (can lose, corrupt, or reorder data) - sender and receiver don't know each other's "state" unless communicated via message
33
New cards
Channels with bit errors
channel may flip bits in packet, checksum used to detect errors
34
New cards
Acknowledgements (ACKs)
Receiver explicitly tells sender that packet was received OK
35
New cards
Negative acknowledgements (NAKs)
receiver explicitly tells sender that packet had errors
36
New cards
stop and wait
sender sends 1 packet, then waits for receiver response
37
New cards
duplicates
- can be caused when ACK/NAK corrupted - sender retransmits current packet - sender adds sequence number to each packet - receiver discards duplicate packet
38
New cards
NAK-free protocol
- receiver sends ACK for last packet received OK - receiver must explicitly send sequence number of packet being ACKed - duplicate ACK at sender --> retransmit current packet
39
New cards
Channels with errors and loss
sender waits "reasonable" amount of time for ACK - retransmits if no ACK received - if delayed packet/ACK --> retransmission is a duplicate, receiver must specify sequence number of packet being ACKed
40
New cards
timeout
countdown timer to interrupt after "reasonable" amount of time
41
New cards
utilization
fraction of time the sender is busy sending U = (L/R) / (RTT + L/R)
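Plugging numbers into U = (L/R) / (RTT + L/R) shows why stop-and-wait wastes a fast link; the packet size, rate, and RTT below are illustrative values:

```python
def utilization(L_bits: float, R_bps: float, rtt_s: float) -> float:
    """Stop-and-wait sender utilization: U = (L/R) / (RTT + L/R)."""
    t_transmit = L_bits / R_bps          # time to push the packet onto the link
    return t_transmit / (rtt_s + t_transmit)

# 8000-bit packet on a 1 Gbps link with a 30 ms RTT:
u = utilization(8000, 1e9, 0.030)        # about 0.00027 -> link ~0.027% busy
```

The sender spends almost the entire RTT idle, which motivates pipelining.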
42
New cards
pipelining
- sender allows multiple "in-flight" (yet-to-be-acknowledged) packets - range of sequence numbers must inc - buffering at sender/receiver
43
New cards
Go-Back-N sender
- window of up to N consecutive transmitted (but unACKed) packets - cumulative ACK - timer for oldest in-flight packet - timeout(n): retransmits packet n and all higher sequence number packets in window
44
New cards
cumulative ACK
- ACK(n): ACKs all packets up to/including sequence number n - on receiving ACK(n) --> move window forward to start at n+1
45
New cards
Go-Back-N receiver
- ACK-only - on receipt of out-of-order packet --> can discard or buffer, re-ACKs packet with highest in-order sequence
46
New cards
ACK-only
- always sends ACK for the correctly received packet with the highest in-order sequence number so far - may generate duplicate ACKs
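The ACK-only receiver behavior can be sketched as below; sequence numbers are assumed to start at 0, and checksums/corruption are ignored:

```python
def gbn_receiver(arrivals):
    """ACK-only Go-Back-N receiver: discard out-of-order packets and
    always (re-)ACK the highest in-order sequence number received."""
    expected = 0
    acks = []
    for seq in arrivals:
        if seq == expected:        # in-order: deliver up and advance
            expected += 1
        # out-of-order packets are discarded; a duplicate ACK results
        acks.append(expected - 1)  # ACK last in-order packet (-1 = none yet)
    return acks

# Packet 2 is delayed, so 3 and 4 arrive out of order and are discarded:
# arrivals [0, 1, 3, 4, 2] -> acks [0, 1, 1, 1, 2]
```

The duplicate ACK(1)s tell the sender that nothing past packet 1 has been accepted, so after a timeout it goes back and resends 2, 3, and 4.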
47
New cards
Selective Repeat
- receiver individually acknowledges all correctly received packets - receiver buffers packets as needed for in-order delivery - sender times-out/retransmits individually for unACKed packets
48
New cards
sender window
- N consecutive sequence numbers - limits sequence number of sent and unACKed packets
49
New cards
Selective Repeat sender
- if the next available sequence number is in the window, then send packet - timeout(n): resend packet n and restart timer - ACK(n) in [sendbase, sendbase+N] --> mark packet as received; if n is the smallest unACKed packet, then advance window base to next unACKed sequence number
50
New cards
Selective Repeat receiver
- packet n in [rcvbase, rcvbase+N-1] --> send ACK(n); if out of order, then buffer; if in order, then deliver and advance window to next not-yet-received packet - packet n in [rcvbase-N, rcvbase-1] --> ACK(n) - otherwise ignore
51
New cards
TCP Overview
- point to point: 1 sender and 1 receiver - reliable and in-order byte stream - full duplex data: bidirectional data flow in the same connection - cumulative ACKs - pipelining: TCP congestion and flow control set window size - connection oriented: handshaking - flow controlled: sender won't overwhelm receiver
52
New cards
TCP sequence number
byte stream "number" of first byte in segment's data
53
New cards
TCP acknowledgements
- sequence number of next byte expected from other side - cumulative ACK
54
New cards
TCP timeout value
- longer than RTT but RTT varies - if too short: premature timeout, unnecessary retransmissions - if too long: slow reaction to segment loss
55
New cards
SampleRTT
- measured time from segment transmission until ACK receipt - used to estimate RTT
56
New cards
EstimatedRTT formula
EstimatedRTT = (1-a)(EstimatedRTT) + a(SampleRTT), where a = 0.125
57
New cards
TimeoutInterval formula
EstimatedRTT + 4*DevRTT
58
New cards
DevRTT
- exponentially weighted moving average of SampleRTT's deviation from EstimatedRTT - formula: DevRTT = (1-b)(DevRTT) + b(|SampleRTT - EstimatedRTT|), where b = 0.25
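The EstimatedRTT, DevRTT, and TimeoutInterval formulas combine into one update per SampleRTT. One detail the card formulas leave implicit is update order; following RFC 6298, DevRTT is updated using the previous EstimatedRTT:

```python
ALPHA = 0.125   # weight on new sample in the EstimatedRTT EWMA
BETA = 0.25     # weight on new deviation in the DevRTT EWMA

def update_timeout(estimated_rtt, dev_rtt, sample_rtt):
    # DevRTT: EWMA of |SampleRTT - EstimatedRTT|, using the previous estimate
    dev_rtt = (1 - BETA) * dev_rtt + BETA * abs(sample_rtt - estimated_rtt)
    # EstimatedRTT: EWMA of the measured SampleRTT values
    estimated_rtt = (1 - ALPHA) * estimated_rtt + ALPHA * sample_rtt
    # TimeoutInterval: estimate plus a 4-deviation safety margin
    timeout = estimated_rtt + 4 * dev_rtt
    return estimated_rtt, dev_rtt, timeout
```

A sample of 200 ms against a 100 ms estimate nudges the estimate up only slightly (to 112.5 ms) but inflates the timeout via DevRTT, which is exactly the "longer than RTT, reacts to variance" behavior the timeout needs.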
59
New cards
TCP Sender: data received from application
- create segment with sequence number - start timer for oldest unACKed segment (if not already running)
60
New cards
TCP Sender: timeout
- retransmit segment that caused timeout - restart timeout
61
New cards
TCP Sender: ACK received
if ACK acknowledges previously unACKed segments --> update what's known to be ACKed, start timer if there are still unACKed segments
62
New cards
TCP fast retransmit
- if sender receives 3 ACKs for same data ("triple duplicate ACKs"), resend unACKed segment with smallest sequence number - likely that unACKed segment was lost, so don't wait for timeout
63
New cards
TCP flow control
receiver controls sender --> sender won't overflow receiver's buffer by transmitting too much/too fast
64
New cards
rwnd
- field in TCP header - "receiver window" - represents free buffer space
65
New cards
RcvBuffer
- represents buffered data and free buffer space - size set via socket options (default is 4096 bytes)
66
New cards
TCP handshake
sender and receiver agree to establish connection and agree on connection parameters (ex: starting sequence numbers)
67
New cards
Closing a TCP connection
- client and server send TCP agreement with FIN bit = 1 - respond to received FIN bit with ACK - simultaneous FIN bits can be handled
68
New cards
congestion
too many sources sending too much data too fast for network to handle
69
New cards
congestion results
- long delay --> queueing in router buffers - packet loss --> buffer overflow at routers
70
New cards
Flow Control vs. Congestion Control
- flow control: 1 sender too fast for 1 receiver - congestion control: too many senders all sending too fast
71
New cards
end-end congestion control
-no explicit feedback from network -congestion inferred from end-system observed loss/delay -approach taken by TCP
72
New cards
network-assisted congestion control
- routers provide direct feedback to sending/receiving hosts with flows passing thru congested router - may indicate congestion level or explicitly set sending rate - ex: TCP ECN, ATM, DECbit protocols
73
New cards
Additive Increase Multiplicative Decrease (AIMD)
- used for TCP congestion control - senders can inc sending rate until packet loss (congestion) occurs, then dec sending rate on loss event - sawtooth behavior
74
New cards
Additive Increase
inc sending rate by 1 max segment size every RTT (until loss detected)
75
New cards
Multiplicative Decrease
- cut sending rate in half at each loss event - loss detected by triple duplicate ACK
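The sawtooth behavior falls out of a toy simulation with one step per RTT; the loss condition below is a stand-in for real congestion feedback, not anything TCP actually measures:

```python
MSS = 1.0  # measure cwnd in segments for simplicity

def aimd_step(cwnd, loss):
    if loss:
        return max(MSS, cwnd / 2)   # multiplicative decrease: halve on loss
    return cwnd + MSS               # additive increase: +1 MSS per RTT

# Pretend the path drops a packet whenever cwnd reaches 12 segments:
cwnd, trace = 8.0, []
for _ in range(10):
    cwnd = aimd_step(cwnd, loss=(cwnd >= 12))
    trace.append(cwnd)
# trace climbs to 12, halves to 6, climbs again: the AIMD sawtooth
```

Probing upward by a constant and backing off by a factor is what gives AIMD its stability and fair-sharing properties.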
76
New cards
AIMD Pros
- distributed and asynchronous behavior - optimizes congested flow rates network-wide - has desirable stability properties
77
New cards
TCP sending behavior
send cwnd bytes, wait RTT for ACKs, then send more bytes
78
New cards
TCP rate
- slow at first, increases exponentially until 1st loss event - cwnd = 1 max segment size (MSS), doubles every RTT - switches to linear increase when cwnd gets to half of its value before timeout
79
New cards
TCP rate formula
cwnd/RTT (bytes/sec)
80
New cards
cwnd
- "congestion window" - dynamically adjusted in response to observed network congestion - limits in-flight data: LastByteSent - LastByteAcked <= cwnd
81
New cards
Explicit Congestion Notification (ECN)
- field that notifies the receiver that there's congestion on the network - 2 bits in the IP header, marked by a network router
82
New cards
TCP fairness
- if K TCP sessions share same bottleneck link of bandwidth R, each should have average rate of R/K - TCP is fair under idealized assumptions --> same RTT, fixed number of sessions only in congestion avoidance
83
New cards
UDP fairness
- multimedia apps often use UDP --> don't want rate throttled by TCP congestion control; send audio/video at constant rate; tolerate packet loss - no "internet police" for congestion control
84
New cards
Fairness and parallel TCP connections
application can open multiple parallel connections between 2 hosts (ex: browsers)
85
New cards
sender
encapsulates segments into datagrams and passes them to link layer
86
New cards
receiver
delivers segments to transport layer protocol
87
New cards
hosts and routers
network layer protocols in every Internet device
88
New cards
routers
- examine header fields in all IP datagrams passing thru - move datagrams from input ports to output ports to transfer datagrams along end-end path
89
New cards
2 key network layer functions
1.) forwarding 2.) routing
90
New cards
forwarding
- move packets from router's input link to appropriate router output link - ex: process of getting thru single interchange
91
New cards
routing
- determine route taken by packets from source to destination - routing algorithms - ex: process of planning trip from source to destination
92
New cards
data plane
- local, per-router function - determines how datagram arriving on router input port is forwarded to router output port
93
New cards
control plane
- network-wide logic - determines how datagram is routed among routers along end-end path from source host to destination host
94
New cards
2 control plane approaches
1.) traditional routing algorithms 2.) software-defined networking (SDN)
95
New cards
traditional routing algorithms
implemented in routers
96
New cards
software-defined networking (SDN)
implemented in (remote) servers
97
New cards
per-router control plane
individual routing algorithm components in each and every router interact in the control plane
98
New cards
SDN control plane
remote controller computes and installs forwarding tables in routers
99
New cards
datagram transport "channel"
example services an app might want for datagrams from sender to receiver: - guaranteed delivery - guaranteed delivery with less than 40 msec delay - in-order datagram delivery - guaranteed min bandwidth to flow - security
100
New cards
Internet "best effort" service model
no guarantees on: - successful datagram delivery to destination - timing or order of delivery - bandwidth available to end-end flow