Definition of a Computer Network
A large number of separate but interconnected computers working together to do a job
Definition of a Network Protocol
The definition of the format and order of messages sent and received among network entities, plus the actions that must be taken on message transmission and receipt
Why are hardware and protocols necessary for networks?
Hardware is necessary for the physical work that is done with the data; protocols are necessary to ensure that data is shared correctly and efficiently so that work gets done.
How do modern-day networks provide access at scale?
Data centers combine many machines to increase capacity and efficiency
Internet structure
Network edge: clients and servers (often in data centers for the latter)
Access networks: wired and wireless communication links
Network core: interconnected routers (the true network of networks)
The internet is an interconnected network of networks
Different types of physical media for transmission
Cables (coaxial, fiber optic, twisted pair)
Wireless radio (terrestrial microwave, WiFi, satellite)
Wired media are effective, but require a physical connection
Wireless links are more flexible, but are subject to environmental factors (reflection, interference)
Packet-switching and store and forward systems
Packet switching: hosts break application-layer messages into packets which are forwarded from one router to the next
Store and forward: entire packet must arrive at router before it can be transmitted on next link
Introduces delay
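A quick worked example of that delay: with store and forward, an L-bit packet on a link of rate R bps takes L/R seconds per hop, so crossing N links takes about N * L/R seconds (ignoring propagation and queueing delay). A minimal Python sketch with made-up numbers:

# Store-and-forward delay sketch (hypothetical values; ignores propagation and queueing delay)
L = 8_000        # packet size in bits (1 KB)
R = 1_000_000    # link transmission rate in bits per second (1 Mbps)
N = 3            # number of links between source and destination

per_hop = L / R              # time to push the whole packet onto one link
end_to_end = N * per_hop     # each router must receive the full packet before forwarding it
print(f"per-hop: {per_hop * 1000:.0f} ms, end-to-end: {end_to_end * 1000:.0f} ms")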
Two key functions of local forwarding and global routing
Local forwarding: move arriving packets from router’s input link to appropriate router output link
Global routing: determine source to destination paths taken by packet using a routing algorithm
Packet switching vs Circuit switching
Packet switching: Data is broken into packets and then sent across connections to routers. Allows for more users and is efficient, but risks packet loss
Circuit switching: Dedicated pathway for connection between specific server and client. Consistent and more guaranteed performance, but more likely to idle and be less efficient
How is the internet structured overall?
The internet is a network of networks. Internet service providers create connections to specific network edge clients and connect easily within their own network. If required to leave their own network, ISPs have connections such as IXPs and peering links to connect.
Sources of packet delay + loss
Packets queue in a buffer at a router and may have to wait before being sent if they arrive faster than the output link can transmit them (queueing delay). If the arrival rate exceeds the output link capacity for long enough, the buffer fills and arriving packets are dropped (loss).
Packet throughput and propagation delay
Packet throughput: rate (bits/time) at which bits are delivered from sender to receiver (measured either instantaneously or as an average)
Propagation delay: delay due to the time a signal takes to cross the physical link (distance divided by the propagation speed in the medium)
Bottleneck link
Link on end-end path that constrains end-end throughput
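A small worked example (hypothetical rates and distances): end-to-end throughput is limited by the bottleneck link, and propagation delay is the link length divided by the signal's propagation speed in the medium.

# Bottleneck throughput and propagation delay (hypothetical values)
R_server = 10_000_000   # server's access link, 10 Mbps
R_client = 1_000_000    # client's access link, 1 Mbps
throughput = min(R_server, R_client)   # constrained by the slower (bottleneck) link

distance = 3_000_000    # physical link length in meters (3000 km)
speed = 2e8             # propagation speed in the medium, roughly 2 * 10^8 m/s
prop_delay = distance / speed
print(f"throughput: {throughput / 1e6:.0f} Mbps, propagation delay: {prop_delay * 1000:.0f} ms")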
Why is layering necessary?
The internet is a very complex system and having explicit structure to systems eases some of that confusion. Modularization eases maintenance as changes only affect portions of the overall system
Interface between layers
The way two adjacent layers communicate with one another to get their job done
How is a protocol used between the same layers of sending and receiving?
Layer n on the sender and layer n on the receiver speak a common "layer n protocol"; each layer talks to its peer layer on the other machine as if directly, while actually passing data through the layers below
Encapsulation and Layering
The addition of headers and trailers at each layer, which allows proper transmission and processing of the data while preserving the original payload
OSI Model Layers and their functionality
Application: Provides application services
Presentation: Encrypts, encodes, and compresses usable data
Session: Establishes, manages, and ends session between end users
Transport: Transmits data using TCP & UDP (and more)
Network: Assigns global addresses and finds best routes between different networks
Data Link: Assigns local addresses and interfaces + delivers data
Physical: Encodes signals, cabling and connectors
TCP/IP Layers and their Functionality
Link layer: Lowest layer, describes what links need to do to support internet layer
Internet layer: Permits hosts to inject packets into any network and have them travel independently to the destination. Defines the packet format and the Internet Protocol (IP)
Transport layer: Uses 2 end-to-end transport protocols TCP and UDP
Application layer: handles all higher level protocols
TCP/IP model vs OSI Model
The OSI model suffered from poor design, poor implementations, and bad timing (TCP/IP was already taking hold when it arrived).
The TCP/IP model does not clearly distinguish or describe its concepts, but overall it fared better than the OSI model
Key system calls for socket programming
socket() creates a socket for the process to connect to a port
bind() connects a socket to a port
listen() used by a server to mark its socket as passive and wait for clients (incoming connections are then accepted with accept())
connect() used by client program to connect to a listening server
send() and recv() are used to exchange data through buffers (sketched below)
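A minimal Python sketch of how these calls fit together (an echo server and client on localhost; the port number 50007 and buffer sizes are arbitrary choices for the example):

# --- server.py: socket(), bind(), listen(), accept(), recv(), send() ---
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # create a TCP socket
srv.bind(("127.0.0.1", 50007))                           # attach the socket to an address and port
srv.listen(1)                                            # mark it passive: wait for clients
conn, addr = srv.accept()                                # accept one incoming connection
data = conn.recv(1024)                                   # read up to 1024 bytes from the client
conn.send(data)                                          # echo the same bytes back
conn.close()
srv.close()

# --- client.py: connect() to the server, send() a request, recv() the reply ---
import socket

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 50007))                        # reach the listening server
cli.send(b"hello")                                       # push bytes into the send buffer
print(cli.recv(1024))                                    # read the echoed reply
cli.close()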
Client-server paradigm
Servers act as an always on host with a consistent IP address
Clients communicate with servers intermittently with dynamic IP addresses and do not directly communicate with one another
Peer-peer architecture
There are no always-on servers; arbitrary end systems communicate directly. Peers have no consistent IP addresses, and the system is self-scaling because new peers bring new service demand but also new service capacity
How do web applications work over HTTP protocol?
The HyperText Transfer Protocol is used by clients to request, receive, and display web objects. It is used by servers to send those objects when they are requested.
Non-persistent vs persistent HTTP
Non-persistent HTTP sends at most one object per TCP connection and requires multiple connections for multiple objects. Persistent HTTP sends multiple objects over a single TCP connection.
How is round trip time measured with HTTP
RTT is the time for a small packet of data to travel from a client to a server and back, measured from when the packet is sent to when the response is first received
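As a rough comparison with hypothetical numbers (ignoring transmission time): non-persistent HTTP costs about 2 RTTs per object (one RTT for TCP setup, one for the request/response), while persistent HTTP pays the setup once and then about 1 RTT per object.

# Rough HTTP response-time model (hypothetical values; ignores transmission time)
rtt = 0.05       # round-trip time in seconds (50 ms)
objects = 10     # number of objects to fetch

non_persistent = objects * 2 * rtt   # every object needs its own TCP setup plus request/response
persistent = rtt + objects * rtt     # one TCP setup, then one RTT per object on the open connection
print(f"non-persistent: {non_persistent:.2f} s, persistent: {persistent:.2f} s")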
How are cookies used?
Cookies allow the usually “stateless” HTTP to recall users. A cookie is an identifier the server assigns to a user; the client includes it in later HTTP requests, and the server's backend uses it to look up user-specific state.
Use of proxy servers and web caching
Proxy servers reduce the number of requests that reach origin servers. They hold a web cache that can return a requested object to the client without contacting the origin server. They act as a middleman: both a client and a server.
Content Distribution Networks
Rather than a single mega server, multiple copies are stored across multiple geographically distributed sites. This prevents a single point of failure and reduces response time
Goals of HTTP/2 Deployment
Decrease delay in multi-object HTTP requests by interleaving frames from different objects, so smaller objects are not stuck waiting behind large ones (mitigating head-of-line blocking)
SMTP protocol
Simple Mail Transfer Protocol, used to send email messages. Runs over TCP with direct transfer between mail servers
Major components of E-mail
User agents, mail servers, SMTP protocol.
Mail servers contain mailboxes for messages and mail queues for outgoing messages. Mail servers act as both clients and servers
IMAP Protocol
Internet Mail Access Protocol, used by the final user to retrieve email from the mail server and manage the messages stored on the server
Domain Name System (DNS)
A distributed database of name servers that translates hostnames to IP addresses. It is an application-layer protocol. The hierarchy mixes organization-specific DNS servers with top-level domain servers responsible for large domains (.edu, .gov, etc.)
Local domain name server
A server that is not strictly part of the DNS hierarchy; most internet service providers run one and use it as a proxy. It keeps a cache of recent name-to-address translation pairs to speed up lookups
Authoritative DNS server
Organization’s own DNS server providing IP mappings for its own hosts; it can be run by the organization itself or by a service provider
Resource Records
Stored in DNS distributed database
RR Format:
(name, value, type, ttl)
Resource Record types
type=A, name is hostname & value is ip address
type=NS, name is domain & value is hostname of authoritative DNS
type=CNAME, name is alias name & value is canonical name
type=MX, name is domain & value is the name of the mail server associated with that domain
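Illustrative records in the (name, value, type, ttl) format above (names and addresses are hypothetical; 192.0.2.1 is a documentation-only address):

(example.com, 192.0.2.1, A, 3600)            ; hostname to IP address
(example.com, ns1.example.com, NS, 3600)     ; domain to authoritative DNS server
(www.example.com, example.com, CNAME, 3600)  ; alias to canonical name
(example.com, mail.example.com, MX, 3600)    ; domain to mail server name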
Why does DNS use UDP
UDP is faster and has less overhead compared to TCP. This is ideal for a service that needs to work as fast and as often as DNS
How does video streaming compensate for network-added delay?
Client-side buffering and playout delay absorb network-added delay and jitter, keeping playback consistent
Main challenges of video streaming
Server-to-client bandwidth varies over time, leading to inconsistencies, and packet loss/delay can lead to either delayed playout or poor video quality
Scenario 1: Utopia/Unrestricted Simplex Protocol
Data transmission is one direction, no transmission errors, sender/receiver can process all data, data is constantly sent. Really great, but not realistic
Scenario 2: Simplex stop-and-wait protocol
Receiver cannot process infinite data, so we send the data one frame at a time and wait for an acknowledgement. Data transmission is one-directional. Very slow, but shows the fundamentals
Scenario 3: Simplex protocol for a noisy channel
Data/ACKs can be lost in the channel, so a timeout mechanism is now needed to resend data. A 1-bit sequence number suffices since only two successive frames need to be distinguished. More accurate to real life
Scenario 4: One bit sliding window protocol
Communication is two-way and ACKs are now numbered to prevent early time outs. It is a fully functional system and can recover from garbled frames/ACKs but is not best utilization
Scenario 5: Addition of pipelining
Increase the sequence/ACK number size to something like 3 bits so that more frames can be in flight at a time, which makes the system more efficient overall. Maximum in-flight frames is 2^n - 1, where n = # of sequence-number bits. More efficient than the previous scenarios, but a single error can cause many frames to be dropped and retransmitted
Scenario 6: Divide window size between sender and receiver
The receiver can accept and buffer more frames (out of order, within its own window), making the protocol safer against errors and unnecessary data loss. There is a tradeoff between bandwidth and data link layer buffer space.
Sum of the two window sizes <= 2^n
Sliding window protocol
A method to track which data needs to be sent/received during a data transfer by labeling frames with sequence numbers and moving the “window” forward as data is received correctly and in order. Used in scenarios 4, 5, and 6
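A minimal Python sketch of the sender-side bookkeeping in a Go-Back-N style sliding window (the network is stubbed out with prints; function and variable names are made up for the example):

# Go-Back-N style sender bookkeeping (sketch only; "sending" is just a print)
SEQ_BITS = 3
WINDOW = 2**SEQ_BITS - 1   # at most 2^n - 1 frames may be unacknowledged at once

base = 0        # oldest unacknowledged sequence number
next_seq = 0    # sequence number for the next new frame

def send_frame(seq, data):
    print(f"send frame {seq % 2**SEQ_BITS}: {data!r}")   # stand-in for putting a frame on the wire

def on_data_from_network_layer(data):
    global next_seq
    if next_seq - base < WINDOW:     # room left in the window?
        send_frame(next_seq, data)
        next_seq += 1
    # else: hold the data until the window slides forward

def on_ack(ack_seq):
    global base
    base = max(base, ack_seq + 1)    # cumulative ACK: everything up to ack_seq is confirmed

def on_timeout():
    for seq in range(base, next_seq):   # Go-Back-N: resend every outstanding frame
        send_frame(seq, "retransmission")

for chunk in ["a", "b", "c"]:
    on_data_from_network_layer(chunk)
on_ack(1)   # frames 0 and 1 acknowledged; the window slides forward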
Calculation for expected channel usage
Utilization (%) = time the channel is in use / (time in use + idle time) * 100
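For example, with stop-and-wait the channel is busy for one frame's transmission time and then sits idle for roughly a round trip while waiting for the ACK (hypothetical numbers below):

# Stop-and-wait channel utilization (hypothetical values)
L = 12_000       # frame size in bits
R = 1_000_000    # link rate in bits per second
rtt = 0.030      # round-trip time in seconds

transmit = L / R                                   # time the channel is actually in use per frame
utilization = transmit / (transmit + rtt) * 100    # remainder of each cycle is idle, waiting for the ACK
print(f"utilization: {utilization:.1f}%")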
General Issues of Transport Layer
User oriented: programmers communicate with it
Negotiation of quality and types of service
Guarantee service
Addressing (knowing what to talk to)
Storage capacity of subnet
Dynamic flow control
Congestion control
Connection establishment
UDP
User Datagram Protocol, a barebones method of connecting processes and ports which does not promise anything and has little overhead. It uses IP to help connect to other programs across the internet by handling the addressing
TCP
Transmission Control Protocol promises many features, such as point-to-point communication, a reliable in-order byte stream, flow control, and congestion control. It uses IP to help connect to other programs across the internet by handling the addressing
How are ports used in UDP and TCP
Ports are used as consistent points of access into machines that processes can request to without having to worry about PID or other identifiers. They are like the mailboxes of computers
Sliding window protocol with TCP
TCP uses cumulative ACKs to indicate the next byte it expects, along with an advertised window size that tells the sender how much data the receiver can take in on the next exchange between machines.
TCP Header Fields
Source port number
Destination port number
Sequence number
Acknowledgement number
Application data (variable length)
Internet checksum
TCP options (variable length)
receive window (for flow control)
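A small sketch that packs and unpacks the fixed 20-byte TCP header with Python's struct module, to show where these fields sit (all field values here are fabricated for the demo):

# Parse the fixed 20-byte TCP header (fabricated example values, for illustration only)
import struct

header = struct.pack("!HHIIBBHHH",
                     12345,      # source port
                     80,         # destination port
                     1000,       # sequence number
                     2000,       # acknowledgement number
                     5 << 4,     # data offset (5 x 32-bit words), reserved bits zero
                     0x18,       # flags (PSH + ACK)
                     65535,      # receive window (flow control)
                     0,          # checksum (left 0 here; normally the Internet checksum)
                     0)          # urgent pointer

src, dst, seq, ack, offset, flags, window, checksum, urg = struct.unpack("!HHIIBBHHH", header)
print(src, dst, seq, ack, window)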
Connection set-up and termination for TCP
TCP uses a 3-way handshake (SYN, SYN-ACK, ACK) to establish a connection, agree on initial sequence numbers, and set up state on both sides. There are two kinds of termination: a graceful close with a 4-segment exchange (each side sends a FIN that the other acknowledges), or an abrupt disconnection with a single message (RST)
Retransmission timers for TCP
Retransmission timers are set by estimating the RTT of a connection (and its variation) so the timeout interval for that connection is neither too short nor too long.
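A minimal sketch of the usual estimator: an exponentially weighted moving average of sampled RTTs plus a margin for their deviation, in the spirit of RFC 6298.

# RTT estimation and retransmission timeout (EWMA)
ALPHA, BETA = 0.125, 0.25

estimated_rtt = None
dev_rtt = 0.0

def update_timeout(sample_rtt):
    """Fold one new RTT measurement into the estimate and return the timeout."""
    global estimated_rtt, dev_rtt
    if estimated_rtt is None:
        estimated_rtt = sample_rtt
        dev_rtt = sample_rtt / 2
    else:
        dev_rtt = (1 - BETA) * dev_rtt + BETA * abs(sample_rtt - estimated_rtt)
        estimated_rtt = (1 - ALPHA) * estimated_rtt + ALPHA * sample_rtt
    return estimated_rtt + 4 * dev_rtt      # timeout = EstimatedRTT + 4 * DevRTT

for s in [0.100, 0.120, 0.090]:             # hypothetical RTT samples in seconds
    print(f"timeout: {update_timeout(s) * 1000:.1f} ms")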
Congestion window
A window maintained by the TCP sender that limits how much unacknowledged data it puts into the network, based on perceived congestion
Flow control window
A window advertised by the receiver stating how much data it can take in; a TCP sender cannot send more data than this window allows
slow-start
TCP sends very little data at first, but the congestion window grows exponentially (roughly doubling every RTT) as ACKs come back. Once a threshold is reached, growth switches to additive increase
AIMD
Additive increase, multiplicative decrease. The sending rate (congestion window) is increased by 1 segment per RTT until loss is detected; once loss is detected, the sending rate is cut in half. This creates a sawtooth-like pattern and makes the system more stable
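A toy trace of how the congestion window evolves under slow-start followed by AIMD (per-RTT granularity; the loss rounds and numbers are made up, and real TCP counts bytes, not segments):

# Toy slow-start + AIMD congestion-window trace (made-up loss events)
cwnd = 1.0            # congestion window, in segments
ssthresh = 16.0       # slow-start threshold
trace = []

for rtt in range(20):
    trace.append(cwnd)
    if rtt in (8, 15):               # pretend losses are detected on these rounds
        ssthresh = cwnd / 2          # multiplicative decrease
        cwnd = max(1.0, cwnd / 2)    # halve the rate (fast-recovery style rather than reset to 1)
        continue
    if cwnd < ssthresh:
        cwnd *= 2                    # slow start: exponential growth each RTT
    else:
        cwnd += 1                    # congestion avoidance: additive increase

print([round(c, 1) for c in trace])  # shows the characteristic sawtooth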
Motivation to develop QUIC
Speed compared to HTTP over TCP: QUIC is built on top of UDP and tries to provide many of the same guarantees while being much faster (e.g., fewer RTTs for connection establishment)
QUIC vs HTTP
Both are application layer protocols, QUIC is on UDP while HTTP is on TCP. QUIC adopts many similar rules to HTTP and TCP (congestion control, connection establishment, etc.) but tries to innovate and be faster