FINALS- Distributed Computing


48 Terms

1
New cards

Virtual Processors

Built in software, on top of physical processors.

2
New cards

Processor

Provides a SET OF INSTRUCTIONS along with the capability of automatically executing a series of those instructions.

3
New cards

Thread

A MINIMAL SOFTWARE PROCESSOR in whose context a series of instructions can be executed. Saving a thread context implies stopping the current execution and saving all the data needed to continue the execution at a later stage.

4
New cards

Process

A SOFTWARE PROCESSOR — in whose context — ONE OR MORE THREADS MAY BE EXECUTED. Executing a thread means executing a series of instructions in the context of the thread.

5
New cards

Introduction to threads

  • A thread is the unit of execution within a process, performing a task.

  • A process can have a single thread or multiple threads.

  • When a process starts, memory and resources are allocated, which are shared by all its threads.

  • In a SINGLE-THREADED process, the process and the thread are the same.

  • A SINGLE-THREADED process can perform one task at a time.

  • A MULTI-THREADED process can perform multiple tasks at the same time.
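The last two bullets can be sketched in Python; the function and variable names here are illustrative, not from the card. Several threads run inside one process and share its memory (the `results` list):

```python
import threading

results = []
lock = threading.Lock()

def task(name):
    # Each thread executes this function inside the shared
    # address space of the single owning process.
    with lock:
        results.append(name)

# A multi-threaded process: four threads perform tasks concurrently.
threads = [threading.Thread(target=task, args=(f"task-{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for every thread to finish

print(sorted(results))
```

Because all threads append to the same `results` list, the shared-memory point from the card is visible directly: no message passing is needed between them.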

6
New cards

Introduction to threads - How can we make a processor run multiple tasks truly in parallel?

  • In relation to hardware, add more CPUs (servers with multiple CPU sockets).

  • All modern processors are multi-core processors, meaning a single physical processor has more than one CPU in it.

  • MULTI-CORE PROCESSORS are capable of running more than one process or thread at the same time. For example, a quad-core processor has 4 CPU cores, so it can run 4 processes or threads at the same time in parallel (PARALLELISM).

7
New cards

Contexts

Allow pausing and resuming without losing data

8
New cards

Processor Context

The minimal collection of values STORED IN THE REGISTERS OF A PROCESSOR, USED FOR THE EXECUTION OF A SERIES OF INSTRUCTIONS. (e.g. stack pointer, addressing registers, program counter.)

9
New cards

Thread Context

The minimal collection of values STORED IN REGISTERS AND MEMORY, USED FOR THE EXECUTION OF A SERIES OF INSTRUCTIONS. (e.g. processor context, thread state — running/waiting/suspended.)

10
New cards

Process Context

The minimal collection of values STORED IN REGISTERS AND MEMORY, USED FOR THE EXECUTION OF A THREAD. (eg. thread context, but now also at least memory management information such as Memory Management Unit (MMU) register values.)

11
New cards

Observations

  1. Threads share the same address space. THREAD CONTEXT SWITCHING can be done entirely independently of the operating system.

  2. Process switching is generally (somewhat) MORE EXPENSIVE as it involves getting the OS in the loop. i.e., trapping to the kernel.

  3. Creating and destroying THREADS is much CHEAPER than doing so for processes.

12
New cards

THREADS

Lightweight units of execution that allow efficient multitasking within a process

13
New cards

PROCESSES

Heavier, independent units of execution that require more overhead for creation, switching, and destruction.

14
New cards

WHY USE THREADS?

  • AVOID NEEDLESS BLOCKING - a single-threaded process will block when doing I/O; in a MULTI-THREADED PROCESS, the operating system can SWITCH THE CPU TO ANOTHER THREAD IN THE PROCESS.

  • EXPLOIT PARALLELISM - the THREADS IN A MULTI-THREADED PROCESS CAN BE SCHEDULED TO RUN IN PARALLEL on a multiprocessor or multicore processor.

  • AVOID PROCESS SWITCHING - structure large applications NOT AS A COLLECTION OF PROCESSES, but through MULTIPLE THREADS.
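A minimal sketch of the first bullet (avoiding needless blocking); the `slow_io` function only simulates a blocking call with `sleep`, it is not a real I/O operation. While one thread blocks, the main thread keeps computing:

```python
import threading
import time

def slow_io(results):
    # Simulate a blocking I/O call (e.g. a disk read or network request).
    time.sleep(0.2)
    results.append("io-done")

results = []
worker = threading.Thread(target=slow_io, args=(results,))

worker.start()
# While the worker blocks on "I/O", this thread does useful work
# instead of the whole process sitting idle.
busy_work = sum(range(100_000))
worker.join()  # wait for the blocking operation to finish

print(results, busy_work)
```

In a single-threaded process the `sum` would only start after the "I/O" finished; with two threads the OS overlaps them.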

15
New cards

AVOID PROCESS SWITCHING

  • avoid EXPENSIVE context switching

  • Trade-offs:

    • Threads use the same address space: MORE PRONE TO ERRORS.

    • No support from OS/HW to protect threads from using each other’s memory.

    • Thread context switching may be faster than process context switching.

16
New cards

Threads And Operating Systems

Main Issue:

Should an OS kernel provide threads, or should they be implemented as a user-level package?

User-space Solution

  • Threads are managed by a LIBRARY/USER-LEVEL PACKAGE that the kernel is unaware of:

    • All operations can be completely handled WITHIN A SINGLE PROCESS → Implementations can be extremely efficient.

    • All services provided by the kernel are done ON BEHALF OF THE PROCESS IN WHICH A THREAD RESIDES → If the kernel decides to block a thread, the entire process will be blocked.

    • Threads are used when there are lots of external events: THREADS BLOCK ON A PER-EVENT BASIS → If the kernel can’t distinguish threads, how can it support signaling events to them?

Kernel Solution

  • To HAVE THE KERNEL CONTAIN THE IMPLEMENTATION OF A THREAD PACKAGE. This means that all thread operations are carried out as system calls.

    • Operations that block a thread are no longer a problem: the KERNEL SCHEDULES ANOTHER AVAILABLE THREAD within the same process.

    • Handling external events is simple: The KERNEL(WHICH CATCHES ALL EVENTS) schedules the thread associated with the event.

    • The problem is (or used to be) the LOSS OF EFFICIENCY due to the fact that each thread operation required a trap (system call that causes the CPU to switch from user mode to kernel mode) to the kernel.

**** CONCLUSION-BUT: Try to MIX USER-LEVEL and KERNEL-LEVEL THREADS into a single concept; however, the performance gain has not turned out to outweigh the increased complexity.

17
New cards

Lightweight Processes

Basic Idea

  • Introduce a TWO-LEVEL THREADING APPROACH: LIGHTWEIGHT PROCESSES (LWPs) that can execute user-level threads.

Principle Operation

  • A user-level thread does a system call → THE LWP THAT IS EXECUTING THAT THREAD BLOCKS. The thread remains BOUND to the LWP.

  • The kernel CAN SCHEDULE ANOTHER LWP HAVING A RUNNABLE THREAD BOUND TO IT. Note: this thread can switch to ANY other runnable thread currently in user space.

  • A thread calls a blocking user-level operation → do a context switch to a runnable thread (which is then bound to the same LWP).

  • When there are no threads to schedule, an LWP may remain idle, and may even be removed (destroyed) by the kernel.

18
New cards

Using Threads At the Client Side

Multithreaded Web Client

  • Hiding network latencies:

    • The web browser scans an incoming HTML page and finds that MORE FILES NEED TO BE FETCHED.

  • EACH FILE IS FETCHED BY A SEPARATE THREAD, each doing a (blocking) HTTP request.

  • As files come in, the browser displays them.

Multiple request-response calls to other machines (RPC)

  • A client does several calls at the same time, each one by a different thread.

  • It then waits until all results have been returned.

  • Note: If calls are to different servers, we may have a linear speed-up.

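The multiple-RPC pattern above can be sketched with a thread pool; `rpc_call` is a stand-in for a real blocking request-response call, and the server names are made up:

```python
from concurrent.futures import ThreadPoolExecutor

def rpc_call(server):
    # Stand-in for a blocking request-response call to a remote server.
    return f"reply-from-{server}"

servers = ["s1", "s2", "s3"]

# One thread per call; the client then waits until all results are in.
with ThreadPoolExecutor(max_workers=len(servers)) as pool:
    replies = list(pool.map(rpc_call, servers))

print(replies)
```

If the three calls really went to three different servers, they would proceed concurrently, which is where the (near) linear speed-up noted on the card comes from.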
19
New cards

Using threads at the Server Side

Improve Performance

  • Starting a thread is cheaper than starting a new process.

  • Having a single-threaded server prohibits simple scale-up to a MULTIPROCESSOR SYSTEM.

  • As with clients: HIDE NETWORK LATENCY by reacting to the next request while the previous one is being replied to.

Better Structure

  • Most servers have high I/O demands. Using simple, WELL-UNDERSTOOD BLOCKING CALLS simplifies the overall structure.

  • Multithreaded programs tend to be smaller and easier to understand due to SIMPLIFIED FLOW OF CONTROL.

20
New cards

Why Multithreading is Popular:

ORGANIZATION

21
New cards

VIRTUALIZATION

Observation | Virtualization is important:

  • Hardware CHANGES FASTER than software

  • Ease of PORTABILITY and code migration

  • ISOLATION of failing or attacked components

Principles: Mimicking Interfaces

  • Simulating HARDWARE OR SOFTWARE interfaces in a virtual environment.

22
New cards

Mimicking Interfaces

Interfaces at three different levels

  • Instruction set architecture: the set of MACHINE INSTRUCTIONS, with two subsets:

    • PRIVILEGED instructions: allowed to be executed only by the operating system.

    • GENERAL instructions: can be executed by any program.

  • SYSTEM CALLS as offered by an operating system.

  • LIBRARY CALL- known application programming interface (API)

23
New cards

WAYS OF VIRTUALIZATION

Differences:

a) Platform-independent; a separate set of instructions - an interpreter/emulator, RUNNING ATOP AN OS

b) Has DIRECT ACCESS TO HARDWARE; Low-level instructions, along with bare-bones MINIMAL OS INSTRUCTIONS.

c) RUNS ON TOP OF AN EXISTING OS; may be slower than native VMM due to an EXTRA OS LAYER.

24
New cards

ZOOMING INTO VMs: PERFORMANCE

Refining the Organization

  • PRIVILEGED INSTRUCTION: if and only if, when executed in user mode, it causes a TRAP TO THE OS (switch control from user to kernel mode for the OS to perform a PRIVILEGED OPERATION on behalf of the user program).

  • NONPRIVILEGED INSTRUCTIONS: the rest

  • SPECIAL INSTRUCTIONS

    • Control-sensitive instruction: may affect the configuration of a machine, may trap (e.g. one affecting relocation register or interrupt table.)

    • Behavior-sensitive instruction: effect is partially determined by the system context (e.g., POPF sets an interrupt-enabled flag, but only in system mode).

25
New cards

Condition for Virtualization

Necessary Condition

  • For any conventional computer, a virtual machine monitor may be constructed if the set of SENSITIVE INSTRUCTIONS for that computer is a SUBSET OF THE SET OF PRIVILEGED INSTRUCTIONS.

Problem: THE CONDITION IS NOT ALWAYS SATISFIED

  • There may be sensitive instructions that are executed in user mode without causing a trap to the operating system.

Solutions

  • EMULATE all instructions

  • WRAP NONPRIVILEGED SENSITIVE INSTRUCTIONS to divert control to the VMM

  • PARAVIRTUALIZATION: modify the guest OS, either by preventing nonprivileged sensitive instructions, or by making them nonsensitive (i.e., changing the context).

    • Guest OS users make hypercalls for privileged operations.

26
New cards

Virtual Machines And Cloud Computing

THREE TYPES OF CLOUD SERVICES

  1. INFRASTRUCTURE-AS-A-SERVICE (IaaS): covering the basic INFRASTRUCTURE.

    • Instead of renting out physical machines, a cloud provider rents out VMs (or VMMs) that may be SHARING A PHYSICAL MACHINE WITH OTHER CUSTOMERS → ALMOST COMPLETE ISOLATION BETWEEN CUSTOMERS (although performance isolation may not be reached).

  2. PLATFORM-AS-A-SERVICE (PaaS): covering SYSTEM-LEVEL services

  3. SOFTWARE-AS-A-SERVICE (SaaS): containing actual APPLICATIONS

27
New cards

Client-Server Interaction

Distinguish application-level and middleware-level solutions

APPLICATION LEVEL

  • Each new application must implement its OWN PROTOCOL LOGIC, leading to higher development effort and less reuse.

MIDDLEWARE LEVEL

  • Promotes code reuse, interoperability, and simplifies development by ABSTRACTING NETWORK details from the application

28
New cards

ex: The X Window System

Basic Organization

  1. The client app (on a remote machine) uses Xlib to communicate.

  2. It sends GUI commands over the X protocol to the X server.

  3. The X server renders graphics and sends back user input (keyboard/mouse) events.

X client and server

  • The application acts as a client to the X-kernel, the latter running as a server on the client’s machine.

29
New cards

Improving X

Practical Observation

  • There is often NO CLEAR SEPARATION between application logic and user-interface commands (they are mixed).

  • Applications tend to operate in a tightly SYNCHRONOUS manner with the X kernel (affects performance: the application waits for responses).

Alternative Approaches

  • Let applications control the display completely, up to the pixel level (e.g., VNC)

  • Provide only a few high-level display operations (dependent on local video drivers), allowing more efficient display operations.

30
New cards

Client-side software

Generally tailored for distribution transparency

  • Access transparency: Conceal resource access using client-side stubs for RPCs

  • Location/ migration transparency: conceal resource location and let client-side software keep track of the actual location

Replication transparency: multiple invocations handled by the client stub:

  • Failure transparency: can often be placed only at the client (mask server and communication failures).

31
New cards

SERVER: GENERAL ORGANIZATION

Basic model

  • A process IMPLEMENTING A SPECIFIC SERVICE on behalf of a collection of clients. It waits for an incoming request from a client and subsequently ENSURES THAT THE REQUEST IS TAKEN CARE OF, after which it waits for the next incoming request.

32
New cards

CONCURRENT SERVERS

ITERATIVE SERVER

  • The server handles the request before attending to the next request.

CONCURRENT SERVER

  • Uses a dispatcher, which picks up an incoming request that is then passed on to a SEPARATE THREAD/PROCESS. It can handle multiple requests in parallel (at the same time).

Observation

  • Concurrent servers are the norm: they can EASILY HANDLE MULTIPLE REQUESTS, notably in the presence of blocking operations (to disks or other servers).
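The dispatcher/worker split can be sketched with a queue and a pool of worker threads; this is a toy in-process model, not a real network server, and all names are illustrative:

```python
import queue
import threading

requests = queue.Queue()
handled = []
lock = threading.Lock()

def worker():
    # Each worker handles requests independently, so a blocking
    # operation in one request does not stall the others.
    while True:
        req = requests.get()
        if req is None:  # shutdown signal from the dispatcher
            break
        with lock:
            handled.append(f"handled-{req}")
        requests.task_done()

# The dispatcher: start a small pool and hand each incoming request to it.
workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()
for i in range(6):
    requests.put(i)       # "incoming requests"
requests.join()           # wait until every request is taken care of
for _ in workers:
    requests.put(None)    # tell each worker to stop
for w in workers:
    w.join()

print(sorted(handled))
```

An iterative server would be the same loop with a single worker: each request fully handled before the next is picked up.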

33
New cards

Contacting a server

Observation:

Most services are tied to a specific port.

DYNAMICALLY ASSIGNING AN ENDPOINT


Daemon Registry

  1. The client asks the daemon on the server machine for an available endpoint

  2. The daemon provides the client with the endpoint, and the client uses it to request service from the appropriate server.

  3. The server registers its endpoint with the daemon ( which maintains an endpoint table).

Super- Server Model

  1. The client sends a request for service to a super-server.

  2. The super-server dynamically creates or activates a specific server.

  3. The specific server takes over and continues servicing the client directly.

DIAGRAM (a) USES A DAEMON to manage and register server endpoints, where the SERVER IS ALWAYS RUNNING AND the CLIENT REQUESTS THE ENDPOINT DYNAMICALLY. This approach suits PERSISTENT SERVICES but can be resource-intensive.

In contrast, DIAGRAM (b) employs a SUPER-SERVER that activates or spawns a specific server only when a client request arrives, making it more resource-efficient and scalable, ideal for ON-DEMAND or multi-service environments.

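The daemon-registry steps can be sketched as a tiny lookup table; the service name and port below are hypothetical examples, and a real daemon would of course speak over the network rather than via method calls:

```python
# A toy daemon registry: servers register endpoints, clients look them up.
class Daemon:
    def __init__(self):
        self.endpoints = {}  # service name -> endpoint (e.g. a port number)

    def register(self, service, endpoint):
        # Step 3 on the card: a server registers its endpoint
        # in the daemon's endpoint table.
        self.endpoints[service] = endpoint

    def lookup(self, service):
        # Steps 1-2: the client asks for, and receives, the endpoint.
        return self.endpoints[service]

daemon = Daemon()
daemon.register("time-service", 8037)  # hypothetical service and port
print(daemon.lookup("time-service"))
```

The client then uses the returned endpoint to contact the actual server directly; the daemon is only involved in the lookup.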
34
New cards

OUT-OF-BAND COMMUNICATION

ISSUE:

  • Is it possible to interrupt a server once it has accepted (or is in the process of accepting) a service request?

SOLUTION 1: USE A SEPARATE PORT FOR URGENT DATA

  • The server has a separate thread/process for urgent messages.

  • An urgent message comes in → the associated request is put on hold.

  • NOTE: we require OS support for priority-based scheduling.

SOLUTION 2: USE FACILITIES OF THE TRANSPORT LAYER

  • Example: TCP ALLOWS FOR URGENT MESSAGES in the same connection.

  • Urgent messages can be caught using OS signaling techniques.

35
New cards

SERVER AND STATE

STATELESS SERVERS

  • NEVER KEEP ACCURATE INFORMATION about the status of a client after having handled a request:

    • DON’T record whether a file has been opened (simply close it again after access)

    • DON’T promise to invalidate a client’s cache

    • DON’T Keep track of your clients

  • CONSEQUENCE

    • Clients and servers are completely independent

    • State inconsistencies due to client or server crashes are reduced

    • Possible loss of performance because, for example, a server cannot anticipate client behavior (think of prefetching file blocks)

STATEFUL SERVERS

  • Keeps TRACK OF THE STATUS of its clients:

    • Record that a file has been opened, so that prefetching can be done.

    • knows which data a client has cached, and allows clients to keep local copies of shared data.

  • Observation

    • The performance of a stateful server can be extremely high, provided clients are allowed to keep local copies. As it turns out, reliability is often not a major problem. However, this may be harder to scale.

COMPARISON

STATELESS SERVERS:

  • DO NOT RETAIN ANY INFORMATION about client interactions between requests, treating each request as independent and self-contained.

  • Highly scalable, easier to manage, and fault-tolerant: ideal for RESTful APIs and services such as DNS, although cached data may become inconsistent.

STATEFUL SERVERS:

  • MAINTAIN SESSION INFORMATION across requests, useful for more personalized and context-aware interactions, such as in online banking, shopping carts, or gaming.

  • harder to scale and less resilient to failures, often requiring more complex infrastructure to manage session state.
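The contrast on this card can be sketched with two toy file-read handlers; the store contents and client IDs are made up, and real servers would obviously hold files, not strings:

```python
# Stateless: each request is self-contained; the server keeps no client record.
def stateless_read(store, path, offset, length):
    # The client supplies everything needed (path, offset), so the server
    # need not remember any earlier "open" call.
    return store[path][offset:offset + length]

# Stateful: the server records opened files per client (enabling e.g. prefetching).
class StatefulServer:
    def __init__(self, store):
        self.store = store
        self.open_files = {}  # client id -> path (server-side session state)

    def open(self, client, path):
        self.open_files[client] = path

    def read(self, client, offset, length):
        path = self.open_files[client]  # depends on remembered state
        return self.store[path][offset:offset + length]

store = {"/a.txt": "hello world"}
print(stateless_read(store, "/a.txt", 0, 5))

srv = StatefulServer(store)
srv.open("c1", "/a.txt")
print(srv.read("c1", 6, 5))
```

If the stateful server crashes, the `open_files` table is lost and clients must re-open; the stateless handler has nothing to lose, which is exactly the reduced-inconsistency point above.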

36
New cards

THREE DIFFERENT TIERS

COMMON ORGANIZATION

Per tier:

  1. Client requests first hit a logical switch or load balancer. The switch distributes the requests among multiple servers.

  2. Each server processes business logic, performs computations, or prepares requests for the backend.

  3. Handles data persistence, retrieval and updates.

Crucial Element

The FIRST TIER is generally responsible for passing requests to an appropriate server: REQUEST DISPATCHING

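First-tier request dispatching can be sketched as a round-robin switch; the server names are illustrative, and round-robin is just one possible dispatching policy (a real switch might use load or content instead):

```python
import itertools

# A first-tier switch sketch: round-robin dispatching over the second tier.
servers = ["app-1", "app-2", "app-3"]
rr = itertools.cycle(servers)

def dispatch(request):
    # The switch picks the next server in turn and forwards the request.
    target = next(rr)
    return (request, target)

assignments = [dispatch(f"req-{i}") for i in range(6)]
print(assignments)
```

With three servers, requests 0 and 3 land on `app-1`, 1 and 4 on `app-2`, and so on, spreading the load evenly.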
37
New cards

REQUEST HANDLING

Observation:

  • Having the first tier handle all communication from/to the cluster MAY LEAD TO A BOTTLENECK. Imagine only one load balancer.

Solution: TCP HANDOFF

With the TCP handoff:

  • The selected SERVER TAKES OVER the connection and continues communication with the client.

  • The server processes the request and SENDS A RESPONSE DIRECTLY to the client.

  • From the client’s perspective, it feels like a SINGLE CONTINUOUS CONNECTION.

38
New cards

SERVER CLUSTERS

The front end may easily get overloaded: Special measures may be needed

  • TRANSPORT-LAYER Switching: Front end simply passes the TCP request to one of the servers, taking some PERFORMANCE METRIC into account.

  • CONTENT-AWARE DISTRIBUTION: front end reads the content of the request and then selects the BEST SERVER.

39
New cards

WHEN SERVERS ARE SPREAD ACROSS THE INTERNET

Observation:

Spreading servers across the Internet may introduce administrative problems. These can be largely circumvented by using data centers from a single cloud provider.

Request dispatching: IF THE LOCALITY IS IMPORTANT

Common approach: use DNS

  1. A client looks up a specific service through DNS (the client’s IP address is part of the request).

  2. The DNS server keeps track of replica servers for the requested service, and RETURNS THE ADDRESS OF THE MOST LOCAL OR NEAREST SERVER.

CLIENT TRANSPARENCY

To keep the client unaware of distribution, let the DNS resolver act on behalf of the client. The problem is that the resolver may actually be far from local to the actual client.
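The locality-based DNS dispatching in step 2 can be sketched with a hypothetical distance table; the replica names, regions, and distances are all invented for illustration (real systems infer locality from the requester's IP address):

```python
# Sketch: a DNS-style dispatcher returning the "most local" replica.
replicas = {
    "eu-server": {"EU": 1, "US": 5},  # hypothetical distances per region
    "us-server": {"EU": 5, "US": 1},
}

def resolve(client_region):
    # Return the replica with the smallest distance to the client's region,
    # mimicking "return the address of the nearest server".
    return min(replicas, key=lambda r: replicas[r][client_region])

print(resolve("EU"), resolve("US"))
```

The resolver-locality caveat above maps directly onto this sketch: if `client_region` is really the resolver's region rather than the client's, the "nearest" answer may be nearest to the wrong party.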

40
New cards

DISTRIBUTED SERVERS WITH STABLE IPv6 ADDRESS(es)

Transparency through Mobile IP

ROUTE OPTIMIZATION can be used to make different clients believe they are COMMUNICATING WITH A SINGLE SERVER, where, in fact, each client is communicating with a different member node of the distributed server.

  • When a distributed server’s access point forwards a request from client C1 to server node S1 (with care-of address CA1), it INCLUDES ENOUGH INFORMATION FOR S1 TO BEGIN A ROUTE OPTIMIZATION PROCESS.

  • This process makes C1 believe that CA1 is the server’s current location, allowing C1 to store the pair (HA, CA1) for future communication. The ACCESS POINT AND THE HOME AGENT TUNNEL MOST OF THE TRAFFIC, ENSURING THE HOME AGENT DOES NOT DETECT A CHANGE IN THE CARE-OF ADDRESS.

  • As a result, the HOME AGENT continues to communicate with the original access point, maintaining SESSION CONTINUITY.

41
New cards

DISTRIBUTED SERVER: ADDRESSING DETAILS

Essence: A client with Mobile IPv6 can transparently set up a connection to any peer.

  • Client C sets up a connection to the IPv6 HOME ADDRESS (HA).

  • HA is maintained by a (network-level) home agent, which hands off the connection to a registered CARE-OF ADDRESS (CA).

  • C can then apply ROUTE OPTIMIZATION by directly forwarding packets to address CA (i.e., without the handoff through the home agent).

Collaborative distributed systems

The origin server maintains a home address, but HANDS OFF CONNECTIONS TO THE ADDRESS OF A COLLABORATING PEER → ORIGIN SERVER AND PEER APPEAR AS ONE SERVER.

42
New cards

EXAMPLE: PLANET LAB

Essence:

Different organizations contribute machines, which they subsequently share for various experiments.

PlanetLab was a global research network that allowed researchers to test new protocols and services on a real-world, wide-area network. It consisted of hundreds of nodes (servers) hosted by universities and research institutions around the world.

PROBLEM:

We need to ensure that different distributed applications do not get into each other’s way → Virtualization.

Basic Organization

Vserver

  • Independent and protected environment with its own libraries, server versions, and so on. Distributed applications are assigned a COLLECTION OF Vservers distributed across multiple machines.

PlanetLab VServers and Slices

Essence

  • Each Vserver operates in its own environment.

  • Linux enhancements include proper adjustment of process scheduling.

  • Two processes in DIFFERENT VSERVERS MAY HAVE THE SAME USER ID, BUT THIS DOES NOT IMPLY THE SAME USER.

Separation Leads to slices:

<p>Essence:</p><p>Different organizations contribute machines, which they subsequently share for various experiments.</p><p></p><p>PlanetLab was a global research network that allowed researchers to test new protocols and services on a real-world, wide-area network. It consisted of hundreds of nodes (servers) hosted by universities and research institutions around the world.</p><p></p><p>PROBLEM:</p><p>We need to ensure that different distributed applications do not get in each other’s way → Virtualization.</p><p></p><p>Basic Organization</p><img src="https://knowt-user-attachments.s3.amazonaws.com/c655d0f2-1f75-40eb-90ef-305825da3862.png" data-width="100%" data-align="center" alt=""><p>Vserver</p><ul><li><p>Independent and protected environment with its own libraries, server versions, and so on. Distributed applications are assigned a COLLECTION OF Vservers distributed across multiple machines.</p></li></ul><p></p><p>PlanetLab VServers and Slices</p><p>Essence</p><ul><li><p>Each Vserver operates in its own environment.</p></li><li><p>Linux enhancements include proper adjustment of process IDs.</p></li><li><p>Two processes in DIFFERENT VSERVERS MAY HAVE THE SAME USER ID, BUT THIS DOES NOT IMPLY THE SAME USER.</p></li></ul><p></p><p>Separation Leads to slices:</p><img src="https://knowt-user-attachments.s3.amazonaws.com/078e6bf3-214d-4945-b0c0-efbb582ea180.png" data-width="100%" data-align="center" alt=""><p></p>
43
New cards

REASON TO MIGRATE CODE

Load Distribution

  • Ensuring that servers in a data center are sufficiently loaded (e.g., to prevent waste of energy)

  • Minimizing communication by ensuring that COMPUTATIONS ARE CLOSE TO WHERE THE DATA IS (think of mobile computing).

Flexibility: moving code to a client when needed

Code migration is the process of moving executable code from one machine to another in a distributed system to improve performance, efficiency, or flexibility.

Avoids pre-installing software and increases dynamic configuration.

<p>Load Distribution</p><ul><li><p>Ensuring that servers in a data center are sufficiently loaded(e.g., to prevent waste of energy)</p></li><li><p>Minimizing communication by ensuring that COMPUTATIONS ARE CLOSE TO WHERE THE DATA IS  (think of mobile computing).</p></li></ul><p></p><p>Flexibility: moving code to a client when needed</p><img src="https://knowt-user-attachments.s3.amazonaws.com/2c8bf5d5-e436-474c-935c-aa7cc813b4b3.png" data-width="100%" data-align="center" alt=""><p>Code migration is the process of moving executable code from one machine to another in a distributed system to improve performance, efficiency, or flexibility.</p><p>Avoids pre-installing software and increases dynamic configuration.</p><p></p>
44
New cards

Models For code migration

client-server

  • The client sends a request to a server. The server processes the request locally and sends back the result.

  • CODE STAYS PUT; only data moves between the client and server.

Remote Evaluation

  • The client sends code to the server to be executed there. Useful when the server has more data or resources.

  • CODE MOVES FROM CLIENT TO SERVER.

Code on Demand

  • The server SENDS CODE TO THE CLIENT, where it’s executed. Often used when clients need dynamic behavior or updates.

  • CODE MOVES FROM SERVER TO CLIENT.

Mobile Agents

  • A MOBILE AGENT (code + execution state + data) moves from host to host, executing part of its task at each.

  • CODE AND STATE MOVE BETWEEN SYSTEMS.

<img src="https://knowt-user-attachments.s3.amazonaws.com/663c4e1a-a838-4b97-88a4-ffcf5c38c5aa.png" data-width="100%" data-align="center" alt=""><p>client-server</p><ul><li><p>The client sends a request to a server. The server processes the request locally and sends back the result.</p></li><li><p>CODE STAYS PUT; only data moves between the client and server.</p></li></ul><p></p><p>Remote Evaluation</p><ul><li><p>The client sends code to the server to be executed there. Useful when the server has more data or resources.</p></li><li><p>CODE MOVES FROM CLIENT TO SERVER.</p><img src="https://knowt-user-attachments.s3.amazonaws.com/3ab4cc8a-037d-4a80-868d-8ff82d8c9904.png" data-width="100%" data-align="center" alt=""></li></ul><p></p><p>Code on Demand</p><ul><li><p>The server SENDS CODE TO THE CLIENT, where it’s executed. Often used when clients need dynamic behavior or updates.</p></li><li><p>CODE MOVES FROM SERVER TO CLIENT.</p></li></ul><p></p><p>Mobile Agents</p><ul><li><p>A MOBILE AGENT (code + execution state + data) moves from host to host, executing part of its task at each.</p></li><li><p>CODE AND STATE MOVE BETWEEN SYSTEMS.</p></li></ul><p></p>
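The contrast between remote evaluation and code on demand can be sketched in a few lines. This is a toy sketch, not a real migration framework: the `remote_evaluation` and `code_on_demand` helpers are hypothetical names, and `exec` stands in for shipping code over a network.

```python
# Toy sketch of two code-migration models. Hypothetical helpers;
# exec() stands in for transferring code between machines.

def remote_evaluation(server_data, client_code):
    """Client-supplied code runs where the server's data lives."""
    scope = {"data": server_data}
    exec(client_code, scope)          # "server" executes the shipped code
    return scope["result"]            # only the (small) result moves back

def code_on_demand(server_code):
    """Server-supplied code runs on the client (think: JavaScript)."""
    scope = {}
    exec(server_code, scope)
    return scope["handler"]           # client receives executable behavior

# Remote evaluation: the bulky dataset never leaves the "server".
total = remote_evaluation(list(range(1_000_000)), "result = sum(data)")

# Code on demand: the "client" fetches a function and runs it locally.
double = code_on_demand("def handler(x):\n    return 2 * x")
print(total, double(21))
```

Moving only the small request or result (client-server) versus moving the code itself is exactly the trade-off the four models above differ on.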
45
New cards

STRONG AND WEAK MOBILITY

Object Components in Code Migration

  • Code segment: contains the actual code

  • Data segment: contains the state

  • Execution state: contains the context of the thread executing the object’s code

Weak mobility: Move code and data segment (and reboot execution)

  • Relatively simple, especially if the code is portable

  • Distinguish code shipping (push) from code fetching (pull)

Strong mobility: Move the component, including its execution state

  • Migration: move the entire object from one machine to the other.

  • Cloning: start a clone, and set it in the same execution state.
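The distinction can be made concrete with a toy sketch (assumed names throughout; real strong mobility checkpoints at the OS or VM level, not with an explicit state dict): weak mobility reboots the computation from a fresh state, while strong mobility resumes from a migrated checkpoint.

```python
# Toy contrast of weak vs strong mobility. The explicit state dict
# models the "execution state" a real system would checkpoint.

def make_task(data):
    def task(state):
        # state carries the execution state: how far we already got
        while state["i"] < len(data):
            state["acc"] += data[state["i"]]
            state["i"] += 1
        return state["acc"]
    return task

data = [1, 2, 3, 4]

# Weak mobility: ship code + data, REBOOT execution from scratch.
fresh = {"i": 0, "acc": 0}
assert make_task(data)(fresh) == 10

# Strong mobility: ship code + data + execution state; the target
# machine resumes mid-computation instead of restarting.
checkpoint = {"i": 2, "acc": 3}        # migrated after processing [1, 2]
resumed = make_task(data)(checkpoint)  # continues with [3, 4]
print(resumed)                         # 3 + 3 + 4 = 10
```

Both runs end with the same result, but the strong-mobility run skipped the work done before migration, which is why capturing execution state is harder yet more powerful.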

46
New cards

Migration in HETEROGENEOUS SYSTEMS

Main Problem:

  • The target machine may not be SUITABLE TO EXECUTE THE MIGRATED CODE

  • The definition of process/thread/processor context is HIGHLY DEPENDENT ON LOCAL HARDWARE, OPERATING SYSTEM AND RUNTIME SYSTEM.

Only Solution: AN ABSTRACT MACHINE IMPLEMENTED ON DIFFERENT PLATFORMS

  • Interpreted languages, effectively having their own VM

  • Virtual machine monitors

  • Migrate the entire virtual machine (including OS and processes)

47
New cards

Migrating a virtual machine

Migrating images: three alternatives

  1. Pushing memory pages (the unit of memory management) to the new machine and RESENDING THE ONES THAT ARE LATER MODIFIED during the migration process.

  2. STOPPING the current virtual machine, MIGRATING memory, and then starting processes on the new virtual machine.

  3. Letting the new virtual machine PULL IN NEW PAGES AS NEEDED: processes start on the new virtual machine immediately and copy memory pages on demand.

<p>Migrating images: three alternatives</p><ol><li><p>Pushing memory pages (the unit of memory management) to the new machine and RESENDING THE ONES THAT ARE LATER MODIFIED during the migration process.</p></li><li><p>STOPPING the current virtual machine, MIGRATING memory, and then starting processes on the new virtual machine.</p></li><li><p>Letting the new virtual machine PULL IN NEW PAGES AS NEEDED: processes start on the new virtual machine immediately and copy memory pages on demand.</p></li></ol><img src="https://knowt-user-attachments.s3.amazonaws.com/d6ce1901-6bc0-44ac-a5b3-276fea4db454.png" data-width="100%" data-align="center" alt=""><p></p>
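Alternative 1 (pre-copy) can be illustrated with a deterministic toy simulation (all names and numbers here are assumptions, not measurements): each round pushes the currently dirty pages, the running VM dirties a fraction of them again, and the loop stops once the dirty set is small enough for a brief final stop-and-copy.

```python
# Toy model of pre-copy VM migration (alternative 1). PAGES,
# dirty_fraction, and threshold are made-up illustrative values.
PAGES = 1000

def precopy_migrate(dirty_fraction=0.2, threshold=50):
    transferred = 0
    dirty = PAGES          # round 0: every page is "dirty"
    rounds = 0
    while dirty > threshold:
        transferred += dirty           # push the current dirty set
        rounds += 1
        # pages dirtied while this round was being pushed, proportional
        # to how long the push took (i.e., to the number of pages pushed)
        dirty = int(dirty * dirty_fraction)
    transferred += dirty   # final brief stop-and-copy of the remainder
    return rounds + 1, transferred

rounds, pages_sent = precopy_migrate()
print(rounds, pages_sent)  # 3 1240
```

Note that `pages_sent` exceeds `PAGES`: resending modified pages is the bandwidth cost pre-copy pays to keep the downtime window short, compared with alternative 2's long stop-and-copy.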
48
New cards

Performance of migrating virtual machine

Problem

A complete migration may actually take tens of seconds. We also need to realize that, during the migration, a service will be completely unavailable for multiple seconds.

Measurements regarding response times during VM migrations

<p>Problem</p><p>A complete migration may actually take tens of seconds. We also need to realize that, during the migration, a service will be completely unavailable for multiple seconds.</p><p></p><p>Measurements regarding response times during VM migrations</p><img src="https://knowt-user-attachments.s3.amazonaws.com/750a140b-d997-4fad-a6bc-20dc37cb3336.png" data-width="100%" data-align="center" alt=""><p></p>