GFS, Parallel Processing Platforms, Net Virtualization


1

what is a distributed file system

a file system that allows multiple hosts to access shared files over a computer network using a common protocol

2

what are the desirable characteristics/properties of a DFS

network transparency

location transparency

location independence (file mobility)

fault tolerance

scalability

user mobility

3

explain location transparency

file name does not reveal the file's physical storage location.

most DFSs provide this; achieved through client-based mounting.

4

explain location independence

file name does not need to be changed when the file's physical storage location changes.

enables file migration (AFS/GFS support this); achieved through server-based flat naming.

5

how does UNIX semantics deal with shared files

every operation on a file is instantly visible to all processes

6

how does session semantics deal with shared files

no changes are visible to other processes until the file is closed

7

how do immutable files deal with shared files

no updates are possible; simplifies sharing and replication (read only)

8

how do transactions deal with shared files

all changes occur atomically: either all changes complete in full or none of them do.

9

what is the goal of a local procedure call in terms of execution?

to execute exactly one call

10

why might a remote procedure call (RPC) be executed 0 times?

the server crashed or the server process died before executing the server code

11

what does it mean if an RPC is executed 1 time?

the RPC works as intended - everything went well

12

is achieving “exactly once” semantics easy in RPC

No, it's difficult: excess latency or a lost reply from the server can cause the client to retransmit, leading to duplicate execution

13

what are the main two types of RPC semantics

at least once and at most once

14

what does at least once RPC semantics mean

the call is guaranteed to be executed one or more times

15

what kind of functions are safe with at least once semantics

idempotent functions - they can run any number of times without harm
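
A minimal sketch of the difference, using a toy in-memory store (the function names are illustrative):

```python
store = {}   # toy in-memory "server" state

def set_value(key, value):
    # idempotent: running this twice leaves the same state as running it once
    store[key] = value

def increment(key):
    # NOT idempotent: every retry changes the result
    store[key] = store.get(key, 0) + 1

set_value("x", 5)
set_value("x", 5)    # duplicate delivery is harmless: x is still 5
increment("y")
increment("y")       # duplicate delivery corrupts the count: y is 2, not 1
```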

16

what is an example of a suitable app for at least once semantics

logging, read operations, or updating a cache

17

what does at most once RPC semantics mean

the call is executed either once or not at all

18

what is an example of a suitable app for at most once semantics

financial transactions or creating user accounts (non-idempotent operations)
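
A common way to approximate at most once semantics is server-side duplicate filtering keyed by a client-supplied request ID; a minimal sketch (all names are illustrative):

```python
completed = {}   # request_id -> cached reply

def handle(request_id, operation):
    if request_id in completed:
        return completed[request_id]   # retransmission: replay the reply, don't re-execute
    result = operation()               # executed at most once per request ID
    completed[request_id] = result
    return result

# a retransmitted request with the same ID runs only once
handle("req-42", lambda: "debit $100")
handle("req-42", lambda: "debit $100")
```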

19

what are google’s key needs in its search system?

crawl the web, store it all on one big disk, and process user searches on one big CPU

20

why can't a single PC handle google's needs

it doesn't have enough storage or CPU power

21

why not just use a custom parallel supercomputer

it is too expensive and not easily available

22

How can we build a supercomputer using cheaper hardware?

Use a cluster of many cheap PCs, each with its own disk and CPU.

23

What are the benefits of using a PC cluster?

High total storage and spread search processing across many CPUs

24

What is Ivy and how does it share data?

Ivy uses shared virtual memory with fine-grained consistency at the load/store level.

25

What is NFS and what does it aim to do?

Network File System (NFS) shares a file system from one server to many clients, mimicking UNIX file system behavior.

26

What consistency model does NFS use for performance?

Close-to-open consistency.

27

What is GFS and why was it created?

Google File System (GFS) is a file system designed specifically to handle data sharing in clusters for Google’s workloads.

29

How many PCs are typically in a Google cluster?

Hundreds to thousands.

30

What kind of hardware does Google use in its clusters?

Cheap, commodity parts.

31

What are common types of failures in Google’s clusters?

App bugs, OS bugs, human error, disk/memory/network/power/connector failures.

32

What is essential for handling frequent failures in Google’s clusters?

Monitoring, fault tolerance, and automatic recovery.

33

What is a key design goal of the Google File System (GFS)?

Detect, tolerate, and recover from failures automatically.

34

What types of data operations is GFS optimized for?

  • Large files (≥ 100 MB)

  • Large, streaming reads (≥ 1 MB), usually read once

  • Large, sequential appends, written once

35

How does GFS handle concurrent appends by multiple clients?

It supports atomic appends without requiring client synchronization.

36

What are the main components of the GFS architecture?

One master server (with backup replicas) and many chunk servers (hundreds to thousands).

37

How is data divided in GFS?

Into 64 MB chunks, each with a 64-bit globally unique ID.
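
As a worked example, the chunk index a client sends to the master can be derived from a byte offset and the fixed 64 MB chunk size:

```python
CHUNK_SIZE = 64 * 1024 * 1024   # 64 MB

def chunk_index(byte_offset):
    # a file offset maps to the chunk that covers it
    return byte_offset // CHUNK_SIZE

print(chunk_index(200 * 1024 * 1024))   # offset 200 MB falls in chunk index 3
```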

38

How are chunk servers organized in the network?

Spread across racks; intra-rack bandwidth is faster than inter-rack.

39

Can many clients access GFS at the same time?

Yes, clients can access the same or different files concurrently in the same cluster.

40

what does GFS master server do?

holds all metadata:

  • namespace (directory hierarchy)

  • access control info (per file)

  • mapping from files to chunks

  • current locations of chunks (chunk servers)

manages chunk leases to chunk servers

garbage collects orphaned chunks

41

where is metadata stored?

in RAM; this makes for fast operations on file system metadata

42

what is the GFS chunk server's role

stores 64 MB chunks on local disks using a standard Linux file system (each chunk with a version number and checksum).

reads/writes are specified by chunk handle and byte range.

chunks are replicated on a configurable number of chunkservers.

CANNOT cache file data.

43

what is the GFS client's role?

issues control (metadata) requests to master server

issues data requests directly to chunkservers

caches metadata

CANNOT cache data [no consistency difficulties among clients; streaming reads (read once) and append writes (write once) don't benefit from caching at the client]

44

is GFS a file system in the traditional sense?

no, it's a user-space library that applications link to for storage access

45

does GFS mimic UNIX semantics

no, it only partially does

46

what are the main API functions provided by GFS

open, delete, read, write, snapshot, and append

47

what does the function snapshot do

quickly creates a copy of a file

48

what does the function append do

appends a record at least once, possibly with gaps and/or inconsistencies among clients

49

how does the client read

client sends master —> read(file name, chunk index)

master replies —> chunk ID, chunk version num, locations of replicas

client sends closest chunk server w/ replica —> read(chunkID, byte range)

  • closest is determined by IP address on a simple rack-based network topology

chunkserver replies with data (see the sketch below)
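
A pseudocode sketch of that read path; master.lookup, chunkserver.read, and network_distance are hypothetical stand-ins, not the real GFS client API:

```python
CHUNK_SIZE = 64 * 1024 * 1024   # 64 MB

def gfs_read(master, filename, offset, length):
    index = offset // CHUNK_SIZE                        # which chunk covers this offset
    # control request: metadata comes from the master
    chunk_id, version, replicas = master.lookup(filename, index)
    chunkserver = min(replicas, key=network_distance)   # rack-aware "closest" replica
    # data request: bytes come directly from a chunkserver, never through the master
    return chunkserver.read(chunk_id, offset % CHUNK_SIZE, length)
```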

50

how does the client write

some chunk server holding a replica is the primary; the master grants a lease to the primary, typically for 60 secs. leases are renewed using periodic heartbeat messages between the master and chunkservers.

client asks the master for the primary and secondary replicas for each chunk.

client sends data to the replicas in a daisy chain

51

what happens after replicas are daisy chained/pipelined

all replicas acknowledge receiving the data to the client. the client then sends the write request to the primary.

the primary assigns a serial number to the write request, providing order. it forwards the write request with the same serial number to the secondaries.

the secondaries all reply to the primary after completing the write.

the primary replies to the client (see the sketch below).
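
A sketch of the full write path from the last two cards; the master/primary objects and their methods are hypothetical stand-ins, not the real GFS interfaces:

```python
def gfs_write(master, data, filename, chunk_index):
    # the master names the lease-holding primary and the secondary replicas
    primary, secondaries = master.get_replicas(filename, chunk_index)
    # 1. push the data down the daisy chain; every replica buffers and acks it
    primary.push_data(data, forward_to=secondaries)
    # 2. once all replicas have acked, ask the primary to commit: it assigns
    #    a serial number (fixing the mutation order), forwards the request to
    #    the secondaries, collects their replies, then replies to the client
    return primary.commit_write()
```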

52

what is GFS consistency model

any changes to the namespace (metadata) are atomic

  • done by the single master server, which uses a log to define a global total order of namespace-changing operations

data changes are more complicated

  • consistent - all clients see a file region as the same data, regardless of which replicas they read from

  • defined - after a data mutation, all clients see that entire mutation

53

what are app and record append semantics

apps should include checksums in the records they write using record append

  • a reader can identify padding / record fragments using checksums

if an app cannot tolerate duplicate records, it should include a unique ID in each record

  • a reader can use the unique ID to filter dupes (see the sketch below)
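
A minimal sketch of that reader-side discipline; the (checksum, unique_id, payload) record layout is invented for illustration, not an actual GFS format:

```python
import zlib

def valid_records(raw_records):
    # raw_records yields (checksum, unique_id, payload) tuples
    seen = set()
    for checksum, unique_id, payload in raw_records:
        if zlib.crc32(payload) != checksum:
            continue          # padding or a torn record fragment: drop it
        if unique_id in seen:
            continue          # duplicate from a retried append: drop it
        seen.add(unique_id)
        yield payload
```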

54

how does the master log

the master holds all metadata info. it logs all client requests to disk sequentially and replicates the log entries to remote backup servers.

it replies to a client only after the log entries are safe on disk on itself and the backups.

55

how do chunk leases and version numbers work

if there is no outstanding lease when a client requests a write, the master grants a new one.

chunks have version numbers

  • stored on disk at master and chunkservers

  • each time the master grants a new lease, it increments the version and informs all replicas

the master can revoke a lease (e.g., if a client requests a rename or snapshot)

56

what happens when the master reboots

replays the log from disk to recover namespace info and the file to chunk ID mapping.

asks chunkservers which chunks they hold to recover the chunk ID to chunkserver mapping.

if a chunkserver has an older chunk version, its replica is stale.

if a chunkserver has a newer chunk version, the master adopts its version number.

57

how was GFS a success

it is used actively by google to support search service and other applications.

  • high availability and recoverability on cheap hardware

  • high throughput by decoupling control and data

  • supports massive data sets and concurrent appends

58

how has GFS fallen short

semantics are not transparent to apps - they must verify file contents to avoid inconsistent regions and repeated appends (at least once semantics)

performance is not good for all apps - GFS assumes read-once and write-once workloads, with no client caching

59

what was wrong with GFS architecture

master - one machine is not large enough, it is a single bottleneck for metadata operations, and it needs better HA

unpredictable performance and no guarantees of latency

60

what replaces GFS

the master was replaced by colossus and the chunkserver was replaced by D

61

what kind of file system did big table need

append only, single writer, multi reader, no snapshot or rename, directories not necessary. so where to put the metadata?

62

what are colossus components

client - most complex part

  • provides many functions, such as software RAID and application-chosen encodings

curators - the foundation, a scalable metadata service

  • can scale out horizontally; built on a NoSQL DB like BigTable, allowing scaling 100x over GFS

D servers - simple network attached disks

custodians - background storage manager

  • handles tasks such as disk space balancing and RAID reconstruction

  • ensures durability, availability, and system efficiency

data - hot data and cold data

  • very efficient storage organization, mixing flash and spinning disks

  • just enough flash to push I/O density, and just enough disks to fill them

  • flash serves really hot data at low latency

63

what is WORM

write once read many. this inspired the MapReduce programming model.

different from transactional data such as customer orders.

ex. web logs, web crawler data, or healthcare and patient information

64

what was the motivation for MapReduce

process lots of data - a single machine cannot store or serve it all

you need a distributed system to store the data and process it in parallel

65

what is the difficulty in parallel programming

threading is hard. facilitating communication between nodes, scaling to more machines, and handling machine failures are also difficult.

66

what does MapReduce provide

automatic parallelization, distribution

i/o scheduling - load balancing and network/data transfer optimization

fault tolerance - handling machine failures

67

what are typical problems MapReduce solves

read a lot of data

map - extract something you care about from each record

shuffle and sort

reduce - aggregate, summarize, filter, or transform

write the results

68

when to use mappers and reducers

need to handle more data? just add more mappers/reducers

no need for multithreaded code - mappers and reducers are single threaded and deterministic, which allows failed tasks to be restarted

they run independently of each other

69

how does mapper take input and output

reads in an input pair and outputs an intermediate pair

input <key,value>

output <K’,V’>

70

how does reducer work with output

accepts the mapper output and aggregates values on the key (word count sketched below)

ex. <store, 1><store, 1><store, 1> then the output would be <store, 3>
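
The canonical word-count example shows both halves. In Hadoop these would be Java classes; this is a Python sketch of the same logic:

```python
def mapper(key, value):
    # key: document name, value: document contents
    for word in value.split():
        yield (word, 1)               # emit intermediate <K', V'> pairs

def reducer(key, values):
    # key: one word; values: all counts shuffled and sorted onto this key
    yield (key, sum(values))          # <store,1><store,1><store,1> -> <store,3>
```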

71

what kinds of failures can occur in mapreduce

failures are the norm on commodity hardware.

  • worker failure - detect failures via periodic heartbeats, re-execute in-progress map/reduce tasks

  • master failure - single point of failure; resume from the execution log

72

summarize map reduce

programming paradigm for data-intensive computing with a distributed and parallel execution model. simple to program

73

what were map reduce two major limitations

difficulty of programming directly in it and performance bottlenecks, or batch not fitting the use cases. overall did not compose well for large applications

74

what was Sparks goal

to generalize mapReduce to support new apps within the same engine.

extend the mapReduce model to better support two common classes of analytics apps:

  • iterative algos (ML, graphs)

  • interactive data mining

enhance programmability:

  • integrate into scala

  • allow interactive use from scala interpreter

75

what two additions did spark make to the previous model

fast data sharing and general DAGs. this allowed an approach that is more efficient for the engine and simpler for end users.

76

what are the key points about spark

handles batch, interactive, and real time within a single framework

native integration with java, python, scala

programming at a higher level of abstraction

more general - map/reduce is just one set of supported constructs

77

what is the benefit of acyclic data flow

data flows from stable storage to stable storage. the runtime can decide where to run tasks and can automatically recover from failures.

78

what is the set back of acyclic data flow

inefficient for applications that repeatedly reuse a working set of data:

  • iterative algos (ML, graphs) and interactive data mining tools (R, Excel, Python)

in current frameworks, apps reload data from stable storage on each query.

79

what was the solution to acyclic data flow

RDDs (resilient distributed datasets). these let apps keep working sets in memory for efficient reuse while retaining the attractive qualities of mapReduce (fault tolerance, data locality, scalability) and supporting a wide range of applications.

80

what was the programming model for RDD

immutable, partitioned collections of objects.

created through parallel transformations (map, filter, groupBy, join) on data in stable storage.

can be cached for efficient reuse.

81

what actions are available on RDD

count, reduce, collect, save, etc.
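
A small PySpark sketch combining transformations, caching, and actions; it assumes a local Spark installation, and "log.txt" is a placeholder input path:

```python
from pyspark import SparkContext

sc = SparkContext("local", "rdd-demo")

lines = sc.textFile("log.txt")                   # RDD created from stable storage
errors = lines.filter(lambda l: "ERROR" in l)    # transformation: lazy, builds lineage
errors.cache()                                   # keep the working set in memory

print(errors.count())                                  # action: triggers the computation
print(errors.filter(lambda l: "disk" in l).count())    # reuses the cached RDD
```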

82

what frameworks were built on spark

pregel on spark (bagel) - google's message passing model for graph computation, 200 lines of code

hive on spark (shark) - 3000 lines of code

83

how is spark implemented

runs on apache mesos to share resources with hadoop and other apps

can read from any hadoop input source (HDFS)

84

how does spark schedule

pipelines functions within a stage. cache-aware for work reuse and locality. partitioning-aware to avoid shuffles.

85

what did interactive spark require

the scala interpreter was modified so spark could be used from the command line.

wrapper code generation was modified so that each line typed references the objects for its dependencies, and the generated classes are distributed over the network.

86

summarize spark

provides a simple, efficient, and powerful programming model for a wide range of apps

87

what are the layers in internet protocol

application - supporting network apps (HTTPS, IMAP, SMTP, DNS)

transport - process-process data transfer (TCP, UDP)

network - routing of datagrams from source to destination (IP, routing)

link - data transfer between neighboring network elements (ethernet, WiFi)

physical - bits on the wire

88

what is network virtualization

the process of combining hardware and software resources and network functionality into a single, software-based administrative entity: a virtual network (VN)

89

what does network virtualization enable

the coexistence of multiple VNs on the same physical substrate network. they share physical resources whilst being isolated from one another.

90

what does a physical substrate network consist of

physical nodes - servers and switches

physical links

91

what does a virtual network consist of

a collection of virtual nodes and virtual links with a subset of underlying physical resources

92

why would one prefer network virtualization

to separate different admin areas

to develop and test out new protocols

to adopt new protocols or techniques - the multi-provider nature of the internet restricts the updating and deployment of new network tech

to support multi-tenant environments of cloud computing over widely spread data centers -

  • sharing computing, storage, and networking resources for different applications

  • incremental deployment of emerging applications

to support new/emerging applications (over cloud)

93

what are the goals of network virtualization

gives each app/tenant its own virtual network with its own: topology, bandwidth, broadcast domain

and delegates administration of the virtual networks

94

how to implement a VN

VPN - virtual private network: connects multiple sites using tunnels over the public network. L3/L2/L1 technologies

  • VLAN virtual local area network - an L2 network partitioned to create multiple distinct broadcast domains, either per port (port-level VLAN) or by marking packets with tags

active and programmable networks - support co-existing networks through programmability. customized network functionality through programmable interfaces that give access to switches/routers (software defined networking). L2, L3, L4

overlay networks - app-layer virtual networks built on top of an existing network using tunneling/encapsulation. implement new services without changes to the underlying infrastructure.

95

how do static VLANS work

they are port based: the switch port statically defines the VLAN a host is a part of

96

how do dynamic VLANS work

MAC-based: the MAC address defines which VLAN a host is a part of. typically configured by the network admin.

97

how are VLANs tagged

access port - port connected to an end host, assigned to a single VLAN

trunk port - port that sends and receives tagged frames on multiple VLAN networks

  1. VLAN tagging happens at the trunk link

  2. the tag is added to the layer 2 frame at the ingress switch

  3. the tag is stripped at the egress switch

  4. the tag defines on which VLAN the packet is routed (tag layout sketched below)
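
To make the tag concrete: an 802.1Q tag is 4 bytes inserted into the Ethernet header, a TPID of 0x8100 followed by a 16-bit TCI holding priority, DEI, and the 12-bit VLAN ID. A sketch of its construction:

```python
import struct

def vlan_tag(vlan_id, priority=0, dei=0):
    # 802.1Q tag: 16-bit TPID (0x8100) + 16-bit TCI
    # TCI layout: PCP (3 bits) | DEI (1 bit) | VLAN ID (12 bits)
    tci = (priority << 13) | (dei << 12) | (vlan_id & 0x0FFF)
    return struct.pack("!HH", 0x8100, tci)

tag = vlan_tag(42)   # the ingress trunk switch inserts this; the egress switch strips it
```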

98

what are problems with VLANs

requires separate, static configuration of every network node by an admin, and the number of VLANs is limited (the 12-bit tag allows at most 4096)

99

what is the purpose of software defined networks

to overcome network ossification, essentially the rigidity of internet protocols and techniques.

also to separate data plane and control plane

  • routing decisions are computed in a controller

  • SDN routers/switches only forward packets based on the routing decisions made by controllers

  • the SDN controller and routers communicate through secure connections

100

explain forwarding

this happens on the data plane; it's the network function of moving packets from a router's input to the appropriate router output
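
A toy illustration of the data plane's job, longest-prefix-match forwarding (the table contents are invented):

```python
import ipaddress

# toy forwarding table mapping prefixes to output ports
table = {
    ipaddress.ip_network("10.0.0.0/8"): 1,
    ipaddress.ip_network("10.1.0.0/16"): 2,
    ipaddress.ip_network("0.0.0.0/0"): 3,   # default route
}

def forward(dst):
    addr = ipaddress.ip_address(dst)
    # choose the matching prefix with the longest mask (longest-prefix match)
    best = max((net for net in table if addr in net), key=lambda n: n.prefixlen)
    return table[best]

print(forward("10.1.2.3"))   # matches 10.1.0.0/16 -> port 2
```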