Cloud Computing Ch 4


48 Terms

1
New cards

Serverless Computing

an execution model for cloud computing environments where the cloud provider executes a piece of code (a function) by dynamically allocating resources.

aka Function as a Service

The difference from server-based computing is that serverless adds a layer of abstraction on top of the cloud infrastructure, so application developers do not need to provision and manage the underlying infrastructure required to execute their code.

2
New cards

Popular FaaS offering from AWS

AWS Lambda

3
New cards

In FaaS, functions are triggered by

events

When a function is triggered by an event, the cloud provider launches a container (e.g. Docker) and executes the function within the container.
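The trigger flow described above can be sketched as a plain Python function with the `handler(event, context)` signature that AWS Lambda uses for Python runtimes. The bucket and key names below are made up for illustration; no real AWS resources are involved.

```python
# Minimal sketch of an event-triggered function. The S3-style event
# below is a hand-written illustration of a PUT notification payload.

def handler(event, context=None):
    # Pull the object reference out of an S3-style notification event.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    return f"processed s3://{bucket}/{key}"

# Hypothetical event payload shaped like an S3 upload notification:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "my-bucket"},
                "object": {"key": "uploads/photo.jpg"}}}
    ]
}
```

In a real deployment the cloud provider builds the event and invokes the handler inside a container; here you can call `handler(sample_event)` directly to see the same flow.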

4
New cards

Monolithic Application

All functionality runs in a single process on a server. Scaling is done by replicating the monolithic app on multiple servers.

5
New cards

Microservices-based Application

A separate microservice is used for each functionality (comprising multiple functions). Each microservice runs on a server or container. Scaling is done by distributing and replicating the microservices across multiple servers/containers.

6
New cards

Serverless Application

Microservices broken down into functions, where each function is deployed separately in a serverless platform or FaaS. Scaling is taken care of by the platform.

7
New cards

Cloud Services for Implementing Serverless Applications

Serve Static App → S3

REST API → API Gateway

Execute Code on Demand → Lambda

Store and Retrieve Data → DynamoDB

Authenticate Users → Cognito

Translate URLs (DNS) → Route 53

8
New cards

Pros of Serverless

Low operational cost

Low maintenance

Scalability

Availability and fault tolerance

9
New cards

Cons of Serverless

No control over the infrastructure

Time limits (to execute the function)

Vendor lock-in

Not suitable for all cases

10
New cards

Master Data

data that doesn’t change much

11
New cards

Transactional Data

data that changes often

12
New cards

What is the Amazon data warehouse

Amazon Redshift

13
New cards

Messaging Queues (AWS SQS)

can be used between the data producers and data consumers for asynchronous processing or load leveling.

14
New cards

Queues are useful for

push-pull messaging where the producers push data to the queues, and the consumers pull the data from the queues
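The push-pull flow can be sketched with Python's standard `queue.Queue`, which stands in for a managed queue such as SQS. Both the producer and consumer run in one process here purely for clarity.

```python
from queue import Queue

# Producers push messages onto the queue; consumers pull them off
# later, decoupling the two sides in time (asynchronous processing).
q = Queue()

def produce(messages):
    for m in messages:
        q.put(m)                  # push side

def consume():
    pulled = []
    while not q.empty():
        pulled.append(q.get())    # pull side
    return pulled

produce(["order-1", "order-2", "order-3"])
received = consume()
```

Because the queue buffers messages, the consumer can lag behind the producer without dropping work, which is exactly the load-leveling behavior described in the next cards.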

15
New cards

Load Leveling

a queue-based, passive strategy for smoothing out traffic spikes by buffering or deferring requests during peak times and processing them later during lulls

16
New cards

Load balancing

a real-time, proactive approach to distributing incoming traffic across multiple servers to prevent any single server from becoming overwhelmed and to maximize resource utilization

17
New cards

Multiple consumers can make the system more robust as there is no ___

single point of failure

18
New cards

Load balancing between consumers improves the system performance as multiple consumers can

process messages in parallel

19
New cards

Priority queue pattern is useful when __

you want to process messages differently depending on their priority.
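A minimal sketch of the priority queue pattern using Python's `heapq`: messages are processed strictly in priority order (lower number = higher priority) regardless of arrival order. The message names are illustrative.

```python
import heapq

# A min-heap keyed on priority: (priority, message) tuples.
pq = []
heapq.heappush(pq, (2, "send-newsletter"))   # low priority
heapq.heappush(pq, (0, "fraud-alert"))       # highest priority
heapq.heappush(pq, (1, "password-reset"))

processed = []
while pq:
    priority, msg = heapq.heappop(pq)
    processed.append(msg)
```

In a cloud setting the same effect is usually achieved with separate queues per priority level, with more consumers attached to the high-priority queue.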

20
New cards

Command Pattern

Purpose: The command pattern focuses on a single instruction or task executed by a specific service. It is used when you want to send a message or command to a single microservice or function to perform an action.

Execution: The command is sent to one target (e.g., a Lambda function in AWS, Azure Function, etc.), and only that specific service handles the task.

Example: A user uploads a profile picture, and the service (like an AWS Lambda function) resizes the image and stores it in a storage service (e.g., S3). Only one service is responsible for this task.

Scaling: While the service is automatically scalable in serverless environments, the command pattern deals with one-to-one communication, where only one service is involved in processing each command.

21
New cards

Command Pattern is useful when you want to

decouple a sender or client who invokes a certain operation from a receiver or worker who performs the operation.

e.g. a single Lambda function (the command) can be invoked by different event sources, and it can in turn invoke multiple Lambda functions (the workers)
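The decoupling described above can be sketched with a dispatch table: the sender only knows a command name and a payload, never the worker that performs it. The worker names and payload fields are hypothetical.

```python
# Command pattern sketch: senders submit commands by name; a dispatch
# table routes each command to whichever worker performs the operation.

def resize_image(payload):
    return f"resized {payload['key']} to {payload['size']}"

def delete_image(payload):
    return f"deleted {payload['key']}"

COMMANDS = {
    "resize": resize_image,
    "delete": delete_image,
}

def send_command(name, payload):
    # The sender never imports or calls a worker directly,
    # so workers can be swapped or scaled independently.
    return COMMANDS[name](payload)

result = send_command("resize", {"key": "photo.jpg", "size": "128x128"})
```

In AWS the dispatch table's role is played by the event source mapping: the event source holds only the function's name/ARN, not its implementation.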

22
New cards

Fan-Out Pattern

Purpose: The fan-out pattern is used when you want to distribute a single event or message to multiple services or functions for parallel execution. It is useful when the same event needs to trigger multiple actions simultaneously.

Execution: An event triggers multiple serverless functions or services, each performing a different task in parallel. It often uses messaging services like SNS (Simple Notification Service) or EventBridge in AWS to broadcast an event to multiple subscribers.

Example: A user uploads a video, and the event triggers different services: one service processes the video, another generates thumbnails, another sends a notification to the user, and another updates metadata in the database.

Scaling: The fan-out pattern inherently supports parallelism and one-to-many communication, where one event is fanned out to multiple services, each scaling independently to handle its specific task.

23
New cards

The Fan out pattern is useful when

you want to perform multiple actions or invoke multiple services or functions, while the event source supports only a single target.
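The fan-out pattern can be sketched as one publish call broadcasting an event to several independent subscriber functions, mimicking an SNS topic with multiple subscribers. The subscriber names are illustrative.

```python
# Fan-out sketch: a single event triggers multiple functions in
# parallel, each performing a different task on the same event.

def transcode(event):
    return f"transcoded {event['video']}"

def make_thumbnail(event):
    return f"thumbnail for {event['video']}"

def notify_user(event):
    return f"notified {event['user']}"

SUBSCRIBERS = [transcode, make_thumbnail, notify_user]

def publish(event):
    # One publish call fans the event out to every subscriber.
    return [fn(event) for fn in SUBSCRIBERS]

results = publish({"video": "cat.mp4", "user": "alice"})
```

In AWS the subscriber list lives in SNS or EventBridge rather than in code, which is how a source that supports only a single target can still reach many functions.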

24
New cards

Data Source vs Data Sink

A data source is where data originates, such as a user input, sensor, or database, acting as the starting point in a data flow.

A data sink is the destination for that data, where it is stored, processed, or displayed, serving as the endpoint in the data flow.

The key difference is the direction of data movement: sources emit or provide data, while sinks receive or consume it

25
New cards

Pipes and Filters pattern

pipes/queues are the connections between the filters/workers (processing components).

26
New cards

Benefits of Pipes and Filters Pattern

each task can be scaled independently, parallel processing of tasks, more reliable/fault-tolerant because even if some workers fail, the entire pipeline doesn’t fail.
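A pipes-and-filters pipeline can be sketched with Python generators: each filter consumes from its upstream "pipe" (an iterable) and yields into the next stage. The filters here are made-up examples.

```python
# Pipes-and-filters sketch: composable processing stages connected
# by iterables standing in for queues.

def to_upper(pipe):
    for item in pipe:
        yield item.upper()          # filter 1: normalize case

def drop_short(pipe, min_len=4):
    for item in pipe:
        if len(item) >= min_len:    # filter 2: discard short items
            yield item

source = ["log", "error", "warning", "ok"]
pipeline = drop_short(to_upper(source))
output = list(pipeline)
```

Because each stage only depends on its input pipe, a stage can be replaced, scaled, or restarted without rebuilding the rest of the pipeline, which is the fault-tolerance benefit named above.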

27
New cards

AWS Lambda

a serverless offering from Amazon Web Services (AWS). Lambda is a compute service that lets you run code without provisioning or managing servers.

28
New cards

In the push model..

an AWS service (such as S3) publishes events which invoke the lambda functions

29
New cards

In the pull model

AWS Lambda polls the event source and invokes the Lambda function when records are detected on that source. The pull model works for poll-based event sources: Kinesis streams, DynamoDB streams, and SQS queues (know these three).

30
New cards

In AWS Lambda, you can have at the most ___ concurrent executions across all the functions in your account (account concurrency limit)

1000

31
New cards

Function concurrency limit

concurrency for a function out of the unreserved account concurrency limit

32
New cards

For event sources that aren’t poll-based, you can estimate the number of concurrent invocations of your Lambda functions using the formula

Events/Requests per second * Function Duration
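A worked example of the formula, with an assumed event rate and function duration:

```python
# Concurrency estimate for non-poll-based event sources:
#   concurrent executions ≈ requests per second × average function duration
requests_per_second = 10    # assumed incoming event rate
function_duration_s = 3     # assumed average execution time (seconds)

concurrent_executions = requests_per_second * function_duration_s
# 10 events/s, each holding an execution slot for 3 s → ~30 in flight.
```

At 30 concurrent executions this single function would consume 3% of the default 1000-execution account concurrency limit mentioned above.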

33
New cards

For poll-based event sources that are stream based (Kinesis or DynamoDB streams) the concurrency is equal to the number of

active shards

34
New cards

shard

a term used in the context of databases and distributed systems to describe a horizontal partition of data in a database or data stream

35
New cards

Database Sharding

a technique used to distribute the data across multiple servers or instances to improve performance, scalability, and reliability.
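A common way to route records to shards is hashing the record's key, so rows spread evenly across servers and the same key always lands on the same shard. The shard count here is an arbitrary assumption.

```python
import hashlib

# Horizontal-sharding sketch: route each record to a shard by
# hashing its key modulo the number of shards.
NUM_SHARDS = 4

def shard_for(key):
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# The same key is always routed to the same shard:
repeat = (shard_for("user-42") == shard_for("user-42"))
shards = {shard_for(k) for k in ("user-1", "user-2", "user-3")}
```

Simple modulo routing reshuffles most keys when `NUM_SHARDS` changes; production systems often use consistent hashing to limit that movement.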

36
New cards

For poll-based event sources that are not stream based (SQS queues)

each message batch can be considered a single concurrent event

37
New cards

AWS Lambda has a timeout limit of …

5 minutes

38
New cards

Horizontal vs Vertical Sharding

Horizontal sharding splits a database table by rows, distributing different subsets of rows across various shards

Vertical sharding splits by columns, distributing different sets of columns into separate shards

39
New cards

Timeout makes the serverless computing model more suitable for

real-time or short running operations rather than long-running batch operations

40
New cards

A container helps in

isolating the execution of a function from other functions

41
New cards

when a function is invoked for the first time, or after a long time

a container is created, the execution environment is initialized, and the function code is loaded

The container is reused for subsequent invocations of the same function that happen within a certain period

The functions should be designed to be stateless

42
New cards

Any application state should be stored in

external resources such as a DB, cloud storage, or cache

43
New cards

Stateless communication

a method where each request is independent and does not rely on past interactions or stored server information
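The two cards above can be sketched together: a stateless handler that keeps nothing between invocations and reads/writes all state through an external store. A plain dict stands in for a database or cache such as DynamoDB or Redis.

```python
# Stateless-function sketch: the handler holds no state of its own,
# so any container (cold or warm) can serve any request.

external_store = {}    # stand-in for an external DB or cache

def handler(event, store):
    # Read-modify-write against the external store.
    user = event["user"]
    store[user] = store.get(user, 0) + 1
    return store[user]

first = handler({"user": "alice"}, external_store)
second = handler({"user": "alice"}, external_store)
```

Because the count lives in the store rather than in the function, the second call returns 2 even if it ran in a freshly created container.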

44
New cards

Cold Start / Cold Function

When a function has not been executed for a long time or is being executed for the first time, a new container has to be created, and the execution environment has to be initialized

45
New cards

Cold start can result in a

higher latency, as a new container has to be initialized

46
New cards

Warm Function

The cloud provider may reuse the container for subsequent invocations of the same functions within a short period

47
New cards

Warm function takes __ time to execute than a cold start

less

48
New cards

You can keep functions warm by

invoking them periodically
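A common keep-warm technique is to have a scheduler (e.g. an EventBridge rule, assumed here) invoke the function periodically with a synthetic "warmup" event; the handler short-circuits on such pings so they stay cheap. The `warmup` field is a convention chosen for this sketch, not a Lambda feature.

```python
# Keep-warm sketch: the handler detects synthetic ping events and
# skips the real work, keeping its container alive between real calls.

def handler(event, context=None):
    if event.get("warmup"):
        # Scheduled ping: container stays warm; do nothing expensive.
        return "warm ping"
    return f"handled {event['task']}"

ping = handler({"warmup": True})
real = handler({"task": "resize"})
```

The trade-off is paying for the periodic invocations in exchange for avoiding cold-start latency on real requests.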