Chapter 12: Cloud App Monitoring and Benchmarking

Description and Tags

Fall 2025


47 Terms

1

Why is benchmarking important for cloud applications?

It helps determine proper provisioning and capacity planning

2

Benchmarking helps organizations identify:

Under-utilized or over-provisioned resources

3

Market readiness of an application depends on:

Simulating all workload types the application may experience

4

Comparing alternative deployment architectures through benchmarking helps to:

Choose the most cost-effective design

5

Trace collection involves:

Logging real workload events such as user requests and timestamps

6

Workload modeling uses:

Mathematical models to generate synthetic workloads
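A minimal sketch of the analytical approach, assuming (as one common choice, not stated in the cards) that request arrivals follow a Poisson process, so inter-arrival gaps are exponentially distributed and the arrival rate is a tunable model parameter:

```python
import random

def synthetic_arrivals(rate_per_sec, duration_sec, seed=0):
    """Generate synthetic request arrival timestamps from a Poisson
    process: exponential inter-arrival gaps with mean 1/rate_per_sec."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_per_sec)  # sample next inter-arrival gap
        if t >= duration_sec:
            break
        arrivals.append(t)
    return arrivals

# e.g. roughly 40 req/sec of simulated load for a 10-second window
ts = synthetic_arrivals(40, 10)
```

Because the workload is driven by a model parameter (`rate_per_sec`) rather than a fixed trace, the same generator can produce many workload variants, which is exactly the flexibility the analytical approach provides.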

7

The Workload Specification Language (WSL) is used to:

Specify workload attributes in a structured way

8

Synthetic workloads must:

Be representative of real workloads

9

The empirical approach to workload generation:

Replays sampled real traces

10

A major disadvantage of the empirical approach is:

Real traces do not generalize well to different systems

11

The analytical approach allows:

Variability of workload characteristics by adjusting model parameters

12

The analytical approach is preferred because it allows:

Testing sensitivity to parameters like session length

13

In user emulation, each emulated user:

Is a separate thread alternating between requests and idle time
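The thread-per-user structure can be sketched as below; `emulated_user` and the stubbed request call are hypothetical names, and think times are assumed exponential for illustration:

```python
import random
import threading
import time

def emulated_user(user_id, session_length, mean_think_time, results):
    """One emulated user: a thread alternating between issuing a request
    and idling for a sampled think time."""
    rng = random.Random(user_id)
    for i in range(session_length):
        start = time.monotonic()
        # a real harness would call the system under test here; stubbed out
        elapsed = time.monotonic() - start
        results.append((user_id, i, elapsed))
        time.sleep(rng.expovariate(1.0 / mean_think_time))  # think time

results = []
threads = [threading.Thread(target=emulated_user, args=(u, 3, 0.01, results))
           for u in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Note that each thread decides locally when its next request fires (previous response time plus think time), which is why user emulation cannot pin down exact aggregate arrival times.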

14

A disadvantage of user emulation is:

Inability to control exact request arrival times

15

Aggregate workload generation allows:

Specifying exact request arrival timestamps
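A sketch of the idea, with hypothetical names: the generator walks a list of exact arrival offsets and fires each request at its scheduled instant, with no notion of users or sessions (hence no way to honor inter-request dependencies):

```python
import time

def replay_schedule(timestamps, issue):
    """Aggregate workload generation: fire requests at exact offsets
    (seconds from the start of the run)."""
    start = time.monotonic()
    for ts in timestamps:
        delay = ts - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)  # wait until the scheduled instant
        issue(ts)              # issue the request due at this offset

fired = []
replay_schedule([0.0, 0.05, 0.10], fired.append)
```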

16

Aggregate workload generation cannot be used when:

Request dependencies must be satisfied

17

An inter-request dependency exists when:

The current request depends on the previous request

18

A data dependency exists when:

The next request needs input from the previous response

19

A session is defined as:

A set of successive requests from a user

20

Think time is:

Time between successive requests in a session

21

Session length refers to:

Number of requests in a session

22

Workload mix defines:

Transition probabilities between pages and the proportion of visits to each
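One common way to encode a workload mix is a Markov transition matrix over page types; the pages and probabilities below are invented for illustration:

```python
import random

# Hypothetical workload mix: for each current page, the probability of
# visiting each possible next page (rows sum to 1).
MIX = {
    "home":     {"home": 0.1, "browse": 0.7, "checkout": 0.2},
    "browse":   {"home": 0.2, "browse": 0.5, "checkout": 0.3},
    "checkout": {"home": 0.8, "browse": 0.2, "checkout": 0.0},
}

def generate_session(length, start="home", seed=0):
    """Walk the transition matrix to produce one session's page sequence."""
    rng = random.Random(seed)
    page, session = start, [start]
    for _ in range(length - 1):
        pages = list(MIX[page])
        weights = [MIX[page][p] for p in pages]
        page = rng.choices(pages, weights=weights)[0]
        session.append(page)
    return session
```

Over many generated sessions, the long-run proportion of visits to each page converges to the mix implied by the matrix.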

23

Response time is:

Time between request submission and response received

24

Throughput measures:

Requests per second served
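The two metrics above can be computed directly from a benchmark log of (submit, receive) timestamp pairs; the helper name and log format are assumptions for illustration:

```python
def summarize(log, window_sec):
    """Compute mean response time and throughput from a list of
    (submit_time, receive_time) pairs collected over window_sec seconds."""
    response_times = [recv - sub for sub, recv in log]
    mean_rt = sum(response_times) / len(response_times)
    throughput = len(log) / window_sec  # requests served per second
    return mean_rt, throughput

log = [(0.0, 0.2), (0.5, 0.6), (1.0, 1.3)]  # 3 requests in a 2 s window
mean_rt, tput = summarize(log, window_sec=2.0)
```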

25

Baseline tests are used to:

Collect performance metrics for comparison after tuning

26

Load tests measure performance at:

Production-level user and workload levels

27

Stress tests are designed to:

Push the application to the point of failure

28

Soak tests help identify:

Long-term stability issues over extended workload durations

29

Accuracy of a benchmarking methodology refers to:

How well synthetic workloads match real workloads

30

Flexibility in workload generation means:

You have fine-grained control over characteristics like think time

31

Wide application coverage means the methodology:

Works across different workload types and architectures

32

Deployment prototyping helps developers:

Choose the most cost-effective deployment architecture

33

Deployment refinement may involve:

Vertical or horizontal scaling

34

The traditional approach to workload capture involves:

Manually recording virtual user scripts

35

A drawback of the traditional approach is:

It cannot generate realistic synthetic workloads easily

36

The fully automated workflow uses:

Real traces to build workload and benchmark models

37

Statistical analysis of traces is used to:

Identify the distributions for workload attributes
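As a small illustration, if think times are assumed exponential (one plausible choice, not prescribed by the cards), the statistical-analysis step reduces to estimating the distribution parameter from the gaps in a per-user request trace:

```python
import statistics

def fit_think_time(trace):
    """Estimate think-time distribution parameters from one user's
    request timestamps. Assuming exponential think times, the
    maximum-likelihood estimate of the mean is the sample mean of gaps."""
    gaps = [b - a for a, b in zip(trace, trace[1:])]
    return {"distribution": "exponential",
            "mean_think_time": statistics.fmean(gaps)}

params = fit_think_time([0.0, 2.0, 3.0, 7.0])  # gaps: 2, 1, 4
```

The fitted parameters then feed the workload model, closing the loop of the fully automated trace-to-benchmark workflow.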

38

In the case study, throughput saturates because:

The database CPU becomes a bottleneck

39

Throughput increases until around:

40 req/sec

40

Network out saturation for the DB server occurs around:

200 KB/s

41

Amazon CloudWatch is used for:

Monitoring AWS resources and applications

42

A CloudWatch metric represents:

A monitored variable like CPUUtilization

43

CloudWatch basic monitoring collects metrics every:

5 minutes

44

Detailed monitoring for EC2 collects metrics every:

1 minute
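A sketch of pulling per-minute CPUUtilization (the detailed-monitoring granularity) for one EC2 instance; the helper name and instance ID are made up, and the dict is meant to be passed to boto3's `cloudwatch.get_metric_statistics(**params)` by a caller with AWS credentials:

```python
from datetime import datetime, timedelta, timezone

def cpu_metric_request(instance_id, minutes=30):
    """Build GetMetricStatistics parameters for per-minute average
    CPUUtilization of a single EC2 instance over the last N minutes."""
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "StartTime": now - timedelta(minutes=minutes),
        "EndTime": now,
        "Period": 60,  # 60 s granularity requires detailed monitoring
        "Statistics": ["Average"],
    }

params = cpu_metric_request("i-0123456789abcdef0")
```

With basic monitoring, `Period` would need to be a multiple of 300 seconds to match the 5-minute collection interval.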

45

CloudWatch alarms are used to:

Trigger automated actions based on metric thresholds

46

CloudWatch Dashboards allow users to:

View customized metrics and alarms

47

CloudWatch Logs are used to:

Monitor, store, and access log files