Fall 2025
Why is benchmarking important for cloud applications?
It helps determine proper provisioning and capacity planning
Benchmarking helps organizations identify:
Under-utilized or over-provisioned resources
Market readiness of an application depends on:
Simulating all workload types the application may experience
Comparing alternative deployment architectures through benchmarking helps to:
Choose the most cost-effective design
Trace collection involves:
Logging real workload events such as user requests and timestamps
Workload modeling uses:
Mathematical models to generate synthetic workloads
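The analytical idea above can be sketched in a few lines: sample inter-arrival times from a chosen distribution to synthesize a request stream. This is a minimal illustration assuming Poisson arrivals; the function name and rate value are hypothetical, not from the source.

```python
import random

def synthetic_arrivals(rate_per_sec, n_requests, seed=0):
    """Generate synthetic request arrival times (seconds) by sampling
    exponentially distributed inter-arrival gaps, i.e. a Poisson
    arrival process with the given mean rate."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    for _ in range(n_requests):
        t += rng.expovariate(rate_per_sec)  # mean gap = 1 / rate
        arrivals.append(t)
    return arrivals

arrivals = synthetic_arrivals(rate_per_sec=40, n_requests=5)
```

Because the workload is driven by model parameters (here, the arrival rate), characteristics can be varied freely, which is exactly the flexibility the analytical approach offers over trace replay.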
The Workload Specification Language (WSL) is used to:
Specify workload attributes in a structured way
Synthetic workloads must:
Be representative of real workloads
The empirical approach to workload generation:
Replays sampled real traces
A major disadvantage of the empirical approach is:
Real traces do not generalize well to different systems
The analytical approach allows:
Variability of workload characteristics by adjusting model parameters
The analytical approach is preferred because it allows:
Testing sensitivity to parameters like session length
In user emulation, each emulated user:
Is a separate thread alternating between requests and idle time
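The thread-per-user structure can be sketched as below; the request itself is replaced by a log append, and the function and parameter names are illustrative only.

```python
import threading
import time

def emulated_user(user_id, n_requests, think_time_s, log, lock):
    """One emulated user: a thread that alternates between issuing a
    request and idling for a think time."""
    for i in range(n_requests):
        with lock:
            log.append((user_id, i))  # stand-in for sending a real request
        time.sleep(think_time_s)      # idle (think) time between requests

log, lock = [], threading.Lock()
users = [threading.Thread(target=emulated_user, args=(u, 3, 0.01, log, lock))
         for u in range(4)]
for t in users:
    t.start()
for t in users:
    t.join()
```

Note the disadvantage named above is visible here: each thread decides its own send times, so the aggregate arrival instants cannot be controlled exactly.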
A disadvantage of user emulation is:
Inability to control exact request arrival times
Aggregate workload generation allows:
Specifying exact request arrival timestamps
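In contrast to user emulation, aggregate generation fires requests at prescribed instants. A minimal sketch, with hypothetical names and a callback standing in for the actual request sender:

```python
import time

def replay_at_timestamps(offsets_s, send):
    """Aggregate workload generation: issue requests at exact arrival
    offsets (seconds from start), independent of any per-user session."""
    start = time.monotonic()
    for i, off in enumerate(sorted(offsets_s)):
        delay = start + off - time.monotonic()
        if delay > 0:
            time.sleep(delay)  # wait until the scheduled arrival instant
        send(i)

fired = []
replay_at_timestamps([0.0, 0.02, 0.01], fired.append)
```

Since requests are scheduled purely by timestamp, there is no way to hold one back until an earlier response arrives, which is why this approach cannot satisfy request dependencies.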
Aggregate workload generation cannot be used when:
Request dependencies must be satisfied
An inter-request dependency exists when:
The current request can only be issued after the previous request has completed (an ordering constraint)
A data dependency exists when:
The next request needs input from the previous response
A session is defined as:
A sequence of successive requests issued by a single user
Think time is:
Time between successive requests in a session
Session length refers to:
Number of requests in a session
Workload mix defines:
Transitions between pages and the proportion of visits
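A workload mix of this kind is commonly encoded as a page-transition probability matrix and walked to produce sessions. The pages and probabilities below are invented for illustration:

```python
import random

# Hypothetical workload mix: transition probabilities between pages.
MIX = {
    "home":   [("browse", 0.7), ("home", 0.3)],
    "browse": [("buy", 0.2), ("browse", 0.5), ("home", 0.3)],
    "buy":    [("home", 1.0)],
}

def generate_session(length, start="home", seed=0):
    """Walk the transition matrix to produce one session's page sequence."""
    rng = random.Random(seed)
    page, session = start, [start]
    for _ in range(length - 1):
        pages, weights = zip(*MIX[page])
        page = rng.choices(pages, weights=weights)[0]
        session.append(page)
    return session

session = generate_session(6)
```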
Response time is:
Time between request submission and response received
Throughput measures:
Requests per second served
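The two metrics just defined can be computed directly from per-request (submit, receive) timestamp pairs; the helper name and sample data are illustrative:

```python
def perf_metrics(requests):
    """Compute mean response time and throughput from a list of
    (submit_time, receive_time) pairs collected during a run."""
    resp_times = [recv - sub for sub, recv in requests]
    mean_resp = sum(resp_times) / len(resp_times)
    duration = max(r for _, r in requests) - min(s for s, _ in requests)
    throughput = len(requests) / duration  # requests served per second
    return mean_resp, throughput

# four requests over a 2-second window, each taking 0.5 s
mean_resp, tput = perf_metrics([(0.0, 0.5), (0.5, 1.0), (1.0, 1.5), (1.5, 2.0)])
```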
Baseline tests are used to:
Collect performance metrics for comparison after tuning
Load tests measure performance at:
Production-level user and workload levels
Stress tests are designed to:
Push the application to the point of failure
Soak tests help identify:
Long-term stability issues over extended workload durations
Accuracy of a benchmarking methodology refers to:
How well synthetic workloads match real workloads
Flexibility in workload generation means:
You have fine-grained control over characteristics like think time
Wide application coverage means the methodology:
Works across different workload types and architectures
Deployment prototyping helps developers:
Choose the most cost-effective deployment architecture
Deployment refinement may involve:
Vertical or horizontal scaling
The traditional approach to workload capture involves:
Manually recording virtual user scripts
A drawback of the traditional approach is:
It cannot generate realistic synthetic workloads easily
The fully automated workflow uses:
Real traces to build workload and benchmark models
Statistical analysis of traces is used to:
Identify the distributions for workload attributes
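As one concrete instance of fitting a distribution to a trace attribute, an exponential think-time rate can be estimated from request timestamps via its maximum-likelihood estimator (rate = 1 / mean gap). The function name and trace values are hypothetical:

```python
from statistics import mean

def fit_exponential_think_time(timestamps):
    """Estimate the rate of an exponential think-time distribution from
    one user's request timestamps (MLE: rate = 1 / mean inter-request gap)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return 1.0 / mean(gaps)

# trace with a mean gap of 2 s  ->  estimated rate of 0.5 requests/s
rate = fit_exponential_think_time([0.0, 1.0, 4.0, 6.0])
```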
In the case study, throughput saturates because:
The database CPU becomes a bottleneck
Throughput increases until around:
40 req/sec
Network out saturation for the DB server occurs around:
200 KB/s
Amazon CloudWatch is used for:
Monitoring AWS resources and applications
A CloudWatch metric represents:
A monitored variable like CPUUtilization
CloudWatch basic monitoring collects metrics every:
5 minutes
Detailed monitoring for EC2 collects metrics every:
1 minute
CloudWatch alarms are used to:
Trigger automated actions based on metric thresholds
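As a sketch of such an alarm, the AWS CLI can create one that fires when average EC2 CPUUtilization stays above a threshold; the alarm name, instance ID, and action ARN below are placeholders (this is a config fragment, not a tested command):

```shell
aws cloudwatch put-metric-alarm \
  --alarm-name high-cpu-example \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:example-topic
```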
CloudWatch Dashboards allow users to:
View customized metrics and alarms
CloudWatch Logs are used to:
Monitor, store, and access log files