System Design Trade-Offs

Q&A flashcards to review the main system design trade-offs: polling, database modeling, messaging, concurrency, and performance vs. latency.

7 Terms

1

What are the main system design trade-offs?

  • Polling Schemes (Periodic, Long Polling, SSE, WebSockets)

  • Database Modeling Schemes (Relational, NoSQL, Column-Oriented, Graph)

  • Messaging & Queues (Kafka, RabbitMQ, SQS, etc.)

  • Concurrency & Locking (conditional writes, database locks, status tables, Redis locks, optimistic concurrency control)

  • Performance vs. Latency (throughput-oriented vs. latency-oriented design)

2

How many major trade-offs are we remembering?

5 (Polling Schemes, Database Modeling Schemes, Messaging & Queues, Concurrency & Locking, Performance vs. Latency).

3

What are the 4 polling schemes discussed?

  • Periodic polling

  • Long polling (sketched in the code after this list)

  • Server-Sent Events (SSE)

  • WebSockets
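
As a concrete illustration of long polling, here is a minimal Python client sketch. The `/events` endpoint, the `cursor` parameter, and the response shape are hypothetical; the point is that the server holds each request open until it has new data, and the client reconnects right away.

```python
# Hypothetical long-polling loop: each GET blocks server-side until an event
# newer than `cursor` exists (or the server returns 204 after its own timeout),
# then the client processes the event and immediately reconnects.
from typing import Optional

import requests


def long_poll(url: str, cursor: Optional[str] = None) -> None:
    while True:
        try:
            # Client timeout set a bit longer than the server's hold time.
            resp = requests.get(url, params={"cursor": cursor}, timeout=35)
        except requests.exceptions.Timeout:
            continue  # reconnect and keep waiting
        if resp.status_code == 200:
            event = resp.json()
            cursor = event.get("cursor", cursor)  # resume from the last event seen
            print("received:", event)
        # A 204 response means "no new data yet": just loop and reconnect.


# long_poll("https://example.com/events")
```

Periodic polling would instead fire this request on a fixed timer whether or not new data exists, while SSE and WebSockets keep a single long-lived connection and push events over it.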

4

What are the 5 concurrency/locking approaches?

  • Conditional writes — a DynamoDB feature that applies a write only if a specified condition is satisfied, making check-and-set atomic (see the sketch after this list).

  • Database locks — useful when lock duration is short.

    • Limitation: only valid if the locking period is on the order of seconds.

  • Status table — a separate table or column tracks the status of operations.

  • Redis lock (Redlock) — distributed locking using Redis.

  • Optimistic concurrency control — uses version/timestamp; retries if conflicts occur.
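
A minimal sketch of the first approach, a DynamoDB conditional write via boto3. The `orders` table, its `order_id`/`status`/`worker` attributes, and the PENDING→PROCESSING transition are assumptions made for illustration.

```python
# Conditional write: the update is applied only if the item is still PENDING,
# so two workers cannot both claim the same order.
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("orders")


def claim_order(order_id: str, worker_id: str) -> bool:
    try:
        table.update_item(
            Key={"order_id": order_id},
            UpdateExpression="SET #s = :processing, worker = :w",
            ConditionExpression="#s = :pending",  # checked atomically with the write
            ExpressionAttributeNames={"#s": "status"},  # "status" is a reserved word
            ExpressionAttributeValues={
                ":processing": "PROCESSING",
                ":pending": "PENDING",
                ":w": worker_id,
            },
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # another worker claimed the order first
        raise
```

If two workers race, exactly one update passes the condition; the other gets ConditionalCheckFailedException and can back off or retry, which is also the basic shape of optimistic concurrency control with a version attribute.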

5

What’s the difference between performance-oriented and latency-oriented design?

  • Performance-Oriented: Optimized for throughput; tolerates higher per-request latency (e.g., via batching, as sketched below).

  • Latency-Oriented: Optimized for responsiveness; sacrifices some throughput.
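
One way to make the trade-off concrete is a buffered writer whose batch size and flush interval are the knobs; the class below is a sketch, with `print` standing in for a real bulk write.

```python
# Throughput vs. latency as a batching knob: bigger batches amortize per-write
# overhead (throughput), but each record waits longer before it is flushed (latency).
import time


class BatchWriter:
    def __init__(self, flush_every: int = 100, max_wait_s: float = 1.0) -> None:
        self.buffer: list = []
        self.flush_every = flush_every  # larger => higher throughput
        self.max_wait_s = max_wait_s    # smaller => lower latency
        self.last_flush = time.monotonic()

    def write(self, record) -> None:
        self.buffer.append(record)
        if (len(self.buffer) >= self.flush_every
                or time.monotonic() - self.last_flush >= self.max_wait_s):
            self.flush()

    def flush(self) -> None:
        if self.buffer:
            print(f"flushing {len(self.buffer)} records")  # stand-in for one bulk write
            self.buffer.clear()
        self.last_flush = time.monotonic()
```

Raising `flush_every` or `max_wait_s` pushes the design toward throughput; shrinking them pushes it toward latency.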

6

How do you shard data for low latency versus avoiding hot shards?

  • Same-shard for latency: put related records together in one shard so queries are faster (both strategies are sketched after this list).

    • Con: this can create hot shards (too much load on one shard).

  • Uniform/random distribution: spread records evenly across shards.

    • Pro: avoids hot shards.

    • Con: queries may need to pull from multiple shards, which can increase latency.
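
A sketch of both shard-key choices in Python, assuming a fixed number of shards addressed by index; the `user_id` and `record_id` names are illustrative.

```python
# Shard-key choice: hashing only user_id co-locates a user's records (fast
# single-shard reads, risk of a hot shard); hashing the composite key spreads
# them uniformly (no hot shard, but per-user reads fan out to every shard).
import hashlib

NUM_SHARDS = 8


def _hash_to_shard(key: str) -> int:
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS


def shard_for_latency(user_id: str) -> int:
    return _hash_to_shard(user_id)  # all of a user's records on one shard


def shard_for_uniformity(user_id: str, record_id: str) -> int:
    return _hash_to_shard(f"{user_id}:{record_id}")  # records spread across shards
```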

7

What are the different messaging queues and their trade-offs?

Amazon SQS: Simple managed queue with a visibility timeout; a received message is hidden from other consumers until it is deleted or the timeout expires (see the sketch below).

Redis Pub/Sub: Fire-and-forget delivery (messages are not persisted), useful for chat applications and other notification systems.

Kafka: Log-based queue that retains record history, useful for auditability and replayability.
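
A consumer sketch showing the SQS visibility-timeout behaviour with boto3; the queue URL and the `process` function are placeholders, not from the cards.

```python
# SQS consumer: a received message stays invisible to other consumers for
# VisibilityTimeout seconds; it is only gone for good once we delete it, so a
# crash mid-processing lets the message reappear and be retried.
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder


def process(body: str) -> None:
    print("processing:", body)


def consume_once() -> None:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=1,
        WaitTimeSeconds=20,    # long-poll the queue instead of busy-polling
        VisibilityTimeout=30,  # hide the message from other consumers for 30 s
    )
    for msg in resp.get("Messages", []):
        process(msg["Body"])
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```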