CGS Final Exam

Last updated 11:21 AM on 5/4/26
114 Terms

1
New cards

NoSQL

A generation of database management systems not based on the traditional relational model.

2
New cards

What "NoSQL" actually stands for

"Not Only SQL"

3
New cards

Five NoSQL characteristics

(1) Not based on the relational model, (2) Support distributed architectures, (3) Provide fault tolerance and high scalability/availability, (4) Support large amounts of sparse data, (5) Geared toward performance over consistency.

4
New cards

Four main categories of NoSQL databases

Key-Value, Document, Graph, Column-Oriented.

5
New cards

Key-Value (KV) database

A NoSQL model that stores data as key-value pairs in which the value is unintelligible to the DBMS.

6
New cards

Bucket (KV database)

A logical grouping of keys, similar to a table; a key can appear only once within a bucket.

7
New cards

Three KV operations

Get (retrieve value by key), Store (write value to key, replacing any existing value), Delete (remove the key-value pair).
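The three KV operations can be sketched with an in-memory Python class standing in for a bucket. This is a hypothetical illustration, not any real KV store's API; the point is that the store fetches, overwrites, and deletes by key without ever inspecting the value.

```python
class Bucket:
    """In-memory stand-in for a key-value bucket (hypothetical, for illustration).

    The DBMS treats each value as opaque: it can fetch, overwrite, or
    delete by key, but never looks inside the value.
    """

    def __init__(self):
        self._pairs = {}

    def get(self, key):
        # Get: retrieve the value stored under a key (None if absent).
        return self._pairs.get(key)

    def store(self, key, value):
        # Store: write the value, silently replacing any existing value.
        self._pairs[key] = value

    def delete(self, key):
        # Delete: remove the key-value pair if it exists.
        self._pairs.pop(key, None)


users = Bucket()
users.store("user:42", b'{"name": "Ada"}')    # value is just bytes to the store
users.store("user:42", b'{"name": "Grace"}')  # Store replaces the old value
users.delete("user:42")                        # Delete removes the pair
```

Note that a key can appear only once per bucket, which is exactly the dictionary's behavior: a second `store` on the same key replaces the first value rather than adding a duplicate.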

8
New cards

Examples of Key-Value databases

Dynamo, Riak, Redis, Voldemort.

9
New cards

Document database

A NoSQL model that stores key-value pairs in which the value is a tag-encoded document (XML, JSON, BSON), and the DBMS understands the document's content.

10
New cards

Collection (document database)

The grouping container for key-value pairs, analogous to a bucket in KV databases.

11
New cards

Key difference between KV and Document databases

Document DBMSs understand and can query the value's internal structure; KV DBMSs do not.

12
New cards

Examples of Document databases

MongoDB, CouchDB, OrientDB, RavenDB.

13
New cards

Graph database

A NoSQL database that uses graph theory to store entity instances and the relationships between them, represented as nodes and edges.

14
New cards

Node (graph DB)

A single entity instance.

15
New cards

Edge (graph DB)

A relationship between nodes.

16
New cards

Property (graph DB)

An attribute describing a node or an edge.

17
New cards

How is graph data physically stored

Often in structures like an adjacency matrix or as key-value pairs, even though it is visualized as nodes and edges.

18
New cards

Examples of Graph databases

Neo4j, ArangoDB, GraphBase, Aerospike.

19
New cards

Hadoop

A Java-based framework (not a database) for distributing and processing very large data sets across clusters of computers.

20
New cards

Two most important parts of Hadoop

HDFS (Hadoop Distributed File System) and MapReduce.

21
New cards

HDFS

A highly distributed, fault-tolerant file storage system designed to manage large amounts of data at high speed; a low-level distributed file system used directly for storage.

22
New cards

Four HDFS assumptions

(1) High volume (terabyte+ files), (2) Write-once, read-many (no edits after close), (3) Streaming access (process whole files as a stream), (4) Fault tolerance (replicate data across many machines).

23
New cards

Client node (HDFS)

A node that makes requests to the file system.

24
New cards

Name node (HDFS)

The node that stores metadata about which blocks belong to which files and which data nodes hold them.

25
New cards

Data node (HDFS)

A node that stores the actual file data blocks.

26
New cards

Block report

A report sent every 6 hours from a data node to the name node listing which blocks it holds.

27
New cards

Heartbeat

A signal sent every 3 seconds from a data node to the name node to confirm it is still available.

28
New cards

What happens when a name node stops receiving heartbeats from a data node

It excludes that data node from future read/write lists and may instruct other nodes to replicate the missing data.

29
New cards

MapReduce

A divide-and-conquer parallel processing technique: split a large data block into sub-blocks, compute intermediate results, then summarize into one final answer.
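The divide-and-conquer idea can be sketched as a single-machine word count. This is a hypothetical illustration, not Hadoop's actual API: real MapReduce runs the map tasks in parallel across data nodes, and the "shuffle" step that groups intermediate pairs by key happens across the network.

```python
from collections import defaultdict

def map_block(block):
    # Map: emit an intermediate (word, 1) pair for every word in the block.
    return [(word, 1) for word in block.split()]

def reduce_pairs(grouped):
    # Reduce: summarize each key's intermediate values into one final count.
    return {word: sum(ones) for word, ones in grouped.items()}

def map_reduce(blocks):
    grouped = defaultdict(list)
    for block in blocks:                  # in Hadoop these map tasks run in parallel
        for word, one in map_block(block):
            grouped[word].append(one)     # "shuffle": group intermediate pairs by key
    return reduce_pairs(grouped)

blocks = ["big data big ideas", "big clusters"]
print(map_reduce(blocks))  # {'big': 3, 'data': 1, 'ideas': 1, 'clusters': 1}
```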

30
New cards

Mapper

A program that performs the Map function.

31
New cards

Reducer

A program that performs the Reduce function.

32
New cards

Big Data

A term describing data sets so large, fast, or varied that traditional RDBMSs cannot handle them efficiently.

33
New cards

The 3 Vs

Volume, Velocity, Variety.

34
New cards

Volume

A characteristic of Big Data describing the quantity of data to be stored.

35
New cards

Velocity

A characteristic of Big Data describing the speed at which data enters the system and the speed at which it must be processed.

36
New cards

Variety

A characteristic of Big Data describing variations in the structure of the data being stored.

37
New cards

Scaling up

Handling data growth by migrating to a more powerful single system (more CPUs, more storage on one machine).

38
New cards

Scaling out

Handling data growth by distributing storage across a cluster of commodity servers; the dominant approach for Big Data.

39
New cards

Why RDBMSs are ill-suited for clusters

Distributing an RDBMS requires heavy communication and coordination among nodes, with significant performance cost.

40
New cards

Stream processing

Processing data as it enters the system to decide what to keep and what to discard before storage (focuses on inputs).

41
New cards

Feedback loop processing

Analyzing stored data to produce actionable results (focuses on outputs).

42
New cards

Structured data

Data that conforms to a predefined data model (e.g., relational tables).

43
New cards

Unstructured data

Data that does not conform to a predefined data model (e.g., images, video, audio).

44
New cards

BLOB (Binary Large Object)

An RDBMS data type for storing unstructured objects as a single atomic value; semantic content is opaque to the DBMS.

45
New cards

Variability

Big Data characteristic where the same data values may have different meanings or interpretations over time.

46
New cards

Veracity

Big Data characteristic regarding the trustworthiness/quality of the data.

47
New cards

Value

Big Data characteristic regarding the degree to which data can provide meaningful insights.

48
New cards

Visualization

The ability to graphically present data in a way that makes it understandable to users.

49
New cards

Concurrency control

A DBMS feature that coordinates simultaneous execution of transactions in a multi-user system while preserving data integrity.

50
New cards

Which ACID property does concurrency control mostly preserve

Isolation.

51
New cards

The three concurrency control problems

Lost update, uncommitted data (dirty read), inconsistent retrieval.

52
New cards

Lost update

A concurrency problem in which a data update is overwritten and lost during concurrent execution of transactions.
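The lost update can be made concrete by simulating two interleaved "transactions" step by step (no real DBMS or threads involved; the interleaving is written out by hand for illustration):

```python
# Hypothetical sketch of a lost update on a shared balance of 100.

balance = 100  # shared data item

# Both transactions read the same initial value...
t1_read = balance
t2_read = balance

# ...each computes its update from its own (now stale) copy...
t1_result = t1_read + 50   # T1: deposit 50
t2_result = t2_read - 30   # T2: withdraw 30

# ...and T2's write overwrites T1's, so the deposit is lost.
balance = t1_result        # T1 writes 150
balance = t2_result        # T2 writes 70 -- T1's update is gone

print(balance)  # 70, but any serial execution would end at 120
```

A scheduler prevents this by forcing T2 to read *after* T1's write (or by locking the balance), so T2 computes from 150 and the final state is 120.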

53
New cards

Uncommitted data (dirty read)

A concurrency problem in which a transaction reads data written by another transaction that later rolls back.

54
New cards

Inconsistent retrieval

A concurrency problem in which a transaction uses an aggregate function on data while other transactions are updating that data, producing incorrect aggregate results.

55
New cards

Scheduler

A DBMS component that establishes the order in which concurrent transaction operations are executed, interleaving them to ensure serializability.

56
New cards

Serializable schedule

A schedule of operations whose interleaved execution yields the same result as some serial execution.

57
New cards

Lock

A device that guarantees unique use of a data item for a particular transaction operation.

58
New cards

Pessimistic locking

Use of locks based on the assumption that conflicts between transactions will occur.

59
New cards

Lock manager

A DBMS component responsible for assigning and releasing locks.

60
New cards

Lock granularity

The level at which locks are applied: database, table, page, row, or field (broadest to most fine-grained).

61
New cards

Database-level lock

A lock that restricts database access to the lock owner; only one user at a time can use the database.

62
New cards

Table-level lock

A lock that allows only one transaction at a time to access a given table.

63
New cards

Page-level lock

A lock that restricts access to a disk page (a section of disk).

64
New cards

Row-level lock

A lock that allows concurrent transactions to access different rows of the same table, even if those rows live on the same page.

65
New cards

Field-level lock

A lock that allows concurrent transactions to access the same row but different fields; most flexible, highest overhead.

66
New cards

Trade-off as lock granularity gets finer

More concurrency, but higher overhead cost.

67
New cards

Binary lock

A lock with only two states: locked and unlocked.

68
New cards

Exclusive lock

A lock issued when a transaction requests permission to update a data item and no other locks are held on it.

69
New cards

Shared lock

A lock issued when a transaction requests permission to read a data item and no exclusive lock is held on it by another transaction.

70
New cards

Deadlock

A condition in which two or more transactions wait indefinitely for each other to release locks (also called a deadly embrace).


71
New cards

Deadlock prevention

A transaction requesting a lock is aborted if there is any chance of deadlock.

72
New cards

Deadlock detection

The DBMS periodically tests the database for deadlocks; if found, one transaction is aborted.

73
New cards

Deadlock avoidance

Transactions must obtain every lock they will need before being allowed to execute.

74
New cards

Transaction

A sequence of database requests that accesses the database; a logical unit of work that either entirely completes or is aborted.

75
New cards

Consistent database state

A state in which all data integrity constraints are satisfied.

76
New cards

Rollback

Reverting the database to its previous consistent state because a transaction failed or was explicitly aborted.

77
New cards

Atomicity (the A in ACID)

All parts of a transaction are treated as a single, indivisible logical unit.

78
New cards

Consistency (the C in ACID)

Data integrity constraints are satisfied; transactions must start and end in consistent states.

79
New cards

Isolation (the I in ACID)

A data item used by one transaction is not available to other transactions until the first one ends.

80
New cards

Durability (the D in ACID)

Once a transaction is committed, its changes cannot be undone or lost, even after a system failure.

81
New cards

Serializability

The selected order of concurrent transaction operations produces the same final database state as some serial execution would have produced.

82
New cards

COMMIT

Permanently records all changes made by the transaction and ends the transaction.

83
New cards

ROLLBACK

Aborts all changes made by the transaction and reverts the database to its previous state.
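COMMIT and ROLLBACK can be demonstrated with Python's built-in `sqlite3` module standing in for the DBMS (a sketch; the MySQL statements behave analogously):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100)")
conn.commit()    # COMMIT: the inserted row is now permanent

conn.execute("UPDATE account SET balance = balance - 999 WHERE id = 1")
conn.rollback()  # ROLLBACK: abort the update, revert to the committed state

balance = conn.execute(
    "SELECT balance FROM account WHERE id = 1"
).fetchone()[0]
print(balance)   # 100 -- the uncommitted update was undone
```

Because the UPDATE was never committed, the rollback restores the last consistent state; had we called `conn.commit()` instead, durability would have made the change permanent.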

84
New cards

START TRANSACTION (MySQL)

Explicitly begins a transaction; needed in MySQL because autocommit is on by default, so without it each statement commits immediately on its own.

85
New cards

Implicit COMMIT

When the SQL command set ends successfully, all changes are recorded automatically as if COMMIT were issued.

86
New cards

Implicit ROLLBACK

When the SQL command set terminates abnormally, changes are aborted automatically as if ROLLBACK were issued.

87
New cards

Transaction log

A DBMS feature that keeps track of all transaction operations that update the database, used for recovery from rollbacks, abnormal termination, or system failure.

88
New cards

Six things stored in the transaction log

(1) Begin marker for each transaction, (2) Operation type (INSERT/UPDATE/DELETE), (3) Names of affected objects, (4) Before-and-after values for updated fields, (5) Pointers to previous and next log entries, (6) End/COMMIT marker.

89
New cards

Embedded SQL

SQL statements contained within an application written in a host programming language such as C, C++, Java, or ASP.NET.

90
New cards

Host language

Any programming language that contains embedded SQL.

91
New cards

Steps to build an embedded-SQL program

(1) Programmer writes embedded SQL inside host code, (2) Pre-processor transforms it into DBMS- and language-specific procedure calls, (3) Host compiler compiles the program, (4) Linker produces the executable plus an "access plan" module.

92
New cards

Access plan

The compiled module containing the instructions needed to run embedded SQL code at runtime.

93
New cards

Main weakness of embedded SQL

Executables can be decompiled, exposing table names and dictionary structure; SQL errors are not caught at compile time and may surface at run-time.

94
New cards

Stored procedure

Business logic stored on the database server in the form of SQL code (or a DBMS-specific procedural language) that can be called by applications.

95
New cards

Two main advantages of stored procedures

(1) Reduce network traffic and improve performance (SQL is not transmitted across the network), (2) Reduce code duplication, lowering errors and maintenance cost.

96
New cards

Stored procedure syntax (MySQL)

CREATE PROCEDURE name(parameter_list) BEGIN SQL_statements; END;

97
New cards

IN parameter

A value supplied by the caller into the stored procedure.

98
New cards

OUT parameter

A value returned from the stored procedure to the caller.

99
New cards

How to invoke a stored procedure manually

Use CALL procedure_name(arg1, arg2, …);

100
New cards

Why prefer stored procedures over inline SQL strings in app code

Centralizes business logic, improves security (less SQL injection surface), reduces network traffic, easier to maintain.