Conceptual Design
The first stage of database design, creating a software/hardware-independent model of the database structure.
Logical Design
Maps the conceptual model to a specific data model (e.g., relational tables).
Physical Design
Determines data storage organization and access methods for performance, integrity, and security.
Conceptual Design Steps: 1. Data Analysis and Requirements
The first step in conceptual design is to discover the characteristics of the data elements.
Business rule
It is a brief and precise description of a policy, procedure, or principle within a specific organization’s environment.
Conceptual Design Steps: 2. Entity Relationship Modeling and Normalization
The process of defining business rules and developing the conceptual model using ER diagrams.
Conceptual Design Steps: 3. Data Model Verification
In this step, the ER model must be verified against the proposed system processes to corroborate that they can be supported by the database model.
Module
It is an information system component that handles a specific business function, such as inventory, orders, or payroll.
Conceptual Design Steps: 4. Distributed Database Design
This is the step in which the database data and processes are distributed across the system; portions of a database, known as database fragments, may reside in several physical locations.
Database Fragment
It is a subset of a database that is stored at a given location. This may be a subset of rows or columns from one or multiple tables.
Conceptual Design Steps: 5. DBMS Software Selection
This is the step in which the DBMS software is chosen; the factors that affect the purchasing decision vary from company to company.
Physical Design Steps: 1. Define Data Storage Organization
In order to define data storage organization, you must determine the volume of data to be managed and the data usage patterns.
Physical Design Steps: 2. Define Integrity and Security Measures
Once the physical organization of the tables, indexes, and views is defined, integrity and security measures (such as user access rights) must be put in place before the database is made available to end users.
Physical Design Steps: 3. Determine Performance Measures
It deals with fine-tuning the DBMS and queries to ensure that they will meet end-user performance requirements.
Transaction
It is any action that reads from or writes to a database; typically a sequence of database requests that accesses the database as a single logical unit of work.
ACID Properties: Atomicity
This requires that all operations (SQL requests) of a transaction be completed; if not, the transaction is aborted.
ACID Properties: Consistency
This indicates the permanence of the database’s consistent state: a transaction must take the database from one consistent state to another consistent state.
ACID Properties: Isolation
It means that the data used during the execution of a transaction cannot be used by a second transaction until the first one is completed.
ACID Properties: Durability
This ensures that once transaction changes are done and committed, they cannot be undone or lost, even in the event of a system failure.
A COMMIT statement is reached
When this statement is reached, all changes are permanently recorded within the database, and the SQL transaction automatically ends.
A ROLLBACK statement is reached
When this statement is reached, all changes are aborted and the database is rolled back to its previous consistent state.
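For illustration, a minimal Python sketch of COMMIT and ROLLBACK using the standard library's sqlite3 module; the `accounts` table and the transfer amounts are made up, and a real DBMS would be accessed through its own client library.

```python
import sqlite3

# Minimal sketch of COMMIT vs. ROLLBACK; the `accounts` table is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")
conn.commit()

try:
    # Both updates form one transaction: either both become permanent
    # with COMMIT, or neither survives (atomicity).
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
    conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")
    conn.commit()        # COMMIT: changes are permanently recorded
except sqlite3.Error:
    conn.rollback()      # ROLLBACK: all changes since the last COMMIT are undone

print(conn.execute("SELECT id, balance FROM accounts").fetchall())
# [(1, 70), (2, 80)]
```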
Transaction Log
It is what the DBMS uses to keep track of all transactions that update the database.
Concurrency Control
It is a DBMS feature that coordinates the simultaneous execution of transactions in a multiprocessing database system while preserving data integrity.
Objective of Concurrency Control
It is to ensure the serializability of transactions in a multiuser database environment.
Lost Update
This problem occurs when two concurrent transactions, T1 and T2, are updating the same data element and one of the updates is lost (overwritten by the other transaction).
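A minimal sketch of the lost update problem, using a plain Python dict as a stand-in for the database; the `prod_qty` item and the quantities are illustrative.

```python
# Two transactions read the same value, then each writes back its own result
# with no concurrency control, so one update overwrites the other.
db = {"prod_qty": 35}

t1_read = db["prod_qty"]          # T1 reads 35
t2_read = db["prod_qty"]          # T2 reads 35

db["prod_qty"] = t1_read + 100    # T1 adds 100 units -> writes 135
db["prod_qty"] = t2_read - 30     # T2 removes 30 units -> writes 5, overwriting T1

print(db["prod_qty"])             # 5, not the correct 105: T1's update is lost
```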
Uncommitted Data
This occurs when two transactions, T1 and T2, are executed concurrently and the first transaction (T1) is rolled back after the second transaction (T2) has already accessed the uncommitted data—thus violating the isolation property of transactions.
Inconsistent Retrieval
This occurs when a transaction accesses data before and after one or more other transactions finish working with that data.
Scheduler
It is a special DBMS process that establishes the order in which the operations are executed within concurrent transactions.
Scheduler’s Job
It is to create a serializable schedule of a transaction’s operations, in which the interleaved execution of the transactions (T1, T2, T3, etc.) yields the same results as if the transactions were executed in serial order (one after another).
Lock
It guarantees exclusive use of a data item to a current transaction.
Pessimistic Locking
The use of locks based on the assumption that conflict between transactions is likely.
Lock Manager
This handles all lock information and is responsible for assigning and policing the locks used by the transactions.
Exclusive Lock
It is issued when a transaction requests permission to update a data item and no locks are held on that data item by any other transaction.
Shared Lock
This lock is issued when a transaction wants to read data and no exclusive lock is held on that data item; it allows other read-only transactions to access the same data.
Mutual Exclusive Rule
It is a condition in which only one transaction at a time can own an exclusive lock on the same object.
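A minimal sketch of how a lock manager might grant shared and exclusive locks under the mutual exclusive rule; the `LockManager` class and its methods are illustrative, not a real DBMS API.

```python
class LockManager:
    def __init__(self):
        # item -> {"mode": "S" (shared) or "X" (exclusive), "holders": txn ids}
        self.locks = {}

    def request(self, txn_id, item, mode):
        """Return True if the lock is granted, False if the transaction must wait."""
        entry = self.locks.get(item)
        if entry is None:
            self.locks[item] = {"mode": mode, "holders": {txn_id}}
            return True
        # Shared locks are compatible only with other shared locks.
        if mode == "S" and entry["mode"] == "S":
            entry["holders"].add(txn_id)
            return True
        # Any conflicting request on a held item must wait (mutual exclusive rule);
        # a transaction re-requesting a lock it already holds is allowed.
        return txn_id in entry["holders"] and entry["mode"] == mode

    def release(self, txn_id, item):
        entry = self.locks.get(item)
        if entry and txn_id in entry["holders"]:
            entry["holders"].discard(txn_id)
            if not entry["holders"]:
                del self.locks[item]

lm = LockManager()
print(lm.request("T1", "X_row", "S"))  # True: shared lock granted
print(lm.request("T2", "X_row", "S"))  # True: shared locks are compatible
print(lm.request("T3", "X_row", "X"))  # False: exclusive lock must wait
```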
Time Stamping
This approach to scheduling concurrent transactions assigns a global, unique time stamp to each transaction.
Uniqueness
This ensures that no equal time stamp values can exist.
Monotonicity
This ensures that time stamp values always increase.
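A minimal sketch of a global time stamp generator with the uniqueness and monotonicity properties, assuming a simple counter protected by a lock; the class name is illustrative.

```python
import itertools
import threading

class TimestampGenerator:
    def __init__(self):
        self._counter = itertools.count(1)   # monotonicity: values only increase
        self._lock = threading.Lock()

    def next_timestamp(self):
        with self._lock:                      # uniqueness: no two equal values
            return next(self._counter)

ts = TimestampGenerator()
t1, t2 = ts.next_timestamp(), ts.next_timestamp()
print(t1, t2)   # 1 2 -- the older transaction holds the smaller time stamp
```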
Wait/Die Scheme
A concurrency control scheme in which an older transaction that requests a lock held by a younger transaction waits for the younger one to complete and release its locks; a younger transaction that requests a lock held by an older one dies (is rolled back) and is rescheduled.
Wound/Wait Scheme
A concurrency control scheme in which an older transaction that requests a lock held by a younger transaction preempts (wounds) the younger one, which is rolled back and rescheduled; a younger transaction that requests a lock held by an older one waits until the older transaction finishes.
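A minimal sketch of the wait/die and wound/wait decisions, assuming the transaction with the smaller time stamp is the older one; the function names are illustrative.

```python
def wait_die(requester_ts, holder_ts):
    """Requester asks for a lock held by another transaction (wait/die)."""
    if requester_ts < holder_ts:
        return "wait"   # older requester waits for the younger holder
    return "die"        # younger requester is rolled back and rescheduled

def wound_wait(requester_ts, holder_ts):
    """Requester asks for a lock held by another transaction (wound/wait)."""
    if requester_ts < holder_ts:
        return "wound"  # older requester preempts (rolls back) the younger holder
    return "wait"       # younger requester waits for the older holder

print(wait_die(1, 5), wait_die(5, 1))      # wait die
print(wound_wait(1, 5), wound_wait(5, 1))  # wound wait
```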
Optimistic Approach
It is based on the assumption that the majority of database operations do not conflict.
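A minimal sketch of an optimistic, version-check style update: read without locks, validate that the item has not been changed by another transaction, then write; the in-memory `db` dict and its fields are illustrative.

```python
db = {"prod_qty": {"value": 35, "version": 7}}

def optimistic_update(item, delta):
    # Read phase: take a snapshot, no locks are acquired.
    snapshot = dict(db[item])
    new_value = snapshot["value"] + delta
    # Validation phase: check that no other transaction changed the item.
    if db[item]["version"] != snapshot["version"]:
        return False    # conflict detected: the transaction must restart
    # Write phase: apply the change and bump the version.
    db[item] = {"value": new_value, "version": snapshot["version"] + 1}
    return True

print(optimistic_update("prod_qty", 100), db["prod_qty"])
# True {'value': 135, 'version': 8}
```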
Database Recovery
This restores a database from a given state (usually inconsistent) to a previously consistent state.
Write-Ahead-Log Protocol
This recovery process protocol ensures that transaction logs are always written before any database data is actually updated.
Redundant Transaction Logs
This recovery process concept keeps several copies of the transaction log to ensure that a physical disk failure will not impair the DBMS’s ability to recover data.
Database Buffers
These are temporary storage areas in primary memory used to speed up disk operations.
Database Checkpoints
These are operations in which the DBMS writes all of its updated buffers in memory (also known as dirty buffers) to disk.
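A minimal sketch tying these recovery concepts together: each update is appended to the log before the in-memory buffer changes (write-ahead logging), and a checkpoint flushes the dirty buffers to disk; the dicts and names are illustrative.

```python
log = []                 # transaction log (would live on stable storage)
disk = {"prod_qty": 35}  # durable copy of the data
buffers = dict(disk)     # in-memory database buffers

def update(txn_id, item, new_value):
    # Write-ahead logging: record the old and new values first ...
    log.append({"txn": txn_id, "item": item,
                "before": buffers[item], "after": new_value})
    # ... only then change the buffered (in-memory) copy.
    buffers[item] = new_value

def checkpoint():
    # Write all dirty buffers to disk so the log and the database are in sync.
    disk.update(buffers)
    log.append({"checkpoint": True})

update("T1", "prod_qty", 135)
checkpoint()
print(disk["prod_qty"], log)
```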