QA Principles Overview
A set of six key principles guiding software quality management: Partition, Visibility, Feedback, Sensitivity, Redundancy, and Restriction
Partition Principle
A strategy of breaking complex tasks into smaller, manageable, and independently solvable parts
Partition Principle Strategy
Decomposing complicated problems into smaller, independent components; Enables more focused and manageable problem-solving approaches
Visibility Principle
The practice of making project progress, quality, and status measurable and transparent
Visibility Metrics
Quantifiable measures used to track and evaluate the progress, quality, and effectiveness of software development and testing processes
Feedback Principle
A process of learning from past experiences and systematically applying lessons to improve software development processes
Feedback Implementation
Systematic approach to documenting and learning from past errors; Creating mechanisms to capture, analyze, and apply lessons learned from previous software development experiences
Sensitivity Principle
An approach to designing systems and tests that make faults more likely to cause observable failures
Sensitivity in Testing
Designing test strategies that maximize the likelihood of revealing potential issues; Creating test conditions that are more likely to expose hidden faults and vulnerabilities
Redundancy Principle
The practice of creating multiple, independent verification methods to ensure consistency and catch potential errors
Redundancy Strategies
Implementing multiple, independent verification techniques; Developing overlapping checks and balances to reduce the risk of undetected errors
Restriction Principle
The technique of imposing constraints that simplify complex problems
Restriction Implementation
Creating specific guidelines and limitations to reduce complexity; Developing frameworks that narrow the scope of potential solutions to make problem-solving more efficient
Basic Concepts of QA
Software quality management is a comprehensive approach to ensuring that software meets specified requirements and user expectations
QA Comprehensive Approach
Holistic method of ensuring software quality throughout the development process; Involves multiple strategies and techniques to meet and exceed software requirements
Software Process Model Evolution
A historical progression of software development methodologies; Moves from Waterfall (sequential, testing late) to V-Model (earlier test integration) to Spiral Model (continuous quality assurance)
Software Development Methodology History
The changing landscape of software development approaches; Tracing the evolution of methods that increasingly integrate quality assurance throughout the development cycle
Validation vs Verification
Two complementary approaches to software quality: Verification asks "Are we building the machine right?" (internal checks), Validation asks "Are we building the right machine?" (user perspective)
Validation and Verification Techniques
Comprehensive strategies for ensuring software quality from different perspectives; Methods to confirm both internal correctness and external user requirements
Mistake
A human error that leads to an incorrect result or action in the software development process
Human Error in Software Development
Unintentional actions or decisions that can negatively impact software development; Understanding the sources and potential mitigations of human-induced errors
Fault/Bug
An inherent defect or imperfection in the source code that can potentially cause incorrect program behavior
Software Defect Characteristics
Detailed examination of code-level imperfections that can lead to program malfunctions; Understanding the nature and potential impact of structural errors in software
Error
The runtime manifestation of a fault, causing the software to deviate from its expected behavior
Runtime Error Dynamics
The process by which code faults translate into observable incorrect behavior; Exploring how internal software states can become compromised during execution
Failure
The external, observable incorrect behavior of a system that fails to meet its specified requirements
Software Failure Manifestation
The external symptoms of internal software errors; How technical issues translate into user-observable problems
Test Case
A specific set of input conditions, execution steps, and expected results designed to verify a particular functionality of a software application
Test Case Design Principles
Strategies for creating comprehensive and effective test scenarios; Methods for developing test cases that thoroughly validate software functionality
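A minimal sketch of the three parts of a test case (input conditions, execution step, expected result), using a hypothetical `absolute_value` function purely for illustration:

```python
def absolute_value(x):
    # function under test (hypothetical example)
    return x if x >= 0 else -x

def test_absolute_value_of_negative():
    # input condition: a negative integer
    result = absolute_value(-5)   # execution step
    assert result == 5            # expected result

test_absolute_value_of_negative()
```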
Test Oracle
A mechanism that helps determine whether a software test has passed or failed
Test Validation Mechanism
Techniques for comparing actual test results with expected outcomes; Methods to objectively determine the success or failure of software tests
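One common oracle pattern is comparison against a trusted reference implementation. A sketch, assuming a hypothetical Newton's-method square root is the code under test and `math.sqrt` serves as the oracle:

```python
import math

def newton_sqrt(x):
    # implementation under test: square root via Newton's method
    guess = x if x > 1 else 1.0
    for _ in range(60):
        guess = 0.5 * (guess + x / guess)
    return guess

def oracle_passes(x, result, tol=1e-9):
    # oracle: judge pass/fail by comparing against the trusted math.sqrt
    return abs(result - math.sqrt(x)) <= tol

assert oracle_passes(2.0, newton_sqrt(2.0))   # test passes
assert not oracle_passes(2.0, 1.5)            # oracle flags a wrong result
```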
Exhaustive Testing Limitation
The impossibility of testing every possible input scenario due to the infinite complexity of software input spaces
Testing Resource Constraints
Understanding the practical limitations of comprehensive software testing; Recognizing the economic and time-based challenges of exhaustive test coverage
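A quick back-of-the-envelope calculation illustrates why exhaustive testing is infeasible (the throughput figure is an optimistic assumption, not a measurement):

```python
# Even a single 32-bit integer parameter has 2**32 possible values.
inputs = 2 ** 32
tests_per_second = 1_000_000  # assumed, optimistic test throughput

hours_one_param = inputs / tests_per_second / 3600
years_two_params = (inputs ** 2) / tests_per_second / (3600 * 24 * 365)
# one parameter takes over an hour; two independent parameters
# would take hundreds of thousands of years
```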
Black Box Testing
A testing methodology that examines software functionality without knowledge of internal code structure
External Functionality Testing
Approach to software quality assurance that evaluates systems based on their observable behavior; Testing methods that do not require internal code knowledge
White Box Testing
A testing approach that analyzes the internal structure, logic, and code paths of a software application
Internal Code Structure Testing
Detailed examination of software's internal logic and implementation; Strategies for testing all potential code paths and decision points
Equivalence Partitioning
A test design technique that divides input data into partitions whose members are expected to be processed the same way, so a single representative value can stand in for each partition
Input Space Division Strategy
Method of categorizing input data to optimize testing efficiency; Techniques for creating representative test cases across different input domains
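A sketch of equivalence partitioning against a hypothetical age classifier: the input space splits into three partitions, and one representative per partition covers the whole class:

```python
def classify_age(age):
    # function under test: three equivalence partitions of the input space
    if age < 0:
        return "invalid"
    if age < 18:
        return "minor"
    return "adult"

# one representative value stands in for each partition
representatives = {-3: "invalid", 10: "minor", 40: "adult"}
for value, expected in representatives.items():
    assert classify_age(value) == expected
```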
Exploratory Testing
A dynamic testing approach where testers simultaneously learn, design, and execute tests
Dynamic Testing Methodology
Flexible approach to software testing that allows real-time adaptation; Testing strategy that combines learning, design, and execution simultaneously
Acceptance Testing Types
A collection of testing methodologies to validate software against user requirements
User Requirement Validation Methods
Techniques for ensuring software meets specific user needs; Different approaches to confirming that software fulfills its intended purpose
Combinatorial Testing
A systematic testing method that examines interactions between multiple input parameters
Multi-Parameter Interaction Testing
Approach to identifying complex software bugs through comprehensive input combination analysis; Methods for uncovering issues that appear only in specific scenario intersections
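The simplest combinatorial strategy is the full cartesian product of parameter values; a sketch with three hypothetical configuration parameters:

```python
from itertools import product

browsers = ["chrome", "firefox"]
systems = ["linux", "windows", "macos"]
locales = ["en", "de"]

# exhaustive combination of three parameters: 2 * 3 * 2 = 12 test cases
cases = list(product(browsers, systems, locales))
assert len(cases) == 12
assert ("firefox", "macos", "de") in cases
```

In practice, pairwise (2-way) selection techniques cut this count further while still covering every interaction between any two parameters.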
Test Driven Development (TDD)
A software development methodology where tests are written before the actual implementation code
Proactive Testing Methodology
Approach to software development that prioritizes test creation before code implementation; Systematic method for ensuring code meets predefined requirements
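A compressed sketch of the TDD red/green/refactor cycle, using a hypothetical `slugify` function:

```python
# Step 1 (red): the test is written first; it fails initially
# because slugify does not exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): just enough implementation to make the test pass.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Step 3 (refactor): clean up while the test keeps passing.
test_slugify()
```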
Behavior Driven Development (BDD)
An extension of TDD that focuses on the behavioral outcomes of software
Behavioral Specification Techniques
Methods for describing software requirements using natural language and behavior-focused scenarios; Approach to defining software functionality through user-centric language
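BDD scenarios follow a Given/When/Then structure; a sketch expressed in plain Python comments (dedicated BDD frameworks exist, but are not required for the shape of the idea; the account operation is hypothetical):

```python
def withdraw(balance, amount):
    # hypothetical account operation, used only for illustration
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_withdrawal_reduces_balance():
    # Given an account with a balance of 100
    balance = 100
    # When the user withdraws 30
    balance = withdraw(balance, 30)
    # Then the balance is 70
    assert balance == 70

test_withdrawal_reduces_balance()
```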
Random Testing
A testing approach where test inputs are drawn uniformly at random rather than by a deliberate strategy; it can cover a wide range of inputs but may miss critical edge cases
Systematic Testing
A methodical testing approach that deliberately selects test inputs based on specific criteria, focusing on representative values, boundary conditions, and regions more likely to contain faults
Random Testing Advantages
Helps avoid designer bias by treating all inputs equally and potentially uncovering unexpected issues through unpredictable input selection
Random Testing Limitations
May miss important edge cases, has low probability of finding rare but critical bugs, and does not strategically target potential problem areas
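A sketch of why rare faults evade uniform random testing: with a seeded fault on exactly one input out of a million, a thousand uniform draws hit it with probability around 0.1% (the faulty function is hypothetical):

```python
import random

def buggy_double(x):
    # seeded fault: wrong answer for exactly one input value
    if x == 123_456:
        return 0
    return 2 * x

random.seed(42)
trials = [random.randint(0, 1_000_000) for _ in range(1000)]
# 1000 uniform draws over ~1e6 values almost certainly miss the fault
found = any(buggy_double(x) != 2 * x for x in trials)
```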
Systematic Testing Advantages
Strategically targets potential fault regions, focuses on boundary values and representative inputs, and provides more structured and deliberate test coverage
Systematic Testing Techniques
Includes equivalence partitioning, boundary value analysis, decision table testing, and state transition testing to methodically explore input spaces
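Of the techniques above, boundary value analysis is the easiest to sketch: probe at and just beyond each boundary of a hypothetical range predicate:

```python
def in_range(x, low=1, high=100):
    # predicate under test: is x within [1, 100]?
    return low <= x <= high

# boundary value analysis: values at and adjacent to each boundary
boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
for value, expected in boundary_cases.items():
    assert in_range(value) is expected
```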
Testing Input Selection Strategies
Different methodologies for choosing test inputs; Comparing approaches that range from uniform random selection to strategically targeted testing