Chapters 10-14
Testing Activities
Activities that should be incorporated from the beginning of software development to produce reliable software.
SDLC (Software Development Life Cycle)
A framework that outlines the stages of software development, including Waterfall, V-model, Spiral, Prototyping, XP, and Scrum.
Test Action Goal
The specific outcomes expected from testing, such as evaluating requirements or validating designs.
Test Design Goal
The aim of preparing for testing by developing test criteria and plans based on earlier phases of development.
Integration Testing
Testing that focuses on finding errors from unexpected interactions among components during integration.
User Involvement
Engaging users in system and usability testing to ensure the software meets their needs.
Regression Testing
Testing performed to confirm that functionality remains unaffected after changes in the software.
Test Influence Goal
The objective of using test artifacts to influence future development stages.
Usability Testing
Evaluation of the user interface's effectiveness and user satisfaction, typically involving user participation.
Quality Management
The practice of maintaining high quality in the software development process by integrating testing activities early.
Test actions
Testing the product or artifact created at that stage
Test design
Using the development artifacts of that stage or testing artifacts from a previous stage to prepare to test the final software
Test influence
Using development or test artifacts to influence future development stages by providing feedback and insights on quality and performance.
Test Influence Goal
Test scenarios and plans built to show the system meets requirements and to help identify potential issues or improvements.
System Testing
Compares the software system to its original objectives.
Acceptance Testing
Ensures that the completed system satisfies the customer's needs; it should be done with customer involvement.
Usability Testing
Evaluates the user interface of the software.
IEEE 829-2008 standard
A document describing the scope, approach, resources, and schedule of intended test activities.
Master Test Plan (MTP)
Provides an overall test planning and test management document for multiple levels of test.
Level Test Plan (LTP)
Describes testing at a particular level (defined in Ch.1)
LTP defines
the items being tested • the features to be tested • the testing tasks to be performed • who is responsible for each task • any risks associated with the testing
Level Test Plan Template
• Document Identifier • Scope • References • Level in the overall sequence • Test classes and overall test conditions
Level Test Plan example Template (Details)
• Test items and their identifiers • Test traceability • Features to be tested • Features not to be tested • Approach • Items pass/fail criteria • Suspension criteria and resumption requirements • Test deliverables
Level Test Plan example Template (Management)
• Planned activities and tasks; test progression • Environment and infrastructure • Responsibilities and authority • Interfaces among the parties involved • Training • Schedule, estimates and costs • Risks and contingencies
Level Test Plan example Template (General)
• Quality assurance procedures • Metrics • Test coverage • Glossary • Document change procedures and history
Test Implementation (Integration)
Moving from abstract test cases to real execution on code, ensuring compiled code, no collateral damage, repeatable processes, and timely completion.
Integration Testing
Testing for incompatibilities and interfaces between software components, often with incomplete systems.
Integration Order
The sequence in which components are integrated, influenced by dependencies like method use, inheritance, or aggregation.
CITO (Class Integration Test Order)
The process of determining the order in which classes should be integrated and tested, simplified if there are no dependency cycles (using dependency graphs and topological sorting).
Dependency Graph
A graphical representation of component dependencies used to assist in determining integration order.
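A minimal Java sketch of how a dependency graph can drive a CITO-style integration order via topological sorting, assuming no dependency cycles. The class names and edges are invented for illustration, not from the chapters; an edge A → B means "A depends on B", so B is integrated and tested first.

```java
import java.util.*;

public class IntegrationOrder {
    // Returns one valid integration order: dependencies appear before the classes that use them.
    public static List<String> order(Map<String, List<String>> deps) {
        List<String> result = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        for (String c : deps.keySet()) visit(c, deps, visited, result);
        return result;
    }

    private static void visit(String c, Map<String, List<String>> deps,
                              Set<String> visited, List<String> result) {
        if (!visited.add(c)) return;                  // already placed in the order
        for (String d : deps.getOrDefault(c, List.of()))
            visit(d, deps, visited, result);          // integrate dependencies first
        result.add(c);
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = Map.of(
            "OrderService",   List.of("PaymentGateway", "Inventory"),
            "Inventory",      List.of("Database"),
            "PaymentGateway", List.of(),
            "Database",       List.of());
        // Prints a valid order such as [Database, Inventory, PaymentGateway, OrderService]
        System.out.println(order(deps));
    }
}
```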
Test Double
A software component that partially implements functionality and replaces a real component during testing to simulate its behavior.
Scaffolding
Extra software components used during testing, including test drivers and test doubles.
Test Driver
A software component or tool that controls or calls another software component during testing (e.g., JUnit framework).
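A minimal sketch of JUnit acting as a test driver, assuming JUnit 5; the Calculator class is a made-up component under test. The framework calls the test method, which feeds inputs to the component and checks the captured output.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical component under test.
class Calculator {
    int add(int a, int b) { return a + b; }
}

class CalculatorDriverTest {
    @Test
    void addsTwoNumbers() {
        Calculator calc = new Calculator();
        int result = calc.add(2, 3);     // driver supplies the inputs
        assertEquals(5, result);         // driver checks the captured output
    }
}
```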
Test Stub
A skeletal implementation of a software component that replaces a called component during testing, typically returning constant or simple values.
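A minimal stub sketch; the PriceService interface and names are hypothetical. The stub replaces a real dependency and just returns a constant value, so a unit test never has to reach the real pricing back end.

```java
// Hypothetical dependency of the class under test.
interface PriceService {
    double priceOf(String itemId);
}

// Skeletal replacement used only during testing.
class StubPriceService implements PriceService {
    @Override
    public double priceOf(String itemId) {
        return 9.99;    // constant, simple value instead of a real lookup
    }
}
```

A test would construct the class under test with StubPriceService in place of the production implementation.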
Mock
A special type of test double that verifies whether the class under test made the correct calls, focusing on interaction verification rather than output correctness.
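A hand-rolled mock sketch (all names are hypothetical). The mock records the calls it receives so the test verifies the interaction itself, not a computed output value.

```java
import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

interface EmailSender {
    void send(String to, String body);
}

// Hypothetical class under test.
class WelcomeService {
    private final EmailSender sender;
    WelcomeService(EmailSender sender) { this.sender = sender; }
    void register(String email) { sender.send(email, "Welcome!"); }
}

// Mock: records interactions so the test can check them afterwards.
class MockEmailSender implements EmailSender {
    final List<String> recipients = new ArrayList<>();
    @Override public void send(String to, String body) {
        recipients.add(to);                          // record the interaction
    }
}

class WelcomeServiceTest {
    @Test
    void sendsExactlyOneWelcomeEmail() {
        MockEmailSender mock = new MockEmailSender();
        new WelcomeService(mock).register("ada@example.com");
        assertEquals(List.of("ada@example.com"), mock.recipients);  // verify the correct call was made
    }
}
```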
Interaction-Based Testing
Testing that defines success by whether objects communicate correctly with each other rather than verifying the final outcome.
Seam
A variable or point in code that allows a test to change program behavior without altering the actual component, enabling easier testing with test doubles.
Enabling Point
The place in a component where a seam can be manipulated to change behavior for testing purposes.
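A minimal sketch of a seam and its enabling point, with all names invented: the clock dependency is the seam, and the constructor is the enabling point where a test swaps in a test double without editing ReportGenerator.

```java
import java.time.LocalDate;
import java.util.function.Supplier;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ReportGenerator {
    private final Supplier<LocalDate> clock;        // the seam: behavior a test can vary

    ReportGenerator(Supplier<LocalDate> clock) {    // the enabling point: where the seam is set
        this.clock = clock;
    }

    String header() { return "Report for " + clock.get(); }
}

class ReportGeneratorTest {
    @Test
    void headerIsDeterministicUnderTest() {
        // A test double supplied through the enabling point pins the date.
        ReportGenerator gen = new ReportGenerator(() -> LocalDate.of(2024, 1, 1));
        assertEquals("Report for 2024-01-01", gen.header());
    }
}
```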
Regression Testing
The process of re-testing software after modifications to ensure that existing functionality remains unaffected.
Reasons for Software Change
Corrective, Perfective, Adaptive, Preventive, Adding Features
Corrective
Fixing bugs
Perfective
Improving Performance or maintainability
Adaptive
Adjusting to new environments
Preventive
Reducing future risks and problems
Adding Features
Expanding functionality.
Overview of Regression Testing
It forms the majority of testing efforts in commercial software and is essential because even small changes can cause distant problems.
Automation in Regression Testing
Regression tests should be automated using tools like JUnit for Java, capture/replay tools for GUIs, and scripts that handle test execution and result comparison.
Version Control for Tests
Tests and scripts can be version-controlled to track changes and maintain consistency.
Goldilocks Problem (in Regression Testing)
Choosing the right number of tests: too many tests slow down development; too few tests miss important issues. Regression tests should run daily, typically overnight.
Test Doubles for Speed
Using test doubles can make regression testing faster by avoiding slow or unpredictable real components.
Importance of Regression Testing
Customers are more likely to tolerate new bugs than to encounter unresolved old ones, making regression testing critical.
Handling Regression Test Failures
When a regression test fails, determine if the failure is due to software faults or issues with the test itself.
Optimizing Regression Tests
Carefully add or remove tests to maintain efficient execution times without sacrificing coverage.
Principles of System Testing: Regression Testing
Regression testing identifies new faults and ensures that new versions still perform previous functions correctly.
Regression Testing Steps
1. Insert the new code. 2. Test functions known to be affected by the changes. 3. Perform actual regression tests on essential functions. 4. Continue broader functional testing.
Test Oracle
A mechanism for determining whether a test has passed or failed by encoding the expected results.
Overview of Test Oracles
In automated testing, expected results must be explicitly encoded (e.g., JUnit assertions), unlike manual testing where a human tester judges correctness at runtime.
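A minimal sketch of an automated test oracle using a JUnit assertion: the expected result is encoded in the test itself, so no human has to judge correctness at run time. The call to String.toUpperCase is just a stand-in for the software under test.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class UpperCaseOracleTest {
    @Test
    void encodesExpectedResult() {
        String actual = "hello".toUpperCase();   // execute the behavior under test
        assertEquals("HELLO", actual);           // the oracle: expected value encoded up front
    }
}
```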
Too Much vs. Too Little Checking
Too much checking increases unnecessary costs. Too little checking risks missing failures.
What Should Be Checked (Manual Testing)
Human testers observe behavior and judge correctness based on requirements, sometimes aided by adding print statements.
Automation Challenge
Automated tests must know expected behavior ahead of time and encode it, making testing more precise but more rigid.
Outputs that may be checked
Screen outputs, files, databases, messages and signals, return values, parameters, and non-local variables
Test Oracle Strategy
A rule or set of rules that specify which parts of a program’s output state should be checked during testing.
Precision (Test Oracle Strategy)
How much of the output state is checked: Higher precision checks more output details.
Frequency (Test Oracle Strategy)
When and how often output states are checked. Low frequency: check only after execution ends. High frequency: check outputs early and often during execution.
Null Oracle Strategy
A minimal, low-cost strategy where only crashes and exceptions are checked, but it is not very effective for revealing software faults.
Guideline 1: Always Check Some Output
Checking at least some output is necessary to meaningfully validate tests, beyond just detecting crashes.
Guideline 2: Check the Right Output
Good assertions check outputs directly affected by the test. Poor assertions check unrelated outputs.
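A sketch contrasting a good and a poor assertion; the Cart class is made up for illustration. The test exercises addItem, so the output directly affected is the total, and an assertion on something addItem does not touch reveals nothing here.

```java
import java.util.ArrayList;
import java.util.List;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical class under test.
class Cart {
    private final List<Double> prices = new ArrayList<>();
    void addItem(String name, double price) { prices.add(price); }
    double total() { return prices.stream().mapToDouble(Double::doubleValue).sum(); }
}

class CartTest {
    @Test
    void addingAnItemUpdatesTheTotal() {
        Cart cart = new Cart();
        cart.addItem("book", 12.50);
        assertEquals(12.50, cart.total(), 0.001);  // good: output directly affected by the test
        // A poor assertion would check output this test does not affect,
        // e.g. some unrelated configuration field of the system.
    }
}
```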
Unit Test Level Checking
Checking method return values and parameters is usually sufficient.
System Test Level Checking
Checking directly visible outputs (e.g., screen output) is often enough, though outputs to files and databases can also be important but harder to check.
Guideline 3: Low Precision is Okay
It is acceptable not to check a lot of outputs; checking the right outputs is more important.
Guideline 4: Low Frequency is Okay
It is sufficient to check the output state once at the end of execution for most failures.
Driver
A software component or tool that controls the execution of a software component during testing. It manages calling the software under test, feeding it inputs, and capturing its outputs.