Software Verification and QA

Chapters 10-14

72 Terms

1. Testing Activities

Activities that should be incorporated from the beginning of software development to produce reliable software.

2. SDLC (Software Development Life Cycle)

A framework that outlines the stages of software development, including Waterfall, V-model, Spiral, Prototyping, XP, and Scrum.

3. Test Action Goal

The specific outcomes expected from testing, such as evaluating requirements or validating designs.

4. Test Design Goal

The aim of preparing for testing by developing test criteria and plans based on earlier phases of development.

5. Integration Testing

Testing that focuses on finding errors from unexpected interactions among components during integration.

6. User Involvement

Engaging users in system and usability testing to ensure the software meets their needs.

7. Regression Testing

Testing performed to confirm that functionality remains unaffected after changes in the software.

8. Test Influence Goal

The objective of using test artifacts to influence future development stages.

9. Usability Testing

Evaluation of the user interface's effectiveness and user satisfaction, typically involving user participation.

10. Quality Management

The practice of maintaining high quality in the software development process by integrating testing activities early.

11. Test actions

Testing the product or artifact created at that stage

12. Test design

Using the development artifacts of that stage or testing artifacts from a previous stage to prepare to test the final software

13. Test influence

Using development or test artifacts to influence future development stages by providing feedback and insights on quality and performance.

14. Test Influence Goal

Test scenarios and plans built to show that the system meets requirements and to help identify potential issues or improvements.

15. System Testing

Compares the software system to its original objectives.

16. Acceptance Testing

Ensures that the completed system satisfies the customer's needs; it should be done with customer involvement.

17. Usability Testing

Evaluates the user interface of the software.

18. IEEE 829-2008 standard

The IEEE standard for software and system test documentation; it defines a test plan as a document describing the scope, approach, resources, and schedule of intended test activities.

19. Master Test Plan (MTP)

Provides an overall test planning and test management document for multiple levels of test.

20. Level Test Plan (LTP)

Describes testing at a particular level (as defined in Ch. 1).

21. LTP defines

• the items being tested
• the features to be tested
• the testing tasks to be performed
• who is responsible for each task
• any risks associated with the testing

22. Level Test Plan Template

• Document Identifier
• Scope
• References
• Level in the overall sequence
• Test classes and overall test conditions

23. Level Test Plan Example Template (Details)

• Test items and their identifiers
• Test traceability
• Features to be tested
• Features not to be tested
• Approach
• Item pass/fail criteria
• Suspension criteria and resumption requirements
• Test deliverables

24. Level Test Plan Example Template (Management)

• Planned activities and tasks; test progression
• Environment and infrastructure
• Responsibilities and authority
• Interfaces among the parties involved
• Training
• Schedule, estimates, and costs
• Risks and contingencies

25. Level Test Plan Example Template (General)

• Quality assurance procedures
• Metrics
• Test coverage
• Glossary
• Document change procedures and history

26. Test Implementation (Integration)

Moving from abstract test cases to real execution on code, ensuring compiled code, no collateral damage, repeatable processes, and timely completion.

27. Integration Testing

Testing for incompatibilities and interfaces between software components, often with incomplete systems.

28. Integration Order

The sequence in which components are integrated, influenced by dependencies like method use, inheritance, or aggregation.

29. CITO (Class Integration Test Order)

The process of determining the order in which classes should be integrated and tested, simplified if there are no dependency cycles (using dependency graphs and topological sorting).
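
For illustration, here is a minimal sketch of deriving a CITO-style integration order by topologically sorting a dependency graph (Kahn's algorithm, in Java). The class names and edges are hypothetical, and the code assumes there are no dependency cycles:

import java.util.*;

public class IntegrationOrder {

    // Kahn's algorithm: repeatedly pick a class whose dependencies have all
    // been integrated already; assumes the graph is acyclic.
    static List<String> topologicalOrder(Map<String, List<String>> dependsOn) {
        Map<String, Integer> remaining = new HashMap<>();       // unmet dependency counts
        Map<String, List<String>> dependents = new HashMap<>(); // reverse edges
        for (Map.Entry<String, List<String>> e : dependsOn.entrySet()) {
            remaining.merge(e.getKey(), e.getValue().size(), Integer::sum);
            for (String dep : e.getValue()) {
                dependents.computeIfAbsent(dep, k -> new ArrayList<>()).add(e.getKey());
                remaining.putIfAbsent(dep, 0);
            }
        }
        Deque<String> ready = new ArrayDeque<>();
        remaining.forEach((cls, n) -> { if (n == 0) ready.add(cls); });
        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String cls = ready.remove();
            order.add(cls);
            for (String d : dependents.getOrDefault(cls, List.of())) {
                if (remaining.merge(d, -1, Integer::sum) == 0) ready.add(d);
            }
        }
        return order; // leaf classes first, so they can be tested without stubs
    }

    public static void main(String[] args) {
        // Hypothetical dependencies: each class maps to the classes it uses.
        Map<String, List<String>> deps = Map.of(
                "Controller", List.of("Service"),
                "Service", List.of("Repository"),
                "Repository", List.of());
        System.out.println(topologicalOrder(deps)); // [Repository, Service, Controller]
    }
}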

30. Dependency Graph

A graphical representation of component dependencies used to assist in determining integration order.

31. Test Double

A software component that partially implements functionality and replaces a real component during testing to simulate its behavior.

32. Scaffolding

Extra software components used during testing, including test drivers and test doubles.

33. Test Driver

A software component or tool that controls or calls another software component during testing (e.g., JUnit framework).
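
For example, a JUnit 4 test class acts as a driver: it constructs the component under test, feeds it inputs, and checks the outputs. Calculator and its add method are hypothetical names, not from the cards:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorDriverTest {
    @Test
    public void addReturnsSum() {
        Calculator calc = new Calculator(); // component under test
        int result = calc.add(2, 3);        // driver supplies the inputs
        assertEquals(5, result);            // driver captures and checks the output
    }
}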

34. Test Stub

A skeletal implementation of a software component that replaces a called component during testing, typically returning constant or simple values.
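
A hand-written stub might look like this sketch; TaxService, PriceCalculator, and the 0.05 rate are illustrative names and values. The stub stands in for a real implementation (which might call a remote service) and returns a constant:

interface TaxService {
    double rateFor(String region);
}

// Skeletal replacement: always returns the same simple value.
class TaxServiceStub implements TaxService {
    @Override
    public double rateFor(String region) {
        return 0.05;
    }
}

class PriceCalculator {
    private final TaxService taxService;
    PriceCalculator(TaxService taxService) { this.taxService = taxService; }
    double total(double net, String region) {
        return net * (1 + taxService.rateFor(region));
    }
}

A test constructs PriceCalculator with the stub, so PriceCalculator can be tested before (or without) the real TaxService.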

35. Mock

A special type of test double that verifies whether the class under test made the correct calls, focusing on interaction verification rather than output correctness.

36. Interaction-Based Testing

Testing that defines success by whether objects communicate correctly with each other rather than verifying the final outcome.
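
A sketch combining the two cards above: a hand-rolled mock records the calls made to it, and the test passes by verifying the interaction rather than any final output. All names are illustrative; libraries such as Mockito automate this pattern:

import java.util.ArrayList;
import java.util.List;

interface AuditLog {
    void record(String event);
}

// Mock: remembers how it was called so the test can verify the interaction.
class MockAuditLog implements AuditLog {
    final List<String> recorded = new ArrayList<>();
    @Override
    public void record(String event) { recorded.add(event); }
}

class AccountService {
    private final AuditLog log;
    AccountService(AuditLog log) { this.log = log; }
    void close(String accountId) {
        // ... domain logic elided ...
        log.record("closed:" + accountId);
    }
}

// In a test:
//   MockAuditLog mock = new MockAuditLog();
//   new AccountService(mock).close("42");
//   assertEquals(List.of("closed:42"), mock.recorded); // verifies the call, not an output value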

37. Seam

A variable or point in code that allows a test to change program behavior without altering the actual component, enabling easier testing with test doubles.

38. Enabling Point

The place in a component where a seam can be manipulated to change behavior for testing purposes.
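
A sketch of a seam and its enabling point (all names hypothetical): the clock dependency is the seam, and the constructor parameter is the enabling point where a test swaps in a double without editing ExpiryChecker:

import java.time.Instant;
import java.util.function.Supplier;

class ExpiryChecker {
    private final Supplier<Instant> clock; // the seam: behavior that tests can vary

    // Enabling point: production passes Instant::now, tests pass a fixed clock.
    ExpiryChecker(Supplier<Instant> clock) { this.clock = clock; }

    boolean isExpired(Instant deadline) {
        return clock.get().isAfter(deadline);
    }
}

// Production: new ExpiryChecker(Instant::now)
// Test:       new ExpiryChecker(() -> Instant.parse("2024-01-01T00:00:00Z"))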

39. Regression Testing

The process of re-testing software after modifications to ensure that existing functionality remains unaffected.

40. Reasons for Software Change

Corrective, Perfective, Adaptive, Preventive, Adding Features

41. Corrective

Fixing bugs

42. Perfective

Improving Performance or maintainability

43. Adaptive

Adjusting to new environments

44. Preventive

Reducing future risks and problems

45. Adding Features

Expanding functionality.

46. Overview of Regression Testing

It forms the majority of testing efforts in commercial software and is essential because even small changes can cause distant problems.

47. Automation in Regression Testing

Regression tests should be automated using tools like JUnit for Java, capture/replay tools for GUIs, and scripts that handle test execution and result comparison.
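
A minimal JUnit 4 sketch of the idea: a suite bundles existing tests so a nightly script can run them all in one command and compare results. The listed test classes are hypothetical:

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({
        LoginTest.class,
        CheckoutTest.class,
        ReportingTest.class
})
public class NightlyRegressionSuite {
    // Intentionally empty: the annotations tell JUnit which tests to run.
}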

48. Version Control for Tests

Tests and scripts can be version-controlled to track changes and maintain consistency.

49. Goldilocks Problem (in Regression Testing)

Choosing the right number of tests: too many tests slow down development; too few tests miss important issues. Regression tests should run daily, typically overnight.

50. Test Doubles for Speed

Using test doubles can make regression testing faster by avoiding slow or unpredictable real components.

51. Importance of Regression Testing

Customers are more likely to tolerate bugs in new functionality than failures in features that previously worked, making regression testing critical.

52. Handling Regression Test Failures

When a regression test fails, determine if the failure is due to software faults or issues with the test itself.

53. Optimizing Regression Tests

Carefully add or remove tests to maintain efficient execution times without sacrificing coverage.

54. Principles of System Testing: Regression Testing

Regression testing identifies new faults and ensures that new versions still perform previous functions correctly.

55. Regression Testing Steps

1. Insert the new code.
2. Test functions known to be affected by the changes.
3. Perform actual regression tests on essential functions.
4. Continue broader functional testing.

56. Test Oracle

A mechanism for determining whether a test has passed or failed by encoding the expected results.

57. Overview of Test Oracles

In automated testing, expected results must be explicitly encoded (e.g., JUnit assertions), unlike manual testing where a human tester judges correctness at runtime.
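
For example, in a JUnit test the assertion is the oracle: the expected result is encoded in the test itself, so no human judges correctness at runtime. ShoppingCart and its methods are hypothetical:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ShoppingCartTest {
    @Test
    public void totalItemsAfterTwoAdds() {
        ShoppingCart cart = new ShoppingCart();
        cart.add("apple", 3);
        cart.add("pear", 4);
        assertEquals(7, cart.itemCount()); // the encoded expected result
    }
}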

58. Too Much vs. Too Little Checking

Too much checking increases unnecessary costs. Too little checking risks missing failures.

59. What Should Be Checked (Manual Testing)

Human testers observe behavior and judge correctness based on requirements, sometimes aided by adding print statements.

60. Automation Challenge

Automated tests must know expected behavior ahead of time and encode it, making testing more precise but more rigid.

61. Outputs that may be checked

Screen outputs, files, databases, messages and signals, return values, parameters, and non-local variables.

62. Test Oracle Strategy

A rule or set of rules that specify which parts of a program’s output state should be checked during testing.

63. Precision (Test Oracle Strategy)

How much of the output state is checked: Higher precision checks more output details.

64. Frequency (Test Oracle Strategy)

When and how often output states are checked. Low frequency: check only after execution ends. High frequency: check outputs early and often during execution.
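
A sketch of the two dials using a hypothetical Sorter: the first test is low precision (one property, checked once at the end), the second is higher precision (the full output state). High frequency would additionally check state during execution, e.g. inside a loop:

import org.junit.Test;
import static org.junit.Assert.assertArrayEquals;
import static org.junit.Assert.assertEquals;

public class OracleStrategyTest {
    @Test
    public void lowPrecision() {
        int[] out = Sorter.sort(new int[] {3, 1, 2});
        assertEquals(3, out.length); // checks only one property of the output
    }

    @Test
    public void highPrecision() {
        int[] out = Sorter.sort(new int[] {3, 1, 2});
        assertArrayEquals(new int[] {1, 2, 3}, out); // checks the full output state
    }
}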

65. Null Oracle Strategy

A minimal, low-cost strategy where only crashes and exceptions are checked, but it is not very effective for revealing software faults.

66. Guideline 1: Always Check Some Output

Checking at least some output is necessary to meaningfully validate tests, beyond just detecting crashes.

67. Guideline 2: Check the Right Output

Good assertions check outputs directly affected by the test. Poor assertions check unrelated outputs.
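
For instance (hypothetical Account class): after a deposit, a good assertion checks the balance the deposit affected; a poor one checks an output the test never touched:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DepositTest {
    @Test
    public void depositIncreasesBalance() {
        Account acct = new Account("alice", 100);
        acct.deposit(50);
        assertEquals(150, acct.getBalance());      // good: directly affected by deposit
        // assertEquals("alice", acct.getOwner()); // poor: unrelated to the change under test
    }
}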

68. Unit Test Level Checking

Checking method return values and parameters is usually sufficient.

69. System Test Level Checking

Checking directly visible outputs (e.g., screen output) is often enough, though outputs to files and databases can also be important but harder to check.

70. Guideline 3: Low Precision is Okay

It is acceptable not to check a lot of outputs; checking the right outputs is more important.

71. Guideline 4: Low Frequency is Okay

It is sufficient to check the output state once at the end of execution for most failures.

72. Driver

A software component or tool that controls the execution of a software component during testing. It manages calling the software under test, feeding it inputs, and capturing its outputs.