ISTQB Foundation Syllabus 3.1

119 Terms

1
Testing
The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
2
Verification
Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
3
Validation
Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
4
Test objective
A reason or purpose for designing and executing a test.
5
Test object
The component or system to be tested.
6
Debugging
The process of finding, analyzing and removing the causes of failures in software.
7
Quality assurance
Part of quality management focused on providing confidence that quality requirements will be fulfilled.
8
Quality
The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.
9
Error (mistake)
A human action that produces an incorrect result.
10
Defect (bug, fault)
An imperfection or deficiency in a work product where it does not meet its requirements or specifications.
11
Failure
An event in which a component or system does not perform a required function within specified limits.
12
Root cause
A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed.
13
Testing shows the presence of defects, not their absence
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, testing is not a proof of correctness.
14
Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Rather than attempting to test exhaustively, risk analysis, test techniques and priorities should be used to focus test efforts.
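The arithmetic behind this principle is easy to make concrete. A rough sketch, using assumed field sizes for a hypothetical input form (the numbers below are illustrative assumptions, not taken from the syllabus):

```python
# Illustrative arithmetic; the field sizes are assumptions, not taken
# from the syllabus.
int_field_values = 2 ** 32        # one 32-bit integer field
string_field_values = 128 ** 10   # one 10-character ASCII string field

# A hypothetical form with three integer fields plus the string field:
total = int_field_values ** 3 * string_field_values
print(f"{float(total):.2e} input combinations")       # ~9.35e+49

# Even at an optimistic one million tests per second:
years = total / 1_000_000 / (60 * 60 * 24 * 365)
print(f"~{float(years):.1e} years to run them all")   # ~3.0e+36 years
```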
15
Early testing saves time and money
To find defects early, both static and dynamic test activities should be started as early as possible in the software development life cycle. Early testing is sometimes referred to as 'shift left'. Testing early in the software development life cycle helps reduce or eliminate costly changes (see Chapter 3, Section 3.1).
16
Defects cluster together
A small number of modules usually contains most of the defects discovered during pre-release testing, or they are responsible for most of the operational failures. Predicted defect clusters, and the actual observed defect clusters in test or operation, are an important input into a risk analysis used to focus the test effort (as mentioned in Principle 2).
17
Beware of the pesticide paradox
If the same tests are repeated over and over again, eventually these tests no longer find any new defects. To detect new defects, existing tests and test data are changed and new tests need to be written. (Tests are no longer effective at finding defects, just as pesticides are no longer effective at killing insects after a while.) In some cases, such as automated regression testing, the pesticide paradox has a beneficial outcome, which is the relatively low number of regression defects.
18
Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce mobile app. As another example, testing in an Agile project is done differently to testing in a sequential life cycle project (see Chapter 2, Section 2.1).
19
Absence-of-errors is a fallacy
Some organizations expect that testers can run all possible tests and find all possible defects, but Principles 2 and 1, respectively, tell us that this is impossible. Further, it is a fallacy to expect that just finding and fixing a large number of defects will ensure the success of a system. For example, thoroughly testing all specified requirements and fixing all defects found could still produce a system that is difficult to use, that does not fulfil the users' needs and expectations or that is inferior compared to other competing systems.
20
Coverage
The degree to which specified coverage items have been determined to have been exercised by a test suite, expressed as a percentage.
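A quick illustration of the percentage calculation, with hypothetical numbers:

```python
# Hypothetical numbers: the suite exercised 45 of 60 identified
# coverage items (e.g. statements, branches or requirements).
items_exercised = 45
items_total = 60

coverage = items_exercised / items_total * 100
print(f"coverage = {coverage:.1f}%")  # coverage = 75.0%
```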
21
Test basis
The body of knowledge used as the basis for test analysis and design.
22
Test planning
The activity of establishing or updating a test plan.
23
Test plan
Documentation describing the test objectives to be achieved and the means and the schedule for achieving them, organized to coordinate testing activities. (Note that we have included the definition of test plan here, even though it is not listed in the Syllabus as a term that you need to know for this chapter; otherwise the definition of test planning is not very informative.)
24
Test monitoring
A test management activity that involves checking the status of testing activities, identifying any variances from the planned or expected status and reporting status to stakeholders.
25
Test control
A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned.
26
Test design
The activity of deriving and specifying test cases from test conditions.
27
Test case
A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions.
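The elements of this definition map directly onto an automated test. A minimal sketch in Python, where the Account class and its methods are hypothetical names invented for this illustration:

```python
# Minimal sketch only; Account and its methods are hypothetical names
# invented for this illustration.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount


def test_deposit_increases_balance():
    account = Account(balance=100)  # precondition: an account holding 100
    account.deposit(50)             # action, with input 50
    assert account.balance == 150   # expected result
    account.deposit(1)              # postcondition: account is still usable
    assert account.balance == 151


test_deposit_increases_balance()  # runs directly; pytest would also collect it
```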
28
Test data
Data created or selected to satisfy the execution preconditions and inputs to execute one or more test cases.
29
Test implementation
The activity that prepares the testware needed for test execution based on test analysis and design. (Note that the 'sequence of test cases in execution order, and any associated actions' wording often attached to this term is actually the ISTQB definition of a test procedure.)
30
Test suite (test case suite, test set)
A set of test cases or test procedures to be executed in a specific test cycle.
31
Test execution schedule
A schedule for the execution of test suites within a test cycle.
32
Test execution
The process of running a test on the component or system under test, producing actual result(s).
33
Testware
Work products produced during the test process for use in planning, designing, executing, evaluating and reporting on testing.
34
Test completion
The activity that makes test assets available for later use, leaves test environments in a satisfactory condition and communicates the results of testing to relevant stakeholders.
35
Test oracle (oracle)
A source to determine expected results to compare with the actual result of the system under test.
36
Traceability
The degree to which a relationship can be established between two or more work products.
37
Sequential development model
A type of development life cycle model in which a complete system is developed in a linear way of several discrete and successive phases with no overlap between them.
38
Test level (test stage)
A specific instantiation of a test process.
39
Commercial off-the-shelf (COTS) (off-the-shelf software)
A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.
40
Test objective
A reason or purpose for designing and executing a test.
41
Test basis
The body of knowledge used as the basis for test analysis and design.
42
Test case
A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions.
43
Test object
The component or system to be tested. See also: test item.
44
Test environment (test bed, test rig)
An environment containing hardware, instrumentation, simulators, software tools and other support elements needed to conduct a test.
45
Component testing (module testing, unit testing)
The testing of individual hardware or software components.
46
Integration testing
Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.
47
Component integration testing (link testing)
Testing performed to expose defects in the interfaces and interactions between integrated components.
48
System integration testing
Testing the combination and interaction of systems.
49
System testing
Testing an integrated system to verify that it meets specified requirements. (Note that the ISTQB definition implies that system testing is only about verification of specified requirements. In practice, system testing is often also about validation that the system is suitable for its intended users, as well as verifying against any type of requirement.)
50
Acceptance testing
Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.
51
User acceptance testing
Acceptance testing conducted in a real or simulated operational environment by intended users focusing on their needs, requirements and business processes.
52
Operational acceptance testing (production acceptance testing)
Operational testing in the acceptance test phase, typically performed in a (simulated) operational environment by operations and/or systems administration staff focusing on operational aspects, for example recoverability, resource-behaviour, installability and technical compliance.
53
Contractual acceptance testing
Acceptance testing conducted to verify whether a system satisfies its contractual requirements.
54
Regulatory acceptance testing
Acceptance testing conducted to verify whether a system conforms to relevant laws, policies and regulations.
55
Alpha testing
Simulated or actual operational testing conducted in the developer's test environment, by roles outside the development organization.
56
Beta testing (field testing)
Simulated or actual operational testing conducted at an external site, by roles outside the development organization.
57
Test type
A group of test activities based on specific test objectives aimed at specific characteristics of a component or system.
58
Functional testing
Testing conducted to evaluate the compliance of a component or system with functional requirements.
59
Non-functional testing
Testing conducted to evaluate the compliance of a component or system with non-functional requirements.
60
White-box testing (clear-box testing, code-based testing, glass-box testing, logic-coverage testing, logic-driven testing, structural testing, structure-based testing)
Testing based on an analysis of the internal structure of the component or system.
61
Confirmation testing (re-testing)
Dynamic testing conducted after fixing defects with the objective to confirm that failures caused by those defects do not occur anymore.
62
Regression testing
Testing of a previously tested component or system following modification to ensure that defects have not been introduced or have been uncovered in unchanged areas of the software as a result of the changes made.
63
Maintenance testing
Testing the changes to an operational system or the impact of a changed environment to an operational system.
64
Impact analysis
The identification of all work products affected by a change, including an estimate of the resources needed to accomplish the change.
65
Static analysis
The process of evaluating a component or system without executing it, based on its form, structure, content or documentation.
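As an illustration, a static analysis tool such as pylint or flake8 can report the defect in the (hypothetical) snippet below purely from the source text, without ever running it:

```python
# A static analysis tool (pylint, flake8/pyflakes) reports the undefined
# name below from the source text alone; no execution is needed.
def average(values):
    total = 0
    for v in values:
        total += v
    return total / count  # 'count' is undefined: flagged statically (F821/E0602)
```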
66
Static testing
Testing a work product without code being executed.
67
Dynamic testing
Testing that involves the execution of the software of a component or system.
68
Review
A type of static testing during which a work product or process is evaluated by one or more individuals to detect issues and to provide improvements.
69
Informal review
A type of review without a formal (documented) procedure.
70
Formal review
A form of review that follows a defined process with a formally documented output.
71
Walkthrough (Structured walkthrough)
A type of review in which an author leads members of the review through a work product and the members ask questions and make comments about possible issues.
72
Technical review
A formal review type by a team of technically qualified personnel that examines the suitability of a work product for its intended use and identifies discrepancies from specifications and standards.
73
Inspection
A type of formal review to identify issues in a work product, which provides measurement to improve the review process and the software development process.
74
Ad hoc reviewing
A review technique carried out by independent reviewers informally, without a structured process.
75
Checklist-based reviewing
A review technique guided by a list of questions or required attributes.
76
Scenario-based reviewing
A review technique where the review is guided by determining the ability of the work product to address specific scenarios.
77
Role-based reviewing
A review technique where reviewers evaluate a work product from the perspective of different stakeholder roles.
78
Perspective-based reading (Perspective-based reviewing)
A review technique whereby reviewers evaluate the work product from different viewpoints.
79
Test technique (test case design technique, test specification technique, test design technique)
A procedure used to derive and/or select test cases.
80
Black-box test technique (black-box technique, specification-based technique, specification-based test technique)
A procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system, without reference to its internal structure.
81
White-box test technique (structural test technique, structure-based test technique, structure-based technique, white-box technique)
A procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.
82
Coverage (test coverage)
The degree to which specified coverage items have been determined to have been exercised by a test suite, expressed as a percentage.
83
Experience-based test technique (experience-based technique)
A procedure to derive and/or select test cases based on the tester's experience, knowledge and intuition.
84
Equivalence partitioning (partition testing)
A black-box test technique in which test cases are designed to exercise equivalence partitions by using one representative member of each partition.
85
Equivalence partition (equivalence class)
A portion of the value domain of a data element related to the test object for which all values are expected to be treated the same based on the specification.
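To make the two definitions above concrete, here is a minimal sketch assuming a hypothetical rule that only ages 18 to 65 inclusive are valid; the validation function, the partitions and the representative values are all illustrative assumptions:

```python
# Hypothetical rule: ages 18..65 are valid; everything else is invalid.
def is_valid_age(age):
    return 18 <= age <= 65

# Three equivalence partitions over the age domain:
partitions = {
    "invalid_low":  range(0, 18),    # expected to be rejected
    "valid":        range(18, 66),   # expected to be accepted
    "invalid_high": range(66, 130),  # expected to be rejected
}

# One representative member per partition is enough for this technique.
representatives = {"invalid_low": 10, "valid": 40, "invalid_high": 90}

for name, age in representatives.items():
    assert age in partitions[name]  # the value belongs to its partition
    assert is_valid_age(age) == (name == "valid"), f"partition {name} failed"
```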
86
Boundary value analysis
A black-box test technique in which test cases are designed based on boundary values.
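Continuing the hypothetical 18 to 65 age rule from the previous sketch, boundary value analysis selects the values on each boundary plus their nearest neighbours on the invalid side (two-value boundary analysis):

```python
# Same hypothetical rule: ages 18..65 are valid.
def is_valid_age(age):
    return 18 <= age <= 65

# Two-value boundary analysis: each boundary plus its nearest
# neighbour on the invalid side.
boundary_cases = {17: False, 18: True, 65: True, 66: False}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected, f"boundary value {age} failed"
```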
87
Decision table testing
A black-box test technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table.
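A minimal sketch, assuming a hypothetical business rule (a loan is approved only for applicants who are both employed and credit-worthy); each column of the decision table becomes one test case:

```python
# Hypothetical business rule, for illustration only.
def loan_approved(employed, good_credit):
    return employed and good_credit

# Decision table (one column per rule):
#   employed    | T | T | F | F
#   good_credit | T | F | T | F
#   approved    | T | F | F | F
decision_table = [
    (True,  True,  True),
    (True,  False, False),
    (False, True,  False),
    (False, False, False),
]

for employed, good_credit, expected in decision_table:
    assert loan_approved(employed, good_credit) == expected
```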
88
State transition testing (finite state testing)
A black-box test technique using a state transition diagram or state table to derive test cases to evaluate whether the test item successfully executes valid transitions and blocks invalid transitions.
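A minimal sketch with a hypothetical two-state workflow; the derived tests check both that valid transitions succeed and that an invalid transition is blocked:

```python
# Hypothetical state table: (current state, event) -> next state.
TRANSITIONS = {
    ("draft", "publish"): "published",
    ("published", "withdraw"): "draft",
}

def next_state(state, event):
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"invalid transition: '{event}' in state '{state}'")
    return TRANSITIONS[(state, event)]

# Valid transitions derived from the state table must succeed:
assert next_state("draft", "publish") == "published"
assert next_state("published", "withdraw") == "draft"

# An invalid transition must be blocked:
try:
    next_state("draft", "withdraw")
except ValueError:
    pass
else:
    raise AssertionError("invalid transition was not blocked")
```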
89
Use case testing (scenario testing, user scenario testing)
A black-box test technique in which test cases are designed to execute scenarios of use cases.
90
Statement coverage
The percentage of executable statements that have been exercised by a test suite.
91
Decision coverage
The coverage of decision outcomes. (Note: this is the Glossary definition at publication, but a fuller definition would be: The percentage of decision outcomes that have been exercised by a test suite.)
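The difference between statement coverage and decision coverage is easiest to see on a one-decision function. A hypothetical sketch:

```python
# Hypothetical one-decision function.
def absolute(x):
    if x < 0:
        x = -x
    return x

# absolute(-5) alone executes every statement (the if test, the
# assignment and the return): 100% statement coverage. But only the
# True outcome of the decision 'x < 0' is exercised, so decision
# coverage is 50%.
assert absolute(-5) == 5

# A non-negative input exercises the False outcome as well, raising
# decision coverage to 100%.
assert absolute(3) == 3
```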
92
Error guessing
A test technique in which tests are derived on the basis of the tester's knowledge of past failures, or general knowledge of failure modes.
93
Exploratory testing
An approach to testing whereby the testers dynamically design and execute tests based on their knowledge, exploration of the test item and the results of previous tests.
94
Checklist-based testing
An experience-based test technique whereby the experienced tester uses a high-level list of items to be noted, checked, or remembered, or a set of rules or criteria against which a product has to be verified.
95
Test manager
The person responsible for project management of testing activities and resources, and evaluation of a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object.
96
Test plan
Documentation describing the test objectives to be achieved and the means and the schedule for achieving them, organized to coordinate testing activities.
97
Test strategy (organizational test strategy)
Documentation that expresses the generic requirements for testing one or more projects run within an organization, providing detail on how testing is to be performed, and is aligned with the test policy.
98
Test approach
The implementation of the test strategy for a specific project.
99
Entry criteria (definition of ready)
The set of conditions for officially starting a defined task.
100
Exit criteria (completion criteria, test completion criteria, definition of done)
The set of conditions for officially completing a defined task.