ISTQB Practice

33 Terms

1. Failure

Occurs when software does not meet the user’s needs or expectations. Failures can result from bugs, errors, defects, or faults.

2. Fault

A defect or an error in the software that, when executed, causes a failure.

3. Bug

The actual result does not align with the expected result.
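
A minimal Python sketch of how these three terms relate (the average function and its test are hypothetical, not from the source):

```python
# A human error while coding introduces a fault (defect) in the code:
def average(values):
    return sum(values) / (len(values) - 1)   # fault: should divide by len(values)


# Executing the faulty code triggers a failure: the actual result does not
# align with the expected result, i.e. a bug is observed.
def test_average():
    assert average([2, 4, 6]) == 4   # expected 4, actual 6 -> the test fails
```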

4. Reasons for Errors

Time pressure, human fallibility, inexperienced staff, miscommunication, complexity, complex interfaces, new technologies.

5. Dynamic Testing

Executes the software and shows failures caused by defects. Aims to find and fix as many defects as possible before the software is released.

6. Validation

Checks whether the system will meet user and other stakeholder needs in its operational environments (doing the right thing).

7. Verification

Checks whether the system meets specified requirements (doing the thing right).
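
A minimal sketch contrasting the two, assuming a hypothetical requirement REQ-7 and a hypothetical apply_discount function:

```python
# Hypothetical specified requirement (the verification target):
#   REQ-7: orders of 100.00 or more receive a 10% discount.

def apply_discount(total: float) -> float:
    """Return the order total after any applicable discount."""
    return total * 0.9 if total >= 100.0 else total


def test_req_7_discount_threshold():
    # Verification: compare actual behavior against the specified requirement.
    assert apply_discount(100.0) == 90.0
    assert apply_discount(99.99) == 99.99


# Validation, by contrast, asks whether a 10% discount at the 100.00 threshold
# is what users and stakeholders actually need in their operational
# environment (e.g. via acceptance testing), not just whether the spec was
# implemented as written.
```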

8. Static Testing

Does not involve executing the component or system being tested, but includes reviews and static analysis. Takes place early in the SDLC.
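
A minimal sketch of the kind of defect static analysis can report without running the code (the functions are hypothetical; mypy is one example of a static type checker):

```python
from typing import Optional


def find_user(user_id: int) -> Optional[str]:
    users = {1: "alice", 2: "bob"}
    return users.get(user_id)   # may return None for unknown IDs


def greeting(user_id: int) -> str:
    # A static type checker such as mypy can flag this line without executing
    # anything: find_user() may return None, and None has no .upper() method,
    # so this would fail at runtime for an unknown user_id.
    return "Hello, " + find_user(user_id).upper()
```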

9. Evaluating work products (requirements, user stories, designs, and code)

Useful for finding problems early in the development process, before time and money are wasted.

10. Triggering failures and finding defects

By running test cases, testers try to break the system and reveal failures so that the underlying defects in the software are identified.

11. Ensuring required coverage of a test object

The goal is to make sure that no untested areas, such as critical functionality or code paths, can cause problems in the production environment.
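
A minimal sketch of covering both branches of a hypothetical function; coverage.py (run as `coverage run -m pytest`, then `coverage report`) is one tool that can confirm nothing was left unexercised:

```python
def shipping_fee(weight_kg: float) -> float:
    """Flat fee for light parcels, per-kg surcharge for heavy ones."""
    if weight_kg <= 2.0:
        return 5.0
    return 5.0 + 1.5 * (weight_kg - 2.0)


# One test per branch, so both code paths of the test object are exercised.
def test_light_parcel():
    assert shipping_fee(1.0) == 5.0


def test_heavy_parcel():
    assert shipping_fee(4.0) == 8.0
```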

12. Reducing the risk of inadequate software quality

Finding bugs early in development and ensuring the software meets expectations.

13. Verifying whether specified requirements have been fulfilled

Testers can verify whether the software accomplishes its goal by comparing its actual behavior to the expected behavior specified in the requirements.

14. Providing information to stakeholders

Allows stakeholders to make better decisions about release readiness, project timelines, resource allocation, and risk management when they have access to reports on the quality and status of the software.

15. Building confidence in the quality of the test object

Users and stakeholders are more likely to have faith in the reliability of the software when it has undergone thorough testing and has been shown to behave consistently.

16. Validating the test object

Testing ensures the system performs as expected and satisfies the needs of its users.

17. Testing shows the presence of defects, not their absence

Testing reduces the probability of undiscovered defects remaining in the software, but even if no defects are found, testing is not a proof of correctness.

18. Exhaustive testing is impossible

Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Rather than attempting to test exhaustively, risk analysis, test techniques, and priorities should be used to focus test efforts.
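
A back-of-the-envelope sketch of why exhaustive testing breaks down, assuming a hypothetical function with just two 32-bit integer parameters:

```python
inputs_per_parameter = 2 ** 32
total_combinations = inputs_per_parameter ** 2            # 2**64, about 1.8e19

tests_per_second = 1_000_000                              # very optimistic rate
seconds_per_year = 60 * 60 * 24 * 365
years_needed = total_combinations / (tests_per_second * seconds_per_year)

print(f"{total_combinations:.3e} input combinations")                  # ~1.845e+19
print(f"~{years_needed:,.0f} years at one million tests per second")   # ~584,942 years
```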

19. Early testing saves time and money

Testing early in the software development lifecycle helps reduce or eliminate costly changes. Sometimes referred to as "shift left".

20. Defects cluster together

A small number of modules usually contains most of the defects discovered during pre-release testing or is responsible for most of the operational failures.

21. Beware of the pesticide paradox

If the same tests are repeated over and over again, eventually these tests no longer find any new defects. To detect new defects, existing tests and test data may need changing, and new tests may need to be written.
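
A minimal sketch of refreshing test data with pytest's parametrize, using a hypothetical normalize_email function; the last two cases stand in for data added after the original tests stopped finding anything new:

```python
import pytest


def normalize_email(address: str) -> str:
    return address.strip().lower()


@pytest.mark.parametrize("raw, expected", [
    ("Alice@Example.com", "alice@example.com"),
    ("  bob@example.com ", "bob@example.com"),
    ("CAROL@EXAMPLE.COM\t", "carol@example.com"),   # newer case: tab whitespace
    ("dave@EXAMPLE.com\n", "dave@example.com"),     # newer case: trailing newline
])
def test_normalize_email(raw, expected):
    assert normalize_email(raw) == expected
```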

22. Testing is context-dependent

For example, safety-critical industrial control software is tested differently from an e-commerce mobile app. As another example, testing in an Agile project is done differently than testing in a sequential software development lifecycle project.

23. Absence-of-errors is a fallacy

Thoroughly testing all specified requirements and fixing all defects found could still produce a system that is difficult to use, that does not fulfill the users’ needs and expectations, or that is inferior compared to other competing systems.

24. Debugging

Finds, analyzes, and removes the causes of failures in the software.
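
A small sketch, continuing the hypothetical average() example from card 3: testing revealed the failure, while debugging locates and removes its cause.

```python
# Testing showed that average([2, 4, 6]) returned 6 instead of the expected 4.
# Debugging narrows down and removes the cause:
def average(values):
    total = sum(values)
    count = len(values)
    # breakpoint()  # built-in hook to inspect total and count interactively
    return total / count   # cause removed: the divisor used to be `count - 1`
```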

25. Reviews, Static Analysis, and Dynamic Testing all have the same objective of

Identifying defects.

26. Flow of the phases of a Formal Review

Planning, Review Initiation, Individual Review, Communication & Analysis, Fixing & Reporting.

27. A Walkthrough is led by the

Author.

28. An Inspection is led by the

Trained Moderator.

29. Scribe

A person who documents all the issues, problems, and open points that were identified during a formal review.

30. Static Testing prevents defects in design and coding by

Uncovering omissions, inaccuracies, inconsistencies, ambiguities, and redundancies in requirements.

31. Early and frequent stakeholder feedback helps in identifying and correcting

Misunderstandings about requirements before significant work has been done, which prevents costly rework and ensures the final product meets stakeholders’ needs.

32. Typically examined using static testing techniques like reviews or static analysis

Source code documentation, such as comments, design specifications, and coding standards.

33. Traceability

The ability to track business requirements across the development lifecycle, including testing.
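
A minimal sketch of one way to record requirement-to-test traceability directly in a test suite, assuming pytest and hypothetical requirement IDs (a custom marker like this should be registered in pytest.ini or pyproject.toml to avoid warnings):

```python
import pytest

# Custom marker linking a test to a business requirement (hypothetical IDs).
requirement = pytest.mark.requirement


@requirement("REQ-101")  # "Users can reset their password by email"
def test_password_reset_sends_email():
    ...


@requirement("REQ-102")  # "Reset links expire after 24 hours"
def test_reset_link_expires():
    ...

# A requirement -> tests report can then be generated from the collected
# markers, showing which business requirements are covered by tests.
```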