CTFL v4.0

67 Terms

1. FL-1.1.1 Identify typical test objectives
2. FL-1.1.2 Differentiate testing from debugging
3. FL-1.2.1 Exemplify why testing is necessary
4. FL-1.2.2 Recall the relation between testing and quality assurance
5. FL-1.2.3 Distinguish between root cause, error, defect, and failure
6. FL-1.3.1 Explain the seven testing principles
7. FL-1.4.1 Explain the different test activities and related tasks
8. FL-1.4.2 Explain the impact of context on the test process
9. FL-1.4.3 Differentiate the testware that supports the test activities
10. FL-1.4.4 Explain the value of maintaining traceability
11. FL-1.4.5 Compare the different roles in testing
12. FL-1.5.1 Give examples of the generic skills required for testing
13. FL-1.5.2 Recall the advantages of the whole team approach
14. FL-1.5.3 Distinguish the benefits and drawbacks of independence of testing
15. FL-2.1.1 Explain the impact of the chosen software development lifecycle on testing
16. FL-2.1.2 Recall good testing practices that apply to all software development lifecycles
17. FL-2.1.3 Recall the examples of test-first approaches to development
18. FL-2.1.4 Summarize how DevOps might have an impact on testing
19. FL-2.1.5 Explain shift left
20. FL-2.1.6 Explain how retrospectives can be used as a mechanism for process improvement
21. FL-2.2.1 Distinguish the different test levels
22. FL-2.2.2 Distinguish the different test types
23. FL-2.2.3 Distinguish confirmation testing from regression testing
24. FL-2.3.1 Summarize maintenance testing and its triggers
25. FL-3.1.1 Recognize types of work products that can be examined by static testing
26. FL-3.1.2 Explain the value of static testing
27. FL-3.1.3 Compare and contrast static testing and dynamic testing
28. FL-3.2.1 Identify the benefits of early and frequent stakeholder feedback
29. FL-3.2.2 Summarize the activities of the review process
30. FL-3.2.3 Recall which responsibilities are assigned to the principal roles when performing reviews
31. FL-3.2.4 Compare and contrast the different review types
32. FL-3.2.5 Recall the factors that contribute to a successful review
33. FL-4.1.1 Distinguish black-box test techniques, white-box test techniques and experience-based test techniques
34. FL-4.2.1 Use equivalence partitioning to derive test cases
35. FL-4.2.2 Use boundary value analysis to derive test cases
36. FL-4.2.3 Use decision table testing to derive test cases
37. FL-4.2.4 Use state transition testing to derive test cases
38. FL-4.3.1 Explain statement testing
39. FL-4.3.2 Explain branch testing
40. FL-4.3.3 Explain the value of white-box testing
41. FL-4.4.1 Explain error guessing
42. FL-4.4.2 Explain exploratory testing
43. FL-4.4.3 Explain checklist-based testing
44. FL-4.5.1 Explain how to write user stories in collaboration with developers and business representatives
45. FL-4.5.2 Classify the different options for writing acceptance criteria
46. FL-4.5.3 Use acceptance test-driven development (ATDD) to derive test cases
47. FL-5.1.1 Exemplify the purpose and content of a test plan
48. FL-5.1.2 Recognize how a tester adds value to iteration and release planning
49. FL-5.1.3 Compare and contrast entry criteria and exit criteria
50. FL-5.1.4 Use estimation techniques to calculate the required test effort
51. FL-5.1.5 Apply test case prioritization
52. FL-5.1.6 Recall the concepts of the test pyramid
53. FL-5.1.7 Summarize the testing quadrants and their relationships with test levels and test types
54. FL-5.2.1 Identify risk level by using risk likelihood and risk impact
55. FL-5.2.2 Distinguish between project risks and product risks
56. FL-5.2.3 Explain how product risk analysis may influence thoroughness and test scope
57. FL-5.2.4 Explain what measures can be taken in response to analyzed product risks

58. FL-5.3.1 Recall metrics used for testing

  • Project progress metrics (e.g., task completion, resource usage, test effort)

  • Test progress metrics (e.g., test case implementation progress, test environment preparation progress, number of test cases run/not run, passed/failed, test execution time)

  • Product quality metrics (e.g., availability, response time, mean time to failure)

  • Defect metrics (e.g., number and priorities of defects found/fixed, defect density, defect detection percentage; see the sketch after this list)

  • Risk metrics (e.g., residual risk level)

  • Coverage metrics (e.g., requirements coverage, code coverage)

  • Cost metrics (e.g., cost of testing, organizational cost of quality)
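
As a worked illustration of a few of these metrics, here is a minimal Python sketch. The formulas follow the usual definitions (defect density as defects per unit of size, defect detection percentage as the share of defects found by testing before release, coverage as the covered share of the total); the function names and sample numbers are invented for this example.

```python
# Minimal sketch of common test metrics; names and numbers are illustrative.

def defect_density(defects_found: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / size_kloc

def defect_detection_percentage(found_by_testing: int,
                                found_after_release: int) -> float:
    """DDP: share of all known defects that testing found before release."""
    return 100 * found_by_testing / (found_by_testing + found_after_release)

def requirements_coverage(covered: int, total: int) -> float:
    """Share of requirements exercised by at least one test case."""
    return 100 * covered / total

print(defect_density(42, 12.5))             # 3.36 defects per KLOC
print(defect_detection_percentage(90, 10))  # 90.0
print(requirements_coverage(180, 200))      # 90.0
```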

59. FL-5.3.2 Summarize test progress reports

During test monitoring and test control, the test team generates test progress reports to keep stakeholders informed. These reports support ongoing test control and must provide enough information to adjust the test schedule, resources, or the test plan when deviations from the plan or changed circumstances require it.

Test progress reports are usually produced on a regular basis (e.g., daily or weekly) and include:

  • Testing period

  • Test progress (e.g., ahead or behind schedule), including any notable deviations

  • Impediments for testing, and their workarounds

  • Test metrics

  • New and changed risks within the testing period

  • Testing planned for the next period

60. FL-5.3.2 Summarize test completion reports

Test completion reports summarize a specific test activity (e.g., a test level, test cycle, or iteration) and can provide information for subsequent testing.

A test completion report is prepared during test completion, when a project, test level, or test type is complete and, ideally, when its exit criteria have been met. It draws on test progress reports and other data.

Typical test completion reports include:

  • Test summary

  • Testing and product quality evaluation based on the original test plan (i.e., test objectives and exit criteria)

  • Deviations from the test plan (e.g., differences from the planned test schedule, duration, and effort)

  • Testing impediments and workarounds

  • Test metrics based on test progress reports

  • Unmitigated risks and defects not fixed

  • Lessons learned that are relevant to the testing

61. FL-5.3.3 Exemplify how to communicate the status of testing

  • Verbal communication with team members and other stakeholders

  • Dashboards (e.g., CI/CD dashboards, task boards, and burn-down charts)

  • Electronic communication channels (e.g., email, chat)

  • Online documentation

  • Formal test reports

62. FL-5.4.1 Summarize how configuration management supports testing

  • Configuration management (CM) provides a discipline for identifying, controlling, and tracking work products such as test plans, test strategies, test conditions, test cases, test scripts, test results, test logs, and test reports as configuration items.

  • Configuration management keeps a record of changed configuration items when a new baseline is created, and makes it possible to revert to a previous baseline to reproduce previous test results (see the sketch after this list).

  • All configuration items, including test items (individual parts of the test object), are uniquely identified, version controlled, tracked for changes, and related to other configuration items so that traceability can be maintained throughout the test process.

  • All identified documentation and software items are referenced unambiguously in testware.
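
To make the baseline idea concrete, here is a minimal Python sketch of testware under configuration management; the class and item names are invented for illustration, and a real project would rely on version control and test management tools rather than hand-rolled code.

```python
# Minimal sketch: configuration items grouped into named baselines,
# so a previous baseline can be restored to reproduce test results.
# All names are illustrative, not taken from any specific CM tool.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class ConfigurationItem:
    identifier: str   # unique ID, e.g. "TC-042"
    version: str      # e.g. "1.3"

@dataclass
class Repository:
    baselines: dict[str, list[ConfigurationItem]] = field(default_factory=dict)

    def create_baseline(self, name: str, items: list[ConfigurationItem]) -> None:
        self.baselines[name] = list(items)  # snapshot of the current versions

    def revert_to(self, name: str) -> list[ConfigurationItem]:
        return self.baselines[name]  # items exactly as versioned back then

repo = Repository()
repo.create_baseline("release-1.0-test", [
    ConfigurationItem("TestPlan", "2.0"),
    ConfigurationItem("TC-042", "1.3"),
])
print(repo.revert_to("release-1.0-test"))
```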

63. FL-5.5.1 Defect report objectives

  • Provide those responsible for handling and resolving reported defects with sufficient information to resolve the issue

  • Provide a means of tracking the quality of the work product

  • Provide ideas for improvement of the development and test process

64. FL-5.5.1 Defect report layout (see the sketch after the list)

  • Unique identifier

  • Title with a short summary of the anomaly being reported

  • Date when the anomaly was observed, issuing organization, and author, including their role

  • Identification of the test object and test environment

  • Context of the defect (e.g., test case being run, test activity being performed, SDLC phase, and other relevant information such as the test technique, checklist or test data being used)

  • Description of the failure to enable reproduction and resolution, including the test steps that detected the anomaly, and any relevant test logs, database dumps, screenshots, or recordings

  • Expected results and actual results

  • Severity of the defect, i.e., the degree of its impact on the interests of stakeholders or requirements

  • Priority to fix

  • Status of the defect (e.g., open, deferred, duplicate, waiting to be fixed, awaiting confirmation testing, re-opened, closed, rejected)

  • References (e.g., to the test case)
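
Purely as an illustration, the layout above can be modeled as a simple data structure. The following Python sketch uses invented field names and is not the schema of any particular defect management tool.

```python
# Minimal sketch of a defect report as a data structure.
# Field names are illustrative; real defect management tools define their own.

from dataclasses import dataclass, field

@dataclass
class DefectReport:
    identifier: str            # unique identifier
    title: str                 # short summary of the anomaly
    date_observed: str         # when the anomaly was observed
    author: str                # issuing organization, author, and their role
    test_object: str           # identification of the test object
    test_environment: str      # identification of the test environment
    context: str               # e.g., test case run, activity, SDLC phase
    failure_description: str   # steps, logs, screenshots to reproduce/resolve
    expected_result: str
    actual_result: str
    severity: str              # degree of impact on stakeholders/requirements
    priority: str              # priority to fix
    status: str = "open"       # e.g., open, deferred, re-opened, closed
    references: list[str] = field(default_factory=list)  # e.g., test case IDs
```

In practice, a defect management tool would generate the identifier and enforce the status workflow rather than leaving them as free-form fields.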

65. FL-6.1.1 Explain how different types of test tools support testing

  • Test management tools – increase the test process efficiency by facilitating management of the SDLC, requirements, tests, defects, and configuration

  • Static testing tools – support the tester in performing reviews and static analysis

  • Test design and test implementation tools – facilitate generation of test cases, test data and test procedures

  • Test execution and test coverage tools – facilitate automated test execution and coverage measurement

  • Non-functional testing tools – allow the tester to perform non-functional testing that is difficult or impossible to perform manually

  • DevOps tools – support the DevOps delivery pipeline, workflow tracking, automated build process(es), and CI/CD

  • Collaboration tools – facilitate communication

  • Tools supporting scalability and deployment standardization (e.g., virtual machines, containerization tools)

  • Any other tool that assists in testing (e.g., a spreadsheet is a test tool in the context of testing)

66. FL-6.2.1 Recall the benefits of test automation

  • Time saved by reducing repetitive manual work

  • Prevention of simple human errors through greater consistency and repeatability

  • More objective assessment

  • Measurement of values that are too complicated for humans to determine

  • Easier access to information about testing to support test management and test reporting

  • Reduced test execution times to provide earlier defect detection, faster feedback, and faster time to market

  • More time for testers to design new, deeper, and more effective tests

67. FL-6.2.1 Recall the risks of test automation

  • Unrealistic expectations about the benefits of a tool

  • Inaccurate estimations of the time, costs, and effort required to introduce a tool, maintain test scripts, and change the existing manual test process

  • Using a test tool when manual testing is more appropriate

  • Relying on a tool too much

  • Dependency on the tool vendor, which may go out of business, retire the tool, sell it to a different vendor, or provide poor support

  • Using open-source software that may be abandoned (so that no further updates are available) or whose internal components may require frequent updates as development continues

  • Incompatibility of the automation tool with the development platform

  • Choosing an unsuitable tool that does not comply with regulatory requirements and/or safety standards