FL-1.1.1 Identify typical test objectives
FL-1.1.2 Differentiate testing from debugging
FL-1.2.1 Exemplify why testing is necessary
FL-1.2.2 Recall the relation between testing and quality assurance
FL-1.2.3 Distinguish between root cause, error, defect, and failure
FL-1.3.1 Explain the seven testing principles
FL-1.4.1 Explain the different test activities and related tasks
FL-1.4.2 Explain the impact of context on the test process
FL-1.4.3 Differentiate the testware that supports the test activities
FL-1.4.4 Explain the value of maintaining traceability
FL-1.4.5 Compare the different roles in testing
FL-1.5.1 Give examples of the generic skills required for testing
FL-1.5.2 Recall the advantages of the whole team approach
FL-1.5.3 Distinguish the benefits and drawbacks of independence of testing
FL-2.1.1 Explain the impact of the chosen software development lifecycle on testing
FL-2.1.2 Recall good testing practices that apply to all software development lifecycles
FL-2.1.3 Recall the examples of test-first approaches to development
FL-2.1.4 Summarize how DevOps might have an impact on testing
FL-2.1.5 Explain shift left
FL-2.1.6 Explain how retrospectives can be used as a mechanism for process improvement
FL-2.2.1 Distinguish the different test levels
FL-2.2.2 Distinguish the different test types
FL-2.2.3 Distinguish confirmation testing from regression testing
FL-2.3.1 Summarize maintenance testing and its triggers
FL-3.1.1 Recognize types of work products that can be examined by static testing
FL-3.1.2 Explain the value of static testing
FL-3.1.3 Compare and contrast static testing and dynamic testing
FL-3.2.1 Identify the benefits of early and frequent stakeholder feedback
FL-3.2.2 Summarize the activities of the review process
FL-3.2.3 Recall which responsibilities are assigned to the principal roles when performing reviews
FL-3.2.4 Compare and contrast the different review types
FL-3.2.5 Recall the factors that contribute to a successful review
FL-4.1.1 Distinguish black-box test techniques, white-box test techniques and experience-based test techniques
FL-4.2.1 Use equivalence partitioning to derive test cases
FL-4.2.2 Use boundary value analysis to derive test cases
FL-4.2.3 Use decision table testing to derive test cases
FL-4.2.4 Use state transition testing to derive test cases
FL-4.3.1 Explain statement testing
FL-4.3.2 Explain branch testing
FL-4.3.3 Explain the value of white-box testing
FL-4.4.1 Explain error guessing
FL-4.4.2 Explain exploratory testing
FL-4.4.3 Explain checklist-based testing
FL-4.5.1 Explain how to write user stories in collaboration with developers and business representatives
FL-4.5.2 Classify the different options for writing acceptance criteria
FL-4.5.3 Use acceptance test-driven development (ATDD) to derive test cases
FL-5.1.1 Exemplify the purpose and content of a test plan
FL-5.1.2 Recognize how a tester adds value to iteration and release planning
FL-5.1.3 Compare and contrast entry criteria and exit criteria
FL-5.1.4 Use estimation techniques to calculate the required test effort
FL-5.1.5 Apply test case prioritization
FL-5.1.6 Recall the concepts of the test pyramid
FL-5.1.7 Summarize the testing quadrants and their relationships with test levels and test types
FL-5.2.1 Identify risk level by using risk likelihood and risk impact
FL-5.2.2 Distinguish between project risks and product risks
FL-5.2.3 Explain how product risk analysis may influence thoroughness and test scope
FL-5.2.4 Explain what measures can be taken in response to analyzed product risks
FL-5.3.1 Recall metrics used for testing
Project progress metrics (e.g., task completion, resource usage, test effort)
Test progress metrics (e.g., test case implementation progress, test environment preparation progress, number of test cases run/not run, passed/failed, test execution time)
Product quality metrics (e.g., availability, response time, mean time to failure)
Defect metrics (e.g., number and priorities of defects found/fixed, defect density, defect detection percentage)
Risk metrics (e.g., residual risk level)
Coverage metrics (e.g., requirements coverage, code coverage)
Cost metrics (e.g., cost of testing, organizational cost of quality)
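As a worked illustration of two of the defect metrics listed above, the sketch below computes defect density and defect detection percentage (DDP). The figures and function names are invented for the example, not prescribed by the syllabus.

```python
# Illustrative calculation of two common defect metrics; all numbers are made up.

def defect_density(defects_found: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / size_kloc

def defect_detection_percentage(found_in_testing: int, found_after_release: int) -> float:
    """Share of all known defects that testing found before release."""
    total = found_in_testing + found_after_release
    return 100.0 * found_in_testing / total

print(defect_density(defects_found=46, size_kloc=23.0))        # 2.0 defects/KLOC
print(defect_detection_percentage(found_in_testing=90,
                                  found_after_release=10))      # 90.0 %
```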
FL-5.3.2 Summarize test progress reports
Test progress reports support ongoing test control and must provide enough information to modify the test schedule, resources, or test plan when deviations from the plan or changed circumstances require it.
During test monitoring and test control, the test team generates test progress reports for stakeholders to keep them informed.
Test progress reports are usually generated on a regular basis (e.g., daily or weekly) and include:
Testing period
Test progress (e.g., ahead or behind schedule), including any notable deviations
Impediments for testing, and their workarounds
Test metrics
New and changed risks within the testing period
Testing planned for the next period
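A minimal sketch of how the contents listed above might be assembled into a simple progress report. The field names only mirror the bullet list, and the values are invented; real reports follow whatever format and tooling the project agrees on.

```python
# Illustrative test progress report built from the fields listed above.
progress_report = {
    "testing_period": "2024-05-06 to 2024-05-10",
    "test_progress": "2 days behind schedule (test environment delivered late)",
    "impediments": ["Flaky payment sandbox - mitigated by a nightly retry run"],
    "test_metrics": {"test_cases_run": 120, "passed": 104, "failed": 16},
    "new_and_changed_risks": ["Third-party API contract changed mid-iteration"],
    "planned_next_period": ["Regression suite for release candidate 2"],
}

for field_name, value in progress_report.items():
    print(f"{field_name}: {value}")
```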
FL-5.3.2 Summarize test completion reports
Test completion reports summarize a specific test activity (e.g., test level, test cycle, iteration) and can give information for subsequent testing.
A test completion report is prepared during test completion, when a project, test level, or test type is complete and when, ideally, its exit criteria have been met. This report uses test progress reports and other data.
Typical test completion reports include:
Test summary
Testing and product quality evaluation based on the original test plan (i.e., test objectives and exit criteria)
Deviations from the test plan (e.g., differences from the planned test schedule, duration, and effort)
Testing impediments and workarounds
Test metrics based on test progress reports
Unmitigated risks, defects not fixed
Lessons learned that are relevant to the testing
FL-5.3.3 Exemplify how to communicate the status of testing
Verbal communication with team members and other stakeholders
Dashboards (e.g., CI/CD dashboards, task boards, and burn-down charts)
Electronic communication channels (e.g., email, chat)
Online documentation
Formal test reports
FL-5.4.1 Summarize how configuration management supports testing
Configuration management (CM) provides a discipline for identifying, controlling, and tracking work products such as test plans, test strategies, test conditions, test cases, test scripts, test results, test logs, and test reports as configuration items.
Configuration management keeps a record of changed configuration items when a new baseline is created. It is possible to revert to a previous baseline to reproduce previous test results.
All configuration items, including test items (individual parts of the test object), are uniquely identified, version controlled, tracked for changes, and related to other configuration items so that traceability can be maintained throughout the test process
All identified documentation and software items are referenced unambiguously in testware
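A toy sketch of the ideas above: uniquely identified, versioned configuration items, a named baseline that can be restored to reproduce results, and traceability links between items. The data structures and identifiers are invented for illustration and are not part of any particular CM tool.

```python
# Toy model of configuration management concepts: versioned configuration items,
# a baseline snapshot, and traceability between requirements, test cases, and logs.
configuration_items = {
    "REQ-12":  {"type": "requirement", "version": "1.1"},
    "TC-301":  {"type": "test case",   "version": "2.0"},
    "LOG-301": {"type": "test log",    "version": "1.0"},
}

# A baseline freezes the versions used for a test cycle, so results can be reproduced later.
baseline_rc1 = {item: data["version"] for item, data in configuration_items.items()}

# Traceability: which test cases cover which requirements, and which logs they produced.
traceability = {"REQ-12": {"test_cases": ["TC-301"], "test_logs": ["LOG-301"]}}

print(baseline_rc1)
print(traceability["REQ-12"])
```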
FL-5.5.1 Defect report objectives
Provide those responsible for handling and resolving reported defects with sufficient information to resolve the issue
Provide a means of tracking the quality of the work product
Provide ideas for improvement of the development and test process
FL-5.5.1 Defect report layout
Unique identifier
Title with a short summary of the anomaly being reported
Date when the anomaly was observed, issuing organization, and author, including their role
Identification of the test object and test environment
Context of the defect (e.g., test case being run, test activity being performed, SDLC phase, and other relevant information such as the test technique, checklist or test data being used)
Description of the failure, to enable reproduction and resolution, including the test steps that detected the anomaly and any relevant test logs, database dumps, screenshots, or recordings
Expected results and actual results
Severity of the defect (degree of impact) on the interests of stakeholders or requirements
Priority to fix
Status of the defect (e.g., open, deferred, duplicate, waiting to be fixed, awaiting confirmation testing, re-opened, closed, rejected)
References (e.g., to the test case)
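The sketch below maps the layout above onto a simple Python dataclass. The field names and example values are invented and only mirror the bullet list; they do not reflect any particular defect-tracking tool.

```python
from dataclasses import dataclass, field

# Illustrative defect report structure mirroring the layout above; values are invented.
@dataclass
class DefectReport:
    identifier: str
    title: str
    date_observed: str
    author_and_role: str
    test_object_and_environment: str
    context: str
    failure_description: str
    expected_result: str
    actual_result: str
    severity: str
    priority: str
    status: str = "open"
    references: list = field(default_factory=list)

report = DefectReport(
    identifier="DR-1042",
    title="Checkout total ignores discount code",
    date_observed="2024-05-08",
    author_and_role="A. Tester (test analyst)",
    test_object_and_environment="webshop v2.3.1 on staging",
    context="TC-301 (decision table test for discount rules)",
    failure_description="Applied code SAVE10 at step 4; total unchanged. See log LOG-301.",
    expected_result="Total reduced by 10%",
    actual_result="Total unchanged",
    severity="major",
    priority="high",
    references=["TC-301"],
)
print(report.identifier, report.status)
```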
FL-6.1.1 Explain how different types of test tools support testing
Test management tools – increase the test process efficiency by facilitating management of the SDLC, requirements, tests, defects, configuration
Static testing tools – support the tester in performing reviews and static analysis
Test design and test implementation tools – facilitate generation of test cases, test data and test procedures
Test execution and test coverage tools – facilitate automated test execution and coverage measurement
Non-functional testing tools – allow the tester to perform non-functional testing that is difficult or impossible to perform manually
DevOps tools – support the DevOps delivery pipeline, workflow tracking, automated build process(es), CI/CD
Collaboration tools – facilitate communication
Tools supporting scalability and deployment standardization (e.g., virtual machines, containerization tools)
Any other tool that assists in testing (e.g., a spreadsheet is a test tool in the context of testing)
FL-6.2.1 Recall the benefits of test automation
Time saved by reducing repetitive manual work
Prevention of simple human errors through greater consistency and repeatability
More objective assessment
Measures that are too complicated for humans to determine
Easier access to information about testing to support test management and test reporting
Reduced test execution times to provide earlier defect detection, faster feedback, and faster time to market
More time for testers to design new, deeper, and more effective tests
FL-6.2.1 Recall the risks of test automation
Unrealistic expectations about the benefits of a tool
Inaccurate estimates of the time, cost, and effort required to introduce a tool, maintain test scripts, and change the existing manual test process
Using a test tool when manual testing is more appropriate
Relying on a tool too much
Dependency on the tool vendor, which may go out of business, retire the tool, sell it to a different vendor, or provide poor support
Using open-source software that may be abandoned, meaning no further updates become available, or whose internal components require frequent updates as the software evolves
Using an automation tool that is not compatible with the development platform
Choosing an unsuitable tool that does not comply with regulatory requirements and/or safety standards