False-fail result
A test result in which a defect is reported although no such defect actually exists in the test object.
Error (mistake)
A human action that produces an incorrect result.
Defect (bug, fault)
A flaw in a component or system that can cause the component or system to fail to perform its required function.
Failure
Deviation of the component or system from its expected delivery, service or result.
Quality
The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.
Risk
A factor that could result in future negative consequences; usually expressed as impact and likelihood.
False-pass result
A test result which fails to identify the presence of a defect that is actually present in the test object.
Testing
Also known as evaluation. The process consisting of all life-cycle activities, both static and dynamic, concerned with planning, preparation, and evaluation of software products and related work products to determine that they are fit for purpose and to detect defects.
Requirement
A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.
Review
An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough.
Debugging
The process of finding, analyzing, and removing the causes of failures in software.
Confirmation testing (re-testing)
Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.
Test Strategy
A high-level description of the test levels to be performed and the testing within those levels for an organization or program (one or more projects)
Test Execution
The process of running a test on the component or system under test, producing actual results
Test approach
The implementation of the test strategy for a specific project.
Test Plan
A document describing the scope, approach, resources, and schedule of intended test activities. It identifies, among others, the test items, the features to be tested, the testing tasks, who will do each task, the degree of test independence, the test environment, the test design techniques, the entry and exit criteria to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.
Test monitoring
A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actual status to what was planned.
Test condition
An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.
Test basis
All documents from which the requirements of a component or system can be inferred; the documentation on which the test cases are based. If a document can be amended only by way of a formal amendment procedure, then the test basis is called a frozen test basis.
Test Data
Data that exists before a test is executed, and that affects or is affected by the component or system under test, e.g. data in a database.
Coverage (test coverage)
The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
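A minimal sketch of the calculation implied by this definition: coverage is the percentage of specified coverage items exercised by a test suite. The function name and figures are illustrative, not from the glossary.
```python
# Coverage expressed as a percentage of exercised coverage items.
def coverage_percent(items_exercised: int, items_total: int) -> float:
    return 100.0 * items_exercised / items_total

# e.g. a suite that executes 45 of 60 statements achieves 75% statement coverage
print(coverage_percent(45, 60))  # 75.0
```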
Test procedure specification (test procedure, test script, manual test script)
A document specifying a sequence of actions for the execution of a test.
Test Suite
A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one.
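A minimal sketch of such a chained suite, assuming a hypothetical ShoppingCart component: the postcondition of the first test (the cart state) is the precondition of the second.
```python
# Hypothetical component under test.
class ShoppingCart:
    def __init__(self):
        self.items = []
    def add(self, item):
        self.items.append(item)
    def remove(self, item):
        self.items.remove(item)

cart = ShoppingCart()                      # state shared across the suite

def test_add_item():
    cart.add("book")
    assert cart.items == ["book"]          # postcondition: cart holds one item

def test_remove_item():
    # precondition: cart holds "book", established by the previous test
    cart.remove("book")
    assert cart.items == []

if __name__ == "__main__":
    for test in (test_add_item, test_remove_item):   # the suite, run in order
        test()
    print("suite passed")
```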
Incident
Also known as deviation. Any event occurring that requires investigation.
Testware
Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up, and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.
Regression testing
Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.
Exit Criteria
The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task that have not been finished. Exit criteria are used to report against and to plan when to stop testing.
Test log
A chronological record of relevant details about the execution of tests.
Test summary report
A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria.
Error Guessing
A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.
Independence of testing
Separation of responsibilities, which encourages the accomplishment of objective testing
Test policy
A high level document describing the principles, approach, and major objectives regarding testing.
Verification
Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
Validation
Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
V-model
A framework to describe the software development lifecycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development lifecycle.
Test level
A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test, and acceptance test.
Integration
The process of combining components or systems into larger assemblies.
Off-the-shelf software (commercial off-the-shelf software, COTS)
A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.
Performance
The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.
Incremental development model
A development lifecycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment. In some but not all versions of this lifecycle model, each subproject follows a "mini V-model" with its own design, coding, and testing phases.
Iterative development model
A development lifecycle where a project is broken into a usually large number of iterations. An iteration is a complete development loop resulting in a release (internal or external) of an executable product, a subset of the final product under development, which grows from iteration to iteration to become the final product.
Agile software development
A group of software development methodologies based on iterative incremental development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams.
Agile manifesto
A statement on the values that underpin agile software development. The values are: individuals and interactions over processes and tools; working software over comprehensive documentation; customer collaboration over contract negotiation; responding to change over following a plan.
Efficiency testing
The process of testing to determine the efficiency of a software product.
Component testing (unit testing, module testing)
The testing of individual software components. Also known as program testing.
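A minimal sketch of a component (unit) test using Python's standard unittest module; the leap_year function is a hypothetical component under test.
```python
import unittest

# Hypothetical component under test.
def leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_divisible_by_400_is_leap(self):
        self.assertTrue(leap_year(2000))

    def test_divisible_by_100_only_is_not_leap(self):
        self.assertFalse(leap_year(1900))

if __name__ == "__main__":
    unittest.main()
```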
Stub
A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
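A minimal sketch of a stub, assuming a hypothetical payment gateway as the called component: the stub returns a canned answer so the calling component can be tested in isolation.
```python
class PaymentGatewayStub:
    """Replaces the real payment gateway; always approves the payment."""
    def charge(self, amount: float) -> bool:
        return True  # canned answer instead of a real network call

# Component under test; it depends on (calls) the gateway.
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway
    def place_order(self, amount: float) -> str:
        return "confirmed" if self.gateway.charge(amount) else "rejected"

# The component under test calls the stub instead of the real gateway.
assert OrderService(PaymentGatewayStub()).place_order(9.99) == "confirmed"
```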
Driver (test driver)
A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.
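A minimal sketch of a test driver, assuming a hypothetical discount function as the component under test: the driver takes care of calling the component and checking its results because no real caller exists yet.
```python
# Hypothetical component under test.
def discounted_price(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

def run_driver():
    # The driver supplies inputs, calls the component, and reports pass/fail.
    cases = [((100.0, 10.0), 90.0), ((80.0, 25.0), 60.0)]
    for args, expected in cases:
        actual = discounted_price(*args)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: discounted_price{args} = {actual}, expected {expected}")

if __name__ == "__main__":
    run_driver()
```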
Robustness testing
Testing to determine the robustness of the software product.
Test-driven development
A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.
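A minimal sketch of the test-first cycle, assuming a hypothetical fizzbuzz function: the test is written first and fails, then just enough code is added to make it pass.
```python
import unittest

# Step 2: just enough implementation to make the test below pass.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 1: this test is written first and initially fails (red); the
# implementation above is then added until it passes (green) and refactored.
class FizzBuzzTest(unittest.TestCase):
    def test_multiples_of_three_and_five(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")

if __name__ == "__main__":
    unittest.main()
```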
Integration testing
Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.
System testing
The process of testing an integrated system to verify that it meets specified requirements.
Functional requirement
A requirement that specifies a function that a component or system must perform.
Non-functional requirement
A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability, and portability.
Test environment (test bed)
An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.
Acceptance testing (acceptance, user acceptance testing)
Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers, or other authorized entity to determine whether or not to accept the system.
Maintenance
Modification of a software product after delivery to correct defects, to improve performance or other attributes or to adapt the product to a modified environment.
Alpha testing
Simulated or actual operational testing by potential users/customers or an independent test team at the developers' site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.
Beta testing (field testing)
Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external testing for off-the-shelf software in order to acquire feedback from the market.
Test Type
A group of test activities aimed at testing a component or system, focused on a specific test objective, i.e. functional test, usability test, regression test, etc. A test type may take place on one or more test levels or test phases.
Functional testing
Testing based on an analysis of the specification of the functionality of a component or system.
Black-box testing (specification based testing)
Testing, either functional or non-functional, without reference to the internal structure of the component or system.
Functionality testing
The process of testing to determine the functionality of a software product.
Interoperability testing
Also known as compatibility testing. The process of testing to determine the interoperability of a software product.
Security
Attributes of software products that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data.
Security testing
Testing to determine the security of the software product.
Performance testing
The process of testing to determine the performance of a software product.
Load testing
A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g. numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system.
Stress testing
A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources such as access to memory or servers.
Usability testing
Testing to determine the extent to which the software product is understood, easy to learn, easy to operate, and attractive to the users under specified conditions.
Maintainability testing
The process of testing to determine the maintainability of a software product.
Reliability testing
The process of testing to determine the reliability of a software product.
Portability testing
Also known as Configuration testing. The process of testing to determine the portability of a software product.
Functionality
The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions.
Reliability
The ability of the software product to perform its required functions under stated conditions for a specified number of operations.
Robustness
The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions.
Usability
The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions.
Efficiency
The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions.
Maintainability
The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.
Portability
The ease with which the software product can be transferred from one hardware or software environment to another.
Black-box (specification-based) test design technique
Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
White-box testing (structure-based testing)
Testing based on an analysis of the internal structure of the component or system.
Code coverage
An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage, or condition coverage.
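A minimal worked example of statement and decision coverage on a tiny component; the function and test values are illustrative.
```python
def classify(age: int) -> str:
    if age < 18:            # decision with two outcomes: True and False
        return "minor"      # statement 1
    return "adult"          # statement 2

# A single test, classify(30), executes the decision line and statement 2 only:
#   statement coverage = 2 of 3 executable lines, decision coverage = 1 of 2 outcomes.
# Adding classify(10) executes the remaining statement and the True outcome,
# bringing both statement and decision coverage to 100%.
assert classify(30) == "adult"
assert classify(10) == "minor"
```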
White-box (structural-based) test design technique
Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.
Maintenance testing
Testing the changes to an operational system or the impact of a changed environment to an operational system.
Impact Analysis
The assessment of change to the layers of development documentation, test documentation, and components, in order to implement a given change to specified requirements.
Static testing
Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static analysis.
Dynamic testing
Testing that involves the execution of the software of a component or system.
Informal review
Also known as ad hoc review. A review not based on a formal (documented) procedure.
Formal review
A review characterized by documented procedures and requirements, e.g. inspection.
Moderator (inspection leader)
The leader and main person responsible for an inspection or other review process.
Entry criteria
The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. a test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria.
Metric
A measurement scale and the method used for measurement.
Technical review
A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken.
Peer review
A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough
Inspection
A type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher level documentation. The most formal review technique and therefore always based on a documented procedure.
Static analysis
Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software development artifacts. Static analysis is usually carried out by means of a supporting tool.
Compiler
A software tool that translates programs expressed in a high order language into their machine language equivalents.
Test case specification
A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item.
Test design technique
Procedure used to derive and/or select test cases.
Traceability
The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability.