Manual Testing and Scenario Based Interview Questions
Q1: What is software testing?
It is a process of analyzing a software item to detect the differences between existing and required conditions and to evaluate the features of the software item.
It is a verification and validation process.
Testing can demonstrate that defects are present, but it cannot prove that they are absent.
Q2: What is the difference between verification and validation?
Verification:
It is the process of confirming whether the software conforms to its specified requirements.
Process of examining/reviewing work products without executing them.
Are we building the product right?
It is a QA activity.
It is a static process, performed without executing the code.
It is performed by a QA team or by a developer.
Cost- and time-effective.
Activities involved in this are inspections, reviews, and walk-throughs.
Validation:
It is the process of confirming whether the software meets the user’s requirements.
Process of executing a product and examining how it behaves.
Are we building the right product?
It is a QC activity.
It is a dynamic process performed at run time.
It is performed by a QC team or by a tester.
Costly and time-consuming.
Activities involved in this are testing the application.
Q3: What is the difference between quality assurance and quality control?
Quality Assurance:
It ensures the prevention of defects in the process used to make software applications.
It involves process-oriented activities.
Its aim is to prevent defects.
Example: verification.
It is the technique of managing the quality.
All team members are responsible for QA.
QA is responsible for SDLC.
It is a process to create the deliverables.
Quality Control:
It executes the program or code to identify the defects in the software application.
It involves product-oriented activities.
Its aim is to identify defects and improve the product.
Example: validation.
It is a method to verify the quality.
The testing team is responsible for QC.
QC is responsible for STLC.
It is a process to verify the deliverables.
Q4: What is SDLC?
The Software Development Life Cycle refers to all the activities that are performed during software development, including requirement analysis, design, implementation, testing, deployment, and maintenance phases.
Q5: Explain STLC – Software Testing Life Cycle.
The software testing life cycle refers to all the activities performed during the testing of a software product. The phases include –
Requirement analysis and validation: In this phase, the requirement documents are analyzed and validated, and the scope of testing is defined.
Test planning: In this phase, the test plan strategy is defined, the estimation of test effort is defined along with the automation strategy, and tool selection is done.
Test Design and Analysis: In this phase, test cases are designed, test data is prepared, and automation scripts are implemented.
Test environment setup: A test environment closely simulating the real-world environment is prepared.
Test execution: The test cases are executed, and the bugs are reported and retested once resolved.
Test closure and reporting: A test closure report is prepared with the final test results summary, learning, and test metrics.
Q6: What is dynamic testing?
It involves executing the code and validating the actual output against the expected outcome.
Q7: What is static testing?
It involves reviewing the documents, without executing the code, to identify defects in the early stages of the SDLC.
Q8: What is white box testing?
This is also called glass-box, clear-box, or structural testing.
It is based on the application’s internal code structure.
In this, an internal perspective of the system, as well as programming skills, is used to design test cases.
In white box testing, the tester analyses the internal architecture of the system as well as the quality of the source code on different parameters like code optimization, code coverage, code reusability, etc.
This testing is usually done at the unit level.
Q9: What is black box testing?
It is a process of testing a system component considering input, output, and general function.
The tester interacts with the system through the interface providing input and validating the received output.
It doesn’t require knowledge of internal program structure.
In this, both front-end behavior and back-end results (e.g., database updates) are checked through external interfaces only.
External actions are performed; the internal code is not examined.
Q10: What is positive and negative testing?
Positive:
It determines what the system is supposed to do.
It helps to check whether the application meets the requirements or not.
Negative:
It determines what the system is not supposed to do.
It helps to find the defects in the software.
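A minimal sketch of positive vs. negative tests, using a hypothetical age-validation function (not from the original text).

```python
def is_valid_age(value):
    """Hypothetical system under test: accepts integer ages 1-120."""
    return isinstance(value, int) and 1 <= value <= 120

# Positive testing: inputs the system is supposed to accept.
assert is_valid_age(25) is True
assert is_valid_age(120) is True

# Negative testing: inputs the system is supposed to reject.
assert is_valid_age(0) is False
assert is_valid_age(-5) is False
assert is_valid_age("25") is False   # wrong type
```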
Q11: What is gray box testing?
It is a combination of both black-box and white-box testing. A tester who works on this type of testing needs access to design documents, which helps to create better test cases.
Q12: What is a test strategy?
It is a high-level document and is usually developed by the project manager. It’s a document that captures the approach about how we go about testing the product and achieving the goals.
Q13: What is test plan?
It is a document which contains the plan for all the testing activities.
Q14: What is test scenario?
It gives the idea of what we have to test; any testable part or functionality of an application is called a test scenario.
Q15: What is test case?
It is a set of conditions, steps, and expected results under which a tester determines whether an application/software is working correctly or not.
Q16: What is test bed?
An environment configured for testing is called a test bed. It consists of hardware, software, and network configuration.
Q17: What is test suite?
A collection of test cases is called a test suite.
Q18: What is test data?
Test data is the input data used to exercise the software program. It is divided into 2 categories:
Positive test data, which is given to the system to generate the expected result.
Negative test data, which is used to verify that the application handles invalid or unexpected input gracefully.
Q19: What is defect life cycle?
Defect Life Cycle or Bug Life Cycle is the specific set of states that a bug goes through from discovery to closure. The number of states that a defect goes through varies from project to project; the list below covers all the commonly used states.
New: When a new defect is logged and posted for the first time. It is assigned a status as NEW.
Assigned: Once the bug is posted by the tester, the lead of the tester approves the bug and assigns the bug to the developer team.
Open: The developer starts analyzing and works on the defect fix.
Fixed: When a developer makes a necessary code change and verifies the change, he or she can make the bug status “Fixed.”
Pending retest: After fixing the defect, the developer hands the fix back to the tester for retesting. While the retest is pending on the tester’s end, the status assigned is “pending retest.”
Retest: Tester does the retesting of the code, to check whether the defect is fixed by the developer or not and changes the status to “Re-test.”
Verified: The tester re-tests the bug after it got fixed by the developer. If there is no bug detected in the software, then the bug is fixed and the status assigned is “verified.”
Reopen: If the bug persists even after the developer has fixed the bug, the tester changes the status to “reopened”. Once again the bug goes through the life cycle.
Closed: If the bug no longer exists then the tester assigns the status “Closed.”
Duplicate: If the defect is reported twice or corresponds to the same underlying bug as an existing report, the status is changed to “duplicate.”
Rejected: If the developer feels the defect is not a genuine defect, he or she changes the status to “rejected.”
Deferred: If the present bug is not of prime priority and is expected to get fixed in the next release, the status “Deferred” is assigned to such bugs.
Not a bug: If it does not affect the functionality of the application, the status assigned to the bug is “Not a bug.”
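The allowed movements between the states above can be sketched as a transition table; the specific transition rules below are an illustrative subset, not a fixed standard.

```python
# Sketch of the defect life cycle as a transition table. State names
# follow the list above; the allowed transitions are illustrative.
DEFECT_TRANSITIONS = {
    "New":            ["Assigned"],
    "Assigned":       ["Open", "Rejected", "Deferred", "Duplicate", "Not a bug"],
    "Open":           ["Fixed", "Rejected", "Deferred"],
    "Fixed":          ["Pending retest"],
    "Pending retest": ["Retest"],
    "Retest":         ["Verified", "Reopened"],
    "Reopened":       ["Assigned"],
    "Verified":       ["Closed"],
}

def can_transition(current, new):
    """Return True if moving a defect from `current` to `new` is allowed."""
    return new in DEFECT_TRANSITIONS.get(current, [])

assert can_transition("New", "Assigned")
assert can_transition("Retest", "Reopened")
assert not can_transition("Closed", "Open")   # Closed is terminal here
```

A table like this is also how bug trackers enforce workflow rules: any status change not listed for the current state is rejected.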
Q20: What is smoke and sanity testing?
Smoke:
It is a kind of software testing performed after a software build is received, to ascertain that the critical functionalities of the program are working fine.
The purpose is to reject a badly broken build early, so that the QA team does not waste time installing and testing the software application further.
In smoke testing, the test cases chosen cover the most important functionality or components of the system. The objective is not to perform exhaustive testing, but to verify that the critical functionalities of the system are working fine.
Sanity Testing:
Sanity testing is a kind of Software Testing performed after receiving a software build, with minor changes in code, or functionality, to ascertain that the bugs have been fixed and no further issues are introduced due to these changes.
Q21: What are entry and exit criteria?
Entry:
It describes when to start testing, i.e. the item to be tested must be stable enough to test.
Example: to test the home page, the SRS/BRS/FRS documents and the test cases must be ready, and the build must be stable enough to test.
Exit:
It describes when to stop testing, i.e. the software can be released once everything mentioned below is fulfilled:
It is followed before actually releasing the software to the client, checking whether the complete testing is done or not.
Document checks: requirement traceability matrix (RTM) and summary reports.
Suspension criteria:
Describe when to stop testing temporarily.
Q22: What is a blocker?
A blocker is a bug of high priority and high severity. It prevents or blocks testing of some other major portion of the application as well.
Q23: What is regression testing?
Testing whether a changed component has introduced any error into unchanged components is called regression testing. It is performed on the QA or production environment, depending on the project.
Q24: What is retesting?
To test whether the reported bug has been resolved by the developer team or not, is known as retesting.
Q25: What is monkey / ad-hoc testing?
It is informal testing performed without planning or documentation and without knowledge of the application’s functionality. Monkey testing is a type of testing that is performed randomly, without any predefined test cases or test inputs.
Q26: What is the difference between severity and priority?
Priority:
“How soon we need to fix the bug” is its priority.
It indicates the order in which defects should be fixed.
Decided by the developer team. Types: low, medium, high, critical.
Severity:
“How severe the bug is” is its severity. It means how bad the defect is and what impact it can cause in our application.
Decided by the testing team. Types: minor, medium, major.
Q27: What is defect priority?
Defect priority is the urgency of fixing the defect. Normally the defect priority is set on a scale of P0 to P3, with P0 defects having the most urgency to fix.
Q28: What is defect severity?
Defect severity is the severity of the defect impacting the functionality. Based on the organization, we can have different levels of defect severity ranging from minor to critical or show stopper.
Q29: Give an example of Low Priority-Low severity, Low Priority-High severity, High Priority-Low severity, High Priority-High severity defects.
Low Priority-Low severity – A spelling mistake in a page not frequently navigated by users.
Low Priority-High severity – Application crashing in some very corner case.
High Priority-Low severity – Slight change in logo color or spelling mistake in company name.
High Priority-High severity – Issue with login functionality.
Q30: What is unit testing?
It is also called module testing or component testing.
It is done to check whether an individual unit or module of the source code is working properly.
It is done by the developer.
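A minimal sketch of a unit test in Python's `unittest` framework; the `add` function is a hypothetical unit under test, not from the original text.

```python
import unittest

def add(a, b):
    """Hypothetical unit under test."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

# Run the test case programmatically and keep the result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Each test method exercises one behavior of the unit in isolation, which is what distinguishes unit testing from integration or system testing.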
Q31: What is integration testing?
It is the process of testing the interfaces between software units.
It is done in three ways: big-bang, top-down, and bottom-up approaches.
It is the process of combining multiple components and testing them together.
It is normally done by a developer, but a tester can also perform it if he or she has knowledge of the code.
Q32: What is system testing?
It is a black box testing technique performed to evaluate the complete, integrated system. It includes both functional and non-functional testing, verifying the completed system to ensure that the application works as intended.
“The behavior of the system is tested as defined by the scope of the development project.”
Carried out by specialist or independent testers.
Q33: What is user acceptance testing?
Testing against the user’s requirements is done.
Done by the client as well as the end user.
It is the final stage of testing before the software is put into use.
Q34: What is alpha-beta testing?
Alpha Testing:
The developer records all the issues.
Done by end users/testers at the development site. (involves client or tester + developers)
Beta Testing:
Developers go through all reported issues after a specific period of time.
Done by end users at the client site. (involves client/user)
Q35: How is monkey testing different from ad-hoc testing?
In ad-hoc testing, although there are no predefined or documented test cases, testers still have an understanding of the application. In monkey testing, testers do not have any understanding of the application.
Q36: Explain TDD (Test Driven Development).
Test Driven Development is a software development methodology in which the development of the software is driven by test cases created for the functionality to be implemented. In TDD, first the test cases are created and then code to pass the tests is written. Later the code is re-factored as per the standards.
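The red-green-refactor cycle described above can be sketched in a few lines; `fizzbuzz` is an illustrative function chosen for the example, not from the original text.

```python
# TDD sketch: the test is written first (red), then just enough code
# is added to make it pass (green), then the code is refactored.

# Step 1 (red): specify the behavior as tests before any implementation.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2 (green): write the minimal code that makes the tests pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 3 (refactor): clean up the code while keeping the tests green.
test_fizzbuzz()
```

Note that the test is written against a function that does not yet exist; in a real TDD session it would first be run and fail, which proves the test itself is capable of failing.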
Q37: Explain Equivalence Class Partitioning.
Equivalence class partitioning is a specification-based black box testing technique. ECP means grouping test data into equivalence classes, with the assumption that all data items lying in one class have the same effect on the application. In simple terms, it means dividing the input domain into partitions and testing a representative of each partition separately.
Example:
For testing a square program (a program that prints the square of a number), the equivalence classes can be: the set of negative numbers, whole numbers, decimal numbers, the set of large numbers, etc.
Suppose we have to test the numbers 1-100. First, we divide this range into 5 equal partitions (1-20, 21-40, 41-60, 61-80, 81-100). Then we select a few random values from each partition; since all values in a partition are assumed to behave the same, checking those representatives covers the whole range instead of checking all 100 values.
Purpose: testing a complete module is exhaustive testing and time-consuming; that is why we use equivalence partitioning, as it saves time.
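A minimal sketch of ECP for a field that accepts 1-100, with one representative value standing in for each class; the `accepts` function is a hypothetical system under test.

```python
# ECP sketch: one representative value per equivalence class.
partitions = {
    "valid 1-100":         50,    # any in-range value behaves the same
    "invalid below range":  0,
    "invalid above range": 101,
}

def accepts(value):
    """Hypothetical system under test: accepts integers 1-100."""
    return 1 <= value <= 100

expected = {
    "valid 1-100": True,
    "invalid below range": False,
    "invalid above range": False,
}

# Three test cases cover behavior the naive approach would need
# 100+ test cases to cover.
results = {name: accepts(rep) for name, rep in partitions.items()}
assert results == expected
```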
Q38: What is boundary value analysis?
Boundary value analysis is a software testing technique for designing test cases in which the boundary values of the equivalence classes are taken as test inputs. It is also considered part of stress and negative testing. E.g., if the test data lies in the range 0-100, boundary value analysis will include the test data -1, 0, 1, 99, 100, and 101.
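The boundary selection for a numeric range can be sketched as a small helper; the just-outside values are included as the negative-testing part mentioned above.

```python
# BVA sketch: pick values at and around the edges of a valid range.
def boundary_values(low, high, include_outside=True):
    """Return the classic BVA inputs for an inclusive [low, high] range."""
    values = [low, low + 1, high - 1, high]
    if include_outside:
        # Just-outside values exercise the negative-testing side of BVA.
        values = [low - 1] + values + [high + 1]
    return values

# For the 0-100 example from the text:
assert boundary_values(0, 100) == [-1, 0, 1, 99, 100, 101]
```

Off-by-one defects cluster at these edges (`<` written instead of `<=`, and so on), which is why these few inputs catch a disproportionate share of range-handling bugs.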
Q39: What are some defect reporting attributes?
Some of the attributes of a Defect report are
DefectId – A unique identifier of the defect.
Defect Summary – A one line summary of the defect, more like a defect title.
Defect Description – A detailed description of the defect.
Steps to reproduce – The steps to reproduce the defect.
Expected Result – The expected behavior from which the application is deviating because of the defect.
Actual Result- The current erroneous state of the application w.r.t. the defect.
Defect Severity – Based on the criticality of the defect, this field can be set to minor, medium, major or show stopper.
Priority – Based on the urgency of the defect, this field can be set on a scale of P0 to P3.
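One way to model the attributes above is a small dataclass; the field names and the sample defect are illustrative, not a real bug tracker's schema.

```python
from dataclasses import dataclass

@dataclass
class DefectReport:
    """Sketch of a defect report with the attributes listed above."""
    defect_id: str
    summary: str
    description: str
    steps_to_reproduce: list
    expected_result: str
    actual_result: str
    severity: str = "medium"   # minor / medium / major / show stopper
    priority: str = "P2"       # P0 (most urgent) .. P3

bug = DefectReport(
    defect_id="BUG-101",
    summary="Login button unresponsive on Safari",
    description="Clicking Login does nothing; no request is sent.",
    steps_to_reproduce=[
        "Open the login page in Safari",
        "Enter valid credentials",
        "Click Login",
    ],
    expected_result="User is logged in and redirected to the dashboard",
    actual_result="Nothing happens; user stays on the login page",
    severity="major",
    priority="P1",
)
```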
Q40: What is a stub?
In case of top-down integration testing, many a times lower level modules are not developed while beginning testing/integration with top level modules. In those cases Stubs or dummy modules are used that simulate the working of modules by providing hard-coded or expected output based on the input values.
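A minimal sketch of a stub in top-down integration: the high-level checkout logic is ready, the payment module below it is not, so a stub returns a hard-coded response. All names here are illustrative.

```python
def payment_gateway_stub(amount):
    """Stub for the unfinished lower-level payment module."""
    return {"status": "approved", "amount": amount}   # hard-coded output

def checkout(cart_total, pay=payment_gateway_stub):
    """Top-level module under test; delegates payment to `pay`."""
    receipt = pay(cart_total)
    return receipt["status"] == "approved"

# The top-level module can be tested before the real gateway exists.
assert checkout(99.99) is True
```

When the real payment module is finished, it is passed in place of the stub and the same checkout tests are re-run.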
Q41: What is a driver?
In case of bottom-up integration testing, drivers are used to simulate the working of top level modules in order to test the related modules lower in the hierarchy.
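The mirror image of a stub is a driver: here the low-level module is finished and a small driver stands in for the missing higher-level caller. The function names and test values are illustrative.

```python
def apply_discount(price, percent):
    """Completed lower-level module under test."""
    return round(price * (1 - percent / 100), 2)

def discount_driver():
    """Driver simulating the not-yet-written higher-level caller."""
    cases = [
        (100.0, 10, 90.0),
        (50.0,   0, 50.0),
        (80.0,  25, 60.0),
    ]
    for price, percent, expected in cases:
        assert apply_discount(price, percent) == expected
    return "all driver checks passed"

assert discount_driver() == "all driver checks passed"
```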
Q42: What are some advantages of automation testing?
Some advantages of automation testing are:
Test execution using automation is fast and saves a considerable amount of time.
Carefully written test scripts remove the chance of human error during testing.
Tests execution can be scheduled for nightly run using CI tools like Jenkins which can also be configured to provide daily test results to relevant stakeholders.
Automation testing is far less resource-intensive: once the tests are automated, test execution requires almost no QA time, saving QA bandwidth for other exploratory tasks.
Q43: What are some disadvantages of automation testing?
Some disadvantages of automation testing are
It requires skilled automation testing experts to write test scripts.
Additional effort to write scripts is required upfront.
Automation scripts are limited to verifying what the tests are coded to check. They may miss errors that would be glaring and easily identifiable to a human (manual QA).
Even a minor change in the application can require script updating and maintenance.
Q44: Difference between Waterfall Model and Agile Methodology.
The Waterfall Model and Agile Methodology are two different approaches to software development:
Waterfall Model:
Sequential and linear approach.
Fixed requirements at the beginning.
Emphasis on documentation.
Limited customer involvement.
Late bug detection.
Agile Methodology:
Iterative and incremental approach.
Flexible and adaptive to changing requirements.
Emphasis on collaboration and communication.
Continuous testing throughout the process.
Active customer involvement.
The Waterfall Model is suitable for projects with stable requirements and a focus on predictability. Agile Methodology is ideal for projects requiring flexibility, collaboration, and the ability to adapt to evolving requirements.
Q45: What is a test plan, and what are the steps to create one?
A test plan is a document that outlines the strategy, scope, resources, and schedule for testing a product. It is an important part of the software development process, as it helps ensure that the product is of high quality and meets the requirements and specifications. To create a test plan, you can follow these steps:
Identify the goals of the testing. What do you want to achieve with the testing? What are the objectives of the test plan?
Define the scope of the testing. What features and functions of the product will be tested? What environments and platforms will the testing be conducted on?
Determine the resources needed for testing. What personnel, equipment, and tools will be required?
Develop a testing schedule. When will the testing take place? How long will it take?
Determine the test approach. How will the testing be conducted? What types of testing will be used (e.g., unit testing, integration testing, system testing, acceptance testing)?
Create a test matrix. This is a table that maps the test cases to the requirements or functions being tested.
Write the test cases. A test case is a set of steps and expected results that a tester follows to verify that a feature or function of the product is working correctly.
Review and revise the test plan. Make sure that the test plan is complete, accurate, and feasible.
Execute the testing. Follow the test plan and test cases to test the product.
Document the results of the testing. This includes any issues or defects that were found, and how they were addressed.
By following these steps, you can create a comprehensive and effective test plan that will help ensure the quality and reliability of your product.
Q46: What is the difference between system and integration testing?
Integration Testing:
Its primary goal is to find flaws in how components or modules communicate with one another, so it concentrates on the interfaces between components.
It is white-box in nature: testers are familiar with the inner workings of the components or modules under test.
It is usually automated using testing frameworks and tools, and is typically conducted by developers or by testers who work closely with developers.
It is generally less expensive and less time-consuming than system testing.
System Testing:
It determines whether the system as a whole satisfies both functional and non-functional requirements, and lets you assess, validate, and test the application design in addition to the business requirements.
Its primary goal is to confirm that the system meets the requirements and is suitable for the intended environment; the overall behavior of the system is examined under realistic conditions.
It is black-box in nature: testers do not need to know how the system works internally.
It is usually carried out manually (though some aspects may be automated) by testers who are not members of the programming team.
It is generally more expensive and more time-consuming than integration testing.
Q47: What does verification mean?
Verification includes activities such as reviews of business requirements, system requirements, and design, and code walk-throughs while developing a product. It is also known as static testing, where we ensure that “we are building the product right.” It also checks that the developed application fulfils all the requirements given by the client.
Q48: What is the difference between boundary value analysis and equivalence partitioning?
Boundary Value Analysis:
It is a technique where we identify errors at the boundaries of the input data rather than in the center of the input range.
Boundary values are those at the upper and lower limits of a variable.
It tests the boundaries between partitions.
It helps decrease testing time by reducing the number of test cases from infinite to finite.
It is often considered part of stress and negative testing.
It is sometimes also known as range checking.
Equivalence Partitioning:
It is a technique where the input data is divided into partitions of valid and invalid values.
The inputs to the software or application are separated into groups that are expected to show similar behavior.
It allows us to divide a set of test conditions into partitions that can be considered the same.
It reduces the number of test cases to a finite list of testable cases covering maximum possibilities.
It is suitable for all software testing levels, such as unit, integration, and system.
It is also known as equivalence class partitioning.
Q49: What is the difference between authorization and authentication?
Authentication
Authentication is the process of identifying a user to provide access to a system.
In this, the user or client and server are verified.
It is usually performed before the authorization.
It requires the login details of the user, such as user name & password, etc.
Data is provided through the Token Ids.
Example: Entering Login details is necessary for the employees to authenticate themselves to access the organizational emails or software.
Authentication credentials can be partially changed by the user as per the requirement.
Authorization
Authorization is the process of giving permission to access the resources.
In this, it is verified whether the user is allowed access according to the defined policies and rules.
It is usually done once the user is successfully authenticated.
It requires the user’s privilege or security level.
Data is provided through the access tokens.
Example: After employees successfully authenticate themselves, they can access and work on certain functions only as per their roles and profiles.
Authorization permissions cannot be changed by the user. The permissions are given to a user by the owner/manager of the system, and only he or she can change them.
Q50: Explain the process of test case review.
The process of test case review involves the following steps:
Preparation: Test cases are prepared based on the requirements and specifications of the software. They are documented in a standardized format.
Selection of Reviewers: A group of qualified reviewers is selected, including test leads, domain experts, developers, and other stakeholders with relevant knowledge and expertise.
Review Meeting: A review meeting is conducted, either in person or virtually, where the reviewers gather to discuss and examine the test cases. A designated moderator or the test case author leads the meeting.
Test Case Examination: Reviewers systematically analyze each test case, ensuring accuracy, coverage of scenarios, and alignment with requirements. They look for clarity, consistency, and adherence to best practices.
Feedback and Discussion: Reviewers provide feedback, raise concerns, and ask questions during the meeting. The discussion clarifies doubts and establishes a shared understanding.
Issue Identification and Resolution: Identified issues, defects, or improvements are documented. The responsible person addresses these issues, either during the meeting or afterward, to improve the quality of the test cases.
Follow-up Actions: The test case author incorporates suggested changes, updates the test cases, and shares them with reviewers for final review or approval. The test case review process ensures the reliability and effectiveness of test cases, improving the overall quality of the testing effort.
Q51: Differentiate between regression and retesting.
Regression Testing:
Regression testing is carried out to confirm that a recent program or code change has not adversely affected existing features.
Its purpose is to ensure that new code changes do not have side effects on existing functionality.
Defect verification is not part of regression testing.
Based on the project and the availability of resources, regression testing can be carried out in parallel with re-testing.
Regression testing can be automated; doing it manually could be expensive and time-consuming.
Regression testing is known as generic testing.
Regression testing is done for passed test cases.
Regression testing checks for unexpected side effects.
Regression testing is done only when a modification or change becomes necessary in an existing project.
Test cases for regression testing can be obtained from the functional specification, user tutorials and manuals, and defect reports for corrected problems.
Re-testing:
Re-testing is carried out to confirm that the test cases that failed in the final execution pass after the defects are fixed.
Re-testing is done on the basis of defect fixes.
Defect verification is part of re-testing.
The priority of re-testing is higher than that of regression testing, so it is carried out before regression testing.
You cannot automate the test cases for re-testing.
Re-testing is planned testing.
Re-testing is done only for failed test cases.
Re-testing makes sure that the original fault has been corrected.
Re-testing executes the failed test with the same data and the same environment on a new build.
Test cases for re-testing cannot be obtained before testing starts.
Q52: What is the purpose of ECP?
Testing a complete module is exhaustive testing and time-consuming; that is why we use equivalence partitioning, as it saves time.
Q53: What are the important Black Box Test Design Techniques?
Black-box test design techniques:
Equivalence partitioning
Boundary value analysis
Decision Table Testing
State Transition Testing
Use Case Testing
Q54: What is random testing?
It is a black-box test design technique, and an informal one, in which test inputs are selected randomly.
Q55: What is the purpose of Test Design Technique?
The purpose of test design techniques is to identify test conditions and test scenarios through which effective and efficient test cases can be written. Using test design techniques is a better approach than picking test cases out of the air. Test design techniques help in achieving high test coverage and in defining tests that provide insight into the quality of the test object.
Q56: What is use case testing?
A black box test design technique in which test cases are designed to execute user scenarios or business scenarios.
Q57: What is the equivalence test design technique?
It is a Black-box (Specification Based) Test Case Design Technique with two primary goals-
To reduce the number of test cases to the necessary minimum.
To select the right test cases to cover all possible scenarios.
Q58: What is State Transition Testing?
A system may exhibit a different response depending on current conditions or previous history (its state). In this case, that aspect of the system can be shown with a state transition diagram. It allows the tester to view the software in terms of its states, transitions between states, the inputs or events that trigger state changes (transitions), and the actions that may result from those transitions. The states of the system or object under test are separate, identifiable, and finite in number.
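A small sketch of state transition testing: an ATM-style PIN check where three wrong attempts lock the card. The states, events, and transition table below are an illustrative model, not from the original text.

```python
# State transition model: (current_state, event) -> next_state.
TRANSITIONS = {
    ("ready",        "correct_pin"): "authenticated",
    ("ready",        "wrong_pin"):   "one_failure",
    ("one_failure",  "correct_pin"): "authenticated",
    ("one_failure",  "wrong_pin"):   "two_failures",
    ("two_failures", "correct_pin"): "authenticated",
    ("two_failures", "wrong_pin"):   "locked",
}

def run(events, state="ready"):
    """Feed a sequence of events through the model; unknown pairs stay put."""
    for event in events:
        state = TRANSITIONS.get((state, event), state)
    return state

# Test cases derived from the diagram: each covers a transition path.
assert run(["correct_pin"]) == "authenticated"
assert run(["wrong_pin", "wrong_pin", "correct_pin"]) == "authenticated"
assert run(["wrong_pin"] * 3) == "locked"
assert run(["wrong_pin"] * 4) == "locked"   # locked absorbs further events
```

The tester designs one test per valid transition (and ideally per invalid event in each state), which is exactly the coverage goal of this technique.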
Q59: What are the different types of test design techniques?
Different types of test design techniques:
Test design techniques are categorized into two types. They are:
Static testing technique.
Dynamic testing technique.
The dynamic techniques are subdivided into three more categories. They are:
Specification-based (black-box, also known as behavioral techniques) techniques.
Structure-based (white-box or structural techniques) techniques.
Experience-based techniques.
The Specification-based or Black-box testing techniques are
a. Equivalence partitioning.
b. Boundary value analysis.
c. Decision tables.
d. State transition testing.
e. Use case testing.
Structure-based or White-box testing techniques are Statement Testing and Coverage, Decision Testing and Coverage, and Linear Code Sequence And Jump (LCSAJ).
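The difference between statement and decision coverage can be shown on a tiny function; `grade` and its thresholds are illustrative.

```python
def grade(score):
    if score >= 90:        # decision 1
        return "A"
    if score >= 50:        # decision 2
        return "pass"
    return "fail"

# Decision coverage: each decision must evaluate both True and False.
# score=95 -> decision 1 True; score=70 -> d1 False, d2 True;
# score=30 -> d1 False, d2 False. These three also reach every
# statement, so statement coverage is achieved as well.
coverage_tests = {95: "A", 70: "pass", 30: "fail"}
for score, expected in coverage_tests.items():
    assert grade(score) == expected
```

Note that statement coverage alone could be satisfied here by the same three inputs, but in general decision coverage is the stronger criterion: it can require extra tests that statement coverage does not.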
Q60: What is Walkthrough in static technique?
Walk-through:
It is not a formal process/review
It is led by the authors
Author guides the participants through the document according to his or her thought process to achieve a common understanding and to gather feedback.
Useful for people who are not from the software discipline, who are not used to or cannot easily understand the software development process.
It is especially useful for higher-level documents, like the requirement specification.
The goals of a walkthrough:
i. To present the documents both within and outside the software discipline in order to gather the information regarding the topic under documentation.
ii. To explain or do the knowledge transfer and evaluate the contents of the document
iii. To achieve a common understanding and to gather feedback.
iv. To examine and discuss the validity of the proposed solutions
Q61: What is a Technical Review in static technique?
The aim of this review technique is to achieve consensus about the technical aspect of the document. They are informal in nature and it is the experts, who identify defects in the document. The experts who are a part of the review are architects, chief designers, key users, etc. However, peers are also a part of the review as well. In a technical review, the value of the technical concepts and alternatives is assessed. It is also ensured that the right technical concepts are used.
Q62: What is a Static testing technique?
Static test techniques provide a great way to improve the quality and productivity of software development. It includes the reviews and provides the overview of how they are conducted. The primary objective of static testing is to improve the quality of software products by assisting engineers to recognize and fix their own defects early in the software development process.
Q63: What are the uses of Static Testing?
The uses of static testing are as follows:
Since static testing can start early in the life cycle, early feedback on quality issues can be established. As defects are detected at an early stage, the rework cost is most often relatively low, and development productivity is likely to increase because of the reduced rework effort.
Types of defects that are easier to find during static testing are: deviations from standards, missing requirements, design defects, non-maintainable code, and inconsistent interface specifications.
Static tests contribute to the increased awareness of quality issues.
Q64: Can u tell me about the kick-off in a formal review?
This kick-off meeting is an optional step in a review procedure. The goal of this step is to give a short introduction on the objectives of the review and the documents to everyone in the meeting. The relationships between the document under review and the other documents are also explained, especially if the numbers of related documents are high. At customer sites, we have measured results up to 70% more major defects found per page as a result of performing a kick-off.
Q65: What is a formal review?
Formal reviews follow a formal process. It is well structured and regulated. A formal review process consists of six main steps:
Planning
Kick-off
Preparation
Review meeting
Rework
Follow-up