Types of Software Testing
Various approaches to validate software.
Manual Testing
Involves manually executing test cases without automation.
White Box Testing
Testing internal structures or workings of an application.
Black Box Testing
Testing the functionality without any knowledge of the internal implementation.
Grey Box Testing
A combination of white box and black box testing, involving partial knowledge of internal structures.
Functional Testing
Verifying that each function of the software application operates in conformance with the requirement specification.
Non-Functional Testing
Testing aspects such as performance, usability, and security.
Automation Testing
Using automated tools to execute tests.
Unit Testing
Testing individual units or components of the software.
Integration Testing
Testing the interaction between different units or components.
System Testing
Testing the entire system as a whole.
Performance Testing
Evaluating the speed, stability, and scalability of the system.
Usability Testing
Evaluating how easy the system is to use.
Compatibility Testing
Testing the software's compatibility with different hardware, operating systems, and network environments.
Load Testing
Testing the system's performance under anticipated load.
Stress Testing
Testing the system beyond normal conditions to evaluate its limits.
Scalability Testing
Testing the system's ability to scale up or handle increasing amounts of work.
Stability Testing
Testing the system's reliability under continuous operation.
Incremental Testing
Integrating and testing units one at a time, so that faults are easier to localize.
Non-Incremental (Big Bang) Testing
Integrating all units at once and testing the combined system.
Top-Down Testing
Incremental testing that starts with the top-level modules, using stubs to stand in for lower-level modules that are not yet integrated.
Bottom-Up Testing
Incremental testing that starts with the low-level modules, using driver programs to exercise them until the higher-level modules are available.
Development Testing
Testing activities carried out by the development team.
Testing is often done by the programmers themselves, but it can also involve programmer/tester pairs or, for critical systems, a separate testing group.
Formal processes with detailed records of test results may be used for critical systems.
Testing Granularity:
Unit Testing: Testing individual program units or object classes, focusing on functionality.
Integration Testing: Integrating several individual units to create composite components, focusing on testing component interfaces.
System Testing: Integrating all components and testing the system as a whole, focusing on testing component interactions.
Development testing is primarily a defect testing process, aiming to discover bugs and is interleaved with debugging.
Unit Testing
Testing program components, such as methods or object classes.
Individual functions or methods are the simplest type of component.
Tests should call routines with different input parameters.
When testing object classes:
Test all operations associated with the object.
Set and check the value of all attributes associated with the object.
Put the object into all possible states by simulating all events that cause a state change.
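The three object-class testing activities above can be sketched with a small, hypothetical component (a `Door` class invented here purely for illustration) and Python's `unittest` framework:

```python
import unittest

# Hypothetical component under test: a door controller with a small
# set of states, used only to illustrate object-class testing.
class Door:
    def __init__(self):
        self.state = "closed"
        self.lock_count = 0

    def open(self):
        if self.state == "locked":
            raise RuntimeError("cannot open a locked door")
        self.state = "open"

    def close(self):
        self.state = "closed"

    def lock(self):
        if self.state == "open":
            raise RuntimeError("cannot lock an open door")
        self.state = "locked"
        self.lock_count += 1

class DoorTest(unittest.TestCase):
    def test_all_operations(self):
        # Exercise every operation associated with the object.
        door = Door()
        door.open()
        door.close()
        door.lock()

    def test_attributes(self):
        # Set and check the value of the object's attributes.
        door = Door()
        self.assertEqual(door.state, "closed")
        door.lock()
        self.assertEqual(door.lock_count, 1)

    def test_state_transitions(self):
        # Drive the object through all its states by simulating the
        # events that cause each state change.
        door = Door()
        door.open()
        self.assertEqual(door.state, "open")
        with self.assertRaises(RuntimeError):
            door.lock()  # open -> locked is not allowed
        door.close()
        door.lock()
        self.assertEqual(door.state, "locked")
        with self.assertRaises(RuntimeError):
            door.open()  # locked -> open is not allowed
```

Saved in a file, the suite can be run with `python -m unittest`.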
Integration Testing
Testing the interfaces of composite components created by integrating individual components.
Test cases are not applied to individual components but rather to the interface of the composite component.
Interface errors may not be detectable by testing individual objects because they result from interactions between objects in the component.
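A minimal sketch of the idea, using two invented components (a tokenizer and a summer) that are each correct in isolation; the test cases target the interface of the composite component built from them:

```python
# Hypothetical individual components: each passes its own unit tests.
def tokenize(text):
    """Split a comma-separated string into number strings."""
    return [t.strip() for t in text.split(",") if t.strip()]

def total(numbers):
    """Sum an iterable of numbers (not strings)."""
    return sum(numbers)

def sum_csv(text):
    """Composite component: integration test cases are applied here,
    not to tokenize() or total() individually."""
    # An interface error (e.g. passing the raw strings from tokenize()
    # straight into total()) would only surface when the components
    # interact, which is exactly what these tests exercise.
    return total(float(t) for t in tokenize(text))

# Integration tests against the composite interface.
assert sum_csv("1, 2, 3") == 6.0
assert sum_csv("") == 0
```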
System Testing
A crucial phase in the Software Development Life Cycle (SDLC) where the entire software system is tested as a whole.
Aims to validate that the software meets its specified requirements and functions correctly in the intended environment.
Verifies the system's compliance with both functional and non-functional requirements, ensuring its readiness for deployment to end-users.
Test-Driven Development (TDD)
An approach to program development where testing and code development are interleaved.
Code is developed incrementally along with a test for that increment.
The process does not move on to the next increment until the code passes its test.
Introduced as part of agile methods such as Extreme Programming but can also be used in plan-driven development processes.
TDD Process Steps
Identify the increment of functionality required (should be small and implementable in a few lines of code).
Write a test for this functionality and implement it as an automated test.
Run the test, along with all other tests. The new test will initially fail, indicating it adds something to the test set.
Implement the functionality and re-run the test.
Once all tests run successfully, move on to implementing the next chunk of functionality.
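The steps above can be sketched with a single small increment, a leap-year check chosen here purely as a hypothetical example:

```python
import unittest

# Steps 1-2: identify a small increment of functionality and write an
# automated test for it before implementing it.
class LeapYearTest(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(is_leap(2024))

    def test_century_not_leap(self):
        self.assertFalse(is_leap(1900))

    def test_four_hundred_is_leap(self):
        self.assertTrue(is_leap(2000))

# Step 3: running the suite at this point fails (is_leap does not
# exist yet), confirming the new test adds something to the test set.

# Step 4: implement just enough functionality to pass, then re-run.
def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 5: once all tests pass, move on to the next increment.
```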
Benefits of TDD
Reduces the costs of regression testing.
Regression testing checks that changes have not introduced new bugs and that new code interacts as expected with the existing code.
Automated testing dramatically reduces the costs of regression testing.
Confidence that new functionality has not caused or revealed problems with existing code.
Most useful in new software development where functionality is either implemented in new code or by using well-tested standard libraries.
Has proven successful for small and medium-sized projects.
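A small sketch of the regression-testing benefit, using an invented `discount` function and its accumulated test suite:

```python
import unittest

# Hypothetical existing code, written alongside its tests.
def discount(price, rate):
    """Apply a percentage discount; rate is given as e.g. 10 for 10%."""
    return round(price * (1 - rate / 100), 2)

class DiscountTest(unittest.TestCase):
    # These tests were written when discount() was first developed.
    # Re-running them after every later change is the regression
    # check: if a "refactoring" silently changed rate to a 0-1
    # fraction, these assertions would fail immediately.
    def test_ten_percent(self):
        self.assertEqual(discount(200, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(discount(99.99, 0), 99.99)
```

Because the suite is automated, the regression check costs one command (`python -m unittest`) rather than a manual re-test of existing functionality.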
Release Testing
Testing a particular release of a system intended for use outside of the development team, typically for customers and users.
In complex projects, the release could be for other teams developing related systems or for product management.
Distinctions between Release Testing and System Testing
A separate team not involved in the system development should be responsible for release testing.
System testing by the development team should focus on discovering bugs (defect testing), while release testing aims to check that the system meets its requirements and is good enough for external use (validation testing).
Goal of Release Testing
To convince the supplier that the system is good enough for use, showing that it delivers its specified functionality, performance, and dependability.
Should consider all system requirements, not just end-user requirements.
Usually a black-box testing process where tests are derived from the system specification – also known as ‘functional testing’.
Requirements-Based Testing
Requirements should be testable, allowing a tester to check that the requirement has been satisfied.
A systematic approach to test case design where each requirement is considered, and a set of tests are derived for it.
Validation rather than defect testing—demonstrating that the system has properly implemented its requirements.
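As a sketch, take a hypothetical requirement, "the system shall reject passwords shorter than 8 characters", and derive a set of tests from it rather than a single test:

```python
# Minimal stand-in for the behaviour the requirement describes.
def password_ok(password):
    return len(password) >= 8

# Tests derived from the requirement: the empty case, just below the
# boundary, the boundary itself, and well above it.
assert not password_ok("")          # empty input is rejected
assert not password_ok("short12")   # 7 characters: rejected
assert password_ok("longer12")      # 8 characters: accepted (boundary)
assert password_ok("a" * 20)        # well above the boundary
```

Each requirement in the specification gets its own derived set of tests in the same way, demonstrating (rather than merely assuming) that the requirement has been implemented.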
Scenario Testing
Devising typical scenarios of use to develop test cases for the system.
Scenarios should be realistic and relatable to real system users.
A scenario test should be a narrative story that is credible, fairly complex, motivates stakeholders, and is easy to evaluate.
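A scenario test can be written as an executable narrative. The sketch below invents a minimal shopping-cart component and walks it through one such story ("a customer adds two items, changes their mind about one, and checks out"):

```python
# Hypothetical component exercised by the scenario.
class Cart:
    def __init__(self):
        self.items = {}

    def add(self, name, price):
        self.items[name] = price

    def remove(self, name):
        del self.items[name]

    def checkout(self):
        return sum(self.items.values())

# Scenario: the customer browses, changes their mind, and pays.
cart = Cart()
cart.add("notebook", 4.50)       # adds a first item
cart.add("pen", 1.20)            # adds a second item
cart.remove("pen")               # changes their mind
assert cart.checkout() == 4.50   # pays only for what remains
```

The comments carry the story, which keeps the test easy for stakeholders to read and evaluate.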
Performance Testing
Testing for emergent properties, such as performance and reliability, once a system is completely integrated.
Designed to ensure the system can process its intended load, usually involving running a series of tests where the load is increased until system performance becomes unacceptable.
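A minimal sketch of that load-stepping loop, with an invented stand-in operation and an assumed acceptability threshold:

```python
import time

def operation():
    # Stand-in for a request to the system under test.
    sum(i * i for i in range(1000))

def load_test(max_requests=10000, threshold_seconds=0.5):
    """Double the load each round until the response time for the
    batch exceeds the acceptability threshold."""
    results = []
    load = 100
    while load <= max_requests:
        start = time.perf_counter()
        for _ in range(load):
            operation()
        elapsed = time.perf_counter() - start
        results.append((load, elapsed))
        if elapsed > threshold_seconds:
            break          # performance became unacceptable
        load *= 2          # step the load up and repeat
    return results

for load, elapsed in load_test():
    print(f"{load:>6} requests: {elapsed:.3f}s")
```

A real load test would drive concurrent requests against the deployed system; the point here is only the shape of the procedure, increasing load until performance degrades.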
User Testing
Also known as usability testing or user acceptance testing (UAT): a phase in which real end-users interact with the software to evaluate its usability, functionality, and user experience.
The primary goal is to ensure that the software meets user expectations, addresses user needs, and is intuitive and easy to use.
Essential even after comprehensive system and release testing, because the user's working environment influences how the system behaves in practice.
It is practically impossible for a system developer to replicate the system's working environment.
Types of User Testing
Alpha testing: Users of the software work with the development team to test the software at the developer’s site.
Beta testing: A release of the software is made available to users to experiment and raise problems they discover with the system developers.
Acceptance testing: Customers test a system to decide whether or not it is ready to be accepted from the system developers and deployed in the customer environment.
Alpha Testing
Users and developers work together to test a system as it is being developed.
Users can identify problems and issues that are not readily apparent to the development testing team.
Users can provide information about practice that helps with the design of more realistic tests.
Beta Testing
An early, sometimes unfinished, release of a software system is made available to customers and users for evaluation.
Beta testers may be a selected group of customers who are early adopters of the system, or the software may be made publicly available.
Mostly used for software products that are used in many different environments.
Essential to discover interaction problems between the software and features of the environment where it is used.
It is also a form of marketing.
Acceptance Testing
An inherent part of custom systems development.
Takes place after release testing.
Involves a customer formally testing a system to decide whether or not it should be accepted from the system developer.
Acceptance implies that payment should be made for the system.
Stages in the Acceptance Testing Process
Define acceptance criteria: Should take place early in the process before the contract for the system is signed.
Plan acceptance testing: Involves deciding on the resources, time, and budget for acceptance testing and establishing a testing schedule.
Derive acceptance tests: Tests are designed to check whether or not a system is acceptable, covering both functional and non-functional characteristics.
Run acceptance tests: The agreed acceptance tests are executed on the system, ideally in the actual environment where the system will be used.
Negotiate test results: The developer and the customer negotiate to decide if the system is good enough to be put into use.
Reject/accept system: A meeting between the developers and the customer to decide on whether or not the system should be accepted.