Software Engineering - QA and Test Notes
Motivation and Challenges
Ensuring quality in software development.
Software testing, validation, and reviews as key components.
Disciplines in Software Engineering
Development:
Requirements: Context analysis and Requirements Engineering.
Design: Architecture and Detailed design.
Implementation.
Quality Management:
Quality assurance and testing: Test, inspection, metrics.
Evolution:
Roll-Out, Operation, Maintenance.
Further development, Reuse, Reengineering.
Change management.
Management:
Strategy, Economy, Team.
Dates, Risks.
Customer, client/contractor.
Innovation.
Processes and procedure models: Improvement, process model, maturity levels.
Configuration management, Documentation, Knowledge management.
People in the SWE process and digital ethics.
Tools.
Basic topics and focus in software engineering.
Software Quality Problems
Software errors are common in everyday life.
Examples of software failures:
Delays in student financial aid (BAföG) due to software errors.
Limited access to funds for Deutsche Bank customers due to software errors.
GM recalling 4.3 million cars.
Software errors in Roche's Accu-Chek Connect diabetes management app.
Causes of Software Errors
The immediate cause of an error is an incorrect implementation.
The real (root) causes, however, are manifold.
Ariane 501 Example:
Ariane 5 was intended as a successor to the Ariane 4 family (which had over 100 successful launches), leading to code reuse.
Maiden flight on 4 June 1996.
The rocket blew itself up after a few seconds, resulting in mission loss.
Costs exceeded €500 million, and the program was delayed by 3 years.
The Commission of Inquiry released its report on 19 June 1996 (http://sunnyday.mit.edu/accidents/Ariane5accidentreport.html).
Primary cause: an operand error during the conversion of a 64-bit floating-point value to a 16-bit signed integer, combined with a lack of exception handling.
Root causes:
Value ranges were assumed for the variables concerned but not documented (distributed responsibility, lack of documentation).
The assumptions could not be verified on the basis of the planned trajectory, as this was not part of the requirements specification (management, lack of software engineering).
Quality Assurance Measures to Prevent Errors
Cost management: Weigh the costs of a preventive measure against the costs of a defect.
Reuse: Existing software must not be reused for a new task without inspection. It must first be checked whether its capabilities meet the requirements of the new task.
Specification: The capabilities of a software and all the assumptions it makes about its environment must be clearly specified.
Error handling: Every potential error situation in a software must either be handled, or the reasons for not handling it must be documented in such a way that the validity of the assumptions made can be verified.
System test: When testing software that consists of several components, it is not enough to test each component in isolation. Comprehensive system tests under the most realistic conditions possible are necessary.
Review: In addition to a thorough test, every program must be inspected by competent experts because the fulfillment and adequacy of assumptions and results in particular often cannot be tested.
Ensuring Quality
Quality management: established process for dealing with quality.
Describing quality (using Requirements Engineering).
Evaluating the quality of software (SW testing).
Achieving quality (throughout the entire lecture, professional software engineering + SW architecture).
Finding errors/defects (SW test).
Avoiding errors/defects (throughout the entire lecture, professional software engineering + SW architecture).
Quality assurance (QA): concrete procedure for ensuring quality.
Constructive QA
Analytical QA
Organizational QA
Organizational and constructive measures are more important and are supplemented by analytical measures.
Quality cannot be "tested into" the product.
Software Quality Assurance
Organizational QA: Software project management.
Constructive QA: Constructive software engineering.
Analytical QA: Software examination.
Execution using the computer: Dynamic test.
Analysis using the computer: Static test.
Analysis by humans: Reviews.
Software Testing
The primary objective of testing is to find errors.
Further test objectives:
Have all requirements been met? (Verification)
Creating confidence in and trust in the level of quality.
Information for stakeholders.
Assessing quality risks.
Is the software suitable for its intended use? (Validation)
Check legal/contractual conformity.
Definition of Software Test
Testing is the process within the software development lifecycle that evaluates the quality of a component or system and related work products.
Failure: An event in which a component or system does not meet its requirements within specified limits (ISTQB Certified Tester Glossary).
Defect: An imperfection in a work product where it does not meet its requirements (ISTQB Certified Tester Glossary). Cause of a failure.
Error: A human action that produces an incorrect result (ISO 24765).
What customers report as "errors" are, in this terminology, failures.
Test Case
Contents:
Name: Test objective, what is to be tested?
Tested requirement: Reference to a requirement or to a defect.
Precondition: What data/state is present?
Postcondition: What data/state is available after the test case has been executed?
Description of the test steps: Input, expected outputs, expected exceptions.
Test infrastructure: What is necessary to execute the test case?
Logical vs. Concrete Test Case
Logical test case: Value ranges for input/output.
Empty list.
List with one element.
List with many elements.
Concrete test case: Concrete input/output.
Empty list.
List with the element "Person x".
List with the elements: "Person x", "Person y", "Person z".
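To make the mapping concrete, here is a minimal sketch in Java with JUnit 5 (the choice of language and framework is an assumption, not given in these notes): each of the three logical cases above becomes one concrete test with fixed data.

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    class LogicalVsConcreteTest {

        // Logical case "empty list", made concrete: no elements at all.
        @Test
        void emptyList() {
            List<String> list = new ArrayList<>();
            assertTrue(list.isEmpty());
        }

        // Logical case "list with one element": the concrete datum "Person x".
        @Test
        void oneElement() {
            List<String> list = new ArrayList<>(List.of("Person x"));
            assertEquals(1, list.size());
        }

        // Logical case "list with many elements": three concrete entries.
        @Test
        void manyElements() {
            List<String> list = new ArrayList<>(List.of("Person x", "Person y", "Person z"));
            assertEquals(3, list.size());
        }
    }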
Test Case Example: Inserting an Item in an Empty ToDo List
Name: Inserting an item in an empty ToDo list.
Tested requirement: Administration ToDo list, A-102.
Precondition: ToDo list is empty.
Postcondition: ToDo list contains added element.
Description of the test steps:
User selects the option to add an item and enters the required data.
System displays updated list.
Test infrastructure: Database connection, test environment X available, …
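A minimal executable sketch of this test case, again assuming Java/JUnit 5; the TodoList class, its addItem method, and the test datum "Buy milk" are hypothetical stand-ins, and the database connection and test environment X from the test infrastructure are not modeled here.

    import java.util.ArrayList;
    import java.util.List;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    class AddItemToEmptyTodoListTest {

        // Minimal stand-in for the system under test.
        static class TodoList {
            private final List<String> items = new ArrayList<>();
            boolean isEmpty()         { return items.isEmpty(); }
            void addItem(String item) { items.add(item); }
            List<String> items()      { return items; }
        }

        @Test
        void insertItemInEmptyTodoList() {     // tested requirement: A-102
            TodoList list = new TodoList();
            assertTrue(list.isEmpty());        // precondition: ToDo list is empty

            list.addItem("Buy milk");          // test step: user adds an item

            // postcondition: ToDo list contains the added element
            assertEquals(List.of("Buy milk"), list.items());
        }
    }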
Positive vs. Negative Test Case
Positive test case:
Correct input.
Expected correct result.
Negative test case:
Invalid input.
Expected exception handling.
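A small sketch of the distinction, assuming a hypothetical parseAge helper: the positive test case checks a correct result for correct input, the negative test case feeds invalid input and expects the exception handling.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    class AgeParserTest {

        // Hypothetical function under test: parses a non-negative age.
        static int parseAge(String input) {
            int age = Integer.parseInt(input);
            if (age < 0) throw new IllegalArgumentException("age must be >= 0");
            return age;
        }

        // Positive test case: correct input, expected correct result.
        @Test
        void correctInputYieldsCorrectResult() {
            assertEquals(42, parseAge("42"));
        }

        // Negative test case: invalid input, expected exception handling.
        @Test
        void invalidInputYieldsException() {
            assertThrows(IllegalArgumentException.class, () -> parseAge("-1"));
        }
    }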
Who Tests?
Everyone is responsible for the quality of a product!
Quality cannot be tested into a product.
Psychology of Testing
Test success is influenced by psychological factors.
Blindness to your own mistakes.
Difficulty in leaving the path already taken.
Human "weaknesses" in the interpretation of information.
Techniques to Minimize Bias
Pair (Programming, Testing).
Test-Driven Development (write the test first, then the code).
Test/review by another person.
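As a sketch of the test-first idea (with a hypothetical Discount example): the test is written first and fails ("red"), then just enough code is added to make it pass ("green").

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    class DiscountTest {

        // Step 1 (red): written BEFORE any Discount code exists; it pins
        // down the expected behavior and fails until the code is added.
        @Test
        void tenPercentDiscountIsApplied() {
            assertEquals(90.0, Discount.apply(100.0), 0.001);
        }

        // Step 2 (green): just enough implementation to make the test pass.
        static class Discount {
            static double apply(double amount) {
                return amount * 0.9;
            }
        }
    }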
When to Test?
Untested code is not code!
Continuous Testing.
The earlier a defect is found, the cheaper it is to fix.
How Others Test
Scanner checks for completeness and correctness of the installed parts.
Tank crash test: The tank can withstand certain impact forces.
Tank temperature test: The tank is functional down to -40 °C.
Test Levels
Unit tests, Component tests: Uncover errors in components.
Integration test: Uncover errors in the interaction of multiple components.
System test: Uncover errors in end-to-end application workflows.
Acceptance test: Generate confidence in the product, check applicability for intended use.
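A hedged sketch of the first two levels, using hypothetical Cart and PriceCalculator components: the unit/component test exercises one component in isolation, while the integration test targets errors that only appear in the interaction of the two.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    class TestLevelsSketch {

        static class Cart {
            private int items = 0;
            void add(String item) { items++; }
            int itemCount()       { return items; }
        }

        static class PriceCalculator {
            double total(Cart cart) { return cart.itemCount() * 10.0; }
        }

        // Unit/component test: Cart in isolation.
        @Test
        void cartCountsItems() {
            Cart cart = new Cart();
            cart.add("book");
            assertEquals(1, cart.itemCount());
        }

        // Integration test: errors in the interaction of Cart and
        // PriceCalculator (e.g. a wrong count being priced) surface here.
        @Test
        void calculatorPricesCartContents() {
            Cart cart = new Cart();
            cart.add("book");
            assertEquals(10.0, new PriceCalculator().total(cart), 0.001);
        }
    }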
How to Test?
Systematically and based on experience.
Never without a goal and a plan.
Black-Box Test: Test cases are derived without knowledge of the program logic.
White-Box Test: Test cases are derived with knowledge of the program logic.
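A small illustration of the difference, on a hypothetical classify function: the black-box cases are derived from the specification alone, while the white-box case is added after reading the code, in order to cover an otherwise untested branch.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    class ClassifyTest {

        // Hypothetical function under test.
        static String classify(int x) {
            if (x < 0) return "negative";
            if (x == 0) return "zero";
            return "positive";
        }

        // Black-box: derived from the specification, without reading the code.
        @Test
        void specDerivedCases() {
            assertEquals("negative", classify(-5));
            assertEquals("positive", classify(7));
        }

        // White-box: reading the code reveals the x == 0 branch,
        // which the spec-derived cases above do not cover.
        @Test
        void branchCoverageCase() {
            assertEquals("zero", classify(0));
        }
    }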
Summary of Testing
What is software testing? Execution of the program with the objective of finding errors.
How do you ensure quality? Through constructive, analytical, and organizational measures.
When to test? As early as possible, always.
Why do you test? To find errors.
Who tests? Everyone; all team members/project members are responsible for the quality of the software.
How much testing do you do? Until the software can be delivered with acceptable risks.
How do you test? Systematically. Black box, white box, equivalence class analysis.
What is a failure? The visible manifestation of a defect.
What is a defect? Cause of a failure.
Can you test your own results well? There are psychological aspects that need to be taken into account.
Test Documentation
Test specification: Specifies the test cases. Logical test cases are supplemented by concrete test data and test steps before or during execution. Concrete test data is often only logged in the event of an error.
Error logging: Contains the reference to the test case that led to the error, all information required to reproduce the test case, and a classification of the error.
Test protocol: Summary of the executed test cases, the test results, and at the system test level, a basis for discussion with the customer in preparation for acceptance.
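A hypothetical error log entry, purely to illustrate the required parts (test case reference, reproduction information, classification):

    Error log entry (illustrative):
      Test case:      TC-017 "Inserting an item in an empty ToDo list" (requirement A-102)
      Reproduction:   Build 1.4.2, test environment X, input "Buy milk", DB snapshot S1.
      Observed:       System shows an empty list instead of the updated list.
      Classification: Severity: major; functional defect.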
Verification vs. Validation
Testing "against requirements": Verification.
Check against "application context": Validation.
Validation using Usability
Not everything that is functionally ok is also easy to use.
10 heuristics for the design of user interfaces (Jakob Nielsen):
Principle 1: Visibility of system status - FEEDBACK.
Principle 2: Match between system and the real world - METAPHOR.
Principle 3: User control and freedom - NAVIGATION.
Principle 4: Consistency and standards - CONSISTENCY.
Principle 5: Error prevention - PREVENTION.
Principle 6: Recognition rather than recall - MEMORY.
Principle 7: Flexibility and efficiency of use - EFFICIENCY.
Principle 8: Aesthetic and minimalist design - DESIGN.
Principle 9: Help users recognize, diagnose, and recover from errors - RECOVERY.
Principle 10: Help and documentation - HELP.
Review
Review: A type of static testing in which the quality of a work product or process is evaluated by individuals (ISTQB Certified Tester Glossary).
When is a Review Useful?
As soon as you think: "Someone should look over my (interim) result."
Before milestones.
Before customer appointments.
Before handover to the next phase.
Before you accept an (interim) result.
Who Carries Out Reviews?
Everyone is responsible for the quality of a product!
Quality cannot be tested into a product!
Error Occurrence and Troubleshooting
Model from product development transferable to SE
Studies have shown that the majority of defects are introduced during development. In contrast, they are often only noticed very late, during testing or even during use. The later they are found, the more they cost to rectify, according to the so-called "Rule of Ten".
Rule of Ten of Troubleshooting Costs
Model from product development transferable to SE
According to the Rule of Ten, the cost of rectifying a defect increases by a factor of ten from one development stage to the next. In extreme cases, recall costs can reach 1000 times the manufacturing costs. Recent recall campaigns in the automotive industry show that the costs of a single recall can add up to €1.4 billion.
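With illustrative numbers (assumed, not from the source): a defect that costs €1 to fix during requirements costs roughly €10 in design, €100 in implementation, €1,000 in testing, and €10,000 once the product is in the field.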
How do you Carry Out Reviews?
Systematically.
Checklist-based: A checklist created in advance is used as a basis.
Role-based: Review from a specific perspective, e.g. customer, user, PO, developer.
Perspective-based: Certain aspects defined in advance are reviewed, e.g. consistency or compliance.
Ad hoc: Little or no predefined structure.
Formal vs. informal.
A combination of both is effective.
Review Documentation
Checklists for the checklist/perspective-based review.
If applicable, process description for formal review types.
Review protocol contains all findings of a review as well as a summarized evaluation.
Rules for Formulating Review Findings
Formulate the findings neutrally and objectively.
Classify the errors according to their severity.
Please note that a review refers to an artifact and not to a person!
Provide an opportunity for queries!
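An illustrative formulation (hypothetical example): instead of "You forgot the error codes", write "Section 3.2 of the interface description does not list the error codes", classify the finding (e.g. as major), and invite the author to ask follow-up questions.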