Soft Eng Ch 8: Software Testing


29 Terms

1

Program Testing

Testing is intended to show that a program does what it is intended to do and to discover program defects before it is put into use.

When you test software, you execute a program using artificial data.

You check the results of the test run for errors, anomalies or information about the program’s non-functional attributes.

Testing can reveal the presence of errors, NOT their absence.

Testing is part of a more general verification and validation process, which also includes static validation techniques.

2

Program testing goals

To demonstrate to the developer and the customer that the software meets its requirements.

  • For custom software, this means that there should be at least one test for every requirement in the requirements document. For generic software products, it means that there should be tests for all of the system features, plus combinations of these features, that will be incorporated in the product release. (Ex: validation testing)

To discover situations in which the behavior of the software is incorrect, undesirable or does not conform to its specification.

  • Defect testing is concerned with rooting out undesirable system behavior such as system crashes, unwanted interactions with other systems, incorrect computations and data corruption. (Ex: defect testing)

3

Validation testing

To demonstrate to the developer and the system customer that the software meets its requirements

A successful test shows that the system operates as intended. (system performs correctly given a set of test cases)

4

Defect testing

To discover faults or defects in the software where its behavior is incorrect or not in conformance with its specification

A successful test is a test that makes the system perform incorrectly and so exposes a defect in the system. (test cases are designed to expose defects)

5

V & V confidence

The aim of V & V is to establish confidence that the system is ‘fit for purpose’.

Depends on system’s purpose, user expectations and marketing environment

Software purpose

• The level of confidence depends on how critical the software is to an organization.

User expectations

• Users may have low expectations of software due to previous experience

• May push for more testing as a result

Marketing environment

• Getting a product to market early or first may be more important than finding defects in the program.

6

Software Inspections

Concerned with analysis of the static system representation to discover problems (static verification)

 May be supplemented by tool-based document and code analysis.

 Involves viewing requirements, models, source code and test cases.

7

Software Testing

Concerned with exercising and observing product behavior (dynamic verification)

 The system is executed with test data and its operational behavior is observed.

8

Inspection limits

Inspections involve people examining the source representation with the aim of discovering anomalies and defects.

 Inspections do not require execution of a system so may be used before implementation.

 They may be applied to any representation of the system (requirements, design, configuration data, test data, etc.).

 They have been shown to be an effective technique for discovering program errors.

Inspections and testing are complementary and not opposing verification techniques.

 Both should be used during the V & V process.

Inspections can check conformance with a specification but not conformance with the customer’s real requirements.

 Inspections cannot check non-functional characteristics such as performance, usability, etc.

9

Scenario based testing

The best way to systematically test a system is to start with a set of scenarios that describe possible uses of the system and then work through these scenarios each time a new version of the system is created.

involves inventing a typical usage scenario and using this to derive test cases.

 Using the scenario, you identify a set of end-to-end pathways that users might follow when using the system.

 An end-to-end pathway is a sequence of actions from starting to use the system for the task, through to completion of the task.

10

Development testing

includes all testing activities that are done by the team building the system. (unit testing, component testing, system testing)

11

Unit testing

is the process of testing individual components in isolation.

 It is a defect testing process.

Units may be:

Individual functions or methods within an object

 Object classes with several attributes and methods

 Composite components with defined interfaces used to access their functionality.

12

Object class testing

Complete test coverage of a class involves

  • Testing all operations associated with an object

  • Setting and interrogating all object attributes

  • Exercising the object in all possible states.

Inheritance makes it more difficult to design object class tests, as the information to be tested is not localised.
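As an illustration, here is a minimal JUnit 5 sketch of state-based class testing. The WeatherStation class, its State values and its restart/shutdown operations are all invented for this sketch, not taken from the notes:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Minimal hypothetical class under test, just enough to make the tests run.
class WeatherStation {
    enum State { SHUTDOWN, RUNNING }
    private State state = State.SHUTDOWN;
    State getState() { return state; }
    void restart()  { state = State.RUNNING; }
    void shutdown() { state = State.SHUTDOWN; }
}

class WeatherStationStateTest {

    @Test
    void newStationStartsShutDown() {
        // Interrogate an attribute of a freshly created object.
        assertEquals(WeatherStation.State.SHUTDOWN, new WeatherStation().getState());
    }

    @Test
    void restartMovesStationToRunning() {
        WeatherStation ws = new WeatherStation();
        ws.restart();                                   // exercise one state transition
        assertEquals(WeatherStation.State.RUNNING, ws.getState());
    }

    @Test
    void shutdownIsValidFromRunningState() {
        WeatherStation ws = new WeatherStation();
        ws.restart();
        ws.shutdown();                                  // exercise the reverse transition
        assertEquals(WeatherStation.State.SHUTDOWN, ws.getState());
    }
}
```

Each test exercises one operation and checks the resulting state, so together the tests cover every operation and every reachable state of this (very small) class.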

13

Automated testing

Whenever possible, unit testing should be automated so that tests are run and checked without manual intervention.

In automated unit testing, you make use of a test automation framework (such as JUnit, AssertJ) to write and run your program tests.

Unit testing frameworks provide generic test classes that you extend to create specific test cases. They can then run all of the tests that you have implemented and report, often through a GUI in the IDE, on the success or otherwise of the tests.

An automated test has three parts:

  • A setup part, where you initialize the system with the test case, namely the inputs and expected outputs.

  • A call part, where you call the object or method to be tested.

  • An assertion part, where you compare the result of the call with the expected result. If the assertion evaluates to true, the test has been successful; if false, it has failed.
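For example, a minimal JUnit 5 sketch of these three parts. The calculateDiscount method and its discount rule are invented for illustration:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class DiscountTest {

    // Hypothetical method under test, included so the example is self-contained.
    static double calculateDiscount(double price, int quantity) {
        return quantity >= 10 ? price * 0.9 : price;
    }

    @Test
    void bulkOrdersGetTenPercentDiscount() {
        // Setup: initialize the test case inputs and the expected output.
        double price = 100.0;
        int quantity = 10;
        double expected = 90.0;

        // Call: invoke the method being tested.
        double actual = calculateDiscount(price, quantity);

        // Assertion: compare the result of the call with the expected result.
        assertEquals(expected, actual, 0.0001);
    }
}
```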

14

Choosing unit test cases

The test cases should show that, when used as expected, the component that you are testing does what it is supposed to do.

 If there are defects in the component, these should be revealed by test cases.

This leads to 2 types of unit test case:

 The first of these should reflect normal operation of a program and should show that the component works as expected.

 The other kind of test case involves abnormal inputs to check that these are properly processed and do not crash the component.
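As a sketch of these two kinds of test case, here are JUnit 5 tests for a hypothetical parseAge helper (the method and its rules are invented for illustration):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class ParseAgeTest {

    // Hypothetical method under test: parses an age string and rejects bad values.
    static int parseAge(String text) {
        int age = Integer.parseInt(text.trim());
        if (age < 0 || age > 150) {
            throw new IllegalArgumentException("age out of range: " + age);
        }
        return age;
    }

    @Test
    void normalInputIsParsedAsExpected() {
        // First kind: reflects normal operation of the program.
        assertEquals(42, parseAge(" 42 "));
    }

    @Test
    void abnormalInputIsRejectedWithoutCrashing() {
        // Second kind: abnormal input should be processed as a controlled
        // error, not crash the component.
        assertThrows(IllegalArgumentException.class, () -> parseAge("-5"));
    }
}
```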

15

General testing guidelines

Choose inputs that force the system to generate all error messages

 Design inputs that cause input buffers to overflow

 Repeat the same input or series of inputs numerous times

 Force invalid outputs to be generated

 Force computation results to be too large or too small

16

Testing guidelines (sequences)

Test software with sequences which have only a single value.

 Use sequences of different sizes in different tests.

 Derive tests so that the first, middle and last elements of the sequence are accessed.

 Test with sequences of zero length.
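A minimal sketch of these guidelines applied to a hypothetical maxOf function over a list (the function is invented for illustration):

```java
import java.util.List;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class SequenceGuidelineTest {

    // Hypothetical function under test: maximum of a sequence of integers.
    static int maxOf(List<Integer> values) {
        return values.stream()
                .mapToInt(Integer::intValue)
                .max()
                .orElseThrow(() -> new IllegalArgumentException("empty sequence"));
    }

    @Test
    void singleValueSequence() {
        assertEquals(7, maxOf(List.of(7)));             // sequence with only one value
    }

    @Test
    void sequencesOfDifferentSizes() {
        assertEquals(3, maxOf(List.of(1, 3)));          // size 2
        assertEquals(9, maxOf(List.of(2, 9, 4, 1, 6))); // size 5
    }

    @Test
    void firstMiddleAndLastElementsAreAccessed() {
        assertEquals(9, maxOf(List.of(9, 1, 2)));       // maximum is the first element
        assertEquals(9, maxOf(List.of(1, 9, 2)));       // maximum is in the middle
        assertEquals(9, maxOf(List.of(1, 2, 9)));       // maximum is the last element
    }

    @Test
    void zeroLengthSequence() {
        assertThrows(IllegalArgumentException.class, () -> maxOf(List.of()));
    }
}
```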

17

Component testing

Software components are often composite components that are made up of several interacting objects. (test groups of related objects)

 For example, in the weather station system, the reconfiguration component includes objects that deal with each aspect of the reconfiguration.

 You access the functionality of these objects through the defined component interface.

 Testing composite components should therefore focus on showing that the component interface behaves according to its specification.

 You can assume that unit tests on the individual objects within the component have been completed.

18

Interface testing

Objectives are to detect faults due to interface errors or invalid assumptions about interfaces.

Interface types:

Parameter interfaces: Data passed from one method or procedure to another, like methods in an object

Shared memory interfaces: Block of memory is shared between procedures or functions.

Procedural interfaces: Sub-system encapsulates a set of procedures to be called by other sub-systems.

Message passing interfaces: Sub-systems request services from other sub-systems

19

Interface testing guidelines

Design tests so that parameters to a called procedure are at the extreme ends of their ranges.

 Always test pointer parameters with null pointers.

 Design tests which cause the component to fail.

 Use stress testing in message passing systems.

 In shared memory systems, vary the order in which components are activated.
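For instance, a sketched JUnit 5 test applying the null-pointer and extreme-range guidelines to a hypothetical findLargest procedure (invented for illustration):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class InterfaceGuidelineTest {

    // Hypothetical procedure under test, included so the example is self-contained.
    static int findLargest(int[] values) {
        if (values == null || values.length == 0) {
            throw new IllegalArgumentException("values must be a non-empty array");
        }
        int largest = values[0];
        for (int v : values) {
            if (v > largest) {
                largest = v;
            }
        }
        return largest;
    }

    @Test
    void nullPointerParameterIsRejected() {
        // Always test pointer (reference) parameters with null.
        assertThrows(IllegalArgumentException.class, () -> findLargest(null));
    }

    @Test
    void parametersAtExtremeEndsOfTheirRanges() {
        // Parameter values at the extreme ends of their ranges still work.
        assertEquals(Integer.MAX_VALUE,
                findLargest(new int[] { Integer.MIN_VALUE, 0, Integer.MAX_VALUE }));
    }
}
```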

20

System testing

involves integrating components to create a version of the system and then testing the integrated system.

 The focus in system testing is testing the interactions between components.

 System testing checks that components are compatible, interact correctly and transfer the right data at the right time across their interfaces.

 System testing tests the emergent behavior of a system.

 some system functionality and characteristics only become obvious when you put the components together

During system testing, reusable components that have been separately developed and off-the-shelf systems may be integrated with newly developed components.

The complete system is then tested.

 Components developed by different team members or sub-teams may be integrated at this stage.

System testing is a collective rather than an individual process.

 In some companies, system testing may involve a separate testing team with no involvement from designers and programmers.

21

Use-case testing

The use-cases developed to identify system interactions can be used as a basis for system testing.

 Each use case usually involves several system components so testing the use case forces these interactions to occur.

 The sequence diagrams associated with the use case document the components and interactions that are being tested.

22

Test cases derived from sequence diagram

An input of a request for a report should have an associated acknowledgement. A report should ultimately be returned from the request.

 You should create summarized data that can be used to check that the report is correctly organized.

An input request for a report to WeatherStation results in a summarized report being generated.

 Can be tested by creating raw data corresponding to the summary that you have prepared for the test of SatComms and checking that the WeatherStation object correctly produces this summary. This raw data is also used to test the WeatherData object.

23

Testing policies

Exhaustive system testing is impossible so testing policies which define the required system test coverage may be developed.

Examples of testing policies:

 All system functions that are accessed through menus should be tested.

 Combinations of functions (e.g. text formatting) that are accessed through the same menu must be tested.

 Where user input is provided, all functions must be tested with both correct and incorrect input.

24

Test driven development

an approach to program development in which you interleave testing and code development.

Tests are written before code and ‘passing’ the tests is the critical driver of development.

 You develop code incrementally, along with a test for that increment. You don’t move on to the next increment until the code that you have developed passes its test.

 TDD was introduced as part of agile methods such as Extreme Programming. However, it can also be used in plan-driven development processes.

25

TDD Process activities

Start by identifying the increment of functionality that is required. This should normally be small and implementable in a few lines of code.

 Write a test for this functionality and implement this as an automated test.

 Run the test, along with all other tests that have been implemented. Initially, you have not implemented the functionality so the new test will fail.

 Implement the functionality and re-run the test.

 Once all tests run successfully, you move on to implementing the next chunk of functionality.
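A minimal sketch of one such iteration in JUnit 5 (the Greeter class and its greet method are invented for illustration). The test is written and run first, fails, and then just enough code is written to make it pass:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class GreeterTest {

    // Step 1: write an automated test for the small increment of functionality.
    // Run it along with all existing tests: it fails until greet() exists below.
    @Test
    void greetsUserByName() {
        assertEquals("Hello, Ada!", Greeter.greet("Ada"));
    }
}

// Step 2: implement just enough functionality to make the test pass,
// re-run all tests, and only then move on to the next increment.
class Greeter {
    static String greet(String name) {
        return "Hello, " + name + "!";
    }
}
```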

26

Benefits of TDD

Code coverage

  • Every code segment that you write has at least one associated test, so all of the code that you write is executed when the tests run.

Regression testing

  • A regression test suite is developed incrementally as a program is developed.

Simplified debugging

  • When a test fails, it should be obvious where the problem lies. The newly written code needs to be checked and modified.

System documentation

  • The tests themselves are a form of documentation that describe what the code should be doing.

27

Regression testing

is testing the system to check that changes have not ‘broken’ previously working code.

 In a manual testing process, regression testing is expensive but, with automated testing, it is simple and straightforward. All tests are rerun every time a change is made to the program.

 Tests must run ‘successfully’ before the change is committed.

28

Stages of acceptance testing

  • Define acceptance criteria

Acceptance implies that the software developers should receive their final payment.

  • Plan acceptance testing

  • Derive acceptance tests

  • Run acceptance tests

  • Negotiate test results

  • Reject/accept system

29

Agile methods and acceptance testing

In agile methods, the user/customer is part of the development team and is responsible for making decisions on the acceptability of the system.

 Tests are defined by the user/customer and are integrated with other tests in that they are run automatically when changes are made.

 There is no separate acceptance testing process.

 The main problem here is whether or not the embedded user is ‘typical’ and can represent the interests of all system stakeholders.