Review
is a process for evaluating a documented software project product
Formal Design Reviews, Peer Reviews, Expert Opinions
Give the (3) Methodologies of Review
Formal design reviews
also called design reviews (DRs); they differ from all other review methods in being the only reviews required for approving the design product, so the project cannot continue to the next phase without that approval
Review Leader
Appropriate candidates for review team leadership
Review Team
The entire team should be selected from among the senior members of the project team
Inspection participants, Walkthrough participants
Give the (2) recommended peer review teams
Inspection Participants
The number of participants in an inspection ranges from a minimum of three to a maximum of six
Review Leader (Moderator)
The leader of the inspection, responsible for planning and coordinating it
The Author
is, without exception, a participant in each type of peer review
Designer, Coder or Implementer, Tester
These (3) are the Specialized Professionals
A Designer
The system analyst responsible for analyzing and designing the software system under review.
A Coder or Implementer
A professional who is thoroughly acquainted with coding tasks, preferably the designated coding team leader
A Tester
This experienced professional, preferably the leader of the assigned testing team, focuses on identifying design errors usually detected during the testing phase
Walkthrough Participants
The participants are led through the material in one of two formats
Review Leader (Coordinator)
Candidates for the coordinator position should have traits similar to those of the inspection moderator.
The Author
is, without exception, a participant; in many cases, he/she serves as the coordinator
A Standards Enforcer
A team member who specializes in development standards and procedures and is assigned to locate deviations from those standards and procedures
A Maintenance Expert
called upon to focus on maintainability, flexibility, and testability issues
A User Representative
Participation of an internal or external user representative in the walkthrough team contributes to the review’s validity
Expert opinions
prepared by outside experts, support quality evaluation by introducing additional capabilities to the internal review staff
Software testing
is an activity in which a system or component is executed under specified conditions.
Software testing levels
are the different stages of the software development lifecycle where testing is conducted
Unit Testing
A level of the software testing process where individual units of software are tested
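To make this level concrete, here is a minimal unit test sketch in Python's standard unittest framework; the apply_discount function is a hypothetical unit invented for this illustration.

```python
import unittest

# Hypothetical unit under test, invented for illustration.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class ApplyDiscountTest(unittest.TestCase):
    # Each test exercises the unit in isolation, with no other
    # modules involved, which is the defining trait of this level.
    def test_typical_discount(self):
        self.assertAlmostEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertAlmostEqual(apply_discount(99.9, 0), 99.9)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```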
Integration Testing
A level of the software testing process where individual units are combined and tested as a group
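The same idea one level up, as a minimal sketch in unittest; parse_order and price_order are hypothetical units assumed to have passed their own unit tests, and the test exercises the seam between them.

```python
import unittest

# Two hypothetical units, assumed to have passed their own unit tests.
def parse_order(line: str) -> tuple[str, int]:
    """Parse an 'item,quantity' line into a (name, quantity) pair."""
    name, qty = line.split(",")
    return name.strip(), int(qty)

def price_order(name: str, qty: int, prices: dict[str, float]) -> float:
    """Look up the unit price and compute the order total."""
    return prices[name] * qty

class OrderPipelineIntegrationTest(unittest.TestCase):
    def test_units_combined_as_a_group(self):
        # The integration test feeds the parser's output into the
        # pricing function, checking the interface between the units.
        prices = {"widget": 2.5}
        name, qty = parse_order("widget, 4")
        self.assertAlmostEqual(price_order(name, qty, prices), 10.0)

if __name__ == "__main__":
    unittest.main()
```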
System Testing
A level of the software testing process where a complete, integrated system is tested
Acceptance Testing
A level of the software testing process where a system is tested for acceptability
Incremental Testing
tests the software in steps: software modules are tested as they are completed (unit tests), and then integrated groups of modules are tested together (integration tests)
Big Bang Testing
tests the software as a whole once the completed package is available
Black Box Testing
is also called “functional testing”. It identifies bugs only according to software malfunctions as they are revealed in erroneous outputs (contrast with white box testing, illustrated below).
White Box Testing
is also called “structural testing”. It examines internal calculation paths in order to identify bugs.
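The contrast between the two approaches can be shown on a single hypothetical function; shipping_fee and its fee schedule are invented for illustration. The black box test judges only documented input/output behavior, while the white box tests are chosen so that every internal path, including the branch boundary, is executed.

```python
import unittest

# Hypothetical function under test, invented for illustration.
def shipping_fee(weight_kg: float) -> float:
    """Flat fee for light parcels, per-kilogram fee for heavy ones."""
    if weight_kg <= 2.0:
        return 5.0              # path A: light parcel
    return weight_kg * 3.0      # path B: heavy parcel

class BlackBoxTest(unittest.TestCase):
    # Black box: judge only inputs and outputs, with no knowledge
    # of the code inside.
    def test_documented_behavior(self):
        self.assertEqual(shipping_fee(1.0), 5.0)

class WhiteBoxTest(unittest.TestCase):
    # White box: pick inputs so every internal calculation path,
    # including the boundary between them, is executed.
    def test_light_path_at_boundary(self):
        self.assertEqual(shipping_fee(2.0), 5.0)

    def test_heavy_path(self):
        self.assertEqual(shipping_fee(4.0), 12.0)

if __name__ == "__main__":
    unittest.main()
```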
Manual Testing
is the process of testing software by hand to learn more about it and to find what is and isn’t working
Automated Testing
is the process of testing the software using an automation tool to find the defects
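A minimal sketch of what an automation tool buys you, again assuming Python's unittest as the tool; is_leap_year and its case table are hypothetical. The runner sweeps the whole table without human intervention, where a manual tester would check each case by hand.

```python
import unittest

# Hypothetical unit, invented for illustration.
def is_leap_year(year: int) -> bool:
    """Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class AutomatedLeapYearTest(unittest.TestCase):
    def test_many_cases_automatically(self):
        # The tool iterates over the whole table and reports each
        # failing case separately via subTest.
        cases = [(2000, True), (1900, False), (2024, True), (2023, False)]
        for year, expected in cases:
            with self.subTest(year=year):
                self.assertEqual(is_leap_year(year), expected)

if __name__ == "__main__":
    unittest.main()
```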
Alpha Site Testing
consists of tests performed by potential users at the developer’s site on a new software package.
Beta Site Testing
installs an advance version of the software package at real users’ sites for use under regular operating conditions; it is much more commonly applied than alpha site testing.
Software operation contract review
is based on the contract draft
Software operation services plans
are annual plans that direct management regarding the required resources
Software operation quality metrics
are used to identify trends in software operation services efficiency, effectiveness, and customer satisfaction, and as basic information for planning and budgeting.
Relevant
Related to an attribute of substantial importance
Valid
Measures the required attribute
Reliable
Produces similar results when applied under similar conditions
Comprehensive
Applicable to a large variety of implementations and situations
Mutually exclusive
Does not measure attributes measured by other metrics
Easy and simple
The implementation of the metrics data collection is simple and performed with minimal resources
Does not require independent data collection
Metrics data collection can be integrated with other project data collection systems
Immune to biased interventions by interested parties
The data collection and processing system is protected from unwanted changes
Software Product Metrics
are a quantitative representation of the attributes of software products or intermediate products
Software Product Size Metrics, Software Attributes Metrics
The (2) Classifications of Software Product Metrics
Thousand Lines of Code (KLOC)
This metric represents metrics based on the physical size of the completed software
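A simplified sketch of how such a size metric might be collected, assuming Python sources and counting only non-blank lines; real KLOC conventions differ on how blank and comment lines are treated, and the 'src' directory is a placeholder.

```python
from pathlib import Path

def kloc(root: str, suffix: str = ".py") -> float:
    """Count non-blank source lines under a directory, in thousands.

    A simplified convention: blank lines are skipped, while comment
    lines are still counted.
    """
    lines = 0
    for path in Path(root).rglob(f"*{suffix}"):
        with open(path, encoding="utf-8", errors="ignore") as f:
            lines += sum(1 for line in f if line.strip())
    return lines / 1000

if __name__ == "__main__":
    print(f"{kloc('src'):.2f} KLOC")  # 'src' is a placeholder path
```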
Function Points
This metric represents the result of applying a measure from the group of functional size measurement methods
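A worked sketch using the classic Albrecht/IFPUG-style formula, FP = UFP x (0.65 + 0.01 x sum of the 14 general system characteristic ratings), with average complexity weights; the component counts and ratings below are invented, and real counting also grades each component as simple, average, or complex.

```python
# Average complexity weights from the classic function point method;
# simple and complex components carry lower and higher weights.
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

def function_points(counts: dict[str, int], gsc_ratings: list[int]) -> float:
    """FP = UFP * (0.65 + 0.01 * sum of 14 ratings, each 0..5)."""
    assert len(gsc_ratings) == 14
    ufp = sum(WEIGHTS[kind] * n for kind, n in counts.items())
    vaf = 0.65 + 0.01 * sum(gsc_ratings)
    return ufp * vaf

# Invented example: UFP = 83, VAF = 1.07, so FP is about 88.8.
counts = {"external_inputs": 6, "external_outputs": 4,
          "external_inquiries": 3, "internal_files": 2,
          "external_interfaces": 1}
print(round(function_points(counts, [3] * 14), 1))  # 88.8
```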
Software Functionality Metrics
relate to the following aspects: suitability, accuracy, interoperability, security, and functionality compliance.
Software Reliability Metrics
User metrics distinguish between: Full Reliability, Vital Reliability, and Total Unreliability; a short classification sketch in code follows these three entries.
Full Reliability
When all software system functions perform properly
Vital Reliability
When all vital functions function properly
Total Unreliability
When all software system functions fail
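A minimal sketch that classifies a system's state into the three categories above; the function names and sets are illustrative.

```python
def reliability_state(failed: set, vital: set, all_functions: set) -> str:
    """Classify a system state by which functions currently fail."""
    if not failed:
        return "full reliability"       # every function performs properly
    if failed == all_functions:
        return "total unreliability"    # every function fails
    if not (failed & vital):
        return "vital reliability"      # all vital functions still work
    return "vital function failure"     # at least one vital function down

ALL = {"login", "transfer", "report", "help"}
VITAL = {"login", "transfer"}
print(reliability_state(set(), VITAL, ALL))     # full reliability
print(reliability_state({"help"}, VITAL, ALL))  # vital reliability
print(reliability_state(ALL, VITAL, ALL))       # total unreliability
```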
Software Usability Metrics
relate to the following aspects: understandability, learnability, operability, attractiveness, and usability compliance.
Software Efficiency Metrics
relate to the following aspects: behavior over time, resource utilization, and efficiency compliance
Software Maintainability Metrics
relate to the following aspects: analyzability, changeability, stability, testability, and maintainability compliance.
Software Portability Metrics
relate to the following aspects: adaptability, installability, coexistence, replaceability, and portability compliance.
Software Effectiveness Metrics
relate to a variety of implementation situations, such as corrections and changes to the original software product
Software Productivity Metrics
relate to a variety of implementation situations allowing comparison between tasks and teams and between time periods
Software Safety Metrics
relate to the risk of users being injured as a result of software safety failures.
Software Satisfaction Metrics
relate to user satisfaction, where the level of satisfaction is measured by a questionnaire.