Software crisis
what is the software crisis
Term used during early computing but still applicable now: the difficulty of producing useful, efficient, well-designed programs in the required time, caused by the rapid increase in computer power and in the complexity of the problems that could be attempted
what caused the software crisis
As software complexity increased, many software problems arose because existing methods were inadequate.
Also caused by the fact that computing power has outpaced programmers' ability to use those capabilities effectively.
processes and methodologies have been created to try to improve it, but large projects will always be vulnerable
what are the signs of the software crisis
over budget
over time
inefficient software
low quality software
not meeting requirements
hard to maintain
never delivered
what is a software engineering study
consists of an aim - what the study tries to answer or do
methodology - how to conduct design and implementation
results
conclusions - must be based on evidence and critical thinking, taking the threats to validity into account
threats to validity - anything that could affect the credibility of the results or of the product developed
which of these bathtub curves is the best
people can have different effects on the bathtub curve
the red curve is best
what data is needed to construct a bathtub curve
requires accurate failure data (not always easy to gather):
need to measure time according to a timescale decided by the team (inherently subjective), e.g.
per release
hourly, daily, monthly, yearly
(a minimal bucketing sketch follows below)
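As a rough illustration of turning raw failure data into a curve (a sketch, not from the source material): the snippet below buckets hypothetical failure dates into months since release; the dates, severities, and the monthly timescale are all assumptions for the example.

```python
from collections import Counter
from datetime import date

# Hypothetical failure log gathered from an issue tracker: (failure date, severity)
failures = [
    (date(2023, 1, 5), "minor"), (date(2023, 1, 20), "major"),
    (date(2023, 2, 3), "minor"), (date(2023, 6, 17), "critical"),
    (date(2023, 11, 2), "major"), (date(2023, 12, 9), "minor"),
]

# Bucket failures per month since release (the team-chosen timescale) to plot as a bathtub curve
release = date(2023, 1, 1)
per_month = Counter(
    (d.year - release.year) * 12 + (d.month - release.month) for d, _ in failures
)
for month in range(13):
    print(f"month {month:2d}: {per_month.get(month, 0)} failure(s)")
```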
why is it hard to gather accurate failure data
hard to know which failures to include
minor, major, critical failures?
the severity of each is subjective
if all 3 severities are combined into 1 bathtub, it becomes difficult to compare bathtubs on a common basis, as minor failures are weighted the same as showstoppers
if the types are tracked individually, a separate bathtub needs to be created for each
how does brooks law impact the bathtub curve
a bathtub can be drawn for a system under development as well as a released system; the end of the curve coincides with more people being added to a late project (Brooks's law) and a higher failure rate
how do refactoring practices impact the bathtub curve
when you refactor, there is a temporary dip in the failure rate, but once the benefit of the refactoring wears off the rate rises again, producing a spiky, sawtooth-like bathtub curve - this shows that refactoring gains are temporary, and later in a system's life more effort is needed for the same results as earlier
bathtubs are not always a smooth line; they can have bumps and zigzags representing tech debt and refactoring changes
what is a productivity drain
any wasted activity or situation that negatively affects a programmer's ability to deliver code on time and on budget
why are productivity drains bad
bad because productivity drains impact how well programmers can carry out tasks like refactoring
they show how human factors have a significant impact on software engineering; many of its principles depend on whether humans can follow them effectively
most of the time they cannot, which contributes to the software crisis
what are the 6 productivity drains
misaligned work
excess work in progress
rework
demand vs. capacity imbalance
repetitive and manual effort
aged and cancelled effort
what is the drain - misaligned work
basically ‘productive procrastination’
based on unclear communication and a poor grasp of priorities
developers tackle tasks that don't directly contribute to the important work
can tech debt or TODOs be classed as misaligned work? there's a balance between what should be done now and what can be left
what is the drain - excess work in progress
no penalty for overloading developers
this creates a silent stressor as incomplete tasks pile up and project deadlines slip (tech debt)
what is the drain - rework
having to redo work - possibly refactoring; again, it is debatable whether refactoring counts as rework or as useful work
a roadblock to productivity (a blocker), caused by:
a tangled web of unclear requirements
poor communication between teams
inadequate testing practices
unaddressed tech debt
what is the drain - demand vs. capacity imbalance
mismatch between the demand for work and the available capacity to handle it (basically either too much work or no work at all, with no balance)
occurs when one stage in a connected workflow operates too quickly or slowly for the next stage
what is the drain - repetitive manual effort
repetitive tasks like manual testing, data entry, and other low-value work
this work steals valuable time from better, more innovative tasks and could often just be automated
what is the drain - aged and cancelled effort
miscommunication or inflexibility can cause work to be abandoned or to become obsolete (binned work) before completion
caused by a lack of adaptation to new information or feedback
what is software cost estimation
estimating the time, cost, and effort it will take to develop a system
what is the cone of uncertainty in estimation
a concept showing that, over time, the error made when estimating how long something will take to complete goes down
estimation accuracy increases as you become more knowledgeable about what the system is going to do
what is a BDUF project
big design up front
associated with waterfall methodology where design must be completed and perfected before implementation
what is an iterative project
the project is developed in iterations or stages that can be repeated as needed, associated with agile
how does the cone of uncertainty differ between BDUF and iterative projects
BDUF - greater estimation error
iterative - better estimation accuracy
what are the different software estimation techniques
algorithmic cost modelling
expert judgement
estimation by analogy
Parkinson’s law
specification-based
what is algorithmic cost modelling estimation
a formulaic approach based on historical cost information, generally based on software size
cost/effort is estimated as a formula including product, project, and process attributes (A, B, M)
values estimated by project manager
most models are similar but have different A, B, M values
doesn’t always involve experts, just people who collect the data
what is the formula for algorithmic cost modelling
Effort = A * Size^B * M
A is a constant reflecting anticipated system complexity and organisational practice, decided by the PM - again, this is subjective to define
Size is an estimate of the software's size (e.g. KLOC or function points)
B reflects the disproportionate effort needed for larger projects
M is a multiplier reflecting product, process, and people attributes
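A minimal sketch of the formula in Python; the constants (A = 2.5, B = 1.05, M = 1.0) and the 32 KLOC size are illustrative placeholders, since each organisation calibrates its own values from historical project data.

```python
def estimate_effort(size_kloc: float, a: float = 2.5, b: float = 1.05, m: float = 1.0) -> float:
    """Generic algorithmic cost model: Effort = A * Size^B * M (person-months).
    The defaults are illustrative, not calibrated values."""
    return a * (size_kloc ** b) * m

# e.g. an estimated 32 KLOC system with nominal product/process/people multipliers
print(f"{estimate_effort(32):.1f} person-months")  # roughly 95 person-months
```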
what is expert judgement estimation
1 or more experts in both software development and the application domain use their experience to predict software costs
the process iterates until consensus is reached
similar to algorithmic cost modelling, but intuition only and no formula
what are the pros of expert judgement
cheap estimation method
can be accurate if experts have direct experience of similar systems
what are the cons of expert judgement
very inaccurate if there are no true experts
what is estimation by analogy
cost of a project computed by comparing the project to a similar project in the same application domain
NASA does this a lot since many systems are similar, so past systems act as a good guide for future system estimation
what are the pros of estimation by analogy
accurate if previous project data available
what are the cons of estimation by analogy
impossible if no comparable project has been done and the estimate has to start from the beginning
needs systematically maintained cost data
what is Parkinson’s law estimation
states that the work expands to fill the time available, so whatever costs are set, they’ll always be used up
how do we combat Parkinson’s law
state that the project will cost whatever resources are available and that's all it's getting (applying a fixed cost)
what are the pros of Parkinson’s law
no risk of overspending
what are the cons of Parkinson’s law
the system is usually unfinished because the cost is determined by the available resources rather than by an objective assessment of the work required
what is specification-based estimation
the project cost is agreed on the basis of an outline proposal, and the development is constrained by that cost
a detailed specification may be negotiated later
may be the only appropriate approach when other detailed information is lacking - with no prior knowledge, the specification itself is the guide to how long development will take, e.g. writing a spec at university, where you are your own expert
what are some issues with Cyclomatic Complexity
entirely different programs can have the same CC value (see the sketch after this list)
can't be calculated consistently where case and switch statements are used for conditions
people argue it is no better than LOC, so you might as well use LOC
hard to interpret, e.g. CC 10 vs. CC 20 -
is the latter 2x as complex?
some classes have a very high CC but can be simple and easy to understand, e.g. doing 1 thing that has many conditions
developed for procedural languages like C - should it be used for OO?
class-level CC is often replaced by avg(CC), the average CC of the methods in the class
this does not represent the spread of CC across the class, e.g. {10,1,1} vs. {4,4,4} (both average 4)
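To make the "different programs, same CC" point concrete, here is a rough sketch that approximates CC by counting decision points in Python source; the counting rule is simplified (real tools such as radon or lizard handle more constructs), and the two example functions are invented for illustration.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Rough CC estimate: 1 + number of decision points (if/for/while/except/ternary/boolean op).
    A simplification of the real definition, good enough to illustrate the point."""
    decisions = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp, ast.BoolOp)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(ast.parse(source)))

# Two structurally different functions that end up with the same CC (3)
validate = """
def validate(x):
    if x is None:
        return False
    if x < 0:
        return False
    return True
"""

retry = """
def retry(task, attempts):
    for _ in range(attempts):
        if task():
            return True
    return False
"""

print(cyclomatic_complexity(validate), cyclomatic_complexity(retry))  # 3 3
```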
what are some issues with DIT
nobody has come up with a good threshold value for DIT, e.g. the earlier inheritance study is not the only correct answer
some systems need lots of inheritance and some don't, e.g. GUIs have high DIT vs. maths-based systems which have low DIT
most developers in industry don't see the value of DIT, so what's the point in collecting it? LOC and CC are more favoured - most systems have a median DIT of 1
most systems collapse to DIT 1 as a result of merging sub- and superclasses
DIT is meant to reflect how humans structure information, but in reality it doesn't really show this; we don't know what level of inheritance is good (a minimal way to compute DIT is sketched below)
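As a small illustration of what the metric measures (a sketch, with a made-up hierarchy), the snippet below computes DIT for Python classes by walking base classes up to object, which is treated as the root with DIT 0.

```python
def dit(cls: type) -> int:
    """Depth of Inheritance Tree: length of the longest path from cls up to the root (object = 0)."""
    if cls is object:
        return 0
    return 1 + max(dit(base) for base in cls.__bases__)

# A made-up GUI-style hierarchy (deeper) vs. a standalone maths-style class (DIT 1)
class Widget: ...
class Control(Widget): ...
class Button(Control): ...
class MatrixSolver: ...

print(dit(Button), dit(MatrixSolver))  # 3 1
```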
what is Di Penta 2020 refactoring study about
so many things were developed in the 60s-80s, because nobody has come up with a better alternative since
what is Tuckman’s Theory of Teams
suggests that every team goes through four key stages as they work on a task before it can become effective
these phases are all necessary and inevitable for a team to grow, face up to challenges, tackle problems, find solutions, plan work, and deliver results
as the team develops maturity and ability, relationships establish, and leadership style changes to more collaborative or shared leadership
what are the phases in Tuckman's theory of teams
forming
storming
norming
performing
adjourning
what is the forming phase
team spends time getting acquainted
open discussion on the purpose, methods, and problems of the task
what is the storming phase
open debate - conflict may exist on the goals, leadership, methods of task
most challenging stage - lack of trust, personality clashes, productivity nosedive from misaligned work
what is the norming phase
a degree of consensus enables the team to move on
differences are worked through, team learns to cooperate and take responsibility - focus on team goals and productivity
harmony between members
what is the performing phase
team is effective
compliance with group norms and practices emerges, allowing the team to be flexible, functional, and constructive
the team requires minimal assistance from leadership and can be effective in productivity and performance
what is the adjourning phase
the team agrees it has completed the task
followed optionally by mourning stage (not in the original stages but added after) where the team is nostalgic that it’s over if the team was closely bonded
what is the role of the project manager in Tuckman's theory of teams
need to support the team throughout the different stages in different ways
forming - be open and encourage open communication, give high guidance as roles are unestablished
storming - be patient, focus the energy, support the wounded, authoritative leadership to push things back on track
norming - a chance to coordinate the consensus, can be relaxed but ensure it doesn’t go back to storming
performing - maintain the team momentum (not easy), little guidance needed
(the earlier inheritance study mentioned above was just a student study on inheritance)