Why Validate Models?
Many environmental models are black boxes, meaning it's hard to gain any real understanding of the system and how it works. By validating models, we can make them more transparent and build our understanding of the system.
There are various tools to test and evaluate environmental models:
Sensitivity & uncertainty analysis
Cross-validation
Etc
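One of the tools above, cross-validation, can be sketched in a few lines. This is a minimal, hypothetical example: a toy linear "environmental model" fitted by least squares, evaluated with leave-one-out cross-validation. All names and the data are made up for illustration.

```python
import random
import statistics

def fit_linear(xs, ys):
    """Least-squares fit of y = a*x + b (a stand-in for an environmental model)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def loo_cv_error(xs, ys):
    """Leave-one-out cross-validation: hold out each point, refit, predict it."""
    errors = []
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i + 1:]
        train_y = ys[:i] + ys[i + 1:]
        a, b = fit_linear(train_x, train_y)
        errors.append((ys[i] - (a * xs[i] + b)) ** 2)
    return statistics.mean(errors)  # mean squared prediction error

# Hypothetical data: a response driven linearly by a forcing variable, plus noise
random.seed(0)
xs = [float(x) for x in range(20)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.5) for x in xs]
cv_mse = loo_cv_error(xs, ys)
```

The held-out error estimates how the model performs on data it was not fitted to, which is the point of cross-validation: a low in-sample fit error alone says little about predictive skill.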
Compounding Uncertainties
Uncertainties are brought into models at every step of making and using a model. Remember that this includes:
Conceptualisation of the model & our thinking of the problem
Analysis of relationships with a chosen developed model
Testing, refining, and evaluating the developed model, a feedback loop of adjusting parameters and outputs
Applications of model
Uncertainties are any departure from the real world, i.e. any lack of complete determinism. They can arise even when lots of information is available and accessible, since it depends on how we use that information.
Epistemic Uncertainty
Uncertainties surrounding determinate facts
Measurement error, imperfections in sampling technique (instruments, improper usage, etc)
Systematic error, bias in the sampling technique
Natural variation/Inherent randomness, always present in systems (non-deterministic systems)
Model uncertainty from omission of information and representation
Subjective judgement, the ways we interpret data (our bias, understandings, interpretations, etc)
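The difference between the first two items, random measurement error and systematic error (bias), can be shown with a small simulation. This is a hypothetical sketch: the "sensor" readings and the 1-unit calibration bias are invented for illustration.

```python
import random
import statistics

random.seed(42)
TRUE_VALUE = 20.0  # hypothetical true quantity being measured

# Random measurement error: readings scatter around the truth,
# so the mean of many readings converges toward the true value
random_readings = [TRUE_VALUE + random.gauss(0, 0.5) for _ in range(1000)]

# Systematic error: a miscalibrated instrument reads 1 unit high, every time,
# so no amount of repeated sampling averages the bias away
biased_readings = [r + 1.0 for r in random_readings]

random_error = statistics.mean(random_readings) - TRUE_VALUE
systematic_error = statistics.mean(biased_readings) - TRUE_VALUE
```

Repeating measurements reduces the effect of random error on the mean, but the systematic offset persists, which is why the two are listed as separate sources of epistemic uncertainty.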
Linguistic Uncertainty
Uncertainties surrounding communication of models
Non-numeric vagueness, fuzziness around quantities ('a lot', 'many', etc.) that permits borderline cases
Context dependence, failure to understand the context (relational, etc)
Ambiguity, words that have more than one meaning
Underspecificity, unwanted generality
Location Uncertainties
Context: the boundaries of system (how closed is it, have they been identified, etc)
Model uncertainty, in the structure (structural uncertainty, number of inputs, how they’re used) or from hardware/software (technical uncertainty)
Input uncertainty: some inputs can be controlled, others can't (random). Uncertainty about the exogenous factors that drive a system (and their importance) and about the data available to describe it (amount of knowledge vs. variability of data)
Parameter uncertainty, associated with data used to parameterise and calibrate a model, which tend to be loosely constrained & difficult to measure
Outcome uncertainty, i.e. the prediction errors
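How a loosely constrained parameter turns into outcome uncertainty can be sketched with a Monte Carlo propagation. This is a hypothetical example: a toy first-order decay model with an assumed Gaussian distribution for its rate parameter; the model, values, and names are all illustrative.

```python
import math
import random
import statistics

def decay_model(c0, k, t):
    """Toy first-order decay model: c(t) = c0 * exp(-k * t)."""
    return c0 * math.exp(-k * t)

random.seed(1)
# The decay rate k is loosely constrained by data, so we treat it as a
# distribution (assumed Gaussian here) rather than a single calibrated value,
# and run the model once per sampled k
concentrations = [decay_model(100.0, random.gauss(0.10, 0.02), 10.0)
                  for _ in range(5000)]

# The spread of model outputs is the outcome uncertainty induced
# by the parameter uncertainty
mean_c = statistics.mean(concentrations)
spread_c = statistics.stdev(concentrations)
```

Reporting the spread alongside the mean makes the compounding explicit: a single "best" parameter value would hide how sensitive the prediction is to what we don't know about k.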
Level of Uncertainties
Depending on the system being investigated, our level of uncertainty can vary
Statistical uncertainty, we can describe with statistical terms (sampling error, inaccuracies, etc)
Scenario uncertainty, related to uncertainty in qualitative outcomes
Recognised ignorance, 'known unknowns'
Total ignorance, ‘unknown unknowns’
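The first level, statistical uncertainty, is the one we can quantify directly. A minimal sketch, with hypothetical sample data: estimate a mean, its standard error (the sampling error), and an approximate 95% confidence interval.

```python
import math
import random
import statistics

random.seed(7)
# Hypothetical field samples of some quantity whose true mean is 5.0
samples = [random.gauss(5.0, 1.0) for _ in range(100)]

mean = statistics.mean(samples)
# Standard error of the mean: the sampling error shrinks as 1/sqrt(n)
se = statistics.stdev(samples) / math.sqrt(len(samples))
# Approximate 95% confidence interval using the normal quantile 1.96
ci_low, ci_high = mean - 1.96 * se, mean + 1.96 * se
```

This is what it means to "describe uncertainty in statistical terms": scenario uncertainty and the two kinds of ignorance offer no such interval, which is why they sit at higher levels.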
Nature of Uncertainties
Epistemic uncertainty, in what we ‘know’, and is reducible (limited & inaccurate data, measurement errors, limited understanding, subjective judgements, etc)
Variability uncertainty, randomness induced by variation from many potential sources (external input data, parameters, and certain model structures); inherent variation in the field that we can only know so much about because the system is chaotic