primary data collection
Data collected firsthand by the person conducting the investigation.
secondary data collection
collected by someone else, but used by the researcher
qualitative
relating to, measuring, or measured by the quality of something rather than its quantity.
qualitative: ordinal
ordered with relative value (small, medium, large)
qualitative: nominal
Data in named categories (labels) with no inherent order, e.g. eye colour.
Quantitative
capable of being measured or expressed as an amount using numbers
Quantitative: Discrete
Countable numeric values, e.g. 1, 2, 3.
Quantitative: Continuous
Numeric values that can take any value within a range (measured rather than counted), e.g. 1.1, 1.15, 1.2.
IV (independent variable)
The variable the researcher deliberately changes (manipulates) in order to observe its effect on the dependent variable.
DV (dependent variable)
The variable that is measured; it is expected to change in response to changes in the IV.
Controlled Variable
A factor in an experiment that the researcher purposely keeps the same so that it does not affect the dependent variable.
Confounding variable
A variable other than the IV that has affected the results, so the researcher cannot tell whether the change in the DV was caused by the IV or by this variable.
Case study
An in-depth investigation of a particular person, group, or situation in which the variable of interest is not controlled but observed as it naturally occurs within the context of the case.
Controlled experiment
Investigating the relationship between the IV and its effect on the DV.
Correlational study
A non-experimental investigation of the relationship (correlation) between variables, without manipulating or controlling them.
Fieldwork
Collecting data by observing variables in a natural setting outside the laboratory.
Literature review
Using secondary sources for research
Modelling
Constructing a physical or conceptual representation (model) of a system in order to study how it behaves.
Simulation
Using a model to reproduce and investigate the behaviour of a system, e.g. using an app to simulate colour blindness.
Random sampling
A sampling method in which every person in the population has an equal likelihood of being selected.
Experimental group
Group exposed to the IV.
Control group
Group not exposed to the IV.
Random allocation
Assigning participants within the sample so that each has an equal chance of being placed in either the control or the experimental group.
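A minimal Python sketch (hypothetical names and group sizes, not from the source) of how random sampling and random allocation differ: sampling decides who enters the study, allocation then decides which group each sampled participant joins.

```python
import random

# Hypothetical population and sample size, for illustration only.
population = [f"person_{i}" for i in range(1000)]

# Random sampling: every member of the population has an equal chance of selection.
sample = random.sample(population, k=50)

# Random allocation: every sampled participant has an equal chance of
# ending up in the experimental group or the control group.
random.shuffle(sample)
experimental_group = sample[:25]   # exposed to the IV
control_group = sample[25:]        # not exposed to the IV
```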
Within subjects design
A design in which every participant completes all conditions of the experiment, e.g. all participants complete a problem-solving task both when normally rested and when sleep deprived.
Order effects
Changes in performance that occur because participants complete the experimental tasks more than once (e.g. practice or familiarity), rather than because of the IV.
Counterbalancing
A method of overcoming order effects by having half of the participants complete the conditions in one order while the other half complete them in the reverse order.
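A small sketch of counterbalancing using the sleep-deprivation example above (participant IDs and condition names are made up): half of the participants complete the conditions in one order, the other half in the reverse order.

```python
import random

# Hypothetical participant IDs.
participants = [f"p{i}" for i in range(20)]
random.shuffle(participants)
half = len(participants) // 2

# Half of the sample does rested -> sleep deprived, the other half the reverse,
# so order effects are spread evenly across the two conditions.
orders = {p: ("rested", "sleep_deprived") for p in participants[:half]}
orders.update({p: ("sleep_deprived", "rested") for p in participants[half:]})
```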
Between Subjects Design
A design in which each participant is randomly allocated to only one condition of the experiment, either control or experimental.
Mixed design
A study with both between-subjects and within-subjects aspects, often used when multiple IVs affect the DV.
Principles: Integrity
Ethical principle of being honest, communicating results and publishing them even if unfavorable.
Justice
Fairness in all aspects of research, recruiting, treatment, ensuring fairness for all participants.
Beneficence
Doing good, maximizing benefits for all and minimizing harms, avoiding causing harm.
Non maleficence
Avoiding causing harm; where some harm or discomfort is unavoidable, it must not be disproportionate to the benefits of the research.
Respect
Respect everyone's beliefs and decisions, which should be protected.
Reproducibility
Repeating research under changed conditions and obtaining consistent results.
Deidentify people
Part of the confidentiality guideline: identifying information is removed from participants' data, and data is stored and disposed of securely.
Voluntary participation
Participants can say no, with no pressure to participate and the freedom to discontinue participation at any time.
Withdrawal rights
Participants' right to leave the study at any stage without penalty; the researcher is responsible for explaining this right to participants beforehand.
Informed consent
Participants must be given information about the study and its nature before agreeing to take part.
Deception in research
Permitted if knowing the truth confounds results, but participants must be debriefed afterward.
Debrief
Participants are told of the deception, aims, results, and conclusions of the study, and informed of a counselor if needed.
Individual participant differences
Unique combination of personal characteristics, abilities, and backgrounds each participant brings to an experiment.
Use of non standard instructions and procedures
When instructions or procedures differ between participants or conditions, which may affect participants' responses.
Experimenter effects/bias
Researchers can inadvertently or intentionally influence a participant's responses.
Placebo effect
A change in participants' responses caused by their belief that they are receiving an effective treatment, rather than by the IV itself.
Situational variables
Environmental factors influencing participant behavior, such as lights, noise, and temperature.
Demand characteristics
Clues in an experiment that tell the participant what the purpose of the experiment is, potentially altering their behavior.
true value
True value: the value, or range of values, you would obtain if your measurement were perfect, without any error.
But there are always errors of all kinds! The true value is something we believe to exist and aspire to, but it is also something we are unlikely to ever know for sure.
accuracy
Accuracy is how close a measurement is to the ‘true’ value of the quantity being measured.
True value = The value, or range of values that would be found if the quantity could be measured perfectly.
It is described in qualitative terms, such as being more accurate or less accurate
For example, if a student’s true value of their height is 172 cm, but they measure their own height as 176 cm by using a damaged ruler, then their value has poor accuracy.
Precision
Precision is how closely a set of measurement values agree with each other. Unlike accuracy, precision does not indicate how close the measurements are to the true value.
If a study was replicated and similar results were obtained this indicates that the results are precise and that our method is reliable.
For example, a fridge thermometer is checked every day for a week and gives the following readings (in °C): 3.1, 3.2, 3.1, 3.1, 3.2, 3.2, 3.1
These results can be considered to be precise as the values are close together and quite consistent.
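One way to put a number on that spread, sketched in Python with the readings quoted above (assuming they are in °C): a small standard deviation relative to the mean indicates precise readings, though it says nothing about accuracy.

```python
from statistics import mean, stdev

readings = [3.1, 3.2, 3.1, 3.1, 3.2, 3.2, 3.1]  # daily fridge readings, °C

print(f"mean   = {mean(readings):.2f} °C")   # about 3.14
print(f"spread = {stdev(readings):.2f} °C")  # about 0.05 -- small spread = precise
```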
Repeatability
Repeatability: the ability to obtain the same data values again under the same experimental conditions by the same observer.
A good measure of precision.
NOT a good measure of accuracy.
Reproducibility
Reproducibility: the ability to obtain the same data values again under slightly different experimental conditions, such as with a different measuring instrument, in a significantly different experimental environment, or with different experimenters.
Requires clear experimental methods and well-defined variables
A good measure for accuracy, as well as precision
validity
In general, validity denotes how well a measurement measures what it claims to be measuring.
If I want to measure the fitness of a group of individuals, but I only collect data about their height and weight to calculate each individual’s BMI, do you think that is a valid measurement?
Do you think VCE exam is a valid measurement of students’ learning?
Similarly, we use qualitative terms (low/medium/high) to describe the validity of a measurement.
internal validity
Internal Validity: an assessment of how well an investigation actually measures what it was designed to measure. That is, was it the IV that caused the change in the DV, or was it something else?
Factors that can influence internal validity:
confounding and extraneous variable
research design
sampling and allocation methods used
A lack of internal validity suggests that the results are inaccurate, and that no conclusion can be made.
If a study has low internal validity, then external validity is irrelevant.
external validity
External Validity: an assessment of the generalizability of your findings to a greater population.
Generalizability: how much of your findings obtained from the experiment participants still hold true on other target populations?
e.g., the replication crisis and W.E.I.R.D. samples (Western, Educated, Industrialized, Rich, Democratic); the Müller-Lyer illusion, which is perceived differently across cultures, is a classic example.
random errors
Random errors are unpredictable errors that affect the precision of a measurement.
how can random errors occur
They can occur as a result of:
environmental factors
imprecise or unreliable instruments
variations in procedure
Example: During a reaction time test, a participant is momentarily distracted by a sudden noise
how can random errors be eliminated
When random error is present, repeated measurements of the same quantity scatter around the true value, but their average will be close to the true value (see the sketch after this list).
Random errors cannot be eliminated from an experiment.
Random errors may be reduced by:
repeating and conducting more measurements
calibrating measurement tools correctly
refining measurement procedures
controlling any other extraneous variables
increasing the sample size of participants
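The sketch referred to above, using a hypothetical height measurement with made-up error sizes: each reading carries random error, but the mean of many readings clusters around the true value.

```python
import random

TRUE_VALUE = 172.0        # hypothetical true height in cm
random.seed(0)            # fixed seed so the sketch is repeatable

# Each measurement is perturbed by unpredictable random error.
measurements = [TRUE_VALUE + random.gauss(0, 0.5) for _ in range(100)]

average = sum(measurements) / len(measurements)
print(f"one reading:  {measurements[0]:.2f} cm")  # scattered around 172.0
print(f"mean of 100:  {average:.2f} cm")          # clusters close to 172.0
```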
systematic errors
Systematic errors result in a consistent or proportional difference between the observed value and the true value each time a measurement is taken.
They reduce the accuracy of a measurement.
sources of systematic errors
They can occur as a result of:
observation errors
environmental interference
inaccurate instrument calibration, e.g. using an inaccurate ruler
how can systematic errors be eliminated
They can be reduced by being familiar with the instruments being used and using them correctly.
Repetition of measurements will NOT reduce systematic errors.
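Extending the same hypothetical sketch to show why repetition does not help with systematic error: a constant offset (such as a damaged ruler reading 4 cm high) survives the averaging that removes random scatter.

```python
import random

TRUE_VALUE = 172.0   # hypothetical true height in cm
OFFSET = 4.0         # systematic error: the damaged ruler reads 4 cm too high
random.seed(1)

readings = [TRUE_VALUE + OFFSET + random.gauss(0, 0.5) for _ in range(100)]

average = sum(readings) / len(readings)
# The average sits near 176 cm, still about 4 cm from the true value:
# averaging cancels random scatter but not the constant offset.
print(f"mean of 100 readings: {average:.2f} cm")
```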
uncertainty
Uncertainty refers to the inherent (inbuilt) lack of knowledge of what the true value is, i.e. how on earth do we accurately measure the exact amount of stress someone is feeling?
For example, a study may aim to test ‘positive mood’ and use a range of measures that attempt to assess it. However, given the blurred boundaries of what ‘positive mood’ truly is, researchers would still have some uncertainty in its assessment. The uncertainty of measurement reflects the lack of exact knowledge regarding the true value and less quantifiable nature of what is being measured.
sources of uncertainty
Sources of uncertainty:
Measured construct not clearly defined, e.g., how to measure a subjective feeling? Does the same data value represent the same actual value?
No standardized tools, e.g., three thermometers give three different readings, how do we make sure what the room temperature is?
No standardized procedure, e.g., a mixture of self-report and observation
Incomplete data, e.g., only collecting 50% of intended responses: is the data still representative of our sample?
Contradictory data, e.g., suggesting possible confounding factors.
When evaluating personally sourced or provided data, students should be able to identify contradictory data (incorrect data) and incomplete data (missing data: questions without answers or variables without observations), including possible sources of bias.
outliers
A data point that differs markedly from the other data points.
do we omit outliers?
No; you must first consider whether they are genuine values and what caused them.
which measure do you use so that outliers do not distort the result?
The median (the middle value), which is not dragged toward extreme values the way the mean is.
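A quick Python illustration (made-up reaction-time data) of why the median resists outliers while the mean does not.

```python
from statistics import mean, median

# Hypothetical reaction times in seconds; the last value is an outlier.
reaction_times = [0.25, 0.27, 0.26, 0.24, 0.28, 0.26, 2.90]

print(f"mean   = {mean(reaction_times):.2f} s")    # about 0.64 -- dragged up by the outlier
print(f"median = {median(reaction_times):.2f} s")  # 0.26 -- barely affected
```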
causes of outliers
An accident during the experiment or measurement
Data entry errors
"Naughty" participants who deliberately give misleading responses
Some unaccounted underlying mechanism
mitigation of outliers
Multiple measurements to rule out random or personal errors
Search for confounding variables or a previously ignored mechanism if the same outlier occurs repeatedly