Identifying limitations/gaps in research allows curious researchers to address them.
Psychologists aim to determine causes of mental events and behaviors (e.g., impact of divorce on children).
Approaches:
Quantitative: Statistical analysis of data from experiments/surveys.
Qualitative: In-depth analysis of data from interviews/observations/case studies.
Mixed Methods: Integrates both.
Theoretical Framework: Grounded in theory.
Theories: Systematic ways of organizing/explaining observations.
Provide a framework for generating hypotheses.
Hypothesis: Proposed relations between variables (cause-effect).
Variables: Phenomena that can take on more than one value.
Continuous Variables: Continuum of possible values (e.g., reaction time in seconds ranging from 0.5 to 5 seconds).
Categorical Variables: Fixed values (e.g., car make, biological gender).
Generalizability: Whether research results can be applied to the entire population of interest.
Population: Entire group a researcher is interested in (e.g., all people diagnosed with anorexia).
Samples: Smaller subset of the population studied to make inferences.
Representative Sampling: Similar enough to the population for conclusions to be true for the rest of the population.
Sampling Bias: When the sample is not representative.
Requires internal and external validity.
Internal Validity: Procedures of the study are sound, not flawed.
Compromised by unrepresentative samples or unstandardized design.
External Validity: Experimental conditions resemble real-world situations.
Replicating results using different data collection procedures.
Balance: Must balance internal and external validity; tightly controlled experiments may not resemble real life.
Researcher is impartial.
Variables can be measured objectively.
Measure: Concrete means to determine the value of a variable.
Illness: Number of clinic visits.
Hunger: Time without food.
Aggression: Amount of electric shock delivered.
Reliability and Validity: A measure of a variable must be both reliable and valid.
Reliable Measures: Produce consistent measurements (e.g., not a rubber ruler).
Valid Measures: Actually measure the variable of interest (e.g., intelligence ≠ knitting speed).
Test-Retest Reliability: Test yields similar scores for the same individual over time.
Internal Consistency: Different ways of asking the same question yield similar results.
Inter-rater Reliability: Two different testers rate the same person similarly.
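A minimal Python sketch of the reliability idea, using invented scores: test-retest reliability can be estimated as the correlation between the same individuals' scores on two testing occasions (the data and variable names below are hypothetical).

```python
# Hypothetical scores: test-retest reliability estimated as the correlation
# between the same people's scores at two points in time.
import numpy as np

time_1 = np.array([12, 15, 9, 20, 17, 11])   # scores at first testing
time_2 = np.array([13, 14, 10, 19, 18, 12])  # same individuals, retested later

test_retest_r = np.corrcoef(time_1, time_2)[0, 1]
print(f"Test-retest reliability (r) = {test_retest_r:.2f}")  # near 1 => consistent measure
```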
Validity: A measure's ability to assess the variable it is supposed to assess.
Validation Research: Relates the measure to an objective criterion or measures with demonstrated validity.
Example: Depression measure predicting suicide risk.
Predictive Validity: A valid measure can predict other theoretically related variables.
Tests are biased if the mean scores of different groups differ systematically and the test scores make incorrect predictions in real life (e.g., Westernized IQ tests administered to Indigenous people).
Mitigation: Using multiple measures.
Question 1: A systematic way of organizing and explaining observations.
Answer: A theory.
Question 2: A sample is defined as a subgroup…
Answer: of the population that is likely to be representative of the population as a whole.
Question 3: A measure is internally consistent if…
Answer: Several ways of asking the same question yield similar results.
Grounded in theory.
Representative of the population.
Generalizable.
Reliable.
Valid.
Experimental, Descriptive, Correlational, and Qualitative Research.
Psychology uses a scientific approach.
Goals: Description, prediction, and understanding.
Description: Summarizes relationships between variables.
Prediction: Anticipates future events.
Understanding: Identifies causes of phenomena.
Descriptive: Describes behavior.
Correlational: Predicts behavior.
Experimental: Establishes causes of behavior.
Asks whether systematic variation in one variable causes variation in another.
Aims to assess cause-and-effect relationships.
Independent Variable (IV): Manipulated by the experimenter.
Dependent Variable (DV): Response of the participants.
Example: THC exposure and appetite.
IV: THC dose (0 or 2.5 mg).
DV: Amount of food consumed in 30 minutes.
Control Group: Similar to the experimental group but not exposed to the treatment (e.g., given a placebo).
Framing the Hypothesis: Predicts the relationship between variables (derived from a theory).
Operationalizing the Variables: Converting abstract concepts to testable forms.
Developing a Standardized Procedure: Setting up experimental/control conditions, attending to demand characteristics and researcher bias.
Demand Characteristics: Participants trying to respond how they think the experimenter wants them to.
Blind Studies: Participants (and sometimes researchers) are unaware of important aspects to prevent bias.
Double-Blind Studies: Minimize expectancy effects from participants and experimenters; neither knows the treatment.
Selecting and Assigning Participants:
Random Assignment: Minimizes systematic differences between groups (age, gender).
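A minimal sketch of random assignment, using hypothetical participant IDs: shuffling the pool before splitting it means differences in age, gender, and so on average out across groups.

```python
# Hypothetical participants randomly assigned to two conditions.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 invented participant IDs
random.shuffle(participants)                        # removes systematic ordering

experimental_group = participants[:10]  # e.g., receives the treatment
control_group = participants[10:]       # e.g., receives the placebo

print("Experimental:", experimental_group)
print("Control:", control_group)
```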
Applying Statistical Techniques to the Data:
Descriptive Statistics: Describing the data.
Inferential Statistics: Determining whether observed differences reflect real effects or chance.
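A brief illustration of the two kinds of statistics, using invented food-consumption data loosely modeled on the THC example above; an independent-samples t-test from scipy stands in for the inferential step.

```python
# Invented data: grams of food eaten in 30 minutes per participant.
import numpy as np
from scipy import stats

thc_group = np.array([420, 510, 480, 530, 460])
placebo_group = np.array([350, 390, 410, 370, 400])

# Descriptive statistics: summarize each group.
print("Group means:", thc_group.mean(), placebo_group.mean())

# Inferential statistics: is the difference likely to be due to chance alone?
t, p = stats.ttest_ind(thc_group, placebo_group)
print(f"t = {t:.2f}, p = {p:.3f}")  # small p => difference unlikely to be chance
```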
Drawing Conclusions:
Evaluating whether hypotheses are supported (IV and DV are related as predicted).
Interpreting findings in the broader theoretical framework.
Assessing generalizability.
Cleanest findings; can determine cause and effect.
Replicable.
Complex real-world issues are hard to study in the lab.
IV cannot always be manipulated (e.g., divorce).
Quasi-Experimental Designs: Do not allow random assignment; common in psychology.
Example: Comparing intellectual ability of children of divorced vs. together parents.
External Validity: Uncertainty about how well lab findings parallel real life.
Seeks to describe phenomena rather than manipulate variables.
Does not demonstrate cause and effect.
Methods: Case studies, naturalistic observations, survey research.
In-depth study of behavior in one person or a small group.
Used when large numbers are unavailable; common in clinical research.
Drawbacks:
Small sample size (limits generalizability).
Susceptible to researcher/observer bias.
Usefulness: Most valuable before quantitative studies (to generate hypotheses) or after them (to illustrate and enrich the findings).
In-depth study of phenomena in their natural setting.
Examples: Primate behavior in the wild.
Advantage: Good generalizability.
Disadvantages:
Observation can alter behavior.
Technique cannot infer the cause of behavior.
Asks questions of large numbers of people to gain information about attitudes and behavior.
Used to describe behavior and make arguments about influencing variables.
Does not demonstrate cause and effect.
Two approaches: Interviews and questionnaires.
Limitations:
People may respond inaccurately.
Sampling issues.
Random Sample: Selected from general population without systematic bias.
Stratified Random Sample: Specifies the percentage from each population category.
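A minimal sketch contrasting the two sampling strategies on a hypothetical population that is 70% urban and 30% rural; the category labels and sizes are invented for illustration.

```python
# Hypothetical population of 1,000 people tagged by category.
import random

population = [("urban", i) for i in range(700)] + [("rural", i) for i in range(300)]

# Simple random sample: every member has an equal chance of selection.
simple_sample = random.sample(population, 100)

# Stratified random sample: draw from each category in proportion to its size
# (70 urban + 30 rural here), then combine.
urban = [p for p in population if p[0] == "urban"]
rural = [p for p in population if p[0] == "rural"]
stratified_sample = random.sample(urban, 70) + random.sample(rural, 30)
```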
Aims to determine the degree to which variables are related.
Assesses the extent to which being high/low on one measure predicts being high/low on the other (e.g., weight and height).
Can determine mathematical association between data from experiments, case studies, or surveys.
Does not establish causality.
Correlation does not equal causation.
Correlation Coefficient: Measures the extent to which two variables are related (positive or negative).
Range: +1 to -1.
Strong correlation: Close to +1 or -1.
Scatterplot Graphs: Show the scores of every participant along two dimensions.
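A short illustration with invented height and weight data: the correlation coefficient summarizes the relationship numerically, and a scatterplot shows every participant's scores along the two dimensions.

```python
# Invented height/weight data for six participants.
import numpy as np
import matplotlib.pyplot as plt

height_cm = np.array([152, 160, 168, 175, 183, 190])
weight_kg = np.array([50, 58, 63, 70, 80, 88])

r = np.corrcoef(height_cm, weight_kg)[0, 1]
print(f"r = {r:.2f}")  # positive r near +1: taller people tend to weigh more

plt.scatter(height_cm, weight_kg)  # each point is one participant
plt.xlabel("Height (cm)")
plt.ylabel("Weight (kg)")
plt.show()
```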
Studying phenomena in natural settings, drawing meaning from people's words, language, and action.
Involves in-depth analysis of relatively few participants (e.g., 30 participants), yielding richer data and deeper understanding.
Multiple orientations; no single way to approach/analyze qualitative research.
The role of the researcher is acknowledged.
Positivism: Universal truth can be discovered through systematic observation and objective research/measurement.
Interpretivism: Social worlds are subjective and constructed by individuals.
Critical Theory: Focuses on examining the role of power and how it's expressed in our society.
Objectivism: Phenomena exist independent of our beliefs and consciousness.
Constructivism: Meaning of phenomena is the result of social and intellectual constructions.
Subjectivism: Meaning is assigned to phenomena by the observer.
Question 1: In experimental research, the dependent variable is the
Answer: variable that is measured.
Question 2: Case studies are…
Answer: subject to more researcher bias than other approaches.
Question 3: A stratified random sample reflects…
Answer: the proportion drawn from each population category.
EEG (Electroencephalogram)
CAT Scan (Computerized Axial Tomography)
MRI (Magnetic Resonance Imaging)
PET (Positron Emission Tomography)
fMRI (Functional Magnetic Resonance Imaging)
Example: fMRI used to study brain activity during mental imagery.
Easy access to research and data.
Recruit participants easily, with stratified demographics.
Cost and efficiency benefits over paper surveys.
Access, automation, and easy data processing.
Participants respond in their own environment, unaffected by the researcher's presence.
Accuracy of information must be scrutinized carefully.
Connectivity issues may exclude participants, leading to sampling issues.
Uncontrolled data collection may make it hard to verify meaningful completion.
Ensuring participant welfare is a challenge when information is anonymous.
Confidentiality.
Debriefing procedures are difficult to carry out online.
Digital data collection is more efficient for researchers and makes data easier to share.
Anonymity of participants and the ability to keep results confidential.
Respect rights and dignity of participants.
Act with professionalism and integrity.
Informed Consent: Participants understand the purpose and expectations.
Voluntary Participation: No excessive rewards or forced participation.
Confidentiality: Information not made public without consent.
Deception: Only when necessary; must inform participants afterward.
Ethical Approval: Required when conducting research with people.
Animal Research: Benefits must outweigh costs.
Does the theoretical framework make sense?
Does the hypothesis flow logically from the theory?
Is the sample adequate and appropriate?
Are the measures and procedures adequate?
Are the data conclusive?
Are the broader conclusions warranted?
Does this study say anything meaningful?
Is the study ethical?
Skepticism: Questioning results.
Objectivity: Being impartial; putting aside biases.
Open-mindedness: Considering all sides.
Straw Man: Misrepresenting the opposing argument so it is easier to attack, in order to strengthen your own (without evidence).
Appeals to Popularity: Claiming a widespread argument must be true (without evidence).
Appeals to Authority: Claiming an argument is true because of the authority of the person making it (without evidence).
Arguments Directed to the Person: Attacking the authors of the opposing argument based on their supposed failings (instead of evidence).
Question 1: Which of the following are challenges faced by internet research?
Answer: All of the above
Question 2: Informed consent involves…
Answer: the participant's ability to agree to take part in the study.
Question 3: Three principles underlie critical thinking. They are…
Answer: Skepticism, objectivity, and open-mindedness. Skepticism means questioning and analysing results, objectivity refers to an impartial approach, and open-mindedness means considering all sides of an issue.