PSYC 302 Exam Study Guide
Levels of Analysis
Biological:
Neurochemistry
Genetics
Brain structures
Individual:
Personality
Perception
Cognition
Behavior
Social:
Interpersonal interaction
Cultural norms
Groups
Goals of Psychological Science
Describe:
Improve understanding of a phenomenon through systematic observation.
Predict:
Using observed relationships to predict outcomes.
Explain:
Determining the cause of phenomena.
Difference Between Predictive and Explanatory Research
Predictive Research:
Focused on improving accuracy in prediction without concern for underlying causes.
Explanatory Research:
Aims to understand the "why" behind phenomena.
Research Questions
Descriptive Research Question:
Example: What is the age distribution at which toddlers first produce two-word utterances in a community sample?
Predictive Research Question:
Example: Predict kindergarten readiness scores from preschool measures including language skills, executive function tasks, parent reports, and classroom observations.
Explanatory Research Question:
Example: Why do people slow down when they're distracted?
Types of Research
Basic Research:
Achieving a more detailed understanding of a phenomenon for the sake of knowledge.
Applied Research:
Directly applied to real-world problems, aimed at solving immediate issues.
Ways of Knowing
Intuition:
Relying on instincts or emotions rather than facts.
Authority:
Accepting ideas based on the credibility of an expert (e.g., parent, doctor, government).
Tenacity:
Refusing to alter acquired knowledge despite contrary evidence; includes bigotry and habit.
Rationalism:
Using logic and reasoning to derive conclusions.
Falsifiability in Theories
Problem of Falsifiability (Freud):
Freud's hypotheses cannot be proven wrong; unconscious conflicts are unobservable and unmeasurable, making them unscientific.
Popperian Criterion:
For a theory to be considered scientific, it must be falsifiable: we must be able to conceive of observations that could demonstrate the theory to be false.
Challenges in Psychology
Challenge:
Often studies unobservable phenomena such as love, aggression, hunger, memory, and intelligence.
Solution:
Utilize operational definitions to define constructs in terms of specific observable measures.
Key Concepts in Measurement
Systematic Empiricism:
Psychology often employs systematic empirical methods for understanding phenomena.
Operational Definitions:
Unobservable constructs must be defined in terms of observable measures; precision in measurement is crucial.
Good Theories
Criteria for a Good Theory:
Parsimony:
Explaining many phenomena with fewer statements/conditions.
Precision:
Closely related to measurable variables; sometimes lacking in psychology.
Testability:
A theory must produce testable hypotheses to be falsifiable.
Constructs and Definitions
Constructs:
Unobservable intervening variables that simplify complex theories.
Operational Definition:
A definition outlining how a variable is measured, ensuring clarity and precision.
Hypothesis:
A specific prediction about a phenomenon that should be observed if a theory is accurate.
Population and Sampling
Population of Interest:
Researchers aim to draw conclusions about large groups of people.
Random Sampling:
Every member of the population has an equal chance of selection.
Convenience Sampling:
Involves studying individuals close to the researcher, as it is more feasible; accounts for 90% of research.
Research Designs
Experimental Research:
Involves manipulation and control of variables.
Non-experimental Research:
Involves observation and recording without manipulation; examples include surveys, case studies, and naturalistic observations.
Types of Experimental Conditions:
Laboratory Experiments:
High internal validity.
Field Research:
High external validity; results generalize to other settings.
Psychometrics and Measurement
Psychometrics:
The field of measurement in psychology.
Measurements:
Assignment of numbers to individuals reflecting characteristics (operationalization of constructs).
Types of Measures:
Simple Measurements:
e.g., weight, temperature; yield consistent values.
Complex Measurements:
e.g., short-term memory, depression; yield highly variable values.
Types of Measurement Levels
Nominal Level:
Categorical variables; category labels assigned.
Ordinal Level:
Rank order of individuals reflected by assigned scores.
Interval Level:
Numerical scales with equal intervals between adjacent values but no true zero point.
Ratio Level:
Involves a true zero representing complete absence; counts of objects/events.
Reliability in Research
Reliability Types:
Test-Retest Reliability:
Stability over time, visualized via scatter plot.
Internal Consistency:
Consistency across items on a measure, measured via split-half correlation.
Inter-rater Reliability:
Consistency among different observers.
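The split-half approach above can be sketched in code. This is a minimal illustration with made-up item scores for a hypothetical 6-item questionnaire; it correlates odd-item and even-item totals, then applies the Spearman-Brown correction to estimate full-test reliability.

```python
# Sketch: split-half reliability with the Spearman-Brown correction.
# Item scores below are hypothetical, for illustration only.

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Rows = participants, columns = items (hypothetical 1-5 ratings).
responses = [
    [4, 5, 4, 5, 3, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 3],
    [5, 4, 5, 5, 4, 5],
    [1, 2, 1, 1, 2, 1],
]

# Split the items into odd and even halves and total each half.
odd_totals = [sum(row[0::2]) for row in responses]
even_totals = [sum(row[1::2]) for row in responses]

half_r = pearson_r(odd_totals, even_totals)
# Spearman-Brown: estimate full-test reliability from the half-test r.
full_r = 2 * half_r / (1 + half_r)
print(f"split-half r = {half_r:.3f}, corrected = {full_r:.3f}")
```

Note that the corrected estimate is always at least as large as the raw split-half correlation, since halving a test shortens it and shorter tests are less reliable.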
Validity Types
Face Validity:
Measure appears, on its face, to capture the construct of interest.
Content Validity:
Measure encompasses all aspects of the construct.
Criterion Validity:
Scores correlate with relevant criterion variables the construct should predict.
Discriminant Validity:
Measures are not correlated with conceptually distinct variables.
Threats to Validity and Reliability
Demand Characteristics:
Cues that inform participants about the researcher's expectations, influencing behavior.
Social Desirability Bias:
Responding according to perceived societal expectations.
Hawthorne/Observer Effects:
Behavioral changes due to observation; can be a mechanism for demand characteristics.
Self-Selection Bias:
Occurs when individuals who feel strongly about a subject respond more than those indifferent.
Bias Minimization Strategies
Employ unobtrusive measurements.
Standardize interactions with subjects.
Use blind or double-blind procedures where applicable.
Emphasize anonymity of subjects.
Maintain a non-threatening and low-key setting.
Advantages of Experiments
Control Over Extraneous Variables:
Extensive control leads to more reliable results.
Economical:
You can achieve more with fewer resources.
Critical Experiments:
Pitting two theories against one another.
What-If Experiments:
Conducted without a compelling theory to observe outcomes.
Replication:
Repeating experiments to validate initial results.
Experimental Design Considerations
Manipulation of IV:
Must be robust to produce expected effects.
Reliability and Validity of DV:
Measure must be reliable, valid, and have adequate range.
Experimental Design Types
Between-Subjects Design:
Different groups receive different treatments; each participant experiences only one level of the IV, so there is no carryover between conditions.
Within-Subjects Design:
All participants experience all levels of the IV; economical and controls for individual differences.
Mixed Designs:
Combination of between and within-subjects factors; allows flexible analysis.
Types of Research Methods
True Experiments:
Investigate causal relationships where IV can be manipulated ethically and feasibly.
Quasi-Experimental Research:
Investigate causal relationships but cannot manipulate IVs or randomize participants.
Key Experiments
Milgram Experiment:
Participants were instructed to administer shocks for incorrect answers; the study revealed obedience to authority, with about two-thirds of participants willing to administer the most dangerous shocks.
Non-Experimental Research Types
Used for single-variable research, correlational research, regression.
Single Variable Research:
Focuses on one variable rather than a relationship between two; exemplified by the Milgram experiment.
Correlational Research
Describes non-causal relationships between two variables (e.g., verbal intelligence and mathematical ability); visualized with scatter plots.
Weaknesses:
Limited to linear relationships, lacks directionality, prone to third-variable issues and spurious correlations.
Regression Analysis
Investigative method for relationships between variables.
Similarities and differences with correlation:
Both quantify relationships, but regression models how changes in a predictor variable relate to an outcome; unlike correlation, regression results change if X and Y are swapped.
Key Advantages:
Correlation is concise; regression allows for detailed analysis and accounts for confounding variables.
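The contrast above can be made concrete in code. This is a minimal sketch with hypothetical (x, y) data: Pearson r is the same no matter which variable comes first, while the least-squares regression slope is a different number for y-on-x than for x-on-y.

```python
# Sketch: correlation is symmetric, regression is not.
# The data points are made up for illustration.

def mean(v):
    return sum(v) / len(v)

def pearson_r(x, y):
    """Pearson correlation; symmetric in x and y."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ols_slope(x, y):
    """Least-squares slope for predicting y from x."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    return cov / var_x

x = [1, 2, 3, 4, 5]       # e.g., hours of study (hypothetical)
y = [52, 55, 61, 60, 68]  # e.g., exam scores (hypothetical)

print(pearson_r(x, y))    # same either way: r(x, y) == r(y, x)
print(pearson_r(y, x))
print(ols_slope(x, y))    # slope for y on x
print(ols_slope(y, x))    # slope for x on y -- a different number
```

The asymmetry is the point: the regression of y on x answers a different question than the regression of x on y, while the correlation summarizes the strength of the linear relationship without picking a direction.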
Validity Types
Internal Validity:
Supports causal conclusions regarding the IV and observed differences in the DV.
External Validity:
Validity of generalizing conclusions outside the specific conditions of the study.
Non-Experimental Methods
Naturalistic Observation:
Observing behavior in natural settings, may face reactivity and observer bias issues.
Survey Research:
Self-reports of variables of interest; sampling strategy crucial for accurate population representation.
Research Designs
One-Group Posttest Only:
Weakest design; measures DV after an intervention without control.
One-Group Pretest-Posttest:
Measures the DV before and after an intervention, lacking a control group.
Interrupted Time-Series:
Measures the DV at multiple points before and after an intervention to assess changes in level or trend.
Sampling Strategies
Probability Sampling:
Researchers can specify selection probabilities for each member.
Non-Probability Sampling:
Cannot specify selection probabilities; commonly used in psychology.
Snowball Sampling:
Participants recruit additional participants.
Quota Sampling:
Ensures proportional representation of subgroups in a sample.
Self-Selection Sampling:
Participants choose to take part in the research voluntarily.
Sampling Bias:
Occurs when samples are not representative, leading to inaccurate results.
Scientific Method Steps
Review literature, develop falsifiable hypotheses, build theories.
Use deductive and inductive logic for systematic empiricism.
Accumulate knowledge over time.
Advanced Experimental Designs
Factorial Designs:
Each level of one independent variable is combined with each level of others to produce all possible combinations.
Mixed Factorial Designs:
Combines at least one between-subjects and one within-subjects factor.
Cross-Sectional Research:
Studies existing groups without manipulating variables.
Longitudinal Research:
Follows one group over time, minimizing cohort effects but more resource-intensive and prone to attrition.
Data Analysis and Summary
Main Effects:
Changes in the DV across levels of a single IV, averaged across others.
Data Handling:
Spreadsheets used for data housekeeping, computing, recoding, and dealing with outliers.
Measures of Central Tendency:
Mean, median, mode; used to summarize observations.
Measures of Dispersion:
Range, standard deviation; describe variability around central tendency.
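The summary statistics above can be computed with Python's standard `statistics` module. The quiz scores here are hypothetical, chosen only to show each measure.

```python
# Sketch: central tendency and dispersion on made-up quiz scores.
import statistics

scores = [7, 8, 8, 9, 10, 6, 8, 7]

# Central tendency: where the scores cluster.
print("mean   =", statistics.mean(scores))    # arithmetic average
print("median =", statistics.median(scores))  # middle score
print("mode   =", statistics.mode(scores))    # most frequent score

# Dispersion: how spread out the scores are.
print("range  =", max(scores) - min(scores))
print("stdev  =", statistics.stdev(scores))   # sample standard deviation
```

Reporting a measure of dispersion alongside a measure of central tendency matters: two samples can share a mean while differing wildly in spread.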
Reports and Results Presentation
Statistical Analysis Types:
Inferential statistics (t-test, correlation, chi-square); descriptive statistics to summarize findings.
Choosing the Correct Statistical Test:
Selecting a test appropriate to the data type and design is crucial for reliable conclusions.
Statistical Tests Explained
Chi-Square Test:
Non-parametric, tests associations between categorical variables.
T-Test Family:
Compares means of two groups; includes variations like one-sample, independent, and dependent samples.
Analysis of Variance (ANOVA):
Compares means across more than two groups; includes between-subjects, within-subjects (repeated-measures), and mixed variants.
Pivot Table:
Summarizes data to draw conclusions.
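The chi-square test above can be illustrated by hand. This sketch uses a hypothetical 2x2 table (group by pass/fail, with made-up counts) and computes the statistic from expected counts under independence; in practice statistical software would do this.

```python
# Sketch: chi-square test of independence on a made-up 2x2 table.

observed = [
    [30, 10],  # group A: 30 pass, 10 fail (hypothetical counts)
    [18, 22],  # group B: 18 pass, 22 fail
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Expected count under independence: (row total * column total) / N.
chi_sq = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi_sq += (obs - expected) ** 2 / expected

print(f"chi-square = {chi_sq:.2f}")
# df = (rows - 1) * (cols - 1) = 1; the critical value at alpha = .05
# is 3.84, so a statistic above 3.84 indicates a significant association.
```

Because the statistic is built from category counts rather than means, the test is non-parametric, which is why it suits nominal-level data.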