Basic researchers: Psychologists who increase our understanding of psychology by creating and improving the theories that predict social behavior.
Applied researchers: Psychologists who translate the findings of basic researchers into social action and apply psychological ideas to the real world.
Scientific method: A systematic way of creating knowledge by observing, forming a hypothesis, testing a hypothesis, and interpreting the results. The scientific method helps psychologists conduct experiments and formulate theories in a logical and objective manner.
Steps of scientific method
observe a pattern
generate a hypothesis
scientifically test hypothesis
interpret results and refine hypothesis
ex. “men interrupt more than women” -> “men are less likely to interrupt women they find physically attractive, compared to women they don’t find attractive”
Hypothesis: A specific statement made by a researcher before conducting a study about the expected outcome of the study based on prior observation. Hypotheses are falsifiable statements that researchers believe to be true (see falsification).
Constructs: Theoretical ideas that cannot be directly observed, such as attitudes, personality, attraction, or how we think.
Operationalize: The process of specifying how a construct will be defined and measured.
4 types of research
archival studies
naturalistic observation
surveys
experiments
Archival data: Stored information that was originally created for some other purpose not related to research that can later be used by psychologists, such as census data.
ex. newspapers, census data, social media posts, pop culture
ex. social psychologists can collect police reports for data on domestic violence
Naturalistic observation: A research design where scientists gather data by observing people in the environment within which the behavior naturally occurs (for instance, observing leadership styles in a corporate office).
can raise ethical problems if researchers are not careful, since people may be observed without knowing they are being observed
Reactivity: When people change their behavior simply because they’re being observed (see social desirability bias and good subject bias).
Participant observation: A technique used during naturalistic observation where scientists covertly disguise themselves as people belonging in an environment in an effort to observe more authentic social behaviors.
Survey: A research design where researchers collect data by asking participants to respond to questions or statements.
Self-report scale: A type of survey item where participants give information about themselves by selecting their own responses (see survey).
Social desirability bias: The tendency for participants to provide dishonest responses so that others have positive impressions of them.
ex. people not reporting on a survey that they cheated on a test
Case study: A type of research where scientists conduct an in-depth study on a single example of an event or a single person to test a hypothesis.
PsycINFO database: The most comprehensive database of research books and journal articles across psychological subdisciplines.
HOW DO SOCIAL PSYCHOLOGISTS DESIGN STUDIES?
4 Types of Research Design:
preexperimental design
true experiments
quasi-experiments
correlational designs
Preexperiment: A research design in which a single group of people is tested to see whether some kind of treatment has an effect, such as a one-shot case study or a one-group pretest-posttest design.
Types of preexperimental designs:
One-shot case study: A type of preexperimental research design that explores an event, person, or group in great detail by identifying a particular case of something or trying a technique once, then observing the outcome.
ex. interviewing a world leader about how they make leadership decisions
involves 1) identifying a particular case and 2) observing the outcome
One-group pretest-posttest design: A type of preexperimental research design in which the expected outcome is measured both before and after the treatment to assess change.
ex. a driving test taken before and after participants are given alcohol
Confounding variables: Co-occurring influences that make it impossible to logically determine causality. Confounding variables, such as bad weather or the inability to concentrate on a survey due to illness, provide alternate explanations for the outcome of an experiment that make it impossible to know whether the results are due to the independent variable (see internal validity).
(True) Experiment: A research design where scientists randomly assign participants to groups and systematically compare changes in behavior. Experiments allow scientists to control confounding variables and establish cause-effect relationships.
Random assignment to experimental condition: A solution to the problem of confounding variables by creating equivalent groups at the start of an experiment. Random assignment cancels out the influence of confounds by distributing them equally across groups.
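A minimal sketch of what random assignment can look like in code (Python here; the helper name, participant IDs, and group count are all hypothetical, not from the text):

```python
import random

def randomly_assign(participants, n_groups=2, seed=None):
    """Shuffle participants, then deal them into groups.

    Hypothetical helper: shuffling before splitting distributes
    confounds (age, mood, ability, etc.) evenly across groups
    on average, which is the point of random assignment.
    """
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)
    return [pool[i::n_groups] for i in range(n_groups)]

groups = randomly_assign(range(1, 21), n_groups=2, seed=42)
print(groups[0])  # e.g., the experimental group
print(groups[1])  # e.g., the control group
```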
Independent and Dependent Variables
Independent variable: A variable that is manipulated at the beginning of an experiment to determine its effect on the dependent variable.
Dependent variable: The measured outcome of an experiment that is affected by the independent variable.
Pretest-posttest control group design: A type of true experiment where the dependent variable is tested both before and after the experimental manipulation.
Control group: A group of participants in a true experiment that serves as a neutral or baseline group that receives no treatment.
Posttest-only control group design: A type of true experiment where the dependent variable is measured for two or more groups, including a control group, only after the experimental manipulation.
Between-participants design: An experimental research design where the levels or conditions of the independent variable are different for each group of participants; patterns are found by comparing the responses between groups.
Within-participants design: An experimental research design where the same group of participants all experience each experimental condition; patterns are found by comparing responses for each condition.
Order effects: Variations in participants’ responses due to the order in which materials or conditions are presented to them.
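One common remedy for order effects in within-participants designs is counterbalancing, i.e., rotating the order of conditions across participants. A rough sketch with made-up condition names:

```python
from itertools import permutations

# Hypothetical conditions for a within-participants study.
conditions = ["no alcohol", "low dose", "high dose"]

# All possible presentation orders (3! = 6 orders).
orders = list(permutations(conditions))

# Cycle participants through the orders so each order is used
# equally often and order effects average out across the sample.
for participant_id in range(12):
    order = orders[participant_id % len(orders)]
    print(f"Participant {participant_id + 1}: {' -> '.join(order)}")
```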
QUASI-EXPERIMENTAL DESIGNS
Quasi-experiment: A research design where outcomes are compared across different groups that have not been formed through random assignment but instead occur naturally.
Correlational design: A research design where scientists analyze two or more variables to determine their relationship or association with each other.
HOW DO SOCIAL PSYCHOLOGISTS ANALYZE THEIR RESULTS?
Statistics: Mathematical analyses that reveal patterns in data, such as correlations, t tests, and analyses of variance.
Standard deviation: The amount of variability in a distribution. In other words, how widely dispersed the data are.
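As a rough illustration (invented numbers, Python standard library only), two data sets can share a mean while differing sharply in standard deviation:

```python
from statistics import mean, stdev

# Hypothetical survey responses on a 1-7 scale.
tight = [4, 4, 5, 4, 5, 4]    # responses clustered near the mean
spread = [1, 7, 2, 6, 1, 7]   # responses widely dispersed

print(mean(tight), stdev(tight))    # similar mean, small SD
print(mean(spread), stdev(spread))  # similar mean, large SD
```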
t test: A statistical test that uses both the mean and the standard deviation to compare the difference between two groups.
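A short, hypothetical example of running a t test, assuming SciPy is available (the scores are invented):

```python
from scipy import stats

# Hypothetical posttest scores for two randomly assigned groups.
control = [12, 15, 14, 10, 13, 11, 14]
treatment = [16, 18, 15, 17, 19, 16, 14]

t, p = stats.ttest_ind(treatment, control)
print(f"t = {t:.2f}, p = {p:.3f}")  # small p -> groups likely differ
```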
Analysis of variance (ANOVA): A statistical test that uses both the mean and the standard deviation to compare the differences between three or more groups.
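Likewise, a hypothetical three-group ANOVA with SciPy (invented scores for three imagined conditions):

```python
from scipy import stats

# Hypothetical scores for three conditions (e.g., three dosages).
group_a = [12, 14, 11, 13, 12]
group_b = [16, 18, 15, 17, 16]
group_c = [13, 12, 14, 13, 15]

f, p = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f:.2f}, p = {p:.3f}")  # small p -> at least one group differs
```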
Correlation: A type of statistical test that determines whether two or more variables are systematically associated with each other by identifying linear patterns in data.
Scatterplot: A graph that demonstrates the relationship between two quantitative variables by displaying plotted participant responses.
Correlation coefficient: A number that indicates the relationship or association between two quantitative variables. It ranges from –1.00 to +1.00.
Positive correlation: A positive correlation occurs when the correlation coefficient is between +0.01 and +1.00. In this case, as one variable increases, the other also increases.
Negative correlation: A negative correlation occurs when the correlation coefficient is between –0.01 and –1.00. In this case, as one variable increases, the other decreases.
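A small sketch tying the correlation ideas together, assuming SciPy and invented data; the first pair should yield r near +1.00 and the second near –1.00:

```python
from scipy import stats

# Hypothetical data: hours studied vs. exam score.
hours = [1, 2, 3, 4, 5, 6, 7, 8]
score = [55, 60, 58, 70, 72, 75, 80, 85]

r, p = stats.pearsonr(hours, score)
print(f"r = {r:.2f}")  # near +1.00: strong positive correlation

# Hypothetical data: hours of TV vs. the same exam scores.
tv = [8, 7, 6, 5, 4, 3, 2, 1]
r2, _ = stats.pearsonr(tv, score)
print(f"r = {r2:.2f}")  # near -1.00: strong negative correlation
```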
Statistical significance: The likelihood that the results of an experiment are due to the independent variable, not chance (see p value).
p-value: A number that indicates the probability or likelihood that a pattern of data would have been found by random chance. Commonly seen as a variation of “p < .05,” which, in this example, means there is a less than 5% probability the patterns are due to chance.
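One way to build intuition for a p-value is simulation: how often would a result this extreme occur by chance alone? A toy example with coin flips (the cutoff of 9 heads is arbitrary):

```python
import random

# "Null hypothesis" world: a fair coin flipped 10 times.
# How often does chance alone give 9 or more heads?
random.seed(1)
trials = 100_000
extreme = sum(
    sum(random.random() < 0.5 for _ in range(10)) >= 9
    for _ in range(trials)
)
print(extreme / trials)  # about .011, i.e. p of roughly .01
```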
HOW CAN RESEARCH BE ANALYZED IN TERMS OF QUALITY?
Reliability: Consistency of measurement, over time or multiple testing occasions. A study is said to be reliable if similar results are found when the study is repeated.
Internal validity: The level of confidence researchers have that patterns of data are due to what is being tested, as opposed to flaws in how the experiment was designed.
External validity: The extent to which results of any single study could apply to other people or settings (see generalizability).
Generalizability: How much the results of a single study can apply to the general population (see external validity).
Random sampling: A sampling technique used to increase a study’s generalizability and external validity wherein a researcher randomly chooses people to participate from a larger population of interest.
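Random sampling (who gets into the study) is distinct from random assignment (which condition they experience). A hypothetical sketch, with made-up names:

```python
import random

# Hypothetical population of interest: 5,000 students.
population = [f"student_{i}" for i in range(1, 5001)]

# Draw a random sample: every person has an equal chance of
# selection, which supports generalizability.
sample = random.sample(population, k=100)
print(sample[:5])
```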
Replication: The process of conducting the same experiment using the same procedures and the same materials multiple times to strengthen reliability, internal validity, and external validity.
Institutional review boards (IRBs): Committees of people who consider the ethical implications of any study before giving the researcher approval to begin formal research.
American Psychological Association (APA): A large organization of professional psychologists who provide those in the field with information in the form of scholarly publications, citation style guidelines, and ethical standards for research.
Informed consent: Participants’ right to be told what they will be asked to do and whether there are any potential dangers or risks involved before a study begins.
Deception: Hiding the true nature of an experiment from a participant to prevent a change in how the participant would respond.
Right to withdraw: The right participants have to stop being in a study at any time, for any reason, or to skip questions on a survey if they are not comfortable answering them.
Debriefing: Additional details given to participants after participation in an experiment, including information about the hypotheses, an opportunity to ask questions, an opportunity to see the results, and an explanation of any deception, so that participants can withdraw their data if they are upset about having been deceived.