Factorial Design
more than one factor/variable, and each factor/variable has more than one condition/level; generally, there should be a condition for each possible combination of independent variables
Factor
independent variable; something that can influence an outcome or dependent variable; ex. medication, therapy, training
Condition
level, actual manifestations of your IV in the study, ex. placebo, low dose, high dose
3×3×2 Design
number of IVs = 3; levels in first IV = 3; levels in second IV = 3; levels in third IV = 2; total number of conditions = 3 × 3 × 2 = 18
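As a quick illustration, the condition count is just the product of each IV's number of levels; a minimal Python sketch using the level counts from the example above:

```python
from math import prod

# A 3x3x2 design: three IVs with 3, 3, and 2 levels respectively.
levels = [3, 3, 2]
print(prod(levels))  # 18 conditions, one per combination of levels
```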
True Experimental Factorial Design
each independent variable can be manipulated
Hybrid Factorial Design
at least one variable cannot be manipulated (quasi-independent), and is instead measured (ex. gender); no causation can be established for the variable that is not manipulated
Mixed Design
at least one independent variable is within-persons and at least one is between-persons
Cell Mean
the average on the dependent variable for participants with a specific combination of the levels of the independent variables; specific to the condition (which is a combination of variables)
Marginal Mean
the average of all participants on one level of the independent variable, ignoring the other independent variables; specific to the variable (an average of conditions for that variable)
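A minimal pandas sketch of the difference between cell means and marginal means, using hypothetical 2×2 data with IVs "dose" and "therapy" and DV "score" (all names are placeholders):

```python
import pandas as pd

# Hypothetical 2x2 data set.
df = pd.DataFrame({
    "dose":    ["low", "low", "high", "high", "low", "high"],
    "therapy": ["yes", "no",  "yes",  "no",   "yes", "no"],
    "score":   [4, 3, 8, 5, 6, 7],
})

# Cell means: average DV for each combination of IV levels.
print(df.groupby(["dose", "therapy"])["score"].mean())

# Marginal means: average DV for each level of one IV, ignoring the other.
print(df.groupby("dose")["score"].mean())
```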
Main Effects
the effect of the independent variable on the dependent variable; all, some, or none of the independent variables may have a main effect on the dependent variable; marginal means used to test main effects
Main Effect Hypothesis
a prediction that focuses on the effect of one independent variable on the dependent variable at a time, ignoring all other independent variables
Interaction Effects
the effect of one independent variable on the dependent variable depends on another independent variable; you need to know about both independent variables to most accurately understand the dependent variable; can have a significant interaction, even in the absence of significant main effects; cell means relevant here
Interaction Effect Hypothesis
a prediction about how the levels of one independent variable will combine with another independent variable to impact the dependent variable in a way that extends beyond the sum of the two separate main effects
Vignettes
a description of a hypothetical situation, event, or scenario to which participants react
Analysis in Factorial Design
reliability, manipulation check, descriptive statistics, and inferential statistics
Descriptive Statistics (Factorial Design)
who was in our study? what was the average receptivity? does the mean receptivity look different by condition?
Two-Way ANOVA
a statistical test that allows us to simultaneously test how two separate nominal or categorical independent variables (or factors) influence the dependent variable, and how those independent variables interact to influence the dependent variable (inferential statistics); one test will tell us the significance of each main effect, as well as the interaction
F(2, 87) = #.##, p = .##, η² = .##
F-test symbol (between-groups df, within-groups/error df) = F-score, p = significance level, η² = effect size
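A hedged sketch of how a two-way ANOVA might be run in Python with statsmodels; the data are randomly generated placeholders and the variable names are hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)

# Hypothetical 2x3 design: therapy (2 levels) x dose (3 levels), 15 per cell.
df = pd.DataFrame({
    "therapy": np.repeat(["CBT", "control"], 45),
    "dose":    np.tile(np.repeat(["placebo", "low", "high"], 15), 2),
    "score":   rng.normal(50, 10, 90),
})

# C(...) marks each IV as categorical; "*" crosses them, so one model
# yields both main effects and the interaction in a single table.
model = smf.ols("score ~ C(therapy) * C(dose)", data=df).fit()
print(anova_lm(model, typ=2))  # F and p for each main effect and the interaction
```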
Single-Subject Design
a special type of within-subjects design using one participant or one group to assess changes within that individual or group; if nothing is manipulated, it is a case study; if something is manipulated, it is a single-subject design;
How can something be manipulated with just one subject?
special case of within-person design, measures are taken before and after treatment to test for change, may remove treatment (or reapply treatment) for additional tests of change
A-B Design
a single subject design in which researchers take a baseline measurement (A), then introduce the intervention, and then measure the same variable again (B)
A-B-A Design
a single-subject design in which researchers establish a baseline (A), introduce the intervention and measure the same variable again (B), then remove the intervention and take another measurement (A)
A-B-A-B Design
a single-subject design in which researchers establish baseline (A), introduce the intervention (B), remove the intervention (A), and then reintroduce the intervention (B), measuring the DV each time
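A minimal sketch (one hypothetical participant, five measurements per phase) of how phase means in an A-B-A-B design might be compared:

```python
import pandas as pd

# A = baseline, B = intervention; the DV is measured repeatedly in each phase.
df = pd.DataFrame({
    "phase": ["A1"] * 5 + ["B1"] * 5 + ["A2"] * 5 + ["B2"] * 5,
    "dv":    [6, 5, 6, 7, 6,  3, 2, 3, 2, 2,  5, 6, 5, 6, 5,  2, 3, 2, 2, 3],
})

# A change from A to B that reverses when the intervention is withdrawn
# and returns when it is reintroduced supports a treatment effect.
print(df.groupby("phase", sort=False)["dv"].mean())
```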
Mixed Design
an experimental design that combines within-subjects and between-subjects methods of data collection; multiple IVs with 2+ levels, with one or more variables manipulated between-subjects and one or more within-subjects; any one subject gets multiple conditions for one variable and only one condition for another variable
Mixed Design Analysis
manipulation check, reliability, descriptive, inferential
Between-Subjects
expose participants to one level of treatment, randomly assign participants to one condition, strength is internal and external validity, weakness is power; ex. two-group design, multigroup design, factorial design, and mixed design
Within-Subjects
expose participants to all levels of treatment, randomly assign participants to a sequence of treatment conditions, strength is power, weakness is internal and external validity; ex. pretest-posttest design, repeated-measures design, and mixed design
Benefits of Mixed Design
combines some of the strengths of within- and between-subjects designs: the power of the within-subjects design (fewer subjects needed for that variable) and the control of the between-subjects design (random assignment, no order effects); can test main effects and interactions
Drawbacks of Mixed Design
weaknesses of both within- and between-subjects designs: order effects from the within-subjects variable, an increased number of participants needed for the between-subjects variable, and potential confounds if conditions are not equivalent
Waiting-List Control Group
no treatment given initially; a control group often used in clinical research; participants in this group do not receive the treatment or intervention until after the completion of the study
Treatment-as-Usual Control Group
a comparison group often used in clinical research in which an already-established treatment is administered for comparison to experimental treatment
Experimenter-Expectancy Effect
occurs when a bias causes a researcher to unconsciously influence the participants of an experiment
Double-Blind
neither the participants nor the administrators of treatment are aware of (are blind to) the types of treatment being provided, in order to reduce the likelihood that expectancies or knowledge of condition will influence the results
Single-Blind
only one party is aware of the condition (usually the researcher is aware and the participant is not)
Mixed Design ANOVA
a statistical analysis that tests for differences between two or more categorical independent variables, where at least one is a between-subjects variable and another is a within-subjects variable; can tell us if there are differences in conditions, then use a post-hoc test to explore specific differences; also explores main effects and interaction effects (inferential analysis)
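One way a mixed-design ANOVA could be run in Python is with the pingouin library's mixed_anova function; the data below are randomly generated placeholders and the column names are hypothetical:

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)

# Hypothetical mixed design: "group" is between-subjects (treatment vs.
# control) and "time" is within-subjects (pre vs. post); 20 participants.
df = pd.DataFrame({
    "subject": np.repeat(np.arange(20), 2),
    "group":   np.repeat(["treatment"] * 10 + ["control"] * 10, 2),
    "time":    np.tile(["pre", "post"], 20),
    "score":   rng.normal(50, 10, 40),
})

# One table reports the between-subjects effect, the within-subjects
# effect, and their interaction.
print(pg.mixed_anova(data=df, dv="score", within="time",
                     between="group", subject="subject"))
```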
Program Evaluation
using the scientific method to assess whether an organized activity is achieving its intended objectives; essentially research in an applied setting
Areas of Program Evaluation
needs, process, and outcome
Needs
an assessment of which features of a program are most valuable and whom they benefit most; what would be valuable and for whom?; purpose is to identify program features to continue or discontinue and to determine potential new components to add
Process
an assessment of general program operation, including whom the program serves and how the program delivers services to that population; who does an existing program serve, and how?; purpose is to determine ways to improve program implementation and delivery; also seeks to ensure a match between program goals and those served
Outcome
an assessment of whether a program effectively produces outcomes that are consistent with the stated objectives or goals; is the program meeting its goals?; purpose is to identify unmet goals and outcomes the program can improve in order to better serve clients, or to establish evidence of program effectiveness
Focus Groups
several participants form a group and discuss a specific topic with the researcher/facilitator/moderator; qualitative approach
Interviews
researcher/facilitator conducts a conversation with a participant; generally one participant at a time; qualitative approach
Case Study
usually a specific person who is studied in detail over time (can be a group or organization too); qualitative design
Content Analysis
reviewing written records for themes; qualitative approach
Visual Ethnography
reviewing visual media, like pictures and movies; qualitative approach
Quantitative Approach
survey (correlational); quasi-experiment to compare different programs/conditions; archival data
Archival Data
existing employee or client records that can be gathered, encoded, and analyzed
Structured Interviews
a set list of questions that is asked of everyone
Unstructured Interviews
no set list of questions, just an idea of topics to be covered
Critical Incident Questions
focusing on a key event
Behavioral Questions
focus on discussing things the participant has actually done
Situational Questions
discuss what the participant would do in a situation
Single-Item Indicator
only one item or question being used to measure a variable
Good Interview Questions
clear and easy to understand, open-ended, and free of assumptions
Plan for Program Design
identify stakeholders; understand the goals of the evaluation; describe the current program; create a plan; execute; communicate results
Stakeholders
individuals who will use the evaluation and who will benefit from it; identified by engaging organizational leadership as well as those with less influence
Understand Goals of Evaluation
we identify what the evaluation hopes to accomplish, the steps needed to do so, and how the organization hopes to use the evaluation’s results
Describe the Current Program
we collect information about the program’s mission and specific goals, and the nature of the program and services delivered, and whom the program serves
Create a plan
we formulate and describe a plan that outlines our evaluation’s design; what measures to use?; literature review to obtain established measures; who to collect data from and when
Execute
collect data based on our design, analyze the data using appropriate statistics
Communicate Results
form conclusions, disseminate results to stakeholders (written report, presentation, meeting)
Single Sample T-Test
a statistic to evaluate whether a sample mean statistically differs from a specific value; compares scale score to some set value (such as the mid-point)
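A short scipy sketch with made-up ratings, testing whether a sample mean differs from the midpoint (3) of a hypothetical 1-5 scale:

```python
from scipy import stats

# Hypothetical receptivity ratings on a 1-5 scale.
scores = [3.5, 4.0, 2.5, 3.0, 4.5, 3.5, 4.0, 3.0, 3.5, 4.0]

# Does the sample mean differ from the scale midpoint of 3?
result = stats.ttest_1samp(scores, popmean=3.0)
print(result.statistic, result.pvalue)
```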
Word Clouds
a visual representation of the frequency that certain words are used in a qualitative assessment; larger words indicate higher frequency of use
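Under the hood this is word-frequency counting; a minimal sketch with hypothetical open-ended responses:

```python
from collections import Counter

# Hypothetical open-ended responses; a word cloud scales each word's
# display size by its frequency across responses.
responses = [
    "the program staff were helpful and friendly",
    "helpful sessions but scheduling was difficult",
    "friendly staff and helpful resources",
]
words = " ".join(responses).split()
print(Counter(words).most_common(5))
```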
Biases
our assumptions and experiences influence how we perceive things; as humans studying other humans, it can be hard to set these aside
Heuristics
mental shortcuts
To test causality
manipulating the IV before measuring the DV, random assignment, making conditions equivalent (removing confounding variables)
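Random assignment is straightforward to implement; a minimal sketch assigning twelve hypothetical participants evenly to the three example conditions from earlier:

```python
import random

# Shuffle participant IDs, then split them evenly across conditions.
participants = list(range(12))
random.shuffle(participants)
conditions = {
    "placebo":   participants[0:4],
    "low dose":  participants[4:8],
    "high dose": participants[8:12],
}
print(conditions)
```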
Pseudoscience
anecdotes; evidence = “proof”; confirmation bias; handpicking supporting evidence; ideas are fixed and hostile to change; no peer review; overstates findings; replication is unimportant
Science
empirical data; evidence = “support for” not proves; falsifiable and open to refutation; examines all evidence, updates ideas as data is considered (open to change); peer review; cautious interpretation; replication is required
Ethics
avoid exposing participants to undue harm; consider who benefits from the findings; avoid exploiting vulnerable populations; participation is voluntary and can stop at any point, and consent is necessary; debrief participants after the study ends, especially if deceit was used; maintain data integrity
Reliability
is the assessment consistent? across items of the same scale (internal consistency); across raters (inter-rater); across forms (parallel forms); across time (test-retest)
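Internal consistency is commonly estimated with Cronbach's alpha: α = k/(k−1) · (1 − Σ item variances / variance of total scores). A sketch with made-up ratings (the function name is hypothetical):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a participants-x-items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical: 5 participants answering a 3-item scale.
data = np.array([[4, 5, 4],
                 [2, 2, 3],
                 [5, 4, 5],
                 [3, 3, 3],
                 [4, 4, 5]])
print(cronbach_alpha(data))
```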
Validity
is it measuring what it is supposed to?
Skills to be a good scientist
creativity, objectivity, communication, empiricism, skepticism, open-mindedness