What is an observational design?
A design in which the researcher observes, records, and quantifies ongoing behavior; it is essentially a data collection protocol, so it can be either experimental or nonexperimental
What is the defining characteristic of observational designs?
The manner in which data are collected; data collection procedure forfeits some degree of control
What is required for the sampling of observations?
Must be randomized or nonsystematic
What does "the observer as a test" mean?
The researcher, since the "test" for this design is their observation, must meet all the psychometric requirements of a good test
What are the psychometric requirements of a good test?
Must be reliable (scorer and test-retest), valid, standardized, and objective
What are observer characteristics?
Experimenter effects that occur when there is more than one observer; becomes a possible extraneous variable
How can observer characteristics be controlled for?
Train raters to standardize, build into design as a moderator variable, select for good observers
What are the levels of observation as a continuum?
Naturalistic/complete, observer-participant, participant-observer, complete participant (least to most intrusive/reactive)
What is naturalistic/complete observation?
Research conducted in such a way that the participants' behavior is disturbed as little as possible by observation process
What is observer-participant observation?
Observations made such that there is no interaction, but participants are aware of observer's presence
What is participant-observer observation?
Researchers participate in naturally occurring groups and record their behaviors
What is complete participant observation?
Observations made within the observer's own group; observer is completely immersed in activities being observed because they are part of the group
What is offline observation?
Observe first, document and analyze later; researcher observes, participates, or watches events as they happen and then writes up notes, codes behaviors, or analyzes data after observation session is over
What is online observation?
Researcher records details during observation; data collection and observation occur simultaneously
What are problems that arise with observation?
Intrusiveness, reactivity, issues of privacy
Why is intrusiveness a potential problem?
Intrusiveness from observer may affect natural flow/dynamic of the group, and behavior from participants may become less natural as a result
Why is reactivity a potential problem?
Reactivity from the participants threatens construct validity; Hawthorne effect (change behavior due to awareness of researcher's presence)
Why is invasion of privacy a potential problem?
Privacy is a right, and in some scenarios we just cannot observe; if privacy is violated, participants may lose trust in researcher or group
What is survey research?
Measurement and assessment of opinions, attitudes, and so on; usually done by questionnaires or sampling methods
What are the steps for creating a questionnaire?
1. Determine purpose
2. Decide types of questions
3. Write items
How do we determine the purpose/direction of the questionnaire?
Decide what useful information to ask participants for and anticipate questions of interpretation that may arise
What are the options for types of questions in a questionnaire?
Open-ended/constructed response and closed-ended
What are open-ended/constructed response questions?
Participants can answer in their own words
What are closed-ended questions?
Limit participant responses to a set of alternatives determined by the researcher
What is important to consider when writing items?
The questions and items may have a big influence on how people respond
What are the options for format of the items?
Constructed-response (fill-in/write-in), true/false, multiple choice, Likert scales
What are additional things to do or consider when writing items?
Address a single issue per item; know that loaded items generate specified responses; avoid bias or topics that may influence answers; be aware of question-order effects; make alternatives clear; be aware of adjacent-question effects, which mean items are not independent of each other; know that participant characteristics are reflected in approaches to responding, which may alter the validity of the study
What is a sampling frame?
List of all people in the study population
What is probability sampling?
Random sampling from the population frame
What is non-probability sampling?
Nonrandom sampling where participants are selected without random chance and sample has bias
What are the types of sampling?
Uncontrolled, haphazard/intercept, convenience, systematic, simple random, stratified, cluster, multi-stage, oversampling, undersampling
What is uncontrolled sampling?
Type of non-probability sampling: researcher has no control in selection of respondents (ex. magazine, radio); poor response rate
What is haphazard/intercept?
Type of non-probability sampling: researcher may have some control over selection but still basically hit-and-miss method (ex. finding people on the street)
What is convenience sampling?
Type of non-probability sampling: nonrandom sample chosen for ease of access, has high risk of bias and is potential threat to external validity
What is systematic sampling?
Type of probability sampling: randomly selected system, so pick every nth person
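The "every nth person" rule can be sketched in Python; the frame size, sample size, and names below are made up for illustration:

```python
import random

def systematic_sample(frame, n):
    """Systematic sampling: pick a random starting point,
    then take every k-th person from the sampling frame."""
    k = len(frame) // n          # sampling interval
    start = random.randrange(k)  # random start keeps the method probabilistic
    return [frame[start + i * k] for i in range(n)]

frame = [f"person_{i}" for i in range(100)]
sample = systematic_sample(frame, 10)  # every 10th person from a random start
```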
What is simple random sampling?
Type of probability sampling: most common form of probabilistic sampling, everyone has equal chance of being selected, increases external validity
What is stratified random sampling?
Type of probability sampling: randomly sampling from subgroups within population to ensure appropriate representation
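The contrast between simple random and stratified random sampling can be sketched with the standard library; the strata and group sizes are hypothetical:

```python
import random

def simple_random_sample(frame, n):
    # every member of the frame has an equal chance of selection
    return random.sample(frame, n)

def stratified_sample(strata, n_per_stratum):
    """Stratified random sampling: draw randomly within each
    subgroup so every stratum is represented in the sample."""
    picked = []
    for members in strata.values():
        picked.extend(random.sample(members, n_per_stratum))
    return picked

strata = {
    "freshmen": [f"f{i}" for i in range(50)],
    "seniors":  [f"s{i}" for i in range(50)],
}
sample = stratified_sample(strata, 5)  # 5 from each subgroup, guaranteed
```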
What is cluster sampling?
Type of probability sampling: randomly selects pre-existing groups/clusters and then samples every member of that cluster
What is multi-stage sampling?
Type of probability sampling: samples starting from pre-existing clusters in stages, using smaller and smaller sampling units at each stage
What is oversampling?
Type of probability sampling: variation of stratified random sampling where researcher intentionally over-represents one or more groups
What is undersampling?
Type of probability sampling: variation of stratified random sampling where the researcher intentionally under-represents one or more groups
What is the vocal minority?
Small group of the population that expresses their perception, opinions, and participates in studies
What is the silent majority?
Large group with an opinion but stays silent, largely due to lack of vested interest or strong opinions about the issue
What is the margin of error?
Extent or amount of sampling error in results; primarily determined by sample size (larger sample size = smaller MOE)
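The sample-size relationship can be shown with the standard margin-of-error approximation for a proportion; the sample sizes here are illustrative:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion:
    MOE = z * sqrt(p * (1 - p) / n). Larger n -> smaller MOE."""
    return z * math.sqrt(p * (1 - p) / n)

moe_small = margin_of_error(100)   # about +/- 10 percentage points
moe_large = margin_of_error(1000)  # about +/- 3 percentage points
```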
How high are response rates for most studies?
Overall rates rarely reach 30% and vary by survey mode
What are the survey modes?
Face-to-face interviews, phone interviews, mail, magazine, internet-based
Who tends to respond to surveys?
Strongly opinionated people/those who want to complain, people with interest, people with time; the disparity between those who choose to respond and those who don't is a potential threat to external validity
What is an anonymous survey?
Researcher does not know or have the participant's identity
What is a confidential survey?
Researcher knows participant's identity but removes identifying information from collected/reported data
What is a high stakes setting for a survey?
Settings where some desirable outcome is dependent on participant's responses on the measures or assessments
What is a low stakes setting for a survey?
Settings where there is little or no consequence or outcome from participant's responses to the measure or assessments
What are biased samples?
Some members of the population have a much higher probability of being included in the sample compared to others; groups are overrepresented or underrepresented on some characteristics, usually personality
What are some characteristics of participants that tend to bias samples?
Willing to respond/participate, stronger opinions, more willing to share ideas with others, more engaged, motivated by incentives offered
What are the three ways samples can be biased?
Being approachable/convenient to contact, being available/contactable, being eager/willing to respond/participate
What is careless responding?
Inattentiveness, arbitrary response patterns, or an unwillingness to comply with testing demands
How do we deter careless responding?
Being non-anonymous, having instructional warnings, asking "nicely" for good data, having a live survey, having rewards, threatening with punishments
How do we detect careless responses?
Response latency (how long it takes to answer), detection/instructed items (attention check), invariability and consistency approaches (check if answers are TOO similar or NOT similar enough), outlier analysis
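The invariability check above is often done as a "long-string" analysis: count the longest streak of identical answers. A minimal sketch, with a hypothetical cutoff of 8 identical answers:

```python
def longest_run(responses):
    """Length of the longest streak of identical answers --
    a simple invariability index for straight-line responding."""
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def flag_careless(respondents, max_run=8):
    # flag anyone whose identical-answer streak exceeds the cutoff
    return [rid for rid, resp in respondents.items() if longest_run(resp) > max_run]

respondents = {
    "p1": [3, 4, 2, 5, 1, 3, 4, 2, 5, 1],  # varied answers
    "p2": [4, 4, 4, 4, 4, 4, 4, 4, 4, 4],  # straight-lining
}
```

The cutoff is a judgment call; too-strict cutoffs also catch honest respondents with genuinely consistent attitudes.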
How do we control for careless responses?
Using bogus items, designing studies to ensure participant will be willing to answer
What is response distortion?
Conscious or unconscious distortion of responses to create a positive impression
How do we deter response distortion?
Forced-choice formats, warnings, verifications, and threats, asking for elaboration
How do we detect response distortion?
Bogus items, lie and SDR measurements, inconsistency responding, response latency
What is socially desirable responding/impression management?
People respond in a way that makes them look good instead of being honest; creates biased data that doesn't represent the participant's true standing on the matter
What is self-deception?
Self-view that is not completely accurate, usually a positive view you have about yourself; you think it's true but your actions show otherwise
What are the characteristics of self-report/survey items?
Ask people questions about themselves in a questionnaire or interview; responses are measured variables since they are being recorded (in contrast to behavioral observations or physiological measures)
What are the formats for self-report item writing?
Constructed response/write-in, true/false, multiple choice, Likert scale, single issue items, clear questions and response options, loaded items
What are possible effects?
Adjacent-question effects, question-order effects, diagnostic effects due to loaded items, careless responding, response distortion, socially desirable responding, inattentiveness or unwillingness to comply
What are forced choice items?
Selecting the best answer from the options, no option to choose N/A
What are leading questions?
Having wording that leads people to a particular response; questions should be worded as neutrally as possible
What are double-barreled questions?
Ask two questions at once, which confounds the response; ask simple questions one at a time
What are double negatives?
Negatively worded question that also contains negative phrasing; remove negative wording if possible, and avoid disagreement with a negative
What are correlational designs?
Research designs where we measure two or more variables and attempt to determine the degree of relationship between them
What are correlational designs characterized by?
No manipulation, low control, no causal inferences
Is a design correlational if it uses a correlation analysis?
NO it does not mean your whole study is correlational, you can manipulate something (experimental design) but still use correlation for part of the analysis; design depends on how the study was CONDUCTED, not how the data was analyzed
What are the types of correlational designs?
Predictive, concurrent, postdictive
What are predictive correlational designs?
IV data are collected before DV data with an appreciable time interval between the two; purpose is to predict future outcomes
What are concurrent correlational designs?
IV and DV data are collected at about the same time without any appreciable time interval between the two; purpose is to examine how variables relate in the moment
What are postdictive correlational designs?
DV data come from events that occurred in the past, before the IV data are collected; you begin with the outcome and then go backwards to find possible predictors; especially vulnerable to bias because you're relying on people's memory of past events
What are archival designs?
Research conducted using data that the researcher had no part in collecting, characterized by differences in the time frames of the collection of IV and DV data; no control over how the data were collected, so quality is unknown
How is archival data acquired?
Public records or archives that researcher simply examines/selects for analysis; associated with postdictive design
What are the limitations of archival designs?
Most are collected for unscientific reasons so may not be useful/incomplete/subject to bias; by nature carried out after the fact which makes ruling out alternative hypotheses for observed correlations difficult (reliance on post hoc explanations elevates susceptibility to alt. explanations)
What are primary research designs?
Researchers collect new and original data from participants (experiments, surveys, observations, interviews, etc.)
What are secondary research designs?
Uses already existing data rather than collecting new data, gets effect sizes from published/unpublished studies and makes a summary conclusion with all the research literature; this is meta-analysis
What is sampling error?
Error that results from using sample that gives incomplete information about a population; small samples have more unstable effect sizes while large samples have more stable effect sizes
What is the purpose of meta-analysis in regards to sampling error?
Controls for sampling error and obtains more stable effect size estimates
What is statistical significance?
Indicates how likely it is that an experimenter's results were due to chance rather than an actual effect; based on p-values --> if p < .05, effect/result is statistically significant
What is practical significance?
Indicates whether a result is large enough to be meaningful in a real-world context; based on effect size/magnitude of effect, this is what meta-analysis focuses on
What is Cohen's d?
Represents the standardized difference between the two means in experimental data; measures the strength and direction of a treatment
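The standardized mean difference can be computed by hand; the treatment and control scores below are made up for illustration:

```python
import statistics

def cohens_d(group1, group2):
    """Cohen's d: difference between the two group means
    divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)  # sample variances
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

treatment = [5, 6, 7, 8, 9]
control   = [3, 4, 5, 6, 7]
d = cohens_d(treatment, control)  # means differ by 2, pooled SD ~1.58, so d ~1.26
```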
What is Pearson's r?
Measures the strength and direction of a relationship between 2 variables (correlational data)
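Pearson's r can likewise be computed from its definition (covariance divided by the product of the standard deviations); the study-hours and exam-scores data are hypothetical:

```python
import statistics

def pearson_r(x, y):
    """Pearson's r: covariance of x and y divided by
    the product of their standard deviations."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

hours  = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 68]
r = pearson_r(hours, scores)  # close to +1: strong positive relationship
```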
What is first order meta-analysis?
Most common type; combines effect sizes from individual primary studies to get a single, overall estimate
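Combining effect sizes can be sketched as a sample-size-weighted average; this is a bare-bones simplification (real meta-analyses use inverse-variance weights and heterogeneity statistics), and the study values are hypothetical:

```python
def pooled_effect_size(studies):
    """Sample-size-weighted mean effect size: larger studies,
    which have less sampling error, count for more."""
    total_n = sum(n for _, n in studies)
    return sum(d * n for d, n in studies) / total_n

# (effect size d, sample size n) from hypothetical primary studies
studies = [(0.80, 20), (0.30, 200), (0.50, 80)]
overall = pooled_effect_size(studies)  # pulled toward the large-n studies
```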
What is second-order meta-analysis?
Meta-analysis of meta-analyses; gets findings from multiple meta-analyses on the same topic instead of individual studies and combines their effect size to yield a more stable estimate
What are concerns/limitations of meta-analysis?
Garbage In-Garbage Out, apples and oranges, number of primary studies available, selection of studies, file drawer problem, causality concerns
What does garbage in-garbage out mean?
If the primary studies are poor quality, the meta-analysis will also be poor
What is the "apples and oranges" concern?
Studies must measure the same construct; putting mismatched studies together distorts results
What is the concern with number of primary studies available for meta-analysis?
Fewer studies lead to weaker meta-analysis results, so the more the better
What is the selection of studies concern for meta-analysis?
Researchers must make proper judgement calls about inclusion and exclusion of studies, and judgement calls include how to code variables, how to treat outliers, which moderators to test, etc.
What is the file drawer problem?
When a researcher has non-significant or null results and does not publish, instead "filing away" results in the "desk drawer"; this leads to publication bias, and meta-analyses that use only published studies are susceptible to inflated effect sizes since positive findings are overrepresented
What are causality concerns with meta-analysis?
You can infer causality only if the primary studies used were causal (experiments); if they are correlational, meta-analysis will also be correlational