How are surveys/interviews different from other forms of research?
Surveys/interviews collect self-reported data emphasizing opinions and behaviors rather than observed outcomes.
What can be studied with surveys?
Attitudes, beliefs, behaviors, demographics, preferences, and psychological constructs.
Strengths and weaknesses of mail surveys
Strengths: inexpensive, broad reach. Weaknesses: low response rate, delayed responses.
Strengths and weaknesses of internet surveys
Strengths: fast, cost-effective, wide access. Weaknesses: sampling bias, tech barriers.
Strengths and weaknesses of group surveys
Strengths: efficient, consistent environment. Weaknesses: peer influence, limited generalizability.
Strengths and weaknesses of phone interviews
Strengths: clarification possible, personal touch. Weaknesses: costly, time-consuming.
Strengths and weaknesses of personal interviews
Strengths: rich data, flexible. Weaknesses: expensive, time-intensive.
How to counteract weaknesses in survey methods?
Use mixed methods, improve clarity, offer incentives, ensure anonymity, use follow-ups.
How should surveys be constructed?
Use clear, concise, neutral questions; pilot test; use logical flow.
Closed vs open-ended questions
Closed: easy to analyze but limited detail. Open-ended: richer data but harder to code.
What are scales in surveys?
Measurement tools (e.g., Likert); should be clear, avoid ambiguity or bias.
Types of questions to avoid
Leading, loaded, double-barreled, ambiguous questions.
Sampling techniques and which are better
Random, stratified, convenience, snowball. Random best for generalizability.
Difference between nomothetic and idiographic research
Nomothetic seeks general laws; idiographic focuses on individual understanding.
Why use idiographic or nomothetic research?
Idiographic for unique cases; nomothetic for broader applications.
Single-subject vs case study
Single-subject: systematic manipulation; Case study: descriptive exploration.
History of single-subject design
Origin in behaviorism, especially B.F. Skinner’s work.
Why use baselines in single-subject research?
Establish comparison; should be stable and consistent.
Different single-subject designs
AB, ABA, ABAB, multiple baseline, changing criterion.
Carryover effects in single-subject designs
Effects from previous condition that influence results, lowering validity.
Challenges in single-subject designs
Generalizability, variability, reactivity; addressed with replication and blinding.
What is a non-reactive measure?
A measure that does not influence participants’ behavior.
What are traces and products?
Traces: physical evidence. Products: artifacts from behavior.
Accretion vs erosion traces
Accretion: build-up (e.g. trash). Erosion: wear (e.g. carpet).
Controlled vs natural traces
Controlled: researcher-managed. Natural: real-world occurrence.
Confounds in non-reactive studies
Ambiguous causality, selective survival of data.
Ethical considerations in trace research
Consent, privacy, potential harm.
Difference between document and record; continuous vs discontinuous
Document: written artifact. Record: systematic log. Continuous: ongoing; discontinuous: periodic.
Processing archival data
Coding, categorizing, quantifying existing records.
Confounds in archival research
Incomplete records, selective reporting, outdated info.
Ethical issues in archival studies
Privacy, consent, sensitivity, misuse.
AB design
A single-subject research design where 'A' is the baseline and 'B' is the treatment phase.
ABA design
A single-subject design that includes a baseline (A), treatment (B), and return to baseline (A) to assess treatment effect.
ABAB design
A single-subject design with two baseline-treatment cycles (A-B-A-B), increasing internal validity.
Multiple baseline design
A design that applies treatment across different behaviors, settings, or subjects at different times to control for confounds.
Changing criterion design
A design where the behavior target changes step-by-step to demonstrate treatment effectiveness.
Likert scale
A psychometric scale commonly used in surveys to measure attitudes by levels of agreement.
Semantic differential scale
A type of scale using bipolar adjectives (e.g., good–bad) to measure connotative meaning.
Double-barreled question
A question that asks about two things at once, reducing clarity and validity.
Leading question
A question that suggests a particular answer, introducing bias.
Loaded question
A question with built-in assumptions that may be controversial or emotionally charged.
Stratified sampling
A sampling method that divides the population into subgroups and samples from each, ensuring representation.
Convenience sampling
A non-random sampling technique that selects individuals who are easiest to reach.
Snowball sampling
A method where existing participants recruit future participants, useful for hard-to-reach populations.
Systematic sampling
Selecting every nth individual from a list after a random start.
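The "every nth individual after a random start" rule can be sketched in a few lines of Python (the sampling frame and sizes here are hypothetical):

```python
import random

def systematic_sample(frame, n):
    """Select n elements by taking every k-th member after a random start."""
    k = len(frame) // n           # sampling interval
    start = random.randrange(k)   # random start within the first interval
    return frame[start::k][:n]

frame = list(range(1, 101))       # hypothetical frame of 100 members
sample = systematic_sample(frame, 10)
print(sample)                     # 10 members, 10 apart, start chosen at random
```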
Archival research
A method involving analysis of pre-existing data such as records, documents, or databases.
Physical trace measure
A non-reactive method that uses physical evidence of past behavior (e.g., footprints, wear).
questionnaire
a set of questions created to learn about individuals, not meant to be aggregated
sampling bias
occurs when a sample overrepresents some subset of the population and underrepresents other subsets
response rate
calculated by dividing the actual number of responses by the number of potential responses
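The calculation above is simple division; a minimal sketch with made-up numbers:

```python
def response_rate(actual_responses, potential_responses):
    """response rate = actual responses / potential responses"""
    return actual_responses / potential_responses

# hypothetical mail survey: 150 completed out of 500 sent
print(f"{response_rate(150, 500):.0%}")  # 30%
```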
interviewer bias
bias introduced by the interviewer’s expectations or preferences; reduced in internet surveys, where no interviewer is present
group-administered surveys
surveys given in a setting where it is easy for recipients to complete them, so most people comply; the response rate is higher than for mail surveys
socially desirable responses
reflects what is deemed appropriate by society, but are not necessarily reflections of the respondent’s practices or beliefs
funnel structure
a survey structure that begins with general, interesting, and easy-to-answer questions; the questions become more specific and require more thought in the middle of the survey
demographic questions
descriptive questions about the respondent’s social statistics, such as gender, age, income level, etc.
branching
the answers to demographic questions might determine which specific questions the person will see next
filter question
used to determine the next question to ask based on the answer
reliability
the degree to which a measurement tool provides consistent answers
test-retest reliability
measures the degree to which a test generates the same responses upon retesting
alternative-forms reliability
assesses how well two forms of the same test yield comparable results
construct reliability
the degree to which respondents’ replies within the instrument are consistent
split-half reliability
determines if a person’s replies to one half of the items are related to the same person’s replies to the other half of the items
Cronbach’s alpha
assesses internal consistency; a statistical technique that compiles the correlations of every item with every other item within the tool
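The standard formula is alpha = k/(k−1) × (1 − Σ item variances / variance of total scores), where k is the number of items. A minimal sketch using hypothetical Likert responses:

```python
def cronbach_alpha(items):
    """items: one list of scores per item, same respondents in the same order.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)"""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# three Likert items answered by five respondents (hypothetical data)
items = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [5, 4, 2, 4, 3]]
print(round(cronbach_alpha(items), 2))  # 0.86
```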
validity
the degree to which a measuring tool measures what it purports to measure
construct validity
the extent to which the concepts thought to be measured within the tool are actually being measured
face validity
the most straightforward. the degree to which a measurement tool appears to be measuring what it is supposed to be measuring
criterion validity
measures how well the results of an instrument correlate with other outcomes or behaviors
pilot study
a small group of people is given the survey, giving the researcher the opportunity to work out any bugs in the survey questions or the data gathering
population
consists of all the members of a given group to which the research is meant to generalize
sample
a subset of the population
sampling frame
a list of all the members of a population
elements
the members of the sample who are chosen from the sampling frame
random selection
all members of the population are equally likely to be chosen as part of the sample
random sample
elements are randomly chosen from a sampling frame
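Drawing a random sample from a sampling frame is a one-liner with Python's standard library (the frame and sample size here are hypothetical):

```python
import random

sampling_frame = [f"member_{i}" for i in range(1, 1001)]  # all population members
sample = random.sample(sampling_frame, 50)  # each member equally likely to be chosen
print(len(sample))  # 50
```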
systematic sampling
elements are chosen according to some specific plan or strategy rather than purely at random (e.g., every nth name on a list)
cluster sampling
clusters of potential respondents that represent the population are identified, then all of the people in those clusters are included in the sample
quota sampling
a combination of convenience sampling and stratified sampling
nonprobability sampling
each member of the population is not equally likely to be selected and the outcome could easily be a biased sample
time-series design
several measurements are made before and after the introduction of the independent variable
withdrawal design
a series of baseline measurements is compared to the measurements taken after the introduction of the intervention. the intervention is then withdrawn, and measurements are continued.
ABAB design
a research design involving baseline, intervention, then return to baseline, then again the intervention
ABAC design
introduction of a second intervention. baseline → intervention #1 → baseline → new intervention
ABCB design
can be used to assess the effect of an intervention compared with a placebo condition: baseline → intervention → placebo → intervention
reversal design
a new and opposite intervention is introduced
alternating-treatments design
a variation of the ABAB design; this design does not require baseline, and two or more treatments are presented to the participant instead of just one. introduction of treatment may be random or systematic
multiple-baselines design
the effect of a treatment on two or more behaviors is assessed, or the effect of a treatment on a single behavior is assessed across two or more situations
changing-criterion design
avoids withdrawing treatment. used to assess an intervention when the criterion for that intervention is routinely changed
subject bias
the participant’s beliefs about what should happen; a confound in single-subject designs
demand characteristics
what the participant thinks the researcher wants to happen; a confound in single-subject designs
natural trace measures
trace measures that occur without researcher intervention
controlled trace measures
trace measures that involve researcher intervention
products
these are purposeful creations by individuals
selective survival
refers to the notion that some trace or product evidence may not endure over time
selective deposit
refers to the circumstance in which not all traces are equally representative
record
an account or statement created for another person to read, view, or hear
document
not created at someone’s request or for someone else to use. considered more personal than records
data reduction
reducing the amount of information to a more usable amount
content analysis
researcher develops a coding system that is used to record data regarding the content of records
intercoder reliability
determines reliability of the coding system
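One simple index of intercoder reliability is percent agreement (more rigorous indices such as Cohen's kappa also correct for chance agreement); a sketch with hypothetical codes:

```python
def percent_agreement(coder_a, coder_b):
    """Proportion of units both coders placed in the same category."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# two coders categorizing ten records (hypothetical codes)
coder_a = ["pos", "neg", "pos", "neu", "pos", "neg", "neu", "pos", "neg", "pos"]
coder_b = ["pos", "neg", "neu", "neu", "pos", "neg", "neu", "pos", "pos", "pos"]
print(percent_agreement(coder_a, coder_b))  # 0.8
```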