Research Methods Exam 2

Conceptual definition: A researcher’s definition of a variable at the theoretical level. Another name is construct. Example: If a researcher is studying "happiness," they define what happiness means in their study (e.g., a sense of life satisfaction and joy).

Self-report measure: A method of measuring a variable in which people answer questions about themselves in a questionnaire or interview.

Observational measure: A method of measuring a variable by recording observable behaviors or physical traces of behaviors

Physiological measure: A method of measuring a variable by recording biological data

Categorical variable: A variable whose levels are categories (e.g., male and female). Also called a nominal variable.

Quantitative variable: A variable whose values can be recorded as meaningful numbers. Example: Height in inches (e.g., 65 inches, 72 inches).

Ordinal scale: A quantitative measurement scale whose levels represent a ranked order, and in which distances between levels are not equal. Example: A race finish order (1st, 2nd, 3rd) – the time difference between 1st and 2nd may not be the same as between 2nd and 3rd.

Interval scale: A quantitative measurement scale that has no “true zero” and in which the numerals represent equal intervals (distances) between levels. Example: Temperature in Celsius (0°C doesn’t mean "no temperature").

Ratio scale: A quantitative measurement scale in which the numerals have equal intervals and the value of zero truly means “none” of the variable being measured. Example: Weight in pounds (0 pounds means no weight).

Reliability: The consistency of the results of a measure. Example: If a scale shows the same weight when you step on it multiple times, it is reliable.

Validity: Whether a measure actually measures what it’s supposed to. Example: A math test should measure math skills, not reading ability.

Test-retest reliability: The consistency in results every time a measure is used. Example: Taking an IQ test today and again in a month, and getting nearly the same score.

Interrater reliability: The degree to which two or more coders or observers give consistent ratings of a set of targets. Example: Two judges in a talent show giving similar scores to the same performer.

Internal reliability: In a measure that contains several items, the consistency in the pattern of answers, no matter how a question is phrased. Example: A happiness survey with multiple questions (e.g., "I feel joyful" and "I feel satisfied") should get similar responses from the same person.

Correlation coefficient: A single number, ranging from -1.0 to 1.0, that indicates the strength and direction of an association between two variables. Example: A correlation of 0.8 between studying and test scores means studying is strongly linked to higher scores.
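As a concrete illustration, the coefficient can be computed by hand. The study-hours and test-score numbers below are invented for the example; with real data you would use statistical software.

```python
# Pearson correlation coefficient, computed by hand on made-up data.
from math import sqrt

study_hours = [1, 2, 3, 4, 5]          # hypothetical hours studied
test_scores = [55, 60, 70, 72, 85]     # hypothetical exam scores

n = len(study_hours)
mean_x = sum(study_hours) / n
mean_y = sum(test_scores) / n

# Numerator: how the two variables vary together.
num = sum((x - mean_x) * (y - mean_y)
          for x, y in zip(study_hours, test_scores))
# Denominator: how much each variable varies on its own.
den = sqrt(sum((x - mean_x) ** 2 for x in study_hours)) * \
      sqrt(sum((y - mean_y) ** 2 for y in test_scores))

r = num / den  # close to +1.0: a strong positive association
```

Because more study hours line up with higher scores, r comes out strongly positive; a negative r would mean the variables move in opposite directions.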

Slope direction: The upward, downward, or neutral slope of the cluster of data points in a scatterplot.

Strength: A description of an association indicating how closely the data points in a scatterplot cluster along a line of best fit drawn through them. Example: If all points closely follow a line, there is a strong correlation; if they are scattered, the correlation is weak.

Average inter-item correlation (AIC):  The average of how well each survey question correlates with others measuring the same thing. Example: In a stress survey, if most questions (e.g., "Do you feel overwhelmed?" and "Do you feel anxious?") get similar answers, the AIC will be high.

Cronbach’s alpha: A correlation-based statistic that measures a scale’s internal reliability. A statistical measure of how well a set of items in a survey work together.
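The two statistics above are related: one common (standardized) form of Cronbach’s alpha is built directly from the average inter-item correlation. The happiness-item responses below are invented for the illustration.

```python
# Standardized Cronbach's alpha from the average inter-item correlation (AIC).
from itertools import combinations
from statistics import mean

# Hypothetical responses from 5 people on a 3-item happiness scale (1-5).
items = {
    "joyful":    [4, 5, 3, 4, 2],
    "satisfied": [4, 4, 3, 5, 2],
    "content":   [5, 4, 3, 4, 1],
}

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# AIC: average the correlation over every pair of items.
aic = mean(pearson(a, b) for a, b in combinations(items.values(), 2))

# Standardized alpha: more items (k) and a higher AIC both push alpha toward 1.
k = len(items)
alpha = (k * aic) / (1 + (k - 1) * aic)
```

Here the three items track each other closely, so the AIC is high and alpha lands well above the common .70 rule of thumb for acceptable internal reliability.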

Face validity: The extent to which a measure is subjectively considered a plausible operationalization of the conceptual variable in question.

Content validity: The extent to which a measure captures all parts of a defined construct.

Criterion validity: An empirical form of measurement validity that establishes the extent to which a measure is associated with a behavioral outcome with which it should be associated.

Known-groups paradigm: A method for establishing criterion validity, in which a researcher tests two or more groups who are known to differ on the variable of interest, to ensure that they score differently on a measure of that variable. Testing a measure on groups known to be different. Example: A stress test should show higher scores for students before an exam than during vacation.

Convergent validity: An empirical test of the extent to which a self-report measure correlates with other measures of a theoretically similar construct. A measure should be similar to other measures of the same thing. Example: A new happiness survey should have similar results to an already proven happiness survey.

Discriminant validity: A measure should not be too similar to something it shouldn’t measure. Example: A happiness test should not be strongly correlated with an intelligence test, because they measure different things.

Survey: A method of posing questions to people on the telephone, in personal interviews, on written questionnaires, or via the internet. Also called a poll

Poll: A method of posing questions to people on the telephone, in personal interviews, on written questionnaires, or via the internet. Also called a survey

Open-ended question: A survey question format that allows respondents to answer any way they like

Forced-choice question: A survey question format in which respondents give their opinion by picking the best of two or more options.

Likert scale: A survey question format using a rating scale containing multiple response options anchored by the specific terms strongly agree, agree, neither agree nor disagree, disagree, and strongly disagree. A scale that does not follow this format exactly is called a Likert-type scale

Semantic differential format: A survey question format using a response scale whose numbers are anchored with contrasting adjectives

Leading question: A type of question in a survey or poll that is problematic because its wording encourages one response more than others, thereby weakening its construct validity

Double-barreled question: A type of question in a survey or poll that is problematic because it asks two questions in one, thereby weakening its construct validity.

Negatively worded question: A question in a survey or poll that contains negatively phrased statements, making its wording complicated or confusing and potentially weakening its construct validity.

Response set: A shortcut responders may use to answer items in a long survey, rather than responding to the content of each item.

Acquiescence: Answering “yes” or “strongly agree” to every item in a survey or interview. Also called yea-saying.

Fence sitting: Playing it safe by answering in the middle of the scale for every question in a survey or interview

Socially desirable responding: Giving answers on a survey (or other self-report measure) that make one look better than one really is

Faking good: Giving answers on a survey (or other self-report measure) that make one look better than one really is

Faking bad: Giving answers on a survey (or other self-report measure) that make one look worse than one really is

Observational research: The process of watching people or animals and systematically recording how they behave or what they are doing.

Observer bias: A bias that occurs when observer expectations influence the interpretation of participant behavior or the outcome of the study

Observer effect: A change in behavior of study participants in the direction of observer expectations

Masked design: A study design in which the observers are unaware of the experimental condition to which participants have been assigned

Reactivity: A change in behavior of study participants (such as acting less spontaneously) because they are aware they are being watched

Unobtrusive observation: An observation in a study made indirectly, through physical traces of behavior, or made by someone who is hidden or is posing as a bystander

Population: A larger group from which a sample is drawn; the group to which a study’s conclusions are intended to be applied.

Sample: The group of people, animals, or cases used in a study; a subset of the population of interest.

Census: A set of observations that contains all members of the population of interest

Biased Sample: A sample in which some members of the population of interest are systematically left out and therefore the results cannot generalize to the population of interest

Unbiased Sample: A sample in which all members of the population of interest are equally likely to be included (usually through some random method), and therefore the results can generalize to the population of interest.

Convenience Sampling: Choosing a sample based on those who are easiest to access and readily available; a biased sampling technique.

Self-Selection: A form of sampling bias that occurs when a sample contains only people who volunteer to participate

Probability Sampling: A category name for random sampling techniques, such as simple random sampling, stratified random sampling, and cluster sampling, in which a sample is drawn from a population of interest so each member has an equal and known chance of being included in the sample. Also called random sampling.

Non-probability Sampling: A category name for nonrandom sampling techniques, such as convenience, purposive, and quota sampling, that result in a biased sample.

Simple Random Sampling: The most basic form of probability sampling, in which the sample is chosen completely at random from the population of interest (drawing names out of a hat)

Systematic Sampling: A probability sampling technique in which the researcher uses a randomly chosen number N and counts off every Nth member of a population to achieve a sample.
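A quick sketch of the counting-off procedure, using a made-up roster of 100 ID numbers and an interval of N = 10:

```python
# Systematic sampling sketch: random start, then every Nth member.
import random

population = list(range(1, 101))  # hypothetical roster of 100 people
N = 10                            # sampling interval

start = random.randrange(N)       # random starting point within the first N
sample = population[start::N]     # count off every Nth member from there
```

This yields 10 people spaced evenly through the roster; the random start is what keeps the technique a probability method rather than a fixed rule.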

Cluster Sampling: A probability sampling technique in which clusters of participants within the population of interest are selected at random, followed by data collection from all individuals in each cluster

Multistage Sampling: A probability sampling technique involving at least two stages: a random sample of clusters followed by a random sample of people within the selected clusters

Stratified Random Sampling: A form of probability sampling; a random sampling technique in which the researcher identifies particular demographic categories, or strata, and then randomly selects individuals within each category
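A minimal sketch of stratifying by a demographic category; the class-year strata and student IDs below are invented for the example.

```python
# Stratified random sampling sketch: random selection within each stratum.
import random

# Hypothetical strata: class year -> student IDs.
strata = {
    "freshman":  [f"F{i}" for i in range(50)],
    "sophomore": [f"S{i}" for i in range(30)],
    "junior":    [f"J{i}" for i in range(20)],
}

# Draw 10% of each stratum at random, so every category appears in the
# sample in proportion to its size in the population.
sample = []
for members in strata.values():
    k = max(1, len(members) // 10)
    sample.extend(random.sample(members, k))
```

Oversampling (the next term) would simply use a larger k for one stratum than its population share warrants.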

Oversampling: A form of probability sampling; a variation of stratified random sampling in which the researcher intentionally overrepresents one or more groups.

Random Assignment: The use of a random method (e.g., flipping a coin) to assign participants into different experimental groups
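Note that random assignment (placing people into conditions) is distinct from random sampling (choosing who is studied at all). A small sketch with hypothetical participant IDs:

```python
# Random assignment sketch: shuffle participants, then split into conditions.
import random

participants = [f"P{i}" for i in range(8)]  # hypothetical participant IDs
random.shuffle(participants)                # random method, like a coin flip

# Split the shuffled list into two equal-sized experimental groups.
half = len(participants) // 2
treatment = participants[:half]
control = participants[half:]
```

Because the split happens after shuffling, each participant has an equal chance of landing in either condition.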

Purposive Sampling: A biased sampling technique in which only certain kinds of people are included in a sample

Snowball Sampling: A variation on purposive sampling, a biased sampling technique in which participants are asked to recommend acquaintances for the study.

Quota Sampling: A biased sampling technique in which a researcher identifies subsets of the population of interest, sets a target number for each category in the sample, and non-randomly selects individuals within each category until the quotas are filled.

Bivariate correlation: An association that involves exactly two variables

Mean: The average of a set of numbers. You add them all up and divide by how many numbers there are

Effect size: The magnitude, or strength, of a relationship between two or more variables

Statistical Significance: In NHST, the conclusion assigned when p < .05; that is, when it is unlikely the result came from the null-hypothesis population

Replication: The process of conducting a study again to test whether the result is consistent

Outlier: A score that stands out as either much higher or much lower than most of the other scores in a sample

Restriction of Range: In a bivariate correlation, when one variable does not vary enough across the sample to reveal the true strength of the relationship.
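The effect can be shown numerically: the same data look less correlated once one variable is cut down to a narrow slice. All SAT/GPA numbers below are made up for the illustration.

```python
# Restriction of range sketch: correlation weakens when one variable's
# range is restricted.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

sat = [900, 1000, 1100, 1200, 1300, 1400, 1500]  # hypothetical scores
gpa = [2.0, 2.5, 2.6, 3.1, 3.3, 3.2, 3.6]        # hypothetical GPAs

full_r = pearson(sat, gpa)  # strong association over the full range

# Keep only high scorers (SAT >= 1300), as a selective college might:
high = [(s, g) for s, g in zip(sat, gpa) if s >= 1300]
restricted_r = pearson([s for s, _ in high], [g for _, g in high])
```

With only the top slice of SAT scores left, restricted_r comes out noticeably weaker than full_r, even though the underlying relationship is unchanged.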

Curvilinear Association: An association between two variables which is not a straight line; instead, as one of the variables increases, the level of the other variable increases and then decreases (or vice versa)

Directionality Problem: In a correlational study, the occurrence of both variables being measured around the same time, making it unclear which variable in the association came first.

Third-Variable Problem: In a correlational study, the existence of a plausible alternative explanation for the association between two variables.

Spurious Association: A bivariate association between two variables that disappears when the data are separated into subgroups.

Moderator: A variable that, depending on its level, changes the relationship between two other variables.