Informal observations
When people make observations without any systematic procedure for conducting them or for assessing the accuracy of what they observed.
Selective observations
When people see only those patterns that they want to see, or when they assume that only the patterns they have directly experienced actually exist.
Overgeneralization
Assuming that broad patterns exist based on very limited observations.
Anecdotal evidence
Data based on people’s personal experiences that has not been collected systematically.
Theories
Systematic explanations of a natural or social behavior, event, or other phenomenon.
Empirical evidence
Data collected through a scientific process.
Peer review
A formal process in which other researchers review a scholarly work to ensure that it meets the standards and expectations of their field.
Authority
A socially defined source of knowledge that might shape our beliefs about what is true and what is not true.
Quantitative methods
Types of research that generate or analyze data that can be represented by and condensed into numbers. Survey research is probably the most common quantitative method in sociology, but methods such as content analysis can also be conducted in a way that yields quantitative data.
Qualitative methods
Types of research that generate or analyze data involving words, pictures, and other symbols beyond numbers. Two of the most common qualitative methods in sociology include ethnographic observation and in-depth interviewing.
Sample
The subset of the larger population that the researcher has collected data from, and that represents the target population within the study.
Cases
Members of the sample that the researcher has gathered data on, such as the individual interviewees or organizations being studied.
Target population
The larger group (of people, organizations, objects, etc.) that a researcher is interested in learning about, and that their research question applies to. (Also referred to as a population of interest.)
Population of interest
The larger group (of people, organizations, objects, etc.) that a researcher is interested in learning about and that their research question applies to. (Also referred to as a target population.)
Basic research
Research conducted for its own sake. (Also known as pure science.)
Applied research
A type of research that applies scientific knowledge to a practical problem.
Evaluation research
A type of research that aims to assess whether social programs are effective in achieving their objectives.
Theoretical models
Theories that provide a simplified understanding of some process. Theoretical models (also just called “models”) reduce a complex phenomenon into its most important parts.
Levels of analysis
The different levels of aggregation at which social scientists can study phenomena, ranging from the macro (communities, societies, or countries) to the meso (organizations or other kinds of groups) to the micro (individuals).
Operational definitions
A definition or procedure for how researchers actually measure an abstract concept when they are collecting data.
Operationalization
The stage of the research process at which the researcher specifies explicitly and clearly how a concept will be measured.
Variable
A quantity or characteristic that can vary. Although scientists often use this term interchangeably with concept, a variable is technically the operational definition of a concept—the way the abstract concept is measured in the real world.
Research instrument
Particular tools that are used in research to measure concepts, such as a survey questionnaire or interview guide.
Independent variable
A variable that a researcher believes explains changes in another variable. Specifically, changes in the independent variable are thought to cause changes in the other variable (the dependent variable). Independent variables are also known as explanatory variables. In experiments, the independent variable that the researcher manipulates is called the experimental stimulus or treatment.
Dependent variable
A variable thought to be influenced or changed by another variable (the independent variable). Dependent variables are also referred to as response variables, outcome variables, and outcome measures.
Correlation
When variables are related to one another, in the sense that changes in one variable are associated with changes in another variable. (Correlation is also referred to as association.) Note that observing that two variables are correlated is not by itself evidence that changes in one variable cause changes in the other (i.e., correlation does not necessarily mean causation).
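As a minimal illustration, the Python sketch below computes a Pearson correlation coefficient for two made-up variables (the names and values are hypothetical); a coefficient near +1 or −1 signals a strong association but, as the definition notes, says nothing by itself about causation.

```python
# Illustrative sketch: computing a Pearson correlation coefficient.
# The two lists below are made-up values for two hypothetical variables.
from statistics import correlation  # available in Python 3.10+

years_of_education = [10, 12, 12, 14, 16, 16, 18, 20]
annual_income_thousands = [28, 33, 31, 40, 52, 49, 60, 72]

r = correlation(years_of_education, annual_income_thousands)
print(f"Pearson r = {r:.2f}")  # a value near +1 indicates a strong positive association

# Note: even a strong correlation does not by itself show that one
# variable causes the other (correlation does not mean causation).
```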
Positive relationship
A type of relationship between two numerical variables in which the value of one variable goes up as the value of the other variable goes up, and vice versa.
Negative relationship (or inverse relationship)
A type of relationship between two numerical variables in which the value of one variable goes down as the value of the other variable goes up, and vice versa. (Also called an inverse relationship.)
Hypothesis
A scientific conjecture—an educated guess—about how the various concepts being studied are related, which researchers develop based on logic or the findings of past research.
Deductive approach
An approach to empirical investigation in which researchers start with a social theory that they find noteworthy and then test its implications with data. (Also referred to with the terms deduction or deductive analysis.)
Inductive approach
An approach to empirical investigation in which researchers start with a set of observations and use the empirical evidence they gather to create a more general set of propositions about how the world operates. (Also referred to as induction or inductive analysis.)
Causal mechanism
The specific process or pathway by which one concept affects another. (Also referred to as a mediating concept, mediating variable, or linking concept.)
Causal story (or explanatory story)
A theory of how exactly changes in one concept lead to changes in another concept. (Also referred to as an explanatory story or just a “story.”)
Moderation (also known as interaction)
When a concept (or variable) influences the relationship between two other concepts (or variables). (Also referred to as interaction or an interaction effect.) Specifically, the presence of this moderating concept (also called a conditioning concept) weakens or strengthens (or otherwise affects) the relationship between two concepts.
Reverse causality
A situation in which researchers believe that a change in concept A (or the independent variable) causes a change in concept B (or the dependent variable), but the opposite is actually the case.
WEIRD societies
Societies that fall into the categories of “Western, educated, industrialized, rich, and democratic”—which, given inequalities in where scientific research occurs, tend to be where samples for many studies are drawn.
Structural functionalism
A major theoretical approach that focuses on the interrelations between various parts of society and how each part works with the others to make society function in the way that it does—much like parts of the body work together to help an organism to thrive.
Positivism
A paradigm of scientific knowledge that prioritizes principles of objectivity, knowability, and deductive logic.
Research design
The planning process for a scientific study, which typically involves a thorough review of the relevant literature, the formulation of a focused research question, and a detailed proposal for the methodological approach that will be used to answer that question.
Research question
The question a researcher hopes to answer by collecting and analyzing data for an empirical study.
Academic literature
The existing scientific studies that relate to a particular phenomenon.
Confirmation bias
A natural tendency to interpret data in ways that support, or “confirm,” one’s existing views, which can lead to flawed research findings that reflect the researcher’s personal biases.
Participant observation
A type of ethnographic observation where researchers get involved in the activities or organizations they are studying, taking on more or less formal roles as event participants or members of groups.
Empirical questions
Questions that have to do with our factual reality and that can be answered through research.
Normative questions
Questions that concern what norms or standards society should have, and whose answers therefore depend on people’s moral opinions. Research can inform, but not answer, normative questions.
Exploratory research
A type of research that examines new areas of inquiry, with the goals of (1) scoping out the magnitude or extent of a particular phenomenon, problem, or behavior; (2) generating initial ideas or hunches about that phenomenon; or (3) testing the feasibility of undertaking a more extensive study regarding that phenomenon.
Descriptive research
A type of research directed at making careful observations and generating detailed documentation about a phenomenon of interest.
Explanatory research
A type of research that seeks explanations of observed behaviors, problems, or other phenomena. Explanatory research seeks answers to “why” and “how” questions.
Abductive approach
An approach to empirical investigation in which researchers apply a particular theory to the social context they are examining and then look for deviations from that theory. (Also referred to with the terms abduction or abductive analysis.)
Literature review
A summary, analysis, and synthesis of the most significant published research on a scholarly topic.
Empirical papers
Papers that report the results of a quantitative or qualitative data analysis conducted by the author, oftentimes with original data collection as well.
Theoretical papers
Papers that focus on elaborating a conceptual model or framework for understanding a problem rather than discussing data the author has collected or analyzed. (Also referred to as theory papers.)
Gray literature
Research and information produced by nonacademics, including researchers working for government agencies, advocacy organizations, polling outfits, and think tanks.
Systematic reviews
A synthesis of past research usually focused on a narrow empirical question. More common in medical and policy fields, systematic reviews typically attempt to make precise conclusions about the effect of a specific intervention. They usually draw on the existing quantitative literature, though some cover qualitative work.
Signposting
Signaling the organization and structure of a paper (or presentation) to its readers (or audience) by stating and reiterating its key points or arguments.
In-depth interviews
Semi-structured (and, more rarely, unstructured) interviews that focus on generating rich qualitative detail about a topic. (Also called qualitative interviews.) During in-depth interviews, researchers prioritize open-ended questions (which give respondents more flexibility to discuss what they think is important) and probes (predetermined or improvised follow-up questions).
Ethnographic observation
A qualitative method of studying a phenomenon within its social context by doing first-hand observations and providing detailed descriptions.
Bystander observation
A type of ethnographic observation in which researchers choose not to get involved in the activities or organizations they are studying, typically with the goal of being more impartial in their assessments of what they observe. (Also known as direct observation.)
Survey
A quantitative method of research (formally called survey research) that involves posing the same set of predetermined questions, typically in a written format, to a sample of individuals.
Mixed methods research design
A research design that uses qualitative and quantitative techniques jointly within a single study. (Also referred to as mixed methods or a mixed-methods approach.)
Unit of analysis
The class of phenomena (e.g., individuals, groups, objects, societies) that researchers want to learn about through their research.
Unit of observation
The class of phenomena (e.g., individuals, groups, objects, societies) that researchers can actually observe for their study, which may also be their unit of analysis or may be another unit that provides indirect knowledge of their unit of analysis.
Probability sampling
A type of sampling in which the researchers know the likelihood that a person (or other unit of analysis) in the population will be selected for membership in the sample. (Also called random sampling.) Probability sampling is widely used in quantitative studies.
Nonprobability sampling
A type of sampling in which the researchers do not know the likelihood that a person (or other unit of analysis) in the population will be selected for membership in the sample. Nonprobability sampling is common in qualitative research.
Random sampling/selection
A sampling process that ensures that cases from the population are picked at random.
Representative sample
A sample whose characteristics are similar to those of the population from which it was drawn, which means that findings from that sample can be reasonably generalized to the population.
Sampling frame
A list of members of a population that is available to researchers, which they use to select cases for their sample. Ideally, the sampling frame includes every single member of that population.
Sampling bias
A type of bias that occurs when the elements selected for inclusion in a study do not represent the larger population from which they were drawn.
Bias
A systematic error that may make research findings inaccurate in some way. Note that the term “bias” in this context does not just refer to the researcher’s personal biases, but to anything that causes a study’s results to fail to truthfully represent reality.
Sampled population
All the people (or other units of analysis) whom researchers seek to recruit from the population of interest.
Generalizability
The extent to which a study’s results can reasonably tell us something about the larger population from which its sample was drawn.
Sampling error
The difference between the statistics obtained from a sample and the actual parameters of a population. Probability sampling allows for the calculation of the sampling error (also called random sampling error) that is expected given the size of the sample being used.
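As a rough illustration of how expected sampling error depends on sample size, the Python sketch below uses the standard normal-approximation formula for the margin of error of a sample proportion; the proportion and sample sizes are made-up values, and simple random sampling is assumed.

```python
# Illustrative sketch: approximate standard error and 95% margin of error
# for a sample proportion under simple random sampling.
import math

def margin_of_error(p: float, n: int) -> float:
    """Approximate 95% margin of error for a sample proportion p with sample size n."""
    standard_error = math.sqrt(p * (1 - p) / n)
    return 1.96 * standard_error  # 1.96 corresponds to 95% confidence under a normal approximation

for n in (100, 400, 1600):
    print(n, round(margin_of_error(0.5, n), 3))
# The expected sampling error shrinks roughly by half each time the sample size quadruples.
```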
Simple random sampling
A sampling technique where the researcher gives all members of a population (more accurately, of a sampling frame) an equal probability of being selected.
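A minimal Python sketch of the idea, assuming a hypothetical sampling frame of 1,000 people: drawing without replacement gives every member of the frame the same chance of selection.

```python
# Illustrative sketch of simple random sampling from a hypothetical frame.
import random

sampling_frame = [f"person_{i}" for i in range(1, 1001)]  # made-up frame of 1,000 people
sample = random.sample(sampling_frame, k=50)              # draw 50 cases without replacement
```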
Systematic sampling
A sampling technique where the researcher selects elements from a sampling frame in specified intervals—for instance, every kth element on the list (where the selection interval k is calculated by dividing the total number of population elements by the desired sample size). To allow for an equal chance that every element could be selected, it is also important that the starting point be randomly chosen from within the first k elements on the list.
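A minimal Python sketch of this procedure, again assuming a hypothetical frame of 1,000 people and a desired sample size of 50: the selection interval k works out to 20, and the starting point is chosen at random from within the first k elements.

```python
# Illustrative sketch of systematic sampling: select every k-th element from
# the sampling frame, starting at a randomly chosen point within the first k.
import random

sampling_frame = [f"person_{i}" for i in range(1, 1001)]  # made-up frame of 1,000 people
desired_sample_size = 50
k = len(sampling_frame) // desired_sample_size            # selection interval: 1000 / 50 = 20
start = random.randrange(k)                               # random start within the first k elements
sample = sampling_frame[start::k]                         # every k-th element from that start
```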
Stratified sampling
A sampling technique where researchers divide the study population into two or more mutually exclusive subgroups (known as strata) and then draw a sample from each subgroup. Stratified sampling is used to ensure that the sample adequately represents the identified subgroups.
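A minimal Python sketch, assuming three hypothetical strata defined by region and a proportionate allocation across them (other allocation rules, such as oversampling small strata, are also common).

```python
# Illustrative sketch of stratified sampling: divide the frame into strata
# and draw a random sample from each stratum separately.
import random

# Hypothetical strata: lists of IDs grouped by region.
strata = {
    "urban":    [f"urban_{i}" for i in range(600)],
    "suburban": [f"suburban_{i}" for i in range(300)],
    "rural":    [f"rural_{i}" for i in range(100)],
}

# Proportionate allocation: each stratum contributes in proportion to its size.
total = sum(len(members) for members in strata.values())
sample = []
for name, members in strata.items():
    n_stratum = round(50 * len(members) / total)  # stratum's share of an overall sample of 50
    sample.extend(random.sample(members, n_stratum))
```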
Cluster sampling
A sampling technique in which a researcher begins by sampling groups (or clusters) of population elements and then selects elements from within those groups.
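A minimal Python sketch, assuming hypothetical clusters of schools and students: whole schools are sampled first, and students are then sampled within the chosen schools.

```python
# Illustrative sketch of cluster sampling: first sample whole clusters
# (e.g., schools), then sample elements (e.g., students) within them.
import random

# Hypothetical clusters: 20 schools, each with 30 students.
clusters = {f"school_{s}": [f"school_{s}_student_{i}" for i in range(30)] for s in range(20)}

chosen_schools = random.sample(list(clusters), k=5)              # stage 1: sample 5 schools
sample = [student
          for school in chosen_schools
          for student in random.sample(clusters[school], k=10)]  # stage 2: 10 students per school
```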
Pilot testing
Any preliminary vetting of a survey questionnaire, interview guide, or other research instrument.
Purposive sampling
A nonprobability sampling approach where the selection of cases is guided by the researcher’s theory about what concepts and processes matter. (Also called theoretical sampling.)
Convenience sampling
A sampling technique in which a researcher draws a sample from part of the population that is convenient to obtain—for instance, because potential interviewees are located near the researcher or otherwise are readily available.
Snowball sampling
A sampling technique where researchers ask study participants they have already recruited to help identify additional participants.
Inclusion/Exclusion criteria
Criteria that a researcher uses to decide whether a person (or other unit of analysis) should be included in or excluded from a sample.
Self-selection bias
Bias that occurs when certain types of people are more likely to volunteer for (or be selected into) a sample. For example, people with strong opinions on an issue may be more likely to participate in a study.
Social desirability bias
Bias that occurs when participants in a research study answer or act in particular ways to present themselves to the researcher in a more positive light.
Focus group
An interview conducted with a group of respondents at the same time. During a focus group session, one or more moderators will typically ask the group questions about a particular political issue, product, or other topic.
Ethnography
A qualitative method of studying a phenomenon within its social context by doing first-hand observations and providing detailed descriptions. The word “ethnography” can also refer to studies that utilize ethnographic observation, including the books that ethnographers write based on such research.
In-depth interviewing
A qualitative method of research that involves conducting semi-structured or unstructured interviews, with a focus on asking individuals open-ended questions to elicit detailed information. (Also called qualitative interviewing.)
Triangulation
Using one research method to evaluate or extend the findings derived from another method.
Reflexivity
A self-reflective process that researchers engage in to understand how their own identity, beliefs, dispositions, actions, and practices may have influenced their research, especially the results they found.
Grounded theory
A purist approach to inductive investigation in which researchers start from a clean slate, letting data guide them to new theories rather than beginning with a set of existing theories to build on or test. By keeping an open mind while immersing themselves in real-world settings, researchers following the grounded theory approach should be able to generate novel theories that are not constrained by any preconceptions or well-established views.
Extended case method
An alternative strategy for qualitative research that does not follow the purely inductive approach of grounded theory. When using the extended case method (also known as the extended case study approach), researchers start with an existing theory and look for one or more cases that deviate from that theory, which they can use to explain why the theory falls short. Observations of the chosen case or cases are used to find and correct problems in the theory.
Field jottings
Descriptive notes that researchers write—in a notepad or through more discreet means—while they are observing in the field or during an interview. Their field jottings supply the raw material that they later use to draft more formal field notes.
Field note
A memo that researchers write to describe and summarize their observations (and sometimes also the content of their interviews) during a particular period of time. Field notes (also written as fieldnotes) often contain the researchers’ personal reactions and preliminary analysis as well. To ensure adequate recall, researchers try to write field notes right after their last session of data collection, drawing on the field jottings they wrote while observing.
Reliability
The consistency of a measure. A measure is said to be reliable if it gives the same result upon repeated applications to the same phenomenon.
Validity
The accuracy of a measure (also called construct validity). A measure is said to be valid if it accurately reflects the meaning of the concept under study.
Test-retest reliability
A method of assessing the reliability of a measure by collecting data from a sample and then retesting the same sample after a period of time. If the measure is reliable, the two measurements should be consistent.
Inter-rater reliability
A method of assessing the reliability of a measure by examining the degree to which two or more observers (raters) agree on the measurement of one or more cases (i.e., on the values assigned to those cases).
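As a simple illustration, the Python sketch below computes percent agreement between two hypothetical raters; this is only one basic way to quantify inter-rater reliability (chance-corrected statistics such as Cohen’s kappa are often preferred).

```python
# Illustrative sketch of inter-rater reliability as simple percent agreement:
# the share of cases on which two raters assign the same value.
rater_a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]  # made-up codings
rater_b = ["yes", "no",  "no", "no", "yes", "no", "yes", "yes"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)
print(f"Percent agreement: {percent_agreement:.0%}")  # 6 of 8 cases agree -> 75%
```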
Internal consistency
The degree to which participants’ answers to items within a multiple-item measure are consistent. Specifically, the answers for each item in an index or scale should be correlated with each other, as they all are supposed to measure aspects of the same overall concept.
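As one common way to quantify internal consistency, the Python sketch below computes Cronbach’s alpha for a hypothetical four-item scale (the responses are made up); values closer to 1 indicate that the items hang together more consistently.

```python
# Illustrative sketch of internal consistency via Cronbach's alpha.
# Rows are respondents, columns are the four items of a hypothetical scale.
from statistics import pvariance

responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
]

k = len(responses[0])                                   # number of items in the scale
item_variances = [pvariance([row[i] for row in responses]) for i in range(k)]
total_variance = pvariance([sum(row) for row in responses])

# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total scores)
alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")  # values near 1 indicate highly consistent items
```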
Content validity
A method used to assess the validity of a measure where a researcher evaluates whether the measure covers all the possible meanings, domains, and dimensions of a concept.
Predictive validity
A method of assessing the validity of a measure by determining if it predicts future phenomena that it should be able to predict. For example, a standardized test that successfully predicts a student’s grades in college would arguably have predictive validity as a measure of academic ability.
Convergent validity
A method used to assess the validity of a measure by comparing scores on that measure to those derived from an existing measure of the same or a similar concept. A strong correlation between the two measures is evidence that they are both measuring the same thing, and that the new measure is therefore valid in this way. (Compare to discriminant validity.)