Research Design
The plan or blueprint for a study that includes the who, what, where, when, why, and how of an investigation.
Experimental Design
A research design aimed at controlling for invalidity through the use of experimental procedures.
Equivalence
The selection and assignment of subjects to experimental and control groups in a way that they are as similar as possible.
Randomization
The random assignment of subjects from a similar population to either the experimental or control group.
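Random assignment can be sketched in a few lines of Python; the function name and the even 50/50 split here are illustrative assumptions, not a prescribed procedure.

```python
import random

def random_assignment(subjects, seed=None):
    """Shuffle the pool and split it into experimental and control groups.
    An even split is assumed here for illustration."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

experimental, control = random_assignment(range(10), seed=42)
```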
Experimental Group
The group that is exposed to stimuli or experimental arrangements during a research study.
Control Group
The group that is not exposed to the treatment or stimuli in an experiment.
Independent Variable
The variable that is manipulated in an experiment to observe its effect on the dependent variable.
Dependent Variable
The variable that is measured in an experiment to see how it is affected by the independent variable.
Pre-experimental Designs
Research designs that lack one or two of the three major elements of experimental designs, such as equivalence or experimental and control groups.
Quasi-experimental Designs
Research designs that rely on matching groups but do not involve random assignment.
True/Classic Experimental Design
A type of experimental design that includes random assignment to treatment and control groups.
Validity Issues
Concerns regarding whether the conclusions drawn from experimental results accurately reflect the reality of the study.
Internal Validity
The extent to which the conclusions of an experiment accurately reflect what has occurred within the study itself.
External Validity
The extent to which the findings of an experiment can be generalized to real-world settings.
Spurious Relationship
A false relationship that can be explained away by the presence of other variables.
Causality Problem
The challenge in establishing a cause-and-effect relationship between two variables.
Conflict of Interest
A situation in which the person conducting the research has a vested interest in the outcome.
Deception in Research
The practice of misleading subjects about the true purpose or nature of the experiment, often used to prevent bias.
Matching
Assuming equivalence by selecting subjects on the basis of matching certain characteristics such as age, sex, and race
Three elements of Classic Experimental Design
Equivalence, pretests/initial observation and posttests/final observation, and experimental and control groups
field experiments
formal experiments conducted in a natural setting (and not in a lab)
natural experiments
experiments that occur outside controlled settings, often in the course of normal social events
time series designs, multiple interrupted time series designs, counter-balanced designs
quasi-experimental designs rely on matching groups
steps in evaluation research
problem formulation, design of instruments, research design, data collection, data analysis, findings & conclusions, utilization
1 - specify variables you think are related
2 - specify measurement of variables
3 - hypothesize correlation, strength of relationship, statistical significance
4 - specify tests for spuriousness
to test a hypothesis
demonstrate that a relationship exists between the key variables
specify the time order of the relationship
eliminate rival causal factors
resolution of the causality problem
variables related to internal validity
history, maturation, testing, instrumentation, statistical regression, selection bias, experimental mortality
variables related to external validity
testing effects, selection bias, reactivity or awareness of being studied, multiple treatment interferences
statistical regression
The tendency of subjects, over the course of a study, to regress toward the average score, especially those who scored very high or very low at pre-test
maturation
the biological or psychological changes in the respondents during the course of a study that are not due to the experimental variable
placebo effect
the tendency of control groups to react to believed treatment in a positive manner
rival causal factors
variables other than X (the treatment) that may be responsible for the outcome
cross sectional
involve studies of one group at one time
Policy Analysis
The study of proposals, programs, decisions, and effects of policy.
Evaluation Research
Applied research that is intended to supply scientifically valid information with which to guide public policy - provide feedback to policymakers in concrete, measurable terms.
Impact Evaluation
Which of the types of evaluation specifically focuses on the outcome and is the most common type of evaluation research?
Cost/Benefit Studies
Determine whether the results of a program can be justified by its expenses.
Process Evaluation
Establishment of relationships between results and project inputs and activities.
Monitoring
An investigation as to whether the activities are related to the inputs is known as
Logistical Problems
getting subjects to do what they are supposed to do; logistical details usually fall to program administrators, who may not follow the evaluation procedure as strictly as needed; evaluation research occurs within the context of real life
Ethics in Evaluation Research
Involves researchers in the political, ideological, and ethical issues of program/policy content; involves choosing who gets interventions and who does not; pressure to modify results or exclude negative results
Needs Assessment Studies
Aim to determine the existence and extent of problems.
Quasi Experimental Design
Research design often used when classical experimental design is not feasible.
Research Designs for Evaluations
Includes classic experimental, quasi-experimental, and preexperimental designs.
Utilization
The application of findings from evaluation research to improve programs.
identifying problems
demands are expressed for government action (usually social, but can be fiscal, economic, or even environmental)
formulate policy to alleviate problem
agenda is set for public discussion (program proposals are developed to resolve problem)
legitimate policy
select proposal
build political support
enact law
organizing bureaucracies
providing payments or services
levying taxes
evaluate policy
studying programs
reporting “outputs” of government programs
evaluating “impacts” of programs on society
suggesting changes and adjustments
assessment
the enumeration of the need for an activity or resource
problem formulation, research design, design of instruments, data collection, data/statistical analysis, findings and conclusions, utilization
steps in evaluation research
prefer classic experimental
randomization, control and experimental groups
more money; programs do not want to withhold services from participants; can result in non-equivalent control and experimental groups; reliance on program is high
problems with the preferred classic experimental design
purpose of research, duration of evaluation, data to be collected, what evaluator needs from host agency/program, form and timing of results
before research/evaluation begins, need agreement on
ensuring research design does not interfere with operations of host agency/program, arranging for/ensuring host agency/program receives timely feedback on research progress and results, considering reimbursing the host agency/program for its expenses in dealing with evaluation
funders/agencies should assist researchers with establishing favorable relationship with host agency/program by
pitfalls in evaluation research
poor evaluation design and methodology, poor data analysis, unethical evaluations, naive or unprepared evaluation staff, poor relationships between evaluation and program staff, co-optation of evaluation staff and/or design, poor-quality data, poor literature reviews, focus on method rather than process
implications may not be presented in a way that is understandable to the non-researcher, results may contradict deeply held beliefs, program administrators may have a vested interest in the results
why evaluation research results are not always put into practice
evaluation
________ is a form of applied research
needs assessment studies
____ aim to determine the existence and extent of problems
nonequivalent control group
Professor Yee wants to do an evaluation study of the effects of a patient education program on patient anxiety. He uses one wing in a hospital for the experiment and compares the results with a similar group of patients in a similar wing in another hospital. Which design would be best?
outcomes
in the systems approach to evaluation research, the accomplishment of broader range societal goals such as better justice or safety is called
evaluation researchers encounter more logistical problems than other researchers because evaluation research
true
evaluation research differs from other types of applied research in that the data is used for decision making regarding a specific program rather than to represent the findings of theoretical interest
researchers working for the host agency being evaluated
Which practice would likely cause relations between a researcher and the host agency to deteriorate?
inputs
resources (people, dollars, buildings)
activities
what is being provided (services or products)
results
outputs (list of things that happened because of the activities)
outcomes
broader (example: improve health, higher education) things you want to happen in society
unintended consequences
most policies have
feedback
inputs, results, outcomes (make changes/adjustments)
explanatory (purpose of research)
evaluation research is
Population
The entire group of individuals or instances about whom we hope to learn.
Sampling
The process of selecting a subset of individuals from a population to estimate characteristics of the whole population.
Sample
A subset of the population selected for analysis.
Sampling Frame
A complete list of the population or universe under investigation.
Probability Samples
Samples chosen using a method in which every individual has a known, nonzero probability of being selected.
Nonprobability Samples
Samples that do not give all individuals in the population a chance to be selected.
Representativeness
The degree to which a sample accurately reflects the characteristics of the population from which it was drawn.
Simple Random Sampling
A sampling method where each member of the population has an equal chance of being selected.
Stratified Random Sampling
Dividing population into homogeneous (usually based on demographics) subgroups and then taking a simple random sample in each sub group
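As a sketch (the helper name and per-stratum sample size are illustrative assumptions), stratified random sampling groups units into strata and draws a simple random sample from each:

```python
import random

def stratified_sample(population, strata_key, n_per_stratum, seed=None):
    """Divide the population into homogeneous subgroups (strata),
    then take a simple random sample within each stratum."""
    rng = random.Random(seed)
    strata = {}
    for unit in population:
        strata.setdefault(strata_key(unit), []).append(unit)
    sample = []
    for units in strata.values():
        sample.extend(rng.sample(units, min(n_per_stratum, len(units))))
    return sample

# e.g. stratify 100 people by sex and sample 5 from each stratum
people = [("F", i) for i in range(50)] + [("M", i) for i in range(50)]
sample = stratified_sample(people, lambda p: p[0], 5, seed=0)
```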
Cluster Sampling
Divide population into clusters, like based on geography, then randomly sample the clusters
Systematic Sampling
A sampling method where a starting point is selected at random and subsequent members are chosen at regular intervals.
Every nth unit of analysis is included in the sample; random start and sampling interval
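The random start plus fixed interval can be sketched as follows, assuming the population is an indexable list:

```python
import random

def systematic_sample(population, interval, seed=None):
    """Choose a random start in [0, interval), then include
    every nth (interval-th) unit thereafter."""
    rng = random.Random(seed)
    start = rng.randrange(interval)
    return population[start::interval]

sample = systematic_sample(list(range(100)), interval=10, seed=7)
```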
Sample Size
The number of observations in a sample, which can affect the accuracy and reliability of results.
Snowball Sampling
A nonprobability sampling method used to locate hard-to-find groups of people or whereby each person interviewed may be asked to suggest additional people for interviewing
Quota Sampling
A nonprobability sampling method where the researcher ensures equal representation of certain characteristics in the sample.
when the researcher deliberately sets the percentages or strata within the sample to ensure the inclusion of a particular segment of the population
Anticipated Subclassification
The expected categories or classifications within the variables in a study.
Confidence Level
The probability that the sample accurately reflects the population within a specific margin of error.
Confidence Interval
A range of values that is likely to contain the population parameter with a certain level of confidence.
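For a sample proportion, the interval is p-hat ± z·sqrt(p-hat(1 − p-hat)/n). A minimal sketch, assuming z = 1.96 for a 95% confidence level:

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Confidence interval for a sample proportion;
    z = 1.96 corresponds to a 95% confidence level."""
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# e.g. 52% support observed among 1,000 respondents
low, high = proportion_ci(0.52, n=1000)
```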
sample size
____ does NOT depend on the size of your population
simple random sampling
What is the simplest form of sampling?
True
Snowball sampling is a type of strategy employed particularly in exploratory studies of little-known or hard-to-obtain subjects.
True
The size of the sample is statistically determined by the size of the sampling error to be tolerated rather than the total size of the population
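The formula behind this card: required sample size depends on the tolerated margin of error and confidence level, not on the population size. A sketch using the worst-case proportion p = 0.5 (an assumption for illustration):

```python
import math

def required_sample_size(margin_of_error, z=1.96, p=0.5):
    """n = z^2 * p(1-p) / e^2 -- note that the population size
    never enters the formula."""
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

# a 3-point margin of error at 95% confidence
n = required_sample_size(0.03)
```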
quota sampling
What is one type of nonprobability sampling method?
purposive samples (can also mean judgment)
In ______, the selection of samples is based on the researcher’s skill, judgment, and needs.
the size of the population
The choice of sample size does NOT depend on
simple random samples
Which sample does each element of the population have an equal probability of being selected?
Sampling
______ is a procedure used in research by which a select subunit of a population is studied in order to analyze the entire population.
sampling frame
The resource used in the selection of a sample (Ex. phone book, college admission records) is
the sampling frame include all members of the population
If the sample is to be representative of the population, it is essential that
simple random, stratified random, cluster, systematic, multistage
What are probability sampling methods?