Delimitation
are decisions made by an evaluator or researcher that ought to be mentioned because they help the evaluator identify the parameters and boundaries set for a study. Examples of delimitations include why some literature is not reviewed, certain populations are not studied, and certain methods are not used.
Evaluation
is a series of steps that evaluators use to assess a process or program to provide evidence and feedback about the program
Limitations
are phenomena the evaluator or researcher cannot control that place restrictions on methods and, ultimately, conclusions. Examples of possible limitations include time, the nature of data collection, instruments, samples, and analysis
Logic Model
take a variety of forms but generally depict aspects of a program such as inputs, outputs, and outcomes. Logic models offer a scaled-down, somewhat linear, visual depiction of programs. They can be created and used as an evaluation tool to facilitate evaluation design decisions that will influence the type of data and analysis available.
inputs: are the resources, contributions, and other investments that go into a program
activities: are the keystones of the program
outputs: are the activities, services, and products that will reach the participants of a program as a result of carefully leveraging resources through skillful planning
outcomes: are often stepwise and labeled short-term, intermediate, or long-term outcomes
short-term outcomes: sometimes described as impact, these are quantifiable changes in knowledge, skills, and access to resources that happen if planned activities are successfully carried out
intermediate outcomes: are measured in terms of changes in behaviors related to disease or health status
long-term outcomes: are measured in terms of fundamental changes in conditions leading to morbidity or mortality
Research
is an organized process in which a researcher uses the scientific method to generate new knowledge
Reliability
refers to the consistency, dependability, and stability of the measurement process
Validity
is the degree to which a test or assessment measures what it is intended to measure. Using a valid instrument increases the chances of measuring what was intended.
Variables
are operational forms of a construct. Researchers use variables to designate how the construct will be measured in designated scenarios
Unit of Analysis
is what or who is being studied or evaluated ( the individual, group, organization, or program)
Formative Evaluation
is a process that evaluators or researchers use to check an ongoing process of evaluation from planning through implementation phases.
Process Evaluation
is any combination of measures that occurs as a program is implemented to assure or improve the quality of performance or delivery
Summative Evaluation
is often associated with measures or judgements that enable the investigator to conclude impact and outcome evaluations
Impact Evaluation
refers to the immediate and observable effects of a program leading to the desired outcome
Outcome Evaluation
is focused on the ultimate goal, product, or policy and is often measured in terms of health status, morbidity, and mortality
Inputs
are the resources, contributions, and other investments that go into a program
Activities
are the keystones of the program
Outputs
are the activities, services, and products that will reach the participants of a program as a result of carefully leveraging resources through skillful planning
Outcomes
are often stepwise and labeled short-term, intermediate, or long-term outcomes.
Short-term outcomes
sometimes described as impact, these are quantifiable changes in knowledge, skills, and access to resources that happen if planned activities are successfully carried out
Intermediate outcomes
are measured in terms of changes in behaviors related to disease or health status
Long-term outcomes
are measured in terms of fundamental changes in conditions leading to morbidity or mortality
Purpose Statement
usually a sentence or two written with specificity and detail. It helps evaluators focus and guide efforts involved with data collection and analysis, and is used to guide the selection and/or creation of program goals
Goal
is usually long-term and represents a more global vision (e.g., to reduce morbidity or mortality)
Objectives
Define measurable strategies used to attain progress towards a goal
Evaluation Questions
precise questions that carefully align with the statement of purpose, goals, and objectives of a program, and follow an understanding of program operations, intentions, and stakeholders. They help to establish boundaries for the evaluation by stating what aspects of the program will be addressed
Process Questions
help the evaluator understand phenomena such as internal and external forces that affect program activities
Long-term evaluation questions
provide vital links between intervention activities, products, and services rendered, and changes in risk factors, morbidity or mortality
Well-developed evaluation questions
offer a guide for selecting appropriate data sources, which in turn, help to guide an effective analysis plan
Quantitative Methods
are focused on measuring things related to health education programs using numerical data to help describe, explain, or predict phenomena
Qualitative Methods
are descriptive with the aim of the researcher/ evaluator to discover meaning or insight
Probability Sampling Techniques
are those methods in which each member of the priority population has a known chance, or probability, of being selected.
Simple random sampling
an inclusive list of the priority population is used to randomly (such as with a list of random numbers) select a certain number of potential participants from the list
Systematic random sampling
an inclusive list of the priority population is used, and starting with a random number, every nth potential participant is selected ( such as every 14th participant)
Stratified random sampling
the sample is split into groups based on a variable of interest, and an equal number of potential participants from each group are selected randomly ( such as in a simple random sample)
Cluster sampling
is when naturally occurring groups (such as schools) are selected instead of individuals
Multistage cluster sampling
in several steps, groups are selected using cluster sampling (e.g., in a state, counties are selected at random, then schools within each county are selected at random)
Stratified multistage cluster sampling
in several steps, a variable of interest is used to split the sample, and then groups are randomly selected from this sample (e.g., in a state, counties are selected at random, then an equal number of elementary schools and secondary schools are randomly selected in each county)
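The probability sampling techniques above can be sketched with Python's standard `random` module. This is a minimal illustration under made-up assumptions: the population list, stratum split, and school clusters are all hypothetical, and the sample sizes are chosen only for the example.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical inclusive list of the priority population
population = [f"person_{i}" for i in range(1, 101)]

# Simple random sampling: draw participants at random from the full list
simple_sample = random.sample(population, 10)

# Systematic random sampling: random starting point, then every nth person
n = 14
start = random.randrange(n)
systematic_sample = population[start::n]

# Stratified random sampling: split the list on a variable of interest,
# then randomly draw an equal number from each stratum
strata = {"urban": population[:60], "rural": population[60:]}
stratified_sample = [p for group in strata.values()
                     for p in random.sample(group, 5)]

# Cluster sampling: randomly select naturally occurring groups (schools),
# then include every individual in the chosen clusters
clusters = {"school_A": population[:25], "school_B": population[25:50],
            "school_C": population[50:75], "school_D": population[75:]}
chosen = random.sample(sorted(clusters), 2)
cluster_sample = [p for c in chosen for p in clusters[c]]
```

Note how the unit of selection differs: the first three techniques select individuals, while cluster sampling selects whole groups.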
Non-probability sampling
not all units from the priority population have an equal chance of being selected, and thus their representativeness to the population is unknown
Non-probability sampling technique
Convenience
selection of individuals or groups who are available
Non-probability sampling technique
Purposive
researcher makes judgments about who to include in the sample based on study needs
Non-probability sampling technique
Quota
selecting individuals who have a certain characteristic up to a certain number (e.g., selecting 50 females from a worksite)
Non-probability sampling technique
Network Sampling (also called snowball sampling)
when respondents identify other potential participants who might have desired characteristics for the study
Pilot Test
is used to gain insight into whether a data collection instrument consistently measures whatever it should measure
Statement of purpose
used to clearly and succinctly define the goal of the research project
Elements of a purpose statement include the following:
research design (quantitative study) or method of inquiry (qualitative)
Variables (quantitative study) or phenomena under investigation (qualitative study)
The priority population
Research setting (e.g., university, worksite)
Research Question
is an interrogative statement that reflects the central questions the research study is designed to answer
Hypotheses
Quality research questions are developed in such a way that they can be translated into testable statements
Null hypothesis
is a hypothesis of skepticism, in which it is stated that there is no relationship between variables
Alternative hypothesis
it is stated that there is a relationship between variables.
May also be directional, for example, if the research team theorizes a program may reduce or increase the quantity of a targeted behavior
Quantitative research
inferential statistical tests are used to determine if differences or relationships exist between variables
Statistical Test
is a procedure into which data are fed in order to either reject or fail to reject a null hypothesis
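As a sketch of how a statistical test leads to rejecting or failing to reject a null hypothesis, here is a self-contained two-sample permutation test in Python. The scores, group names, and 0.05 significance level are made-up assumptions for illustration, and a permutation test is just one of many procedures that could play this role.

```python
import random
from statistics import mean

def permutation_test(group_a, group_b, n_permutations=5000, seed=0):
    """Permutation test of the null hypothesis that the two groups
    come from the same distribution (no relationship between the
    grouping variable and the outcome)."""
    rng = random.Random(seed)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # reassign scores to groups at random
        perm_a = pooled[:len(group_a)]
        perm_b = pooled[len(group_a):]
        if abs(mean(perm_a) - mean(perm_b)) >= observed:
            count += 1
    return count / n_permutations  # p-value

# Hypothetical post-program knowledge scores (made-up data)
control = [61, 58, 64, 60, 59, 63, 57, 62]
program = [70, 68, 73, 66, 71, 69, 74, 67]

p_value = permutation_test(control, program)
alpha = 0.05
decision = "reject H0" if p_value < alpha else "fail to reject H0"
```

A small p-value means a difference this large would rarely occur by chance alone, so the null hypothesis of no relationship is rejected; otherwise the test fails to reject it.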
Fundamental concepts for human subjects protection
Respect for persons (protection of individual autonomy and for those who have diminished autonomy)
Beneficence (protecting people from harm and working toward enhancing well-being)
Justice (equals should be treated equally)
Informed Consent
designed to allow participants to choose what will or will not happen to them, and it is signed by participants to indicate their choice
Informed consent includes the following information
nature and purpose of the program
any inherent risks or dangers associated with participation in the program
any possible discomfort that may be experienced from participation in the program
expected benefits of participation
alternative programs or procedures that would accomplish the same results
option of discontinuing participation at any time
Institutional Review Board (IRB)
function is to ensure physical and psychological protection of human subjects involved in research
It reviews, approves, and monitors biomedical and behavioral research involving humans
Evaluation Model
Attainment: focused on program objectives and the program goals; serves as standards for evaluation
Decision-making: based on four components designed to provide the user with the context, input, processes, and products with which to make decisions
Goal-free: not based on goals; the evaluator searches for all outcomes, including unintended positive and negative side effects
Evaluation
An evaluation of programs is used by health education specialists to determine the value or worth of the programs
Employing evaluation
procedures that are explicit, formal, and justifiable is desirable for program improvement
Data gathering instruments or scripts
are used for both quantitative and qualitative data collection
Common data collection strategies include: face-to-face surveys, telephone surveys, self-administered surveys, traditional mail-in surveys, and electronic platforms
Evaluation
is a process that health education specialists use to ensure that they are reaching the desired outcomes.
Program Logic Model
is a visual outline of how program components (e.g., resources, activities, and outputs) are linked to outcomes.
Purpose Statement (statement of purpose)
is used to identify in detail what health education specialists want to learn through an evaluation. It is usually a sentence or two written with specificity and detail, and it helps evaluators focus and guide efforts involved with data collection and analysis
Evaluation Questions
Questions that carefully align with the statement of purpose, goals, and objectives of a program, and follow an understanding of program operations, intentions, and stakeholders.
They help to establish boundaries for the evaluation by stating what aspects of the program will be addressed.
They are used to monitor and measure processes, activities, outputs, and expected outcomes.
Mixed method
means using a combination of different methods and strategies to examine evaluation questions from multiple perspectives and vantage points.
Attainment
focused on program objectives and the program goals; serves as standards for evaluation
Decision-making
based on four components designed to provide the user with the context, input, processes, and products with which to make decisions
Goal-free
not based on goals; the evaluator searches for all outcomes, including unintended positive and negative side effects
Naturalistic
focused on qualitative data; responsive information from participants in a program is used. Most concerned with narrative explaining "why" behavior did or did not change
Systems Analysis
based on efficiency; cost-benefit or cost-effectiveness analysis is used to quantify the effects of a program
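As a minimal arithmetic sketch of the cost-effectiveness idea behind the systems analysis model (all figures here are hypothetical, made up purely for illustration):

```python
# Hypothetical figures for illustration only
program_cost = 50_000            # total program cost in dollars
participants_with_outcome = 200  # e.g., participants who quit smoking

# Cost-effectiveness ratio: dollars spent per unit of outcome achieved
cost_per_outcome = program_cost / participants_with_outcome  # 250.0
```

Comparing this ratio across alternative programs is how the model quantifies which program achieves an outcome most efficiently.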
Utilization-focused
accomplished for and with a specific population
CDC’s six-step framework: Steps in Evaluation Practice
Step 1: Engage Stakeholders
Step 2: Describe the program
Step 3: Focus the evaluation design
Step 4: Gather credible evidence
Step 5: Justify conclusions
Step 6: Ensure use and share lessons learned
Standards for Effective Evaluation
Utility: serve the information needs of intended users
Feasibility: be realistic, prudent, diplomatic and frugal
Propriety: behave legally, ethically, and with due regard for the welfare of those involved and those affected
Accuracy: reveal and convey technically accurate information
Two categories of sampling techniques
probability & non-probability
Probability sampling techniques
are those methods in which each member of the priority population has a known chance, or probability of being selected
Random selection
which reduces the chance of sampling bias, is paramount in probability sampling
Non-probability sampling
frames are readily accessible to the research team. With non-probability samples, not all units from the priority population have an equal chance of being selected, and thus representativeness to the population is unknown