The survey process involves
setting objectives, selecting respondents, determining best delivery method, developing and pretesting the questionnaire, ensuring validity and reliability, and collecting and analyzing results
What is NHANES?
the National Health and Nutrition Examination Survey, a program designed to assess the health and nutritional status of adults and children in the United States.
When are surveys useful?
To gather info that is not readily available from other sources, to clearly identify the target population (and get a representative sample of that population), to define the research questionnaire, and to gather anonymous feedback.
Paper survey
Delivered to respondents via the mail, or handed to individuals to complete
Electronic survey
Delivered to respondents and completed using computers, tablets, and/or cell phones (least expensive)
Oral survey
Completed with the help of an interviewer, either on the phone or in person (most expensive)
How do you pick which type of survey?
Budget, characteristics of respondents (what do they use, where they’re located, language/literacy level, willingness to participate, etc), complexity/length, or sensitivity of information being requested
What survey method is most accurate?
Electronic methods
Retrospective survey
Administered at one point in time, but it asks respondents to report past behaviors, beliefs, events, etc.
Cross-sectional survey
Administered at one point in time; useful to compare groups at a single point in time
Longitudinal survey
Administered more than once
Repeated cross-sectional survey
Asks the same questions at several points in time to a new group of respondents each time
Panel survey
Asks the same questions at different points in time but to the SAME respondents
Cohort survey
Completed by the same respondents at numerous points over a period of time (some questions change from time to time)
Simple random sampling
selecting individuals on a numbered list using either a table of random numbers or an online random sample generator
Systematic random sampling
Selecting every nth individual from a list after a random start point
Stratified random sampling
Dividing the accessible population into groups, or strata, and then using simple random sampling to select from each group
Cluster random sampling
Dividing the population into clusters, such as geographic clusters, randomly picking some of the clusters, and then randomly sampling within each of those clusters
Multistage cluster sampling
using cluster sampling that is carried out in stages using smaller and smaller sampling units at each stage
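The probability sampling methods above can be sketched in a few lines of Python. This is a minimal illustration with a made-up numbered population of 100; the two strata are invented for the example:

```python
import random

population = list(range(1, 101))  # a numbered list of 100 individuals
random.seed(0)                    # fixed seed so the sketch is repeatable

# Simple random sampling: draw 10 individuals with a random-number generator
simple = random.sample(population, 10)

# Systematic random sampling: every nth individual after a random start point
n = 10
start = random.randrange(n)
systematic = population[start::n]

# Stratified random sampling: divide into strata, then simple-random within each
strata = {"first_half": population[:50], "second_half": population[50:]}
stratified = [ind for group in strata.values()
              for ind in random.sample(group, 5)]
```

Cluster sampling would follow the same pattern, except that whole clusters (e.g., geographic areas) are randomly selected before sampling individuals within them.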
Convenience sampling
When members of the population are chosen simply because they are easy to reach and the researcher is comfortable asking them to complete a survey
Quota sampling
Determining the groups within the population, deciding how many respondents should be drawn from each group, and then filling each group's quota using convenience sampling
Purposive sampling
When researchers choose respondents based on whether or not they are good representatives of the population
Network/snowball sampling
When researchers find a few good respondents and then ask them to direct the researchers to other potential respondents
What is a sampling error?
When a characteristic from your sample does not match the population being sampled
What is the best way to control for sampling error?
Make sure you have a large enough sample size (increased sample size=reduced sampling error)
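The sample-size effect can be demonstrated with a quick simulation (synthetic population; the means and counts are arbitrary): repeatedly draw samples of two sizes and compare how far each sample mean lands from the true population mean, on average.

```python
import random
import statistics

random.seed(42)
# Synthetic population: 100,000 values around a mean of 50
population = [random.gauss(50, 10) for _ in range(100_000)]
pop_mean = statistics.fmean(population)

def mean_sampling_error(sample_size, trials=500):
    """Average absolute gap between sample mean and population mean."""
    errors = [abs(statistics.fmean(random.sample(population, sample_size)) - pop_mean)
              for _ in range(trials)]
    return statistics.fmean(errors)

small_n_error = mean_sampling_error(25)
large_n_error = mean_sampling_error(400)
# The larger sample consistently tracks the population mean more closely
```

Note that this only shrinks sampling error; as the next card says, a biased sample stays biased no matter how large it gets.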
What is sample bias?
When members of the sample are systematically different from the population (sample bias does NOT improve by increasing sample size)
What is coverage bias?
When a segment of the population is completely excluded from the sample (thinking about geographical areas)
sample selection bias
When some groups in the population have a higher or lower chance of being selected; probability sampling reduces the risk of sample selection bias
Nonresponse bias
when the percentage of people who do not respond to the survey varies among the groups in the sample
What is a cover letter?
Introduces the survey to the respondent and (when done correctly) increases the response rate
Cover letters include
What the study is about/why it’s important, why the respondent is important/how they were selected, voluntary nature of participation, promise of confidentiality, incentive (if used), estimate of time, and contact information
What are the steps in developing and pretesting a questionnaire?
Develop questions/responses, sequence the questions, layout the questions, pretest the questionnaire for face validity, use expert panel to assess content validity, do pilot test
What should you do before writing questionnaires?
Check if there are valid and reliable questionnaires that have already been developed that could be used or modified.
What could you measure when writing questions?
Knowledge, attitudes, opinions, personal attributes, or behaviors
What is an open-ended question?
Allows respondents to answer in their own words; not restrictive; may yield more and richer info; more time-consuming than close-ended for the researcher (answers need to be read, interpreted, and coded); and an increased risk of researcher bias
What is a close-ended question?
Fixed answers, harder to develop questions but quicker for respondent to answer, easier for researcher to enter answers into software and analyze, and most common form of survey question
How should you determine the sequence of questions?
Start with easy/most important questions, organize similar questions together and use section headings as needed, go from general to more specific questions, easy to difficult (when testing knowledge), and put questions on sensitive issues and demographic questions at the end
What is a pretest?
To test your questionnaire on individuals similar to your respondent population to get feedback on the questionnaire
What is pilot testing?
A full-dress rehearsal of the survey in actual field conditions
What is a cognitive interview?
A qualitative pretesting method used to evaluate survey questions; it ensures that a survey instrument is appropriate and clear for potential respondents.
What is content validity?
Evaluates whether the survey item comprehensively covers all aspects of the construct being measured (usually assessed by subject matter experts)
What is face validity?
A subjective assessment of whether a survey appears to measure what it is intended to measure
Why is face validity considered a weak form of validity?
It’s assessed subjectively without any systematic testing or statistical analyses, and is at risk for researcher bias
What is criterion validity?
Assesses how well survey results correlate with an external, established criterion
What is concurrent validity?
Refers to the degree to which a survey correlates with another previously established survey (comparing a new survey to a gold-standard survey)
What is predictive validity?
Evaluates how well a survey predicts a specific outcome.
What is construct validity?
An experimental demonstration that a survey is measuring the construct that it is intended to measure (can be difficult to assess but is extremely valuable)
What is factor analysis?
A statistical technique used to measure construct validity
What does reliability measure?
Consistency and stability
What does internal consistency reliability measure?
It measures how well different items on a survey/questionnaire produce similar results (they are designed to measure the same construct)
What is cronbach’s alpha?
Used to measure internal consistency reliability. The range is 0 to 1; a value of 0.7 or higher is considered acceptable
What are some potential causes of low scores for Cronbach’s alpha?
Low number of items, lack of one-dimensionality, sample size, and poorly worded questions.
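Cronbach's alpha can be computed directly from its definition: the number of items, the variance of each item, and the variance of the total scores. A sketch with invented responses (4 respondents, 5 items on a 1-5 scale):

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of per-respondent item-score lists."""
    k = len(scores[0])                                  # number of items
    items = list(zip(*scores))                          # scores grouped by item
    sum_item_vars = sum(variance(item) for item in items)
    total_var = variance([sum(row) for row in scores])  # variance of total scores
    return k / (k - 1) * (1 - sum_item_vars / total_var)

# Hypothetical 5-item survey answered by 4 respondents; items move together,
# so internal consistency should be high
responses = [[4, 5, 4, 5, 4],
             [2, 3, 2, 2, 3],
             [5, 5, 4, 5, 5],
             [3, 3, 3, 4, 3]]
alpha = cronbach_alpha(responses)   # well above the 0.7 threshold
```

With real data you would typically use a statistics package rather than hand-rolling this, but the formula makes clear why a low item count or poorly worded (inconsistent) items drag alpha down.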
What is test-retest reliability?
Shows if a tool yields similar results when administered twice under the same conditions over a short period.
How is test-retest reliability measured?
It’s measured by having the same respondents take the survey at two different times to see how consistent or stable their answers are (also known as stability reliability). Uses Pearson’s r (interval data), Spearman’s rho (ordinal data), and the intra-class correlation coefficient (ICC)
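For interval data, the two administrations are compared with Pearson's r. A minimal sketch with hypothetical scores from six respondents tested twice:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores from the same 6 respondents at two administrations
time1 = [12, 15, 9, 20, 14, 17]
time2 = [13, 14, 10, 19, 15, 18]
r = pearson_r(time1, time2)   # a high r means stable answers over time
```

Spearman's rho follows the same idea but correlates the ranks of the scores rather than the raw values.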
What is equivalence reliability?
Looks at whether measurements from two versions of a test or from two observers observing the same event are consistent.
What does parallel forms reliability measure?
Whether two versions of a survey/questionnaire are consistent (online vs paper)
What does inter-rater reliability measure?
The consistency of two or more raters who may be observing an event or coding answers to questions
Why is a power analysis done?
To calculate minimum sample size to detect differences/relationships between groups in a study
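A common version of this calculation, comparing the means of two groups, can be sketched using the normal approximation (the effect size, alpha, and power values here are conventional defaults, not from the source):

```python
from statistics import NormalDist
from math import ceil

def sample_size_two_groups(effect_size, alpha=0.05, power=0.80):
    """Per-group n for a two-sided, two-sample comparison of means
    (normal approximation to the power calculation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for the type I error rate
    z_beta = z.inv_cdf(power)            # quantile for power (1 - type II error rate)
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return ceil(n)

# Medium effect (Cohen's d = 0.5), alpha = 0.05, power = 0.80
n_per_group = sample_size_two_groups(0.5)
```

Smaller expected effects or stricter alpha/power settings drive the required sample size up, which is why the power analysis is done before recruiting respondents.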
Why is the type I error set lower than the type II error?
Because science prioritizes avoiding false positives—concluding an effect exists when it does not—over false negatives, which are missed discoveries
What is a type I error?
Incorrectly rejecting a true null hypothesis (false positive)
What is a type II error?
Failing to reject a false null hypothesis (false negative)
What does the evidence analysis library (EAL) contain?
Systematic reviews (including evidence summary, conclusion, and grade) and evidence-based nutrition practice guidelines
Systematic reviews are the foundation for recommendations and practice/clinical guidelines.
True
What is step one in the evidence analysis process?
To formulate the evidence analysis question
What is step two in the evidence analysis process?
To gather and classify the evidence
What is step three in the evidence analysis process?
Critically appraise each article (risk of bias)
What is step four in the evidence analysis process?
To summarize evidence
What is step five in the evidence analysis process?
To write and grade the conclusion statement
What is step one of the EAL guideline development process?
To review the conclusion statements
What is step two of the EAL guideline development process?
To develop recommendation statements
What is step three of the EAL guideline development process?
References not graded in the Academy’s evidence analysis process
What are the most common forms of dissemination of academic information?
Posters, presentations, and publications
What does a call for abstracts include?
Detailed submission guidelines (including a written abstract), submission categories, and the deadline date; the abstract is usually peer reviewed.
What do you have to consider when creating a quality poster?
Layout, color, text, and graphics
Intraclass correlation coefficient (ICC)
Descriptive statistic that measures the reliability of ratings or the similarity of data within clusters