Research Methods in Nutrition Exam 4

75 Terms

1
New cards

The survey process involves

setting objectives, selecting respondents, determining best delivery method, developing and pretesting the questionnaire, ensuring validity and reliability, and collecting and analyzing results

2
New cards

What is NHANES?

the National Health and Nutrition Examination Survey, a program designed to assess the health and nutritional status of adults and children in the United States.

3
New cards

When are surveys useful?

When the needed information is not readily available from other sources, when the target population can be clearly identified (and a representative sample of that population obtained), when the research question can be clearly defined, and when anonymous feedback needs to be gathered.

4
New cards

Paper survey

Delivered to respondents via the mail, or handed to individuals to complete

5
New cards

Electronic survey

Delivered to respondents and completed using computers, tablets, and/or cell phones (least expensive)

6
New cards

Oral survey

Completed with the help of an interviewer either on the phone or in person (most expensive)

7
New cards

How do you pick which type of survey?

Budget; characteristics of respondents (what devices they use, where they're located, language/literacy level, willingness to participate, etc.); complexity/length; and sensitivity of the information being requested

8
New cards

What survey method is most accurate?

Electronic methods

9
New cards

Retrospective survey

Administered at one point in time, but it asks respondents to report past behaviors, beliefs, events, etc.

10
New cards

Cross-sectional survey

Administered at one point in time; useful to compare groups at a single point in time

11
New cards

Longitudinal survey

Administered more than once

12
New cards

Repeated cross-sectional survey

Asks the same questions at several points in time to a new group of respondents each time

13
New cards

Panel survey

Asks the same questions at different points in time but to the SAME respondents

14
New cards

Cohort survey

Completed by the same respondents at numerous points over a period of time (some questions change from time to time)

15
New cards

Simple random sampling

selecting individuals on a numbered list using either a table of random numbers or an online random sample generator
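
As a minimal sketch of this idea (not part of the original card; the list of 500 respondent IDs is hypothetical), Python's standard library can play the role of an online random sample generator:

    import random

    # Hypothetical numbered list of 500 potential respondents (IDs 1-500)
    population_ids = list(range(1, 501))

    # Simple random sample of 50 respondents, drawn without replacement
    sample_ids = random.sample(population_ids, k=50)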

16
New cards

Systematic random sampling

Selecting every nth individual from a list after a random start point
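
A rough illustration of the same procedure in Python, assuming a hypothetical sampling frame and an interval of n = 10:

    import random

    population_ids = list(range(1, 501))  # hypothetical sampling frame
    n = 10                                # sampling interval
    start = random.randint(0, n - 1)      # random start point within the first interval

    # Every nth individual from the random start onward
    systematic_sample = population_ids[start::n]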

17
New cards

Stratified random sampling

dividing the accessible population into groups or strata and then using simple random sampling to select from each group
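
As a hedged sketch (the strata names and sizes below are made up), stratified sampling can be expressed as simple random sampling applied within each group:

    import random

    # Hypothetical strata: the accessible population divided into groups
    strata = {
        "undergraduate": list(range(1, 301)),
        "graduate": list(range(301, 401)),
        "faculty": list(range(401, 451)),
    }

    # Simple random sample of 10 respondents from each stratum
    stratified_sample = {
        group: random.sample(members, k=10) for group, members in strata.items()
    }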

18
New cards

Cluster random sampling

Dividing the population into clusters, such as geographic clusters, randomly picking some of the clusters, and then randomly sampling within each of those clusters
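
A minimal sketch, assuming four hypothetical geographic clusters: pick clusters at random first, then sample individuals within the chosen clusters:

    import random

    # Hypothetical geographic clusters of respondent IDs
    clusters = {
        "county_A": [1, 2, 3, 4, 5, 6],
        "county_B": [7, 8, 9, 10, 11, 12],
        "county_C": [13, 14, 15, 16, 17, 18],
        "county_D": [19, 20, 21, 22, 23, 24],
    }

    # Randomly pick 2 clusters, then randomly sample 3 respondents within each
    chosen_clusters = random.sample(list(clusters), k=2)
    cluster_sample = {name: random.sample(clusters[name], k=3) for name in chosen_clusters}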

19
New cards

Multistage cluster sampling

using cluster sampling that is carried out in stages using smaller and smaller sampling units at each stage

20
New cards

Convenience sampling

when members of the population are chosen simply because they are easy to reach and the researcher is comfortable asking them to complete a survey

21
New cards

quota sampling

Determining the groups within the population and how many respondents should be drawn from each group; respondents within each group are then chosen using convenience sampling

22
New cards

Purposive sampling

when researchers choose respondents based on whether or not they are good representatives of the population

23
New cards

network/snowball sampling

when researchers find a few good respondents and then ask them to direct the researchers to other potential respondents

24
New cards

What is a sampling error?

When a characteristic from your sample does not match the population being sampled

25
New cards

What is the best way to control for sampling error?

Make sure you have a large enough sample size (increased sample size=reduced sampling error)
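
A quick way to see this relationship is a small simulation (hypothetical population values, standard-library Python only): the spread of sample means shrinks as the sample size grows, roughly in proportion to 1/sqrt(n).

    import random
    import statistics

    random.seed(1)
    # Hypothetical population: 100,000 values with mean 100 and SD 15
    population = [random.gauss(100, 15) for _ in range(100_000)]

    # Variability of the sample mean (a sampling-error measure) at different sample sizes
    for n in (25, 100, 400):
        sample_means = [statistics.mean(random.sample(population, n)) for _ in range(200)]
        print(n, round(statistics.stdev(sample_means), 2))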

26
New cards

What is sample bias?

when members of the sample are systematically different from the population (sample bias does NOT improve by increasing sample size)

27
New cards

What is coverage bias?

When a segment of the population is completely excluded from the sample (e.g., certain geographic areas)

28
New cards

sample selection bias

When some groups in the population have a higher or lower chance of being selected; probability sampling reduces the risk of sample selection bias

29
New cards

Nonresponse bias

when the percentage of people who do not respond to the survey varies among the groups in the sample

30
New cards

What is a cover letter?

Introduces the survey to the respondent and, when done correctly, increases the response rate

31
New cards

Cover letters include

What the study is about/why it’s important, why the respondent is important/how they were selected, voluntary nature of participation, promise of confidentiality, incentive (if used), estimate of time, and contact information

32
New cards

What are the steps in developing and pretesting a questionnaire?

Develop the questions/responses, sequence the questions, lay out the questions, pretest the questionnaire for face validity, use an expert panel to assess content validity, and conduct a pilot test

33
New cards

What should you do before writing questionnaires?

Check if there are valid and reliable questionnaires that have already been developed that could be used or modified.

34
New cards

What could you measure when writing questions?

Knowledge, attitudes, opinions, personal attributes, or behaviors

35
New cards

What is an open-ended question?

Allows respondents to answer in their own words; not restrictive; may yield more and richer info; more time-consuming for the researcher than close-ended questions (answers need to be read, interpreted, and coded); and carries an increased risk of researcher bias

36
New cards

What is a close-ended question?

Fixed answers, harder to develop questions but quicker for respondent to answer, easier for researcher to enter answers into software and analyze, and most common form of survey question

37
New cards

How should you determine the sequence of questions?

Start with easy/most important questions, organize similar questions together and use section headings as needed, go from general to more specific questions, easy to difficult (when testing knowledge), and put questions on sensitive issues and demographic questions at the end

38
New cards

What is a pretest?

To test your questionnaire on individuals similar to your respondent population to get feedback on the questionnaire

39
New cards

What is pilot testing?

A full-dress rehearsal of the survey in actual field conditions

40
New cards

What is a cognitive interview?

A qualitative pretesting method used to evaluate survey questions. Cognitive interviews help ensure that a survey instrument is appropriate and clear for potential respondents.

41
New cards

What is content validity?

Evaluates whether the survey item comprehensively covers all aspects of the construct being measured (usually assessed by subject matter experts)

42
New cards

What is face validity?

A subjective assessment of whether a survey appears to measure what it is intended to measure

43
New cards

Why is face validity considered a weak form of validity?

It’s assessed subjectively without any systematic testing or statistical analyses, and is at risk for researcher bias

44
New cards

What is criterion validity?

Assesses how well survey results correlate with an external, established criterion

45
New cards

What is concurrent validity?

Refers to the degree to which a survey correlates with another previously established survey (comparing a new survey to a gold-standard survey)

46
New cards

What is predictive validity?

Evaluates how well a survey predicts a specific outcome.

47
New cards

What is construct validity?

An experimental demonstration that a survey is measuring the construct that it is intended to measure (can be difficult to assess but is extremely valuable)

48
New cards

What is factor analysis?

A statistical technique used to measure construct validity
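
One way this is commonly done in practice (a tool choice not specified on the card) is exploratory factor analysis in software; a minimal sketch with scikit-learn on hypothetical item responses:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Hypothetical responses: 100 respondents x 6 Likert-type items
    rng = np.random.default_rng(0)
    responses = rng.integers(1, 6, size=(100, 6)).astype(float)

    # Fit a 2-factor model; the loadings show how strongly each item relates to each factor
    fa = FactorAnalysis(n_components=2).fit(responses)
    loadings = fa.components_.T  # rows = items, columns = factors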

49
New cards

What does reliability measure?

Consistency and stability

50
New cards

What does internal consistency reliability measure?

It measures how well different items on a survey/questionnaire produce similar results (they are designed to measure the same construct)

51
New cards

What is cronbach’s alpha?

Used to measure internal consistency reliability. Its range is from 0 to 1; a value of 0.7 or higher is considered acceptable
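
A minimal numpy sketch of the usual Cronbach's alpha formula, alpha = k/(k - 1) * (1 - sum of item variances / variance of total scores), using hypothetical item scores:

    import numpy as np

    # Hypothetical responses: 5 respondents x 4 items measuring the same construct
    scores = np.array([
        [4, 5, 4, 4],
        [3, 3, 4, 3],
        [5, 5, 5, 4],
        [2, 3, 2, 3],
        [4, 4, 5, 5],
    ])

    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores

    alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)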

52
New cards

What are some potential causes of low scores for Cronbach’s alpha?

Low number of items, lack of one-dimensionality, sample size, and poorly worded questions.

53
New cards

What is test-retest reliability?

Shows if a tool yields similar results when administered twice under the same conditions over a short period.

54
New cards

How is test-retest reliability measured?

It's measured by having the same respondents take the survey at two different times to see how consistent or stable their answers are (also known as stability reliability). Uses Pearson's r (interval data), Spearman's rho (ordinal data), and the intraclass correlation coefficient (ICC)
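
As a hedged illustration of the correlation step (the scores below are hypothetical), scipy.stats provides both coefficients named on the card:

    from scipy import stats

    # Hypothetical scores from the same respondents at two administrations
    time_1 = [12, 15, 9, 20, 17, 14, 11, 18]
    time_2 = [13, 14, 10, 19, 18, 15, 10, 17]

    pearson_r, _ = stats.pearsonr(time_1, time_2)      # interval data
    spearman_rho, _ = stats.spearmanr(time_1, time_2)  # ordinal data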

55
New cards

What is equivalence reliability?

Looks at whether measurements from two versions of a test or from two observers observing the same event are consistent.

56
New cards

What does parallel forms reliability measure?

Whether two versions of a survey/questionnaire are consistent (online vs paper)

57
New cards

What does inter-rater reliability measure?

The consistency of two or more raters who may be observing an event or coding answers to questions
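
For two raters assigning categorical codes, one common statistic (not named on the card) is Cohen's kappa; a minimal sketch with scikit-learn and hypothetical codes:

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical codes assigned by two raters to the same 8 open-ended answers
    rater_1 = ["pos", "neg", "pos", "neu", "pos", "neg", "neu", "pos"]
    rater_2 = ["pos", "neg", "pos", "pos", "pos", "neg", "neu", "pos"]

    kappa = cohen_kappa_score(rater_1, rater_2)  # 1.0 = perfect agreement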

58
New cards

Why is a power analysis done?

To calculate minimum sample size to detect differences/relationships between groups in a study
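
A minimal sketch of such a calculation using statsmodels, assuming a two-group comparison with a medium effect size (d = 0.5), alpha = 0.05, and power = 0.80 (these inputs are illustrative, not from the card):

    from statsmodels.stats.power import TTestIndPower

    # Minimum sample size per group for an independent-samples t-test
    n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)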

59
New cards

Why is the type I error set lower than the type II error?

Because science prioritizes avoiding false positives (concluding an effect exists when it does not) over false negatives, which are missed discoveries

60
New cards

What is a type I error?

Incorrectly rejecting a true null hypothesis (false positive)

61
New cards

What is a type II error?

Failing to reject a false null hypothesis (false negative)

62
New cards

What does the evidence analysis library (EAL) contain?

Systematic reviews (including evidence summary, conclusion, and grade) and evidence-based nutrition practice guidelines

63
New cards

Systematic reviews are the foundation for recommendations and practice/clinical guidelines.

True

64
New cards

What is step one in the evidence analysis process?

To formulate the evidence analysis question

65
New cards

What is step two in the evidence analysis process?

To gather and classify the evidence

66
New cards

What is step three in the evidence analysis process?

Critically appraise each article (risk of bias)

67
New cards

What is step four in the evidence analysis process?

To summarize evidence

68
New cards

What is step five in the evidence analysis process?

To write and grade the conclusion statement

69
New cards

What is step one of the EAL guideline development process?

To review the conclusion statements

70
New cards

What is step two of the EAL guideline development process?

To develop recommendation statements

71
New cards

What is step three of the EAL guideline development process?

To identify references not graded in the Academy's evidence analysis process

72
New cards

What are the most common forms of dissemination of academic information?

Posters, presentations, and publications

73
New cards

What does a call for abstracts include?

Detailed submission guidelines (including a written abstract), submission categories, and the deadline date; submitted abstracts are usually peer reviewed.

74
New cards

What do you have to consider when creating a quality poster?

Layout, color, text, and graphics

75
New cards

Intraclass correlation coefficient (ICC)

Descriptive statistic that measures the reliability of ratings or the similarity of data within clusters
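
As a closing sketch (hypothetical ratings; the one-way random-effects, single-rater form ICC(1,1) is assumed), the ICC can be computed from between-subject and within-subject mean squares:

    import numpy as np

    # Hypothetical ratings: 6 subjects each rated by 3 raters
    ratings = np.array([
        [9, 2, 5],
        [6, 1, 3],
        [8, 4, 6],
        [7, 1, 2],
        [10, 5, 6],
        [6, 2, 4],
    ], dtype=float)

    n, k = ratings.shape
    subject_means = ratings.mean(axis=1)
    grand_mean = ratings.mean()

    ms_between = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)          # between-subject mean square
    ms_within = ((ratings - subject_means[:, None]) ** 2).sum() / (n * (k - 1))   # within-subject mean square

    # One-way random-effects, single-rater ICC(1,1)
    icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)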