Content Analysis
the quantitative study of human communication; involves taking observed items or units and placing them into defined categories
Uses for Content Analysis
It's an efficient way to analyze large amounts of data, provides context to content, and makes a connection between experiments and content analysis
Myths of content analysis
content analysis is easy
The term “content analysis” applies to all examinations of message content
anyone can do content analysis and it doesn't take any special preparation
it's for academic use only
Goals of Content Analysis
Generality (theoretical relevance), description (what is the problem/phenomenon?), and explanation (what inferences can we make about the people/sources that created the material?)
Content Analysis is used to
compare message prevalence, flow, or dominance over time
compare message content and real life
analyze message creators
Content Analysis Steps
Develop a proposition to test
Review the literature
Develop hypotheses and/or research questions
Use previous measures or adjust/adapt/create coding instructions/classification systems
Define your population, sampling units
Code Messages
intercoder reliability: the level of agreement among coders
Cohen's Kappa (nominal)
Scott's Pi (two coders)
Krippendorff's alpha (two or more coders, any level of measurement)
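As a rough sketch, Cohen's Kappa for two coders can be computed by hand: observed agreement corrected for the agreement expected by chance. The coder labels below are hypothetical.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders on nominal codes, corrected for chance."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement: probability both coders pick the same category at random
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(coder_a) | set(coder_b))
    return (observed - expected) / (1 - expected)

a = ["pos", "pos", "neg", "neg", "pos", "neg"]
b = ["pos", "neg", "neg", "neg", "pos", "neg"]
print(round(cohens_kappa(a, b), 3))  # 0.667
```

Kappa of 1.0 means perfect agreement; values near 0 mean agreement no better than chance.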
Analyze
Interpret
Strengths of Content Analysis
Experiments: causal mechanisms can be determined
Surveys: causality determination is not absolute; working with perceptions
Content Analysis: hard to determine source motivations or causal effects on human behavior
Intercoder/interrater Reliability
a way to measure the extent to which two or more independent coders agree on the same coding decisions when analyzing the characteristics of messages. This ensures that the coding scheme is not limited to a single individual's opinion or idea (enhancing reliability and trustworthiness)
Threats to reliability (in content analysis)
Time and resource constraints
Sampling
selecting events (often people) from a population
Population
universe of events
Confidence Interval
a range of values from the sample statistic that is likely to include the population parameter
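A minimal sketch of a 95% confidence interval for a sample mean, using the normal approximation from Python's standard library; the scores are made-up example data.

```python
from statistics import mean, stdev, NormalDist
from math import sqrt

def confidence_interval(sample, level=0.95):
    """Normal-approximation CI for the population mean (large samples)."""
    z = NormalDist().inv_cdf((1 + level) / 2)  # ~1.96 for a 95% level
    m = mean(sample)
    margin = z * stdev(sample) / sqrt(len(sample))
    return (m - margin, m + margin)

scores = [72, 85, 78, 90, 66, 81, 75, 88, 79, 83]  # hypothetical data
low, high = confidence_interval(scores)
print(f"95% CI: ({low:.1f}, {high:.1f})")
```

The interval is centered on the sample mean; a wider interval reflects more sampling uncertainty.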
Probability sampling
Each event in the population has an equal chance of being selected. The first two methods below require a sampling frame
Stratified Random Sample
Sampling in a way that represents known proportions within a population (ex. race, gender, age, etc.)
Random Sampling
Each event (person) in the population has an equal chance of being selected
Cluster Sampling
requires moving through different stages within a sample (ex. school district → elementary school → first-grade class)
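The probability methods above can be sketched with Python's `random` module; the population and the gender stratum are invented for illustration.

```python
import random

random.seed(1)  # reproducible illustration
population = [{"id": i, "gender": "F" if i % 2 else "M"} for i in range(100)]

# Simple random sample: every member has an equal chance of selection
simple = random.sample(population, 10)

# Stratified random sample: draw separately within each known stratum
strata = {"F": [p for p in population if p["gender"] == "F"],
          "M": [p for p in population if p["gender"] == "M"]}
stratified = [p for group in strata.values() for p in random.sample(group, 5)]

print(len(simple), len(stratified))  # 10 10
```

The stratified sample guarantees 5 of each gender, while the simple random sample may not.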
Nonrandom sampling (nonprobability)
Simple convenience sampling (whoever is available to you), ex. volunteer sampling, exclusion/inclusion criteria
Quota Sampling
nonrandom version of stratified sampling
Purposive/known group sampling
groups that possess some known characteristic
Snowball Sampling
asking participants to help recruit additional participants
Problems with Random Samples
sometimes impossible
requires resources
definition of population
Problems with nonrandom sampling
greater bias
limits conclusions
not representative
Distribution
a way of organizing data to show how frequently each value occurs
Distribution helps us understand
whether data are normally distributed, whether results can be generalized, and whether results are due to chance
Standard Normal Curve
a bell-shaped, symmetric distribution where the mean=0, standard deviation=1 (Predictable percentages of scores fall within 1, 2, and 3 standard deviations (68%, 95%, 99.7%))
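The 68/95/99.7 percentages can be checked directly with `statistics.NormalDist` from the Python standard library:

```python
from statistics import NormalDist

z = NormalDist()  # standard normal: mean 0, standard deviation 1
for k in (1, 2, 3):
    share = z.cdf(k) - z.cdf(-k)  # area within ±k standard deviations
    print(f"within ±{k} SD: {share:.1%}")
```

This prints roughly 68.3%, 95.4%, and 99.7%, matching the empirical rule.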
Kurtosis
refers to the “peakedness” of a distribution
Leptokurtic
tall and thin
Platykurtic
flat and wide
Mesokurtic
Normal Kurtosis (the standard shape)
Skewness
the asymmetry in the distribution
Positive Skew
Tail is on the right
Negative Skew
Tail is on the left
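A quick sketch of how the sign of skewness follows the tail direction; the income-like numbers are hypothetical.

```python
from statistics import mean, pstdev

def skewness(data):
    """Sample skewness: positive when the long tail is on the right."""
    m, s = mean(data), pstdev(data)
    return sum(((x - m) / s) ** 3 for x in data) / len(data)

# Most values low, one very high outlier: long right tail
right_skewed = [20, 22, 25, 24, 23, 21, 90]
print(skewness(right_skewed) > 0)  # True → positive skew
```

Mirroring the data (negating every value) flips the tail to the left and the sign to negative.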
Probability
helps us determine how likely it is that an observed result happened by chance
the lower the probability
the more likely the effect is real
p < .05
we accept less than a 5% chance the result is due to randomness
We use distributions to
understand where a score falls and how probable that score is (for example, scores in the tails of the distribution are less probable and may indicate a significant result)
Significance Level
the threshold for deciding whether an effect is real
0.05
we accept a 5% chance of being wrong if we reject the null hypothesis
Critical Region
The part of the distribution where, if a statistic falls there, we reject the null hypothesis (it marks the most extreme values)
Critical Value
the boundary score that separates the critical region from the rest
depends on the type of test and the alpha level
if your test statistic is more extreme than the critical value, you reject the null hypothesis
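For a two-tailed z-test at alpha = .05, the critical value is about 1.96; the test statistic below is a made-up example.

```python
from statistics import NormalDist

alpha = 0.05
# Two-tailed test: split alpha between both tails of the distribution
critical = NormalDist().inv_cdf(1 - alpha / 2)
print(round(critical, 2))  # 1.96

test_statistic = 2.3  # hypothetical z score from a study
reject_null = abs(test_statistic) > critical
print(reject_null)  # True: 2.3 falls in the critical region
```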
Type I error (a error)
rejecting a true null hypothesis
saying there’s an effect when there isn’t
controlled by the alpha (usually 0.05)
Type II Error (B error)
failing to reject a false null hypothesis
saying there is no effect when there actually is one
can be reduced with larger sample sizes or stronger experimental design
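A small simulation showing why alpha controls the Type I error rate: when the null hypothesis is true, about 5% of test statistics still land in the critical region by chance. The setup is illustrative, not from the source.

```python
import random
from statistics import NormalDist

random.seed(42)  # reproducible illustration
alpha = 0.05
critical = NormalDist().inv_cdf(1 - alpha / 2)

# Draw z statistics under a true null (standard normal) and count false rejections
trials = 10_000
false_rejections = sum(abs(random.gauss(0, 1)) > critical for _ in range(trials))
print(false_rejections / trials)  # close to 0.05
```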
When to use Chi Square Test
when comparing frequencies or proportions between categorical variables (often used in contingency tables, ex. gender vs. voting preference)
Goal of Chi Square Test
to test whether there is a significant association or independence between two categorical variables
Chi Square Assumptions
observations are independent
categories are mutually exclusive
expected frequency in each cell is at least 5 for validity
Chi Square Limitations
cannot be used with small sample sizes (due to expected count assumptions)
only detects association, not causal relationships
assumes nominal level data; can't be used for ordinal or interval data without simplification
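The Pearson chi-square statistic can be computed by hand from a contingency table; the gender-by-vote counts below are invented for illustration.

```python
def chi_square(table):
    """Pearson chi-square for a contingency table given as a list of rows."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = gender, columns = candidate preference
table = [[30, 20],
         [20, 30]]
print(chi_square(table))  # 4.0, which exceeds the critical value 3.84 (df=1, alpha=.05)
```

Since 4.0 > 3.84, we would reject the null hypothesis of independence for this example.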
Independent Samples T-Test
When comparing the means of two independent groups (ex. men vs women on test scores)
Goal of Independent T-Test
to determine if the difference in means is statistically significant
Independent T-Test Assumptions
two groups are independent
dependent variable is interval or ratio scale
approximately normally distributed data
homogeneity of variances (Levene's test can check this)
Independent T-Test Limitations
Sensitive to violations of normality or unequal variances
assumes random sampling
not suitable for more than two groups (ANOVA needed)
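A sketch of the pooled-variance independent-samples t statistic; the two groups of scores are hypothetical.

```python
from statistics import mean, variance
from math import sqrt

def independent_t(group1, group2):
    """Pooled-variance t statistic for two independent groups."""
    n1, n2 = len(group1), len(group2)
    # Pooling the variances assumes homogeneity of variances
    pooled = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    return (mean(group1) - mean(group2)) / sqrt(pooled * (1 / n1 + 1 / n2))

men = [78, 82, 75, 80, 85]      # hypothetical test scores
women = [88, 84, 90, 86, 82]
print(round(independent_t(men, women), 2))  # -2.71
```

The resulting t would then be compared against a critical t value with n1 + n2 − 2 degrees of freedom.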
Dependent (paired) Samples T-Test
compares the means of the same group at two time points (ex. pre-test and post-test)
Goal of Dependent Samples T-Test
to assess if the mean difference within the same group is statistically significant
Dependent Samples T-Test Assumptions
paired observations
differences are normally distributed
dependent variable is interval or ratio scale
Dependent Samples T-Test Limitations
Only compares two time points or conditions
sensitive to outliers in the difference scores
requires that the measurement conditions are equivalent
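The paired t-test works on the within-pair difference scores, as a sketch with hypothetical pre/post data shows:

```python
from statistics import mean, stdev
from math import sqrt

def paired_t(pre, post):
    """t statistic computed on the within-pair difference scores."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

pre = [60, 65, 70, 55, 62]   # hypothetical pre-test scores
post = [68, 70, 74, 60, 66]  # same participants after the treatment
print(round(paired_t(pre, post), 2))  # 7.08
```

Because each participant serves as their own control, the paired test removes between-person variability from the comparison.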