Why might a one-tailed test be used for ERP analysis?
if the hypothesis specifies an N1, which is by definition a negative-going component, it only makes sense to test for an effect in the negative direction; a positive deflection in that window would be something else entirely
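A minimal sketch of that one-tailed logic, assuming `n1_amps` holds each subject's mean amplitude (in µV) in the N1 window; the variable names and values are made up for illustration:

```python
# Hypothetical one-tailed test: is the mean N1 amplitude reliably negative?
import numpy as np
from scipy import stats

# made-up single-subject mean amplitudes (µV) in the N1 window
n1_amps = np.array([-2.1, -1.4, -3.0, -0.8, -2.6, -1.9, -2.2, -1.1])

# alternative='less' tests only the predicted (negative) direction
t_stat, p_val = stats.ttest_1samp(n1_amps, popmean=0.0, alternative='less')
print(f"t = {t_stat:.2f}, one-tailed p = {p_val:.3f}")
```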
statistical power
the probability that the study will yield a significant result when the research hypothesis is true (i.e., that a real effect will be detected)
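A hedged illustration of how power trades off against sample size, using statsmodels' t-test power calculator; the effect size and alpha here are arbitrary example values:

```python
# How many subjects are needed for 80% power? (illustrative numbers only)
from statsmodels.stats.power import TTestPower

# medium within-subject effect (d = 0.5) tested at alpha = .05
n_needed = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"subjects needed: {n_needed:.1f}")
```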
Familywise error rate
the probability that a family of tests contains at least one Type I error
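The usual formula behind that definition, for m independent tests each run at level alpha, is FWER = 1 − (1 − alpha)^m; a quick illustrative computation:

```python
# familywise error rate for m independent tests at a per-test alpha of .05
alpha, m = 0.05, 20            # e.g., 20 electrode/time-window combinations
fwer = 1 - (1 - alpha) ** m
print(f"FWER = {fwer:.2f}")    # ~0.64: a ~64% chance of at least one false positive
```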
What are the biggest statistical issues in ERP analysis?
inflated Type I error rates from running many tests and from cherry-picking which effects to analyze
What is the best indicator of an effect being real?
replication of results
What can help improve the power and validity of ERP statistics?
collecting clean data with larger effects, using a simple analysis approach, running follow-up studies to replicate findings, and consulting a statistician
data-driven approach (cherry picking)
an issue somewhat specific to ERP analysis in which experimenters inspect the data to pick out a result and then run statistics on it; this inflates the error rate many times over without any correction
How can the problem of multiple implicit comparisons be solved?
1. picking the electrode(s) and time window a priori
2. applying multiple-comparison corrections (see the sketch after this list)
3. using measures that are less dependent on the exact window (peak amplitude, signed area)
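For point 2, a minimal sketch of correcting a family of p-values with statsmodels; the `pvals` list is made up for illustration:

```python
# correcting a family of p-values (e.g., one test per electrode) - illustrative values
from statsmodels.stats.multitest import multipletests

pvals = [0.003, 0.04, 0.20, 0.012, 0.06]

# 'bonferroni' controls the familywise error rate; 'fdr_bh' controls the false discovery rate
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method='bonferroni')
print(reject)   # which tests survive the correction
print(p_adj)    # adjusted p-values
```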
How can average latency be measured when some subjects don't show a clear component, and why does this work?
use the jackknife approach: each sub-average leaves out only one subject, so the missing or noisy data has less impact on the measurement and statistical power is higher
Jackknife approach
making sub-grand averages that each leave out one subject, then taking the measurements and running the statistics on those sub-averages; a statistically sound way to improve power
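A minimal numpy sketch of building the jackknife sub-grand averages, assuming `erps` is a subjects × timepoints array of single-subject averaged waveforms (the array here is a random placeholder):

```python
# jackknife sub-grand averages: each one leaves out a different subject
import numpy as np

rng = np.random.default_rng(0)
erps = rng.normal(size=(20, 500))      # placeholder: 20 subjects x 500 timepoints

n_subjects = erps.shape[0]
subaverages = np.array([
    erps[np.arange(n_subjects) != i].mean(axis=0)   # grand average without subject i
    for i in range(n_subjects)
])

# latency (or any other measure) is then scored on each sub-average; the scores
# have artificially low variance, so the usual t/F values must be rescaled
# (e.g., the Miller, Patterson, & Ulrich correction) before testing
```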
What are Luck's main rules for experimental design?
1. peaks and components are not the same thing
2. you can't know the timing of underlying ERP components by looking at a single ERP waveform
3. an effect during the time period of a peak may not reflect a modulation of the peak's underlying component
4. peak amplitude/latency differences may not indicate differences in the actual component
5. averaged ERP waveforms don't accurately represent single trial ERP waveforms
How can difference waves be misleading?
when two waveforms mostly overlap but are slightly offset, the difference wave exaggerates the regions where they don't overlap and can create two peaks that don't correspond to any real component (see the sketch below)
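An illustrative numpy sketch of this artifact: two made-up waveforms that differ only by a small latency shift yield a biphasic difference wave whose two peaks do not exist in either original waveform:

```python
# two similar waveforms with a small latency shift -> biphasic difference wave
import numpy as np

t = np.linspace(0, 0.6, 601)   # 0-600 ms, in seconds

def component(latency):
    """Gaussian-shaped stand-in for an ERP component peaking at `latency` seconds."""
    return np.exp(-((t - latency) ** 2) / (2 * 0.05 ** 2))

cond_a = component(0.300)      # component peaks at 300 ms
cond_b = component(0.320)      # same component, shifted to 320 ms

diff = cond_a - cond_b         # positive peak followed by a negative peak,
                               # even though neither condition has two components
print(t[np.argmax(diff)], t[np.argmin(diff)])
```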
What are common confounds in ERP studies?
- sensory (differences in displays)
- motor (presence or absence of a motor response)
- arousal
- overlap of ERPs from sequentially presented stimuli
- number of stimuli/trials
What are some ways to avoid misinterpreting ERPs?
- focus on only a couple components
- use well-studied experimental manipulations
- focus on large components
- isolate components with difference waves (but look at regular ERPs too)
- focus on easily isolated components
What are rules of thumb for number of trials based on component size?
- large components: 30-60 trials
- medium components: 150-200 trials
- small components: 400-800 trials
- if that many trials would cause fatigue, run more participants instead