Statistics and Experimental Design for ERPs

15 Terms

1

Why might a one-tailed test be used for ERP analysis?

if your hypothesis specifies an N1, a component that is negative-going by definition, it only makes sense to test for a negative value in the measurement window; if the value came out positive, it would reflect something other than an N1 anyway
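As a minimal sketch of this logic (the amplitude values are made up, and the critical t comes from a standard t-table), a one-tailed one-sample test on single-subject N1 amplitudes could look like:

```python
import math
import statistics

# Hypothetical single-subject mean N1 amplitudes (µV) at a chosen
# electrode and time window, one value per participant (illustrative).
amplitudes = [-2.1, -1.4, -3.0, -0.6, -2.5, -1.8, -2.9, -1.1, -2.2, -1.7]

n = len(amplitudes)
mean = statistics.mean(amplitudes)
sd = statistics.stdev(amplitudes)
t = mean / (sd / math.sqrt(n))  # one-sample t statistic vs. 0

# One-tailed test: the N1 is negative by definition, so we only ask
# whether the mean is *below* zero. Critical t for alpha = .05, df = 9
# (from a standard t-table) is about -1.833.
T_CRIT = -1.833
significant = t < T_CRIT
print(f"t({n - 1}) = {t:.2f}, one-tailed significant: {significant}")
```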

2

statistical power

the probability that a study will yield a significant result when the research hypothesis is true (i.e., that a real effect will be detected)
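Power can be estimated by simulation: run many simulated experiments with a real effect built in and count how often significance is reached. All numbers below (true effect, noise SD, sample size, critical t) are illustrative assumptions:

```python
import math
import random
import statistics

random.seed(0)

# Monte Carlo sketch of statistical power: the fraction of simulated
# experiments reaching significance when a real effect exists.
TRUE_EFFECT = -1.0   # assumed true N1 difference in µV
NOISE_SD = 2.0       # assumed between-subject noise
N_SUBJECTS = 20
T_CRIT = -1.729      # one-tailed alpha = .05, df = 19 (t-table)

def one_experiment() -> bool:
    """Simulate one study and report whether it reached significance."""
    data = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_SUBJECTS)]
    t = statistics.mean(data) / (statistics.stdev(data) / math.sqrt(N_SUBJECTS))
    return t < T_CRIT

n_sims = 2000
power = sum(one_experiment() for _ in range(n_sims)) / n_sims
print(f"estimated power ≈ {power:.2f}")
```

With these particular numbers (effect size d = 0.5), power lands well below the often-recommended 0.8, which is the kind of result that motivates collecting cleaner data or more trials.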

3

Familywise error rate

the probability that a family of tests contains at least one Type I error
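For independent tests, this rate follows directly from the per-test alpha: with m tests each at level alpha, the chance of at least one false positive is 1 − (1 − alpha)^m. A quick sketch:

```python
# Familywise error rate under independence: with m tests each at
# alpha, P(at least one Type I error) = 1 - (1 - alpha)^m.
alpha = 0.05
for m in (1, 5, 20):
    fwer = 1 - (1 - alpha) ** m
    print(f"{m:2d} tests -> FWER = {fwer:.3f}")
```

At 20 uncorrected tests (e.g., a handful of electrodes crossed with a few time windows), the familywise error rate is already above 0.6.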

4

What are the biggest issues for stats of ERPs?

inflated Type I error rates, caused by running many statistical tests (explicit or implicit multiple comparisons) and by cherry-picking effects from the data

5

What is the best indicator of an effect being real?

replication of results

6

What can help improve power and validity of statistics for ERPs?

- collecting clean data with large effects
- using a simple analysis approach
- running follow-up experiments to replicate findings
- consulting a statistician

7

data-driven approach (cherry picking)

an issue somewhat specific to ERP analysis in which experimenters inspect the data first and then pick out a promising-looking result to run statistics on; this inflates the Type I error rate many times over without any correction being applied

8

How can issues of multiple implicit comparisons be solved?

1. picking an electrode and time window a priori
2. applying multiple comparison corrections
3. using measures that are more window independent (peak amplitude, signed area)
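Option 2 can be as simple as a Bonferroni adjustment, which divides alpha by the number of tests in the family. A sketch with made-up p-values:

```python
# Bonferroni correction sketch: test each p-value against alpha / m
# instead of alpha. The p-values below are made up for illustration.
p_values = [0.003, 0.020, 0.041, 0.200, 0.490]
alpha = 0.05
m = len(p_values)

bonferroni_alpha = alpha / m  # 0.05 / 5 = 0.01
significant = [p for p in p_values if p < bonferroni_alpha]
print(significant)  # only 0.003 survives correction
```

Note that two of these tests (p = .020 and p = .041) would have counted as significant uncorrected, which is exactly the inflation the correction guards against.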

9

How can the problem of measuring average latency when some subjects don't have a certain component be solved and why?

use the jackknife approach: each sub-grand average omits only one subject, so a subject's missing component has far less impact on the measurement, and statistical power is higher

10

Jackknife approach

making sub-grand averages that each omit one subject, measuring the component on each, and running the statistics on those measurements (with the appropriate correction to the test statistic) in order to improve statistical power with a well-validated method
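The mechanics can be sketched with toy data; the five-point waveforms below stand in for real single-subject ERPs, and peak latency is measured as the index of the most negative sample. (When conditions are compared, the t statistic computed on jackknife scores must also be divided by n − 1, which is not shown here.)

```python
import statistics

# Toy single-subject "waveforms" (5 time points each, µV) — invented
# data standing in for real ERPs.
subjects = [
    [0.0, -1.0, -2.5, -1.0, 0.0],
    [0.1, -0.8, -2.0, -0.9, 0.2],
    [0.0, -1.2, -3.0, -1.1, 0.1],
    [0.2, -0.9, -2.2, -1.0, 0.0],
]
n = len(subjects)

def grand_average(waves):
    """Point-by-point mean across a set of waveforms."""
    return [statistics.mean(col) for col in zip(*waves)]

# Leave-one-out sub-grand averages: one average per omitted subject.
loo_averages = [
    grand_average(subjects[:i] + subjects[i + 1:]) for i in range(n)
]

# Measure peak (most negative) latency index on each sub-grand average;
# these measurements are far less noisy than single-subject ones.
peak_indices = [wave.index(min(wave)) for wave in loo_averages]
print(peak_indices)  # every sub-grand average peaks at the same sample
```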

11

What are Luck's main rules for experimental design?

1. peaks and components are not the same thing
2. you can't know the timing of underlying ERP components by looking at a single ERP waveform
3. an effect during the time period of a peak may not reflect a modulation of the peak's underlying component
4. peak amplitude/latency differences may not indicate differences in the actual component
5. averaged ERP waveforms don't accurately represent single trial ERP waveforms

12

How can difference waves be misleading?

when two waveforms mostly overlap but not entirely, the difference wave amplifies the regions that don't overlap and can create two peaks that exist in neither original waveform.
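One concrete case is a small latency shift: two otherwise identical single-peak waveforms produce a biphasic difference wave. A toy sketch using a Gaussian stand-in for an ERP peak:

```python
import math

# Two waveforms identical except for a small latency shift produce a
# biphasic difference wave with two lobes that exist in neither original.
def gaussian_peak(t, center, width=20.0, amp=5.0):
    """Toy single-peak 'ERP' waveform (not real data)."""
    return amp * math.exp(-((t - center) ** 2) / (2 * width ** 2))

times = range(200)
wave_a = [gaussian_peak(t, center=100) for t in times]
wave_b = [gaussian_peak(t, center=110) for t in times]  # shifted 10 ms
diff = [a - b for a, b in zip(wave_a, wave_b)]

# Each original waveform has one peak; the difference wave has a
# positive lobe followed by a negative lobe.
print(max(diff) > 0 and min(diff) < 0)
```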

13

What are common confounds in ERP studies?

- sensory (differences in displays)
- motor (presence or absence of a motor response)
- arousal
- overlap of ERPs from sequentially presented stimuli
- number of stimuli/trials

14

What are some ways to avoid misinterpreting ERPs?

- focus on only a couple components
- use well-studied experimental manipulations
- focus on large components
- isolate components with difference waves (but look at regular ERPs too)
- focus on easily isolated components

15

What are rules of thumb for number of trials based on component size?

- large components: 30-60 trials
- medium components: 150-200 trials
- small components: 400-800 trials
- if you think fatigue will be an issue based on trial number, run more participants