Ch 5: Evidence
attrition bias: a selection effect (similar to survivor bias) in which some patients drop out of a research study, or data is otherwise lost, in ways that can make the evidence unreliable.
echo chambers: the feedback loop that occurs when our sources of information and opinion have all been selected to support our opinions and preferences. This includes our own selection of media and friends with similar viewpoints, but also results from the fact that social media tailors what we see, based on an algorithm designed to engage us.
evidence for H: when a fact is more probable given H than given not-H, it constitutes at least some evidence for H. By the first rule of evidence, this means we should increase our degree of confidence in H, at least slightly.
evidence test: if we are wondering whether a new fact or observation is evidence for a hypothesis H, we can ask whether that fact or observation is more likely given H or given not-H. If the former, it's at least some evidence for H. If the latter, it's at least some evidence for not-H. If neither, it's independent of H.
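The evidence test amounts to comparing two conditional probabilities. A minimal sketch in Python (the function name and the example probabilities are illustrative, not from the text):

```python
def evidence_test(p_e_given_h, p_e_given_not_h):
    """Classify an observation E relative to hypothesis H by
    comparing P(E | H) with P(E | not-H)."""
    if p_e_given_h > p_e_given_not_h:
        return "evidence for H"
    if p_e_given_h < p_e_given_not_h:
        return "evidence for not-H"
    return "independent of H"

# Hypothetical example: a symptom that appears in 90% of patients
# with a condition but only 10% of those without it.
print(evidence_test(0.9, 0.1))  # evidence for H
```

Note that the test only classifies the observation; how strong the evidence is requires the strength test described below.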
file-drawer effect: this is a selection effect caused by the researchers themselves, who might not even bother to write up and send in a study that is unlikely to be published (viz. a boring study), but instead leave it in their file drawers. See the related entry for publication bias.
heads I win, tails we're even: This error involves failing to treat a new fact as evidence against our favored position, even though its opposite would have readily been welcomed as evidence for our position. There are various ways we might make this error, including by ignoring the evidence or by assigning inconsistent strength factor values. But the key to this particular pitfall is the inconsistency between how we respond to the evidence at hand and how we would have responded to the opposite evidence.
hypothesis: this is any claim under investigation, often denoted with the placeholder letter "H."
independent of H: see evidence test.
media bias: although this term is generally used in reference to the political biases of the media, we use it here to cover the general bias toward engaging content. This may manifest as content of special interest to viewers with a certain political orientation, or even outright slanted content. The general category of media bias also includes the highly tailored algorithms of social media.
one-sided strength testing: the strength of a piece of evidence for a claim is a matter of how much more likely the evidence would be if the claim were true than if it were false. If we pay attention only to how likely the evidence would be if the claim were true, and treat a high value as constituting evidence, we are making the mistake called "one-sided strength testing."
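The mistake can be made concrete with a toy calculation (the numbers here are hypothetical):

```python
p_e_h = 0.95     # E is very likely if the claim is true...
p_e_noth = 0.90  # ...but nearly as likely if the claim is false.

# One-sided strength testing looks only at the 0.95 and declares
# E strong evidence. The comparative ratio tells the real story:
ratio = p_e_h / p_e_noth
print(ratio)  # about 1.06 — barely any evidence at all
```

Only the comparison of the two probabilities, not either one on its own, measures evidential strength.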
opposite evidence rule: to help us avoid ignoring evidence against a view that we find plausible, it can be useful to ask ourselves how we would have reacted to the opposite observation. If we would have treated the opposite observation as evidence for our view, then we should treat the evidence we have as evidence against our view. (Though the two pieces of evidence need not be equally strong.) In other words, if E is evidence for H, then not-E is evidence for not-H. Forgetting this leads to the error we call heads I win, tails we're even.
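Since P(not-E | H) = 1 − P(E | H), a line of toy arithmetic (with hypothetical numbers) shows why the rule must hold:

```python
p_e_h, p_e_noth = 0.8, 0.3   # E is evidence for H, since 0.8 > 0.3

p_note_h = 1 - p_e_h         # P(not-E | H)     = 0.2
p_note_noth = 1 - p_e_noth   # P(not-E | not-H) = 0.7

# The inequality flips: not-E is more likely given not-H,
# so not-E is evidence for not-H.
print(p_note_h < p_note_noth)  # True
```

Whenever P(E | H) exceeds P(E | not-H), the complements must satisfy the reverse inequality, which is exactly the rule stated above.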
publication bias: the tendency for academic books and journals to publish research that is surprising in some way. A piece of research can do this by providing evidence against conventional wisdom, or for a surprising alternative. Meanwhile studies that support conventional wisdom, or fail to provide support for alternatives, can be passed over.
selection effect: a factor that systematically selects which things we can observe. This can make our evidence unreliable if we are unaware it's happening.
selective noticing: when observations that support a hypothesis bring that hypothesis to mind, causing us to notice that they support it, whereas observations that disconfirm it do not bring it to mind. The result is that we are more likely to think about the hypothesis when we are getting evidence for it, and not when we are getting evidence against it. So, it will seem to us like we are mainly getting evidence for it. This can happen even if the hypothesis is just something we've considered or heard about—it needn't be something we already believed. (So selective noticing can happen without confirmation bias, although it seems to be exacerbated when we do antecedently accept the hypothesis.)
serial position effect: the tendency to remember the very first and last events in a series (or the first and last parts of an extended event).
strength factor: a measure of the strength of a piece of evidence for some hypothesis H. It's obtained by asking how probable the evidence is given H, and dividing that by how probable the evidence is given not-H. The higher the strength factor, the stronger the evidence provided for H. (A strength factor of less than 1 is also possible, when the evidence is less likely given H than not-H: this means it's actually evidence against H.) A traditional but arbitrary threshold for "strong evidence" is about a strength factor of 10.
strength test: a test of the strength of a piece of evidence for some hypothesis H. We ask how much more (or less) likely the evidence is given H than given not-H. Note that we need a comparative answer to the strength test, so we are asking how probable the evidence is given H, and dividing that by how probable the evidence is given not-H. The resulting value is the strength factor.
survivor bias: this is a more specific term for bias arising from an extreme form of selection effect, when there is a process that actually eliminates some potential sources of information, and we only have access to those that survive. For example, suppose that I've met lots of elderly people who have smoked all their lives and are not sick, so I decide that smoking is fairly safe. I may be forgetting that the people who smoked and died are not around for me to meet.