Week 7 Tuesday Lecture

We have a few sessions left in your discussion sections, and I think that would be a great time, as a class, as a community, to try to build some study exam guides together. If you have other suggestions or feel that your views are not represented, complete the survey; we'll still take a look, and I appreciate all of those who already did so.

So let's talk about public opinion. The textbook defines public opinion as the collection of attitudes and preferences of the mass public. On this next slide I have a crazy little mind-map schematic that defines and provides examples of both attitudes and preferences, so let's get specific about what we mean by this.

Let's start with attitudes. Attitudes are made up of two different types of evaluations. First, we have opinions. The textbook defines opinions as subjective evaluations or beliefs that are often a little more short-term; they can fluctuate over time. Any time you have a personal belief that's not super deeply rooted (maybe it's about a specific passing event, or maybe it's something where you have an opinion but it could possibly change), we're in the realm of opinions, and I'll provide some examples in just a second. This is in contrast to the other type of attitude, which the textbook calls predispositions. The difference between opinions and predispositions is that predispositions are deeper values or strongly held beliefs that are stable over time. So opinions are more subjective, flippant, short-term; predispositions are your deeply held core beliefs that really define your political orientation towards the world. You can think of them as a structuring set of normative beliefs.

Some examples of both. Something like "I like Kamala Harris," meaning I have a favorable opinion of Kamala Harris: that's your subjective evaluation of her. You probably didn't have an opinion of her before she became vice president, unless you were somehow represented by her in California before. It's not a strong, deeply held belief that you've had all your life; it's just an evaluation. "I trust the Supreme Court": if somebody asked you on a public opinion survey how much you trust the Supreme Court, maybe on this particular day you feel like you trust it, but this might fluctuate over time depending on what the court is doing, and you may be thinking about the court more or less on a day-to-day basis. Having a little bit of trust in the Supreme Court is probably not a deeply held part of your identity. "I think the minimum wage is too high": again, this is just a subjective evaluation of a particular policy. It's possible that if you are a minimum wage worker, or maybe a union leader who has worked on this policy your entire life, this could somehow become a predisposition. But this is also the kind of thing that could change over time if minimum wage policy changes; it's probably not central to your identity that you hold this belief. It's just a subjective opinion that might change as new politicians and new policies come in.
Predispositions, though, are your deep stories about yourself and your orientation towards the world, and they have two different components. The first is ideology. Again, we have a ton of definitions; thank the textbook for offering them, I'm trying to distill them down and make them easy. The textbook defines ideology as a coherent, organized set of principles on which beliefs are based. That is one type of predisposition you can have, and we'll talk about some examples. The other type of predisposition that's common in American politics is having a party identification. Your party identification is a psychological attachment or loyalty to a political party. And I want to point out that having a party ID, an attachment to a particular party, is not necessarily the same thing as having strong, coherent ideological commitments. It is the case that there are different ideological sides of the spectrum that the two parties tend to embody, but they're not always super coherent. Among Democrats, you can think about a working-class union person who really cares about minimum wages and progressive redistribution; that's not necessarily the same set of concerns as someone who's a Democrat because they care a lot about trans rights. In one case, you want the government to provide more intervention: raise minimum wages, enact protective tariffs, protect workers. In the other case, you want the government to not interfere with individuals' rights to their gender identity. It's completely fine that these are both part of the Democratic camp, but it's not like there's some deep underlying ideology that ties these two together.

Ideologies are similar to our normative frameworks, where you have some core set of principles that structure and guide your subsequent opinions and preferences. Libertarianism, the idea that in general we should always try to protect and enhance liberty and, whenever possible, have as little government input as possible: that's an ideology that might then shape your party ID, your opinions, and your preferences, but the overarching structure, the ideology, is about valuing freedom, valuing liberty. You could think of utilitarianism as an ideology as well: if you believe that actions should be justified in terms of how much good they do for the general public rather than whether they are good in and of themselves, then welfare maximization, utility maximization, could be your underlying ideology, which again can structure all these other things. Conservatism: if you have a deep-seated ideology that believes in preserving traditions and preserving traditional structures and institutions, that could be your governing ideology. Party ID, by contrast, is really just: are you a Republican, a Democrat, or an independent? If you were in a different country, maybe there would be more parties on offer, but here we really have the D's and the R's, and your party identification is often more like cheering for a sports team. You have a psychological affinity for the party, you often vote with them, but your ideology may or may not always agree with what party leaders espouse.
Again, because of polarization and sorting, it is often the case that Republicans are more conservative and Democrats are more liberal. But historically it was possible to have more conservative Democrats in the South before the southern realignment, or more liberal Republicans in the Northeast.

So attitudes include opinions and predispositions, and predispositions have this ideology and party ID component. And then preferences: the textbook defines a preference as whenever you have different outcomes that you are ranking. A preference is when you prefer one thing over another, and there has to be some sort of comparison or ranking for there to be a preference. For example, if you are someone who would rather pay lower taxes than have more public parks, that is a preference: you have a preference for lower taxes and less public service provision. So the easy rule of thumb is: if there's some sort of comparison or ranking, it's a preference. If it's a short-term, not-so-important, not-so-deeply-held evaluation or belief, it's probably an opinion. If it's a deep-seated, deeply held, structuring belief that you've had for a long time and it's pretty stable, you are probably in the land of either ideology, if it's structured along a set of principles, or party ID, if it's based around rooting for the team you're affiliated with. I know this is a ton of definitions; hopefully this slide helps put them all together, since in the textbook they're spread out throughout the chapter. Just go through and make sure you can provide some different examples of each.

Unfortunately, we're not really going to have time today to go over the question of where political attitudes come from. This is an important section in the textbook; the textbook offers five different sources of where your attitudes and opinions come from, and if we have time at the end of the day, I'll return to this. Otherwise, none of this is tricky; it's all exactly what you would think. Socialization: your parents are an important early source of your political beliefs. Personal interest: your occupation or your relative income. These are important sources of beliefs. So just go through and be able to provide one example of each. This won't be super important for the quiz, and I really want to focus most of today on measuring public opinion, because this is where it gets fun, but I'm going to expect you to have a sense of this from the textbook, just in terms of background knowledge.

So let's get into how political scientists measure public opinion, and then we'll do a bunch of interactive examples, which I'm excited about. Oh, did we have a visitor? Very exciting. All right, let's get into this question of measuring public opinion. The very first time we saw public polling in a presidential election, to try to get a sense of how voters were thinking about the race, was in 1824. This was a very contested election between Adams, Jackson, and a couple of other candidates; it was a very tight and uncertain race. It was really the first time since 1800 where it was completely unclear who was going to win.
The original party structure of the US was completely breaking down, and we ended up with John Quincy Adams winning the presidency even though he didn't win the popular vote. This was the first time in history that happened: he won enough Electoral College votes ultimately to become president, but he didn't win the popular vote. So this was a chaotic election, and because there was so much uncertainty, both sides realized it could be quite useful to go to state legislators in the different states and do informal straw polls of, you know, how are all the people in this town or county voting. This was very informal; a straw poll just means an informal poll that is not scientific and is based on convenience samples. But this is the first time you saw parties going state to state and trying to figure out which direction the public was leaning. Not scientific, not super sophisticated, but you can Google "straw poll of 1824" and find some scholarly articles about this first attempt by presidential candidates to measure public opinion.

In the modern era, where things really get exciting is with this gentleman, FDR, who we've talked about quite a bit this quarter. He's our New Deal friend, a president very responsible for expanding federal power, and another unique feature of Roosevelt was that he was the first presidential candidate to really rely on private polling to inform his election strategy as well as his public policy. He had a big agenda with the New Deal, and he needed to be assured that there was public backing behind him in order to do all the stuff he wanted to do. So he would surround himself with statisticians (statistics was a new branch of math that was just emerging), and he was really obsessed with knowing what his approval ratings were. He was the first candidate to strategically change where he was campaigning based on these early public opinion polls, and ever since then, of course, it's become standard practice for presidential candidates to use this type of data in shaping their strategy. One of the things that helped Roosevelt accomplish these goals was that George Gallup, who we've met via the Gallup polls at several points over the quarter, created his famous institute, the American Institute of Public Opinion, in 1936. FDR and Gallup were friends, and FDR was an early adopter of using Gallup poll data to change the way he ran for president. So we have this new field of statistics, we have politicians recognizing that this data is useful to them, and we have new public opinion institutes tasked with performing surveys of Americans to gauge public opinion. This is very exciting.

Oh, one more important thing: the 1940s was the period where we really realized, scientifically, the value of random sampling. As statistics was emerging as a field, we were trying to figure out just how reliable and valid the data coming out of these polling efforts were, and in the 1940s we realized that random sampling is actually an incredible way to get very valid data and draw good inferences about what millions of Americans are thinking, even if you only have 1,000 respondents, as long as you randomly sample. We'll return to this concept when we get into our own examples of survey design.
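As a quick aside, here's a minimal sketch of the standard 95% margin-of-error formula for a proportion estimated from a simple random sample (the code is purely illustrative, not course material; it assumes the worst-case proportion of 0.5). Notice that it depends on the sample size, not the population size:

```python
import math

# Illustrative sketch: 95% margin of error for a sample proportion from a
# simple random sample. Assumes worst-case p = 0.5 and the usual z = 1.96.
def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 1000, 10000):
    print(f"n = {n:>5}: +/- {margin_of_error(n):.1%}")
# n =   100: +/- 9.8%
# n =  1000: +/- 3.1%
# n = 10000: +/- 1.0%
```

This is why national polls of about 1,000 respondents typically report a margin of error around plus or minus three points, regardless of how many millions of people are in the population.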
But fast forward: now we're in an era where we typically have obsessive and pretty accurate polling for almost any election that matters. For almost any election that people are paying attention to, you can probably go and find a public opinion poll, and some people really don't like this; it can make it feel as though the election is predetermined before you even go to the polls, which can feel demobilizing. But here we are. This is a figure from the New York Times, which has its famous election-night needle that updates in terms of which candidate is predicted to win as exit polls come in and the votes are counted. And even though we are in this era where we can pretty accurately predict most elections before they happen, we do still sometimes have surprises. 2016 was an election where experts all predicted that Clinton would win. All of the public opinion polls suggested she had about a three percentage point margin over Trump, and most of the models predicted that Clinton would win. And actually, she did win the popular vote, and most of the polls were within a margin of 1.5 percentage points; they just missed the states that were going to go for Trump, and the Electoral College went for Trump. So we can have some embarrassments; we're not always right about everything. But the science of polling has gotten to such a place that we do have high degrees of accuracy.

So let's talk about measurement challenges. When it comes to wanting to understand what policies and politicians the public supports, we're going to talk about three. The first is random fluctuation. The second is the fact that a lot of the public is uninformed or doesn't have strong preferences. And the third comes from the actual design of how surveys are written and how they're conducted. We'll go through all three of these, define what they mean, provide examples, and then really focus on this last point: how can we conduct good surveys, with good questions, that allow us to draw good inferences?

So, random fluctuation. This is just the idea that from day to day, if you ask the public about any topic, whether it's how much you like pizza or how much you like Donald Trump, there is going to be some random fluctuation in how folks respond. For this figure, you don't need to read all the fine text; it's just showing, I believe, Trump's approval ratings, Biden's, and Congress's. The key thing is just to notice that the lines squiggle around a lot. Yes, there are some trends, and trends might indicate something more meaningful going on. But from day to day, if you notice a little jump up or a little jump down, do you really think people are changing their minds that much? It's probably just that their sports team lost that day, so they're feeling negative about everything, or on this particular day it's rainy. People often have a little bit of random fluctuation in how they respond to opinion questions, just based on the day. And this type of random fluctuation is what we can address really well with random sampling and effective survey design. It's the type of thing that would be a challenge if we were in a land of small samples and if we didn't know as much as we do about stats, but this we're going to be able to address. You should just be aware that from day to day, things fluctuate a little bit.
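Here's a small simulation of that idea (a sketch with made-up numbers: true approval fixed at 44%, daily polls of 1,000 people), showing that polls squiggle even when nobody changes their mind:

```python
import random

random.seed(1)
TRUE_APPROVAL = 0.44   # assume the true underlying approval never moves
N = 1000               # respondents per daily poll

# Each "day," poll N random people; each approves with probability 0.44.
for day in range(1, 6):
    approvers = sum(random.random() < TRUE_APPROVAL for _ in range(N))
    print(f"day {day}: measured approval = {approvers / N:.1%}")

# The printed values bounce around 44% from poll to poll even though the
# underlying opinion is perfectly constant: that's pure sampling noise.
```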
The second measurement challenge is the idea that the public is often relatively uninformed, or might not have particularly strong attitudes or opinions. This idea really originated in the 1960s, when political scientists got serious about trying to track and measure public opinion in a more systematic way. Some scholars at the University of Michigan discovered that if you ask a bunch of people the same political questions repeatedly over time, you often get a high proportion of folks who seem to basically just be giving random answers. They seem to answer with whatever the first option is, and if you ask them a week or two later, they've just changed their minds; they're not super stable or consistent. And very high rates of folks were behaving this way once we started doing this within-person, over-time analysis. This was quite concerning: how can we measure public opinion if people don't actually have real opinions, if they're just changing their answers?

Here are some other just sort of sad facts about the state of knowledge that maybe call into question whether this enterprise is even worth embarking on at all. Only 37% of Americans can name their congressional representative. Hopefully all of you can; I don't know, did we have an assignment in this class asking you to research your congressional representative? That might have been in our Congress class; everyone in my Congress class can do this. If you are one of the folks who can't, go look it up. The extra credit assignment is going to ask you to draft a letter to a representative at some level of office, whether it's your congressional representative or your mayor, and there are links to provide you with information about who your representatives are. 35% of Americans can't name a single branch of the federal government. I believe everyone in this room can now do this, because we have covered all three branches of government, and they will be on the final, so I know you all are good on this. Only 27% know it takes a two-thirds vote of the House and Senate to override a presidential veto; that's pretty arcane knowledge. And 21% incorrectly think a 5-4 Supreme Court decision is sent back to Congress for reconsideration, which would be very democratic, but no, this is not the case: the Supreme Court is the sole arbiter of deciding constitutionality, and Congress would just have to draft a new law that is constitutional if it wanted a change. So these are just some examples of the fact that people don't always have super coherent and valid opinions.

At the end of today's class, we're going to return to the question of how worried we should be. Can democracy still function even if folks don't know that much? Political scientists tend to be pretty optimistic and think this is actually totally rational and fine: people know about things when it actually affects them, and for the most part, it doesn't affect your day-to-day life to know this stuff. But be thinking about this, and we'll come back to it at the end of class.

So, on these first two points of random fluctuation and an uninformed public, let's put these aside and assume that people have at least some knowledge and some opinions that are valid, stable over time, and worth measuring. The question now is: how do we elicit that? How do we construct good samples and good surveys to actually learn about public opinion, assuming there's some signal in the noise? And we do know that there is some noise.
So this is going to be a more interactive part of class. I'll define some key terms, and then we'll do some surveys together and fix some survey questions. But let's start with some definitions to make sure we're all on the same page; these are all in the textbook as well. When you're talking about public opinion, the first thing you want to determine is your population of interest. A population is just a group that you want to learn about, and there are different groups or populations you might be interested in for different things. For a presidential election, you might be interested in the opinions of all voters living in the US. For a policy that UCLA is trying to pass, maybe the population of interest is UCLA students, or maybe it's UCLA students and faculty. Maybe we care about citizens, maybe we care about non-citizens, maybe we care about children. You have to get clear about who you are actually trying to measure the preferences of, because if you don't have a well-defined population, you won't be able to construct a good sample to actually measure that group. So your population is just the group you care about, and it should be theoretically relevant.

For the most part, you're not going to be able to measure the preferences of the entire population; usually that's a pretty big group. So in general you are going to be drawing a sample, which is just a subset of the population that you survey to provide information about the broader group. When you go to the doctor for a blood test to see if you're healthy, the doctor doesn't need to take all of your blood to test for different diseases. They can take a small sample of your blood, and that sample represents the various viruses, antibodies, platelets, and so on in your broader body. Similarly, if we're trying to learn about UCLA students or American voters or whoever it might be, we're usually not going to contact all of them; we will contact a sample.

The gold standard of constructing a sample is a random sample. A random sample is a particular type of sample where each person in the population has an equal chance of being part of the sample. We'll talk about and show some examples of what happens when a sample isn't random. If a sample isn't random, then you almost certainly have a biased sample, which is one that doesn't represent the population accurately; I have some examples on the next slide. Say the population I was trying to measure was all the students in this room, so you were my population. If my sample were only the people sitting in the first two rows, that might not be a particularly good random sample of the entire room, because there might be a particular type of person who tends to sit at the front, and I might not be capturing an accurate distribution of everyone who's actually in this room.
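As a quick sketch of the difference (a made-up roster, using the front-rows example from a moment ago), here's the contrast between a convenience sample and a simple random sample:

```python
import random

random.seed(42)
# Hypothetical roster: 300 students, listed in seating order (front rows first).
room = [f"student_{i:03d}" for i in range(300)]

# Convenience sample: just the front rows. NOT random; whoever sits up front
# is overrepresented, and everyone in the back has zero chance of selection.
convenience_sample = room[:20]

# Simple random sample: every student has an equal chance of being chosen,
# which is the defining property of a random sample.
random_sample = random.sample(room, 20)

print(convenience_sample[:3])  # always ['student_000', 'student_001', ...]
print(random_sample[:3])       # a spread drawn from all over the room
```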
There are a few main ways you can end up with biased samples that don't actually represent your population of interest. The first is if you use non-random sampling. Just grabbing the folks in the first few rows would be one example of a non-random sample. Or, if you were trying to measure the public opinion of American voters and you went through the phone book and just called everyone whose name started with A, that would be another example of a non-random approach: if there's a particular type of group that is more likely to have that kind of last name, they might appear disproportionately in your sample, and you might not actually represent the population of interest. Another problem, in addition to not contacting people randomly in a way that gives everyone an equal chance of appearing, is that not everyone wants to take your survey or participate. If you have voluntary responses, you can end up with a weird set of people who are willing to talk to you. Does anyone have any guesses, for political surveys (I'm not someone who actually conducts surveys in my research), about what happens when pollsters call up a landline and say, hey, are you willing to take my political survey? What sorts of people do you think are more likely to voluntarily participate? Yes. And then here. Yeah, first, elderly people tend to be home, right? If you're retired, you're more likely to be at home rather than at work, so we often get an overrepresentation of senior citizens for these types of at-home landline calls. Is that what you were going to say?

People who have, like, strong opinions.

People with strong opinions, yes. If you are someone who doesn't care about politics or doesn't have strong views, you're going to say, sorry, I just don't have time, I'm not interested. So we tend to get people with strong opinions participating. Any others? Did yours all get taken by now? Yeah, I think those are both right. I mean, this one is silly, but for a phone sample, you're only going to reach people who have landlines, so anyone who only has a cell phone and doesn't have a landline won't have a possibility of appearing in your sample. And yeah, when it comes to voluntary responses, often the people who care the most are the ones who participate.

Then, even if you have a random sample, and even if you try to rule out any bias that creeps in from the voluntary nature of participating, a super low response rate can also bias your sample, because if you end up with just a small number of participants, it becomes increasingly likely that they don't represent the views of the entire group. You know, we have a class of 300 folks; in this survey where I was asking for feedback, if only 10 people participated, first of all it's voluntary, so they're probably people with really strong views, but with that low response rate we just don't have any sense of whether those people accurately reflect the views of the broader class.

So let's go through an example of what can happen when we have a biased sample, and then we'll doctor some questions together. I was trying to find an example of a UCLA campus climate survey, where they surveyed students, to look at what the response rate was like. It turns out they did do a campus climate study, but they did it back in 2014 and they haven't done a systematic, good random-sample poll since, which was shocking to me. But I got the data from the 2014 campus climate study, and we'll see issues of both bias in the sample and low response rates. Here the population of interest was the entire UCLA community, and the population broken down by gender was actually slightly more women than men: 54% of the UCLA population is women and 45% is men (this is both students and faculty, and this gender gap might have even grown over time). So that's the population, the group we want to study. But in the sample, the group that actually responded to the survey, only 40% were men and 58% were women. So more women responded to the climate survey than men; the response rate for men was 18.9% and the response rate for women was 23%. Any theories for why women might be more likely to participate in a climate study relative to men? I mean, there are lots, but let's put on our social science hats: what are some hypotheses for why women are overrepresented in this voluntary sample? Yes.

Women tend to be more liberal and want to share their views in a study like that?

Or that conservatives don't care as much about it. Yeah, so this is actually accurate, and I want to clarify. The hypothesis was: if women are more liberal, and liberals tend to care more about these issues, then we might expect an overrepresentation of women. And this survey was actually not about climate change; it was a survey about the climate at UCLA in terms of being a welcoming, inclusive place, so the environment among the students. But the same logic applies: if women somehow care more about these issues than men, they might be disproportionately likely to want to speak. And in particular, if women are more likely to have experienced gender bias or sexual assault or any sort of issue that they would want to report in a climate survey, that could lead to their overrepresentation; they might have stronger views and want to participate.

Undergrads were 36% of the university's population, but they were only 33% of the sample, whereas grad students were 15% of the population but 18% of the sample. So grad students were more likely to respond to this campus climate study than undergrads. Any theories here? Grad students, why do you participate at such high rates? Yeah, I think this is probably right: UCLA grad students are very underpaid, so they are aggrieved, and we should all support higher pay for our RAs. I suspect this imbalance might come from the fact that grad students care a lot about this issue. So anyway, this is just one example: you have some population, but once you have a voluntary-response survey and you end up with low response rates, you can end up drawing inferences that may or may not accurately reflect the entire population as a whole.
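As a back-of-the-envelope sketch using the numbers above (other gender categories are omitted, which is why the population shares don't sum to 100%), differential response rates alone reproduce roughly the skew we observed:

```python
# Population shares and group-specific response rates from the 2014 UCLA
# campus climate study, as quoted above.
pop = {"women": 0.54, "men": 0.45}
response_rate = {"women": 0.230, "men": 0.189}

# Among the people who actually respond, each group's share is proportional
# to (population share) x (response rate).
weights = {g: pop[g] * response_rate[g] for g in pop}
total = sum(weights.values())
for group, w in weights.items():
    print(f"{group}: {w / total:.1%} of respondents")
# women: 59.4% of respondents
# men:   40.6% of respondents   (close to the observed 58% / 40% split)
```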
OK, but let's say we get a really good random, representative sample. Are we finally in the clear? Can we measure public opinion? No. Even if you have the perfect sample, and it's perfectly representative of your population, it's random, and you have a high response rate, there are really big variations you can get in answers based on question format, question wording, and so on. So I have something I want to try; I hope this works. I'm going to divide the class in half, so don't click on the QR code until you know which side to click on. This is the left side of the room, from Raymond over to this side, and over here is the right side of the room, with Clayton. Make sure you get the correct QR code and only take one of these surveys; there are some questions at your link, and if you're in the middle, just flip a coin. So we have our left-hand side over here and the right-hand side over with the TAs. Go to the link and fill out the survey. I really hope we get some variation; I've gotten it in the past. Don't share with each other; you don't want to bias the results, it has to be independent. OK, we've gotten enough responses that I think we should be able to draw some comparisons; feel free to keep responding. I know the last question is outdated at this point. This exercise comes from the Gallup poll, and I wanted to use the exact question wording to try to be scientifically valid. So there were some subtle differences in the question wording that I'm hoping lead to some differences in answers, and I can already see that that is the case, which I'm happy about.

On the left-hand side it says: imagine the US is preparing for an outbreak of an unusual new disease which is expected to kill 600 people, and two alternative programs have been proposed; which option do you prefer? A lot of people say they want a program where 200 people will be saved for sure; only 30% of people like a program where there's a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved. However, if you just change the language over here and invert it, with the exact same set of circumstances, Program A on one side says 200 people will be saved, and Program A over here says 400 people will die. If you emphasize being saved versus dying, even though it's literally the exact same numbers, just flipped in how you present them, people don't like saying they want a program where 400 people will die, even though they're happy with a program where 200 people will be saved. So we're getting completely different answers to this question based on whether you emphasize one thing or the other, even though the numbers are identical.

The next question is about free speech and the Holocaust. The question on the left says: do you think the US government should allow public speeches denying the Holocaust? 46% say yes. If you instead ask: do you think the US government should forbid public speeches denying the Holocaust? Only 34% say yes. So allow versus forbid gets us about a 12 percentage point difference, even though these describe the same type of policy.

This next question: do you get headaches frequently, and if so, how many a month? Let's see if this actually worked. So that one says "frequently" and this one says "occasionally," and normally, when you say "frequently," you get many more people reporting a lot of headaches. When you say "occasionally"... well, I actually truly cannot interpret this. You?

Fewer zeros? Yeah, three to one.

No, I feel like there are more zeros. Normally the "frequently" version should have... yeah, I don't know what I was trying to show there; I think there were too many options. Ignore that one. But the idea is that the distributions are different; I just don't know if they're different in the way they should be.

How tall was Napoleon in inches, versus how short was he in inches? When you say "how short was he?", more people say less than five feet: 20% of people said less than five feet when you prime the word "short," whereas only 14% of people said less than five feet when you ask how tall he was. So even just swapping in "tall" versus "short" is triggering people to guess that he's short.

And then finally, on the Afghanistan question, which is already completely outdated: what is the difference here? One version emphasizes staying for many years regardless of what happens; oh yeah, here it's saying civil order will deteriorate, and here it's not saying that. So you're priming one thing. I will send all of these results to you so you can take a look, but the whole idea is that we're seeing differences in responses to questions that should be substantively the same; they just have different emphases.
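What we just did informally is what survey researchers call a split-ballot experiment. Here's a minimal sketch (my own illustration, reusing the Holocaust-speech wordings from above) of randomly assigning respondents to one of two wordings, so that the wording is the only systematic difference between the groups:

```python
import random

wordings = {
    "A": "Should the US government allow public speeches denying the Holocaust?",
    "B": "Should the US government forbid public speeches denying the Holocaust?",
}

def split_ballot(respondent_ids, seed=7):
    """Randomly assign each respondent to wording A or B (a fair coin flip)."""
    rng = random.Random(seed)
    return {rid: rng.choice("AB") for rid in respondent_ids}

assignment = split_ballot(range(300))
# Compare answer distributions between groups A and B. Because assignment is
# random (unlike splitting a room by seating), any systematic gap in answers
# is attributable to the wording, not to who happened to get which version.
```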

Whoops.

So now, how do I get out of this? Let's see. Great. OK. So let's talk about some of these differences that survey researchers have discovered can lead to big changes in how people respond. One common issue is that open-ended versus closed-ended questions often yield very different results. If you ask people what they think the most important issue facing this country is in a closed-ended format (the right-hand column here), most people say the economy: 58% of people say the economy. If you just give them a list of possible issues and ask what's the most important, people tend to gravitate towards that. But if you ask in an open-ended way, where you say "list the most important issue" and you don't constrain them, only 35% of people say the economy. All of these are historical examples from the Pew Research Center; in discussion section this week, you will be doing some of your own public opinion research, and the TAs will introduce you to a resource where you can look up your own public opinion polls. These are all from the last ten to thirty years. But open- versus closed-ended questions can make a big difference.

Another issue is something researchers call acquiescence bias. With acquiescence bias, if you only give people an agree/disagree choice, there is a tendency to just want to agree; people like to be agreeable. So here the question says: "The best way to ensure peace is through military strength." When you say just that, 55% of people agree. But if you give people options, so option A, "the best way to ensure peace is through military strength," or option B, "diplomacy is the best way to ensure peace," suddenly we have a majority saying they prefer diplomacy. You just didn't offer it to them before; you gave them agree/disagree, and people have a bias towards wanting to agree. This is called acquiescence bias, so it's better to give more choices, or better yet, ask in an open-ended way.

Another issue that can emerge is that depending on the order in which you ask questions, you can get very different responses. This is back in 2003; obviously nowadays no one cares about gay marriage, but back in the day the debate was gay marriage versus civil unions. In those days, if you first asked about civil unions, you got about 37% of Americans saying they favor civil unions. However, if you first asked about gay marriage, solicited responses on that, and then asked about civil unions, suddenly you got 45% of people saying they support civil unions. Usually, if you ask about a strong policy issue and then ask about a weaker version of it, you can get more people agreeing or being favorable; if you prime them to think of some extreme policy, you can often solicit more agreement with a more moderate policy. So question ordering matters.

And then finally, question wording matters a lot. Here are two sample questions. One says: as you may know, this week (in January 2016) Barack Obama announced several executive orders that changed the nation's gun laws so that background checks are required for more gun purchases online and at gun shows, and which make it easier for the FBI to complete background checks efficiently; overall, do you favor or oppose these changes? CNN asked this question, and 67% of poll respondents said they favored these executive orders from Obama.
However, that very same week, a different polling company ran a different survey. This Rasmussen Reports survey said: when it comes to gun control, should President Obama take action alone if Congress does not approve the initiatives he has proposed, or should the government do only what the President and Congress agree on? When you emphasize this idea of unilateral action versus working with Congress, only 34% said Obama should act alone. So in the first version of the question, you're emphasizing the policies: should Obama act to address these policies? A big majority says yes. When you remind people, oh, but should he act alone, or should he wait for Congress? Suddenly people say, oh no, he should not act alone; he should wait for Congress. So depending on what you emphasize and how you word the question, you can get very different results.

So I have some examples of bad questions, and as a group we will fix them and come up with better versions to try to solicit better answers. A bad question would be: "How short was Napoleon?" By using the word "short," we're priming people to think in a particular way. What would be a different way to ask this? This is obviously a silly example, we don't really care what people think about this, but for the purposes of learning how to write good questions, what would be a way to rephrase it more scientifically? Yes. Perfect: "What do you think Napoleon's height was?", so you aren't triggering anything. In fact, that's almost exactly what I had as an example of a good question: just "how would you describe Napoleon's height?" or "what do you think Napoleon's height is?" It needs to be neutral; you don't want to trigger people into thinking of him as particularly tall or particularly short. Ask it in a way that eliminates any sort of bias, and be as neutral as you can.

OK, another example: "Should concerned parents use infant car seats?" First of all, what would be the problem with this question? And then, what would be a better way to ask it to get more accurate answers? Yes.

"Concerned" would be the problem, and I think a better question just takes out that word.

OK. So when you say "should concerned parents use car seats," this implies that if you don't think they should use car seats, you are not a concerned parent. Everyone wants to think of themselves as a concerned parent, and by inserting this word, you're priming folks in a way that could sway the answer. So just say: "Do you think infant car seats should be required?" Don't make any value judgment about the types of parents who use them or not; focus on only what you need in the question.

"Where do you enjoy drinking beer?" What is the issue with this question, and what would be a better way to phrase it? Yes.

Because, like... it's assuming the person enjoys drinking beer?

So, a couple of things. You've already picked up on one: by saying "do you enjoy drinking beer," you're already assuming that the respondents enjoy drinking beer. What about people who don't enjoy drinking beer? They wouldn't know how to answer this question. Is there even more? Could we go even further to address this? Beautiful, yes. So here we're doing a couple of things: we're making assumptions about the respondent and then forcing them to answer even in cases where it might not apply to them. You could start by just asking, "Do you drink beer?" Get that first. If the answer is no, then the rest isn't relevant; you don't need to ask any follow-up questions, because the person doesn't drink beer. If yes, you could ask something like, "Where do you usually drink beer?" Say you're a company trying to figure out where to market your Bud Light advertisements; maybe you need polling to figure this out. But you should deconstruct the question, and don't force people to respond to questions that aren't relevant to them, because they don't have an accurate way to respond. Excellent.

OK, here's another question: "How satisfied or dissatisfied are you with the pay and work benefits of your current job?" What is the issue, and how could we fix the question? Yes, please.

Like, you can be satisfied with your pay...

And not satisfied with your benefits, or vice versa? Great. You're bundling together pay and work benefits here and not allowing for cases where people are satisfied with one of those things but not the other. This is called a bundled question, and the fix is very simple: just ask "how satisfied are you with your pay?" and, separately, "how satisfied are you with your work benefits?" Break it out so people can give different answers to different parts of the question; don't force them to respond in this bundled way.

OK, and the very last example here: "Do you always eat breakfast? Yes or no." Yes?

"Always." Like, there should be more options.

Perfect. So how would you ask this question? Good, perfect. Yeah, if you have a super inflexible category like "always," it's very difficult to answer accurately, because even someone who eats breakfast 99% of the time and prefers to eat breakfast probably hasn't literally eaten breakfast every single day of their life, right? Some days you sleep in, you're sick, you have an early airplane flight. So you don't want these really strict, all-encompassing categories that make it difficult to be 100% accurate. Ask in a more flexible way that gives people a variety of choices. You could say: how many days a week do you usually eat breakfast? Every day; five to six days; three to four days; I usually don't eat breakfast. So, exactly the solution you proposed: give a broader range of options instead of rigid, inflexible categories that almost certainly aren't always or never true. Really good job with all of these.

The last issue I want to talk about goes back to this question of wording: I want to introduce the idea of priming versus framing. Political psychologists are really into this topic; if you take Professor Perez's political psychology class, you'll talk a lot more about it. Let's define these two terms and provide some examples. I think priming and framing are important because this is often how the media and political elites shape public opinion in ways that can influence how people think about different issues.

Priming is the psychological process of shaping people's initial perceptions or gut reactions. It's all about your perceptions, your gut reactions, the way you feel. You can think of priming as anything that addresses your first-level associational responses, where you naturally feel like you like or dislike something, are drawn towards it or away from it. Priming is anything that triggers happy feelings versus bad feelings; it's very gut-level, without a whole lot of cognitive processing, more about emotion. Framing, on the other hand, is about establishing context to emphasize certain features over others. Framing is more cognitive, more overt; it's often based on details and argumentation, really more thought-based. It's about providing information that leads people to adjust the way they think. Priming is more subtle: pictures, body language, feelings, emotions. Framing is facts, arguments, logic; oftentimes it involves providing a particular narrative, emphasizing a particular perspective, or offering particular types of argumentation. Politicians use both of these to try to shape public opinion. When you see an advertisement for a candidate, they might be talking about why they support the policies they do and presenting facts and figures, but they might also have a particular type of music in the background that evokes nice thoughts and gives you positive associations with that candidate. We'll watch some examples next week in our campaigns and elections unit.

To provide an example of priming versus framing, imagine that a community is trying to decide whether to approve local funding to build a new jail, and imagine you're asking people in a focus group whether they support or oppose building a new jail.
And imagine that when you ask the question, you provide one of two pictures. In the first case, you provide a picture of aggressive-looking incarcerated people, looking a little macho, a little intense; in the other picture, you present incarcerated folks who are hard at work, contributing to society. You ask the same question, and you're not presenting any explicit argumentation or logic; you're just altering a photo that is going to prime people to have either positive or negative associations with the idea of the new jail. So it's pure priming: it's just about feelings, just a picture, no text, not a lot of thought involved.


Framing would be more along the lines of getting into argumentation that leads people to think one way or another. For example, you could imagine two sides to the argument over building a new jail. Maybe one person says: building the new jail will result in increased taxes for local residents. That's an argument against the jail, if you don't want to pay more taxes. Another person could say: building the new jail will spur job growth in the area, for guards, cooks, and social workers. Here you're emphasizing different aspects of the consequences of this policy; you're engaged in cognitive debate, and by framing the issue in these two different ways, you can elicit different responses.

Now, the line between priming and framing can be a little blurry, because often when you frame an issue in a particular way, you might also prime emotions or associations. So yes, it is completely possible for priming and framing to interact. For example, say you have the question: "To alleviate the country's unemployment problems, some have suggested trying to convince the state to build a new jail here. Do you support or oppose building a new jail in our county?" This is framing, in that you're framing the jail in terms of unemployment, but by framing the question this way, you might also be priming people to make a positive association. Honestly, go take the political psychology class if you want to sort all of this out; I truly don't fully understand the difference myself, and you will not be asked this on the quiz. I mean, you should know the definition of each, but I'm not going to give you an example and ask "is this priming or framing?" unless it's truly clear-cut, like nice music: that is priming, because there's no argumentation in playing nice music. The way I think about it: framing is context, narrative, cognitive argument; priming is gut reaction, association. Framing can certainly influence priming, and both can influence your opinions and beliefs.

All right, we have three minutes left, so I'll go ahead and give you some practice quiz questions with iClicker. Go to the QR code, enter the code, and get set up; we'll do a few practice quiz questions. The QR code is on each slide, down in the corner, so if you haven't gotten it yet, you can still get it. Let me start this poll here. All right, hopefully you can see the question now: is this statement an opinion, a predisposition, or a preference? Let's take a look. Predisposition, yes, because predispositions include deep-seated, rooted beliefs such as party ID and ideology, and identifying as a Republican is a good example of a predisposition. It's not a particular attitude that vacillates over time; it's your loyalty to the Republican Party. Good.

Whoops.

How do I do this? All right: is this statement an opinion, a predisposition, or a preference? "I have a favorable view of the Supreme Court decision to overturn Roe versus Wade." Opinion, yeah, good. I agree that this is an opinion. Remember, preferences have to have comparisons, and there's no comparison here. This was a recent thing; you didn't have an opinion about it all your life, so it's not a predisposition. It's possible this is the only part of your personality and political identity that matters, but it's much more likely that this is just your subjective evaluation of the decision, and it's possible it might change. It's an opinion. Excellent.

"Random sampling is sufficient to ensure that responses to a survey will be an unbiased representation of the population." OK, this is false, and it is false because even if you have a random sample, you could still have response bias in the form of voluntary response or a low response rate. Even with a perfect random sample, if people refuse to talk to you, or if you have bad questions, you can still end up with problems of bias.

And then the last one, and then I'll let you all out of here: which survey question is written according to best practices? I will reveal the results at our next class, because I have to get you out of here on time; I promise I'll show the results. In discussion section, bring your laptops, because you're doing the interactive public opinion exploration. And in my last three seconds: check out the extra credit, and check out the final paper; this is very important. The final paper is posted, and there's no lecture next Tuesday; we have a documentary. I will post an announcement about this.
A

Week 7 tuesday

lec 7.mp4

Week 7 lec tuesday

Few sessions in your discussion sections, and I think that would be a great time. You know, as a class, as a community, to try to build some study exam guides together. And then if you have other suggestions or feel that your views are not represented, complete the survey, we'll still take a look and I appreciate all of those who already did so. So let's talk about public opinion. So the textbook defines public opinion. As the collection of attitudes and preferences of the mass public, so on this next slide I have a crazy little sort of mind map schematic that's going to clearly define and provide examples of both attitudes and preferences. Let's get specific about what we mean by this. So preferences, let's get specific about what we mean by this. So for thinking about attitudes and preferences and how they feed into public opinion. Let's go ahead and start with attitudes. Attitudes are comprised of two different types of evaluations. First, we have opinions. So the textbook defines opinions as subjective evaluations or beliefs that. Are often a little bit more short term. They can fluctuate overtime. So here anytime you have sort of a personal belief that's not super deeply rooted, maybe it's about a specific passing event or maybe it's something where you have an opinion, but it could possibly change. Then we're in this realm of opinions and I'll provide some examples in just a second. But this is in contrast to other types of attitudes which the textbook calls predispositions. So the difference between opinions and predispositions is that these predispositions are deeper values or strongly held beliefs that are stable. Cool. Overtime. So opinions are sort of more subjective, flippant, short term predispositions are your deeply held core beliefs that really sort of define your political orientation towards the world. You can think of this as being maybe a structuring set of of normative beliefs, so examples of. Both of these, something like. You know, I like Kamala Harris. You know, like I have a favorable opinion of Kamala Harris. Great. It's sort of your subjective evaluation of her. You probably didn't have an opinion of her before she became vice president, you know, or unless you were somehow represented by her in California before it's sort of. Not super strong, deeply held belief that you've had all your life. It's just an evaluation. I trust the Supreme Court. If somebody asked you on a public opinion question how much you trust the Supreme Court, maybe on this particular day you're feeling like you trusted, this probably might fluctuate over time depending on what the court is doing. You may be thinking about the court more or less on a day-to-day basis. It's probably not a deeply held part of your. Identity having a little bit of trust in the Supreme Court, I think the minimum wage is too high. So again, this is just a subjective evaluation of a particular policy. It is possible that if you are a minimum wage worker, maybe like a Union Leader who's been working in this policy your entire life, that, like maybe this could somehow become a predisposition, but. This is also the kind of thing where this could change over time if minimum wage policy changes, it's probably not central to your identity that you hold this belief. This is just, you know, a subjective opinion might change as new politicians come in as new policies come in. 
Predispositions though these are your deep stories about yourself and your orientation towards the world, and we have sort of two different components. The 1st is ideology. So again we have a ton of definitions. Thank the textbook for offering them. I'm trying to distill them down, make it easy, but. The textbook defines ideology as a coherent, organized set of principles on which beliefs are based. A coherent organized set of principle on which beliefs are based. And this is 1 type of predisposition that you can have and we'll talk about some examples. The other type of predisposition that's common in American politics. Is having a party identification, so your party identification is a psychological attachment or loyalty. To a political. Party. Psychological attachment or loyalty to a political party, and I want to point out that having a party ID and having attachment to a particular party, this does not necessarily mean this is not necessarily the same thing as having a strong, coherent ideological. Commitments. It is the case that there are different ideological sides of the spectrum that both parties tend to embody, but they're not always super coherent. You know, you can think about among Democrats a working class union person who really cares about minimum wages. And progressive registry. Solution that's not necessarily the same set of concerns as someone who's a Democrat because they care a lot about trans rights. In one case, you want the government to provide more intervention, raise minimum wages like protect tariffs, protect workers. In the other case, you want the government to not. Interfere with individuals rights to their gender identity. It's completely fine that these are both part of the democratic camp, but it's not like there's some deep lying underlying ideology that ties these two together ideologies. Are things similar to our normative frameworks where you have some core set of principles that structure and guide your subsequent opinions and your subsequent preferences? Libertarianism. You know this idea that in general we should always just try to protect and enhance liberty and. Whenever possible, we should have as little government input as possible. That's an ideology that might then shape your party ID. It might shape your opinions and preferences, but the overlying structure, the ideology, is something about valuing freedom, valuing liberty. You could think of utilitarianism as an ideology. As well, if you really believe that action should be justified in terms of how much good they do for the general public rather than if they are good in and of themselves, if you think about, well, welfare maximization, utility maximization, that could be your underlying ideology, which again can structure all these other things. Conservatism. If you have sort of a a deep seated ideology that believes in preserving traditions and preserving traditional structures and and institutions that could be your your governing ideology party idea is really just. Are you a Republican? Democrat or an independent? If you were in a different country, maybe there's more things on offer, but here we really have the D's and the R's and your party identification is often more like cheering for a sports team. You have a psychological affinity for the party. You often vote with them, but your ideology may or may not always agree. With what party leaders? 
Espouse. Again, because of polarization and sorting, it is often the case that Republicans are more conservative and Democrats are more liberal. But historically it was possible to have more conservative Democrats in the South, before the southern realignment, or more liberal Republicans in the Northeast. So: attitudes are made up of opinions and predispositions; predispositions have an ideology component and a party ID component; and then there are preferences, which the textbook defines as whenever you are ranking different outcomes. A preference is when you prefer one thing over another — there has to be some sort of comparison or ranking for there to be a preference. For example, if you are someone who would rather pay lower taxes than have more public parks, that is a preference: you have a preference for lower taxes and less public service provision. The easy rule of thumb here is: if there's some sort of comparison or ranking, it's a preference. Otherwise, if it's a short-term, not-so-deeply-held evaluation or belief, it's probably an opinion. If it's a deep-seated, stable, structuring belief that you've had for a long time, you are probably in the land of either ideology, if it's structured along a set of principles, or party ID, if it's based around rooting for the team you are affiliated with. I know this is a ton of definitions; hopefully this slide helps put them all together, because in the textbook they're spread out throughout the chapter. Just make sure you can provide some different examples of each.
Unfortunately, we're not really going to have time today to go over the question of where political attitudes come from. This is an important section in the textbook; it offers five different sources of your attitudes and opinions, and if we have time at the end of the day, I'll return to this. None of it is tricky — it's all exactly what you would think. Socialization: your parents are an important early source of your political beliefs. Personal interest: your occupation or your relative income are important sources of beliefs. So just go through and be able to provide one example of each. This won't be super important for the quiz, and I really want to focus most of today on measuring public opinion, because this is where it gets fun, but I'm going to expect you to have a sense of this from the textbook as background knowledge.
So let's get into how political scientists measure public opinion, and then we'll do a bunch of interactive examples, which I'm excited about. Oh, did we have a visitor? That was very exciting. All right, so let's get into this question of measuring public opinion. The very first time we saw public polling in a presidential election, to try to get a sense of how voters were thinking about the race, was in 1824. This was a very contested election between Adams, Jackson, and two other candidates. It was a very tight and uncertain race — really the first time since 1800 where it was completely unclear who was going to win.
The original party structure of the US was completely breaking down, and we ended up with John Quincy Adams winning the presidency even though he didn't win the popular vote. This was the first time in history that this happened: he ultimately won enough Electoral College votes to become president, but he didn't win the popular vote. So this was a chaotic election, and because there was so much uncertainty, both sides realized it could be quite useful to go to state legislators in the different states and do informal straw polls of how the people in a given town or county were voting. This was very informal — a straw poll just means an informal poll that is not scientific and is based on convenience samples — but this is the first time you saw parties going state to state and trying to figure out which direction the public was leaning. Not scientific, not super sophisticated, but you can Google "straw poll of 1824" and find scholarly articles about this first attempt by presidential candidates to measure public opinion.
In the modern era, where things really get exciting is with this gentleman, FDR, who we've talked about quite a bit this quarter. He's our New Deal friend, a president very responsible for expanding federal power, and another unique feature of Roosevelt is that he was the first presidential candidate to really rely on private polling to inform his election strategy as well as his public policy. He had a big agenda with the New Deal, and he needed to be assured that there was public backing behind him in order to do all of the stuff he wanted to do. So he surrounded himself with statisticians — statistics was a new branch of math that was just emerging — and he was obsessed with knowing what his approval ratings were. He was the first candidate to strategically change where he was campaigning based on these early public opinion polls, and ever since then it has become standard practice for presidential candidates to use this type of data in shaping their strategy. One of the things that helped Roosevelt accomplish these goals was that George Gallup, who we've met via the Gallup polls at several points over the quarter, created his famous American Institute of Public Opinion in 1936. FDR and Gallup were friends, and FDR was an early adopter of using Gallup poll data to change the way he ran for president. So we have this new field of statistics, we have politicians recognizing that this data is useful to them, and we have new public opinion institutes tasked with surveying Americans to gauge public opinion. This is very exciting.
Oh, one more important thing: the 1940s was the period where we really realized, scientifically, the value of random sampling. As statistics was emerging as a field, we were trying to figure out just how reliable and valid the data coming out of these polling efforts was, and in the 1940s we realized that random sampling is an incredible way to get very valid data and draw good inferences about what millions of Americans are thinking, even if you only have 1,000 respondents — as long as you randomly sample. We'll return to this concept when we get into our own examples of survey design.
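(A minimal sketch of my own, not from the lecture, of why roughly 1,000 random respondents is enough: the 95% margin of error for an estimated proportion is about 1.96 times the square root of p(1-p)/n, and it barely depends on how big the population is.)

```python
# Minimal sketch (illustration, not from the lecture): margin of error
# for a sample proportion shrinks with sample size n, not population size.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for an estimated proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 1000, 10000):
    print(f"n = {n:>6}: +/- {margin_of_error(0.5, n):.1%}")  # p = 0.5 is the worst case
# n =    100: +/- 9.8%
# n =   1000: +/- 3.1%
# n =  10000: +/- 1.0%
```

So 1,000 random respondents already gets you within about three points of what millions of people think, and ten times more respondents only buys you a couple more points of precision.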
Fast forward: now we're in an era where we typically have obsessive and pretty accurate polling for almost any election that matters. For almost any election that people are paying attention to, you can probably go and find a public opinion poll, and some people really don't like this — it can make the election feel predetermined before you even go to the polls, which can feel demobilizing. But there we are. This figure is from the New York Times, which has its famous election-night needle that updates in terms of which candidate is predicted to win as exit polls come in and as the votes are counted. And even though we are in this era where we can pretty accurately predict most elections before they happen, we do still sometimes have surprises. 2016 was an election where experts all predicted that Clinton would win: the public opinion polls suggested she had about a three-percentage-point margin over Trump, and most of the models predicted a Clinton win. She did win the popular vote, and most of the polls were within a margin of 1.5 percentage points; they just missed the states that were going to go for Trump, and the Electoral College went for Trump. So we can have some embarrassments — we're not always right about everything — but the science of polling has gotten to a place where we do have high degrees of accuracy.
So let's talk about measurement challenges when it comes to understanding which policies and politicians the public supports. We're going to talk about three. The first is random fluctuation. The second is the fact that a lot of the public is uninformed or doesn't have strong preferences. And the third comes from the actual design of how surveys are written and conducted. We'll go through all three, define what they mean, provide examples, and then really focus on that last point: how can we conduct good surveys, with good questions, that allow us to draw good inferences?
Random fluctuation is just the idea that from day to day, if you ask the public about any topic — whether it's how much you like pizza or how much you like Donald Trump — there is going to be some random fluctuation in how folks respond. In this figure (you don't need to read the fine text; it shows, I believe, approval ratings for Trump, Biden, and Congress), the key thing to notice is that the line squiggles around a lot. Yes, there are some trends, and trends might indicate something more meaningful going on. But from day to day, if you notice a little jump up or a little jump down — do you really think people are changing their minds that much? It's probably just that their sports team lost that day and they're feeling negative about everything, or it's rainy on that particular day. People often have a little bit of random fluctuation in how they respond to opinion questions, just based on the day. This type of random fluctuation is something we can address really well with random sampling and effective survey design. It would be a real challenge if we were in a land of small samples and didn't know as much as we do about stats, but this one we're going to be able to address. Just be aware that from day to day, things fluctuate a little bit.
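(To see how much day-to-day wobble pure sampling noise produces, here's a minimal sketch of my own with made-up numbers — nothing from the lecture. The "true" approval never moves, yet the daily polls bounce around.)

```python
# Minimal sketch (hypothetical numbers): even when true approval is fixed,
# repeated polls of n = 1,000 wobble a few points purely from sampling noise.
import random

random.seed(7)
TRUE_APPROVAL = 0.42  # the "true" opinion never changes all week

for day in range(1, 8):
    # Poll 1,000 randomly chosen people; each approves with probability 0.42.
    poll = sum(random.random() < TRUE_APPROVAL for _ in range(1000)) / 1000
    print(f"day {day}: measured approval = {poll:.1%}")
# Prints values scattered around 42% (roughly +/- 3 points), which is
# exactly the kind of squiggle you see in the approval-rating figure.
```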
The second measurement challenge is the idea that the public is often relatively uninformed, or might not have particularly strong attitudes or opinions. This idea really originated in the 1960s, when political scientists got serious about tracking and measuring public opinion in a more systematic way. Some scholars at the University of Michigan discovered that if you ask a bunch of people the same political questions repeatedly over time, you often get a high proportion of folks who seem to basically just be giving random answers. They seem to answer whatever the first option is, and if you ask them a week or two later, they change their minds; they're just not super stable or consistent. Very high rates of folks were behaving this way once we started doing this within-person, over-time analysis. This was quite concerning: how can we measure public opinion if people don't actually have real opinions, if they're just changing their answers?
And here are some other sort of sad facts about the state of political knowledge that maybe call into question whether this enterprise is even worth embarking on at all. Only 37% of Americans can name their congressional representative. Hopefully all of you can — I don't know if we had an assignment in this class asking you to research your congressional representative, but everyone in my Congress class can do this. If you're one of the folks who can't, go look it up; the extra credit assignment will ask you to draft a letter to a representative at some level of office, whether it's your congressional representative or your mayor, and there are links providing information about who your representatives are. 35% of Americans can't name a single branch of the federal government — I believe everyone in this room can now do this, because we have covered all three branches of government and they will be on the final, so I know you all are good on this. Only 27% know it takes a two-thirds vote of the House and Senate to override a presidential veto, which is pretty arcane knowledge. And 21% incorrectly think a 5-4 Supreme Court decision is sent back to Congress for reconsideration — which would be very democratic, but no, this is not the case: the Supreme Court is the sole arbiter of deciding constitutionality, and Congress would just have to draft a new law that is constitutional if it wants a change. So these are some examples of the fact that people don't always have super coherent and informed opinions. At the end of today's class, we'll return to the question of how worried we should be — can democracy still function even if folks don't know that much? Political scientists tend to be pretty optimistic and think this is actually rational and fine: people know about things when those things actually affect them, and for the most part it doesn't affect your day-to-day life to know this stuff. But be thinking about it, and we'll come back to it at the end of class.
So, on these first two points of random fluctuation and an uninformed public: let's put them aside and assume that people have at least some knowledge and some opinions that are valid, stable over time, and worth measuring. The question now is, how do we elicit that? How do we construct good samples and good surveys to actually learn about public opinion, assuming there's some signal in the noise — and we do know there is some noise?
This is going to be the more interactive part of class: I'll define some key terms, and then we'll do some surveys together and fix some survey questions. But let's start with some definitions to make sure we're all on the same page — these are all in the textbook as well. When you're talking about public opinion, the first thing you want to determine is your population of interest. A population is just a group that you want to learn about, and there are different populations you might be interested in for different purposes. For a presidential election, you might be interested in the opinions of all voters living in the US. For a policy UCLA is trying to pass, maybe the population of interest is UCLA students, or UCLA students and faculty. Maybe we care about citizens, maybe non-citizens, maybe children. You have to get clear about whose preferences you are actually trying to measure, because if you don't have a well-defined population, you won't be able to construct a good sample to measure that group. So your population is just the group you care about, and it should be theoretically relevant.
For the most part, you're not going to be able to measure the preferences of the entire population — usually that's a pretty big group. So in general you will be drawing a sample, which is just a subset of the population that you survey to provide information about the broader group. When you go in for a blood test to see if you're healthy, the doctor doesn't need to take all of your blood to test for different diseases; they take a small sample, and that sample represents the various viruses, antibodies, platelets, and so on in your broader body. Similarly, if we're trying to learn about UCLA students or American voters or whoever it might be, we're usually not going to contact all of them. We will contact a sample.
The gold standard is a random sample: a particular type of sample where each person in the population has an equal chance of being part of the sample. We'll show some examples of what happens when a sample isn't random. If a sample isn't random, you almost certainly have a biased sample, which is one that doesn't represent the population accurately — I have some examples on the next slide. If the population I was trying to measure were all the students in this room, and my sample were only the people sitting in the first two rows, that might not be a particularly good random sample of the entire room, because there might be a particular type of person who tends to sit at the front, and I might not capture an accurate distribution of everyone who's actually here.
There are a few main ways you can end up with biased samples that don't represent your population of interest. The first is using non-random sampling. Grabbing the folks in the first few rows would be one example. Or, if you were trying to measure the public opinion of American voters, suppose you went through the phone book and just called everyone whose last name started with A.
This would be another example of a non-random approach: if a particular type of group is more likely to have those last names, they might appear disproportionately in your sample, and you might not actually represent the population of interest. Another problem — in addition to not contacting people randomly, in a way that gives everyone an equal chance of appearing — is that not everyone wants to take your survey or participate. With voluntary responses, you can end up with a weird subset of people who are willing to talk to you. Does anyone have any guesses — I'm not someone who actually conducts surveys in my own research — but when pollsters call up a landline and say, hey, are you willing to take my political survey, what sorts of people do you think are more likely to voluntarily participate? Yes — and then here. Right: first, elderly people tend to be home. If you're retired, you're more likely to be at home rather than at work, so we often get an overrepresentation of senior citizens in these at-home landline calls. Is that what you were going to say?

People who have, like...

People with strong opinions — yes. If you're someone who doesn't care about politics or doesn't have strong views, you're going to say, sorry, I just don't have time, I'm not interested. So we tend to get people with strong opinions participating. Any others? Did yours all get taken by now? Yeah, I think those are both right. And this is silly, but for a phone sample you're only going to reach people who have landlines — anyone who only has a cell phone and no landline has no possibility of appearing in your sample. And yes, when it comes to voluntary responses, often the people who care the most are the ones who participate.
Then, even if you have a random sample, and even if you try to rule out the bias that creeps in from the voluntary nature of participating, a super low response rate can also bias your sample: if you end up with just a small number of participants, it becomes increasingly likely that they don't represent the views of the entire group. We have a class of about 300 folks. In the survey where I was asking for feedback, if only 10 people participated — first of all, it's voluntary, so they're probably people with really strong views — but with that low a response rate we just don't have any sense of whether those people accurately reflect the views of the broader class.
So let's go through an example of what can happen when we have a biased sample, and then we'll doctor some questions together. I was trying to find an example of a UCLA climate survey, where they surveyed students, to look at what the response rate was like. It turns out they did do a campus climate study, but they did it back in 2014 and haven't done a systematic, good random-sample poll since — which was shocking to me. So I got the data from the 2014 campus climate study, and we'll see issues of both bias in the sample and low response rates. Here the population of interest was the entire UCLA community, and the population broken down by gender was actually slightly more women than men: 54% of the UCLA population is women and 45% is men — this is both students and faculty, and the gender gap might have even grown over time. So that's the population, the group we want to study. But in the sample — the group that actually responded to the survey — only 40% were men and 58% were women. More women responded to the climate survey than men: the response rate for men was 18.9%, and for women it was 23%. Any theories for why women might be more likely to participate in a climate study relative to men? There are lots, but let's put on our social science hats: what are some hypotheses for why women are overrepresented in this voluntary sample? Yes.

They tend to be more liberal, want to share their views in a study like that?

Or conservatives don't care as much about it — yeah, so this is actually accurate, and I want to clarify. The hypothesis was: if women are more liberal, and liberals tend to care more about these issues, we might expect an overrepresentation of women. And this survey was actually not about climate change; it was a survey about the climate at UCLA, in terms of being a welcoming, inclusive place — the environment among the students. But it's the same exact logic: if women somehow care more about these issues than men, they might be disproportionately likely to want to speak. In particular, if women are more likely to have experienced gender bias or sexual assault or any issue they want to report in a climate survey, that could lead to their overrepresentation — they might have stronger views and want to participate. Undergrads were 36% of the university's population but only 33% of the sample, whereas grad students were 15% of the population but 18% of the sample. So grad students were more likely to respond to this campus climate study than undergrads. Any theories here? Grad students, why do you participate at such high rates? Yeah, I think this is probably right: UCLA grad students are very underpaid, so they are aggrieved — and we should all support higher pay for our TAs and RAs — and I suspect this imbalance might come from grad students caring a lot about this issue. So anyway, this is just one example: you have some population, but once you have voluntary response to a survey and low response rates, you can end up drawing inferences that may or may not accurately reflect the population as a whole.
OK, but let's say we get a really good, random, representative sample. Are we finally in the clear? Can we measure public opinion? No. Even if you have the perfect sample — perfectly representative of your population, random, with a high response rate — you can get really big variations in answers based on question format, question wording, et cetera. So I have something I want to try; I hope this works. I'm going to divide the class in half here, so don't click on the QR code until you know which side to click on. From Raymond over, this is the left side of the room; over here, with Clayton and the TAs, is the right side. Make sure you get the correct QR code and take only one of these surveys — if you're in the middle, just flip a coin. Go to the link and fill out the survey. I really hope we get some variation; I've gotten it in the past. Don't share answers with each other — you don't want to bias the results, it has to be independent. OK, we've gotten enough responses that I think we should be able to draw some comparisons; feel free to keep responding. I know the last question is outdated at this point — this exercise comes from the Gallup poll, and I wanted to use the exact question wording to be scientifically valid. OK: there were some subtle differences in the question wording that I was hoping would lead to some differences in answers, and I can already see that that is the case, which I'm happy about.
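(What we just set up is what survey researchers call a split-ballot experiment: respondents are randomly assigned to one of two question wordings, so any gap in the answers can be attributed to the wording rather than to who happened to answer. A minimal sketch of that assignment step — my own illustration using one of the question pairs, not the actual class setup:)

```python
# Minimal sketch (illustration): random assignment in a split-ballot
# survey experiment -- each respondent gets one of two question wordings.
import random

random.seed(2024)  # arbitrary seed, just for reproducibility

WORDING_A = "Do you think the US government should ALLOW public speeches denying the Holocaust?"
WORDING_B = "Do you think the US government should FORBID public speeches denying the Holocaust?"

# Because assignment is random, the two groups are comparable in
# expectation, so any difference in answers reflects the wording itself.
for respondent in range(1, 7):
    wording = random.choice((WORDING_A, WORDING_B))
    label = "A" if wording == WORDING_A else "B"
    print(f"respondent {respondent}: version {label}")
```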
So on the left-hand side it says: imagine the US is preparing for an outbreak of an unusual new disease which is expected to kill 600 people, and two alternative programs have been proposed — which do you prefer? A lot of people say they want the program where 200 people will be saved for sure; only 30% like the program where there's a 1/3 probability that 600 people will be saved and a 2/3 probability that no one will be saved. However, over here the language is inverted, with the exact same set of circumstances. Program A on the left: 200 people will be saved. Program A over here: 400 people will die. If you emphasize being saved versus dying — even though it's literally the exact same numbers, you're just flipping how you present them — people don't like choosing a program where 400 people will die, even though they're happy with a program where 200 people will be saved. So we get completely different answers based on which side you emphasize, even though the numbers are identical.
The next question is about free speech and the Holocaust. The version on the left asks: do you think the US government should allow public speeches denying the Holocaust? 46% say yes. If you instead ask whether the US government should forbid public speeches denying the Holocaust, only 34% say yes. So "allow" versus "forbid" gets us roughly a 12-percentage-point difference, even though these describe the same type of policy.
Next question: do you get headaches frequently (versus occasionally), and if so, how many a month? Let's see if this actually worked. So this one says "frequently" and this one says "occasionally," and normally when you say "frequently" you get many more people reporting a lot of headaches than when you say "occasionally." Well — I actually truly cannot interpret this one.

Fewer zeros? Yeah...

No, I feel like there are more zeros on this one; normally the "frequently" version should have... yeah, I don't know what I was trying there — I think there were too many options. Ignore that one; the idea is that the distributions are different, I just don't know if they're different in the way they should be.
How tall was Napoleon in inches, versus how short was he in inches? When you say "how short was he," more people say less than five feet: 20% of people said less than five feet when you prime the word "short," whereas only 14% said less than five feet when you ask "how tall." So even just swapping in "tall" versus "short" triggers people to guess that he's short.
And then finally, the Afghanistan question, which is already completely outdated — what is the difference here? Oh: here it's emphasizing that civil order will deteriorate, and here it's emphasizing staying many years regardless of what happens; one version says civil order will deteriorate and the other doesn't. So again, you're priming one thing. I will send all of these results to you so you can take a look, but the whole idea is that we're seeing different responses to questions that should be substantively the same — they just have different emphases.
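(Are gaps like the allow/forbid one bigger than the random day-to-day fluctuation we talked about earlier? With real polling data you'd check with a two-proportion test. A minimal sketch of my own — the 500-per-condition sample sizes are made up for illustration:)

```python
# Minimal sketch (hypothetical group sizes): is a 46% "allow" vs 34%
# "forbid" gap bigger than sampling noise? Two-proportion z-test,
# standard library only.
import math

def two_prop_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """z statistic for the difference between two sample proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 46% "yes" to ALLOW vs 34% "yes" to FORBID; 500 per condition is made up.
z = two_prop_z(0.46, 500, 0.34, 500)
print(f"z = {z:.2f}")  # z = 3.87, well past 1.96, so a gap this size is
                       # very unlikely to be ordinary sampling fluctuation
```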


OK. So let's talk about some of the differences survey researchers have discovered can lead to big changes in how people respond. One common issue is that open-ended versus closed-ended questions often yield very different results. If you ask people what the most important issue facing this country is in a closed-ended format — that's the right-hand column here — most people say the economy: 58%. If you give them a list of possible issues and ask which is most important, people gravitate toward the economy. But if you ask in an open-ended way, where they name the most important issue themselves and you don't constrain them, only about 35% say the economy. All of these examples are historical examples from the Pew Center, from the last ten to thirty years; in discussion section this week you'll be doing some of your own public opinion research, and the TAs will introduce you to a resource where you can look up public opinion polls. But open- versus closed-ended questions can make a big difference.
Another issue is something researchers call acquiescence bias: if you only give people an agree/disagree choice, there is a tendency to just want to agree — people like to be agreeable. Here the question says: "The best way to ensure peace is through military strength." When you ask just that, 55% of people agree. But if you give people options — option A, the best way to ensure peace is through military strength, or option B, diplomacy is the best way to ensure peace — suddenly a majority says they prefer diplomacy. You just didn't offer it to them before; you gave them agree/disagree, and people have a bias toward agreeing. This is called acquiescence bias, so it's better to give more choices, or better yet, ask in an open-ended way.
Another issue: depending on the order in which you ask questions, you can get very different responses. This is back in 2003 — obviously nowadays no one cares about gay marriage, but back in the day it was gay marriage versus civil unions. If you first ask about civil unions, you get about 37% of Americans saying they favor civil unions. However, if you first ask about gay marriage, solicit responses on that, and then ask about civil unions, suddenly 45% say they support civil unions. Usually, if you ask about some strong policy and then about a weaker version of it, you get more people agreeing with the weaker version: prime them to think of an extreme policy and you can often elicit more agreement with a moderate one. So question ordering matters.
And finally, question wording matters a lot. Here are two sample questions. One says: as you may know, this week (in January 2016) Barack Obama announced several executive orders that changed the nation's gun laws so that background checks are required for more gun purchases online and at gun shows, and which make it easier for the FBI to complete background checks efficiently — overall, do you favor or oppose these changes? CNN asked this question, and 67% of poll respondents said they favored these executive orders.
However, that very same week, a different polling company, Rasmussen Reports, ran a survey that asked: when it comes to gun control, should President Obama take action alone if Congress does not approve the initiatives he has proposed, or should the government do only what the President and Congress agree on? When you emphasize this idea of unilateral action versus working with Congress, only 34% said Obama should act alone. In the first version of the question, you're emphasizing the policies — should Obama act to address these problems? — and a big majority says yes. When you remind people, oh, but should he act alone, or should he wait for Congress? — suddenly people say no, he should not act alone; he should wait for Congress. So depending on what you emphasize and how you word the question, you can get very different results.
So I have some examples of bad questions, and as a group we will fix them and come up with better versions to try to elicit better answers. A bad question would be: "How short was Napoleon?" By using the word "short," we're priming people to think in a particular way. What would be a different way to ask this? This is obviously a silly example — we don't really care what people think about this — but for the purposes of learning to write good questions, how would you rephrase it in a more scientifically sound way? Yes. Perfect: "What do you think Napoleon's height was?" — so you aren't triggering anything. In fact, that's almost exactly what I had as an example of a good question: just "How would you describe Napoleon's height?" or "What do you think Napoleon's height is?" It needs to be neutral — you don't want to trigger people into thinking of him as particularly tall or particularly short. Ask it in a way that eliminates any source of bias and is as neutral as you can make it.
OK, another example: "Should concerned parents use infant car seats?" First of all, what would be the problem with this question, and what would be a better way to ask it to get more accurate answers? Yes.

"Concerned" would be the problem, and I think a better question just takes out that word.

Right. When you say "should concerned parents use car seats," you're implying that if you don't think they should use car seats, you are not a concerned parent. Everyone wants to think of themselves as a concerned parent, and by inserting that word you're priming folks in a way that could sway the answer. So just say: "Do you think infant car seats should be required?" Don't make any value judgment about the types of parents who use them or not — focus on only what you need in the question.
Next: "Where do you enjoy drinking beer?" What is the issue with this question, and what would be a better way to phrase it? Yes.

Because, like, it's assuming you enjoy drinking beer?

So, a couple of things — you've already picked up on one. By saying "do you enjoy drinking beer," you're already assuming that the respondent enjoys drinking beer. What about people who don't? They wouldn't know how to answer. Could we go even further to address this? Beautiful, yes. Here we're doing a couple of things: making assumptions about the respondent, and then forcing them to answer even when the question might not apply to them. You could start by asking, "Do you drink beer?" If the answer is no, the rest isn't relevant — you don't need any follow-up questions; the person doesn't drink beer. If yes, you could then ask something like, "Where do you usually drink beer?" Say you're a company trying to figure out where to market your Bud Light advertisements — maybe you need polling to figure this out. But you should deconstruct the question and not force people to respond to questions that aren't relevant to them, because they have no accurate way to respond. Excellent.
OK, here's another question: "How satisfied or dissatisfied are you with the pay and work benefits of your current job?" What is the issue, and how could we fix the question? Yes, please.

Like, you can either be satisfied with your pay...

...and not satisfied with your benefits, or? Great — you're bundling together pay and work benefits here, and not allowing for cases where people are satisfied with one of those things but not the other. This is called a bundled question, and the fix is very simple: just ask "How satisfied are you with your pay?" and then "How satisfied are you with your work benefits?" Break it out so people can give different answers to different parts of the question; don't force them to respond in a bundled way.
OK, and the very last example: "Do you always eat breakfast — yes or no?" Yes?

"Always" — like, there should be more options.

Perfect. So how would you ask this question? ... Perfect, yeah. If you have a super inflexible category like "always," it's very difficult to answer accurately, because even someone who eats breakfast 99% of the time, and prefers to eat breakfast, probably hasn't literally eaten breakfast every single day of their life. Some days you sleep in, you're sick, you have an early airplane flight. So you don't want really strict, all-encompassing categories that make it difficult to be 100% accurate. Ask in a more flexible way that gives people a wider range of choices. You could say: how many days a week do you usually eat breakfast — every day, five to six days, three to four days, or I usually don't eat breakfast? Exactly the solution you proposed: a broader range of options instead of rigid, inflexible categories that are almost certainly never literally "always" or "never" true. Really good job with all of these.
The last issue I want to talk about goes back to this question of wording, and I want to introduce the idea of priming versus framing. Political psychologists are really into this topic — if you take Professor Perez's political psychology class, you'll talk a lot more about it — so let's define these two terms and provide some examples. Priming and framing are important because this is often how the media and political elites shape public opinion questions in ways that influence how people think about different issues.
Priming is the psychological process of shaping people's initial perceptions or gut reactions. Priming is all about your perceptions, your gut reactions, the way you feel. You can think of priming as anything that addresses your first-level, associational responses — the way you naturally feel that you like or dislike something, are drawn toward it or away from it. Priming is anything triggering happy feelings versus bad feelings; it's gut-level, without a lot of cognitive processing, more about emotion. Framing, on the other hand, is about establishing context to emphasize certain features over others. Framing is more cognitive, more overt; it's often based on details and argumentation — it's thought-based, about providing information that leads people to adjust the way they think. Priming is subtle: pictures, body language, feelings, emotions. Framing is facts, argument, logic — more cognitive, often involving a particular narrative, emphasizing a particular perspective, or offering particular types of argumentation. Politicians use both to try to shape public opinion. When you see an advertisement for a candidate, they might be explaining why they support the policies they do, presenting facts and figures — but there might also be a particular type of music in the background that invokes nice thoughts and gives you positive associations with that candidate. We'll watch some examples next week in our campaigns and elections unit.
To provide an example of priming versus framing: imagine a community is trying to decide whether to approve local funding to build a new jail, and imagine you're asking people in a focus group whether they support or oppose building it.
And imagine that when you ask the question, you show one of two pictures. In the first case, you show aggressive-looking incarcerated people — a little macho, a little intense. In the other picture, you show incarcerated folks hard at work, contributing to society. You've asked the same question, and you're not presenting any explicit argumentation or logic; you're just altering a photo that will prime people to have either positive or negative associations with the idea of the new jail. That's pure priming: it's just about feelings, just a picture, no text, not a lot of thought. Framing, by contrast, would be more along the lines of getting into argumentation that leads people to think one way or another.


For example, you could imagine two sides to the argument over building a new jail. One person says: building the new jail will result in increased taxes for local residents — an argument against the jail, if you don't want to pay more taxes. Another person says: building the new jail will spur job growth in the area, for guards, cooks, and social workers. Here you're emphasizing different aspects of the consequences of the policy; you're engaged in cognitive debate, and by framing the issue in these two different ways, you can elicit different responses.
Now, the line between priming and framing can be a little blurry, because when you frame issues in a particular way, you might also prime emotions or associations. It is completely possible for priming and framing to interact. For example, say the question is: "To alleviate the country's unemployment problems, some have suggested trying to convince the state to build the new jail here. Do you support or oppose building a new jail in our county?" This is framing — you're framing the jail in terms of unemployment — but by framing it that way, you might also be priming people to make a positive association. Honestly, go take the political psychology class if you want to figure all this out; I truly don't fully understand the difference myself, and you will not be asked to draw it on the quiz. You should know the definition of each, but I'm not going to give you an example and ask "is this priming or framing" unless it's truly obvious — like nice music: that is priming, because there's no argumentation in playing nice music. The way I think about it: framing is context, narrative, cognitive argument; priming is gut reaction, more associational. Framing can certainly influence priming, and both can influence your opinions and beliefs.
All right, we have three minutes left, so I'll give you some practice quiz questions with iClicker. Go to the QR code, enter the code, and get set up. The QR code will be on each slide, down in the corner, so if you haven't gotten it yet you still can. Let me start this poll. All right: is this statement an opinion, a predisposition, or a preference? Let's take a look. Predisposition — yes, because predispositions include deep-seated, rooted beliefs such as party ID and ideology, and identifying as a Republican is a good example of a predisposition. It's not a particular attitude that vacillates over time; it's your loyalty to the Republican Party. Good.


All right, next one: is this statement an opinion, a predisposition, or a preference? "I have a favorable view of the Supreme Court decision to overturn Roe versus Wade." Opinion — yeah, good, I agree. Remember, preferences have to involve comparisons, and there's no comparison here. And this was a recent thing; you haven't held a view about it all your life, so it's not a predisposition. It's possible this is the only part of your political identity that matters to you, but it's much more likely that this is just your subjective evaluation of a policy, and it's possible it might change. It's an opinion. Excellent.
Next: "Random sampling is sufficient to ensure that responses to a survey will be an unbiased representation of the population." This is false, and it is false because even with a random sample you can still have response bias, from voluntary response or a low response rate. Even with a perfect random sample, if people refuse to talk to you, or if you have bad questions, you can still end up with problems of bias.
And then the last one, and then I'll let you all out of here: which survey question is written according to best practices? I will reveal the results at our next class, because I have to get you out of here on time — I promise I'll show them. In discussion section, bring your laptops, because you're doing an interactive public opinion exploration. Hey, I have just three seconds: check out the extra credit, and check out the final paper — this is very important, the final paper is posted. And there's no lecture next Tuesday; we have a documentary. I will post an announcement about this.
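(To make that quiz answer concrete — a minimal sketch of my own, with made-up numbers, not from the lecture: even a perfectly random contact list produces a biased estimate when willingness to respond correlates with opinion.)

```python
# Minimal sketch (hypothetical numbers): random sampling alone doesn't
# guarantee an unbiased survey when response is voluntary and correlated
# with opinion (the "response bias / low response rate" problem).
import random

random.seed(42)

# Hypothetical population of 100,000 where exactly 50% support a policy.
population = [1] * 50_000 + [0] * 50_000

contacted = random.sample(population, 1000)  # perfectly random contact list

# But supporters pick up 60% of the time, opponents only 20%.
responded = [v for v in contacted if random.random() < (0.6 if v else 0.2)]

print(f"response rate:     {len(responded) / len(contacted):.0%}")  # around 40%
print(f"estimated support: {sum(responded) / len(responded):.0%}")  # around 75%
# True support is 50%, but the estimate lands near 75%, because supporters
# are three times as likely to respond -- the contact list being random
# did nothing to prevent this.
```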