What is a summary of this paper?
Aim: This paper evaluates the impact of deactivating Facebook for the four weeks preceding a national election in the US on time use, news knowledge and political views, subjective well-being, and views and use of Facebook in subsequent weeks.
Empirical strategy: The authors run a randomised controlled trial (RCT) in which randomisation into treatment occurred at the individual level. They look at impacts on a wide range of outcomes based on surveys, text messages, observable online activity, and administrative voting records.
Findings: The authors’ findings suggest that deactivating Facebook reduces online activity more broadly, reduces knowledge about relevant news but also reduces political polarisation, increases subjective well-being, and reduces the use of Facebook even after the treatment ends. However, treated individuals still value Facebook a lot after the experiment.
In your own words, explain how the authors elicit the participants' willingness-to-accept to stay deactivated for four weeks rather than 24 hours, and why, with this approach, participants have an incentive to tell the truth (i.e., to reveal their true willingness-to-accept).
The approach that the authors use is a variation of what is called a Becker–DeGroot–Marschak (BDM) mechanism in economics. Their approach essentially worked as follows.
First, participants were told that a computer had randomly chosen a payment value. Second, participants were asked to state the minimum payment they would accept (their willingness-to-accept, WTA) to deactivate their Facebook account for 4 weeks rather than 24 hours, with the understanding that, if they stated an amount smaller than the computer-selected payment value, they might receive that payment in exchange for actually deactivating their account for 4 weeks. Third, participants would receive additional payments for completing all surveys and, if in the treatment group, for keeping their account deactivated for the full 4-week period.
With this approach, participants have an incentive to tell the truth, i.e., to reveal their true value for deactivating their Facebook account. The key element is that the computer-selected payment value is independent of both their true WTA and their stated WTA. To see this, let's consider the following cases.
• Suppose that someone reports WTA_stated < WTA_true. If WTA_stated < WTA_true < payment, nothing changes. If payment < WTA_stated < WTA_true, nothing changes. If WTA_stated < payment < WTA_true, the individual will be asked to deactivate their account but will receive in exchange a value that is below their true WTA. So, if they comply, they are not compensated enough. If they don't comply, they lose the additional payments. The individual would have been better off reporting their true WTA, in which case they would not have been asked to deactivate their Facebook account.
• Suppose that someone reports WTA_stated > WTA_true. If WTA_stated > WTA_true > payment, nothing changes. If payment > WTA_stated > WTA_true, nothing changes. If WTA_stated > payment > WTA_true, the individual will not be asked to deactivate their account and will not receive any deactivation payment, even though that payment value was above their true WTA. They would thus have been better off telling the truth to be asked to deactivate their account in exchange for a payment value above their true WTA.
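The case analysis above can be illustrated with a small simulation of the BDM payoff rule. This is a minimal sketch, not the authors' implementation: the offer range ($0–$200), the example WTA values, and the assumption that a participant who deactivates values the outcome at (payment minus true WTA) are all illustrative choices made here.

```python
import random

def bdm_payoff(true_wta, stated_wta, offer):
    """Payoff under a simple BDM rule (illustrative, not the paper's exact code).

    If the randomly drawn offer is at least the stated WTA, the participant
    is paid the offer and deactivates; we value that outcome at
    offer - true_wta (payment minus the true cost of deactivating).
    Otherwise nothing happens and the extra payoff is 0.
    """
    if offer >= stated_wta:
        return offer - true_wta
    return 0.0

def expected_payoff(true_wta, stated_wta, offers):
    return sum(bdm_payoff(true_wta, stated_wta, o) for o in offers) / len(offers)

random.seed(0)
# Hypothetical offer distribution: uniform on $0-$200.
offers = [random.uniform(0, 200) for _ in range(100_000)]

true_wta = 80.0
truthful = expected_payoff(true_wta, true_wta, offers)
under = expected_payoff(true_wta, 50.0, offers)   # understating: deactivates for offers below $80
over = expected_payoff(true_wta, 120.0, offers)   # overstating: misses good offers in ($80, $120)

# Truthful reporting dominates both deviations, matching the case analysis.
assert truthful > under and truthful > over
```

Understating loses money on offers between the stated and true WTA (deactivating for too little); overstating forgoes offers above the true WTA that would have been worth taking. Truth-telling avoids both losses, whatever the offer distribution.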
When studying the treatment effect of Facebook deactivation, why aren’t the authors using the data from participants who had a willingness-to-accept above $102 in the midline survey?
This is because the randomisation was only done among the participants with a WTA below $102. In particular, comparing treated individuals to individuals with a WTA above $102 would mean comparing people who might be systematically different. WTA might be, for instance, correlated with expected use of Facebook over the next 4 weeks, such that treated individuals and individuals with a WTA above $102 might have had different outcomes after the midline survey even in the absence of the treatment.
What is the purpose of using a “pre-analysis plan” for a randomised control trial such as the one in this paper?
The use of pre-analysis plans is now common practice in RCTs, not only in medical fields but also in the social sciences. The aim is to prevent what is sometimes called "data mining" or "specification search" in economics. We typically treat a single result as statistically significant if 0 does not belong to the 95% confidence interval. What that means is that, if the true effect were zero, there would be only a 5% chance of obtaining a result that looks significant. But what that also means is that if you look at many different (independent) results, all of which would be exactly zero with an infinite sample, you will still conclude that a result is statistically significant about 5% of the time (even though it is not). The risk with empirical analyses is that researchers might try thousands of different regressions, with many different outcomes and many different ways of looking at each outcome, and then report only the results that come out statistically significant. If they do that, it is hard to tell whether those results reflect real effects or pure chance. A pre-analysis plan limits this problem by asking researchers to state ex ante (before touching the data) which specific outcomes they will look at and how they will analyse each one.
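The "5% of null results look significant" point can be checked with a short simulation. This is a sketch under stated assumptions: 1,000 hypothetical outcomes whose true effect is exactly zero, each tested with a simple two-sided test at the 1.96 critical value (an approximation to the 5% level for moderate sample sizes).

```python
import math
import random
import statistics

random.seed(1)

def significant(sample, mu0=0.0, z_crit=1.96):
    """Two-sided test: is the sample mean 'significantly' different from mu0
    at roughly the 5% level? (z-style test using the sample standard error)"""
    n = len(sample)
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    return abs(mean - mu0) / se > z_crit

# 1,000 independent "outcomes", each drawn from a distribution whose
# true mean is exactly zero, i.e. there is genuinely no effect.
n_tests = 1000
false_positives = sum(
    significant([random.gauss(0, 1) for _ in range(100)])
    for _ in range(n_tests)
)
rate = false_positives / n_tests
print(f"Share of null outcomes flagged significant: {rate:.1%}")  # around 5%
```

Even with no real effects anywhere, roughly 5% of outcomes come out "significant", which is exactly why reporting only the significant regressions out of thousands tried is misleading, and why a pre-analysis plan fixes the set of tests in advance.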
In your own words, explain the main takeaways from Figure 2 regarding what types of activities are crowded out by time spent on Facebook.
Figure 2 suggests that Facebook crowds out offline activities such as watching TV alone, spending time alone not watching TV, and spending time with family and friends, although it does not seem to affect the number of friends one interacts with or the "diversity" of those interactions. Interestingly, Facebook does not seem to crowd out the use of other sources of news. Facebook also seems to crowd in other online activities, such as non-Facebook social-media time and non-social online time. This result is a bit more difficult to interpret, however. Indeed, individuals often use their Facebook account to log into other online applications, so Facebook deactivation might "mechanically" reduce participants' time spent on these applications (the result might have been different if individuals were not using their Facebook account to log into them).
In your own words, explain the main takeaways from Figure 6 regarding whether, as a society, we should get people to reduce their use of Facebook.
Figure 6 first shows that treated individuals seem to have learned about the value of using Facebook less through the experiment: they plan to use Facebook less at the end of the experiment, they actually use Facebook less in the subsequent weeks, and they believe that deactivation was good for them. So there might be an “internality” problem in how much people use Facebook. At the same time, the treatment does not seem to have changed participants’ views about Facebook. In fact, participants seem to have also learned that Facebook is useful to follow the news and that people would miss Facebook if they did not have access to it. So these results suggest that helping individuals reduce their use of Facebook might be good for individuals themselves, but these results do not support the view that individuals would be better off without Facebook at all. In fact, their stated WTA for deactivation is still very high after the experiment.
What is the main takeaway/surprise from this paper?
The most interesting results are those in Figure 6. People realise that using Facebook less is good for them, but it is also clear that they see a lot of value in using Facebook even after having to learn to live without Facebook for 4 weeks.
What is the main weakness of this paper?
In terms of internal consistency (how to interpret the results in the context of the experiment), it is hard to know how much of the results come from Facebook itself vs. from other applications requiring a Facebook account to log in. In terms of external validity, it remains a relatively short intervention: 4 weeks may not be enough to re-adjust one's social life without Facebook, find good alternatives, etc. So some of these effects could be quite different for longer deactivations. Moreover, the results could be different if, as in this experiment, an individual is deactivated while their contacts are still using Facebook, compared with a setting where both the individual and their contacts reduce Facebook use at the same time.