The key dependent variable was the accuracy of participants’ beliefs related to the COVID-19 pandemic. This variable consisted of responses to a number of statements about the pandemic, which were sourced from preprints of early research on public perceptions of COVID-19 (eg, [26]), public health agencies and medical institutes (eg, the WHO), media tracking organizations (eg, NewsGuard), and expert reports in established media (eg, CNBC); a comprehensive list of these resources is available in Multimedia Appendix 2. Only statements based on scientific claims were included, to ensure that there was compelling evidence that each claim was either true or false.
At T0, participants were exposed to 10 statements, of which five were scientifically accurate (eg, “Fever is one of the symptoms of COVID-19”) and five were at odds with the best available evidence (eg, “Radiation from 5G cell towers is helping spread the coronavirus”). Participants responded by indicating the accuracy of a statement as follows: false, probably false, don’t know, probably true, or true. In each subsequent wave, four new statements were added to the list: two accurate ones and two inaccurate ones. This allowed us to keep the belief accuracy measure current, reflecting contemporary insights and discussion points. The order of the statements was randomized per participant and varied per wave.
A belief accuracy score was calculated by converting the response to each statement into a number reflecting how accurate the response was: a confident correct judgment (true or false) counted as 1 and a confident incorrect judgment as –1; a less certain but correct probably true or probably false counted as 0.5 and an incorrect one as –0.5; finally, a don’t know response counted as 0. Average scores were calculated per wave per participant, resulting in a repeated measure of belief accuracy. Internal consistency was acceptable to good; the McDonald ωt ranged from 0.75 to 0.87 across the four waves.
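The scoring rule above can be sketched as follows; the function names are illustrative, and only the response-to-score mapping is taken from the text. The sign of the weight is flipped for false statements, so that correctly rejecting an inaccurate statement earns a positive score.

```python
# Sketch of the belief accuracy scoring rule (illustrative names).
RESPONSE_WEIGHTS = {
    "true": 1.0,
    "probably true": 0.5,
    "don't know": 0.0,
    "probably false": -0.5,
    "false": -1.0,
}

def statement_score(response: str, statement_is_true: bool) -> float:
    """Score one response: 1/0.5 for (probably) correct judgments,
    -1/-0.5 for (probably) incorrect ones, 0 for don't know."""
    weight = RESPONSE_WEIGHTS[response]
    return weight if statement_is_true else -weight

def belief_accuracy(responses: list[tuple[str, bool]]) -> float:
    """Average score across all (response, statement_is_true) pairs in a wave."""
    return sum(statement_score(r, t) for r, t in responses) / len(responses)
```

For example, a participant who answers true to a true statement, probably false to a false statement, and don’t know to a true statement would score (1 + 0.5 + 0) / 3 = 0.5 for that wave.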
Behavior aimed at preventing the coronavirus from spreading was measured by asking participants to indicate their agreement with three statements. The statements were “To prevent the coronavirus from spreading...” (1) “I wash my hands frequently,” (2) “I try to stay at home/limit the times I go out,” and (3) “I practice social distancing (also referred to as ‘physical distancing’) in case I go out”; agreement was measured on a scale from 1 (strongly disagree) to 7 (strongly agree). Scores were averaged per wave per participant. Internal consistency was acceptable to good; the McDonald ωt ranged from 0.77 to 0.83 across the four waves.
Trust in scientists was measured in all four waves with responses to the statement “I trust scientists as a source of information about the coronavirus.” Participants responded on a 7-point scale ranging from 1 (strongly disagree) to 7 (strongly agree).
Participants’ primary news source for information about the COVID-19 pandemic was identified by asking them at T0 what their main source of news about the coronavirus was. Participants could choose one option from a list of 11 news sources, based on data from the Pew Research Center on Americans’ news habits [27].
Finally, we included a manipulation check at T3. This consisted of asking participants how they had evaluated the truthfulness of the statements about the coronavirus and coronavirus disease over the preceding weeks of the study. We asked them to name the steps they took to evaluate the claims in three open text boxes, at least one of which had to be used. These answers were coded by the first author to indicate whether or not they mentioned consensus or something similar. A second coder coded a random subset of 120 answers, with the Krippendorff α indicating good interrater reliability (α=.85). Therefore, the complete coding from the first author was used in the analyses.
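For reference, the reported reliability statistic can be computed from a coincidence matrix; the sketch below is a generic implementation of Krippendorff α for two coders with nominal codes (eg, mentions consensus vs does not) and no missing data, not the authors’ actual analysis script.

```python
from collections import Counter

def krippendorff_alpha_nominal(coder1, coder2):
    """Krippendorff's alpha for two coders, nominal codes, no missing data.
    Built from the coincidence matrix of ordered value pairs."""
    assert len(coder1) == len(coder2)
    o = Counter()                       # coincidence matrix o[(c, k)]
    for v1, v2 in zip(coder1, coder2):  # each unit contributes both ordered pairs
        o[(v1, v2)] += 1
        o[(v2, v1)] += 1
    n = 2 * len(coder1)                 # total number of pairable values
    n_c = Counter()                     # marginal totals per code value
    for (c, _k), count in o.items():
        n_c[c] += count
    # Observed and expected disagreement for the nominal difference function
    d_o = sum(count for (c, k), count in o.items() if c != k) / n
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n * (n - 1))
    return 1.0 if d_e == 0 else 1 - d_o / d_e
```

With perfect agreement the function returns 1.0; values above roughly .80 are conventionally read as good reliability.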
Not all measures included in the study are listed here; only those relevant to the current analyses are described. The remaining measures can be found in the materials on the project page on the OSF [23].
The boosting intervention that was included at the end of T0, T1, and T2 consisted of a short infographic aimed at empowering participants to use scientific consensus when evaluating claims related to the COVID-19 pandemic. The infographic set out three steps that can be used to evaluate a claim: (1) searching for a statement indicating consensus among scientists, (2) checking the source of this consensus statement, and (3) evaluating the expertise behind the consensus. The infographic can be found in Multimedia Appendix 3. Participants in the control condition were not exposed to the infographic.
Demographics including political orientation, age, gender, ethnicity, and education were asked about at T0. Political orientation was measured by combining political identity (ie, strong Democrat, Democrat, independent lean Democrat, independent, independent lean Republican, Republican, or strong Republican) and political ideology (ie, very liberal, liberal, moderate, conservative, or very conservative) into one numeric, standardized measure centered on 0 (ie, moderate, independent), based on Kahan [28].
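The exact combination rule for the two political items is not spelled out here; a minimal sketch, assuming the common implementation of Kahan’s approach (z-score each item, average the z-scores per participant, then re-standardize the composite so it is centered on 0), would be:

```python
from statistics import mean, pstdev

def zscore(values):
    """Standardize a list of numeric responses to mean 0, SD 1."""
    m, sd = mean(values), pstdev(values)
    return [(v - m) / sd for v in values]

def political_orientation(party_id, ideology):
    """Combine 7-point party identification (1 = strong Democrat ...
    7 = strong Republican) and 5-point ideology (1 = very liberal ...
    5 = very conservative) into one standardized composite.
    Assumption: average of z-scored items, re-standardized."""
    composite = [(p + i) / 2 for p, i in zip(zscore(party_id), zscore(ideology))]
    return zscore(composite)
```

On this coding, a moderate independent lands at 0 and positive values indicate a more conservative, Republican-leaning orientation.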