Procedure

Elizabeth A. Necka, Stephanie Cacioppo, Greg J. Norman, John T. Cacioppo

All procedures were approved by the University of Chicago IRB. Participants read and signed an informed consent document that specified they would be compensated for their participation as long as they completed the study.

Participants then saw a list of problematic responding behaviors (see Table 1) and were randomly assigned either to report how frequently they themselves engaged in each behavior (frequency estimate for self condition) or to report how frequently other participants engaged in each behavior (frequency estimate for others condition, similar to the manipulation used by [22]). We included a condition asking participants to report on the behavior of other participants rather than on their own because we reasoned that participants might be motivated to misreport their behavior (under-reporting engagement in socially undesirable respondent behaviors and over-reporting engagement in socially desirable ones) if they inferred that their responses could influence future opportunities for paid participation in research (cf. [31,32]). We expected that participants’ inferences about others’ behavior would be egocentrically anchored on their own behavior [33] but less influenced by self-serving reporting biases [34,35], and so could serve as more precise estimates of their own behavior.
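As a rough sketch, the two-condition random assignment described above could be implemented as follows. The condition labels and the use of simple (unblocked) per-participant randomization are our assumptions for illustration; the paper does not specify the randomization mechanism.

```python
import random

# Hypothetical condition labels for the two reporting conditions
# described in the text (frequency estimate for self vs. for others).
CONDITIONS = ("FS", "FO")

def assign_condition(rng: random.Random) -> str:
    # Simple randomization: each participant is independently and
    # uniformly assigned to one of the two conditions, which yields
    # approximately (not exactly) equal cell sizes.
    return rng.choice(CONDITIONS)

rng = random.Random(0)  # fixed seed for reproducibility of the sketch
assignments = [assign_condition(rng) for _ in range(1000)]
```

The slightly unequal observed cell sizes (e.g., 425 vs. 423 in the MTurk sample) are consistent with this kind of simple rather than blocked randomization.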

In the frequency estimate for self (FS) condition (MTurk N = 425, campus N = 42, community N = 49), participants reported how frequently they engaged in each problematic responding behavior. Specifically, participants were asked, “When completing behavioral sciences studies [on MTurk / at the Psychology Department of the University of Chicago / at the Booth Chicago Research Lab], what percentage of the time that you have spent [on MTurk / completing studies] have you engaged in each of the following practices?”

In the frequency estimate for others (FO) condition (MTurk N = 423, campus N = 42, community N = 49), participants rated how frequently the average participant engaged in each problematic responding behavior. Specifically, participants were asked, “When completing behavioral sciences studies [on MTurk / at the Psychology Department of the University of Chicago / at the Booth Chicago Research Lab], what percentage of time spent [on MTurk / completing studies] does the average [MTurk / research / Booth research] participant spend engaging in each of the following practices?”

In the MTurk sample, which was collected before data collection from the campus and community samples began, we collected data from an additional 432 participants in a third condition, in which participants rated how prevalent each problematic responding behavior was among other participants. We did not include this condition in the campus or community samples because it neither directly assessed participants’ own behavior nor could be used to test the auxiliary hypothesis, which is not presented in the current manuscript. In the campus and community samples, we also collected information about the frequency with which participants engaged in six additional behaviors, unrelated to completing psychology studies, to test the auxiliary hypothesis. Neither these questions nor the third MTurk condition are analyzed further in the present manuscript.

Because we were interested in which factors might moderate participants’ engagement in each of the problematic responding behaviors, we also asked participants a number of questions designed to assess their perceptions of psychological studies, their frequency of completing studies, and their financial incentives for completing studies. First, participants reported the extent to which survey measures represent a legitimate investigation of meaningful psychological phenomena. In the FS condition, participants reported what percent of the time they believed that survey measures [on MTurk / in psychology studies / in Booth research studies] represented meaningful psychological phenomena. In the FO condition, participants reported what percent of the time the average [MTurk / Psychology Department / Booth research] participant believed that survey measures [on MTurk / in psychology studies / in Booth research studies] represented meaningful psychological phenomena.

Next, participants in the FS condition reported whether or not they relied on [MTurk / Psychology Department studies / Booth research studies] as their primary form of income (yes or no) and how many hours a week they spent [completing HITs on MTurk / completing studies in the Psychology Department / completing studies at the Booth Chicago Research Lab]. Participants in the FO condition instead reported what percentage of [MTurk / Psychology Department research / Booth research] participants relied on [MTurk / compensation from Psychology Department studies / compensation from Booth research studies] as their primary form of income, and reported how many hours a week the average [MTurk / Psychology Department research / Booth research] participant spent [completing HITs on MTurk / completing studies in the Psychology Department / completing studies at the Booth Chicago Research Lab].

All participants also reported whether or not each of the behaviors listed in Table 1 was defensible among MTurk, Psychology Department research, or Booth research participants (on a scale of No = 1, Possibly = 2, or Yes = 3), with the opportunity to explain their response in a free-response box. Because these data were intended to help test the auxiliary hypothesis, which is not the focus of the present manuscript, they are not analyzed further here. Summaries of the qualitative data are available in the S1 File.

Finally, participants answered two items to assess their numeracy with percentages, as people with higher numeracy tend to be more accurate in their frequency-based estimates [36]. Participants reported what percent 32 is of 100 and what percentage of the time a standard American quarter would come up heads, using the same scale they used to report how frequently they engaged in potentially problematic respondent behaviors. We reasoned that if participants answered these problems correctly, there was a strong chance that they were also capable of responding accurately on our percentage response scale. Throughout the study, participants completed three instructional manipulation checks, one of which was disregarded because it was too ambiguous to assess participants’ attention.
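A minimal sketch of how the two numeracy items could be scored. The "correct bin" values are our assumption: the authors do not state a scoring rule, only that responses used the study's binned 10-point percentage scale.

```python
# Assumed scoring rule (not stated by the authors): a participant
# passes the numeracy check if both responses land in the bin that
# contains the correct answer, on the study's 10-point scale
# (bin 1 = 0-10%, bin 2 = 11-20%, ..., bin 10 = 91-100%).
CORRECT_BIN_32_OF_100 = 4  # 32% of 100 falls in bin 4 (31-40%)
CORRECT_BIN_COIN = 5       # a fair coin comes up heads 50% of the
                           # time, which falls in bin 5 (41-50%)

def passes_numeracy(bin_32: int, bin_coin: int) -> bool:
    """Return True if both numeracy responses are in the correct bin."""
    return (bin_32, bin_coin) == (CORRECT_BIN_32_OF_100, CORRECT_BIN_COIN)
```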

All percentage items were answered on a 10-point Likert scale (1 = 0–10% through 10 = 91–100%).
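For analysts working with these data, the mapping from a raw percentage to the 10-point response scale can be sketched as follows. The helper name is hypothetical; the paper only defines the scale labels.

```python
def percent_to_bin(p: float) -> int:
    """Map a percentage (0-100) onto the study's 10-point response
    scale: bin 1 = 0-10%, bin 2 = 11-20%, ..., bin 10 = 91-100%.

    Note that bin 1 covers 11 percentage points (0-10) while the
    remaining bins cover 10 each, so the scale is not quite uniform.
    """
    if not 0 <= p <= 100:
        raise ValueError("percentage must be between 0 and 100")
    if p <= 10:
        return 1
    return (int(p) - 1) // 10 + 1
```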
