# Numerical simulations

This protocol is extracted from the research article:
How to detect high-performing individuals and groups: Decision similarity predicts accuracy

## Procedure

From each of the different populations of decision makers (Fig. 2A), we repeatedly sampled sets of 10 decision makers. Each decision maker was characterized by a single value pi indicating his or her average individual accuracy. Given this average accuracy, each decision maker then evaluated M cases (10, 25, or 100). To illustrate, a decision maker with pi = 0.7 evaluating 100 cases would be characterized by a vector of 100 values, where each value is either 0 (incorrect decision) or 1 (correct decision), drawn from a Bernoulli distribution with a probability of 0.7 of a correct decision.
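The sampling step above can be sketched as follows (assuming Python with NumPy; the function name is ours, not the article's):

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # seeded for reproducibility

def simulate_decisions(p_i, n_cases):
    """Decision vector for one decision maker: each of the n_cases
    decisions is correct (1) with probability p_i, incorrect (0) otherwise."""
    return rng.binomial(n=1, p=p_i, size=n_cases)

# A decision maker with p_i = 0.7 evaluating M = 100 cases:
decisions = simulate_decisions(0.7, 100)
```

Repeating this for 10 sampled accuracy values pi yields one set of decision makers.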

To study the effect of correlations between the decisions of different decision makers, we used an “opinion leader” approach (25). One of the 10 individuals was randomly assigned as the opinion leader, and we fixed the sequence of this individual’s decisions (i.e., the sequence of 0s and 1s). Then, for each remaining individual, the sequence of his or her decisions was paired to the opinion leader’s sequence according to a correlation parameter pc (0 ≤ pc ≤ 1). In particular, starting at case i = 1, for each case, with probability (1 − pc), we randomly selected a decision from the set of that individual’s remaining decisions (i.e., decisions from cases that had not yet been selected), and with probability pc, we took from this set the same decision as the opinion leader’s decision for that case. If the same decision was not present in the set of that individual’s remaining decisions, we randomly selected a decision from this set. We then moved on to the next case, i + 1. This procedure thus introduces different levels of correlation between decision makers, ranging from 0 (maximum independence) to 1 (maximum dependence), while not changing the frequency of 0s and 1s for each decision maker. Note that even if pc = 1, there can still be disagreement between a pair of raters, namely, when the numbers of 0s and 1s in their respective vectors are not equal.
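A minimal sketch of this pairing step (Python with NumPy assumed; the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def pair_to_leader(follower, leader, p_c):
    """Reorder a follower's 0/1 decisions to correlate with the opinion
    leader's fixed sequence. Only the order changes, so the follower's
    frequencies of 0s and 1s are preserved."""
    pool = list(follower)  # multiset of the follower's remaining decisions
    paired = []
    for lead_decision in leader:
        if rng.random() < p_c and lead_decision in pool:
            choice = lead_decision                  # copy the leader's decision
        else:
            choice = pool[rng.integers(len(pool))]  # random remaining decision
        pool.remove(choice)
        paired.append(choice)
    return np.array(paired)
```

With p_c = 1, the follower matches the leader whenever a matching decision is still available, so any residual disagreement reflects only a difference in the two raters' counts of 0s and 1s.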

Next, we calculated, for each individual, his or her average percentage of agreement with the other nine decision makers over all M cases. Last, we calculated the Spearman’s rank correlation coefficient between the average percentage agreement and average individual accuracy (pi) across the 10 decision makers. For each unique combination of (i) number of cases M, (ii) level of correlations pc, and (iii) population accuracy distribution (Fig. 2A), we repeated this procedure 2500 times, and we show values averaged across all repetitions (color codes in Fig. 2B).
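The agreement–accuracy correlation for one sampled set can be computed as follows (a sketch assuming Python with NumPy and SciPy; the function name is ours):

```python
import numpy as np
from scipy.stats import spearmanr

def agreement_accuracy_correlation(decisions):
    """decisions: (n_raters, n_cases) array of 0/1 decisions.
    Returns Spearman's rho between each rater's average percentage
    agreement with the other raters and that rater's accuracy."""
    n = decisions.shape[0]
    agreement = np.array([
        np.mean([(decisions[i] == decisions[j]).mean()
                 for j in range(n) if j != i])
        for i in range(n)
    ])
    accuracy = decisions.mean(axis=1)  # average individual accuracy p_i
    return spearmanr(agreement, accuracy).correlation
```

Averaging over the 2500 repetitions per parameter combination is then a loop over freshly sampled and paired sets.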

