The goal of group-based cross-validation is to train the multidomain classifiers on the ERPs of a set of subjects and to test them on the ERPs of individual subjects not included in the training set. To do this systematically, k-fold cross-validation is modified so that the folds are defined with respect to subjects. In this cross-validation approach, which we refer to as “k-subject-fold cross-validation,” each fold consists of the ERPs of N/k subjects, where N and k are the number of subjects and the number of folds, respectively. The classifier is trained on the ERPs in the (k − 1) remaining folds and validated (tested) on the ERPs of each subject in the left-out fold. As in regular k-fold cross-validation, the process is repeated k times so that the ERPs of all subjects are tested. The final result is obtained by averaging the classification accuracies within and across the repetitions. The whole procedure can be repeated several times, after shuffling the order of the subjects so that they fall into different folds, and the results averaged. For the special case k = N, that is, when each fold contains the ERPs of only one subject, the procedure reduces to leave-one-subject-out cross-validation.
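The sketch below illustrates one way such a k-subject-fold split could be implemented. It uses scikit-learn's GroupKFold, which keeps all trials of a subject in the same fold, and scores the left-out fold separately per subject before averaging. The array shapes, subject counts, and the LDA classifier are assumptions for illustration only; the protocol above does not specify them.

```python
# Minimal sketch of k-subject-fold cross-validation, assuming a flattened
# ERP feature matrix X, class labels y, and a subject ID per trial.
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

n_subjects = 12          # N: number of subjects (assumed)
trials_per_subject = 50  # ERPs per subject (assumed)
n_features = 64          # e.g., flattened channel x time features (assumed)
k = 4                    # number of folds

# Synthetic stand-ins for the ERP features, class labels, and subject IDs.
X = rng.normal(size=(n_subjects * trials_per_subject, n_features))
y = rng.integers(0, 2, size=n_subjects * trials_per_subject)
subjects = np.repeat(np.arange(n_subjects), trials_per_subject)

gkf = GroupKFold(n_splits=k)   # folds are defined over subjects (groups)
per_subject_acc = []

for train_idx, test_idx in gkf.split(X, y, groups=subjects):
    clf = LinearDiscriminantAnalysis()
    clf.fit(X[train_idx], y[train_idx])  # train on the (k - 1) training folds
    # Validate separately on each subject in the left-out fold.
    for subj in np.unique(subjects[test_idx]):
        subj_idx = test_idx[subjects[test_idx] == subj]
        per_subject_acc.append(clf.score(X[subj_idx], y[subj_idx]))

# Average within and across folds; the whole loop could be repeated with the
# subject-to-fold assignment shuffled and the results averaged again.
print(f"Mean per-subject accuracy: {np.mean(per_subject_acc):.3f}")
```

With k set equal to n_subjects, each fold holds a single subject and the loop reduces to leave-one-subject-out cross-validation.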