2.4. Group-Based Cross-Validation

Xiaoqian Chen, Resh S. Gupta, Lalit Gupta

The goal of group-based cross-validation is to train the multidomain classifiers on the m-ERPs of a set of subjects and test them on the m-ERPs of individual subjects not included in the training set. To do so systematically, k-fold cross-validation is modified so that the folds are defined with respect to subjects. In this cross-validation approach, which we refer to as "k-subject-fold cross-validation," each fold consists of the m-ERPs of (B/k) subjects, where B and k are the number of subjects and folds, respectively. The classifier is trained on the m-ERPs in (k−1) folds and validated (tested) on the m-ERPs of each subject in the left-out fold. As in regular k-fold cross-validation, the process is repeated k times so that the ERPs of all subjects are tested. The final result is obtained by averaging the classification accuracies within and across the k repetitions. The entire procedure can be repeated several times and averaged, shuffling the order of the subjects before each repetition so that the subjects fall into different folds. For the special case k = B, that is, when each fold contains the m-ERPs of only one subject, the procedure reduces to leave-one-subject-out cross-validation.
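The splitting logic described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name `k_subject_folds` and the use of integer subject IDs are assumptions for the example. It partitions the (optionally shuffled) subjects into k folds and yields train/test subject sets, so that each subject's m-ERPs are tested exactly once per repetition; setting k equal to the number of subjects B yields leave-one-subject-out cross-validation.

```python
import random

def k_subject_folds(subjects, k, seed=None):
    """Yield (train_subjects, test_subjects) pairs for k-subject-fold CV.

    Each of the k folds holds roughly B/k subjects (B = number of subjects).
    Shuffling with a different seed on each repetition makes the subjects
    fall into different folds, as the protocol describes.
    """
    subs = list(subjects)
    if seed is not None:
        random.Random(seed).shuffle(subs)
    # Partition subjects into k folds by striding through the shuffled list.
    folds = [subs[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        # Train on the subjects in the remaining (k - 1) folds.
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        yield train, test

# Example: B = 6 subjects, k = 3 folds -> each test fold has B/k = 2 subjects.
subjects = list(range(6))
for train, test in k_subject_folds(subjects, k=3, seed=0):
    print("train:", sorted(train), "test:", sorted(test))

# Special case k = B: reduces to leave-one-subject-out cross-validation.
loso = list(k_subject_folds(subjects, k=len(subjects)))
```

In practice, the classifier would be trained on the m-ERPs of the `train` subjects and its accuracy recorded separately for each subject in `test`; those per-subject accuracies are then averaged within and across the k repetitions, and across any additional shuffled repetitions.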
