For PSTF implementation, the data were filtered bi-directionally (zero phase shift) by IIR Butterworth low- and high-pass filters. The candidate frequency cut-offs were {1, 2, 3, 4, 5, 6, 7 Hz} for the low-pass filter, and {none, 0.1, 0.3, 0.5 Hz} for the high-pass filter. For the FC beamformer, the candidate regularization coefficients were {0, 1, 10, 10³, 10⁴, 10⁵, 10⁶}. In the temporal filtering stage, the candidate window sizes were {5, 10, 25}. The optimal combination of frequency cut-off, regularization coefficient, and time window size was determined by five-fold cross-validation on the training data of each participant.
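The hyperparameter selection above amounts to an exhaustive grid search with five-fold averaging. A minimal sketch follows; `score_fn` is a hypothetical callback standing in for the full filter/beamformer/classifier validation pipeline, which the protocol does not specify at this level of detail:

```python
from itertools import product

# Candidate hyperparameter values from the protocol.
LOWPASS_HZ = [1, 2, 3, 4, 5, 6, 7]
HIGHPASS_HZ = [None, 0.1, 0.3, 0.5]          # None = no high-pass filter
REG_COEFFS = [0, 1, 10, 1e3, 1e4, 1e5, 1e6]  # FC beamformer regularization
WINDOW_SIZES = [5, 10, 25]                   # temporal filtering window

def select_hyperparameters(score_fn, n_folds=5):
    """Exhaustive grid search over all candidate combinations.

    `score_fn(params, fold)` is a hypothetical callback returning the
    validation score of one cross-validation fold for one combination.
    """
    best_params, best_score = None, float("-inf")
    for params in product(LOWPASS_HZ, HIGHPASS_HZ, REG_COEFFS, WINDOW_SIZES):
        # Average the validation score over the five folds.
        mean = sum(score_fn(params, k) for k in range(n_folds)) / n_folds
        if mean > best_score:
            best_params, best_score = params, mean
    return best_params, best_score
```

The grid is small (7 × 4 × 7 × 3 = 588 combinations), so exhaustive search per participant is tractable.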

Each trial from the experimental protocol, presented in Experimental setup above, comprised an idle period and either a left or a right movement period. This protocol produced more samples of the idle class than of the left or right movement classes. Therefore, for classifier training, where applicable, the idle class was subsampled to create a balanced training set. In addition, owing to the limited number of available trials (an average of 195 valid left or right movement trials per participant), the dataset was split into 10 equal and disjoint chunks, 9 used for training and 1 for testing. Testing was repeated 10-fold, each time using a distinct chunk of data.
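The balancing and chunking steps can be sketched as follows. This is an illustrative stand-in, assuming string class labels and index-level chunking; the exact trial bookkeeping in the original implementation may differ:

```python
import random

def balance_and_chunk(labels, n_chunks=10, seed=0):
    """Subsample the over-represented 'idle' class down to the size of the
    larger movement class, then split the kept trial indices into
    `n_chunks` disjoint chunks for chunk-wise cross-validation."""
    rng = random.Random(seed)
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    # Balance: keep only as many idle trials as the larger movement class.
    n_move = max(len(by_class.get("left", [])), len(by_class.get("right", [])))
    idle = by_class.get("idle", [])
    kept_idle = rng.sample(idle, min(n_move, len(idle)))
    kept = kept_idle + by_class.get("left", []) + by_class.get("right", [])
    rng.shuffle(kept)
    # Split into n_chunks disjoint chunks (sizes differ by at most one).
    return [kept[k::n_chunks] for k in range(n_chunks)]
```

Each testing fold then uses one chunk for testing and the remaining nine for training.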

As candidate classification algorithms for the ODCS, five well-known classification algorithms were selected (Bishop, 2006a,b): linear probabilistic Gaussian model (LPGM), support vector machines (SVM), Fisher linear discriminant analysis (FLDA), logistic regression (LR), and regularized least square (RLS). The following is a brief description of these algorithms. In this paper, the ODCS was applied with each class pair in the DDAG structure for either the selection of the best classification algorithm (ODCS1-DDAG) or fusion of the results of the best two (ODCS2-DDAG), three (ODCS3-DDAG), four (ODCS4-DDAG), and five (ODCS5-DDAG) classification algorithms.
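For context, a DDAG over N classes evaluates N − 1 pairwise decisions, eliminating one candidate class at each node. A minimal sketch of DDAG evaluation for the three classes of this protocol follows; `pairwise_decide` is a hypothetical callback wrapping whichever pairwise classifier (or ODCS fusion of classifiers) sits at that node:

```python
def ddag_predict(classes, pairwise_decide):
    """Evaluate a decision directed acyclic graph (DDAG).

    At each node, the first and last remaining classes are compared;
    `pairwise_decide(a, b)` returns the preferred class of the pair, and
    the losing class is eliminated from further consideration.
    """
    remaining = list(classes)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]
        winner = pairwise_decide(a, b)
        # Eliminate the class that lost this pairwise comparison.
        remaining.remove(a if winner == b else b)
    return remaining[0]
```

With three classes (idle, left, right), each prediction requires exactly two pairwise decisions.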

LPGM models the training features by unimodal Gaussian distributions and assumes that all classes share a common covariance matrix, leading to linear decision boundaries. This assumption limits model complexity and improves generalization by reducing the risk of over-fitting to noise. Following the estimation of the class-conditional Gaussian distributions from training data, Bayes' theorem is used to infer the posterior probability that a test case belongs to a given class. We presented a complete mathematical formulation of LPGM in Abou Zeid and Chau (2015).
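A minimal numpy sketch of a shared-covariance Gaussian classifier of this kind is shown below (for the full LPGM formulation, see Abou Zeid and Chau, 2015):

```python
import numpy as np

def fit_lpgm(X, y):
    """Estimate class means, a pooled (shared) covariance, and priors."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    # Pooled covariance: deviations of each sample from its own class mean.
    dev = np.vstack([X[y == c] - means[c] for c in classes])
    cov_inv = np.linalg.inv(dev.T @ dev / len(X))
    priors = {c: np.mean(y == c) for c in classes}
    return classes, means, cov_inv, priors

def predict_lpgm(model, X):
    """Assign each row of X to the class with the largest posterior
    (Bayes' theorem); with a shared covariance the boundaries are linear."""
    classes, means, cov_inv, priors = model
    scores = []
    for c in classes:
        d = X - means[c]
        # log(prior) + Gaussian log-density (shared normalizer dropped).
        scores.append(np.log(priors[c]) - 0.5 * np.sum(d @ cov_inv * d, axis=1))
    return classes[np.argmax(scores, axis=0)]
```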

SVM is a linear classifier that seeks the maximum-margin hyperplane separating one class from another (Bishop, 2006b). It can handle non-linear classification problems by mapping the input space, through kernel functions, to a higher-dimensional space. SVM training is formulated as a convex optimization problem that can be solved by sequential minimal optimization. The radial basis function (RBF) kernel was used.
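The RBF kernel realizes the implicit high-dimensional mapping without ever computing it explicitly; a sketch of the Gram-matrix computation (with an illustrative placeholder value for the width parameter gamma) is:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||A[i] - B[j]||^2).

    gamma is an illustrative placeholder; in practice it is a tuned
    hyperparameter of the RBF-kernel SVM.
    """
    # Squared Euclidean distances via the expansion ||a-b||^2 = a.a + b.b - 2a.b
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * np.maximum(sq, 0.0))  # clamp tiny negatives
```

The SVM then operates entirely on this kernel matrix in the dual formulation.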

FLDA projects the input features onto a one-dimensional space that maximizes class separation. The projection vector is computed according to Fisher's criterion (Bishop, 2006a), maximizing the between-class variance while minimizing the within-class variance. A discriminant is then computed on the projected data using LPGM.
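For two classes, Fisher's criterion has the closed-form solution w ∝ Sw⁻¹(m₁ − m₂), where Sw is the within-class scatter matrix. A minimal sketch:

```python
import numpy as np

def fisher_direction(X1, X2):
    """Fisher discriminant direction w = Sw^{-1} (m1 - m2), normalized.

    Maximizes the ratio of between-class to within-class variance of the
    one-dimensional projections X @ w.
    """
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter: summed outer products of class-centered samples.
    Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
    w = np.linalg.solve(Sw, m1 - m2)
    return w / np.linalg.norm(w)
```

The projected scalars `X @ w` are then handed to the LPGM discriminant described above.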

LR forms a logistic function of a linear combination of the input variables, whose output lies in the range [0, 1]. The weights of the combination are estimated by maximizing the likelihood function on the training data using gradient descent methods (Bishop, 2006a).
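A minimal gradient-descent sketch of this estimation follows; the learning rate and iteration count are illustrative choices, not values from the protocol:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Maximize the log-likelihood by gradient descent on its negative."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        z = np.clip(Xb @ w, -30, 30)           # avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-z))           # logistic output in [0, 1]
        w -= lr * Xb.T @ (p - y) / len(y)      # gradient of neg. log-likelihood
    return w

def predict_logistic(w, X):
    """Threshold the logistic output at 0.5."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    z = np.clip(Xb @ w, -30, 30)
    return (1.0 / (1.0 + np.exp(-z)) >= 0.5).astype(int)
```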

RLS estimates the linear model (i.e., weights) associated with each of the classes by minimizing the sum-of-squared-error function on training data (Bishop, 2006a). A regularizer λ was used to limit the growth of the weights, which facilitated model training with a modestly sized data set while mitigating the risk of severely over-fitting the model to noise. For λ, the candidate values were {0, 0.5, 1, 10, 10², 10³, 10⁴, 10⁵}. The optimal λ was determined by five-fold cross-validation on the training data of each participant.
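The regularized least-squares weights have the standard closed-form solution w = (XᵀX + λI)⁻¹Xᵀy, sketched below:

```python
import numpy as np

def fit_rls(X, y, lam=1.0):
    """Regularized least squares: minimizes ||X w - y||^2 + lam * ||w||^2.

    lam > 0 shrinks the weights, limiting their growth on small data sets.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

With λ = 0 this reduces to ordinary least squares; increasing λ shrinks the weight norm, trading fit on the training data for robustness to noise.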
