Deep learning-based real-time BCI game system
This protocol is extracted from research article:
Leaf-inspired homeostatic cellulose biosensors
Sci Adv, Apr 16, 2021; DOI: 10.1126/sciadv.abe7432

The experiment applied the CSs to a real-time game control system. The game used in this experiment, Brain Runner, controls a virtual avatar in an obstacle race via BCI (BCI racing) (6). The experiment was conducted with a 16-channel CS configuration, and eight BCI-naïve individuals participated. Each user performed four classes of motor imagery (MI: right hand, left hand, foot, and resting state), yielding a dataset of 30 trials per class.

Conventional augmented common spatial patterns and convolutional neural network (CNN) architectures (52) were applied for feature extraction and classifier parameter learning on the MI datasets. Two CNN architectures were trained, each with two convolutional layers and one fully connected layer before the output layer. The input to each network was an array indexed by frequency band, pattern size, and epoch number. The first CNN used 32 feature maps (kernel size: 32 × 1) in its first convolutional layer and 32 feature maps (kernel size: 1 × 20) in its second; its fully connected layer had 480 units, and training used a batch size of 20 for approximately 20 epochs. The second CNN used 48 feature maps (kernel size: 1 × 96) in its first convolutional layer and 48 feature maps (kernel size: 1 × 80) in its second; its fully connected layer had 1280 units, and training used a batch size of 5 for approximately 50 epochs.

In the testing phase, the first CNN model classified the brain patterns as MI versus resting state, and the second CNN model then classified the three MI classes. In the classifier training phase, classification accuracy for multiclass MI (right hand, left hand, foot, and rest) averaged 51.4 ± 6.6% under fivefold cross-validation.
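The two-stage pipeline described above (one CNN for MI vs. rest, a second for three-class MI) can be sketched as follows. This is a minimal illustration in PyTorch, not the authors' implementation: the input dimensions, activations, and absence of padding or pooling are assumptions, and only the layer counts, feature-map numbers, kernel sizes, and fully connected widths come from the protocol.

```python
import torch
import torch.nn as nn

class StageCNN(nn.Module):
    """Two convolutional layers plus one fully connected layer before the
    output layer, matching the structure stated in the protocol."""
    def __init__(self, n_maps, k1, k2, fc_units, n_classes):
        super().__init__()
        self.conv1 = nn.Conv2d(1, n_maps, kernel_size=k1)
        self.conv2 = nn.Conv2d(n_maps, n_maps, kernel_size=k2)
        self.relu = nn.ReLU()
        self.flatten = nn.Flatten()
        # LazyLinear infers the flattened size on the first forward pass,
        # since the exact input dimensions are not given in the protocol.
        self.fc = nn.LazyLinear(fc_units)
        self.out = nn.Linear(fc_units, n_classes)

    def forward(self, x):
        x = self.relu(self.conv1(x))
        x = self.relu(self.conv2(x))
        x = self.relu(self.fc(self.flatten(x)))
        return self.out(x)

# Stage 1: MI vs. resting state (32 maps, 32x1 then 1x20 kernels, 480 FC units)
cnn1 = StageCNN(32, (32, 1), (1, 20), 480, n_classes=2)
# Stage 2: three-class MI (48 maps, 1x96 then 1x80 kernels, 1280 FC units)
cnn2 = StageCNN(48, (1, 96), (1, 80), 1280, n_classes=3)

# Hypothetical input: batch of 4 feature arrays, 32 frequency bands x 200 samples
x = torch.randn(4, 1, 32, 200)
```

At inference time, a sample would first pass through `cnn1`; only if it is classified as MI would it be forwarded to `cnn2` to decide among right hand, left hand, and foot.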
For real-time operation, features were computed every 40 ms over a 3-s sliding window. The classified patterns were converted into commands, and the game avatar was controlled via User Datagram Protocol (UDP) communication.
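The real-time loop can be sketched as below: a 3-s window advanced in 40-ms steps, with each window's predicted class sent to the game as a UDP datagram. The sampling rate, UDP address and port, and command encoding are illustrative assumptions; only the window length, step size, channel count, and use of UDP come from the protocol.

```python
import socket
import numpy as np

FS = 250                 # sampling rate in Hz (assumption; not stated)
WINDOW = 3 * FS          # 3-s sliding window
STEP = int(0.04 * FS)    # features recomputed every 40 ms

def sliding_windows(signal):
    """Yield successive 3-s windows advanced in 40-ms steps.
    `signal` is a (channels, samples) array of streamed EEG."""
    for start in range(0, signal.shape[1] - WINDOW + 1, STEP):
        yield signal[:, start:start + WINDOW]

def send_command(sock, command, addr=("127.0.0.1", 5005)):
    """Send a classified command to the game over UDP.
    The address, port, and string encoding are hypothetical."""
    sock.sendto(command.encode(), addr)

# Example: 5 s of dummy 16-channel data, one placeholder command per window
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
eeg = np.random.randn(16, 5 * FS)
commands = {0: "REST", 1: "LEFT", 2: "RIGHT", 3: "FOOT"}
for window in sliding_windows(eeg):
    label = 0  # in practice: the two-stage CNN prediction for this window
    send_command(sock, commands[label])
```

Because UDP is connectionless, each command is fired without a handshake, which keeps per-window latency low at the cost of delivery guarantees.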






