2.3. Signal Recognition
This protocol is extracted from the research article:
A New Recognition Method for the Auditory Evoked Magnetic Fields
Comput Intell Neurosci, Feb 9, 2021; DOI: 10.1155/2021/6645270

In the first AEFs dataset, there are a total of 200 single-trial AEFs, 3 of which are severely contaminated by noise and are therefore screened out (see Appendix C). The remaining 197 single-trial AEFs serve as the training data source. Each intercepted AEFs segment lasts 0.3 s, ensuring that it contains the P50, N100, and P200 peaks. At the same time, we randomly intercept 200 equal-length, non-overlapping segments from the third noise dataset. All 397 signal segments are processed with the signal enhancement method, yielding 397 2D images as training data. Similarly, 199 single-trial AEFs are obtained from the second AEFs dataset, and another 200 equal-length noise segments are randomly intercepted from the noise dataset. Therefore, a total of 399 2D images are obtained as testing data.
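The random, non-overlapping interception of equal-length noise segments described above can be sketched as follows. This is a minimal illustration, not the authors' code; the sampling rate `fs` and recording length are assumptions for the example, since neither is stated in this excerpt.

```python
import random

def sample_nonoverlapping_segments(total_len, seg_len, n, seed=0):
    """Randomly pick n non-overlapping [start, start + seg_len) windows
    from a signal of total_len samples, by rejection sampling."""
    rng = random.Random(seed)
    starts = []
    while len(starts) < n:
        s = rng.randrange(0, total_len - seg_len + 1)
        # Two equal-length windows overlap iff their starts differ by < seg_len
        if all(abs(s - t) >= seg_len for t in starts):
            starts.append(s)
    return sorted(starts)

# Example: 200 segments of 0.3 s from an assumed 10-minute recording at 1 kHz
fs = 1000                                   # assumed sampling rate (not given in the protocol)
starts = sample_nonoverlapping_segments(10 * 60 * fs, int(0.3 * fs), 200, seed=42)
print(len(starts))                          # 200 non-overlapping segment onsets
```

Rejection sampling is adequate here because the 200 requested windows cover only a small fraction of the recording, so collisions are rare.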

Pretrained GoogLeNet is utilized to recognize auditory activation patterns in the single-trial data. GoogLeNet is a 144-layer convolutional neural network (CNN). Each layer of the network filters the input image to extract its features. The initial layers primarily identify generic image features, such as blobs, edges, and colours; the subsequent layers focus on more specific features that divide the images into different categories. For the single-trial AEFs recognition problem, 3 layers of GoogLeNet must be readjusted.

The first adjusted layer is the final dropout layer in the network, which serves to prevent overfitting. The original dropout layer randomly sets input elements to zero with probability 0.5; in the new layer this probability is raised to 0.6. The second is the last fully connected layer, which determines how the features extracted by the network are combined into class probabilities, a loss value, and predicted labels. To retrain GoogLeNet to classify noise and AEFs 2D images, this layer is replaced with a new fully connected layer whose number of outputs equals the number of classes (noise and AEFs). The third adjusted layer is the final classification layer, which specifies the output classes of the network. It is replaced with a new classification layer without class labels; the output classes are then set automatically during network training. Then, we retrain GoogLeNet for the single-trial AEFs recognition problem, meaning that training starts from the network parameters obtained by pretraining. We set the initial learning rate to 0.0001, which controls how much the network parameters change at each update. The number of epochs is set to 10, i.e., the network is trained 10 times on the same set of training data. We use 80% of the images for training and the remainder for validation. The random seed is set to the default value in MATLAB for generating random numbers.
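Two of the ingredients above, the dropout with probability 0.6 and the 80%/20% training/validation split, can be illustrated with a short stdlib-only sketch. This is not the authors' MATLAB code; it only mirrors the stated parameters (p = 0.6, 80/20 split over the 397 training images), and the inverted-dropout scaling shown is one common convention, not necessarily the one GoogLeNet's dropout layer uses internally.

```python
import random

def dropout(x, p, rng):
    """Inverted dropout: zero each element with probability p and
    scale survivors by 1/(1-p) so the expected activation is unchanged."""
    return [0.0 if rng.random() < p else v / (1.0 - p) for v in x]

def train_val_split(items, train_frac=0.8, seed=0):
    """Shuffle the items and split them into training and validation sets."""
    rng = random.Random(seed)
    idx = list(range(len(items)))
    rng.shuffle(idx)
    cut = int(round(train_frac * len(items)))
    return [items[i] for i in idx[:cut]], [items[i] for i in idx[cut:]]

rng = random.Random(42)
activations = [1.0] * 10
masked = dropout(activations, 0.6, rng)     # ~40% of values survive, scaled by 2.5

train, val = train_val_split(list(range(397)), 0.8, seed=0)
print(len(train), len(val))                 # 318 79
```

With 397 training images, the 80/20 split yields 318 images for training and 79 for validation.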






