Definition of loss function and weight optimization

Sangmin Seo, Jonghwan Choi, Sanghyun Park, Jaegyoon Ahn

In the proposed neural network model, information flows from the input layer to the output layer in a feedforward fashion. Mean squared error (MSE) was used as the loss function to train the weights and biases. To prevent overfitting, we applied L2 regularization, adding the squared norm of the weights to the loss. The network was trained with the Adam optimizer (learning rate 0.005, batch size 256).
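As a minimal sketch of this training setup, the snippet below combines an MSE loss with an L2 penalty and applies Adam updates to a single linear layer. The data, layer size, number of steps, and regularization strength `lam` are placeholder assumptions for illustration; only the learning rate (0.005) and batch size (256) come from the protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
batch_size, n_features = 256, 8          # batch size 256 as in the protocol
X = rng.normal(size=(batch_size, n_features))
y = rng.normal(size=batch_size)
w = np.zeros(n_features)                 # weights of a single linear layer
lam = 1e-4                               # L2 regularization strength (assumed)

def loss(w):
    residual = X @ w - y
    mse = np.mean(residual ** 2)         # mean squared error term
    l2 = lam * np.sum(w ** 2)            # L2 penalty on the weights
    return mse + l2

def grad(w):
    residual = X @ w - y
    return 2.0 * X.T @ residual / batch_size + 2.0 * lam * w

# Adam state and hyperparameters (standard defaults, except the
# learning rate, which the protocol sets to 0.005).
lr, beta1, beta2, eps = 0.005, 0.9, 0.999, 1e-8
m = np.zeros_like(w)
v = np.zeros_like(w)

initial_loss = loss(w)
for t in range(1, 201):                  # 200 update steps (arbitrary)
    g = grad(w)
    m = beta1 * m + (1 - beta1) * g      # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * g**2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)         # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)

print(loss(w) < initial_loss)            # training reduces the regularized loss
```

In a deep-learning framework the same setup would typically be expressed by passing a weight-decay term to the Adam optimizer rather than adding the penalty to the loss by hand; the explicit form above only makes the composition of the loss visible.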
