Neural network characteristics

Dániel Varga, Szilárd Szikora, Tibor Novák, Gergely Pap, Gábor Lékó, József Mihály, Miklós Erdélyi

The Artificial Neural Network (ANN) used to classify the data points contained one hidden layer of 32 units with the ReLU activation function as the non-linearity. The output was a single sigmoid neuron trained with the binary cross-entropy loss; Adam was used as the optimizer. The model was trained for 10 epochs with a batch size of 32. Implementation and training were completed in ‘keras 2.2.4’ on a consumer-grade Intel Core i7-6700K CPU without utilizing any GPU; with this setup, training takes less than a day. Classification scales roughly as N·log(N) on large datasets, its runtime being bounded by the k-nearest-neighbor step. Classifying a single double-line structure takes around one minute on a typical dataset, while the object detection of a dataset containing 20 double-line structures requires less than a minute.
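The architecture above can be sketched in a few lines of Keras. This is a minimal illustration, not the authors' released code: the input dimensionality (`input_dim`) and the dummy training data are placeholders, since the text does not specify the feature representation of the data points.

```python
import numpy as np
from tensorflow import keras  # the protocol used standalone keras 2.2.4


def build_model(input_dim):
    """One hidden ReLU layer of 32 units, single sigmoid output,
    binary cross-entropy loss, Adam optimizer (as described in the text)."""
    model = keras.Sequential([
        keras.layers.Dense(32, activation="relu", input_shape=(input_dim,)),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model


# Dummy data for illustration only; input_dim=3 is an assumption.
X = np.random.rand(256, 3).astype("float32")
y = (X.sum(axis=1) > 1.5).astype("float32")

model = build_model(input_dim=3)
# Training schedule as described: 10 epochs, batch size 32.
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
probs = model.predict(X, verbose=0)  # per-point class probabilities in [0, 1]
```

On a CPU this toy run finishes in seconds; the under-a-day training time quoted above refers to the authors' full dataset.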
